Over the last few years I've worked on more than 200 experiments, from a simple change to a Call To Action (CTA) up to complete design overhauls and full feature integrations into products. Along the way I learned a lot about how to set up an experimentation program, and about the mistakes that can have a major impact (good and/or bad). Since I get a lot of questions these days about how to set up a new testing program and how to get started, I created a slide deck, which I presented a couple of times at conferences this year, about all the failures I've seen (and made myself) while running an experimentation program.
The (well known) process of A/B Testing
You've all seen this 'circle' process before. It shows the different stages of an experiment: you start with a ton of ideas, you create hypotheses for them, you design and build them (with or without engineers), you do the appropriate Quality Assurance checks before launching, and you run and analyze the results of your test. If all goes well, you can repeat this process endlessly. Sounds relatively easy, right? It could be, although along the way I've made mistakes in all of these steps. In this blog post I'd like to run you through the top 20 mistakes that I've made (or seen being made).
You can also go through the slide deck that I presented at LAUNCH SCALE and Growth Marketing Conference:
1. They just launch, they don't test.
One of the easiest mistakes to spot: you're not really experimenting, you're putting features/products live without figuring out whether they actually have an impact on what you're doing. That's why you almost always want to give a feature a test run on a percentage of your traffic. If your audience is big enough, that could be as little as 1%; for smaller companies, run a 50%/50% split. Either way, it becomes much easier to isolate the impact of the change, which is the whole point.
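To make a small-percentage rollout reproducible, you need deterministic bucketing so the same user always lands in the same variant. A minimal sketch of one common approach (the hashing scheme and names here are my own illustration, not taken from any specific testing tool):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, treatment_pct: float) -> str:
    """Deterministically bucket a user: the same user + experiment
    always hashes to the same variant, so exposure is stable."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "treatment" if bucket < treatment_pct else "control"

# Roll out to 1% of a large audience, or 50% for smaller sites:
print(assign_variant("user-123", "new-checkout", 0.01))
```

Because assignment depends only on the user and experiment IDs, you can scale the treatment percentage up later without reshuffling users who were already exposed.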
2. Companies that believe they’re wasting money with experimentation.
One of the most fun arguments to run into, in my opinion. Whenever organizations think that running so many experiments without a winner is killing their bottom line, there are still some steps you can take to help them better understand experimentation. Most of the time this is easy to overcome: ask them what they think the right call is, and let them pick the winners for a few experiments. Chances are close to 100% that at least one of their answers will be proven wrong. The point being: if they had made that decision based on gut feeling or experience, it would have cost the organization money too (in most cases, far more money). That's why it's important to quickly overcome this argument and get the buy-in of the whole organization, so people believe in experimentation.
3. Expect Big Wins.
It depends on what stage your experimentation program is in. At the beginning it's likely that you'll pick up a lot of low-hanging fruit that provides easy wins. But I promise it won't get easier over time (read more about the local maximum here). You won't achieve big results all the time. Don't give up, though: a lot of small wins over time still add up to a big result. If you expect every test to double your business, as some (bad) blog posts suggest, you'll be disappointed.
4. My Competitor is Doing X, so that’s why we’re testing X.
Wrong! Chances are your competitor has no clue what they're doing either, just like you! So focus on what you should do best: know your own customers and focus on your own success. Even when you see your competition running experiments, chances are high that they don't know which variant will win or lose either. Focusing on repeating their success will only put you behind them, as you'll spend just as long (maybe even longer) figuring out what works and what doesn't.
5. Running tests when you don’t have (enough) traffic.
Probably the most-asked question around experimentation: how much traffic do I need to run a successful experiment on my site? Usually followed by: I don't have that much traffic, should I still focus on running experiments? What I usually recommend is to figure out whether you can successfully launch more than ~20 experiments per year. If you have to wait too long for results, you might run into trouble with your analysis (see one of the items later in this list). Combine that with the fact that these teams are usually small and don't always have the capacity to do more, and it might be better to focus first on converting more users or on the top of the funnel (acquisition).
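A quick way to sanity-check whether your traffic supports ~20 experiments a year is a back-of-the-envelope sample-size estimate. A rough sketch using the standard two-proportion formula at 95% confidence and 80% power (the example numbers are made up):

```python
from math import ceil

def sample_size_per_variant(base_rate: float, min_lift: float,
                            z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate users needed per variant to detect a relative lift
    on a conversion rate (two-sided test, 95% confidence, 80% power)."""
    p1 = base_rate
    p2 = base_rate * (1 + min_lift)
    p_bar = (p1 + p2) / 2
    delta = p2 - p1
    n = ((z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar)) / delta ** 2
    return ceil(n)

# 3% conversion rate, detecting a 10% relative lift:
print(sample_size_per_variant(0.03, 0.10))  # roughly 53,000 users per variant
```

Divide your weekly traffic by (that number × 2) to see how many weeks one experiment would take; if the answer is months, focus on acquisition first.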
6. They don’t create a hypothesis.
I can't explain writing a hypothesis better than this blog post by Craig Sullivan, where he lays out frameworks for both a simple and a more advanced hypothesis. If you don't have a hypothesis, you can't verify later on whether your test was successful. That's why you want to document how you're going to measure the impact and how you'll decide whether the impact was big enough to deploy the change.
7. Testing multiple variables at the same time, 3 changes require 3 tests.
Great, you realize that you need to test more. That's a step in the right direction. But changing too many elements on a page (or across pages) at once makes it hard to figure out what actually drove the change in results. That said, you can turn this into an advantage: run one experiment in which you change a lot to see whether you can move the needle at all, and then follow up with experiments that isolate which specific element brought most of the value. I like to do this from time to time; when small incremental changes keep producing no clear winner, a big experiment can show whether the area is worth pursuing at all. Once it wins, go back and test the smaller changes to learn what exactly drove the result, so you know going forward which areas for experimentation can provide big changes.
8. Use numbers as the basis of your research, not your gut feeling.
"We like our green buttons more than our red ones." In the early days of experimentation this was an often-heard reply, and I still hear many variations of the same line today. What you want instead is data as the basis for your experiment, not gut feeling. If you know from research that you need to improve the submission rate of a form, you usually won't add more questions but will streamline the flow of the form to boost results. If your heat maps or surveys show that users are clicking in a certain area or can't find the answer to a particular question, you might want to add buttons or an FAQ there. By building and testing this way, you're working from a hypothesis, as we discussed before, that is data-driven.
Design & Engineering
9. Before and After is not an A/B test. We launched, let’s see what the impact is.
The most dangerous way of 'testing' that I see companies do: before > after. You measure the impact of a change by just launching it, which is dangerous because many surrounding factors change at the same time. With 'experiments' like this it's near impossible to isolate the impact of the change, which makes it not an experiment at all, just a change where you hope to see some impact.
10. They go through 71,616 revisions of the design.
You want to follow your brand and design guidelines, I get that. It's important, as you don't want to run something that can't be shipped to the world if it wins. But if you're trying to figure out the perfect design solution to a problem before testing, you're probably wasting your time; finding the best variant is exactly why you're running an experiment. That's why I'd advise coming up with a couple of design ideas you can experiment with, and running the test as soon as possible so you can learn and adapt to the results quickly.
11. They don't QA their tests. Even your mother can have an opinion this time.
Most of the time your mother shouldn't play a role in your testing program. The chances that she can tell you more about two-tailed tests and how to interpret your results than you can as an up-and-coming testing expert are minimal. But she can help you make sure your tests work functionally. Just make sure she's segmented into your new variant and run her through the flow of your test. Is everything working as expected? Is nothing breaking? Does your code work in all browsers and across devices? With more complex tests I've noticed that usually at least one element breaks when you put it through some extensive testing; that's why this step is so important in your program. Every test that isn't working is a waste of testing days in the year, days not spent actually optimizing for positive returns.
Run & Analysis
12. Not running your tests long enough, calling the results early.
Technically you can run your test for one hour and reach significance if you have the right number of users and conversions. But that doesn't mean you should call the results. A lot of businesses deal with longer lead/sales cycles that can influence the results; weekends, weekdays, and anything else that influences your business can make your results look different. You want to take all of this into account to make sure your results are as trustworthy as possible.
13. Running multiple tests with overlap.. it’s possible, but segment the sh*t out of your tests.
If you have the traffic to run multiple experiments at the same time, you'll likely run into overlap between tests. If you run a test on the homepage and another on your product pages at the same time, a user can end up in both experiments at once. Most people don't realize that this influences the results of both tests; theoretically you've just ended up running a multivariate test across multiple pages. That's why it's important to account for this in your analysis: create the right segments for the audience that overlaps multiple experiments, and also isolate the users who saw only one.
14. Data is not sent to your main analytics tool, or you’re comparing your A/B testing tool to analytics, good luck.
You're likely already using a web analytics tool (Google Analytics, Clicky, Adobe Analytics/Omniture, Amplitude, etc.), and chances are it's tracking the core metrics that matter to your business. As most A/B testing tools also measure similar metrics, you'll likely run into discrepancies between the two, either in revenue metrics (sales, revenue, conversion rate) or in visitor metrics (clicks, sessions, users). The tools load before/after your main analytics tool and/or define their metrics differently, so you'll always end up with some difference that can't be fully explained. What I usually did was make sure that all the information about an experiment was also captured in the main analytics tool (GA was usually my tool of choice). Then you don't have to worry about discrepancies, as you're using your main analytics tool (which should be tracking everything related to your business) to analyze the impact of an experiment.
15. Going with your results without significance.
Your results are improving by 10%, but the significance is only 75%. That's a problem: roughly speaking, there's still a 25% chance that the difference you're seeing isn't real (and even then, reaching 100% certainty is impossible). In simple words: you can't trust the results of your experiment yet, as they aren't significant enough to call a winner or a loser. When you want to know whether your results are significant, use a tool that calculates this for you, such as this significance calculator: you enter the data from your experiment and it tells you what the impact was.
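If you'd rather understand what such a calculator does under the hood, the usual math is a two-proportion z-test. A minimal sketch using the normal approximation (fine for typical A/B sample sizes; for small samples, use a proper statistics library; the example numbers are invented):

```python
from math import sqrt, erf

def significance(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided z-test on two conversion rates; returns confidence in %."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = abs(p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))   # two-sided, normal CDF
    return (1 - p_value) * 100

# 2.0% vs 2.4% conversion on 10k users each:
print(round(significance(200, 10000, 240, 10000), 1))  # just shy of the 95% bar
```

A result below your chosen threshold (95% is the common convention) means exactly what the paragraph above says: keep the test running or treat it as inconclusive.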
16. You run your tests for too long… more than 4 weeks is not to be advised, cookie deletion.
For smaller sites that don't have a ton of traffic it can be hard to reach significance; they simply need a lot of data to support a decision. The same goes for sites running experiments on a smaller segment. If your test runs for multiple weeks, say 4+, it becomes hard to measure the impact realistically: people delete their cookies, and a lot of surrounding variables may change during that period. In short, over time the context of the experiment can change so much that it affects how you should analyze the results.
17. Not deploying your winner fast enough, it takes 2 months to launch.
One of the realities of experimentation is that you have to move fast (and not break things). When you find a winning variant, you want to reap its benefits as soon as possible; that's how you make a testing program worth it for your business. Too often I see companies (usually the bigger ones) struggling with a rough implementation process to get a winner into production. A real failure, because they miss the upside of the experiment, and by the time they can finally launch the winning variant, circumstances may have changed so much that it needs a re-test.
18. They’re not keeping track of their tests. No documentation.
Can you tell me what the variants looked like in the test you ran two months ago, and what the significance level was for that specific test? You probably can't, because you didn't keep testing documentation. Especially in bigger organizations, and when your company is testing with multiple teams at the same time, this is a big issue. You're collecting so many learnings over time that it's super valuable to keep track of them, so document what you're doing. You don't want another team to implement a clear loser that you already tested months ago; you want to be able to show them that you ran that test before. Your testing documentation helps with that, and it's also very useful for organizing the numbers: if you want to know what you've optimized on a certain page, it can tell you over time which element changes brought the most return.
19. They’re not retesting their previous ideas.
You tested something five months ago, but so many variables have changed since then that it might be time to set up a new experiment that re-tests your original conclusion. This also goes for experiments that did provide a clear winner: over time you still want to know whether the uplift you saw is still there or whether the results have flattened. A re-test is great for this, as you're testing your original hypothesis again to see what has changed. It will usually provide you with even more learnings.
20. They give up.
Never give up; there is so much to learn about your audience when you keep on testing. You'll never reach the limits! Keep going even when a new experiment doesn't produce a winner. The compound effect of incremental improvements is what lets most companies win!
That's it, please don't make these mistakes anymore! I already made them for you…
What did I miss, what kind of failures did you have while setting up your experimentation program and what did you learn from them?
How do you measure (and over time forecast) the impact of features that you're building for SEO, from start to finish? In part 1 of this series I provided more information on how to measure progress from creation to traffic. This blog post (part 2) goes deeper into another aspect of SEO: getting more links and measuring their impact. We'll go into depth on how you can easily (through 4 steps, plus 1 bonus step) get insight into the links you've acquired and measure their impact.
You've spent a lot of time writing a new article or working on a new feature/product with your team, so the last thing you want is for it to get no search traffic and never start ranking. For most keywords you'll need to do some additional authority building to get the love you need. So it's important to keep track of what's happening around your links to measure their impact on your organic search traffic.
The first thing you'd like to know is whether your new page is getting any links, and there are multiple ways to track this. You can use the regular link research tools, which we'll talk about in more depth later in this piece. But one of the easiest ways for a link to show real impact is to figure out whether you're receiving traffic from it, and since when. That's simple to find in Google Analytics: head to the traffic sources report and check whether that specific page is getting any referral traffic. Is that the case? Then figure out when the first visit was; from that point on you can monitor the link more closely, or look at the obvious thing: the publish date of the linking page, if you can find it.
How to measure success?
Google Alerts, Mention, Just-Discovered Links (Moz) and, as described, Google Analytics: these are all tools you can use to identify incoming links that might be relatively new, whether they're mentions in the news media or simply the newest links picked up by a crawler. That matters, because you don't want to depend on a link index that updates on an irregular basis.
Over a longer period of time you want to know how your authority through links is increasing. I'm not a huge fan of 'core metrics' like Domain Authority and Page Authority, as they can change without providing any context; I'd rather look at graphs of new and incoming root domains to see how fast that is growing. In the end it's a numbers game (usually quality + quantity), so that's the best way to look at it. One of my favorite reports in Majestic is the cumulative links + domains, which gives me an easy grasp of what's happening. Are you rapidly growing up and to the right, or is progress slow?
How to measure success?
One suggestion I have is to look at the cached pages for your links. By now you've figured out which links are sending traffic, which is a good first sign. But are they also providing any value for your SEO? Put the actual linking URL into Google and see whether the page is indexed and cached. It is? Good for you; that means the page is of good enough quality in Google's eyes. It's not? Then there's work to do, and the linking page might need some authority boosting of its own.
Are your links really affecting the authority and ranking of the page? You'd probably want to know. It's one of the harder things to figure out, as a lot of variables can play a role. It basically comes down to a combination of the value of those links, for which you can use one of the link research tools' metrics, and the actual changes in search traffic for your landing page. Do you see any changes there?
5. Collect all the Links
In addition to knowing which links might be impacting your rankings for a page, you'll likely want to know where all of your links can be found. That's relatively simple; it's just a matter of connecting all the tools together and using them in the most efficient way.
Sign up for at least the first three tools. Google Search Console and Bing Webmaster Tools are free, and you can use them to download your link profiles. When you sign up for Majestic you can verify your account with your GSC account and get access to your own data when you connect your properties. You've just unlocked three ways of getting more data.
Still not enough? Consider getting a (paid) account at three other services so you can download their data and combine it with the previous data sets. You're now able to retrieve much more data and get a better overview, as you're leveraging six different indexes.
(P.S. Take notice that all of them grow their indexes over time, a growing link profile might not always mean that you’re getting more links, it might be that they’re just getting better at finding them.)
How to measure success?
Download all the data on a regular basis (weekly, monthly, quarterly) and combine the data sets. As they all provide links and root domains, you can easily add the sheets together and remove the duplicate values. You won't have every metric per domain + link that way, but you'll still get a pretty good insight into your most popular linking root domains and links.
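The merge-and-dedupe step is easy to script. A small sketch that assumes each export has been normalized to a `source_url` column first (the real exports name this column differently per tool, so adjust accordingly):

```python
import csv

def merge_link_exports(paths):
    """Combine link exports from several tools, dropping duplicate URLs.
    Assumes each CSV has a 'source_url' column (rename per tool as needed)."""
    seen = set()
    merged = []
    for path in paths:
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):
                # Light normalization so trivial variants dedupe together
                url = row["source_url"].strip().lower().rstrip("/")
                if url not in seen:
                    seen.add(url)
                    merged.append(row)
    return merged
```

Running this weekly against fresh exports gives you one deduplicated master list to count root domains against.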
In the previous part I talked about measuring the impact from creation to getting traffic. Hopefully the next part will provide more information on how to measure business impact and potentially use the data for forecasting. In the end, when you merge all these different areas, you should be able to measure impact at any stage independently. What steps did I miss in this analysis, and what could use some more clarification?
How do you measure (and over time forecast) the impact of features that you're building for SEO, from start to finish? It's a topic I've been thinking about a lot for the last few months. It's hard, as most of the actual work we do can't be measured easily or directly correlated to results, while it requires a lot of resources and mostly a lot of investment (time + money). After a discussion about this on Twitter with Dawn Anderson, Dr. Pete and Pedro Dias, I thought it was time to write up some more ideas on how to get better at measuring SEO progress and seeing the impact of what you're doing. What can you do to safely assume that the right things are being impacted?
You've spent a lot of time writing a new article or working on a new feature/product with your team, so the last thing you want is for it to get no search traffic. Let's walk through the steps to get your new pages into the search engines and look at ways you can 'measure' success at every step.
2. Submit: to the Index and/or Sitemaps
The first thing you can influence is making sure that your pages are being crawled, in the hope that they'll be indexed right after. There are different ways to do this: you can submit them through Google Search Console to have them fetched, beg that this form still works, or list your pages in a sitemap and submit that through Google Search Console.
Want to go 'advanced' (#sarcasm)? You can even ping search engines about updates to your sitemaps, or use something like PubSubHubbub to notify other sources that there is new content.
How to measure success? Have you successfully submitted your URL via one of the steps above? Then you've basically completed this step. For now there's not much more you can do.
3. Crawled: Is the Page Being Visited?
This is your first real test, as submitting your page these days doesn't even guarantee that it will be crawled. So you want to make sure that after you submit, the page is actually seen by Google. Once it has been, Google can evaluate whether it finds the page 'good enough' to index. Before this step, you mostly want to make sure that you did, indeed, make the best page ever for users.
How to measure success? This is one of the hardest steps, as most of the time (at least for bigger sites) you'll need access to the server logs to figure out which URLs have been visited by a search engine (User Agent). For example, what do you see in the following snippet:
220.127.116.11 - - [06/Sep/2017:22:23:56 +0100] "GET" - "/example-folder/index.php" - "200" "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" - www.example.com
It's a visit to the hostname www.example.com, on the path /example-folder/index.php, which returned a 200 status code (success) on September 6th, and the User Agent contained Googlebot. If you're able to filter your server logs on all of this data, you can identify which pages are and aren't being crawled over a period of time.
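This filtering can be scripted. A sketch that assumes the standard combined log format (formatted slightly differently from the snippet above); note that the User Agent string can be spoofed, so for serious analysis you'd also verify the bot via reverse DNS:

```python
import re

# Assumed combined-log-format line; adjust the pattern to your server config.
LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<date>[^\]]+)\] "(?P<request>[^"]*)"'
    r' (?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def googlebot_hits(lines):
    """Yield (date, path, status) for requests whose UA claims Googlebot."""
    for line in lines:
        m = LINE.match(line)
        if m and "Googlebot" in m.group("agent"):
            req = m.group("request").split()
            path = req[1] if len(req) > 1 else m.group("request")
            yield m.group("date"), path, m.group("status")
```

Feeding a day's access log through this and counting hits per path tells you which templates Googlebot is (and isn't) spending crawl budget on.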
4. Indexed: Can the URL be found in the Index?
As I mentioned before, a search engine crawling your page is no guarantee at all that it will also be indexed. Having worked with a lot of sites whose pages are close to duplicate, I've seen the risk that they might not be indexed. But how do you know, and what can you do to evaluate what's happening?
How to measure success? There are two very easy ways. Manual: put the URL into a Google search and see if the actual page comes up. If you want to do this at a higher scale, look at the sitemap indexed-pages data in Google Search Console to see what percentage of pages (if you're dealing with templated pages) is being indexed. The success factor: your page shows up, which means it's getting ready to start ranking (higher).
5. First Traffic & Start Ranking
It's time to start achieving results: the next step after making sure that your site is indexed is to start ranking, as a better ranking will help you get more visits (even on niche keywords). In this blog post I won't go into what you can do to rank better, as too many blog posts have already been written about that topic.
How to measure success? Read this blog post from Peter O’Neill (Mr. MeasureCamp) on what kind of tracking he added to measure the first visits from Organic Search coming. This is one of the best ways I know for now, as it will also allow you to retrieve this data via the Google Analytics Reporting API making it easier to automate reporting on this.
As an alternative you can use Google Search Console and filter down on the page, so you're only looking at the data for a specific landing page. Based on that you can see how search impressions + clicks have grown over time, and since when (the only requirement is that you get clicks within the first 90 days of launching the page, but you're a good SEO, so you're capable of that).
6. Increase Ranking
In the last step we looked at when you received your first impression. But Google Search Console can also tell you more about your position for a keyword. This is important to know, so you can decide whether to increase your efforts in certain areas to get more traffic.
In some cases it means you can still improve your CTR% by optimizing your snippet in Google. For some keywords it might mean you've hit your ceiling; for others it might mean you can still increase your position by a lot.
How to measure success? Look at the same Search Analytics report we just used for the first visit of a keyword. By enabling the Impressions data you can monitor what your rankings are doing. In this example you'd see the rankings fluctuating daily between positions 1-3. When you save this data over time, you can start tracking rankings in a more efficient way.
Note: to do this efficiently you want to filter down on the right country, dates, search type and devices as well. Otherwise you might be looking at data from other countries, devices, etc. that you're not interested in. For example, I don't care right now about search outside of the US; I probably rank lower there, so it could drop my averages (significantly).
As Google Search Console only shows data for a 90-day window, I recommend saving the data (export to CSV). In a previous blog post, written during my time at TNW, I explained how to do this at scale via the API. As you monitor more keywords over time, this is usually the best way to go.
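For reference, pulling this data via the API boils down to a single call against the Search Analytics endpoint. A sketch using the google-api-python-client library; authentication (OAuth or a service account) is omitted and assumed to be set up elsewhere:

```python
from datetime import date, timedelta

def search_analytics_body(days: int = 90, row_limit: int = 25000) -> dict:
    """Request body for the Search Console searchAnalytics.query endpoint."""
    end = date.today()
    start = end - timedelta(days=days)
    return {
        "startDate": start.isoformat(),
        "endDate": end.isoformat(),
        "dimensions": ["date", "query", "page"],
        "rowLimit": row_limit,
    }

# With an authorized service object it would be called roughly like:
# service = build("webmasters", "v3", credentials=creds)
# rows = service.searchanalytics().query(
#     siteUrl="https://www.example.com/", body=search_analytics_body()
# ).execute().get("rows", [])
```

Scheduling that call daily and appending the rows to a database sidesteps the 90-day window entirely.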
7. First Positions
In the last step I briefly mentioned that there is still work to be done even when you're ranking in position 1 for a specific keyword: you can usually still optimize your snippet for a higher CTR%. I've noticed over time that these are the easier optimization tasks, although at scale they can be time-consuming. But how do you find all these keywords?
I still believe in keyword rankings; especially when you know which locations you're focusing on (at a city, zip code or state level), you can still measure the actual SERPs via many tools out there (I'm working on something cool myself, bear with me for a while until I can release it). The results in these reports can tell you a lot about where you're improving and whether you're already hitting the first positions in the results.
How to measure success? Stay in the same report as before. Make sure you've segmented your results for the right date range, and that you've segmented on the right device, page, country or search type. Export your data and filter or sort the position column to get the queries where position == 1. These are the keywords you might want to stop optimizing for.
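That export-and-filter step can be scripted against the CSV. A sketch assuming an export with `Queries` and `Position` columns (the exact column names vary by export, so treat them as placeholders):

```python
import csv

def top_ranking_queries(path: str, position_col: str = "Position",
                        query_col: str = "Queries") -> list:
    """From a Search Analytics CSV export, return queries already at ~position 1.
    Positions are averages, so anything below 1.5 is treated as 'position 1'."""
    with open(path, newline="") as fh:
        return [
            row[query_col]
            for row in csv.DictReader(fh)
            if float(row[position_col]) < 1.5
        ]
```

Inverting the comparison (e.g. positions 4-10) gives you the opposite list: keywords one push away from page-one prominence.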
What steps did I miss in this analysis and could use some more clarification?
In the next part of this series I'd like to take the next step and see how we can measure the impact of links from start to finish, followed by part three on how to measure conversions and business metrics (the metrics that should really matter). In the end, when you merge all these different areas, you should be able to measure impact at any stage independently.
A while back somebody posted the SEO platforms/vendors/tools he was using at his agency job (as an SEO). Since I missed some great tools in there, I decided to respond, but it also got me thinking about my own toolset, so I decided to dedicate a blog post to it: to get better recommendations, to learn from others what they're using, and hopefully also to shine some light on what I look for in tools. This is not all of it, and I didn't really have time to explain in detail what I'm using specific tools for (I might dedicate some posts to that over time), but I at least wanted to give you a first look. So here we go…
In general I have three requirements for tools:
- It should be easy to use and user friendly: no weird interfaces or things that only barely work (like 90% of my tools).
- The most data/features available, or the opposite: a very specific focus on one element of what I'm looking for.
- They must have an API so I can build things on top of it, preferably included in the pricing of the tool (normal for most tools these days).
Google Search Console, Bing Webmaster Tools, Yandex Webmaster Tools
Obviously Google Search Console is the tool that really matters out of the three, as most of my time is spent managing our visibility in Google. My favorite reports are Search Analytics, for a quick overview of our performance (we use most of the data outside of it, via their API/R library), and Structured Data (don't forget the Structured Data Testing Tool), to track what we're doing with Schema.org on our pages. From time to time I'll look into the Index Status report when I'm dealing with multiple domains at the same time.
One of the reasons I like Bing Webmaster Tools is that its Index Explorer lets you find directories and subdomains that exist on the site. A great benefit if you're just getting started with a new site. Even after years at The Next Web, and these days at Postmates, I'm still finding out about folders or subdomains that you never hear about day to day but that might cause issues for SEO.
Google Analytics & Google Tag Manager
You get the point on this one, right? You're tracking your traffic, and the combination of the two can help you capture all the contextual data through custom dimensions or other metrics/dimensions that will help you understand your data better. I blogged about them many times on The Next Web while I was there and will continue to do so in the future.
Screaming Frog & Deepcrawl
Getting more insight into your technical structure is super valuable when you're working on a technical audit. ScreamingFrog for day-to-day use on subsets of data and Deepcrawl for weekly all-pages crawls are both very powerful and help me figure out what kind of pages or segments are creating issues. I like to use them both: they have different reports, and the differences between the tools help me understand issues better.
In my current toolset, Botify, which I'll mention later in this document, is a third option.
SEMrush & Google Adwords Keyword Tool
You always want more insight into keywords, and that's what both tools are great at. They give you a solid basis for keyword research, which you can use as the start of your site's architecture, keyword structures and internal link structures. In my previous blog post on Google Search Console I kicked off a keyword research based on that data; if you want to take it easy, go with these tools (as a start).
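As a minimal sketch of what that starting point can look like: the snippet below groups a flat keyword export (like one from SEMrush or the Keyword Tool) by its head term, a crude first pass toward a draft site architecture. The keyword list and the grouping rule are illustrative assumptions, not how any of these tools work internally.

```python
from collections import defaultdict

def group_by_head_term(keywords):
    """Group keyword phrases by their first word (the 'head' term).

    A crude first pass when turning a flat keyword export into a
    draft site architecture: each head term becomes a candidate
    category page, its phrases candidate subpages.
    """
    groups = defaultdict(list)
    for kw in keywords:
        kw = kw.strip().lower()
        if kw:
            groups[kw.split()[0]].append(kw)
    return dict(groups)

# Hypothetical seed keywords for illustration only.
seed = [
    "seo training",
    "seo training amsterdam",
    "seo course online",
    "content marketing course",
]
print(group_by_head_term(seed))
```

In practice you'd feed this thousands of keywords with their volumes and sort groups by total volume to prioritize.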
Majestic
Majestic might not be the most user friendly tool (hint & sorry!), but as they have one of the largest link indexes it's great for link research. In this case I definitely value data + quality over the friendliness of the tool.
AuthorityLabs / SERPmetrics
I still deeply believe in using ranking data. As I have the opportunity to collect it at large scale and use the data at both a national & local level, it helps me get a better understanding of what's happening in the rankings, and especially of what's moving. I'm not necessarily only interested in our own rankings or our competitors'. If certain features in the SERP suddenly move up, that helps me understand why certain metrics are moving (or not). It's a great source of intelligence data that you can leverage for prioritization and for measuring your impact.
AuthorityLabs used to be my favorite tool for this, but after they changed their pricing model I switched over to SERPmetrics.
Botify / Servers
I'll try to write a follow-up blog post explaining how this data can help you get more insight into the performance of the features/products that you build. Getting insights from the log file data that you have on your servers can be extremely useful (I must add that this mostly applies to big-site SEO). Right now I'm using Botify for this.
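To give an idea of the kind of thing you can pull out of raw access logs before reaching for a platform like Botify, here's a small sketch that counts requests per URL from clients claiming to be Googlebot. The log format (Apache/nginx combined) and the sample lines are assumptions; for real verification you'd also reverse-DNS the IPs, since anyone can fake the user agent.

```python
import re
from collections import Counter

# Matches the combined log format used by Apache and nginx.
# Adapt this to whatever your servers actually emit.
LOG_LINE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def googlebot_hits(lines):
    """Count requests per path where the user agent claims to be Googlebot."""
    hits = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and "Googlebot" in m.group("ua"):
            hits[m.group("path")] += 1
    return hits

# Fabricated sample lines for illustration only.
sample = [
    '66.249.66.1 - - [10/Oct/2016:13:55:36 +0000] "GET /courses/seo HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '10.0.0.5 - - [10/Oct/2016:13:55:40 +0000] "GET /courses/seo HTTP/1.1" 200 512 "-" "Mozilla/5.0 (Windows NT 10.0)"',
]
print(googlebot_hits(sample))
```

Aggregating this per page segment over time shows which parts of a big site Googlebot is (or isn't) spending its crawl budget on.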
For bigger sites it's really hard to stay up to date on the latest changes, as so many people are working on the site. That's why you want to make sure you get alerted when important SEO features change. We're using some custom scripts in Google Drive, but we also like to use SEORadar.
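A hypothetical sketch of what such a custom alerting script can boil down to: fingerprint the SEO-critical elements of a page and compare against yesterday's value. The regex extraction and the choice of elements are simplifications for illustration; a production script would fetch the live pages and use a proper HTML parser.

```python
import hashlib
import re

def seo_fingerprint(html):
    """Hash the SEO-critical bits of a page (title, canonical, robots meta)
    so a scheduled script can alert when any of them change.

    Regex extraction is a simplification for this sketch; real HTML
    should go through a parser.
    """
    title = re.search(r"<title>(.*?)</title>", html, re.I | re.S)
    canonical = re.search(r'<link[^>]+rel="canonical"[^>]+href="([^"]+)"', html, re.I)
    robots = re.search(r'<meta[^>]+name="robots"[^>]+content="([^"]+)"', html, re.I)
    parts = [m.group(1).strip() if m else "" for m in (title, canonical, robots)]
    return hashlib.sha1("|".join(parts).encode("utf-8")).hexdigest()

# example.com URLs are placeholders; a real run would compare today's
# fetch against a stored snapshot.
yesterday = seo_fingerprint('<title>SEO Courses</title><link rel="canonical" href="https://example.com/seo">')
today = seo_fingerprint('<title>SEO Courses</title><link rel="canonical" href="https://example.com/seo?ref=1">')
if yesterday != today:
    print("ALERT: title/canonical/robots changed")
```

Run it hourly over a list of template pages and you catch a broken canonical long before the weekly crawl does.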
Google Cloud Platform
My former coworker Julian wrote a great blog post on how to scale up ScreamingFrog by running it on a Google Cloud server. It's one of many use cases for the Google Cloud Platform. Besides their servers, analyzing large data sets with BigQuery (with or without their Google Analytics connection) gives you a much better ability to handle large amounts of data (log files, internal databases, etc.).
- Data: In addition to the tools I just listed, there are a few APIs that I use on a regular basis that make my life easier by providing a lot of data: APIs to retrieve keyword volumes and related keywords, for example. To handle things at a bigger scale, you're going to want to work with APIs instead of dealing with Excel files.
- Reporting: Most of the reports that you deliver can be automated, which is a great timesaver. I do this using the Google Analytics reporting add-on for Google Sheets, googleAnalyticsR, SearchConsoleR and the Google Analytics Reporting API (V3 and V4).
What am I still looking for?
- Quality Control & Assurance: Weekly crawls aren't enough when things are messed up; you want to know about it on an hourly basis, especially when things are moving so fast that you can't keep track of changes anymore.
- More link data: Besides Majestic, it would be great to combine the datasets of other providers when doing this research. Doing this manually is doable, but not on a regular basis.
- More keyword data: When you start your keyword research you can just start with a certain set of keywords, but it could be that you're forgetting a huge set of keywords in a closely related industry. I'm exploring how to get more keywords to start your keyword research with (we're not talking about 19 extra keywords here, more like 190,000 keywords).
I'm sure this set of tools will keep evolving over the next months as new things happen. I'd love to learn more about the tools that you're using. Let me know in the comments or on Twitter what I should be using and I'll take a look!
Duplicate content is (judging by the questions I get from new SEOs and people in online marketing) still seen as one of the biggest issues in Search Engine Optimization. I've got news for you: it for sure isn't, as there are plenty of bigger issues. But somehow it always comes back to the surface when talking about SEO. As I've been on both sides of the equation, having worked for comparison sites and a publisher, I want to reflect on both angles, and on why I think it's really important to see both sides of the picture when looking into why sites have duplicate content and whether they do it on purpose or not.
When I started in SEO more than a decade ago I worked for a company that would list courses from all around the globe on their website (Springest.com, let's give them some credit), making it possible for people to compare them. By doing this we were able to create a really useful overview of training courses on the subject of SEO, for example. One downside was that basically none of the content we had on our site was unique. Training courses often follow a very strict program, and in certain cases are regulated by the government or institutions to provide the right qualification to attendees, making it impossible to change any of the descriptions of contents, books or requirements, as they were provided by the institutions (read: copy-pasted).
I then worked at the complete other side with The Next Web, where I had the privilege of working with 10-15 full-time editors all around the globe who wrote unique, fresh (news) content on a daily basis, backed up by dozens of people willing to write for TNW, so we were presented with the opportunity to choose what kind of posts we published. That made some things easier, but even at TNW we ran into content issues. The tone of voice changes over time as editors come and go, and when you publish more content from guest authors it's hard to maintain the right balance.
These days I'm 'back' with duplicate content, working at Postmates on on-demand delivery. Experience makes it easier to deal with the duplicate content that we technically have from all of the restaurants (it's published on their own sites and on some competitors'), and it's way easier now to come up with many more ideas based on the (duplicate) content that you already have. It also made me realize that most of the time you're always working with something that is duplicated, whether it's the product info you have in ecommerce or the industry that you operate in. It's all about the way you slice and dice it to make it more unique.
In the end, search engine optimization is all about content, duplicated or not. We all want to make the best of it, and there is always a way to provide a unique angle. Although the angle of these businesses and the way of doing SEO for them is completely different, there are certain skills required that I think give you a benefit over a lot of people when you've worked with both.