Measuring SEO Progress: From Start to Finish – Part 1: Receiving Traffic

How do you measure (and over time forecast) the impact of the features that you’re building for SEO, from start to finish? It’s a topic I’ve been thinking about a lot over the last few months. It’s hard, as most of the actual work that we do can’t be measured easily or directly correlated to results, and it requires a lot of resources and investment (time + money). After a discussion about this on Twitter with Dawn Anderson, Dr. Pete and Pedro Dias I thought it was time to write up some more ideas on how to get better at measuring SEO progress and seeing the impact of what you’re doing. What can you do to safely assume that the right things are being impacted?

1. Create

You’ve spent a lot of time writing a new article or working on a new feature/product with your team, so the last thing you want is to receive no search traffic for it. Let’s walk through the steps to get your new pages into the search engines and look at how you can ‘measure’ success at every step.

2. Submit: to the Index and/or Sitemaps

The first thing you can influence is making sure that your pages are being crawled, in the hope that they’ll be indexed right after. There are a couple of ways to do this: you can submit them through Google Search Console to have them fetched (and hope that form keeps working), or list your pages in a sitemap and submit that through Google Search Console.

Want to go ‘advanced’ (#sarcasm)? You can even ping search engines whenever your sitemaps are updated, or use something like PubSubHubbub to notify other sources that there is new content.

How to measure success? If you’ve successfully submitted your URL via one of these routes, you’ve basically completed this step. For now there’s not much more you can do.

3. Crawled?

This is your first real test, as submitting your page these days doesn’t guarantee that it will be crawled. So after submitting you want to make sure the page is actually being seen by Google. Only after that can Google evaluate whether it finds the page ‘good enough’ to index. Before this step you mostly want to make sure that you did, indeed, build the best page ever for users.
How to measure success? This is one of the hardest steps, as most of the time (at least for bigger sites) you’ll need access to the server logs to figure out which URLs have been visited by a search engine (user agent). What do you see, for example, in the following snippet:

30.56.91.72 - - [06/Sep/2017:22:23:56 +0100] "GET" - "/example-folder/index.php" - "200" "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" - www.example.com

It’s a visit to the hostname www.example.com, on the path /example-folder/index.php, which returned a 200 status code (successful) on September 6th, and the user agent contained Googlebot. If you’re able to filter your server logs down like this, you can identify which pages are being crawled, and which aren’t, over a period of time.
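As a rough illustration, a few lines of Python can already give you a crawl count per URL from such a log. The access.log path is a placeholder, the regex assumes a format similar to the line above (adjust it to your own), and ideally you’d also verify Googlebot hits via reverse DNS rather than trusting the user agent alone:

# Minimal sketch: count Googlebot hits per URL in a server log.
# "access.log" is a hypothetical path; tweak the regex to your log format.
import re
from collections import Counter

googlebot_hits = Counter()

with open("access.log") as log_file:
    for line in log_file:
        if "Googlebot" not in line:
            continue
        # Grab the requested path out of the GET request part of the line.
        match = re.search(r'"GET"?\s*-?\s*"?(/[^"\s]*)', line)
        if match:
            googlebot_hits[match.group(1)] += 1

# Print the 25 most crawled paths.
for path, hits in googlebot_hits.most_common(25):
    print(hits, path)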

4. Indexed: Can the URL be found in the Index?

Like I mentioned before, a search engine crawling your page is no guarantee at all that it will also be indexed. Having worked with a lot of sites whose pages are close to duplicate, I’ve seen the risk that they might not get indexed. But how do you know, and what can you do to evaluate what’s happening?
How to measure success? There are two very easy ways. Manual: put the URL into a Google search and see if the actual page comes up. If you want to do this at a higher scale, look at the sitemaps indexed data in Google Search Console to see what percentage of pages (if you’re dealing with templated pages) is being indexed. The success factor: your page shows up, which means it’s getting ready to start ranking (higher).
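If you want to check that sitemap-level indexation programmatically, here is a rough sketch against the Search Console (webmasters v3) sitemaps endpoint. The property URL and key file are placeholders, and it assumes the API still returns indexed counts for your sitemaps:

# Sketch: compare submitted vs. indexed URL counts per sitemap.
# Placeholders: service-account.json key file and the property URL.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SITE_URL = "https://www.example.com/"  # hypothetical property

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
webmasters = build("webmasters", "v3", credentials=creds)

sitemaps = webmasters.sitemaps().list(siteUrl=SITE_URL).execute()
for sitemap in sitemaps.get("sitemap", []):
    for contents in sitemap.get("contents", []):
        submitted = int(contents.get("submitted", 0))
        indexed = int(contents.get("indexed", 0))
        pct = (indexed / submitted * 100) if submitted else 0
        print(sitemap["path"], contents["type"], f"{indexed}/{submitted} ({pct:.0f}%)")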

5. First Traffic & Start Ranking

It’s time to start achieving results: the next step after making sure that your site is indexed is to start achieving rankings, as a better ranking will help you get more visits (even on niche keywords). In this blog post I won’t go into what you can do to get better rankings, as too many blog posts have already been written about that topic.

How to measure success? Read this blog post from Peter O’Neill (Mr. MeasureCamp) on the tracking he added to measure the first visits coming in from organic search. This is one of the best ways I know of for now, as it also allows you to retrieve the data via the Google Analytics Reporting API, making it easier to automate reporting on it.

As an alternative you can use Google Search Console and filter down on the page, so you’re only looking at the data for a specific landing page. Based on that you can see over time how search impressions + clicks have been growing and when (the only requirement is that you get clicks within the first 90 days after launching the page, but you’re a good SEO, so you’re capable of achieving that).
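A minimal sketch of that page filter via the Search Console API, in case you want it outside the interface. The property URL, landing-page URL, date range and key file are all placeholders:

# Sketch: daily impressions + clicks for one landing page via the
# Search Console API, to spot when it starts receiving traffic.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
webmasters = build("webmasters", "v3", credentials=creds)

body = {
    "startDate": "2017-06-01",
    "endDate": "2017-08-31",
    "dimensions": ["date"],
    "dimensionFilterGroups": [{
        "filters": [{
            "dimension": "page",
            "operator": "equals",
            "expression": "https://www.example.com/new-landing-page/",
        }]
    }],
}
response = webmasters.searchanalytics().query(
    siteUrl="https://www.example.com/", body=body).execute()

for row in response.get("rows", []):
    print(row["keys"][0], row["impressions"], row["clicks"], row["position"])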

6. Increase Ranking

In the last step we looked at when you received your first impression. But Google Search Console can also tell you more about the position for a keyword. This is important to know, because it tells you whether it’s still worth increasing your efforts to get more traffic in certain areas.

In some cases it means that you can still improve your CTR by optimizing the snippet in Google. For some keywords it might mean that you’ve hit your limit, for others it might mean that you can still increase your position by a lot.

How to measure success? Look at the same report, Search Analytics, that we just used for the first visit of a keyword. By enabling the impressions data you can monitor what your rankings are doing. In this example you see that the rankings fluctuate on a daily basis between 1 and 3. When you’re able to save this data over time you can start tracking rankings in a more efficient way.

Note: to do this efficiently you want to filter down on the right country, dates, search type and devices as well. Otherwise you might be looking at data from other countries, devices, etc. that you’re not interested in. For example, right now I don’t care about searches outside of the US; I probably rank lower there, so they could drag down my averages (significantly).

As Google Search Console only shows the data for the last 90 days, I would recommend saving the data (export to CSV). In a previous blog post I wrote during my time at TNW I explained how to do this at scale via the API. As you monitor more keywords over time, this is usually the best way to go.
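A hedged sketch of what that daily save can look like, appending one day of query-level data to a CSV. The property URL, file name, row limit and the three-day data lag are assumptions on my part:

# Sketch: append today's query-level positions to a CSV so the data
# survives beyond the 90 days shown in Google Search Console.
import csv
import datetime

from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
webmasters = build("webmasters", "v3", credentials=creds)

# GSC data usually lags a few days behind.
day = (datetime.date.today() - datetime.timedelta(days=3)).isoformat()
body = {
    "startDate": day,
    "endDate": day,
    "dimensions": ["query"],
    "rowLimit": 5000,
}
response = webmasters.searchanalytics().query(
    siteUrl="https://www.example.com/", body=body).execute()

with open("gsc-positions.csv", "a", newline="") as output:
    writer = csv.writer(output)
    for row in response.get("rows", []):
        writer.writerow([day, row["keys"][0], row["clicks"],
                         row["impressions"], row["position"]])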

7. First Positions

In the last step I briefly mentioned that there is still work to be done when you’re ranking in position 1 for a specific keyword: you can usually still optimize your snippet for a higher CTR. Over time I’ve noticed these are the easier tasks in optimization, although at scale they can be time consuming. But how do you find all these keywords?

Keyword Rankings

I still believe in keyword rankings. Especially when you know which locations you’re focusing on (at a city, zip code or state level), you can still measure the actual SERPs via the many tools out there (I’m working on something cool myself, bear with me for a while until I can release it). The results in these reports can tell you a lot about where you’re improving and whether you’re already hitting the first positions in the results.

How to measure success? Stay in the same report as before. Make sure that you’ve segmented your results for the right date range and for the right device, page, country or search type that you want to cover. Export your data and filter or sort the position column to get the queries where position == 1. These are the keywords where there’s no ranking left to gain, so the remaining work is snippet/CTR optimization.
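If you’ve exported that data (for example with the earlier snippet), a few lines of pandas will give you the position == 1 list. The column names simply follow the CSV written above and are assumptions:

# Sketch: list queries that already average position 1, so you can
# shift them from ranking work to snippet/CTR work.
import pandas as pd

df = pd.read_csv("gsc-positions.csv",
                 names=["date", "query", "clicks", "impressions", "position"])

# An average position of exactly 1.0 means rank 1 for every impression.
top_spots = df[df["position"] <= 1.0]
print(top_spots.sort_values("impressions", ascending=False).head(20))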
What steps did I miss in this analysis, and what could use some more clarification?
In the next part of this series I’d like to take the next step and see how we can measure the impact of links from start to finish, followed by part three on how to measure conversions and business metrics (the metrics that should really matter). In the end, when you merge all these different areas, you should be able to measure impact at any stage independently.

What tools am I using for SEO?

A while back somebody posted the SEO platforms/vendors/tools that he was using at his agency job (as an SEO). Since I was missing some great tools in there, I decided to respond, but it also got me thinking about my own toolset, so I decided to dedicate a blog post to it: to get better recommendations, learn from others what they’re using, and hopefully also shine some light on what I look for in tools. This isn’t all of it, and I didn’t really have time to explain in detail what I’m using specific tools for (I might dedicate some posts to that over time), but I at least wanted to give you a first look. So here we go..

In general I have three requirements for tools:

  • It should be easy to use & user friendly: no weird interfaces or features that only half work (which already rules out 90% of tools).
  • It should have the most data/features available, or the opposite: a very specific focus on one element of what I’m looking for.
  • It must have an API so I can build things on top of it, preferably included in the pricing of the tool (normal for most tools these days).

Google Search Console, Bing Webmaster Tools, Yandex Webmaster Tools

Obviously Google Search Console is the tool that really matters out of the three, as most of my time is spent managing our visibility in Google. My favorite reports are Search Analytics, for getting a quick overview of our performance (we use most of the data outside of the interface via their API/R library), and Structured Data (don’t forget about the Structured Data Testing Tool), to track what we’re doing with Schema.org on our pages. From time to time I might look into the Index Status report when I’m dealing with multiple domains at the same time.

One of the reasons I like Bing Webmaster Tools is that its Index Explorer lets you find directories & subdomains that exist on the site, a great benefit if you’re just getting started with a new site. Even after years at The Next Web, and these days at Postmates, I’m still finding out about folders or subdomains that you never hear about on a day-to-day basis but that might cause issues for SEO.

Google Analytics & Google Tag Manager

You get the point on this one, right? You’re tracking your traffic, and the combination of the two can help you capture all the contextual data through custom dimensions or other metrics/dimensions that will help you understand your data better. I’ve blogged about them many times on The Next Web while I was there and will continue to do so in the future.

Screaming Frog & Deepcrawl

Getting more insight into your technical structure is super valuable when you’re working on a technical audit. ScreamingFrog for day-to-day use on subsets of data and Deepcrawl for weekly all-pages crawls are very powerful and help me see which pages or segments are creating issues. I like to use them both, as they have different reports and the differences between the tools help me understand issues better.

In my current toolset, Botify, which I’ll mention later in this post, is a third option.

SEMrush & Google Adwords Keyword Tool

You always want more insight into keywords, and that’s what both tools are great at. They give you a solid basis for keyword research, which you can use as the start of your site’s architecture, keyword structures and internal link structures. In my previous blog post on Google Search Console I kicked off a keyword research approach based on its data; if you want to take it easy, go with these tools (as a start).

Majestic

Majestic might not be the most user friendly (hint & sorry!), but as they have one of the largest indexes it’s great for link research. In this case I definitely value data + quality over the friendliness of the tool.

AuthorityLabs / SERPmetrics

I still deeply believe in using ranking data. As I have the opportunity to do this at large scale and use the data at both national & local level, it helps me get a better understanding of what’s happening in the rankings and, mostly, what’s moving. I’m not necessarily only interested in our own rankings or our competitors’. If certain features in the SERP suddenly move up, it helps me understand why certain metrics are moving (or not). It’s a great source of intelligence data that you can leverage for prioritization and for measuring your impact.

AuthorityLabs used to be my favourite tool to use; these days, after they changed their pricing model, I’ve switched over to SERPmetrics.

Botify / Servers

I’ll try to write a follow-up blog post explaining how this data can help you get more insight into the performance of the features/products that you build. But getting more insights from the log file data on your servers can be extremely useful (I must add that this mostly applies to big-site SEO). Right now I’m using Botify for this.

Google Cloud Platform

My former coworker Julian wrote a great blog post on how to scale up ScreamingFrog and run it on a Google Cloud server. It’s one of many use cases for the Google Cloud Platform. Besides their servers, analyzing large data sets with BigQuery (with or without their Google Analytics connection) gives you a much better ability to handle large amounts of data (log files, internal databases, etc.).
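As a hedged example of the BigQuery side, assuming you’ve loaded your access logs into a (made-up) my-project.logs.access_log table with user_agent, path and timestamp columns:

# Sketch: aggregate Googlebot hits per path over the last 30 days
# from log data loaded into BigQuery. Dataset/table names are examples.
from google.cloud import bigquery

client = bigquery.Client()  # uses your default GCP project/credentials

query = """
    SELECT path, COUNT(*) AS googlebot_hits
    FROM `my-project.logs.access_log`
    WHERE user_agent LIKE '%Googlebot%'
      AND DATE(timestamp) >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY path
    ORDER BY googlebot_hits DESC
    LIMIT 100
"""

for row in client.query(query).result():
    print(row.path, row.googlebot_hits)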

APIs

  • Data: In addition to the tools I just listed, there are a few APIs that I use on a regular basis that make my life easier because they provide a lot of data: APIs to retrieve keyword volumes and related keywords. To handle things at a bigger scale you’re going to want to work with APIs instead of dealing with Excel files.
  • Reporting: Most of the reports that you deliver can be automated, which is one of the biggest timesavers there is. I do this with the Google Analytics reporting add-on for Google Sheets, googleAnalyticsR, SearchConsoleR and the Google Analytics Reporting API V3 and V4 (see the sketch after this list).
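As an illustration of what that automation can look like, here is a hedged Python sketch against the Reporting API v4. The view ID, key file and the organic-sessions-per-landing-page report are just example choices, not my actual setup:

# Sketch: organic sessions per landing page via the GA Reporting API v4.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/analytics.readonly"],
)
analytics = build("analyticsreporting", "v4", credentials=creds)

response = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": "123456789",  # placeholder view ID
        "dateRanges": [{"startDate": "30daysAgo", "endDate": "yesterday"}],
        "metrics": [{"expression": "ga:sessions"}],
        "dimensions": [{"name": "ga:landingPagePath"}],
        "dimensionFilterClauses": [{
            "filters": [{
                "dimensionName": "ga:medium",
                "operator": "EXACT",
                "expressions": ["organic"],
            }]
        }],
    }]
}).execute()

for row in response["reports"][0]["data"].get("rows", []):
    print(row["dimensions"][0], row["metrics"][0]["values"][0])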

What am I still looking for?

  • Quality Control & Assurance: Weekly crawls aren’t enough when things are messed up; you want to know this on an hourly basis, especially when things are moving so fast that you can’t keep track of changes anymore.
  • More link data: Next to Majestic it would be great to combine the datasets of other providers when doing this research. Doing this manually is doable, but not on a regular basis.
  • More keyword data: When you start your keyword research you can only begin with a certain set of keywords, but you might be missing a huge set of keywords in a closely related niche. I’m exploring how to get more keywords to start your keyword research with (we’re not talking about 19 extra keywords here, more like 190.000 keywords).

I’m sure this set of tools will keep evolving over the coming months as new things happen. I’d love to learn more about the tools that you’re using. Shoot me a comment or a tweet about what I should be using and I’ll take a look!

From 99% ‘duplicate content’ to 15 editors and back to ‘duplicate content’

Duplicate content is (according to questions from new SEOs and people in online marketing) still one of the biggest issues in search engine optimization. I’ve got news for you: it surely isn’t, as there are plenty of other issues. But somehow it always comes back to the surface when talking about SEO. As I’ve been on both sides of the equation, having worked for comparison sites and for a publisher, I want to reflect on both angles: why I think it’s really important to see both sides of the picture when looking into why sites might have duplicate content, and whether they do it on purpose or not.

When I started in SEO, more than a decade ago, I worked for a company that listed courses from all around the globe on their website (Springest.com, let’s give them some credit), making it possible for people to compare them. By doing this we were able to create a really useful overview of training courses on the subject of SEO, for example. One downside was that basically none of the content we had on our site was unique. Training courses often follow a very strict program and in certain cases are regulated by the government or institutions to provide the right qualification to attendees, making it impossible to change any of the descriptions of contents, books or requirements, as they were provided by the institutions (read: copy-pasted).

I’ve also worked at the complete other side, at The Next Web, where I had the privilege of working with 10-15 full-time editors all around the globe who wrote unique, fresh (news) content on a daily basis, backed up by dozens of people willing to write for TNW, which gave us the opportunity to choose what kind of posts we publish. That made some things easier, but even at TNW we ran into content issues: the tone of voice changes over time as editors come and go, and when you publish more content from guest authors it’s hard to maintain the right balance.

These days I’m ‘back’ with duplicated content, working at Postmates, where we work on on-demand delivery. Now it’s easier to deal with the duplicate content that we technically have from all of the restaurants (it’s published on their own sites and on some competitors’). With previous experience it’s way easier to come up with many more ideas based on the (duplicate) content that you already have. It also made me realize that most of the time you’re always working with something that is duplicated, whether it’s the product info you have in ecommerce or the industry that you operate in. It’s all about the way you slice and dice it to make it more unique.

In the end, search engine optimization is all about content, duplicated or not. We all want to make the best of it, and there is always a way to provide a unique angle. Although the angle of these businesses and the way of doing SEO for them is completely different, there are certain skills that I think give you an advantage over a lot of people once you’ve worked with both.

Retrieving Search Analytics Data from the Google Search Console API for Bulk Keyword Research

Last year I blogged about using 855 properties to retrieve all your Search Analytics data. Just after that, Google luckily announced that the limit of retrieving only the top 5000 results through the API had been lifted. Since then it’s been possible to potentially pull all your keywords from Google Search Console via their API (hint: you’re still not able to get all the data).

Since I started at Postmates, now well over two months ago, one of the biggest projects I began with was getting insight into which markets + product categories we’re already performing OK in from an SEO perspective. With over 150.000 unique keywords weekly (and working on increasing that) it’s quite hard to get a good grasp on what’s working, as we’re active in 50+ markets, which influences the queries that people search for (for example: show me all the queries over a longer period of time containing ‘Mexican’, across all markets, which is impossible from the interface). Clicking through the Search Analytics feature in Google Search Console is nice for checking specific keywords quickly, but overall it wouldn’t help in getting detailed insight into what’s working and what’s not.

Some of the issues I was hoping to solve with this approach:

  • Pull all your data on a daily basis so you can get an accurate picture of the number of clicks and how that changes over time for a query.
  • Hopefully get some insight into the actual number of impressions. Google Adwords Keyword Tool data is still very valuable, but as it’s grouped it can be off on occasion. Google Search Console should be able to provide more accurate data on a specific keyword level.
  • Use the data as a basis for further keyword research and categorization.

Having used the Google Search Console API a bit before, I was curious to see what I could accomplish by pulling in the data on a daily basis and making sense of it (and combining it with other data sets, maybe more on that in later blog posts).

The process:

  • Daily, pull in all the keywords grouped by landing page, so you know for sure you get all the different keyword combinations and your data isn’t filtered by the API.
  • Save the specific keyword if we haven’t saved it before, so we know when a keyword was seen for the first time.
  • For every keyword that is returned, do another call to the API to get the country, landing pages and metrics for that specific query.

In our case we categorize the keywords right after we pull them in, to see whether they match a certain market or product category. So far this has been really useful for us, as it gives us much better ways of dashboarding. A rough sketch of the pull itself is below.
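Here is a hedged Python sketch of that two-step process. The property URL, dates and key file are placeholders, and a real version needs retry/back-off logic because of the rate limits mentioned below:

# Sketch: step 1 pulls queries grouped by landing page, step 2 pulls
# country-level metrics per query. Placeholders throughout.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SITE = "https://www.example.com/"

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
webmasters = build("webmasters", "v3", credentials=creds)

def query(body):
    return webmasters.searchanalytics().query(siteUrl=SITE, body=body).execute()

# Step 1: queries per landing page for a single day.
rows = query({
    "startDate": "2017-09-01",
    "endDate": "2017-09-01",
    "dimensions": ["page", "query"],
    "rowLimit": 5000,
}).get("rows", [])

keywords = {row["keys"][1] for row in rows}

# Step 2: per keyword, pull country + page metrics (slow: one call each).
for keyword in keywords:
    detail = query({
        "startDate": "2017-09-01",
        "endDate": "2017-09-01",
        "dimensions": ["country", "page"],
        "dimensionFilterGroups": [{
            "filters": [{"dimension": "query", "operator": "equals",
                         "expression": keyword}]
        }],
    })
    for row in detail.get("rows", []):
        print(keyword, row["keys"][0], row["keys"][1],
              row["clicks"], row["impressions"], row["position"])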

Some of the things that I ran into while building this out:

What to look out for?

  • The API heavily limits the keywords you get to see that only have impressions. I was able to retrieve some of that data, but on a daily basis the impression statistics are off by 50% compared to what I’m seeing in Google Search Console. Clicks, however, only show a small difference, win!
  • Apparently they’re hiding some of the keywords as they qualify them as highly personal. So you’ll miss a certain percentage because of that.
  • The rate limits of the Google Search Console API aren’t very nice; for over 5k keywords it takes quite a while to pull in all the data as you have to deal with those limits.

Most of these items aren’t really an issue for us; we have better sources for volume data anyway. In the future we’re hoping to gather more data from different sources to extend that. I’m hoping to blog about that somewhere in the future.

Making a move … what’s next!?

Last week was my last one at The Next Web as their Director of Marketing. For the last four years I’ve worked alongside great people: publishing the best content (TNW), organising the best + biggest tech conferences (TNW Conferences), selling the craziest drones (TNW Deals), creating the most beautiful workspace (TQ) and collecting a ton of data on the global tech industry (Index.co). But still… it’s time for me to move on to something new:

Project: ‘New adventures’

As of Monday I’ll be joining Postmates to help out/lead their SEO strategy, which means I’m already in San Francisco to join the team from there.

For the last 7 years I’ve been maintaining a list of goals that I update every day/week/month with the things I’d like to achieve on a personal and business level in the (near) future. Since my first trip to the US many years ago, one of those goals has been to move to the Bay Area for x months > x years, mainly to see if my skills would hold up in what I consider to be a more competitive and global market. Having been able to spend 6 weeks in and around San Francisco at the end of last year made it even easier to decide that I wanted to move to the area sooner rather than later.
Having had some very positive changes in my life over the course of the last year made the choice even easier 😃 #analyticspowercouple.

While talking to Postmates I found that my passion for SEO, Growth, Analytics and Innovation could be stimulated even more, so joining them is a great opportunity to develop myself further.

Some highlights and numbers from my time at TNW: 8 TNW conferences, 35+ conferences, 60+ flights, xx blog posts, 415 million users, 342 A/B tests, 1465 commits, 231 Gitlab tickets, 111827 messages on Slack.

What I’m not going to miss from my time at TNW:

  • The most ridiculous PR pitches from companies that aren’t relevant to TNW 😉
  • Product & startup pitches in my inbox that aren’t relevant or not ready for the scale of TNW 😉
  • Endless analysis on what content is supposed to attract more engagement + traffic 😉 – Still haven’t found the answer, if you’re curious.

What I’ll be missing though:

  • The great opportunity that I got from Boris & Patrick to build out the marketing team.
  • Very passionate people trying to improve TNW every day just a little bit more.
  • A great team, that I’ll truly miss working with.

Overall, I’m ready for the new challenge at Postmates and very enthusiastic about working with a new (growing) team and trying to reach world domination. If you’re around in the future, please let me know. I’ll definitely be sticking to my current strategy: trying to meet with great people across the industry to learn from. So, drinks on me!
If you want to reach out to me, you can find me at: martijn.scheijbeler@gmail.com

Why giving back is so important, help out

Getting more experience can be hard when you’re just starting your career. You’re either hoping to land an internship or your first job, or if you’re making a career move you need the experience to keep up. But you’re living in a great time to get there, as there are many options available for getting started, which would almost deserve its own blog post. Today I’d like to talk about the idea of giving back to help you get more experience.

Giving back, even if you’re more experienced, is really important in my opinion. There are way too many NGOs and foundations that don’t have the resources (time, tools, people, knowledge, budget) to hire experts in the fields of SEO, Growth, Analytics and CRO. But there are many opportunities for all of us to give back (our time). I’d like to highlight two initiatives that I’m helping out with and that allow you to participate as well:

  • Analysis Exchange: Started by Analytics Demystified to give NGOs the ability to hire a mentor + student to help them with their web analytics projects. It’s the perfect opportunity for people starting out in web analytics to be taught by an industry expert and improve their skills, but also a great way for NGOs to get help from two people for a couple of weeks with their questions.
  • MeasureCamp (Amsterdam): A great initiative that MeasureCamp Amsterdam is starting during its next edition: reserving 8 time slots for a foundation/NGO to get ideas from 100+ experts in different areas. That’s basically a hundred hours in one day from top experts in data, analytics, SEO and CRO being contributed, which should help the chosen foundation improve its web presence.

How are you helping!? What great initiatives am I missing that you & I could contribute our time + knowledge to?

Introducing: the Google Tag Manager & Google Analytics for AMP – WordPress Plugin

Today it’s time to make it easier for sites running AMP on WordPress to track & measure their website traffic. Over the past weeks I’ve been working on a WordPress plugin that supports adding Google Tag Manager and Google Analytics to your AMP pages. As AMP itself is quite a hard new platform to get completely right, I thought it was time to make it easier for developers, marketers and analysts to do more with it.

Over the last year AMP has had a massive increase in support and its WordPress plugin is rising up the charts of the most downloaded plugins on WordPress, but overall the support for web analytics & tag management is lacking. The documentation is available, but as the abilities within WordPress are still limited, I thought it would be pretty straightforward to come up with a plugin that enables users to start using GTM & GA on AMP pages.

So that’s what this plugin helps you with: the ability to add Google Analytics or Google Tag Manager to your pages, start tracking more advanced features (outbound click tracking) and use 10 (for now) custom variables that you can map to custom dimensions.

Curious how to get started? Check out the plugin.

Moving away from onclick events, data attributes it is

Onclick attributes are an easy way to send some data to the dataLayer; they make sure that on every click the data is sent:

<a href="http://example.com" onclick="dataLayer.push({'eventCategory': 'Click', 'eventAction': 'This Link', 'eventLabel': 'Anchor Text'});">Link</a>

Looks valid, right? It is, and it will work for you. But the problem is that for external links the dataLayer push won’t reach Google Tag Manager before the browser moves on to the external link (not taking into account links that open in a new tab/window). Which means that you’ll lose the onclick event in Google Tag Manager and won’t be able to record it.

But there is a great alternative, which we switched to at The Next Web when we found out this was happening: data attributes. You’d use them like this:

<a href="http://example.com" data-event-category="Click" data-event-action="This Link" data-event-label="Anchor Text">Link</a>

Why does this work? When a click fires, Google Tag Manager captures the clicked element right away, so the link’s attributes are available without waiting for a dataLayer push. Making the data attributes part of the link itself, instead of a separate dataLayer.push, therefore fixes the problem. The next step is to create a variable in Google Tag Manager for each of the three attributes:

Make sure to set the default value to undefined; that way Google Tag Manager won’t take any action in the next step, which is defining the trigger. Repeat this another two times for the Event Action and Event Label.

This trigger checks all incoming link clicks for a data-event-category attribute that doesn’t equal undefined and has a value. In the next step, for your tag, you can use all three variables that you defined before.

You’re all set! You can now use data attributes on your links to make sure that you won’t lose any data when the browser isn’t ready to receive dataLayer pushes, and you’ve just made your tracking a bit more efficient!

The discovery & benefits of using ga:dataSource in Google Analytics

As there is always something new to learn in this world, I recently came across (via a tip from Simon Vreeman) the dataSource dimension in Google Analytics. It’s one that isn’t very well known, as it’s not available through any of the default reports, but it can be a great help for bigger companies with integrations in multiple places. In our case it was incredibly useful for making reporting on AMP performance easier.

DataSource is meant to be an additional field that tells GA where you’re sending the data from. It’s different from a hostname or page path because it’s about the source of the hit. By default it sends the value ‘web’ for sites, ‘app’ for mobile apps and apparently ‘amp’ for AMP tracking. That was very valuable in our case, as traffic on AMP pages can be hard to find in bulk: filtering on pages that have ?amp or /amp/ in the path isn’t great if you want to see the traffic sources for one specific article, and with LinkedIn also supporting AMP that won’t get any easier. Discovering dataSource means we can kill the custom dimension we used for this and immediately filter down on this dimension and then on referral traffic.
How can you leverage it? Send data from a specific source, for example your CRM, data warehouse or your email campaigns. At The Next Web we send a hit to Google Analytics via the Measurement Protocol to track email opens. We’ll probably move this over, as we’d consider it a different dataSource than a regular ‘web’ hit.
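As a hedged example, this is roughly what such a Measurement Protocol hit with its own dataSource could look like in Python; the property ID, client ID and event names are placeholders, not our actual setup:

# Sketch: send an email-open event to GA over the Measurement Protocol
# with a custom dataSource ('ds' parameter).
import requests

requests.post("https://www.google-analytics.com/collect", data={
    "v": "1",                  # protocol version
    "tid": "UA-XXXXXXX-1",     # your GA property ID (placeholder)
    "cid": "555",              # anonymous client ID (placeholder)
    "t": "event",              # hit type
    "ds": "email",             # dataSource: shows up as ga:dataSource
    "ec": "Email",             # event category
    "ea": "open",              # event action
    "el": "newsletter-2017-09" # event label (placeholder)
})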

How to configure the dataSource

Google Tag Manager: You can easily set it up in Google Tag Manager by adding it as an extra field to set; apparently it’s so unknown that it isn’t covered in the default fields that are suggested. But if you add ‘dataSource’ as a field and enter the value, it will be sent along with every hit (a hardcoded value works, but I would recommend going with a GTM variable).

Google Analytics: Setting this up in Google Analytics works in about the same way: you add ‘dataSource’ as an additional field in your hit call to GA.

@GoogleAnalytics: I know you’re reading this, please help us and update the documentation to also mention ‘amp’ being a default data source for tracking on AMP pages.

How to report

Creating a custom report

As the dimension isn’t available through a standard report, you’ll have to create a custom report. With our data I created a report that can be used to see specifically where traffic is coming from (I’ll talk about the hit referral path in another blog post).

Look at this table 😃 with information about the dataSource dimension
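If you’d rather automate this than click through a custom report, a sketch like the following pulls the same breakdown via the Reporting API v4; the view ID and key file are placeholders:

# Sketch: sessions split by ga:dataSource and ga:source via the
# GA Reporting API v4.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/analytics.readonly"],
)
analytics = build("analyticsreporting", "v4", credentials=creds)

response = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": "123456789",  # placeholder view ID
        "dateRanges": [{"startDate": "30daysAgo", "endDate": "yesterday"}],
        "metrics": [{"expression": "ga:sessions"}],
        "dimensions": [{"name": "ga:dataSource"}, {"name": "ga:source"}],
    }]
}).execute()

for row in response["reports"][0]["data"].get("rows", []):
    print(row["dimensions"], row["metrics"][0]["values"][0])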

What am I reading in 2017?

In August I wrote about what I did/was/would be reading in 2016. Although I haven’t finished all of those books, I’ve already planned on reading a ton of new books this year. My goal is to read even more: to get smarter, read about new things and just dive into more knowledge. So far that’s going pretty well, as my new Kindle makes me enjoy reading even more.

Usually before the new year starts I’ve already compiled a list of books that I’d like to read, that I’ve come across during the year and that made it to my Amazon wish list. I’ve ordered most of them by now or put them on my Kindle so I can get started right away.

So what will I be reading in 2017:

Recommendations?

I’m not reading a ton of marketing related books according to this list. So what are the books that you think I should be reading around: Growth/Marketing/Data/Analytics this year? Let me know: @MartijnSch