This blog post is part of a series of four (we found enough content to add more articles ;-)), in which we’ll dive into the details of why we wanted to create a data warehouse, how we created the data lake, and how we used the data lake to create a data warehouse. It is written with the help of @RickDronkers and @hu_me / MarketLytics, whom we’ve worked alongside during this (ongoing) project.
The Process of Building a Data Lake
Four Options: BigQuery Transfer Service, Firebase, SuperMetrics, and StitchData
Loading data from a dozen marketing vendors (not all of them are visualized in this graph) as efficiently as possible was important to us, as we wanted to save money and resources for other parts of the data warehouse integration. In the end, we believe that we should let people do the jobs that they’re good at and leverage vendors instead of always following a ‘build it ourselves’ strategy. This makes it possible for us to focus on the actual use cases instead of having to worry about the upkeep of integrations with over a dozen vendors, something that would take up a serious chunk of time in the long term.
That’s why we made a few decisions on how to load data:
Firebase: Google Analytics App & Web can automatically export to a BigQuery dataset. So we decided to run our regular Universal Analytics integration of Google Analytics in sync with App & Web for now, which gives us the ability to cheaply export Google Analytics data.
BigQuery Transfer Service: Google Cloud offers some great integrations for YouTube and Google Ads that can be set up within minutes, which makes it possible to instantly backfill and load data.
StitchData: We decided to go with Stitch as their pricing was more flexible and cheaper for our needs. There are many alternatives, but they had integrations with most of the vendors that we needed. Since we didn’t require Enterprise-level SLAs (yet), we knew that we could rely on their standard integrations for the many vendors that we work with.
SuperMetrics: Some vendors (Facebook Insights) weren’t available in StitchData at the time, so we decided to get a license for SuperMetrics’ BigQuery export.
‘Manual Import’: As a last resort we decided to build connections to BigQuery ourselves. So far we only had to do that for two vendors, which made the workload lower than expected.
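To give a sense of what such a ‘manual import’ looks like, here is a minimal sketch using the google-cloud-bigquery Python client. The vendor endpoint, project, dataset, and table names are made up for illustration; the real connections obviously handle authentication, retries, and schemas more carefully.

```python
# Minimal sketch of a 'manual import': pull rows from a vendor API and load
# them into BigQuery. Vendor endpoint, dataset, and table names are hypothetical.
import requests
from google.cloud import bigquery

def load_vendor_report(date: str) -> None:
    # Hypothetical vendor endpoint returning a list of JSON rows for one day.
    rows = requests.get(
        "https://api.example-vendor.com/v1/report",
        params={"date": date},
        timeout=60,
    ).json()["data"]

    client = bigquery.Client()
    table_id = "our-project.prod_raw_tables.example_vendor_daily_report"

    # Append the day's rows; let BigQuery detect the schema from the JSON keys.
    job_config = bigquery.LoadJobConfig(
        write_disposition="WRITE_APPEND",
        autodetect=True,
    )
    client.load_table_from_json(rows, table_id, job_config=job_config).result()

load_vendor_report("2020-01-31")
```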
Understanding platform APIs and Data Granularity
There were over a dozen vendors that we wanted to import data from, in the different ways described above, into our data lake: Google BigQuery. We needed to ensure upfront that the data and platforms we would be using were up for the task, as a few important areas could have caused problems. This was needed to build a good understanding of the platforms and of how we could leverage them. What we looked at:
Data Freshness & Backfill capabilities: How often would the data refresh, and how often would we require a refresh? For some vendors a 24-hour sync was more than enough, but for others a full sync would take many hours, basically making yesterday’s data useless because it arrived too late. On the other side, we needed to make sure that we could backfill data from vendors, and with most of the load options we chose that was possible (except for manual imports, where we needed to write scripts that could go back in time to backfill the data; see the sketch after this list).
Cutting collection time: Do we need all the data from an API or only partial responses? And in combination with the previous point, do we need to load data hourly (realtime wasn’t much of a use case) or could we just gather the data every day to save time collecting it?
Hard to see everything end to end upfront: Google Ads exports dozens of tables through its Transfer Service into BigQuery, which meant that it took a while to figure out which tables to use and how certain columns mapped to API fields & tables.
Using data to find gaps and roadblocks: What worked well for us is that we tried visualizing the data as soon as possible, so we could quickly find anomalies (days of data missing due to a broken backfill process) or cases where the data didn’t match the interface because metric X turned out to be calculated differently from what we thought.
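Since backfilling came up for almost every manually imported vendor, here is a minimal sketch, assuming a per-day loader like the one sketched above, of what the ‘go back in time’ scripts boil down to; the date range is just an example.

```python
# Minimal backfill sketch: replay a vendor load day by day for a date range.
# load_vendor_report() stands in for whatever per-day load you already have
# (an API pull, a file import, etc.).
from datetime import date, timedelta

def backfill(start: date, end: date) -> None:
    day = start
    while day <= end:
        load_vendor_report(day.isoformat())  # hypothetical per-day loader
        day += timedelta(days=1)

backfill(date(2019, 1, 1), date(2019, 12, 31))
```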
What does the data lake cost?
BigQuery Storage: The lowest cost of all of this; while we store dozens of GBs daily, it’s ridiculously cheap to store all of it long-term.
Stitch: Pricing can be found here. This is really dependent on how much data you’re exporting and from how many sources as you’re billed based on usage.
SuperMetrics: Pricing can be found here. Basically $190 per vendor monthly.
BigQuery Transfer Service: This has been free for a couple of months now for the Google Ads and YouTube exports. Previously it was $2.50 – one less coffee at Starbucks monthly.
BigQuery BI Engine Reservation: While it’s not strictly part of the data lake, I want to mention it. We’re paying $60 monthly for a 2GB reservation. Currently, we’ve optimized queries as much as possible to stay under this limit; if needed we might move up to a larger reservation. This has been a game-changer in making queries for dashboarding lightning fast.
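To keep an eye on whether dashboard queries stay small enough, a BigQuery dry run is a cheap, rough proxy: it reports how many bytes a query would scan without actually running it. A minimal sketch, with an illustrative query and table name:

```python
# Sketch: dry-run a dashboard query to see how many bytes it would scan,
# without actually running it. The query and table name are illustrative.
from google.cloud import bigquery

client = bigquery.Client()
job = client.query(
    "SELECT date, channel, SUM(sessions) AS sessions "
    "FROM `our-project.prod_dashboard_tables.sessions_by_channel` "
    "GROUP BY date, channel",
    job_config=bigquery.QueryJobConfig(dry_run=True, use_query_cache=False),
)
print(f"Query would process {job.total_bytes_processed / 1e9:.2f} GB")
```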
Cutting time to prove the value
We knew many of the use cases that we wanted to solve with our data lake, for example: showing Google Analytics metrics next to the open and unsubscribe rates of our marketing newsletters (Marketo). This meant that we knew what to build and could quickly take a sample of the data and build out the use case to see if the data could be matched (would we run into tracking issues somewhere? we sometimes did) and what possibilities we had for visualization. In short, we tried to cut as much time out of the initial process as possible, by not focusing on scalability yet, to figure out if the use cases we had in mind could actually provide value for the teams. Basically, thinking about the end goals that we had in mind.
We did that by:
Talking to end stakeholders – interview & document: The goal of the project was to, at a minimum, provide every channel owner (PPC, SEO, Social, etc.) with dashboards showing their core metrics. That was the baseline we committed to provide; on top of that, we briefly discussed with them all the other use cases they knew about and wanted to repeat. By doing this we immediately had documentation on how the process currently worked for them, what data sources needed to be included, and what needed to be built. More importantly, it also created their buy-in, as they were able to pick and choose what we built for them.
Taking a sample and building a use case: Load the data from the vendor into the data lake and investigate what the schemas looked like and whether the granularity of the data was enough. As long as there was a key in multiple tables that we could use to join vendors, we knew we were on the right track (example: UTM tags in emails needed to match Google Analytics conventions; see the sketch after this list).
Automate & Backfill later: We didn’t focus on making sure that this could scale over the next few years or on backfilling our data for multiple months at a time right away. We knew the process was iterative and that we would come up with new use cases, which would potentially mean backfilling many times for the same vendors.
Backfill Manually: Depending on the vendor and the type of automation we could put in place, we backfilled some of the data manually where needed. Sometimes manually connecting some data points in a database makes more sense when you’re trying to match vendors than overcomplicating things in a codebase. We figured this out quickly when we had 50 email templates that we needed to map to other data sources by hand.
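As an illustration of what ‘a key we could leverage between vendors’ means in practice, here is a sketch of joining email campaign stats to Google Analytics data on UTM values; all dataset, table, and column names are made up for this example.

```python
# Sketch: join email campaign stats with Google Analytics sessions on the UTM
# campaign key. Dataset, table, and column names are illustrative only.
from google.cloud import bigquery

QUERY = """
SELECT
  email.campaign_name,
  email.sends,
  email.opens,
  ga.sessions,
  ga.transactions
FROM `our-project.prod_formatted_tables.marketo_campaign_stats` AS email
JOIN `our-project.prod_formatted_tables.ga_sessions_by_campaign` AS ga
  ON LOWER(email.utm_campaign) = LOWER(ga.utm_campaign)
 AND email.send_date = ga.date
"""

for row in bigquery.Client().query(QUERY).result():
    print(row.campaign_name, row.opens, row.sessions)
```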
The (unexpected) problems that we ran into
Certain things always go differently from the expectations you set upfront; that’s why I wanted to touch on some of the ‘problems’ that we ran into and how we dealt with them.
Inaccurate data: Dashboard versus Dump
Exports are always different from the data you see in the dashboards, mainly because the information is stored differently from how it’s visualized. For example, cost data from Google Ads is stored in micros (millionths of the account currency), which means you always have to divide it by 1,000,000 to get the correct number. CTRs from multiple vendors are just a calculation of Clicks / Impressions. And in some cases, specific data points aren’t available for export through an API at all (looking at you, Facebook).
Overall this means that you have to carefully assess upfront what data schema is available in the export and whether it matches the use cases that you want to fulfill. If that’s not the case, it might not be worth trying to automate it.
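As a small example of that dashboard-versus-dump translation, here is a sketch of turning raw Google Ads export values into the numbers you’d expect to see in the interface; the table and column names are illustrative, since the actual transfer-service schema differs per setup.

```python
# Sketch: turn raw Google Ads export values into reporting-friendly numbers.
# Cost comes in micros (millionths of the account currency) and CTR is simply
# clicks / impressions. Table and column names are illustrative.
from google.cloud import bigquery

QUERY = """
SELECT
  date,
  campaign_name,
  SUM(cost_micros) / 1e6                     AS cost,
  SAFE_DIVIDE(SUM(clicks), SUM(impressions)) AS ctr
FROM `our-project.prod_raw_tables.google_ads_campaign_stats`
GROUP BY date, campaign_name
"""

for row in bigquery.Client().query(QUERY).result():
    print(row.date, row.campaign_name, round(row.cost, 2), row.ctr)
```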
Duplication as a problem in Stitch
StitchData has a different way of exporting data into BigQuery than we were used to, which meant we had to get used to deduplicating data before using it for analysis. In short: they append a new row to a table whenever they notice a unique row has changed, which means we always need to find the latest value for each record. Not a massive problem in itself, but it took us a few days to figure out because the documentation around this was lacking at the time.
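For anyone running into the same thing: the pattern we ended up with boils down to a window function that keeps only the newest version of each record, using the replication metadata columns Stitch appends (such as _sdc_sequence). A sketch, with an illustrative table name and key:

```python
# Sketch: deduplicate a Stitch-loaded table by keeping only the most recent
# version of each record. The table name and the `id` key are illustrative.
from google.cloud import bigquery

QUERY = """
SELECT * EXCEPT (row_num)
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (
      PARTITION BY id                 -- the table's unique key
      ORDER BY _sdc_sequence DESC     -- latest replicated version first
    ) AS row_num
  FROM `our-project.prod_raw_tables.stitch_facebook_ads_insights`
)
WHERE row_num = 1
"""

deduped_rows = bigquery.Client().query(QUERY).result()
```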
Data availability differs per vendor
How much useful data do you have available per vendor? Some vendors are very structured: Google Ads has its own transfer service into Google BigQuery, so it was easy to backfill the data there. But for other vendors (Marketo, SendGrid) it was harder, as we had made so many changes over time while professionalizing our setup. This meant that for some vendors we were able to backfill data back into 2018, whereas for others we could barely get data from the last six months. Knowing this upfront is essential, as it might make certain use cases irrelevant if you’re trying to learn from YoY behavior and it turns out the data isn’t there.
Exceptions: for some sources (read: Marketo) we had to rely on other solutions. There are limitations to some of the data that we can get out of their (old school) API. So for some of it, we’re getting daily email reports that are parsed and sometimes ‘manually’ ingested into Google BigQuery. These are the kinds of workarounds we had to accept, as we want the data to be accessible through our data lake and some vendors just aren’t as accessible.
Naming conventions
While we had an early discussion about the naming conventions we wanted to follow, this turned out to be more food for thought than expected. We quickly had tons of datasets in BigQuery that each followed their own conventions (defined by the data load providers). As we were applying ETL to some of the tables, we needed to make sure it was clear what every table and field in the schema meant. For example, there were tables that were used only temporarily, for backfill, for production, for development, for transformation, etc. What made it more complicated is that most of them were also individualized by vendor. In our case, we settled on making the distinction between ‘prod/dev’ + ‘raw/formatted/dashboard’ + ‘tables’ for datasets and then specifying the specific vendor name and goal in the table names within the dataset.
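As a rough illustration of how that convention can play out (one possible reading of it, with made-up names): dataset names compose from environment and stage, while vendor and goal live in the table names.

```python
# Sketch: how the naming convention composes BigQuery dataset names.
# 'env' + 'stage' + 'tables' for datasets; vendor and goal go in table names.
ENVS = ["prod", "dev"]
STAGES = ["raw", "formatted", "dashboard"]

def dataset_name(env: str, stage: str) -> str:
    return f"{env}_{stage}_tables"

# e.g. prod_raw_tables.google_ads_campaign_stats,
#      prod_formatted_tables.marketo_campaign_stats,
#      prod_dashboard_tables.sessions_by_channel
print([dataset_name(e, s) for e in ENVS for s in STAGES])
```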
This blog post is part of a series of three, in which we’ll dive into the details of why we wanted to create a data warehouse, how we created the data lake, and how we used the data lake to create a data warehouse. It is written with the help of @RickDronkers and @hu_me / MarketLytics, whom we’ve worked alongside during this (ongoing) project.
Six months ago, we were looking at Google Sheets, Google Analytics, and many other tools/vendors daily, and had no marketing data living in a data warehouse (nor a lake). We wanted to get better at using (marketing) data: mostly to find a better way to connect data from multiple sources and turn it into insights, but also to make more data available to the rest of the internal teams (think: PPC, SEO, Email, Product).
Why a marketing data warehouse?
Prove business value: Everything we do as Marketing needs to prove some kind of business value. We’re not doing things just to keep ourselves busy. So we wanted to make sure upfront that the business cases that we had in mind would have an upside by creating this setup. Some of these were: detailed insights into social media performance, attribution level data, and a way for us to connect transactions between data sources.
Access to ‘our’ raw data: As is often the case with marketing data, you’re using the interface of the actual marketing vendor to pull your daily numbers. But since so much of the marketing data overlaps with internal data from other channels, we wanted access to all the raw data that is ‘ours’ (to begin with) and the ability to combine it. This, for example, would give us insights into the lifetime value of customers by channel or the ability to export cohorts of users.
The analysis doesn’t scale in a spreadsheet and isn’t repeatable: Besides not wanting to create reporting monkeys, we didn’t want our teams to spend time pulling data from different sources and manually combining it. The risk of mistakes is too high, the process doesn’t scale, and valuable team member time would be lost along the way.
More control of data quality: We want to control our data pipelines and be involved in the ETL (Extract, Transform, Load) process so we know how the data is shaped. There shouldn’t be a black box of data being created in the background. With this process in place, we can better control the quality and usefulness of the data (and keep improving it over time).
More flexibility for visualization and ‘deep analysis’: With access to our raw data in a warehouse, we knew we would have to rely less on building analyses directly on top of vendors’ APIs or worry about how to visualize the data (on dashboards).
Looking at the complete picture
Marketing data is often confined within the platforms and tools used to distribute the marketing message. But your prospects don’t confine themselves to just one source or medium to interact with your company (the path length is usually longer than one session before a transaction takes place). To be genuinely able to measure the effectiveness of your marketing activities, extracting the data from these platforms and having the ability to analyze them is essential. This goes back to our first point on Why. It needs to solve cases that provide business value.
Beginning with the end in mind
One of the biggest challenges in working with data is that often you don’t actually know precisely what data you need to solve for your use case. Usually, you tend to go through the cycle of “capture – store – enrich – analyze” a couple of times before arriving at really valuable insights.
By creating your own data warehouse, you create the flexibility to transform and relate the data to each other in ways that aren’t possible when your data is locked within separate platforms.
Build versus Buy?
While it was clear that we wanted to own as much of the data as possible ourselves, an obvious next discussion was how to gain access to the actual data. Would we build the pipelines ourselves? Would we buy tools to do so, and how much of the transformation of data would fall on the team? What about maintenance?
Pros of Buying a solution
Plug and Play: A strong pro for buying software is usually a decreased time to value because you’re skipping the building phase. You avoid a steep learning curve and don’t have to deal with the hardware/software setup. A requirement for us was that we didn’t want a long process of setting up resources.
Scalability: As we grow, our datasets will grow. While paying extra for additional volume isn’t a problem, we wanted to make sure that the infrastructure would hold up once we reach a certain scale.
Cons of Buying & Building
High Costs: Buying tools would likely have higher direct costs over the long term, while the indirect costs of building the solutions ourselves (increased headcount) would obviously also be expected to be high.
Vendor Buy-in: Regardless of buy or build, being tied into a vendor’s platform is a strong con, so we wanted to make sure that certain aspects of the setup could be moved over to another service if we deem that to be the right choice at some point in time.
Standard data formatting: With some vendors, it was harder to change the data schema (often for good reason) or to pull additional data that we wanted to collect from, for example, Facebook (the data was available in the API, just not in the vendor’s export).
Time to first use: Building pipelines for extracting the data from certain platforms could take a long time, but the real business value will, in most cases, only show up when you’re actively working with the data. Losing months to extracting the data (also taking into account our seasonality, which provides a ton of data) would delay this. So going with platforms that could directly provide us with extracted data was a great pro for buying and a strong con for building it ourselves.
Maintenance: We didn’t want to maintain extraction pipelines from the marketing platforms that we use. While it’s a really important part of our data strategy, we wanted to avoid spending a significant amount of our time staying up to date and maintaining pipelines. With a buy solution, we rely on the vendor’s resources to deal with this part of the data flows, which is very much what we prefer at this point, mainly considering that our data team’s resources are minimal.
In the end, we decided to go with a combination of the two strategies. We decided to let most of the Extract processes be handled by software that we would buy or that was already available open source, as this is a task that we wanted to make as cost-effective as possible. Most of the Transform will be done through a set of tools that we build and maintain ourselves, as there are too many custom use cases and needs. That also means we mostly needed to ‘Load’ the data back into our data warehouse ourselves. More on the details of that in our upcoming blog posts.
An often-asked question in this process is what kind of costs we were expecting. While we won’t disclose the costs of the additional help we received, we’ll quickly touch on the costs for this whole setup so far. Parts of this are variable based on the size of your data, but so far our costs are:
Google Cloud Platform: ~$400-700 monthly.
This includes costs for Google BigQuery, Cloud Composer and other tools. We expect this to increase at some point, but it will go up gradually over time.
Note: In another blog post we might dive deeper into Google Cloud Composer and how we leverage it for Airflow (processing data pipelines); a minimal sketch of such a pipeline follows this cost overview. Right now, Cloud Composer represents at least half of our Google Cloud Platform costs. If you decide not to use it, your GCP costs will be significantly lower.
Stitch Data: $350-500 monthly, based on volume for now. We expect this to increase at some point, but it will go up gradually over time.
SuperMetrics: $190 monthly per license per vendor (depending on the number of licenses you might need).
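To make the Cloud Composer mention above a bit more concrete, here is a minimal sketch of the kind of Airflow DAG that could rebuild a reporting table every morning. Dataset/table names and the schedule are illustrative, and the operator import path depends on your Airflow version.

```python
# Minimal sketch of an Airflow DAG (as run on Cloud Composer) that rebuilds a
# reporting table daily. Names and schedule are illustrative; the import path
# below matches Airflow 1.10-style contrib operators.
from datetime import datetime
from airflow import DAG
from airflow.contrib.operators.bigquery_operator import BigQueryOperator

with DAG(
    dag_id="rebuild_marketing_reporting",
    start_date=datetime(2020, 1, 1),
    schedule_interval="0 6 * * *",  # daily at 06:00
    catchup=False,
) as dag:
    rebuild_sessions_by_channel = BigQueryOperator(
        task_id="rebuild_sessions_by_channel",
        sql="SELECT date, channel, SUM(sessions) AS sessions "
            "FROM `our-project.prod_formatted_tables.ga_sessions` "
            "GROUP BY date, channel",
        destination_dataset_table="our-project.prod_dashboard_tables.sessions_by_channel",
        write_disposition="WRITE_TRUNCATE",
        use_legacy_sql=False,
    )
```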
Strategy going forward
With some of these decisions made upfront, we decided to move forward with collecting the data channel by channel, starting with the most important channels. This way, instead of needing to collect all the data for every channel, we were able to directly start building dashboards on top of the data that would be useful for teams. The steps to take to get there were as follows, and we’ll dive into some of them in detail in upcoming blog posts:
Extract all the data using Google BigQuery Transfer Service, Supermetrics and StitchData.
Validate that the data in Google BigQuery is correct by comparing it to the regular data sets, existing dashboards, and the data that we would look at in the vendor’s platform (a minimal validation sketch follows this list).
Transform the data so data sources can be combined, and transform it further to be directly useful for reporting purposes.
Load this new data into new reporting tables, backfill historical data where possible, and sync it back into Google BigQuery.
Depending on the channel, build dashboarding in Google DataStudio or Tableau.
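As referenced in the validation step above, a simple way to sanity-check a load is to aggregate the raw table per day and compare the totals against the vendor’s own interface; missing days or mismatched totals show up quickly. A sketch with illustrative names:

```python
# Sketch of a simple validation step: sum up a metric per day in the raw
# BigQuery table so the totals can be compared against the numbers shown in
# the vendor's own interface. Table and column names are illustrative.
from google.cloud import bigquery

QUERY = """
SELECT
  date,
  COUNT(*)    AS rows_loaded,
  SUM(clicks) AS clicks
FROM `our-project.prod_raw_tables.google_ads_campaign_stats`
WHERE date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
GROUP BY date
ORDER BY date
"""

for row in bigquery.Client().query(QUERY).result():
    # Missing days or totals that don't match the Google Ads UI stand out here.
    print(row.date, row.rows_loaded, row.clicks)
```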
In the next blog post, we’ll go into depth on how we created a marketing data lake, where all our raw marketing data from our vendors (Google Search Console, Google Ads, Bing Ads, etc.) lives. Expect that to go live within the next two weeks. Until then, feel free to share your thoughts on this blog post on Twitter. Again… this blog post was written with the help of @RickDronkers and @hu_me / MarketLytics, whom we’ve worked alongside during this (ongoing) project.
For the last four years, I’ve written blog posts (2019, 2018, 2017 & 2016) listing the books that I read in the past year and the ones I wanted to read during the year ahead. As always, I didn’t get to all the books that I listed in last year’s post, as I discovered some new ones and shifted my focus on others. But I did read a lot, as I finished ~25 books (and put two books aside that weren’t worth finishing).
What books I didn’t get to in 2019 and have re-added to the list for 2020:
The Intelligent Investor: Wanting to read more about investing, this is apparently a good book to start with.
My favorites from 2019:
High Growth Handbook: My favorite book from last year, as it touches exactly on the topics that I care about on a daily basis: great interviews with practitioners on many topics related to high-growth companies, including a lot of tactical advice.
The Ride of a Lifetime: A fascinating read on how Robert Iger made his way to the top of Disney as its CEO. One of my favorites, as it shows the personal growth he had to go through to reach the top, and along the way it provides some insight into how big a company Disney is.
Loonshots: A different book from what I would usually read as it went into detail on projects that have changed the world and created innovations that we still benefit from these days (think radar, energy).
Secrets of Sand Hill Road: A great book if you want to know more about investing, money and the workings around startups related to investors, deal flows and structures.
What books I’d like to be reading in 2020:
The Great CEO Within: Started this book on the 1st of January; it aligns a lot with the High Growth Handbook but approaches things more from a CEO’s point of view, with tons of tactical advice. Given the lack of books about other C-level positions, it will hopefully provide enough useful input for other exec roles as well.
“Tell me, who/what do I need to hire to run our SEO program? What is the first hire for a new SEO team?” These are questions I often get, usually followed by: “Do you know anybody for our team?”. With so many companies around the Bay Area hiring, the questions make sense, and it also makes hiring harder. I’ve previously blogged about writing a better job description for SEO roles, but I also wanted to shed some light on what I’d suggest as good setups for an SEO team and what roles + seniority to hire for depending on your company structure.
Why do you need SEO support?
Most startup founders or early employees don’t have an extensive background in Marketing or specifically in SEO (and they shouldn’t). Most of the time, they have been too busy building the company, getting to product-market fit, and iterating on their product/service. In a lot of growing B2C companies, you need to establish plans for long-term growth. That’s what SEO can usually bring to these companies: a sustainable long-term growth strategy. But in order to get there, you’ll need to bring in extra help to make sure that it actually is sustainable, instead of employing short-term SEO tactics that might put your growth at risk if you approach them wrong (as many startups do).
Why create an SEO Team?
So why do you need to create an SEO team? For many of us this is common knowledge, as we’re in this on a daily basis. But let’s say you’re getting started; these could be some of the objectives:
Dedicated focus on SEO; there are too many other channels to take care of.
Need to grow a long-term channel to success.
Too many tasks; the channel needs its own dedicated, specialized IC/team.
Build out more brand awareness for the company (SEO is a great way of doing this long term).
Grow revenue & transactions at a low Customer Acquisition Cost.
Hiring an SEO Consultant versus an In-house SEO
Some teams can’t always hire talent right away (think about the Bay Area where basically all the bigger companies constantly have a need to hire SEO talent) or it might take too long to ramp up SEO. In some other cases, it made more sense for the company to hire a consultant in the short term to take care of some issues and figure out what’s actually needed instead of just hiring somebody with potentially the wrong skill set for the long term.
My take on this is usually that if you already know what you want your SEO team to work on and are able to wait another 2-3 months, you’re better off hiring somebody in-house (if resources are available). In other cases (you have a short-term need, or you need a technical SEO but want to hire a content person, etc.), you’re likely better off starting with a consultant specialized in that area to make sure your issues are covered.
Provide them with resources: when I joined Postmates, one of the things I wanted to make sure of was that they provided me not just with the resources to set up some tools, but also with engineers available to run the actual implementations and a designer to create the new pages that we needed.
Engineering: It’s important because, as the SEO, you can’t make all the changes yourself; you’ll need the team to make the actual changes. Most SEOs that I meet don’t have enough knowledge of their infrastructure to push code themselves or design something that complies with brand guidelines.
Design: You need to add additional content and more blog posts, but they can’t just be text. They need visual add-ons, so you need design support.
Content: In a bigger company there will be a huge need for content (either new content or edits to existing content).
How have you been growing SEO teams, what is missing, what should SEO teams really focus on? Leave a comment!
If there was one thing that I could teach people in SEO, it was always the technical side of SEO that came up first. Mostly because I think it’s a skill that not too many SEOs possess, and there is already enough (good or bad, you’ll be the judge of that) content about the international, link building, or content side of SEO out there. As technical SEO is getting more and more technical and in-depth, I’m excited to announce that I’m launching a new technical SEO course with the folks at CXL Institute.
The course will cover everything from structured data to XML sitemaps and back to some more basic on-page optimization. Along the way, I’ll show you my process for auditing a site and coming up with improvements, and I’ll try to teach you about as many different issues and solutions as I can think of.
It’s not going to be ‘the most complete’ course ever on this topic; technical SEO evolves quickly, and some things will likely already be outdated by the time it’s published, even though we worked on it for months. But I’m going to do my best to keep you informed here and on CXL Institute about any changes or improvements that we might be able to make in a future version. If you have any questions about the course or want to cheer me on, reach out via Twitter at @MartijnSch.