Over the last few years, I’ve had many (healthy) debates with product, brand, and growth teams about which experiments to run and why. In some cases it was easy to treat a brand or product change like any other experiment to improve conversion rate. In others, there was a fear that the brand or product change might hurt conversion rate. The questions that usually came up: what is an acceptable risk? And for product improvements: is this change going to impact anything at all? Do we decide that speed of testing matters more than the actual learnings? That’s what I want to cover in this blog post.
Experiments: Isolation & Exploration
So when do you run experiments to explore whether a combination of features works, and when do you focus on one specific feature to isolate what really makes your audience tick? I think the majority of the discussion around testing for conversion rate optimization comes down to that question.
Combining Product Changes
A discussion I’ve had with multiple teams over the years: what if we add features X, Y, and Z to the page? In some cases it’s easy enough to add all of these features to the same page and run individual experiments, or the change is a clear product improvement and it would be good to have the feature anyway. In most cases, though, I would recommend running one experiment and stacking all these changes together.
Exploration: You explore whether the combined product changes have an impact on users and their behavior. The upside is that by rolling everything out together you quickly learn whether the changes have any effect at all. The downside, and the main argument against this approach, is that you haven’t learned much beyond whether the overall impact was positive or negative; it’s hard to attribute the result to any of the individual changes you implemented.
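To make the exploration setup concrete, here is a minimal sketch of how you might analyze a stacked experiment: one control against one variant that bundles all the changes. It assumes Python with statsmodels, and the visitor and conversion numbers are made up for illustration.

```python
# A minimal sketch of analyzing a stacked ("exploration") experiment:
# one control vs. one variant that bundles all product changes.
# The conversion numbers below are made up for illustration.
from statsmodels.stats.proportion import proportions_ztest

conversions = [480, 430]      # variant (all changes stacked), control
visitors    = [10_000, 10_000]

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# A significant result tells you the bundle works (or hurts),
# but not which of the stacked changes is responsible.
```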
Isolation: In this case I really want to know what the impact of a specific CTA, image, or piece of text is, and whether changing it shifts user behavior. If it does, great! You’ve really learned something you can leverage again in future tests. With multiple changes, you would run multiple experiments to test the effect of each change, and of the changes on each other. The big downside is that it will take longer to get the results you want, because the available traffic is spread across more experiments and variants.
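One common way to test changes in isolation while still measuring how they affect each other is a factorial layout. The sketch below, again using statsmodels with simulated data and assumed effect sizes, fits a logistic regression over a 2x2 split of two hypothetical changes (a new CTA and a new image) so each main effect and their interaction get their own estimate.

```python
# A minimal sketch of an "isolation" analysis: a 2x2 factorial test of two
# changes, estimating each main effect and their interaction with a logistic
# regression. Data and effect sizes below are simulated, not real results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 20_000                      # visitors, split evenly over the 4 cells

cta   = rng.integers(0, 2, n)   # 0 = old CTA,   1 = new CTA
image = rng.integers(0, 2, n)   # 0 = old image, 1 = new image

# Assumed true effects: CTA +0.5pp, image +0.3pp, small positive interaction.
p = 0.040 + 0.005 * cta + 0.003 * image + 0.002 * cta * image
converted = rng.binomial(1, p)

df = pd.DataFrame({"cta": cta, "image": image, "converted": converted})
model = smf.logit("converted ~ cta * image", data=df).fit(disp=False)
print(model.summary())

# The cta and image coefficients isolate each change's effect; the cta:image
# term tells you whether the changes reinforce or cancel each other out.
```

Note the trade-off this makes visible: traffic is now split over four cells instead of two, and effects of a few tenths of a percentage point may well not reach significance at this sample size, which is exactly why isolation takes longer.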
What other methods have you seen, and what could be improved? How do you test faster when you do not always have enough traffic for multiple experiments?
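As a rough illustration of why traffic is the real constraint here, this back-of-the-envelope calculation (statsmodels again, with assumed baseline, lift, and traffic numbers) estimates how long a single A/B test would need to run; every additional isolated experiment or variant divides that traffic further.

```python
# A rough back-of-the-envelope for the traffic question above: how long does
# one A/B test need to run for a given baseline conversion rate and minimum
# detectable effect? All numbers are assumptions, not recommendations.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline       = 0.04     # 4% baseline conversion rate
mde            = 0.004    # detect a +0.4pp (10% relative) lift
daily_visitors = 2_000    # traffic available for this experiment

effect = proportion_effectsize(baseline + mde, baseline)
n_per_variant = NormalIndPower().solve_power(effect, power=0.8, alpha=0.05)

days = 2 * n_per_variant / daily_visitors
print(f"{n_per_variant:,.0f} visitors per variant, roughly {days:.0f} days for one A/B test")

# Splitting the same traffic across several isolated experiments (or more
# variants) multiplies the duration, which is why stacking changes is usually
# the faster option when traffic is limited.
```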