
Establishing an Experimentation Culture

Your website is sitting pretty, making money, and everything is working just fine, until your analyst knocks on your door to tell you that a significant number of users who arrive on your product description pages (PDPs) are dropping off without adding an item to the shopping cart. “Not to worry”, you think, “we can run an experiment here to figure out how to rectify this”. So, you follow your business processes and launch an experiment where half of your traffic is exposed to the current version of the PDP and the other half to an alternative variant.

The test runs for an entire month and, lo and behold, your add-to-cart rates are up from 10% to 20% and everyone celebrates. After all, you have discovered what the problem was, fixed it, and are now in the process of redesigning your PDPs so that they exhibit the new design that generated the uplift seen in the test.

The new design is live and life is good, until your analyst knocks on your door once again. “Add-to-cart rates are down”. Why? Seasonality? The economic climate? World events? You decide to sit tight and wait it out; after all, your test was conclusive, wasn’t it?

As it turns out, the test wasn’t run for long enough to reach statistical significance. You look back at the numbers and come to the realisation that a month simply wasn’t long enough for this experiment, and as such you have now commissioned the redesign of your PDPs using incomplete data. In other words, you’ve scored an own goal.

Careful evaluation of your test sample size is crucial to attaining accurate results. Sure, the testing tool was telling you the truth when it said that the challenger was beating the control variant. However, if not enough visitors are entering the test and the variants aren’t getting the exposure needed to reach statistical significance, chances are the results are a fluke or just a random blip.
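
To make this concrete, here is a minimal sketch (using hypothetical visitor and conversion figures, not figures from any real test) of a two-proportion z-test. It shows how an apparently large uplift can still be statistically inconclusive when the sample is small.

```python
# A minimal sketch (hypothetical figures): checking whether an observed uplift
# could plausibly be noise, using a two-proportion z-test.
from math import sqrt

from scipy.stats import norm


def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - norm.cdf(abs(z)))


# With only 80 visitors per variant, even a 10% -> 20% jump is inconclusive:
print(two_proportion_z_test(conv_a=8, n_a=80, conv_b=16, n_b=80))  # ~0.08, not significant at the 5% level
```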

One of the most common failures of CRO is that practitioners don’t understand the numbers, and despite having a degree in Engineering, I confess that mathematics isn’t my strong point either! Statistics are among the most important considerations in CRO. If you don’t understand the data properly, your experiments may end up damaging your business rather than helping it. Always assess the sample size required to attain a truly conclusive result, prioritise your testing roadmap according to how long each test might take to reach statistical significance, and don’t declare a winner until your test has achieved this measure.
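
That required sample size can be estimated before you launch anything. The sketch below (with assumed baseline, uplift, significance and power figures chosen purely for illustration) uses the standard two-proportion power calculation to show how many visitors each variant needs before a winner can credibly be declared.

```python
# A minimal sketch (assumed baseline rate, uplift, alpha and power): estimating
# how many visitors each variant needs before a result can be trusted.
from math import ceil, sqrt

from scipy.stats import norm


def sample_size_per_variant(baseline, relative_uplift, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect a relative uplift on a baseline
    conversion rate with a two-sided test at the given alpha and power."""
    p1 = baseline
    p2 = baseline * (1 + relative_uplift)
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)


# Detecting a 10% relative uplift on a 10% add-to-cart rate takes far more
# traffic than most people expect:
print(sample_size_per_variant(baseline=0.10, relative_uplift=0.10))  # ~14,750 visitors per variant
```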

Another common mistake is to rely on tactical methods, i.e. tests that have been successful elsewhere, and assume that applying them to your website will yield the same results. Before even thinking about tests or solutions, you should always conduct your own research to determine where the friction is in the journey, define a problem statement, and only then progress to the hypothesis-gathering phase.

Last but not least, failed tests. The first thing to say about this is that there is no such thing as a failed test. This is negative language that implies the test itself was not a valuable exercise, which is utter nonsense. Every single test (provided it has KPIs, a problem statement, a hypothesis and a statistically significant result) is valuable. You learn something every time you run an experiment.

For example, I often ask people what conclusion they would draw from an experiment comparing long and short copy on a given page if there were no winning variant. Many ponder and say things like “well, you should segment the results before you can learn anything” or “nothing, that test was rubbish” or “some prefer long copy and some prefer short”…the list goes on. But the correct answer relates to what you can now do with that result. In essence, you can reduce your workload by writing only short copy, which saves you time, money and effort. So, whilst the test didn’t produce a clear winner, it was still a worthwhile endeavour.

The same goes for tests where the control variant wins against any of the challengers. You’ve now saved yourself a whole heap of money and reduced your exposure massively by not running headlong towards a new design that wouldn’t have worked!

Whilst there are a few important details to consider when testing and many ways of looking at your results, don’t let that put you off. Get out there, try things and see what happens. The positives far outweigh the negatives. If you’re passionate about your business, your role and your proposition you’ll quickly realise that testing is fun, rewarding, exciting and incredibly interesting.
