GROWTH MARKETING MINIDEGREE / CXL — REVIEW 4

Mert Kolay
5 min read · Nov 15, 2021

In my previous articles about my growth marketing studies with the CXL Institute, we covered the growth marketing foundations and the research that comes before conducting experiments. Today, we are going to discuss how to run those experiments, with the fourth part of the CXL Growth Marketing Minidegree.

You can find the Minidegree on the CXL website.

In digital marketing, a common method of running experiments is A/B testing. It consists of running two variants of a page or flow in parallel and seeing which one performs better.
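
To make the mechanics concrete, here is a minimal sketch of one common way visitors get assigned to variants: deterministic, hash-based bucketing, so a returning user always sees the same version. The experiment name and user ID are hypothetical, and this is a generic pattern, not any specific tool's implementation:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a user into 'A' or 'B'.

    Hashing (experiment name + user ID) keeps the assignment stable
    across visits and independent across experiments.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex characters to a float in [0, 1].
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "A" if bucket < split else "B"

print(assign_variant("user-42", "checkout-cta"))  # same output on every call
```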

Planning

In what cases do you want to run an A/B test?

  • When you deploy a change, you want to learn whether it has a negative impact on your key metrics.
  • For conversion signal map research: the idea is to remove an element and see if that has a negative impact. If not, the element is useless and you simply don’t need it.
  • On the other hand, you can add an element (such as social proof) and see if it has a positive impact. If so, implement it. You can also test this on a specific segment of users.
  • For optimization purposes, apply the change client-side first. If it is a win, you want to deploy it immediately.

Do you have enough data to run an A/B test?

  • If you have fewer than 1,000 conversions (transactions, clicks, leads…) per month, do not A/B test: statistical power would be too low. At this stage, focus purely on growing; optimization will come later. Above 1,000 conversions per month, you can start A/B testing.
  • If you have more than 10,000 conversions per month, you can run around four A/B tests per week. 10,000 conversions is the “DNA border”: if you are below it, take more risks, grow your company past 10,000 conversions, and then build a real, proper structure. Once you reach that mark, you will need dedicated optimization teams, and A/B testing will become part of the DNA of your company.
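
To put the “enough data” rule in perspective, here is a minimal sketch of the standard two-proportion sample-size calculation. The 3% baseline and 20% relative lift are hypothetical numbers, not from the course:

```python
from statistics import NormalDist

def sample_size_per_variant(p_base: float, p_target: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users needed per variant to detect a lift from p_base
    to p_target with a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    n = (z_alpha + z_beta) ** 2 * variance / (p_base - p_target) ** 2
    return int(n) + 1

# Detecting a lift from a 3.0% to a 3.6% conversion rate:
print(sample_size_per_variant(0.03, 0.036))  # ~14,000 users per variant
```

With numbers like these, it is easy to see why a site with only a few hundred conversions per month cannot reach a conclusive result in any reasonable time frame.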

Overall evaluation criterion (OEC): different teams may have different (and sometimes conflicting) goal metrics. It is really important to have a goal metric that fits each team’s purpose in the company; that is what we call an overall evaluation criterion. The OEC should be built from short-term metrics that predict long-term value and reflect the factors of success. Do NOT rely on vanity metrics.

Setting a hypothesis: formulating a hypothesis gets all stakeholders aligned on the same page. It frames what we are going to do and why, and saves time on discussions during and after the experiment. The hypothesis shows everyone the direction to go.

Prioritizing your A/B tests: there are some well-known prioritization frameworks you can use, such as PIE or ICE, and no matter which one you choose, they are quite similar. Here, we are going to introduce the “PIPE” model, which puts the hypothesis at the center of the framework (a minimal scoring sketch follows the list below):

  • Potential: what is the chance of the hypothesis being true?
  • Impact: where would this hypothesis have the biggest impact?
  • Power: what are the chances of finding a significant outcome?
  • Ease: how easy is it to test and implement?
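
The course does not prescribe an implementation, but scoring a backlog on the four PIPE dimensions could look like this minimal sketch (the hypotheses and the 1–5 scores are made up):

```python
# Score each hypothesis 1-5 on the four PIPE dimensions, then rank.
backlog = {
    "Add social proof near the CTA": {"potential": 4, "impact": 3, "power": 4, "ease": 5},
    "Rewrite checkout microcopy":    {"potential": 3, "impact": 4, "power": 2, "ease": 3},
    "Remove optional signup field":  {"potential": 4, "impact": 2, "power": 5, "ease": 5},
}

def pipe_score(scores: dict) -> float:
    # Equal weighting; adjust the weights if one dimension matters more to you.
    return sum(scores.values()) / len(scores)

for name, scores in sorted(backlog.items(), key=lambda kv: pipe_score(kv[1]), reverse=True):
    print(f"{pipe_score(scores):.2f}  {name}")
```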

Warning: do not use your prioritization model to decide which pages you are going to test. That should be determined by customer journey analysis and the right hypothesis.

Execution

When designing your A/B test, do not feel limited, but do think about your implementation costs. It is fine to apply more than one change when you are striving for optimization; the only situation where you want a single change is when your purpose is research. However, unless you are a very mature company that knows exactly what it is doing, you should pit only one challenger against the control: multivariate testing is much harder to analyze.

The change should be visible, scannable and usable. Do not waste time testing meaningless things. Also, talk to your designers to make sure the design you use is the best for your hypothesis.

When developing your A/B test, keep in mind that it is an experiment. It doesn’t make sense to chase perfect code and a perfect implementation. Of course, you want to monitor the essential factors, but the idea is to test fast and well. If you can’t build it within the set time frame, propose design changes that let you ship on time.

In the execution phase of your A/B test, you should also consider quality assurance. If some browsers or devices perform worse, you might want to check that everything is right. Analyze the potential loss of income before deciding whether to do QA on these segments: it is a matter of time vs. money. What you definitely need to do is check that the interaction with other pages still works fine. Make sure your A/B test didn’t create any technical issues for other parts of your website.

The last part of the execution phase is monitoring. Should you stop the experiment earlier than expected? There are three main things you should check:

  • Monitor graphs and stats. If there is something abnormal, it’s probably broken. If it is broken, stop the experiment.
  • Check the main user segments and analytics. If traffic is split 50/50, you should see roughly the same number of users in A and B. If not, you have what is called a sample ratio mismatch (SRM); stop the experiment. (A quick SRM check is sketched after this list.)
  • If you lose too much money, stop the experiment. This probably signals something abnormal. Talk to customer service, analyze chat logs… Try to find out what is wrong, and whether it is caused by the experiment or by an outside event.
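
As a sketch of that sample-ratio check, assuming a 50/50 intended split: a chi-square goodness-of-fit test with one degree of freedom, using only the standard library. The counts and the 0.001 threshold below are illustrative:

```python
from math import erfc, sqrt

def srm_p_value(users_a: int, users_b: int, expected_share_a: float = 0.5) -> float:
    """Chi-square goodness-of-fit test (1 degree of freedom) for a
    sample ratio mismatch against the intended traffic split."""
    total = users_a + users_b
    expected_a = total * expected_share_a
    expected_b = total - expected_a
    stat = ((users_a - expected_a) ** 2 / expected_a
            + (users_b - expected_b) ** 2 / expected_b)
    # For 1 df, the chi-square survival function reduces to erfc(sqrt(x/2)).
    return erfc(sqrt(stat / 2))

p = srm_p_value(10_000, 10_550)
if p < 0.001:  # a deliberately strict threshold, since SRM means broken data
    print(f"Likely sample ratio mismatch (p = {p:.5f}): stop and investigate.")
```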

What to do with the results of your tests

Now that you have researched, planned and run your experiments, what do you do with the results? Simply put, there are two situations: your result is inconclusive, or your result is significant.
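
To make that call concrete, here is a hedged sketch of the classic two-proportion z-test that typically sits behind it. Your testing tool runs its own statistics; the counts below are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

p = two_proportion_p_value(conv_a=300, n_a=10_000, conv_b=360, n_b=10_000)
print("significant" if p < 0.05 else "inconclusive", f"(p = {p:.3f})")
```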

If you have no winner, it does not mean your hypothesis was bad; you were simply unable to prove that it outperforms the null hypothesis. It is up to you to decide whether to implement it or not. After all, the chances that the change has a negative impact are very small. Personally, I would advise going with the simplest solution. Another suggestion is to segment your results if you haven’t already: you might find an effect within a specific type of customer or device, for example. However, do not try to find a winner at all costs. If you slice the data long enough, you will eventually make the numbers say whatever you want to discover, but the goal is to stay honest and objective. Otherwise, what is the point of A/B testing?

If you have a significant result, implement it right away. However, keep in mind that your measured impact will not necessarily be the real impact. When planning your A/B test, you set up everything you can to get the best insights (sample size, test duration, statistical power…), but you never know the real outcome until you implement the change. In the event of a significant result, you also want to dive into segments to understand who really caused the behavior change. Is it a global trend or a specific segment? It is up to you to find out.

Next week, I will talk about Google Analytics, Google Tag Manager and attribution.

Thank you.

Mert Kolay
