A six-step process for applying ‘test and learn’, including defining KPIs, creating test criteria, achieving statistical significance and turning insights into action.
According to a 2019 study carried out by Kantar Group, 47% of advertisers lack the confidence to extract meaningful insights from their data. That means almost half of the experts in one of the most data-driven industries around feel they’re not capable of getting value from data. The same study reveals that the problem only gets worse as the volume of data and the number of sources increase.
Shortly before the Kantar study was published, PhocusWire ran an article entitled “Are you really doing data-driven marketing or do you just think you are?”, asking marketers whether they’re mistaking data-driven marketing for data-driven assumptions. The author’s question is simple: how many data-driven tests are you running? If the answer is none, you’re not doing data-driven marketing.
What you’re lacking is a test and learn approach to digital marketing that makes sense of your data and turns it into actionable insights.
The test and learn principle is a product of data science that allows organisations to convert insights into hypotheses and then test those theories to prove their value. Before the era of big data, marketing decisions were largely driven by correlative insights and gut instinct, while the only measures of performance were vague (and equally correlative) KPIs, such as profit.
The problem with this is there’s no way to prove which marketing decisions actually result in higher or lower profit margins. Which means there’s also no way to truly learn from successes or failures – everything is merely speculative.
Test and learn principles remove this speculation by building a data-driven system that proves the success and failure of individual marketing strategies. Instead of relying on correlative insights, you attribute meaningful KPIs to each campaign, measure performance and test variations to determine which strategy is most effective.
No more speculation, no more assumptions and no more stabbing in the dark with your marketing efforts.
The learn aspect can be as simple as finding out which hypotheses are correct and prioritising them in order of value. You can also learn from previous experiments (and their data) to create new hypotheses or determine which testing opportunities generate the highest ROI.
Or you could go as far as feeding experimental data into machine learning algorithms to build a predictive analytics model that spots opportunities for testing, makes recommendations based on your results and predicts the outcome/value.
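To make that idea concrete, here’s a deliberately minimal sketch. Nothing below comes from the article itself: the features, figures and choice of scikit-learn are all assumptions, and a production model would need far more data and validation.

```python
# Minimal sketch: predicting the value of a proposed test from past
# experiment logs. All features and figures here are hypothetical.
from sklearn.linear_model import LinearRegression

# Hypothetical history: [audience_size, days_run, number_of_variants]
past_tests = [
    [50_000, 14, 2],
    [120_000, 21, 3],
    [30_000, 7, 2],
    [90_000, 14, 4],
]
measured_uplift = [0.04, 0.09, 0.01, 0.07]  # relative lift each test delivered

model = LinearRegression()
model.fit(past_tests, measured_uplift)

# Estimate the likely value of a new testing opportunity before committing budget
proposed_test = [[80_000, 14, 2]]
print(f"Predicted uplift: {model.predict(proposed_test)[0]:.1%}")
```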
Conversion rate optimisation (CRO) is probably the most widely known application of test and learn principles in marketing. This strategy defines a specific KPI (conversion rates), creates data-driven tests based on performance (A/B tests), and then runs with the variation that achieves the strongest performance metrics – in this case, the highest conversion rates.
Collectively, the tests run throughout a successful conversion optimisation strategy reveal patterns, inform future marketing decisions and allow marketers to make predictions about future outcomes or the potential value of new testing opportunities.
Most marketers understand the benefits of conversion rate optimisation, all of which derive from the test and learn principles CRO is built around.
None of those benefits are exclusive to conversion rate optimisation though. You can apply the same test and learn principles to any marketing strategy, campaign or optimisation to get valuable insights from every action and use them to make better marketing decisions in the future.
Now, let’s think beyond conversion rate optimisation and look at how you can apply a test and learn approach to other marketing strategies. The great thing about CRO is that it already defines a specific goal (increasing conversions) but this isn’t the case with every strategy.
So the first thing you need to determine is what your testing goals are.
Perhaps you want to test the effectiveness of an existing strategy or try out a new one to see if it’s worth pursuing. Or maybe you have a more specific goal in mind, such as testing ad copy variations to see which achieves the highest click-through rate (CTR) – the percentage of impressions that turn into clicks.
There are plenty of other possibilities too; here’s one example worked through in detail.
Let’s say you’re looking to test the effectiveness of different content types in your paid social campaigns. The first question is, why do you want to test this? Is it because you want to maximise ROI, increase CTRs, improve engagement, increase the quality of leads your ads generate – or something else entirely?
It’s important to answer this question specifically because it defines how you conduct your tests, how you measure results and how valuable the lessons you’ll be able to learn from them are.
Here’s the process you’ll want to follow.
We’ve already decided that we’re going to test the effectiveness of different types of content on paid social media ads. And, for the sake of example, let’s say the goal is to determine which content and ad formats generate the most engagement on Facebook, Twitter and LinkedIn.
What you need to do now is define which metrics and KPIs you’re going to use to measure engagement on each network.
While Facebook Advertising has a dedicated post engagement metric, this isn’t going to help you compare results across three networks. To do this, you’ll have to create your own metric that you can use to consistently measure ad engagement on each network.
For example, you might create a custom metric called Average Engagement Rate. To calculate this, add up the total number of clicks, likes, shares and comments for each ad variation, divide this figure by the total number of impressions and then multiply the result by 100.
This will give you your Average Engagement Rate (%) across each network for every content and ad format you test.
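The metric is simple enough to compute yourself. Here’s a short Python sketch of the calculation described above; all of the figures are invented purely for illustration.

```python
def average_engagement_rate(clicks, likes, shares, comments, impressions):
    """Total interactions as a percentage of impressions."""
    return (clicks + likes + shares + comments) / impressions * 100

# Hypothetical results for one ad variation per network
ads = {
    "Facebook video ad": dict(clicks=320, likes=410, shares=85, comments=60, impressions=25_000),
    "Twitter Promoted Tweet": dict(clicks=150, likes=230, shares=40, comments=25, impressions=18_000),
    "LinkedIn sponsored post": dict(clicks=95, likes=120, shares=30, comments=15, impressions=12_000),
}

for name, metrics in ads.items():
    print(f"{name}: {average_engagement_rate(**metrics):.2f}%")
```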
Before you start running any tests, you need to develop your hypotheses as a starting point to work from. Ideally, these will be based on relevant historical data you’ve already got access to, such as engagement reports from previous campaigns on each of the networks you plan to test.
For example, you may have some correlative data that suggests video ads achieve the highest engagement rates on Facebook. But it’s not so clear which ads and content formats perform best on Twitter and LinkedIn.
So one of your hypotheses might be that video ads are the most engaging content/ad format on Facebook and now you want to prove it in controlled testing. You may also have data that suggests Promoted Tweets advertising your thought leadership content perform best on Twitter – another possible hypothesis that you now want to prove/disprove in controlled testing.
Once you know which theories you want to test, you can start thinking about how to make your tests conclusive.
To get meaningful results from your tests, you need to remove every possible variable that might skew the outcome. It’s reasonable to think some content types and ad formats are more effective than others on each network, but what if your different audiences or the targeting settings you’re using are influencing engagement rates more than you realise?
These are the kind of variables you want to remove from your tests – as much as you possibly can.
Of course, there are some variables that you can’t control, such as the different formatting rules and visual appearance of ad formats on each network. Wherever possible, though, you want to run the same ad variations on each network and do what you can to get them seen by similar audiences on each platform.
When the time comes to run your first test(s), it’s important to start with the ideas that have the highest potential: the tests you expect to significantly increase ROI, conversion rates, lead quality or whichever performance metrics you value most. There’s always a danger your estimate will fall short, but this is all part of the test and learn process – so focus on the potential for now.
Your estimates will only become more accurate as your test and learn systems mature and have more data to work with.
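The article doesn’t prescribe a scoring method, but one simple, hypothetical way to rank candidate tests is an expected-value score: the uplift you estimate a test could deliver, multiplied by your confidence in that estimate. A sketch with invented numbers:

```python
# Hypothetical prioritisation sketch: rank test ideas by a simple
# expected-value score (estimated uplift x confidence). The scoring
# formula is an assumption, not something the article prescribes.
test_ideas = [
    {"name": "Video vs image ads on Facebook", "est_uplift": 0.10, "confidence": 0.7},
    {"name": "Thought-leadership Promoted Tweets", "est_uplift": 0.06, "confidence": 0.5},
    {"name": "Carousel ads on LinkedIn", "est_uplift": 0.15, "confidence": 0.3},
]

for idea in test_ideas:
    idea["expected_value"] = idea["est_uplift"] * idea["confidence"]

# Run the highest-potential test first
for idea in sorted(test_ideas, key=lambda i: i["expected_value"], reverse=True):
    print(f"{idea['name']}: expected value {idea['expected_value']:.3f}")
```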
In our example of testing content formats in social media ads, you can run one version of each ad type simultaneously on every platform. So there are no prioritisation choices to worry about in this particular scenario, as long as you stick to a single version of each content format/ad type on each network.
To make sure your data is reliable, you’ll need to run your tests long enough to reach statistical significance. Put simply, this means you’ve collected enough data that the outcome can be trusted at your chosen confidence level – normally somewhere in the 95-99% region.
Most testing platforms will display a percentage to represent how statistically significant your data is, which means you can normally run tests until you hit your target percentage.
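If you want to sanity-check a platform’s significance readout yourself, the standard tool for comparing two rates – such as the engagement rates of two ad variations – is a two-proportion z-test. Here’s a pure-Python sketch with invented figures; in practice, your testing platform will do this for you.

```python
# Minimal two-proportion z-test: is the difference in engagement rate
# between two ad variations statistically significant?
from math import sqrt, erf

def two_proportion_z_test(engaged_a, shown_a, engaged_b, shown_b):
    p_a, p_b = engaged_a / shown_a, engaged_b / shown_b
    pooled = (engaged_a + engaged_b) / (shown_a + shown_b)
    se = sqrt(pooled * (1 - pooled) * (1 / shown_a + 1 / shown_b))
    z = (p_a - p_b) / se
    # Two-tailed p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: video ad vs image ad
z, p = two_proportion_z_test(engaged_a=875, shown_a=25_000,
                             engaged_b=680, shown_b=24_000)
print(f"z = {z:.2f}, p = {p:.4f}")
print("Significant at 95%" if p < 0.05 else "Keep collecting data")
```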
There are three key things that matter in achieving reliable test results: data relevance, data volume and timing.
First of all, you want to be confident that you’re collecting data from relevant sources – in this case, similar audiences you would normally target with your ad campaigns. You also want to make sure you have enough of this data to compensate for anomalies and variables.
Time is a balancing act when it comes to reliable testing. Of course, you’re going to need a certain amount of time to collect enough data and achieve statistical significance but you have to consider how time variables such as Christmas or seasonal changes might impact your outcomes.
Sometimes it can be beneficial to end tests slightly earlier if it means you’ll avoid unwanted variables. Likewise, it can help to repeat tests to average out results across seasons or years to compensate for unusually wet summers or poor economic performance in certain years.
Once you’re happy that your tests have returned statistically significant results, you can use these findings to create more engaging social media ads. Of course, this should have an immediate impact on your social advertising performance but there’s a lot more you can learn from this data if you continue to test and collect more insights.
One obvious next step would be to keep repeating these tests to measure engagement rates on each platform to see how they change over time. You could also start adding data from new social networks as they emerge to compare results and ensure that you’re always active on the most engaging network.
Here we have the test and learn principle helping you to choose which social networks to advertise on.
You can also expand these tests to include other KPIs – for example, how engagement compares to the ROI of each ad type per network. This will help you confirm that the engagement you generate is worth the initial investment, which can be particularly important if you’re spending a lot to create video ads.
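As a hypothetical illustration of that check, here’s a simple cost-per-engagement comparison; the framing and every figure below are assumptions for the example rather than anything the article specifies.

```python
# Hypothetical comparison of engagement against spend per ad format.
ad_results = [
    {"format": "Facebook video ad", "spend": 2_000, "engagements": 875},
    {"format": "Twitter Promoted Tweet", "spend": 1_200, "engagements": 445},
    {"format": "LinkedIn sponsored post", "spend": 1_500, "engagements": 260},
]

for ad in ad_results:
    cost_per_engagement = ad["spend"] / ad["engagements"]
    print(f"{ad['format']}: £{cost_per_engagement:.2f} per engagement")
```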
With enough testing and data, you’ll also be able to predict future trends like dropping engagement rates or declining ROIs and put contingency plans in place before your strategy comes to a halt.
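In its simplest form, that prediction can be a straight-line trend fitted to your historical results. A minimal NumPy sketch, with invented monthly engagement figures:

```python
# Illustrative trend check: fit a straight line to monthly engagement
# rates and project forward. The figures are invented for the example.
import numpy as np

months = np.arange(1, 7)                                      # last six months
engagement_rate = np.array([3.8, 3.6, 3.5, 3.1, 3.0, 2.8])   # %

slope, intercept = np.polyfit(months, engagement_rate, 1)
next_quarter = slope * np.arange(7, 10) + intercept
print(f"Trend: {slope:+.2f} points per month")
print("Projected next three months:", np.round(next_quarter, 2))

if slope < 0:
    print("Engagement is declining - time to plan the next round of tests.")
```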
This test and learn approach can be applied to any marketing strategy, campaign or the smallest set of design changes on your website. In fact, some of the world’s most innovative brands, such as Amazon and Facebook, have built entire business models based on test and learn principles that apply data-driven insights to every business decision they make.
If you’re not getting the most out of your marketing data, you can speak to our team by calling 02392 830281 to find out more about developing a test and learn system. Plus, learn about how our all-in-one marketing strategy services could help you.