If you want your digital marketing efforts to be results-driven, you need to measure results – and an important part of making sure you get results is testing.
Split testing, commonly called A/B testing, allows you to evaluate the impact of a potential change on your website or in a marketing campaign.
It’s a pretty simple concept. “A” is the status quo, “B” is a change you’re considering. You use one of the many online testing tools available to see whether your idea for change will increase conversions.
The goal of A/B testing is, of course, conversions. Whether you’re trying to make a direct sale or gain prospects by collecting email addresses, the little gem that triggers conversion is your call to action (CTA).
So let’s look at how you can test ideas for improving the effectiveness of your CTA.
1. Figure out what you’d like to test
You can evaluate almost any element of a webpage with a split test, but you want to pick one single element to test at a time to eliminate any confusion from your results. For example, let’s say you were to test the following two pages against one another:
- Version A: Uses “Headline A” and a red CTA button
- Version B: Uses “Headline B” and a grey CTA button
And let’s say Version B tested better – but which element made the difference? Was it the headline or the CTA colour? This is the primary reason why you only want to test one element at a time.
With your CTA, you might consider changing the colour, font, copy or position on your page or email to garner a different result.
So develop your hypothesis (“I think changing the colour from grey to red will increase conversions”) and test one change at a time.
In other tests, you can compare the impact of a copy change, and then your hypothesis would become: “I think stating ‘Request Your Free Newsletter’ will perform better than ‘Sign Up for Our Newsletter’”.
The important thing is to be clear on what you’re testing and why.
2. Define your specific goal and how to measure it
For simplicity’s sake, let’s say you’ve decided to test the colour of your CTA button. You need to be clear on your goal and how you will measure this.
For example, you may be tempted to measure your results in actual sales or conversions, but you’ll want to get a bit more granular, and here’s why:
Imagine your CTA takes your prospects to a landing page where they are presented with an offer — but various elements on that page may also influence the user’s decision to convert. Because of these other variables (e.g. the sales copy and images on your destination page), conversions or leads generated are not the best measurement for this test.
Instead, you will simply measure the number of clicks on the CTA itself to see which version performs better. Remember, the goal of your analysis is to determine which of your two CTA variations generates the most clicks.
To put it simply, does the red button or the grey button get more clicks?
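In code terms, the comparison is just two click-through rates. A minimal sketch — the click and visitor counts below are made-up placeholders, not real results:

```python
# Sketch: comparing click-through rates (CTR) for two CTA variants.
# All counts are hypothetical placeholders.

def click_through_rate(clicks, visitors):
    """Fraction of visitors who clicked the CTA."""
    return clicks / visitors

# Hypothetical results after a test run:
ctr_a = click_through_rate(clicks=120, visitors=4000)  # grey control
ctr_b = click_through_rate(clicks=150, visitors=4000)  # red variation

print(f"Control (grey):  {ctr_a:.2%}")   # 3.00%
print(f"Variation (red): {ctr_b:.2%}")   # 3.75%
```

A raw click count only tells you which version got more clicks; dividing by visitors lets you compare fairly even if the two versions were shown to slightly different numbers of people.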
3. Set up your control and variation
Your “control,” in testing language, is your “A,” the status quo. Your “treatment” is the “B,” the variation that includes the change you’re going to test.
In the example, “A” has our CTA in dark grey; version “B” is red.
Most importantly, all other elements on the page should be exactly the same. The only visible difference should be the element you are actively testing.
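If you’re curious what a testing platform does under the hood, the visitor split can be sketched like this. This is a simplified illustration, not any particular tool’s implementation — the key idea is that assignment is deterministic per visitor, so a returning visitor always sees the same version:

```python
import hashlib

def assign_variant(visitor_id: str, split: float = 0.5) -> str:
    """Bucket a visitor into control ("A") or treatment ("B").

    Hashing the visitor ID (instead of picking randomly on each page
    load) keeps repeat visits on the same version — flipping a visitor
    between versions mid-test is a common split-testing pitfall.
    """
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "A" if bucket < split * 100 else "B"

# The same visitor always lands in the same bucket:
print(assign_variant("visitor-42"), assign_variant("visitor-42"))
```

With `split=0.5`, roughly half of all visitors see each version, which is what keeps the comparison fair.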
4. Start your testing
I’m assuming here that you’ve already selected a testing platform. So now that you’ve decided what to test, how you’ll measure results, and what your control and treatment are, you’re ready to get your test underway.
You have to create the content and graphics that you need for your control and treatment. In the example here, that’s the grey CTA and the red CTA shown below.
You’ll see that the only difference is the colour. Later we might test other variables such as shape, text (content) or position on the page. For this first test, we’re only interested in whether the colour impacts the number of clicks on the CTA element.
5. Drive traffic to your test
To get results that are statistically significant, you need a substantial volume of visitor activity on your page during the test.
This requires you to know what typically drives traffic to your site – but not just any traffic. Existing customers, for instance, are not likely prospects for clicking on your CTA. Nor are those who already subscribed to your newsletter or whatever else you’re offering for lead generation. For this test, you need a mass of new visitors to your site.
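How many new visitors is enough? A rough sketch using the standard two-proportion sample-size formula, at roughly 95% confidence and 80% power — the baseline click rate and the lift you hope to detect are hypothetical numbers you would replace with your own:

```python
import math

def sample_size_per_variant(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a change in
    click rate from p1 to p2.

    Standard two-proportion formula; the z-values correspond to ~95%
    confidence (1.96) and ~80% power (0.84) and can be tuned.
    """
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical: baseline 3% CTR, hoping to detect a lift to 4%
print(sample_size_per_variant(0.03, 0.04))  # → 5295 visitors per variant
```

Notice how quickly the requirement grows: detecting a small lift on a low baseline rate can take thousands of visitors per variant, which is why driving fresh traffic matters so much here.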
A side benefit of A/B testing is that you can use it to simultaneously test methods for driving website traffic.
For instance, if you tailor a promotion to a Facebook demographic, you can test the ability of your social media campaign to drive traffic and (assuming the campaign is successful) also get the numbers you need to check your red CTA against your existing grey one.
6. Gather data
Marketers attempting A/B testing for the first time often question how long the test needs to run.
Unfortunately, there’s no easy answer. It’s a waiting game. Put your promotion into hyper-drive until you have statistically significant results.
It can take as much as a month for your site traffic to yield significant results. Or, you may find that what you’re testing doesn’t make enough difference to measure. In other words, maybe you’ll find that the red button and the grey button perform equally well.
That’s another possible outcome of split testing.
Testing for an appropriate amount of time generates data that gives you an opportunity to look closely at your marketing funnel – the journey your web visitor takes from becoming aware of a product or service to taking action.
7. Analyze your marketing funnel
Long before the internet, marketers were using the AIDA model: Awareness, Interest, Desire, Action. There are variations on this idea, but these are the basic steps a buyer takes before making a purchase – and you need to build your website design around this or a similar concept.
Even if you didn’t get enough traffic for your results to be statistically significant (and even better if you did), you can analyze your results further with the AIDA model in mind.
Although the point of this test was only to measure clicks on the CTA, you can look further. For instance, you might see if there was any impact on the number of people who made that click and eventually converted by completing the offer on your landing page.
Everything you can learn about user behaviour on your site will help you improve the user experience and, thereby, conversions. So there is no insignificant data, just more opportunity!
There are so many variables that impact conversions: the time of year, the time of month and even the time of day. As you wrap up your first A/B test, consider what you might want to examine next, and keep track of your data and results. By doing this, you can develop a robust testing program that improves user experience and conversions.