10 Ideas to Kick Your A/B Testing Program Up a Notch
Want to optimize your optimization efforts? Most marketers do. And if you’ve been focusing on A/B split-testing as the way to get there, you’ve already done something smart: A/B split-testing is widely named as the most effective way to improve conversion rates.
That’s according to a survey of conversion rate optimization pros from Econsultancy, and it’s echoed by another survey of optimization pros from ConversionXL.
But as good as it is, A/B testing can get stale after a while. It’s easy to get bogged down thinking at the level of calls to action and lose sight of larger, more system-wide tests and tactics that could deliver bigger returns.
This is especially true if you’re a solo optimizer. About 30 percent of CRO professionals fall into this category.
If optimization isn’t your primary job function (like 23 percent of optimizers), it gets even harder. You have to fit your conversion optimization work into the very limited time you’ve got. That tends to make most people play it safe, and stick to more vanilla tests.
So to shake you out of that rut, here are 10 out-of-the-box A/B testing ideas. Some are tools and some are hacks. Some can be applied to your entire testing program, or to your entire site. And one – just one – breaks a fundamental rule of testing. But for a very good reason.
1. Speed up your site.
Why? Because speed affects conversions.
Here’s why this is a hack: Speeding your site up will increase conversions for everything. Every landing page, every micro-conversion – all of it.
Not only will this immediately give you an across-the-board win on every possible test or problem on your list, but it will also make all your tests take less time.
This is due to statistics, and how statistical significance is calculated. As you may know, the higher the conversion rate you’ve got to work with, the smaller the test sample you need to get statistically valid results.
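To make that concrete, here’s a rough Python sketch of the standard two-proportion sample-size formula. All the numbers are illustrative assumptions, not figures from either survey:

```python
# Rough sketch: how baseline conversion rate affects required sample size.
# Standard two-proportion formula; all numbers are illustrative assumptions.
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_variation(baseline, relative_lift, alpha=0.05, power=0.80):
    """Visitors needed per variation to detect `relative_lift` over `baseline`."""
    p1, p2 = baseline, baseline * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    pooled = (p1 + p2) / 2
    top = (z_alpha * sqrt(2 * pooled * (1 - pooled))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(top / (p2 - p1) ** 2)

# Detecting the same 10% relative lift gets much cheaper as the baseline rises:
print(sample_size_per_variation(0.02, 0.10))  # ~80,700 visitors per variation
print(sample_size_per_variation(0.04, 0.10))  # ~39,500 visitors per variation
```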
Now, is speeding up your site going to double your conversion rates? No – that’s an extreme example. But it illustrates the point: Faster pages convert better, and pages that convert better are faster to test. We’ll see why that’s so important in the next point.
One more thing: Be mindful of how the A/B testing tool you’re using affects your site’s speed. It turns out some testing tools can slow things down a lot.
2. Get picky about what you test. Really picky.
We may want to test everything… but we can’t.
There are only 52 weeks in a year. Which means even if you ran each of your tests for only a week (you can, but it’s dicey), you’d only be able to run 52 tests every year.
And if you were more conservative about your tests, and wanted to be really sure about the results (or you have very little traffic, and so you have to give your tests extra time), you could only test 26 things every year.
This actually ends up being about the average rate of test implementation. About half of CRO pros are doing only one or two tests a month.
Because of this limitation, most testing professionals keep a standing list of potential tests. They plan these tests, and they may even keep a testing calendar much like an editorial calendar (because every test they run needs creative, tech support, and coordination with multiple departments).
Roughly half of professional optimizers use some version of a “test prioritization framework”. And the more tests people run, the more likely they are to have a framework.
If you don’t want to get so formal about all this, a “framework” can be as simple as a spreadsheet. It’s enough to help you remember all your test ideas and rank them according to benefit and priority.
Include a column for “possible result”: Try to quantify what the dollar value of a win would be for each test. If you could lift your existing conversion rate by, say, 10 percent, what would that mean for your company’s revenues?
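If it helps to make that concrete, here’s a back-of-the-envelope version of that “possible result” column in Python. Every number below is a made-up placeholder, not a benchmark:

```python
# Back-of-the-envelope "possible result" estimate; all numbers are made up.
monthly_visitors = 50_000
conversion_rate = 0.02       # current rate on the page under test
value_per_conversion = 120   # average revenue per conversion, in dollars
expected_lift = 0.10         # the 10% relative lift you're hoping for

current_revenue = monthly_visitors * conversion_rate * value_per_conversion
projected_gain = current_revenue * expected_lift
print(f"A {expected_lift:.0%} lift would be worth ${projected_gain:,.0f}/month")
# -> A 10% lift would be worth $12,000/month
```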
As you set up your testing calendar, leave some empty spaces. Your boss may suddenly have something they urgently want to test. Or your company may have an unexpected event.
You may only get to choose about 20 tests to run each year.
3. Experiment with bandit and multivariate tests.
There’s a long-standing problem with A/B split tests: They carry a lot of opportunity cost.
That’s because most tests won’t produce a winner, which means you’ll have been sending traffic to a variation that performs worse than your control – generating fewer conversions, and costing your company precious revenue.
A bandit test attempts to get around this problem. Bandit tests work by adjusting the amount of traffic that goes to each variation over time, whereas in a standard A/B split-test, the amount of traffic is divided equally between the variations for the entire test.
A bandit test does start out by dividing traffic equally, but then it shifts and begins to give more traffic to the top-performing variation. Over time, the losing variations get less and less traffic, and the winning variation gets most of the traffic.
Of course, for this to work, you’ll have to send enough traffic to each variation to get statistically valid results. But a bandit test can be an effective way to run either a quick test or a really long test, while minimizing lost business.
They can be a good tool, but few optimizers ever try them.
There’s a slew of different bandit algorithms, each with its own pros and cons about when to use it, but the main thing is to know they exist. They can be a smart alternative to a standard A/B split test.
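If you’re curious what the mechanics look like, here’s a minimal Thompson-sampling sketch in Python. It’s just one common bandit variant among many, and in practice your testing tool would handle this for you:

```python
# Minimal Thompson-sampling bandit sketch (one variant among many).
# Each variation keeps a Beta posterior over its conversion rate.
import random

variations = ["A", "B"]
successes = {v: 1 for v in variations}  # Beta(1, 1) uniform priors
failures = {v: 1 for v in variations}

def choose_variation():
    # Draw a plausible conversion rate for each variation from its posterior
    # and serve the highest draw. As evidence accumulates, the better
    # variation wins the draw more often, and so receives more traffic.
    draws = {v: random.betavariate(successes[v], failures[v]) for v in variations}
    return max(draws, key=draws.get)

def record_result(variation, converted):
    if converted:
        successes[variation] += 1
    else:
        failures[variation] += 1
```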
And if you’ve got a lot of traffic, consider a multivariate test. These can work really well, but honestly, they can also be complicated to set up and expensive to run. If you’re a solo CRO professional working with a limited budget (and limited risk tolerance from the C-Suite), you may have to stick to split tests – they’re far easier to run, and they work well enough for most marketers. But if you need to break out the big guns, and you’ve got the traffic, multivariate testing is there.
4. Go back to your customers.
Do you talk to your customers? Do you talk to them more than once a month? For more than a few hours each month?
If not, you need to start doing that. Even if you’re really busy.
Because your customers are the secret sauce. Understanding them is a massive competitive advantage. Being a data-driven marketer is great (and necessary), but how might things change if you spent as much time talking to your customers as you do looking at analytics reports?
Talking to people is a soft skill, for sure. It’s not as celebrated as coding or analytics. But listening to people – actually hearing what they’re saying, and what they’re not saying – is a critical skill, too.
And soft hacks can make a big difference. Even among tech B2B marketers (arguably a logic-driven bunch), soft skills “such as communication and people management” are named as the most important skills to have.
5. Do a usability test.
It’s one of the best ways to improve your marketing. But before you skip over this section because you think usability tests are too hard, or too expensive, know this: A usability test doesn’t have to be a $20,000 affair.
And to do a usability test, you only need:
- Five testers. You can find 80 percent of your site’s, app’s, or software’s problems with just five testers.
- A way to capture their screen actions. Something like Camtasia would work, or any reliable screen capture software.
- A way to record a video of them sitting in front of the computer. All this requires is a smartphone and a tripod. You want this video so you can see how the person responds to tasks, where they get frustrated, what they say, even the expressions they make. And because you want to see expressions, check the camera angle or consider having two cameras.
- Three to five well-defined tasks. Think carefully about these, because you can’t ask your testers to do everything. Don’t ask people to test for much more than 45 minutes, and preferably only 30 minutes. Their patience will wear out.
- A note taker. There should be a human in the room with your tester, explaining what you want them to do. The note taker will be taking notes, and should offer help if the tester gets really frustrated. But don’t give too much help – the whole point of this is to reveal where people are getting stuck in your system.
- A form with a page or two for each task. Use it to note what the tester experiences and where they get stuck, along with the note taker’s own impressions and takeaways as the test happens. It’s fast, intense work.
This is clearly a large project. But it can reveal things that there’s just no other way to see. And while optimizing landing pages and emails is great, a usability test can reveal things that will deliver radically more value to your company. It could totally revamp your testing program. That’s why so many marketers rank usability tests high in terms of optimization effectiveness.
6. Make it a rule to use the first person in your headlines and calls to action.
Nine times out of ten, using “my” or “me” will outperform “you” – or no mention of the user at all.
Occasionally, writing something in the first person can get tricky. If you have to, try using a quote from a customer (even a faux customer), like “I’m tired of spending four hours every week pulling reports.”
The real hack here is to set this as a default rule: always use the first person in your copy and calls to action.
So go check everything – landing pages, website, ads – the works. If you find a use of the third person (he/she/it/they) or even a use of the second person (you), set up a test to see how the first person performs. Going forward, make sure every new headline and CTA is in the first person.
7. Consider breaking the statistical significance rule early on in some situations.
Let’s say you’ve just launched a test product, and you’ve got a brand new landing page to test.
Traditionally, you might split-test two completely different landing pages… and wait until you’ve got statistically valid results. You’d wait until your test achieved a 95 percent confidence level.
That could require quite a bit of traffic. But if you “cheat”, and accept an 80 percent confidence level, you could get away with almost 40 percent less traffic. (You can check the math with a sample-size calculator like the one at https://www.abtasty.com/sample-size-calculator/.)
Here’s the rub: If you go with the 80 percent confidence level, there’s a one in five chance you’ll pick the wrong variation. But if you have to make this campaign profitable as soon as possible, it might make sense to take that risk.
Because if you’ve got a limited amount of time to make this work – or a limited amount of potential users – going with the 80 percent confidence level means you’ll be able to do more tests. A lot more tests.
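Here’s roughly where that “almost 40 percent” figure comes from, using the same two-proportion sample-size math sketched in idea #1. Again, the baseline and lift are illustrative assumptions:

```python
# Compare required sample size at 95% vs. 80% confidence.
# Same two-proportion formula as in idea #1; numbers are illustrative.
from math import sqrt, ceil
from statistics import NormalDist

def n_per_variation(p1, p2, alpha, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    top = (z_a * sqrt(2 * pooled * (1 - pooled))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(top / (p2 - p1) ** 2)

# Detecting a 10% relative lift on a 4% baseline:
strict = n_per_variation(0.04, 0.044, alpha=0.05)  # 95% confidence: ~39,500
loose = n_per_variation(0.04, 0.044, alpha=0.20)   # 80% confidence: ~22,700
print(f"{1 - loose / strict:.0%} less traffic at 80% confidence")  # ~43%
```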
Of course, this approach won’t work for everyone, and it won’t work for even most tests. But if being able to run more tests means you’ll be able to make the page work before your boss pulls the plug on the test product, then maybe 80 percent confidence will do.
8. Buy a book.
Getting tired of all these split-testing projects? Let’s give you a rest and let you sit down with a book for a while.
Here’s the book: Nielsen Norman Group’s B2B Website Usability for Converting Users into Leads and Customers. It costs $248. But if you’ve got the time and the focus to read it, and you work at a B2B business of any size, it’s almost guaranteed to give you a fount of ideas for what to test, what to redesign, and maybe how to rethink your entire website.
Here are two other excellent books about testing:
- Landing Page Optimization: The Definitive Guide to Testing and Tuning for Conversions by Tim Ash, Maura Ginty, and Rich Page
- Web Analytics 2.0: The Art of Online Accountability and Science of Customer Centricity by Avinash Kaushik
9. Test out a form analytics tool.
Forms are where a lot of conversion magic happens. They’re so important, there are tools expressly built to home in on exactly how people use your forms – down to how long it takes users to fill out each field.
While many form analytics tools are bundled into more expensive programs, you could get your feet wet with Lucky Orange, which offers a complete form analytics tool starting at $10 per month.
If you got even one modest win from using form analytics, you’d probably be able to talk your boss or the C-Suite into giving you more budget for either more time or more resources.
Here’s one thing you might want to do right out of the gate: Test the order of the fields in your forms.
One split-test got a 4.59 percent lift just by swapping the order of fields, and using zip code data to populate the state and city fields.
10. Stop testing general audiences.
Trick question: Who is your average user?
Answer: There is no average user.
So stop optimizing for somebody who doesn’t exist.
A lot of us have heard this advice before. The question is how to apply it.
Begin by segmenting your users – by device, by referrer, and by location for starters.
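If you can export raw visit data, a few lines of pandas will surface those segment-level conversion rates. The column names below are hypothetical stand-ins for whatever your analytics tool emits:

```python
# Hypothetical sketch: conversion rates by segment from exported visit data.
import pandas as pd

visits = pd.DataFrame({
    "device":    ["mobile", "mobile", "desktop", "desktop", "tablet"],
    "referrer":  ["ads", "organic", "ads", "organic", "organic"],
    "converted": [0, 1, 1, 1, 0],
})

# A variation that "wins" overall can still lose inside a segment,
# which is exactly why testing one general audience misleads.
print(visits.groupby("device")["converted"].mean())
print(visits.groupby(["device", "referrer"])["converted"].mean())
```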
There are a couple of tools that make this quite easy. Personyze is a dynamic landing page builder that lets you customize landing pages based on more than 70 different attributes.
So if you want to customize a page based on “visitor attributes including visitors’ campaign source, industry, real-time and past behavior, demographics, geo-location, CRM data, weather, social profile, session data” and more, have at it. And it’s not limited to just landing pages – you could add call-to-actions or any other personalized elements to your website.
Also check out RichRelevance, BrightInfo, and Monetate for these sorts of personalization features.
Final Thoughts
Testing is more successful if it’s a habit rather than a one-time effort. And while any one of these tactics can improve conversion rates, it’s better to view each one as a part of a larger whole – as part of a larger optimization strategy.
Even if this means you have to do fewer tests, the tests you do run will be more strategic. That will probably make them more successful, too. We need to be as careful about “random acts of testing” as we are about random acts of content.