Split testing (also referred to as A/B testing) is a method of conducting controlled, randomized experiments with the goal of improving a web-based metric. But how many of us are doing it? Do we know what we're looking for? Here are 6 tips for running effective split tests to help guide your expectations!
Marketeers around the land should be shouting from the rooftops about the merits of running split tests and how they've boosted results by 1000%.
Unfortunately, all too often, we become so consumed with the tiniest tweak in content, or believe that wholesale changes will solve all marketing woes, that we fail to test empirically what actually works!
The moral of the story is this - put aside your preconceived ideas, lay your pride to rest and test your hunches. Marketing, and the positive evolution of your campaign, should be based on numbers, not egos.
Below are my top tips for running split tests:
Ask the opinions of your peers
You’re one person, with one viewpoint. It’s highly unlikely that all of your customers will think and react to your marketing communications in the same way you do.
Through asking for the opinions of others you’ll gather a whole host of different viewpoints. It’s likely you’ll dismiss many and only accept a few but the end result will be a better campaign.
Think about what you’re testing
What's your theory? Is it that subject line 'A' is better than 'B', or that plain-text emails work better than HTML designs?
Without deciding on what's being tested, the results are meaningless and you're likely wasting your time. You'll need to settle on a broad idea before moving forward.
Ensure what you’re testing is suitably different
It is absolutely pointless to run 2 campaigns side by side, changing only one minor (inconsequential) characteristic.
To learn the most, you’ll be in a better position by rewriting the whole thing. Include different articles, imagery, calls to action - heck, test 2 completely different creatives as you’ll learn more from it!
Test one variable at a time
Don't skew the results. A variable can cover more than a single element - it's about testing one idea at a time.
A split across 2 or more HTML designs is a good example of one controlled variable. It's important, however, to ensure each version is sent to similar data. Changing too many variables at once forces 'opinion' into the interpretation of the results, and that's not a fair test.
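To make sure each variant is sent to similar data, randomise the split rather than, say, sending variant A to last year's sign-ups and variant B to this year's. Here's a minimal sketch in Python (function and variable names are hypothetical) of shuffling a mailing list and dealing it into equal variant groups:

```python
import random

def split_recipients(recipients, n_variants=2, seed=42):
    """Shuffle the recipient list, then deal it round-robin into
    n_variants groups so each group is a similar random sample."""
    pool = list(recipients)            # copy: leave the original list intact
    random.Random(seed).shuffle(pool)  # fixed seed makes the split repeatable
    return [pool[i::n_variants] for i in range(n_variants)]

# Example: 1,000 addresses split into two groups of 500
groups = split_recipients([f"user{i}@example.com" for i in range(1000)])
```

Seeding the shuffle is optional, but it means you can reproduce exactly who saw which variant when you come to analyse the results.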
What can you test?
In short, anything and everything!
At CANDDi, we run split tests on subject lines, overall design, calls to action, 'sales' vs 'informative' content, time of broadcast, day of broadcast, imagery and friendly "from" names, amongst a myriad of other variations! Although we haven't secured the perfect formula yet, we're much closer than we used to be.
Report, report, report!
You’ve minimised the variables, negated most outside influences and run your test. Think like a scientist, put aside your prejudice and trust in the results.
Hopefully, you'll see a marked difference in results. Adopt the winning variant, then move on to the next improvement.
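Thinking like a scientist also means checking that a "marked difference" isn't just noise. One common approach (not specific to CANDDi - a standard two-proportion z-test, sketched here with hypothetical numbers) compares click rates between two variants:

```python
import math

def two_proportion_z_test(clicks_a, sent_a, clicks_b, sent_b):
    """Compare click-through rates of variants A and B.
    Returns the z statistic and a two-sided p-value; a small
    p-value (e.g. < 0.05) suggests the difference is real."""
    p_a, p_b = clicks_a / sent_a, clicks_b / sent_b
    p_pool = (clicks_a + clicks_b) / (sent_a + sent_b)   # pooled rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the normal CDF, via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical campaign: A gets 120 clicks from 1,000 sends (12%),
# B gets 90 from 1,000 (9%)
z, p = two_proportion_z_test(120, 1000, 90, 1000)
```

If the p-value comes out above your threshold, the honest conclusion is "no clear winner yet" - run the test on a larger list rather than declaring a favourite.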
If you’re stuck on how to improve your campaigns, feel free to get in touch with your CANDDi consultant - we’re always happy to offer advice on how to get started!
P.S. The above tips can (and should) be applied much more widely than email marketing!