Sure, certain parts of marketing strategy are still based on old-fashioned intuition.
A/B ad testing isn’t one of ‘em. This time-honored scientific process directly measures consumer response to two versions of a marketing variable, offering a fast, relatively easy way to objectively determine which produces the better response. And it can be applied to everything from email subject lines to ad copy to the times your ads are broadcast, maximizing ROI and keeping you from wasting time and money.
“A/B testing is the simplest, most straightforward testing method available,” advises Cara Olson on Marketing Land. “Not only does it help you understand the impact you’re making, but it gives you a much fuller understanding about your customers’ behavior and preferences.”
When it comes to fine-tuning call marketing strategy, such testing can work wonders, allowing you to zero in on your most successful ads, vanity phone numbers and call center sales techniques. Savvy marketers are already taking full advantage of the value of inbound calls; Salesforce recently determined they convert at 30 to 50 percent, compared with just 2 percent for web leads. And distinct, highly memorable phone numbers can be easily incorporated into the ads spurring the calls, allowing for easy tracking and comparison.
Perhaps best of all, setting up an effective A/B test need not be complicated. Consider the following tips:
- Plan to test an element that will make a significant difference in future campaigns, such as ad copy, calls to action, pricing, headlines, images, timing or external links.
- Whatever you’re testing, establish a baseline to fully understand whether your A or B options represent improvements over what you’re already doing.
- Ensure an adequate sample size to reduce your margin of error. That size will depend heavily on what you're testing, but published sample-size guidelines can point you in the right direction.
- Plan to conduct your test for a time interval relevant to your study. To compare conversions resulting from two different ads, for example, you may need to keep testing over an entire sales cycle.
- Make sure you’re reaching the right audiences, and compare responses at the same stage of the sales funnel. If your test ad shows up in venues unlikely to attract potential customers, or ends up comparing established customers with new customers, results could be virtually meaningless.
- Identify and control for extraneous factors that could skew results. For example, conversions made on major shopping days such as Black Friday can’t reasonably be compared with conversions on average shopping days.
- Limit your variables to as few as possible, keeping your two tests as identical as you can other than the two elements you wish to test. Olson notes you can easily muddy your data by mixing results from display and paid search traffic, for example, since consumer intent is different with each.
- Check your testing structure throughout the process to ensure code is intact and data isn’t unexpectedly being skewed.
- Vet your conclusions. Could there be other, less obvious reasons why results may not be valid? Could the details of your findings reveal other valuable insights?
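To see how the sample-size and vetting tips above play out in practice, here is a minimal sketch (not a Dial800 tool; the function name and the example numbers are hypothetical) of a standard two-proportion z-test that checks whether variant B's conversion rate really beats variant A's, or whether the gap could just be noise:

```python
import math

def conversion_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: compare conversion rates of variants A and B.

    conv_a, conv_b -- number of conversions (e.g., inbound calls) per variant
    n_a, n_b       -- number of impressions shown per variant
    Returns (rate_a, rate_b, z, two_tailed_p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value via the standard normal CDF (math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical example: ad A drew 120 calls from 2,000 impressions,
# ad B drew 165 calls from 2,000 impressions.
rate_a, rate_b, z, p_val = conversion_significance(120, 2000, 165, 2000)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  z = {z:.2f}  p = {p_val:.4f}")
```

With these made-up numbers the p-value comes in well under the conventional 0.05 threshold, so B's lift is unlikely to be chance; with a smaller sample, the same percentage gap might not reach significance, which is exactly why an adequate sample size matters.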
Thanks to the flexible nature of call marketing campaigns, once you’ve confirmed the validity of your results you can immediately apply them toward changes that could boost ROI.
“A/B testing, done consistently, can improve your bottom line substantially,” advises Cameron Chapman in a recent Kissmetrics blog. “When you figure that one variation might work two, three, or even four times better than another, the idea that you would conduct promotions without testing starts to seem a bit ludicrous.”
Dial800’s CallView360 product measures the performance of various call marketing campaigns in real time. Ask us more about how incorporating A/B testing into your campaigns can drive ROI.