A/B Testing for Email: What You Need to Get Started

Email is still one of marketers’ strongest tools, generating $38 for every $1 spent. With returns that high, even small improvements to your strategy can make a huge difference in your revenue.

A/B testing allows marketers to test different variables in emails to see what performs best. This happens by sending two or more different emails to random, separate portions of your audience. After a set period of time, you or your ESP determines a winner based on performance and sends the winning version to the rest of your audience.
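The split-and-send flow described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not how Hive or any particular ESP implements it; the function name and the 20% test fraction are assumptions for the example:

```python
import random

def split_for_ab_test(subscribers, test_fraction=0.2, seed=None):
    """Randomly split a subscriber list into two equal test groups
    plus a holdout that later receives the winning variant."""
    pool = list(subscribers)
    random.Random(seed).shuffle(pool)  # randomize so the groups are comparable
    test_size = int(len(pool) * test_fraction)
    half = test_size // 2
    group_a = pool[:half]
    group_b = pool[half:half * 2]
    holdout = pool[half * 2:]  # receives the winner after the test window
    return group_a, group_b, holdout

# e.g. with 10,000 subscribers and a 20% test: 1,000 per variant, 8,000 held out
a, b, rest = split_for_ab_test(range(10_000), test_fraction=0.2, seed=42)
```

The shuffle is what makes the groups random rather than, say, alphabetical slices of your list, which could accidentally correlate with signup date or geography.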

39% of brands never or rarely test their emails, but A/B testing is a great way to see what drives your email engagement. With A/B testing now available in Hive, we decided it was time to break down the steps you’ll need to take to start testing your email marketing the right way.

Related: You asked, we listened: A/B Test Campaigns now on Hive

Decide what metrics are important to you

You might feel ready to jump in and start testing your emails, but let’s back up for just a second. Before deciding what content to test, you’ll need to determine what metrics you’ll be looking at.

Think: what problem are you trying to solve? Are you not getting as many opens as you’d hoped? Or are people not clicking through?

The problem you’re trying to solve will help you decide what content to test. If your problem is open rates, you might test your subject line. Or you might test your CTA to see how it affects click-through rates or conversion rates once subscribers land on your website.

Decide what variable to test

The sky’s the limit in terms of what you can test in email, but it’s important that you only test one variable at a time. If you send emails where both the “from” name and preview text are different, you won’t know which change to credit for any lift in open rates.

Need some inspo to start your A/B testing journey? Here are a few email campaign A/B test ideas to get you inspired:

  • Subject line
  • CTA
  • Headline
  • Images
  • Preview text
  • All text vs. some images
  • “From” name (e.g. your brand vs. a person’s name)
  • Personalization (e.g. using someone’s name vs. keeping the greeting generic)
  • Layout (e.g. two columns vs. one column)
  • Body text
  • Offer (e.g. % discount vs. free shipping)
  • Time sent
  • Length

Each of these variables has the potential to affect results differently, whether that’s the number of people clicking a CTA button or the amount of revenue earned.

Decide who you’re sending to

The larger your sample size, the more accurate your results will be. To make sure your findings are actually meaningful, best practice is to send to a minimum of 1,000 subscribers per variant, though some ESPs will recommend at least 5,000 per variant.

How many people you can send to will ultimately depend on the size of your list. If you have fewer than 10,000 subscribers, you may need to run several tests on the same variable to make sure the results you see are accurate.
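Where do numbers like 1,000 per variant come from? A standard two-proportion sample-size formula gives a rough answer. The sketch below is a textbook normal-approximation calculation, not a number any specific ESP uses; the 20% and 25% open rates are illustrative assumptions:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Approximate subscribers needed per variant to detect a change
    from rate p1 to rate p2 (two-sided z-test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Detecting a lift from a 20% to a 25% open rate:
n = sample_size_per_variant(0.20, 0.25)  # roughly 1,100 per variant
```

Note how the required size balloons as the difference you want to detect shrinks: spotting a 1-point lift takes far more subscribers than spotting a 5-point one, which is why small lists often need repeated tests.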

The other trick to accurate results is making sure your audience is as random as possible. The only exception to this rule is if you are testing how specific segments, like your buyer personas, respond to different content.

Decide what good results look like to you

Amazing, you’ve run your test! Now how do you know if the results are notable? At this point, you’ll have to decide what a significant difference looks like to you.

If you’re seeing click rates of 10% and 11%, chances are the difference is just statistical noise: not enough to tell you which variant your audience actually preferred.
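If you want more than gut feel, a two-proportion z-test puts a number on whether a gap like 10% vs. 11% is meaningful. This is a minimal sketch using only the standard library; the send counts are illustrative assumptions:

```python
from math import sqrt, erfc

def click_rate_pvalue(clicks_a, sent_a, clicks_b, sent_b):
    """Two-sided p-value for the difference between two click rates
    (pooled two-proportion z-test, normal approximation)."""
    p_a, p_b = clicks_a / sent_a, clicks_b / sent_b
    pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    return erfc(abs(z) / sqrt(2))  # two-sided tail probability

# 10% vs. 11% click rates on 1,000 sends per variant:
p = click_rate_pvalue(100, 1000, 110, 1000)
```

Here the p-value comes out far above the usual 0.05 threshold, so a 1-point gap at this sample size could easily be chance; either variant might win a rerun.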

You’ll also want to dig deeper than just the usual engagement metrics. For example, if 10% more people opened your top-performing email, did you see that engagement trickle down to other benchmarks proportionally? Did 10% more people engage on your website? Or click through? If more subscribers opened your best email but fewer clicked through, it may indicate that your subject line felt misleading.

Takeaways

Remember the rule of thumb, so important we’ll tell you twice: only test one variable at a time! This is the only way to ensure you know what’s driving your results. That means controlling your audience and other factors like timing too (unless that’s what you’re testing).

Test as large an audience as possible for the most accurate results, and test often. A/B testing on a regular basis will give you more information so you know what content resonates with subscribers and drives your email marketing. Ready to get started?