
Email and A/B testing: how do you get started?

When creating a newsletter, you've probably thought about what works best. Which subject line gets opened the most? What is the best time to send your newsletter? There is no one-size-fits-all answer to these questions: what works for one target group may not work for yours. But what does work is testing! So optimize your emails with A/B testing.

How large will your test group be?

To set up a realistic A/B test, the size of your test group is important. Suppose you send an email to two people, variant 1 to person 1 and variant 2 to person 2. Variant 1 is opened, variant 2 is not. Can you then say that variant 1 performed better? Or was it just a coincidence?

The answer is related to the size of your test group. It must be large enough to yield a so-called significant result. Unlike A/B testing on a web page, when sending email you usually cannot increase the size of your test group by stretching the test period. So how do you determine the size of your test group?

With a test group size calculator! Such a calculator immediately shows that the size of your test group depends on the total number of recipients in your list or segment. Mailchimp, one of the largest email platforms in the world, recommends a lower limit of at least 5,000 contacts. So when you are working with a smaller segment or list, A/B testing your emails doesn't make much sense.
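No calculator at hand? Under the hood, such tools use a standard sample size formula for comparing two proportions. Below is a minimal Python sketch; the function name and the example numbers (a 20% baseline open rate and a 3-percentage-point lift worth detecting) are hypothetical, not taken from any specific platform.

```python
from math import ceil, sqrt
from statistics import NormalDist

def test_group_size(baseline_rate, min_detectable_lift,
                    alpha=0.05, power=0.80):
    """Recipients needed per variant to detect an absolute lift in open rate,
    using the standard two-proportion sample size formula (normal approximation)."""
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_lift
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return ceil(n)

# e.g. 20% baseline open rate, detect a 3-point lift:
print(test_group_size(0.20, 0.03))  # roughly 2,900 recipients per variant
```

Note how quickly the numbers grow: reliably detecting even a modest lift takes thousands of recipients per variant, which is exactly why A/B testing a small list rarely pays off.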

Explanation of significance

"In statistics, one speaks of a significant outcome if it substantially supports the assumption that an observed effect was caused by something other than chance." (https://www.cultureelwoordenboek.nl/)

What will be your hypothesis?

For example: "Using my customer's first name in the subject line increases the likelihood that my customer will open the newsletter."

In this case, you test a normal subject line against a personalized subject line. You measure the result of this test by the open rate. If significantly more people open the personalized email, your hypothesis is confirmed, and it's worth experimenting with personalized subject lines more often.
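But when is it "significantly more"? A common way to check is a two-proportion z-test on the open counts of both variants. Here is a minimal sketch in Python, with made-up numbers; many email platforms run a comparable check for you behind the scenes.

```python
from math import sqrt
from statistics import NormalDist

def open_rate_significant(opens_a, sent_a, opens_b, sent_b, alpha=0.05):
    """Two-proportion z-test: is the difference in open rates significant?"""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)   # pooled open rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided p-value
    return p_value < alpha, p_value

# hypothetical test: 2,500 recipients per variant
significant, p = open_rate_significant(500, 2500, 570, 2500)
print(significant, round(p, 3))  # True 0.016 -> the lift is unlikely to be chance
```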

How do you prioritize A/B testing?

Email A/B testing provides incredibly interesting insights. Got the hang of it? You're bound to get all sorts of ideas about what else you can optimize. But how do you decide which test to start with?

The ICE model makes prioritizing your A/B tests a lot easier. Here, you assess a test on three aspects.

  1. Impact
    How much impact does the test have? Is the test high in the email marketing funnel? Then it is likely to have a higher impact than a test that is lower in the funnel.
  2. Confidence
    How much confidence do you have that the test will have a positive impact? Has research shown that a positive result is possible? Have you ever seen positive results from a similar test in other marketing channels?
  3. Ease
    How easy is it to set up the test? Adjusting the send time is easier than designing a new banner.

For each aspect, give a score from 1 to 10. The more Impact you expect, the more Confidence you have, and the greater the Ease of setting up the test, the higher the score. Add it all up and start with the test that scores the highest.
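In practice, that's just a sum and a sort. A tiny sketch, with hypothetical test ideas and scores:

```python
# Hypothetical backlog of test ideas, each scored 1-10 per ICE aspect.
ideas = [
    {"test": "Personalized subject line", "impact": 8, "confidence": 7, "ease": 9},
    {"test": "New banner design",         "impact": 6, "confidence": 5, "ease": 3},
    {"test": "Different send time",       "impact": 5, "confidence": 6, "ease": 10},
]

def ice_score(idea):
    return idea["impact"] + idea["confidence"] + idea["ease"]

# Start with the test that scores the highest.
for idea in sorted(ideas, key=ice_score, reverse=True):
    print(f"{ice_score(idea):2d}  {idea['test']}")
```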

How long should the A/B test run?

When you send a newsletter, results don't come in right away; they only start once people open your newsletter. So think carefully about how long you want to run an A/B test.

Do you choose the winner based on realized sales? Then also give people the opportunity to make a purchase. Your email marketing platform usually shows how long after a send opens and clicks keep coming in; base the duration of your test on that. Is your newsletter time-bound because of a discount during a promotional period? Then make sure you schedule your A/B test early enough.

How do you determine the email A/B testing winner?

You can have a winner chosen manually or automatically. Suppose you test with 20% of the recipients: half receive variant 1, the other half receive variant 2. Once the test duration ends, the system picks the winner, and the winning email is sent to the remaining 80% of your recipients. Win-win! You learn from it and automatically send the best-performing email to the majority of your recipients.
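Most email platforms handle this split for you, but the mechanics are simple. A sketch of the 20/80 split, assuming your recipients are simply in a Python list:

```python
import random

def split_for_ab_test(recipients, test_fraction=0.20, seed=42):
    """Randomly split recipients: the test group is divided over two variants,
    the remainder later receives the winning email."""
    shuffled = list(recipients)
    random.Random(seed).shuffle(shuffled)  # fixed seed makes the split reproducible
    n_test = int(len(shuffled) * test_fraction)
    variant_1 = shuffled[:n_test // 2]
    variant_2 = shuffled[n_test // 2:n_test]
    remainder = shuffled[n_test:]
    return variant_1, variant_2, remainder

# e.g. 10,000 addresses: 1,000 per variant, 8,000 receive the winner later
v1, v2, rest = split_for_ab_test(range(10_000))
print(len(v1), len(v2), len(rest))  # 1000 1000 8000
```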

What do I do with the results?

When you get serious about A/B testing, it's time to record your tests and results. That way, you can look back on previous results and take learnings into subsequent campaigns. But keep the well-known saying in mind: past results do not guarantee future results. You should always keep optimizing.

Do you have test examples?

Want to start optimizing your email campaigns, but don't know what to test yet? This blog wouldn't be complete without a short list of ideas:

Idea | Result
Personalized subject line | Open rate
Urgency in the subject line | Open rate
Send time | Open rate
Adding social proof in body copy | Click-through rate
Body copy length (descriptive vs. bullets) | Click-through rate
Multiple buttons vs. one button and text links | Click-through rate
Show trust elements on landing page | Conversion rate

Get in touch

Want to A/B test the impact of your email content? Our content marketer will be happy to help you create it.


Working together?

I'm Roel, founder of Tomahawk. I am happy to help you from our office in Nijmegen.