Email Marketing: 6 bad habits to avoid when testing emails


In my experience helping our Research Partners with email campaigns, I’ve discovered that when it comes to testing, there is no one-size-fits-all approach.

Your email campaigns are fundamentally different from landing pages or any other element of your marketing mix and sales funnel.

Consequently, your approach to testing them will also be different.

They have different goals, elements, best practices and bad habits to avoid.

In today’s MarketingExperiments Blog post, I wanted to share six common bad habits to avoid when testing email campaigns.


Bad Habit #1. Not knowing how your email list is being split

This is one of the most common and most avoidable mistakes I see: marketing teams test with only a limited understanding of how their email list is being split into test cells.

Or worse, they don’t know how their email platform splits the list at all. This is troublesome because it can easily introduce sampling errors that skew your results.

So first, check the platform you’re using to learn how its list-splitting algorithm works.

If there’s no specific information about how the email platform allocates test cells, consider running a dual control send – the same email delivered to two supposedly identical cells – to gauge how much your data may vary on its own.

Also, make sure recipients are allocated fairly across test cells, especially if your database contains information indicating recipients have varying degrees of motivation.

The reason: unlike A/B split testing on a landing page, where traffic comes from multiple sources and is split at random as it arrives, an email list is a finite traffic source, so any imbalance in the split is baked into your results.

What if I’m splitting lists myself, you ask?

If that’s the case, try to do so as randomly as possible.
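
If you’d like to see what that might look like in practice, here’s a minimal Python sketch of a shuffle-and-deal split. It assumes your list is a simple sequence of recipient records; the addresses, cell count and seed are all hypothetical:

  # A minimal sketch of a fair random split, assuming the list is a plain
  # sequence of recipient records (email addresses, database rows, etc.).
  import random

  def split_list(recipients, num_cells, seed=42):
      """Shuffle the list, then deal recipients into equal-sized cells."""
      pool = list(recipients)
      random.Random(seed).shuffle(pool)  # a fixed seed makes the split reproducible
      cells = [[] for _ in range(num_cells)]
      for i, recipient in enumerate(pool):
          cells[i % num_cells].append(recipient)  # round-robin after the shuffle
      return cells

  control, treatment = split_list(["a@example.com", "b@example.com",
                                   "c@example.com", "d@example.com"], 2)

If your database flags recipients by motivation level, you could apply the same function within each segment – a stratified split – so every cell gets a fair share of each group.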


Bad Habit #2. Drawing conclusions after only one test

Judging a test by a single email drop is a mistake, even if your testing tool says your results have reached statistical significance.

I recommend testing your treatments over multiple email drops to ensure you are seeing some form of consistency in your results before making a business decision.

Also, one common question I get is which method of analysis should be used to interpret results across those drops.

In this case, I recommend recording the data as separate points in time instead of lumping all of the data together.

The reason is that these fixed points give you a better picture of behavior across sends, and the approach is likely more accurate because it accounts for variability over time.
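
To make that concrete, here’s a hypothetical Python sketch that records each drop as its own point in time and checks whether the same cell keeps winning. All of the numbers are invented for illustration:

  # Each drop is stored separately as (conversions, sends) per cell,
  # rather than pooled into one grand total.
  drops = [
      {"control": (120, 5000), "treatment": (150, 5000)},  # drop 1
      {"control": (110, 5000), "treatment": (140, 5000)},  # drop 2
      {"control": (130, 5000), "treatment": (135, 5000)},  # drop 3
  ]

  for i, drop in enumerate(drops, start=1):
      rates = {cell: conv / sends for cell, (conv, sends) in drop.items()}
      leader = max(rates, key=rates.get)
      print(f"Drop {i}: control {rates['control']:.2%}, "
            f"treatment {rates['treatment']:.2%} -> {leader} leads")

  # Only call the test once the same cell leads consistently across drops.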


Bad Habit #3. Random send times

The results of an email drop represent a single point in time, whereas landing page testing draws on a continuous stream of traffic to pages.

Consequently, if you are not consistent in the delivery of your email drops – time of day, day of week, etc. – this inconsistency will impact your ability to interpret results accurately.

Here’s why …

If you think about when you go through the emails in your own inbox, it’s likely you do so at random. So, the only way to account for that randomness is by sending emails on a consistent schedule.

Naturally, you can adjust that send schedule to test your way into discovering the ideal time to send your customers an email, but keeping the frequency constant is key.


Bad Habit #4. Not having a clear-cut goal in your testing

This is another common, avoidable mistake I see – lacking a clear test hypothesis.

Email is one of the most rigid channels. The general conversion path of an email looks something like this:

  1. You send an email to your list
  2. The customer receives your email in their inbox (unless it gets caught in a spam filter)
  3. They identify the sender, skim the subject line and choose to open or delete the email
  4. If they choose to open the email, hopefully they engage with the content
  5. If all goes to plan, they convert after engaging with the content

But even with the path clearly laid out, you still can’t go anywhere without a sense of direction.

That’s why you want to make sure you have a good hypothesis that is clear and testable right from the start – for example, “If we emphasize a specific benefit in the subject line, then open rates will increase” – to keep your testing efforts strategically focused.


Bad Habit #5. Inconsistent key performance indicators

Ultimately, conversion (or revenue) of the treatment cell should be used to determine the winner. Whatever your specific goals, the point here is to choose your key performance indicator up front and stay consistent as you evaluate the results.

Also, I would caution against judging test results solely on clickthrough or open rates, which tend to be the primary drivers in email tests. Secondary metrics can tell a very interesting story about customer behavior if you’re willing to look at the data from all angles.
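
As a simple illustration, the hypothetical Python sketch below reports open rate, clickthrough rate and conversion rate for each cell, but picks the winner on the primary KPI (conversion) alone. The numbers are invented:

  cells = {
      "control":   {"sends": 10000, "opens": 2100, "clicks": 420, "conversions": 95},
      "treatment": {"sends": 10000, "opens": 2400, "clicks": 390, "conversions": 110},
  }

  for name, m in cells.items():
      print(f"{name}: open {m['opens'] / m['sends']:.1%}, "
            f"clickthrough {m['clicks'] / m['sends']:.1%}, "
            f"conversion {m['conversions'] / m['sends']:.1%}")

  # The winner is decided by the primary KPI, not by opens or clicks.
  winner = max(cells, key=lambda c: cells[c]["conversions"] / cells[c]["sends"])
  print("Winner on conversion rate:", winner)

Notice that in this made-up example the treatment wins on conversion even though its clickthrough rate is lower – exactly the kind of story the secondary metrics can tell.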


Bad Habit #6. Not setting a standard decay time

So, what is decay time exactly?

To keep things simple, decay time is just a set window of time following an email drop during which an activity – an open, a click, a conversion – is counted toward that drop.

If you are judging multiple drops, the data for each drop should follow a standard decay guideline that everyone on your team understands and agrees with. We generally suggest a week (seven days) as enough time to call the performance of a typical email drop.

One caveat worth mentioning here: there is no magic bullet with email decay time.

The goals and objectives for campaigns vary by industry, so there are no universal standards in place.

Your organization should come to a consensus about a standard decay time to judge campaign performance before the campaign gets underway.
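
Once that consensus exists, applying it is straightforward. Here’s a minimal Python sketch of a seven-day decay window, assuming you have the drop’s send time and a list of event timestamps (all values below are hypothetical):

  from datetime import datetime, timedelta

  DECAY = timedelta(days=7)  # the window your team has agreed on

  def events_within_decay(send_time, event_times, decay=DECAY):
      """Keep only the events (opens, clicks, conversions) inside the window."""
      cutoff = send_time + decay
      return [t for t in event_times if send_time <= t <= cutoff]

  sent = datetime(2014, 3, 3, 9, 0)
  events = [sent + timedelta(days=d) for d in (0.1, 2, 6, 9)]
  print(len(events_within_decay(sent, events)))  # -> 3; the day-9 event is excluded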


In email testing, kicking bad habits starts with a moment of clarity

There’s one more thing I wanted to mention …

Before embarking on your next email testing cycle, ensure that your site analytics platform and email platform are as integrated as possible.

Integration (and a little quality assurance won’t hurt either) can help provide you with an accurate picture of your entire funnel from email to conversion.

Feel free to share any pitfalls you may have experienced in your email testing efforts in the comments below.


