Posts Tagged ‘email optimization’

Email Marketing: 7 (more) testing opportunities to generate big wins on your next email test [Part 2]

May 2nd, 2016

Does your email audience prefer short or long emails? How about images versus GIFs?

If you don’t know the answer to either of these questions, that’s OK. All you need is an A/B email test.

Testing allows us to better understand our customers and determine how to engage them more effectively.

Last week, we detailed nine experiment ideas for you to try on your next campaign. If those weren’t your style, we have seven more for you — for a total of 16 testing opportunities.

Today, we’ll be reviewing opportunities in your body messaging, calls-to-action and design.

Email Body Messaging Testing

Testing Opportunity #10. Messaging tone

In this test, from the Web clinic, “Email Copywriting Clinic: Live, on-the-spot analysis of how to improve real-world email campaigns,” researchers used two treatments to increase total lead inquiries from visitors who abandoned the free trial sign-up process.

The first treatment was designed based on the hypothesis that visitors did not convert because the copy didn’t engage them enough, so it took a direct response tone. The second treatment was based on the hypothesis that visitors experience high levels of anxiety over potential high-pressure salespeople or spam phone calls. This treatment took a more “customer service”-oriented tone.

Read more…

Email Marketing: 9 testing opportunities to generate big wins on your next email test [Part 1]

April 28th, 2016

Email is a great medium for testing. It’s low cost and typically requires fewer resources than website testing. It’s also near the beginning of your funnel, where you can impact a large portion of your customer base.

Sometimes it can be hard to think of new testing strategies, so we’ve pulled from 20 years of research and testing to provide you with a launching pad of ideas to help create your next test.

In this post and next Monday’s, we’re going to review 16 testing opportunities across seven email campaign elements.

To start you out, let’s look at nine opportunities that don’t even require you to change the copy in your next email.


Subject Line Testing

Testing Opportunity #1. The sequence of your message

Recipients of your email might give your subject line just a few words to draw them in, so the order of your message plays an important role.

In the MarketingExperiments Web clinic “The Power of the Properly Sequenced Subject Line: Improve email performance by using the right words, in the right order,” the team reviewed several tests that demonstrate the importance of thought sequence in your subject lines.

Try testing point-first messaging: start with what the recipient will get out of your message and the email itself.

Read more…

Online Testing: 5 steps to launching tests and being your own teacher

April 10th, 2014

Testing is the marketer’s ultimate tool. It allows us to not just guess what coulda, woulda, shoulda worked, but to know what actually works. But more than that, it gives us the power to choose what we want to know about our customers.

“As a tester, you get to be your own teacher, if you will, and pick tests that make you want to learn. And structure tests that give you the knowledge you’re trying to gain,” said Benjamin Filip, Senior Manager of Data Sciences, MECLABS.

So what steps do we take if we want to be our own teacher?

While conducting interviews about the live test run at MarketingSherpa Email Summit 2014, I recently had the chance to discuss testing processes with Ben, as well as Lauren Pitchford, Optimization Manager, and Steve Beger, Senior Development Manager, both also of MECLABS. The three of them worked together with live test sponsor BlueHornet to plan, design and execute the A/B split test they validated in less than 24 hours.

Read on to learn what they had to share about the testing process that marketers can take away from this email live test. We’ll break down each of the steps of the live test and help you apply them to your own testing efforts.


Step #1. Uncover gaps in customer insights and behavior

As Austin McCraw, Senior Director of Content Production, MECLABS, said at Email Summit, “We all have gaps in our customer theory. Which gap do we want to fill? What do we want to learn about our customer?”

What do you wish you knew about your customers? Do they prefer letter-style emails or design-heavy promotional emails? Do they prefer a certain day of the week to receive emails? Or time of day? Does one valuable incentive incite more engagement than three smaller incentives of the same combined value?

Think about what you know about your customers, and then think about what knowledge could help you better market to them and their needs and wants.


Step #2. Craft possible research questions and hypotheses

When forming research questions and hypotheses, Ben said, “You have to have some background info. A hypothesis is an educated guess, it’s not just completely out of the blue.”

Take a look at your past data to interpret what customers are doing in your emails or on your webpages.

Lauren wrote a great post on what makes a good hypothesis, so I won’t dive too deeply here. Basically, your hypothesis needs three parts:

  • Presumed problem
  • Proposed solution
  • Anticipated result


Step #3. Brainstorm ways to answer those questions

While brainstorming will start with you and your group, don’t stop there. At MECLABS, we use peer review sessions (PRS) to receive feedback on anything from test ideas and wireframes, to value proposition development and results analysis.

“As a scientist or a tester, you have a tendency to put blinders on and you test similar things or the same things over and over. You don’t see problems,” Ben said.

Having potential problems pointed out is certainly not what any marketer wants to hear, but it’s not a reason to skip this part of the process.

“That’s why some people don’t like to do PRS, but it’s better to find out earlier than to present it to [decision-makers] who stare at you blinking, thinking, ‘What?’” Lauren explained.

However, peer review is about more than discovering problems; it’s also about discovering great ideas you might otherwise miss.

“It’s very easy for us to fall into our own ideas. One thing for testers, there is the risk of thinking that something that is so important to you is the most important thing. It might bother you that this font is hard to read, but I don’t read anyway because I’m a math guy, so I just want to see the pretty pictures. So I’m going to sit there and optimize pictures all day long. That’s going to be my great idea. So unless you listen to other people, you’re not going to get all the great ideas,” Ben said.

Read more…

Email Marketing: 6 bad habits to avoid when testing emails

November 11th, 2013

In my experience helping our Research Partners with email campaigns, I’ve discovered that when it comes to testing, it’s not a one-size-fits-all activity.

Your email campaigns are fundamentally different from landing pages or any other element of your marketing mix and sales funnel.

Consequently, your approach to testing them will also be different.

They have different goals, elements, best practices and bad habits to avoid.

In today’s MarketingExperiments Blog post, I wanted to share six common bad habits to avoid when testing email campaigns.


Bad Habit #1. Not knowing how your email list is being split

This is a common mistake, and it’s one of the most avoidable: marketing teams will test with only a limited understanding of how their email list is being split into test cells.

Or worse, they don’t know how their email platform splits lists at all. This is troublesome because it can easily cause sampling errors that will skew your results.

So first, check the platform you’re using to learn how the list splitting algorithm works.

If there isn’t any specific information about how the email platform is allocating test cells, consider testing a dual control email send to gain a better understanding of how much your data may vary.

Also, try to make sure recipients are allocated fairly across test cells, especially if your database contains information indicating that recipients have varying degrees of motivation.

The reason is that, unlike landing page A/B split testing, where traffic comes from multiple sources and is split at random, an email list is a finite traffic source.

What if I’m splitting lists myself, you ask?

If that’s the case, try to make the split as randomly as possible.
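
If you are doing the split yourself, a reproducible random shuffle is usually enough to avoid the most obvious sampling errors. Here is a minimal sketch in Python; the function name and the sample addresses are hypothetical, not part of any email platform’s API.

```python
import random

def split_list(recipients, n_cells, seed=42):
    """Shuffle a recipient list and deal it into evenly sized test cells.

    Seeding the shuffle makes the split reproducible for auditing.
    """
    pool = list(recipients)
    random.Random(seed).shuffle(pool)
    # Deal round-robin so cell sizes differ by at most one recipient.
    return [pool[i::n_cells] for i in range(n_cells)]

# Example: a simple 50/50 control vs. treatment split
control, treatment = split_list(
    ["a@example.com", "b@example.com", "c@example.com", "d@example.com"], 2)
```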


Bad Habit #2. Drawing conclusions after only one test

Judging a test by a single email drop is a mistake, even if your testing tool says your results have reached statistical significance.

I recommend testing your treatments over multiple email drops to ensure you are seeing some form of consistency in your results before making a business decision.

Also, one common question I get about data analysis is which method of analysis to use when interpreting your results.

In this case, I recommend recording the data as separate points in time instead of lumping all of the data together.

The reason is that these fixed points give you a better picture of behavior across sends, and this approach also accounts for variability over time.
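
As an illustration of what “separate points in time” can look like in practice, here is a hedged sketch in Python. The drop dates and conversion rates are made-up placeholder numbers, and the consistency check (did the treatment win on every drop?) is one simple way to sanity-check results before calling a winner.

```python
# Hypothetical per-drop conversion rates, recorded as separate points in
# time rather than pooled into one lump of data.
drops = [
    {"drop": "2013-10-01", "control_cr": 0.021, "treatment_cr": 0.027},
    {"drop": "2013-10-08", "control_cr": 0.019, "treatment_cr": 0.025},
    {"drop": "2013-10-15", "control_cr": 0.023, "treatment_cr": 0.024},
]

# Before making a business decision, check that the treatment wins
# consistently across drops, not just in the pooled total.
wins = sum(d["treatment_cr"] > d["control_cr"] for d in drops)
print(f"Treatment won {wins} of {len(drops)} drops")
for d in drops:
    lift = (d["treatment_cr"] - d["control_cr"]) / d["control_cr"]
    print(f"{d['drop']}: relative lift {lift:+.1%}")
```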


Bad Habit #3. Random send times

The results of an email drop represent a single point in time, unlike landing page testing, which has a continuous stream of traffic to pages.

Consequently, if you are not consistent in the delivery of your email drops – time of day, day of week, etc. – this inconsistency will impact your ability to interpret results accurately.

Here’s why …

If you think about when you go through the emails in your own inbox, it’s likely you do so at random. So, the only way to account for that randomness is by sending emails on a consistent schedule.

Of course, you can adjust that send schedule to test your way into discovering the ideal time to send your customers an email, but keeping the frequency consistent is key.


Bad Habit #4. Not having a clear-cut goal in your testing

This is another common mistake I see that’s avoidable – lacking a clear test hypothesis.

Email is one of the most constrained channels. The general conversion path of an email looks something like this:

  1. You send an email to your list
  2. The customer receives your email in their inbox (unless it gets caught in a spam filter)
  3. They identify the sender, skim the subject line and choose to open or delete the email
  4. If they choose to open the email, hopefully they engage with the content
  5. If all goes to plan after engaging the content, they convert

But even with the path clearly laid out, you still can’t go anywhere without a sense of direction.

That’s why you want to make sure you have a good hypothesis that is clear and testable right from the start to help keep your testing efforts strategic in focus.


Bad Habit #5. Inconsistent key performance indicators

Ultimately, conversion (or revenue) of the treatment cell should be used to determine the winner. Whatever your goals, the point here is to make sure you are consistent as you evaluate the results.

Also, I would caution against judging test results solely on clickthrough or open rates, which tend to be the primary drivers in email tests. Secondary metrics can tell a very interesting story about customer behavior if you’re willing to look at the data from all angles.
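
To make that concrete, here is a small sketch showing a primary KPI (conversion rate) computed alongside secondary metrics from one test cell. The counts and field names are purely illustrative, and some teams define clickthrough rate per open rather than per send, so adjust the denominators to match your own reporting conventions.

```python
# Hypothetical raw counts for one test cell; the field names are illustrative.
cell = {"sent": 50_000, "opens": 9_500, "clicks": 1_200, "conversions": 310}

open_rate = cell["opens"] / cell["sent"]               # secondary metric
clickthrough_rate = cell["clicks"] / cell["sent"]      # secondary metric
conversion_rate = cell["conversions"] / cell["sent"]   # primary KPI

print(f"Open rate: {open_rate:.2%}")
print(f"Clickthrough rate: {clickthrough_rate:.2%}")
print(f"Conversion rate (primary KPI): {conversion_rate:.2%}")
```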


Bad Habit #6. Not setting a standard decay time

So, what is decay time exactly?

To keep things simple, decay time is just a set window of time after an email drop within which an activity (an open, a click, etc.) is counted.

If you are judging multiple drops, the data for each drop should follow a standard decay guideline that everyone on your team understands and agrees with. We generally suggest a week (seven days) as enough time to call the performance of a typical email drop.

One caveat worth mentioning is that there is no magic bullet with email decay time.

The goals and objectives for campaigns vary by industry, so there are no universal standards in place.

Your organization should come to a consensus about a standard decay time to judge campaign performance before the campaign gets underway.
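
As a rough illustration, the sketch below applies a seven-day decay window so that only activity inside the agreed window counts toward a drop. The function and the timestamps are hypothetical; most email platforms report this for you, but the principle is the same.

```python
from datetime import datetime, timedelta

DECAY_WINDOW = timedelta(days=7)  # the standard decay time your team agrees on

def events_within_window(drop_time, event_times, window=DECAY_WINDOW):
    """Count only the opens/clicks that fall inside the decay window,
    so every drop is judged on the same terms."""
    return sum(drop_time <= t <= drop_time + window for t in event_times)

# Hypothetical example
drop = datetime(2013, 11, 4, 9, 0)
opens = [drop + timedelta(hours=2),
         drop + timedelta(days=3),
         drop + timedelta(days=10)]  # the last open falls outside the window
print(events_within_window(drop, opens))  # -> 2
```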

Read more…

Email Marketing: Promotional vs. letter-style test increases conversion 181%

October 14th, 2013

At the heart of email marketing campaigns, it often seems as if a tug-of-war is being waged.

On one side, you have grabbing attention as a tactic; on the other, you have starting a conversation.

But, which of these is truly effective?

Let’s take a look at how the MECLABS research team tested a promotional-style email design against a letter-style and what we can learn from the results.

Before we get started, here’s a quick review of the research notes for a little background on the experiment.

Background: A large international media company focusing on increasing subscription rates.

Goal: To increase the number of conversions based on the value proposition conveyed through the email.

Primary Research Question: Which email will generate the highest conversion rate?

Approach: A/B multifactor split test




The research team noted that the control featured popular design principles to create balance and hierarchy on the page.

The promotional-style email also featured heavy use of images and graphics to catch the readers’ attention and multiple call-to-action buttons for increased points of entry.




In the treatment, a letter-style email was designed to look and feel more like a personal letter. The design limited the use of graphics and images and featured a single call-to-action button.




What you need to know

By limiting the number of graphics and focusing on engaging the customer in a conversation, the treatment outperformed the control by 181%. To learn more about why the letter-style email beat the promotional-style design, you can watch the free on-demand MarketingExperiments Web clinic replay of “Are Letter-Style Emails Still Effective?”

Read more…

Email Marketing: Subject line test increases open rate by 10%

August 12th, 2013

Every year, MarketingExperiments’ sister brand MarketingSherpa holds its MarketingSherpa Email Awards to recognize marketers who designed email campaigns that exceeded expectations.

In today’s MarketingExperiments Blog post, I wanted to share a simple subject line test from a previous gold medal winner you can use to aid your email marketing efforts.


Winning back hearts and minds one email at a time

Travelocity identified a segment of existing email subscribers who had not booked for over a year and wanted to win back that segment’s business.

The team worked with StrongMail to develop an email campaign strategy to generate engagement and drive conversion from the lapsed set of subscribers.

The StrongMail team started evaluating previous campaigns and testing offers.

One of the elements StrongMail used to test those offers was a subject line treatment offering a 10% discount incentive to the lapsed segment.

Here were the two subject lines:

Subject Line A: “Save an additional 10% for a limited time only.” (Shorter subject line with generic offer.)

Subject Line B: “As our valued customer, get an extra 10% off for a limited time only.” (Longer subject line with the “valued customer” message.)



Subject line B outperformed subject line A by a solid 10%.


What’s also interesting here is that when the 10% incentive was tested against a 15% discount in a second round of testing, the increased incentive did not yield a significant difference in open rates.
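
If you want to sanity-check whether a difference in open rates like this is statistically significant, a two-proportion z-test is one common approach. The sketch below uses hypothetical send and open counts, since the actual Travelocity volumes are not published in this excerpt.

```python
from math import sqrt

def two_proportion_z(opens_a, sent_a, opens_b, sent_b):
    """Two-proportion z-test for a difference in open rates."""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    return (p_b - p_a) / se

# Hypothetical counts; the real volumes behind this test are not shown here.
z = two_proportion_z(opens_a=4_000, sent_a=25_000, opens_b=4_400, sent_b=25_000)
print(f"z = {z:.2f}")  # roughly, |z| > 1.96 corresponds to 95% confidence
```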


What you need to know

A successful email marketing campaign requires more than identifying an unresponsive list. It also involves careful research of what has worked and not worked in the past and testing new approaches to engage a slumbering list.

The Travelocity and StrongMail teams were able to re-engage a significant percentage of those lapsed customers to generate incremental revenue that would likely have been lost to competitors.

Read more…