Posts Tagged ‘a/b testing’

Email Preheaders Tested: The surprising sensitivity of a single line of text

January 8th, 2015

Earlier this year I reached out to a friend of mine who manages training at Salesforce Marketing Cloud (previously known as ExactTarget) to get a sense of what questions everyday marketers were asking about email.

“Preheaders,” was her quick response. Specifically on “using a preheader, not using a preheader — what should be in the preheader.”

Just in case you’re not familiar with a preheader, it is the line of preview text that appears below the subject line in mobile email apps and even in the Outlook preview pane.

 

Focusing on that topic, I dug into our research database to do some looking around.

Surprisingly, I didn’t find as many tests as I usually do. Preheaders have only recently begun to attract marketers’ attention. Additionally, when I searched the Internet, I could not find a single statistically significant experiment published on the subject.

I decided to oversee some tests myself, hoping to solidify the patterns I had begun noticing in the database.

This is what I discovered: Preheaders can indeed have a significant effect on your email performance metrics. However, I still had some questions:

  • With what metrics?
  • In what way?
  • By how much?

To help answer those questions, I’d like to reference two recent examples for the same type of email:

Read more…

A/B Testing: How to improve already effective marketing (and win a ticket to Email Summit in Vegas)

January 5th, 2015

Editor’s Note: This subject line contest is no longer accepting entries. In the next few weeks, we will read all of the entries, select the best ones and then run the test. Check the MarketingExperiments Blog in a few weeks to see which entry won, why it won and what you can learn from that to further improve your own marketing.

This blog post ends with an opportunity for you to win a stay at the ARIA Resort & Casino in Las Vegas and a ticket to Email Summit, but it begins with an essential question for marketers:

How can you improve already successful marketing, advertising, websites and copywriting?

Today’s MarketingExperiments blog post is going to be unique. Not only are we going to teach you how to address this challenge, we’re also going to offer an example to help drive home the lesson. We’re going to cover a lot of ground today, so let’s dive in.

 

Give the people what they want …

Some copy and design are so bad that the fixes are obvious. Maybe you shouldn’t insult the customer in the headline. Maybe you should update the website that still uses a dot matrix font.

But when you’re already doing well, how can you continue to improve?

I don’t have the answer for you, but I’ll tell you who does — your customers.

There are many tricks, gimmicks and types of technology you can use in marketing, but when you strip away all the hype and rhetoric, successful marketing is pretty straightforward — clearly communicate the value your offer provides to people who will pay you for that value.

Easier said than done, of course.

How do you determine what customers want and the best way to deliver it to them?

Well, there are many ways to learn from customers, such as focus groups, surveys and social listening.

While there is value in asking people what they want, there is also a major challenge in it.

According to research from Dr. Noah J. Goldstein, Associate Professor of Management and Organizations, UCLA Anderson School of Management, “People’s ability to understand the factors that affect their behavior is surprisingly poor.”

Or, as Malcolm Gladwell more glibly puts it when referring to coffee choices, “The mind knows not what the tongue wants.”

This is not to say that opinion-based customer preference research is bad. It can be helpful. However, it should be the beginning of your quest, not the end.

 

… by seeing what they actually do

You can use what you learn from opinion-based research to create a hypothesis about what customers want, and then run an experiment to see how they actually behave in real-world customer interactions with your product, marketing messages and website.

The technique that powers this kind of research is often known as A/B testing, split testing, landing page optimization or website optimization. If you are testing more than one thing at a time, it may also be referred to as multivariate testing.

To offer a simple example, you might assume that customers buy your product because it tastes great and because it’s less filling. Keeping these two assumptions in mind, you could create two landing pages — one with a headline that promotes the taste (treatment A) and another that mentions the low carbs (treatment B). You then send half the traffic that visits that URL to each version and see which performs better.
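To make the mechanics concrete, here is a minimal sketch in Python of how such a 50/50 split might be implemented. The treatment labels and visitor IDs are hypothetical; hashing the visitor ID (rather than flipping a coin on every request) keeps a returning visitor in the same treatment.

    import hashlib

    TREATMENTS = ["A_taste", "B_low_carb"]  # hypothetical treatment labels

    def assign_treatment(visitor_id: str) -> str:
        # Hash the visitor ID so the same visitor always sees the same
        # page; md5 gives a split that is stable across server restarts.
        digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
        return TREATMENTS[int(digest, 16) % len(TREATMENTS)]

    print(assign_treatment("visitor-1001"))  # roughly half land in each bucket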

Here is a simple visual that Joey Taravella, Content Writer, MECLABS, created to illustrate this concept:

 

That’s just one test. To really learn about your customers, you must continue the process and create a testing-optimization cycle in your organization — continue to run A/B tests, record the findings, learn from them, create more hypotheses and test again based on these hypotheses.

This is true marketing experimentation, and it helps you build your theory of the customer.
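Recording the findings is only half the job; you also need to check that an observed difference is unlikely to be noise. Here is a minimal sketch, with hypothetical numbers, of a two-proportion z-test — one common way to judge whether a conversion-rate difference is statistically significant.

    import math

    def z_two_proportions(conv_a, n_a, conv_b, n_b):
        # Pooled two-proportion z-test; |z| > 1.96 corresponds
        # roughly to 95% confidence for a two-sided test.
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        return (p_b - p_a) / se

    # Hypothetical test: 120/2000 conversions (A) vs. 150/2000 (B)
    print(round(z_two_proportions(120, 2000, 150, 2000), 2))  # 1.89, just short of 1.96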

 

Try your hand at A/B testing for a chance to win

Now that you have a basic understanding of marketing experimentation (there is also more information in the “You might also like” section of this blog post that you may find helpful), let’s engage in a real example to help drive home these lessons in a way you can apply to your own marketing challenges.

To help you take your marketing to the next level, The Moz Blog and MarketingExperiments Blog have joined forces to run a unique marketing experimentation contest.

In this blog post, we’re presenting you with a real challenge from a real organization and asking you to write a subject line that we’ll test with real customers. It’s simple; just leave your subject line as a comment in this blog post.

We’re going to pick three subject lines from The Moz Blog and three from the MarketingExperiments Blog and run a test with this organization’s customers.

Whoever writes the best performing subject line will win a stay at the ARIA Resort in Las Vegas as well as a two-day ticket to MarketingSherpa Email Summit 2015 to help them gain lessons to further improve their marketing.

Sound good? OK, let’s dive in and tell you about your client:

Read more…

Testing and Optimization: 4 inspirational examples of experimentation and success

November 6th, 2014

At our sister publication, MarketingSherpa, we publish four case study beats – B2B, B2C, Email and Inbound – with stories covering actual marketing efforts from your peers each week. Not every case study features a testing and optimization element, but many do.

For this MarketingExperiments Blog post, I wanted to share a quick summary of several of these case studies, along with links to the entire article (including creative samples) in case any pique your interest and you want to dig into the entire campaign.

So, without further ado, read on for four MarketingSherpa case studies that feature testing and optimization of various digital marketing channels, strategies and tactics.

 

Case Study #1. 91% conversion lift from new copy and layout

This case study features AwayFind, a company that provides mobile email alerts, and covers an effort to test, and hopefully improve, its homepage performance.

Brian Smith, Director of Marketing, AwayFind, said, “Our primary driver of traffic is our PR efforts. Our homepage is effectively our primary landing page, and we need to convert that traffic into premium users.”

The testing included changes to both copy and layout. The main copy change shifted the focus from features to benefits. Layout tweaks included a shortened headline, the remaining copy split between a subhead and a smaller block of text, and a modified color for the subhead text.

In this test, the treatment achieved:

  • 42% increase in clicks to the sign-up page
  • 91% increase in registrations for the trial
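Those percentages are relative lifts — the change measured against the control’s own rate. A quick sketch with hypothetical conversion rates (the case study does not publish the underlying numbers) shows the arithmetic:

    def relative_lift(control_rate, treatment_rate):
        # Relative lift = (treatment - control) / control, as a percentage
        return (treatment_rate - control_rate) / control_rate * 100

    # Hypothetical rates chosen only to illustrate the reported lifts
    print(round(relative_lift(0.100, 0.142), 1))   # 42.0 -> 42% more clicks
    print(round(relative_lift(0.047, 0.0898), 1))  # 91.1 -> ~91% more registrations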

Read more…

Online Testing: 3 resources to inspire your ecommerce optimization

July 3rd, 2014

Optimizing to improve a customer experience can be a little overwhelming when you consider all the nuts and bolts that make up an entire ecommerce property.

In this MarketingExperiments Blog post, we’ll take a look at three ecommerce resources from our testing library that will hopefully spark a few ideas you can add to your testing queue.

 

Read: A/B Testing: Product page testing increases conversion 78%

[Image: ebook retailer page versions]

 

How it can help

This experiment with a MECLABS Research Partner is a great example illustrating how testing the product page elements most likely to cause customer concern is an effective way to alleviate anxiety.

 

Watch: Marketing Multiple Products: How radical thinking about a multi-product offer led to a 70% increase in conversion

 

In this Web clinic replay, Austin McCraw, Senior Director of Content Production, MECLABS, shared how radical thinking about a multi-product offer led one company to a 70% increase in conversion.

 

How it can help

One big takeaway from this clinic is that the strategic elimination of competing offers on pages with multiple products can help drive customers’ focus to the right product choices for their needs.

 

Learn: Category Pages that Work: Recent research reveals design changes that led to a 61.2% increase in product purchases

 

These slides are from a Web clinic on category pages in which Flint McGlaughlin, Managing Director, MECLABS, revealed the results of category page design changes that increased clicks and conversions across multiple industries.

Read more…

Web Optimization: 5 steps to create a small testing program

June 16th, 2014

At Web Optimization Summit 2014, Ryan Hutchings, Director of Marketing, VacationRoost, shared the nuts and bolts behind putting together a foundational testing process.

In today’s MarketingExperiments Blog post, I wanted to walk through Ryan’s five steps you can use to create a small testing program in your organization.

 

Step #1. Decide what to test 

[Image: test ideas spreadsheet]

 

When deciding what to test, the trick, according to Ryan, is prioritization.

There are lots of things to test in a conversion funnel, but limits of time and resources are important to factor in when putting together a test plan.

One of the tools Ryan uses to help his team prioritize smaller testing efforts is a spreadsheet of test ideas from across the organization.

The highlighted columns in the screenshot above list test ideas along with the team’s confidence that each will produce a lift.

“This helps us prioritize,” Ryan explained. “It gives us a starting point.”
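A spreadsheet like Ryan’s is easy to mirror in a few lines of code. The sketch below ranks hypothetical test ideas by the team’s confidence in a lift relative to the effort required — the fields and scoring here are assumptions for illustration, not VacationRoost’s actual columns.

    # Hypothetical test-idea backlog; confidence is the team's estimate
    # that the test will produce a lift, effort is in developer-days.
    ideas = [
        {"idea": "Shorter checkout form", "confidence": 0.8, "effort": 3},
        {"idea": "New homepage headline", "confidence": 0.6, "effort": 1},
        {"idea": "Trust seals on PPC page", "confidence": 0.7, "effort": 2},
    ]

    # Favor high confidence and low effort when picking a starting point
    for i in sorted(ideas, key=lambda x: x["confidence"] / x["effort"], reverse=True):
        print(f'{i["idea"]}: score {i["confidence"] / i["effort"]:.2f}')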

 

Step #2. Identify a target conversion goal 

[Image: bounce rate goals]

 

Ryan explained that the next step is to identify target conversion goals. To help do that, the VacationRoost team sets ideal ranges for their KPIs.

During his session, he used bounce rates as one example of where KPIs can help you set some target conversion goals and identify some testing opportunities.

“Bounce rate is a good example and a good starting point for a lot of people when talking about individual landing page optimization,” Ryan said.

One small disclaimer to add: the illustration is only an example. The 37% bounce rate shown in the image above is meant only to visualize the importance of setting standards; it is not an industry goal.
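In code, checking KPIs against those ideal ranges can be as simple as flagging pages that fall outside a target. This sketch uses the 37% figure from the slide purely as an illustrative threshold, with made-up page data:

    BOUNCE_RATE_TARGET = 0.37  # illustrative only, not an industry benchmark

    pages = {"/": 0.31, "/packages": 0.52, "/contact": 0.44}  # hypothetical data

    for url, rate in pages.items():
        if rate > BOUNCE_RATE_TARGET:
            print(f"{url}: bounce rate {rate:.0%} exceeds target -> testing candidate")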

 

Step #3. Create a hypothesis

[Image: small PPC test]

 

Ryan explained that his team uses the Conversion Heuristic from MECLABS (parent company of MarketingExperiments) to help turn test ideas into testable hypotheses. Using a repeatable methodology helps the team vet testing ideas and keeps testing focused.

“Everything is based on the heuristic, and that’s all we use,” Ryan said.

 

Step #4. Build wireframes, develop the treatments and launch the test

[Image: landing page test]

 

If you’re going to use a methodology to help identify testing opportunities, you should also consider how that methodology can help you build a treatment to test against your control.

Ryan explained how the Conversion Heuristic is also used in developing treatment designs to help keep testing centered on the specific variables they want to explore.

One example he shared in his session was a PPC landing page in which VacationRoost wanted to test the impact of quality seals on delivering the value proposition.

“As you can see, these are two totally different pages as you’re looking at it,” Ryan explained, “and when we look at it, we say, ‘OK, what do we want to impact?’”

Read more…

Online Testing: 3 steps for finding a testable hypothesis

June 9th, 2014

Oftentimes in our Research Partnerships, each party is excited and eager to jump in and begin testing. Right from the start, most Partners have a good idea of where their site or pages are lacking and bring lots of great ideas to the table.

While having a suboptimal webpage can often be thought of as “losing money as we speak,” it is important to take the time to complete what we call the “discovery phase.”

This discovery phase can be summed up in three simple analyses that you can perform to develop a great test hypothesis to help you learn more about your customers.

 

Step #1. Evaluate your data and identify conversion gaps in the funnel

This will help you identify the page or area of your site to focus on first.

Evaluating your data can help you understand how users are behaving on your site. You can start by looking at basic metrics like new versus returning visitors, traffic sources, bounce rates and exit rates to help you identify where your conversion process has the greatest leaks.

The other side of the coin is that identifying those gaps also gives you insights into where your biggest testing and optimization opportunities exist to help you plug those leaks.

For instance, a high bounce rate may indicate users are not finding what they are expecting on a given page. Regardless of which metrics you are evaluating, think of your data as a window into the mind of your customer.
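As a concrete illustration of finding conversion gaps, the sketch below computes the drop-off between consecutive funnel stages from hypothetical analytics counts; the stage with the steepest drop is usually the first place to test.

    # Hypothetical visitor counts pulled from an analytics report
    funnel = [
        ("Landing page", 10000),
        ("Product page", 4200),
        ("Cart", 900),
        ("Checkout complete", 310),
    ]

    for (stage, visitors), (next_stage, remaining) in zip(funnel, funnel[1:]):
        drop = 1 - remaining / visitors
        print(f"{stage} -> {next_stage}: {drop:.0%} drop-off")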

 

Step #2. Assess your competitors to gain valuable insights on what to test

There’s no need to reinvent the wheel.

Looking at competitors’ sites can give you an idea of what visitors are accustomed to seeing on similar webpages to the one you are testing.

Here are a few examples of elements to look for and test:

  • Should the button be on the left or right side of the page?
  • Where is the best place on the page for product images?
  • Are any companies utilizing dropdowns or sliders for price ranges?

You are trying to figure out what works best for your pages and users. After all, imitation is the sincerest form of flattery, right?

Read more…