
Online Testing: 5 steps to launching tests and being your own teacher

April 10th, 2014

Testing is the marketer’s ultimate tool. It allows us to not just guess what coulda, woulda, shoulda worked, but to know what actually works. But more than that, it gives us the power to choose what we want to know about our customers.

“As a tester, you get to be your own teacher, if you will, and pick tests that make you want to learn. And structure tests that give you the knowledge you’re trying to gain,” said Benjamin Filip, Senior Manager of Data Sciences, MECLABS.

So what steps do we take if we want to be our own teacher?

While conducting interviews about the live test run at MarketingSherpa Email Summit 2014, I recently had the chance to discuss testing processes with Ben, as well as Lauren Pitchford, Optimization Manager, and Steve Beger, Senior Development Manager, also both of MECLABS. The three of them worked together with live test sponsor BlueHornet to plan, design and execute the A/B split test they validated in less than 24 hours.

Read on to learn what marketers can take away from the testing process behind this email live test. We'll break down each step of the live test and help you apply them to your own testing efforts.


Step #1. Uncover gaps in customer insights and behavior

As Austin McCraw, Senior Director of Content Production, MECLABS, said at Email Summit, “We all have gaps in our customer theory. Which gap do we want to fill? What do we want to learn about our customer?”

What do you wish you knew about your customers? Do they prefer letter-style emails or design-heavy promotional emails? Do they prefer a certain day of the week to receive emails? Or time of day? Does one valuable incentive incite more engagement than three smaller incentives of the same combined value?

Think about what you know about your customers, and then think about what knowledge could help you better market to them and their needs and wants.


Step #2. Craft possible research questions and hypotheses

When forming research questions and hypotheses, Ben said, “You have to have some background info. A hypothesis is an educated guess, it’s not just completely out of the blue.”

Take a look at your past data to interpret what customers are doing in your emails or on your webpages.

Lauren wrote a great post on what makes a good hypothesis, so I won’t dive too deeply here. Basically, your hypothesis needs three parts:

  • Presumed problem
  • Proposed solution
  • Anticipated result
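To make that three-part structure concrete, here is a minimal sketch in Python. The function name and the example wording are hypothetical illustrations, not taken from an actual MECLABS test.

```python
# A minimal sketch of the three-part hypothesis structure described above:
# presumed problem, proposed solution, anticipated result.

def format_hypothesis(problem, solution, result):
    """Combine the three parts into a single testable statement."""
    return (f"Because {problem}, we believe that {solution} "
            f"will result in {result}.")

hypothesis = format_hypothesis(
    problem="the long form creates friction",
    solution="reducing the form to three fields",
    result="a higher completion rate",
)
print(hypothesis)
```

Writing the hypothesis as one sentence like this forces you to state all three parts explicitly; if you can't fill in one of the blanks, the hypothesis isn't ready to test.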


Step #3. Brainstorm ways to answer those questions

While brainstorming will start with you and your group, don’t stop there. At MECLABS, we use peer review sessions (PRS) to receive feedback on anything from test ideas and wireframes, to value proposition development and results analysis.

“As a scientist or a tester, you have a tendency to put blinders on and you test similar things or the same things over and over. You don’t see problems,” Ben said.

Having potential problems pointed out is certainly not what any marketer wants to hear, but it's not a reason to skip this part of the process.

“That’s why some people don’t like to do PRS, but it’s better to find out earlier than to present it to [decision-makers] who stare at you blinking, thinking, ‘What?’” Lauren explained.

However, peer review is more than discovering problems, it’s also about discovering great ideas you might otherwise miss.

“It’s very easy for us to fall into our own ideas. One thing for testers, there is the risk of thinking that something that is so important to you is the most important thing. It might bother you that this font is hard to read, but I don’t read anyway because I’m a math guy, so I just want to see the pretty pictures. So I’m going to sit there and optimize pictures all day long. That’s going to be my great idea. So unless you listen to other people, you’re not going to get all the great ideas,” Ben said.

Read more…

Web Optimization: Traffic without conversion doesn’t matter

April 3rd, 2014

At Web Optimization Summit 2014 in New York City, Michael Aagaard, Founder, will present, “How, When and Why Minor Changes Have a Major Impact on Conversions,” based on four years of research and dozens of case studies.

To provide you with a few quick test ideas, we reached across the miles to Copenhagen, Denmark, and interviewed Michael from our studios here in Jacksonville, Fla.

In this video interview, Michael discussed:

  • Why he’s so passionate about conversion optimization (and why you should be, too)
  • A pop-up test that generated 142% more newsletter signups
  • The one-word change of call-to-action button copy that consistently produces results (in several languages)


Below is a full transcript of our interview if you would prefer to read instead of watch or listen.

  Read more…

Call-to-Action Button Copy: How to reduce clickthrough rate by 26%

March 31st, 2014

“Start Free Trial” | “Get Started Now” | “Try Now”

One of the above phrases reduced clickthrough rate by 26%.



Take a look at those three phrases. Try to guess which phrase underperformed and why. Write it down. Heck, force yourself to tell a colleague so you’ve really got some skin in the game.

Then, read the rest of today’s MarketingExperiments Blog post to see which call-to-action button copy reduced clickthrough, and how you can use split testing to avoid having to blindly guess about your own button copy.


How much does call-to-action button copy matter anyway?

The typical call-to-action button is small. You have only one to four words to encourage a prospect to click.

There are so few words in a CTA. How much could they really matter?

Besides, they come at the end of a landing page or email or paired with a powerful headline that has already sold the value of taking action to the prospect. People have already decided whether they will click or not, and that button is a mere formality, right?

To answer these questions and more, let’s go to a machine more impressive than the Batmobile … to the splitter!


A/B/C/D/E split test

The following experiment was conducted with a MECLABS Research Partner. The Research Partner is a large global media company seeking to sell premium software to businesses.

The button was tested on a banner along the top of a webpage. Take a look at that banner below. 



Five different text phrases were tested in that button. Since I’ve already teased you on the front-end, without further ado, let me jump right into the findings.





Those few words in that teeny little rectangular button can have a huge impact on clickthrough.

As you can see, “Get Started Now” drove significantly more clicks than “Try Now.” Let’s look at the relative changes in clickthrough rate so you can see the relationship between the calls-to-action.
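To make the arithmetic behind "relative changes in clickthrough rate" concrete, here is a short Python sketch. The click and visitor counts below are invented for illustration; the post does not publish the experiment's raw numbers.

```python
# Sketch: computing clickthrough rates and the relative change (lift)
# of a treatment over the control. All counts below are made up.

def ctr(clicks, visitors):
    """Clickthrough rate: fraction of visitors who clicked."""
    return clicks / visitors

def relative_lift(treatment_ctr, control_ctr):
    """Relative change of treatment over control, as a percentage."""
    return (treatment_ctr - control_ctr) / control_ctr * 100

control = ctr(clicks=270, visitors=10_000)    # e.g., "Try Now"
treatment = ctr(clicks=340, visitors=10_000)  # e.g., "Get Started Now"

print(f"Control CTR:   {control:.2%}")
print(f"Treatment CTR: {treatment:.2%}")
print(f"Relative lift: {relative_lift(treatment, control):.1f}%")
```

Note that the relative lift is measured against the control's rate, not the absolute difference in percentage points, which is why small-looking CTR gaps can translate into large relative changes.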

Read more…

Landing Page Optimization: Radio buttons vs. dropdowns

March 13th, 2014

Radio buttons or dropdowns?

The question is arguably on the borderline of arbitrary, but as we discovered, this choice is far more important than one might think.

During a recent Web clinic, Austin McCraw, Jon Powell and Lauren Pitchford, all of MECLABS, revealed the results of an experiment that put button options to the test.

So, let’s take a closer look at the research notes for some background information on the test.

Background: A large people search company catering to customers searching for military personnel.

Goal: To significantly increase the total number of subscriptions.

Primary Research Question: Which subscription option format will produce the highest subscription rate: radio buttons or a dropdown menu?

Approach: A/B single factorial split test

In Treatment 1, the research team hypothesized that the length of the radio button layout was a source of user friction in the form.

Editor’s Note: For the purposes of the MarketingExperiments testing methodology, friction is defined as “a psychological resistance to a given element in the sales or signup process.”



In Treatment 2, the team tested a dropdown style option selection to reduce the perceived friction in the display.
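In a single factorial A/B split like this one, each visitor is randomly assigned to exactly one version, and only one variable (here, the option format) differs between treatments. A minimal sketch of that assignment logic follows; the treatment names are hypothetical labels, not from the actual experiment.

```python
import random

# Sketch of single-factorial A/B assignment: each visitor sees exactly
# one version, and only the option format differs between them.
TREATMENTS = ["radio_buttons", "dropdown_menu"]

def assign_treatment(rng=random):
    """Randomly assign a visitor to one treatment with equal probability."""
    return rng.choice(TREATMENTS)

# Simulate assigning 1,000 visitors; the split should be roughly even.
counts = {t: 0 for t in TREATMENTS}
for _ in range(1000):
    counts[assign_treatment()] += 1
print(counts)
```

Random assignment is what lets the team attribute any difference in subscription rate to the option format itself rather than to differences in the audiences that saw each version.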


  Read more…

Radical Redesigns: Lifts vs. building customer theory

“Radical redesign” is a term we use often at MECLABS.

It’s used to describe treatments that are “radically” different from the control. We aren’t talking about changing some button copy from “Buy” to “Buy Now.” We’re talking new themes, layouts, copy and even functionality on the page.

Radical redesigns contain many variables, making it difficult to isolate the specific elements contributing to the results of a test.

Today’s MarketingExperiments Blog post will focus on several of the pros and cons of radical redesigns, but I’d first like to provide you with a little more context on testing in this fashion and how it may impact future testing.

Lifts are awesome.

That means your treatment(s) had a positive effect on your key performance indicator (KPI). You may have increased clickthrough rate, conversions or conversion rates, or possibly even leads.

Your treatment won, and you also learned how the variables you changed in the treatments resulted in an optimized page.

However, identifying specifically what, how and why these changes made an impact is what makes building your customer theory even better. Discoveries are important to customer theory because they can be attained regardless of whether your treatments “win” or “lose.”

Discoveries are also valuable in informing future testing.

Let’s say you changed some button copy. If your treatment wins and you see a lift, you learn that the button copy you changed resonated with the visitor.

But if the treatment loses, you’ve discovered that it didn’t entice the user to click, and you need to continue testing to determine the optimal copy to increase visitor engagement with the button.

Either way, you’ve gained valuable insight about your customers with your test.

However, this isn’t always the case with radical redesigns.


Radical redesigns may provide a lift, but they also leave a lot of unanswered questions 

There’s no doubt about it, radical redesigns are fun. You are able to play with designs. You can try all of your cool ideas. You can make landing pages or email treatments that look so much prettier (or uglier) than the control version.

Radical redesigns are often beneficial and can be a quick way to optimize your page if you are seeing multiple opportunities to add value and decrease friction and anxiety.

When radical redesigns win, it’s great. It means visitors loved your new designs, copy or functionality. Your hypothesis was correct, and the new page increased conversions. You achieved a lift, and you learned that whatever you did to the page resonated well with the visitor.

But you might have to ask yourself, “What else did I learn?”

You just don’t know which of the changes to the radically redesigned page made it any better, and without that insight, you lose direction on where to test next.

Was it the new headline? The altered layout? The aesthetics you added? Or was it that updated functionality?

Oftentimes, the little voice that wants to know what you learned is silenced by the increase in conversions.

When radical redesigns lose, however, that’s a different story. While they are fun to plan and test, the sad fact is that when they lose, you are often left back at square one.

There is not much you can learn from an underperforming radical redesign, and here’s why:

  1. You didn’t see a lift. The treatment lost, so there is no new conversion-boosting page to implement. The only lesson you learned was that the control was better, which just leaves you back where you started.
  2. You don’t achieve as many valuable discoveries about your customers. What aspect of the page didn’t resonate with visitors? Maybe they liked the new layout, but the headline turned them off. You’ll never know. Therein lies the risk with radical redesigns.

Keep in mind, this post is not meant to deter you from radical redesigns. As stated before, radical redesigns are a great way to make many positive changes to a page when you have diagnosed specific shortcomings.

The idea here is to make you aware of some of the pitfalls of testing radical redesigns.

Read more…

Copywriting: Do you take your prospects on a journey?

February 27th, 2014

You’ve seen the statistics. Customers receive 12 million billion marketing messages a day.

Plus they’re busy, and have short attention spans.

So you may think, “I have to get my sales message and value prop to my customers as quickly as possible.”

But your goal as a marketer is not to get quick information in the hands of a customer. It’s to take them on …


The buyer’s journey

Let’s use “Star Wars” as an analogy.

George Lucas could have made a two-minute video on YouTube and said, “So … they’re brother and sister. And on top of it, the dude he’s fighting is actually his dad. Weird, huh?”

But if he did, I’m betting he wouldn’t have this level of brand loyalty more than 30 years later.

Storytelling is powerful.

It helps people see a new way of looking at the world. As a marketer, that includes how the world would be with your product or service in it.

By taking your prospects through a story, you help to welcome them into the world of your product, help them drop their defenses to actually hear what you’re saying, and get them to internalize your value proposition.

Your challenge is to decide how every element of your marketing can take them on that journey. For a simple purchase, this journey may happen in a single email or print ad. For a considered purchase, it may occur across an email drip campaign, nurture track or an entire marketing funnel.

You can watch the free MarketingExperiments Web clinic replay, “Copywriting on Tight Deadlines: How ordinary marketers are achieving 103% gains with a step-by-step framework,” to learn more about how story connects to the conversion process.


Photo attribution: Star Wars Blog

Read more…