Archive

Posts Tagged ‘A/B split testing’

Landing Page Optimization: An overview of how one site increased leads by 155%

June 15th, 2015

Simple, direct and bare. When your company and process are known around the world, a blank page with little competing content can not only work, it can work really well.

Simplicity is key. Take a look at Google’s homepage:

[Image: Google’s homepage]

What about new visitors? Imagine coming to this page for the first time, with little to no context about the company. What is this company? If I type something in that text box, where will it take me?

Simplicity is not always a key to effective website optimization.

“Leaders must grow comfortable with paradox and nuance. Clarity does not equate with simplicity. Simplicity does not equate with easy.” — Flint McGlaughlin, On the Difference between Clarity and Simplicity.

Simplicity is the reduction of friction, but clarity is the optimization of the message. A simple message is not necessarily a clear message.

Take a look at a test we ran with a physicians-only social network that allows pharmaceutical companies to conduct survey research and promote products to its audience. The goal of this A/B split test was to identify which microsite would generate the most total leads.

Check out the control below. Can you find the value proposition?

Read more…

CTA Optimization: Button copy test increases click rate 95%

May 29th, 2014

If there is one area in optimization that’s ideal for either starting to test or finding a quick win, it’s call-to-action (CTA) buttons.

Changing a few words, the color, or both on a button can potentially make a big difference in user engagement.

Testing button copy can also help you set the right tone for a conversation with your prospects.

For example, at last week’s Web Optimization Summit 2014 in New York City, Jacob Baldwin, Digital Marketing Manager, One Call Now, shared a button experiment that put one of my least favorite words in marketing to the test: quote.

But before we get started, let’s take a look at the research notes from One Call Now’s testing lab for a little background on the experiment.

[Image: One Call Now quote request test research notes]

Control and treatment versions

[Image: One Call Now control and treatment versions]

Jacob’s team hypothesized that by changing the button copy from “Request a Quote” to “Request Pricing,” they would also change the perceived value of requesting more information.


Results 

[Image: One Call Now test results]

Read more…

A/B Testing: The value of choice in decision-making

May 15th, 2014

Reducing confusion and mitigating friction come to mind when thinking about presenting customers with the optimal number of choices.

This makes sense when you consider that some customers have a hard time making decisions when flooded with choices. Presenting many similar options for the same product is usually the last thing any marketer should want to do.

It may be intuitive to think that limiting the number of choices, buttons and text to best fit your ideal customer’s needs is always the best way to go.

But, as I’ve recently learned, that’s not always the case. In today’s MarketingExperiments Blog post, I wanted to share a recent experiment that tested the effect of limiting product options within a conversion funnel.

(Editor’s Note: To protect the competitive advantage of our Research Partner, images and results have been anonymized.)

[Image: Control and treatment versions of the choice test]

In the control, customers had the option of selecting from a range of delivery options with radio buttons and a dropdown menu. Visitors to the control could choose among monthly auto-renewal, six-month or one-year options, while visitors to the treatment were locked into the monthly option.

For the treatment, the delivery options offered in the radio button remained the same. The only variable we changed was removing the dropdown selection of desired subscription length.

The experiment was designed to test our hypothesis that eliminating a dropdown menu that required visitors to make yet another decision would decrease friction.


Decreases in conversion are excellent teaching tools on the power of choice

The treatment experienced a decrease in conversion of nearly 40%.
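
To sanity-check a result like this, here is a minimal sketch of how a relative change in conversion rate is computed. The visitor and conversion counts below are purely hypothetical, since the actual test data was anonymized.

```python
# Purely hypothetical counts -- the actual test data was anonymized.
control_visitors, control_conversions = 10_000, 500      # 5.00% conversion rate
treatment_visitors, treatment_conversions = 10_000, 305  # 3.05% conversion rate

control_rate = control_conversions / control_visitors
treatment_rate = treatment_conversions / treatment_visitors

# Relative change: how far the treatment moved from the control baseline.
relative_change = (treatment_rate - control_rate) / control_rate
print(f"Relative change in conversion: {relative_change:.1%}")  # -39.0%
```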

By eliminating those choices, we should have decreased friction for the majority of the visitors to the funnel. What did we miss?

It’s possible that by relying on analytics, statistics, marketing intuition and company logic, we overlooked a fundamental human behavior.

Sure, presenting a ton of options could confuse and drive away visitors.

However, people find a lot of value in the ability to compare prices and look at options they may never be truly interested in.

Dan Ariely, who spoke at MarketingSherpa Email Summit 2014, demonstrated in his “Are We in Control of Our Decisions?” TED Talk that the presence of an undesirable or “useless” option can often make other options seem much more appealing.

The value of choice is a powerful tool in decision-making.

Marketers who constantly strive for efficiency and optimization would do well to remember that.

Read more…

Landing Page Optimization: Multi-product page increases revenue 70%

May 5th, 2014

Finding the right balance of product and presentation on a landing page that markets multiple products can be a tricky endeavor as products compete for customer attention.

In today’s MarketingExperiments Blog post, let’s look at a recent test that not only increased revenue, but also increased our understanding about customer behavior.

Before we dive in, let’s review a little background information on the test.


Background: An independent vitamin manufacturer and distributor.

Goal: To increase the total revenue from the page.

Primary Research Question: Which page will generate the highest total revenue?

Approach: A/B multifactorial split test


Side by side

In the experiment, Treatment A used a radio button format for each of the offers featured. In Treatment B, the design was a horizontal layout that let users compare offers.


Results

Read more…

Value Proposition: 4 considerations to identify the ideal channel for testing

March 3rd, 2014

In my previous MarketingExperiments Blog post, I presented 3 steps to take in laying the groundwork for value proposition testing. In that post, I covered brainstorming value points, identifying supporting evidence and categorizing information into testable buckets.

While the lessons learned through these steps are valuable, the real prize comes from the ability to use those discoveries in a value proposition experiment. However, there’s still much to be considered before you’re ready to launch your first test.

So in today’s post, I will continue where we left off and explain the next step in developing a strategy for a solid value proposition test.

Before you begin to think of ways to express your value proposition, you first have to understand the limitations created by the channels used to build your test.

We will always aim to find that perfect channel – the one that provides you with everything you ever wished to know about your customers – but oftentimes, that channel doesn’t exist and we have to settle for good enough.

Today, we will dive into four areas you need to consider in order to identify that ideal channel for testing:

  • The amount of traffic the channel has
  • The motivation of visitors within the channel
  • The need for an image
  • The ability to split visitors


Consideration #1. The amount of traffic in the channel

The first consideration is the amount of traffic coming into your channel. This matters because in value proposition testing, we often have to duplicate a portion of the content between treatments in order to provide enough information to communicate the value of the product or service.

Though necessary, this duplication of information can often lead to treatments performing similarly in a test because they are perceived similarly by visitors. Because of this, it’s important to run tests within channels that have high traffic levels.

The more traffic a channel has, the larger the sample size you will have from the test, and the faster real trends can emerge from your data.

Some examples of high traffic areas where I’ve witnessed value proposition testing take place include PPC advertising – both banner and text – as well as on homepages and product pages.

One caveat worth mentioning here is the challenge of reaching validity when it comes to highly specialized products.

Though it would be nice to learn something about our niche audiences for those highly defined and specialized products, if you don’t have sizable traffic visiting those respective webpages (thereby entering the channel), you may not be able to reach statistical validation in a reasonable amount of time.

I would love to be able to give you the ideal number of visitors you need in your channel, but traffic volume is only one of many variables that impact the time needed to reach a sample size large enough for statistical validation.

What I can also tell you is that the more visitors you have, the more flexibility you have with regard to the number of treatments you can run, the conversion rates you can reliably work with, and the number of variables you can change and still measure a difference.
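
To make the traffic question a little more concrete, below is a rough sketch of the standard two-proportion sample size calculation. The function and the numbers are illustrative assumptions, not a MarketingExperiments formula.

```python
# Illustrative power calculation for an A/B split test -- an assumption-laden
# sketch, not a prescription: visitors needed per treatment to detect a lift.
from scipy.stats import norm

def visitors_per_treatment(base_rate, relative_lift, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sided two-proportion z-test."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    z_alpha = norm.ppf(1 - alpha / 2)  # significance threshold (two-sided)
    z_beta = norm.ppf(power)           # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

# With a 3% baseline conversion rate, detecting a 20% relative lift takes
# roughly 14,000 visitors per treatment; a 10% lift pushes that past 50,000.
print(round(visitors_per_treatment(0.03, 0.20)))  # ~13,900
print(round(visitors_per_treatment(0.03, 0.10)))  # ~53,200
```

Note how halving the detectable lift roughly quadruples the required traffic – that inverse-square relationship is why low-traffic channels struggle to validate small differences in a reasonable amount of time.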


Consideration #2. The existing motivations of your visitors

The second consideration is thinking about the existing motivation of visitors to your channel.

It’s very important to know if visitors are sufficiently motivated to pursue the product or service you’re promoting. It doesn’t make sense to offer a winter coat to someone living on the equator.

It also doesn’t make sense to try to sell that same coat to someone living in the Arctic Circle and then compare the responses of those two prospects as if they were apples to apples.

Those customers are way too different to generalize their responses across an entire product or service market vertical.

For a general example, it’s OK to test the value proposition of a car company in an advertisement posted on a website for auto enthusiasts.

But the same ad likely wouldn’t be as effective on an informational website for new mothers. Though you would likely see interaction with both ads, mixing markets would muddy the results of your test because the interactions in each place teach you about largely different audiences.

Consequently, instead of concentrating on learning what your ideal customer values about your car company, you would instead be mixing the views of your real prospects with the views of someone who may never look at another one of your advertisements, let alone buy this year’s new model.

The main takeaway here is to always make sure that the visitors to your channel are all similarly motivated to learn more about your product or service. You can’t track clicks if no one is motivated to click.

Read more…

A/B Testing: 3 steps to help you test smarter, not harder

January 20th, 2014

2014 is here, and with it comes a new year full of opportunities to test. We are presented with a clean slate.

Any missed opportunities or “we should’ve tested that” moments are in the past. This post covers the basic steps for approaching testing so you can test smarter, not harder, when setting up your tests in 2014.


Step #1. Develop key performance indicators first

Every test begins with an idea for making a webpage or email better, and with deciding on the key performance indicators (KPIs). In other words, what are we trying to learn from the test?

A few examples of questions we might ask ourselves when developing KPIs are:

  • Are we trying to increase clickthrough rate to another page in the funnel?
  • Are we trying to increase the conversion rate on a product page?
  • Are we trying to determine user motivation by changing the value proposition?

Setting a clear objective for your key performance indicators at the beginning helps you focus on the main learning of the test.

For example, if you change the copy of the call-to-action (CTA) button in one treatment versus the control, your key performance indicator is clickthrough rate, because that is what the copy change might affect.

It is important to have a clear understanding of what you are trying to learn from your test based on the key performance indicators you have developed. It is easy to lose sight of this once the data starts coming in, but it is the basis of the test, so stay focused on what you are trying to learn.
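
Once the data does start coming in, checking a KPI like clickthrough rate is a short calculation. Here is a minimal sketch, with hypothetical counts, of comparing control and treatment clickthrough rates with a two-proportion z-test; statsmodels’ proportions_ztest is one common tool, but any equivalent test works.

```python
# Hypothetical counts for illustration -- compare control vs. treatment
# clickthrough rate (the KPI) with a two-proportion z-test.
from statsmodels.stats.proportion import proportions_ztest

clicks = [310, 385]             # control, treatment
impressions = [10_000, 10_000]  # visitors who saw each version

ctr_control = clicks[0] / impressions[0]
ctr_treatment = clicks[1] / impressions[1]
z_stat, p_value = proportions_ztest(count=clicks, nobs=impressions)

print(f"Control CTR: {ctr_control:.2%}  Treatment CTR: {ctr_treatment:.2%}")
print(f"p-value: {p_value:.4f}")  # below 0.05 suggests the difference is real
```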


Step #2. Hold a strategy session

Once you have your key performance indicators, get a group of people together to strategize.

Strategy sessions give you the opportunity to bring ideas to the table, but more importantly, the brainstorming with others helps keep your test plans on track.

People who are not as familiar with your project offer the advantage of seeing these webpages or emails for the first time and can point out parts that may have been overlooked, or offer a different perspective.

In the end, it is worth taking the time to collaborate and build a better test – a foundation that can, and hopefully will, make all the difference when the results start coming in.


Step #3. Win or lose, learn something!

The results make all the hard work and thought put into building a test worth it. This is the point in the process when you sit back and watch the data.

Like a sporting event, you are rooting for your treatment to outperform the control. When your treatment achieves a lift over the control, it feels just like your team scored a touchdown – except better, because you’re the one on the marketing field running the ball in for the score.

It is important to remember that even if your treatment(s) did not win during the test, there are still valuable lessons to be learned.

You should always dig into the results and walk away with some type of learning, even if it is not what you originally intended to learn from the test.

Read more…