Archive

Posts Tagged ‘A/B split testing’

Value Proposition: 4 considerations to identify the ideal channel for testing

March 3rd, 2014 2 comments

In my previous MarketingExperiments Blog post, I presented 3 steps to take in laying the groundwork for value proposition testing. In that post, I covered brainstorming value points, identifying supporting evidence and categorizing information into testable buckets.

While the lessons learned through these steps are valuable, the real prize comes from the ability to use those discoveries in a value proposition experiment. However, there’s still much to be considered before you’re ready to launch your first test.

So in today’s post, I will continue where we left off and explain the next step in developing a strategy for a solid value proposition test.

Before you begin to think of the ways to express your value proposition, you first have to understand the limitations created by the channels utilized to build your test.

We will always aim to find that perfect channel – the one that provides you with everything you ever wished to know about your customers – but oftentimes, that channel doesn’t exist and we have to settle for good enough.

Today, we will dive into four areas you need to consider in order to identify that ideal channel for testing:

  • The amount of traffic the channel has
  • The motivation of visitors within the channel
  • The need for an image
  • The ability to split visitors

 

Consideration #1. The amount of traffic in the channel

The first consideration is the amount of traffic flowing into your channel. In value proposition testing, we often have to duplicate a portion of the content between treatments in order to provide enough information to communicate the value of the product or service.

Though necessary, this duplication of information can often lead to treatments performing similarly in a test because they are perceived similarly by visitors. Because of this, it’s important to run tests within channels that have high traffic levels.

The more traffic a channel has, the larger the sample size you will get from your tests, and consequently, the faster real trends can emerge from your data.

Some examples of high traffic areas where I’ve witnessed value proposition testing take place include PPC advertising – both banner and text – as well as on homepages and product pages.

One caveat worth mentioning here is the challenge of reaching validity when it comes to highly specialized products.

Though it would be nice to learn something about our niche audiences for those highly defined and specialized products, if you don’t have sizable traffic visiting those respective webpages (thereby entering the channel), you may not be able to reach statistical validation in a reasonable amount of time.

I would love to be able to include the ideal number of visitors you need in your channel, but that number is only one of many variables that impact how long it will take to reach a sample size large enough for statistical validation.

What I can also tell you is that the more visitors you have, the more flexibility you have with regard to the number of treatments you can run, the conversion rates you can reliably work with, and the number of variables you can change while still measuring a difference.
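If it helps to see the math behind that tradeoff, here is a minimal sketch, in Python, of the standard two-proportion sample-size estimate many testing tools use under the hood. The baseline conversion rate and the lift you hope to detect are hypothetical numbers chosen purely for illustration, not figures from this post.

import math
from statistics import NormalDist

def visitors_per_treatment(baseline_rate, relative_lift, alpha=0.05, power=0.80):
    """Approximate visitors needed in each treatment to detect a relative
    lift in conversion rate with a two-sided two-proportion z-test.
    The 95% confidence / 80% power defaults are common conventions."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * pooled * (1 - pooled))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical example: a 3% baseline conversion rate and a hoped-for 15% relative lift
print(visitors_per_treatment(0.03, 0.15))  # roughly 24,000 visitors per treatment

Notice how quickly the requirement grows as the baseline rate or the expected lift shrinks, which is exactly why low-traffic channels struggle to validate in a reasonable amount of time.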

 

Consideration #2. The existing motivations of your visitors

The second consideration is the existing motivation of visitors to your channel.

It’s very important to know if visitors are sufficiently motivated to pursue the product or service you’re promoting. It doesn’t make sense to offer a winter coat to someone living on the equator.

It also doesn’t make sense to try to sell that same coat to someone living in the Arctic Circle and then compare the responses between said prospects as apples to apples.

Those customers are way too different to generalize their responses across an entire product or service market vertical.

For a general example, it’s OK to test the value proposition of a car company in an advertisement posted on a website for auto enthusiasts.

But the same ad likely wouldn’t be as effective on an informational website for new mothers. Though you would likely see interaction with both ads, mixing the markets would muddy the results of your test because the interaction in each place teaches us about largely different audiences.

Consequently, instead of concentrating on what your ideal customer values about your car company, you would be mixing the views of your real prospects with the views of someone who may never look at another one of your advertisements, let alone buy this year’s new model.

The main takeaway here is always make sure that the visitors to your channel are all similarly motivated to learn more about your product or service. You can’t track clicks if no one is motivated to click.

Read more…

A/B Testing: 3 steps to help you test smarter, not harder

January 20th, 2014 2 comments

2014 is here and with it opens a new year full of many opportunities to test. We are presented with a clean slate.

Any missed opportunities or “we should’ve tested that” moments are in the past. This post walks through the basic steps for approaching testing so you can test smarter, not harder, when setting up your tests in 2014.

 

Step #1. Develop key performance indicators first

Every test begins with an idea for making a webpage or email better and a decision about the key performance indicators (KPIs) to measure. In other words, what are we trying to learn from the test?

A few examples of questions we might ask ourselves when developing KPIs are:

  • Are we trying to increase clickthrough rate to another page in the funnel?
  • Are we trying to increase the conversion rate on a product page?
  • Are we trying to determine user motivation by changing the value proposition?

Setting a clear objective for your key performance indicators in the beginning helps to focus on the main learning of the test.

For example, if you change the copy of the call-to-action (CTA) in one treatment versus the control, your key performance indicator is clickthrough rate, because the change to the CTA button copy is what you expect to affect it.

It is important to have a clear understanding of what you are trying to learn from your test based on the key performance indicators you have developed. It is easy to lose sight of this once the data starts coming in, but this is the basis of the test so stay focused on what you are trying to learn.
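To make that CTA example concrete, here is a minimal sketch in Python of comparing clickthrough rate between a control and a treatment with a two-proportion z-test. The click and visitor counts are made up for illustration; they are not from an actual MECLABS test.

from math import sqrt
from statistics import NormalDist

def compare_ctr(clicks_a, visitors_a, clicks_b, visitors_b):
    # Clickthrough rates for control (a) and treatment (b)
    p_a, p_b = clicks_a / visitors_a, clicks_b / visitors_b
    # Two-sided two-proportion z-test on the difference
    pooled = (clicks_a + clicks_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, p_value

# Hypothetical results: 420 clicks from 10,000 visitors vs. 500 from 10,000
ctr_control, ctr_treatment, p = compare_ctr(420, 10_000, 500, 10_000)
print(f"Control CTR {ctr_control:.2%}, treatment CTR {ctr_treatment:.2%}, p = {p:.3f}")

Whatever tool you use, the point is the same: decide up front that clickthrough rate is the KPI, and judge the treatment against the control on that metric.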

 

Step #2. Hold a strategy session

Once you have your key performance indicators, get a group of people together to strategize.

Strategy sessions give you the opportunity to bring ideas to the table, but more importantly, the brainstorming with others helps keep your test plans on track.

People who are not as familiar with your project offer the advantage of seeing these webpages or emails for the first time and can point out parts that may have been overlooked, or offer a different perspective.

In the end, it is worth taking the time to collaborate on building a better test – a foundation that can, and hopefully will, make all the difference when the results start coming in.

 

Step #3. Win or lose, learn something!

The results make all the hard work and thought put into building a test worth it. This is the point in the process when you sit back and watch the data.

Like a sporting event, you are rooting for your treatment to outperform the control. When your treatment achieves a lift over the control, it feels just like your team scored a touchdown – except better, because you’re the one on the marketing field running the ball in for the score.

It is important to remember that even if your treatment(s) did not win during the test, there are still valuable lessons to be learned.

You should always dig into the results and walk away with some type of learning, even if it is not what you originally intended to learn from the test.

Read more…

Landing Page Optimization: 3 template design changes to help you serve multiple customer types

November 14th, 2013 No comments

Optimizing your landing pages for multiple buyer personas is a difficult undertaking because conversion optimization principles often call for a focus on a single customer type.

So, how do you optimize for customers with a variety of needs and interests?

In a recent Web clinic, Jon Powell, Senior Manager of Research and Strategy, MECLABS, addressed that question by presenting three key design changes that impact conversion on pages that serve diverse customer groups.

To clarify the offer for multiple personas, Jon recommended focusing on these three types of page template adjustments:

  • Number of conversion paths
  • Availability of subheadlines and headers for instructional guidance
  • The sequence of content

“If you have the ability and power to make changes to your pages, then there are three types of page template adjustments that we’ve discovered that can really make a difference,” Jon explained.

 

Design Change #1. Number of conversion paths

 

In the control pop-up, the MECLABS research team hypothesized that using a single conversion path for the different customer types was not optimal. 

 

In the treatment, the team increased the number of options to three and distributed the value copy from the control across those paths to appeal to the different customer types.

The team also utilized the header as an instructional headline that attempted to appeal to all three customer paths. The overall redesign resulted in a 25% increase in clickthrough rate.

 

Design Change #2. Availability of subheadlines and headers for instructional guidance 

 

In this experiment, Jon explained the MECLABS research team also hypothesized the number of conversion paths was not ideal based on the commonalities among the different customer types.

 

In the treatment, the team decreased the number of conversion paths and added a subheadline to help explain the process and support the value copy.

The design changes in the experiment resulted in a 32.4% increase in conversion.

 

Design Change #3. Sequence of content

 

For this experiment, Jon explained the MECLABS research team hypothesized that the content’s initial positioning was impacting engagement. 

 

For the treatment, the team moved the content higher up on the page, added a subheadline to help explain the process and support the value copy, and made changes to the color design.

The changes in the experiment netted the team a 181% increase in clickthrough.

 

What you need to know

Testing these changes can help you discover the optimal number of paths, the ideal subheadline and best arrangement for your content.

But these changes also give your marketing efforts clarity, or as Jon explained, “The marketer’s goal is not simplicity; the marketer’s goal is clarity.”

Keeping landing page messaging, design and layout simple makes it easy for the visitor to understand what action you want them to take on the page.

To learn more about optimizing pages for multiple customer types, you can watch the free on-demand MarketingExperiments Web clinic replay of “Optimizing for Multiple Personas.”

 

Related Resources:

Online Testing: How a pop-up chat test increased conversion 120%

Landing Page Optimization: 6 common traits of a template that works

Landing Page Optimization: Color emphasis change increases clickthrough 81%

Email Marketing: 6 bad habits to avoid when testing emails

November 11th, 2013 No comments

In my experience helping our Research Partners with email campaigns, I’ve discovered that when it comes to testing, there is no one-size-fits-all approach.

Your email campaigns are fundamentally different than landing pages or any other elements of your marketing mix and sales funnel.

Consequently, your approach to testing them will also be different.

They have different goals, elements, best practices and bad habits to avoid.

In today’s MarketingExperiments Blog post, I wanted to share six common bad habits to avoid when testing email campaigns.

 

Bad Habit #1. Not knowing how your email list is being split

This is a mistake I see often, and it’s one of the most avoidable: marketing teams will test with only a limited understanding of how their email list is being split into test cells.

Or worse, they don’t know how their email platform splits at all. This is troublesome because it can easily cause sampling errors that will skew your results.

So first, check the platform you’re using to learn how the list splitting algorithm works.

If the platform doesn’t provide specific information about how it allocates test cells, consider testing a dual control (A/A) email send to gain a better understanding of how much your data may vary naturally.

Also, try to make sure recipients are allocated fairly across each test cell, especially if your list contains data indicating that recipients have varying degrees of motivation.

The reason for this is that unlike A/B split testing on a landing page, where traffic comes from multiple sources and is split at random, an email list is a finite traffic source.

What if I’m splitting lists myself, you ask?

If that’s the case – try to do so as randomly as possible.
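For those splitting the list themselves, a seeded random shuffle is one simple way to do it. This is a minimal sketch in Python; the field names and the two-cell split are assumptions for illustration, not a prescription from any particular email platform.

import random

def split_list(recipients, n_cells, seed=2014):
    """Randomly assign recipients to n_cells test cells of (near) equal size.
    A fixed seed keeps the split reproducible if you need to rerun it."""
    shuffled = list(recipients)            # leave the original list untouched
    random.Random(seed).shuffle(shuffled)
    return [shuffled[i::n_cells] for i in range(n_cells)]

# Hypothetical recipient records
recipients = [{"email": f"user{i}@example.com"} for i in range(10_000)]
control, treatment = split_list(recipients, n_cells=2)
print(len(control), len(treatment))  # 5000 5000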

 

Bad Habit #2. Drawing conclusions after only one test

Judging a test by a single email drop is a mistake, even if your testing tool says your results have reached statistical significance.

I recommend testing your treatments over multiple email drops to ensure you are seeing some form of consistency in your results before making a business decision.

Also, one common question I get about data analysis is which method of analysis should be used to interpret your results.

In this case, I recommend recording the data as separate points in time instead of lumping all of the data together.

The reason for this is that separate points in time give you a better picture of behavior across sends, and that picture is likely more accurate because it also accounts for variability over time.
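As a simple illustration of keeping drops separate, the sketch below records per-drop clickthrough rates and checks whether the direction of the result is consistent across sends. All of the numbers are hypothetical.

# Hypothetical per-drop results: (label, control clicks/sends, treatment clicks/sends)
drops = [
    ("Drop 1", (310, 20_000), (355, 20_000)),
    ("Drop 2", (295, 20_000), (340, 20_000)),
    ("Drop 3", (322, 20_000), (348, 20_000)),
]

for label, (c_clicks, c_sends), (t_clicks, t_sends) in drops:
    control_ctr = c_clicks / c_sends
    treatment_ctr = t_clicks / t_sends
    leader = "treatment ahead" if treatment_ctr > control_ctr else "control ahead"
    print(f"{label}: control {control_ctr:.2%}, treatment {treatment_ctr:.2%} ({leader})")

# Treating each drop as its own data point shows whether the lift holds up
# over time rather than being an artifact of a single send.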

 

Bad Habit #3. Random send times

Unlike landing page testing, which receives a continuous stream of traffic, the results of an email drop represent a single point in time.

Consequently, if you are not consistent in the delivery of your email drops – time of day, day of week, etc. – this inconsistency will impact your ability to interpret results accurately.

Here’s why …

If you think about when you go through the emails in your own inbox, it’s likely you do so at random. So, the only way to account for that randomness is by sending emails on a consistent schedule.

Of course, you can adjust that send schedule to test your way into discovering the ideal time to send your customers an email, but keeping the frequency constant is key.

 

Bad Habit #4. Not having a clear-cut goal in your testing

This is another common mistake I see that’s avoidable – lacking a clear test hypothesis.

Email is one of the most constrained channels. The general conversion path of an email looks something like this:

  1. You send an email to your list
  2. The customer receives your email in their inbox (unless it gets caught in a spam filter)
  3. They identify the sender, skim the subject line and choose to open or delete the email
  4. If they choose to open the email, hopefully they engage the content
  5. If all goes to plan after engaging the content, they convert

But even with the path clearly laid out, you still can’t go anywhere without a sense of direction.

That’s why you want to make sure you have a hypothesis that is clear and testable right from the start, to help keep your testing efforts strategically focused.

 

Bad Habit #5. Inconsistent key performance indicators

Ultimately, conversion (or revenue) of the treatment cell should be used to determine the winner. Whatever your goals, the point here is to make sure you are consistent as you evaluate the results.

Also, I would caution against judging test results solely on clickthrough or open rates, which tend to be the primary drivers in email tests. Secondary metrics can tell a very interesting story about customer behavior if you’re willing to look at the data from all angles.

 

Bad Habit #6. Not setting a standard decay time

So, what is time decay exactly?

To keep things simple, time decay is really just a set period of time after an email drop within which activity – an open, a click, etc. – is counted.

If you are judging multiple drops, the data for each drop should follow a standard decay guideline that everyone on your team understands and agrees with. We generally suggest a week (seven days) as enough time to call the performance of a typical email drop.

One caveat worth mentioning here is that there is no magic bullet with email decay time.

The goals and objectives for campaigns vary by industry, so there are no universal standards in place.

Your organization should come to a consensus about a standard decay time to judge campaign performance before the campaign gets underway.
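In practice, holding everyone to the agreed decay window is just a matter of filtering events by timestamp before reporting. Here is a minimal sketch in Python; the event fields are illustrative, and the seven-day window simply follows the general guideline mentioned above.

from datetime import datetime, timedelta

DECAY_WINDOW = timedelta(days=7)  # the one-week guideline discussed above

def events_within_decay(send_time, events, window=DECAY_WINDOW):
    """Keep only the opens/clicks that happened within the agreed decay
    window after the drop, so every drop is judged on the same terms."""
    cutoff = send_time + window
    return [e for e in events if send_time <= e["timestamp"] <= cutoff]

# Hypothetical event log for one drop
send_time = datetime(2013, 11, 11, 9, 0)
events = [
    {"type": "open",  "timestamp": datetime(2013, 11, 11, 9, 42)},
    {"type": "click", "timestamp": datetime(2013, 11, 13, 18, 5)},
    {"type": "open",  "timestamp": datetime(2013, 11, 20, 7, 30)},  # outside the window
]
print(len(events_within_decay(send_time, events)))  # 2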

Read more…

E-commerce: Category page test increases order rates 20%

November 4th, 2013 2 comments

Category pages play a key role in e-commerce, yet they are often left to the mercy of limited, if any, best practices and minimal testing.

In today’s MarketingExperiments Blog post, we’re going to take a look at how the MECLABS research team tested a category page that led to a 20% increase in order rates.

First, let’s review the research notes for some background information on the test.

Background: An e-commerce site offering strength training and conditioning tools for professional athletes.

Goal: To increase order rate.

Primary Research Question: Which category page will generate the highest order rate?

Approach: A/B variable cluster test

 

Control

 

In the control, the MECLABS research team hypothesized critical pieces of information were difficult for customers to locate.

Here were some of the critical information pieces they identified:

  • The focus of the workshops
  • Location(s)
  • Date(s)

 

Treatment 

 

In the treatment, the team designed a category page that included the missing pieces of information identified in the control.

The team also changed the layout of the page to flow along a more natural eye-path.

 

Results 

 

What you need to know

By including the missing pieces of information identified in the control and changing the layout to flow along a user’s natural eye-path, the treatment outperformed the control by 20%.

To learn more about how category pages impact the sales funnel, you can watch the free MarketingExperiments Web clinic replay of “Category Pages that Work.”

Read more…

Online Testing: How a pop-up chat test increased conversion 120%

October 7th, 2013 2 comments

In a world where content is king, sifting through Internet clutter posing as content is generally accepted as a daily task for many of us.

The perpetual endeavor of making genuinely valuable content stand out to an apathetic populace numbly navigating the Internet landscape without any concern for your KPIs can feel overwhelming at times.

Due to this harsh reality, many marketers turn to the pop-up window to solve the issue of engagement.

Their hope rests on the notion that an interruption will somehow mark their content as valuable, though it often has the opposite effect: content is likely to be ignored the second a pop-up is triggered.

Yet, the use of the pop-up window has proliferated across the Web and has become an often valuable tool for many skilled marketers. However, analysis of this webpage element yields one fundamental question:

“Is the pop-up window an effective option when attempting to gain a visitor’s attention?”

 

Don’t speculate, test

Consider a recent test performed by MECLABS. The team attempted to optimize a landing page for a well-known Fortune 500 B2B company. Traffic to this landing page was driven by paid search (PPC) ads.

The landing page’s objective was to build the value of a free trial offer in order to increase the number of leads submitted by visitors.

 

Control page with pop-up 

 

The page provided visitors with three options for speaking to a qualified representative:

  • Click on a chat box, which automatically pops up after a set amount of “time on page”
  • Fill out a lead capture form which notifies a sales rep to contact them
  • Call a phone number

Because the chat box converts at a significantly higher rate than the other two options, the MECLABS research team was tasked with optimizing the page to increase the number of chat clicks.

To accomplish this, we set up a test in which the pop-up was replaced with a static image of a chat box placed on the right side of the page.

 

Treatment page with static chat box 

(*Please Note: We have anonymized the landing pages to protect the competitive advantage of our Research Partner. – Ed.)  

 

Visitors were encouraged to click on the image of the chat box, which then opened an actual chat box in a new window. Other variables that changed on the page were:

  • Value copy associated with the process of contacting a representative was added above the form fields
  • The location of the lead capture form, which was pushed farther down the page to accommodate the increased value copy

Would a static page element that doesn’t actively vie for a visitor’s attention perform better or worse than a pop-up?

Read more…