
Archive for the ‘Analytics & Testing’ Category

Here’s Why Most A/B Testing is Boring and Not Really Worth the Trouble

April 6th, 2015 1 comment

Do a quick Google search on “things to a/b test on a website,” scan the results for a moment, then come back and read the rest of this article.

Most of you reading this are marketers, so you know I’m taking a big risk by telling you to go do something else before you read my article.

In fact if you’re reading this now, you’re probably one of the very few who made it back from that incredibly distracting activity I had you do. Thank you. You are exactly the person I want to be reading this. The others can go on their merry way. They are not the ones who need to hear this.

I had you do that search because the Internet is full of people telling you to test things on your website such as color, button size, layouts, forms, etc. I wanted you to get an idea for what’s out there.

Now, I want you to understand why almost everyone writing those articles is wrong

… or at the very least, missing the point.

Please don’t view this as me putting down the people who wrote those articles. I know a few of them personally, and I highly respect the work they are doing. This is not about whether their work is good or bad.

I’ve personally written many articles exactly like the ones they’re writing. In fact, they have one up on me because at least their articles are ranking in Google for popular search terms.

The reason they are missing the point is that most of those articles are focused on the elements of a page rather than the serving of a customer.

I get why they do it.

Webpages are far easier to understand than people. Webpages are a collection of 0s and 1s. People are a collection of who knows what.

And most of you, readers, are looking for webpage fixes — not a deeper, fuller way to serve your customer.

There is nothing necessarily wrong with that; we naturally focus on our own self-interest, and that isn't wrong in itself.

What is wrong is the methods we use to achieve our own goals. I don’t mean morally wrong. I mean practically wrong.

 

Our objective should always be: Make as much money as possible.

MECLABS Institute has found, after more than 15 years of research, that the best method for achieving this objective is to spend as much money as possible on serving your customer.

Until we can view every A/B test we run as an opportunity to better serve our customers, we will just be running (ultimately) ineffective tests on page elements.

It doesn’t really matter in the long run which color, layout or page element is going to perform well.

The Internet is constantly changing. Design trends are always going to influence how we look at webpages and their elements. What matters for marketers in the long run is how well we understand and, consequently, how well we can serve our customers.

Flint McGlaughlin, Managing Director and CEO, MECLABS, calls this understanding of our customers “customer wisdom.”

This is also why he often says, “The goal of a test is not to get a lift, but rather to get a learning.”

However, it’s one thing to hear this, another to really understand what it means.

It really means we want to conduct research, not run a test.

We want to learn a tangible lesson about our customer so that we can apply it to other areas of our marketing and achieve a maximum return on the amount of time and energy we spend on testing.

Let me show you what I mean with a real-world example. Here’s what happens when you just run an A/B test that is focused on a page element. Let’s take color for instance.

You have two treatments. The only thing changed is the background color. 

 

You also have a result. In this case, the result was a 19.5% increase in clickthrough at a 92% level of confidence. But here’s where things get tricky.
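To make a result like this concrete, here is a minimal sketch of how a lift and its confidence level are computed from raw counts, using a standard two-proportion z-test. The visitor and click counts below are hypothetical; the article reports only the 19.5% lift and the 92% confidence level, not the underlying numbers.

```python
# Sketch: computing relative lift and a two-sided confidence level
# for an A/B clickthrough test. Counts are hypothetical.
from math import erf, sqrt

def ab_confidence(clicks_a, visits_a, clicks_b, visits_b):
    """Return (relative lift of B over A, two-sided confidence level)."""
    p_a = clicks_a / visits_a
    p_b = clicks_b / visits_b
    # Pooled standard error under the null hypothesis of equal rates
    p_pool = (clicks_a + clicks_b) / (visits_a + visits_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    # Two-sided confidence = 1 - p-value, via the normal CDF
    confidence = erf(abs(z) / sqrt(2))
    lift = (p_b - p_a) / p_a
    return lift, confidence

# Hypothetical counts chosen to illustrate a 19.5% relative lift
lift, conf = ab_confidence(clicks_a=200, visits_a=5000,
                           clicks_b=239, visits_b=5000)
print(f"lift: {lift:.1%}, confidence: {conf:.1%}")
```

Note that a 92% confidence level falls short of the 95% threshold most testing tools treat as statistically significant, which is part of why the result is "tricky."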

Read more…


Digital Analytics: How to use data to tell your marketing story

March 12th, 2015 No comments

When it comes to being a data-driven marketing team, there is not as much opposition between content and data as once thought.

Two central themes that highlight this idea came out of the Opening Session of The Adobe Summit — The Digital Marketing Conference. They are:

  • Use data correctly to support a story
  • Ensure the story you’re telling can be relayed to a wider audience

Marketers need to quit treating their data analysts as number-crunching minions and start seeing them as contributors with a vital perspective of the greater customer story.

Nate Silver, Founder and Editor in Chief, FiveThirtyEight.com, spoke about how useless data can be if you can’t communicate it to a wider audience. The practice of collecting, analyzing and interpreting data can be very costly, and marketers need to maximize ROI by making sure they tell the correct story and that it can be spread across their organization.

  Read more…


Measuring Success: The distance between a test and the conversion point

March 9th, 2015 2 comments

There’s a misconception that I’ve encountered among our research teams lately.

The idea is that the distance between the page being split tested and a specified conversion point may be too great to attribute the conversion rate impact to the change made in the test treatment.

An example of this idea is that, when testing on the homepage, using the sale as the conversion or primary success metric is unreliable because the homepage is too far from the sale and too dependent on the performance of the pages or steps between the test and the conversion point.

This is only partially true, depending on the state of the funnel.

Theoretically, if traffic is randomly sampled between the control and treatment with all remaining aspects of the funnel consistent between the two, we can attribute any significant difference in performance to the changes made to the treatment, regardless of the number of steps between the test and the conversion point.
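This attribution argument can be illustrated with a small simulation (all rates below are assumed for illustration): if assignment is random and every downstream step is identical for both arms, a change made on the first page still shows up in the final conversion count, no matter how many steps sit in between.

```python
# Illustrative simulation with assumed rates: a first-page change
# propagates to the final conversion metric through an otherwise
# identical multi-step funnel.
import random

random.seed(42)

def simulate(visitors, first_step_rate, downstream_rates):
    """Count visitors who pass the first page and every downstream step."""
    conversions = 0
    for _ in range(visitors):
        if random.random() > first_step_rate:
            continue  # did not engage with the tested page
        # Downstream steps are identical for control and treatment
        if all(random.random() < r for r in downstream_rates):
            conversions += 1
    return conversions

downstream = [0.6, 0.5, 0.8]  # three intervening steps, same for both arms
control = simulate(50_000, 0.10, downstream)
treatment = simulate(50_000, 0.12, downstream)  # tested change lifts step 1 only
print(control, treatment)
```

Because the downstream steps are held constant, the gap between the two final counts is attributable to the first-page change alone, which is exactly the theoretical point above.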

More often than not, however, practitioners do not take the steps necessary to properly control the experiment. Other departments may launch new promotions or test other channels or parts of the site simultaneously, leading to unclear, mixed results.

So I wanted to share a few quick tips for controlling your testing:

 

Tip #1. Run one test at a time

Running multiple split tests in a single funnel results in a critical validity threat that prevents us from evaluating test performance because the funnel is uncontrolled and prospects may have entered a combination of split tests.

Employing a unified testing queue or schedule may provide transparency across multiple departments and prevent prospects from entering multiple split tests within the same funnel.
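A unified testing queue can be as simple as a shared record of which funnel each active test occupies. The sketch below is a hypothetical structure, not a MECLABS tool: it refuses to schedule a second split test in a funnel that already has one running.

```python
# Minimal sketch (hypothetical structure): a shared queue that blocks
# overlapping split tests within the same funnel.
class TestQueue:
    def __init__(self):
        self.active = {}  # funnel name -> currently running test

    def schedule(self, funnel, test_name):
        if funnel in self.active:
            raise ValueError(
                f"Funnel '{funnel}' is already running "
                f"'{self.active[funnel]}'; overlapping tests would "
                "confound each other's results.")
        self.active[funnel] = test_name

    def finish(self, funnel):
        self.active.pop(funnel, None)

queue = TestQueue()
queue.schedule("checkout", "homepage-headline-test")
queue.finish("checkout")
queue.schedule("checkout", "cart-layout-test")  # fine once the first ends
```

In practice the "queue" is often just a shared calendar or spreadsheet visible to every department, but the rule it enforces is the same: one test per funnel at a time.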

 

Tip #2. Choose the right time to launch a test

 

External factors such as advertising campaigns and market changes can impact the reliability or predictability of your results. Launching a test during a promotion or holiday season, for example, may bias prospects toward a treatment that may not be relevant during “normal” times.

Being aware of upcoming promotions or marketing campaigns as well as having an understanding of yearly seasonality trends may help indicate the ideal times to launch a test.

Read more…


Testing and Optimization: A/B tests on landing pages, email and paid search from case studies

March 5th, 2015 1 comment

No matter what type of digital marketing campaigns you are executing, there are elements in every channel that can be tested and optimized to improve campaign performance.

For example, email subject lines, copy, design and even the “from” field can be tested. Webpage elements ripe for testing include design, layout, copy, color, call-to-action button elements and more. With paid search you should be testing keywords on an ongoing basis to continually improve your PPC spend, but you can also test ad copy and calls-to-action.

At MarketingSherpa (sister company of MarketingExperiments), we publish case studies in our newsletters every week, and very often those case studies include a testing and optimization element. For today’s MarketingExperiments Blog post, I wanted to share three of those examples taken from previously published newsletter case studies.

I hope these tests give you some ideas on testing your own digital marketing channels.

 

Test #1. Webpage: Increasing lead generation on a landing page

This first test was actually a collaboration between researchers at MECLABS (the parent company of MarketingExperiments) and HubSpot and was conducted during Optimization Summit 2012. The full test was covered in the article, “A/B Testing: How a landing page test yielded a 6% increase in leads.”

A lead form landing page for HubSpot’s software with a free special report incentive for filling out the registration form was tested, with the Summit attendees providing input on what to test.

Before the Summit, the testing team came up with four hypothesis options:

Hypothesis 1 — Visitors arriving to the page are highly motivated to download the e-book based on brand recognition. Removing friction from the page will result in a higher conversion rate.

Hypothesis 2 — Communicating the urgency of the offer — that the free e-book download is a limited-time offer — will result in a higher conversion rate.

Hypothesis 3 — Adding more visual value to the page, such as charts and graphs from the e-book, will result in a higher conversion rate.

Hypothesis 4 — Incorporating pricing information to increase the perceived value of the e-book will result in a higher conversion rate.

The audience was allowed to choose which one to test and decided on Hypothesis 2.

 

Control

 

Treatment (Hypothesis 2)

Read more…


The Writer’s Dilemma: How to know which marketing copy will really be most effective

February 5th, 2015 2 comments

I’m staring at a blank page on my screen. There are several directions I could go with this piece of writing, and I’m not sure which will be most helpful to you:

  • How to improve the conversion rate of your email marketing
  • How to best understand and serve your customers
  • How to split test your email marketing

I’m sure you face this dilemma as a copywriter or marketing manager as well:

  • Which subject line will be most effective?
  • How should you craft the headline?
  • What body copy would be most helpful (and generate the most response) from customers?

So that’s what today’s MarketingExperiments Blog post will be about. Essentially, your product and offers likely have many elements of value, and there are many ways you can message that value, but what will work best with your potential customers?

To give you a process to follow, I’ll use an example:

We recently ran a public experiment to help answer the above questions for VolunteerMatch, a nonprofit organization with a unique funding model. It sells a Software as a Service (SaaS) product to companies to help fund its organization, which has generated close to $1 billion in social value each year through its work with nonprofits and volunteers.

Let’s take a look at the process we used for this public experiment and how you can repurpose it for your own marketing efforts.

 

Step #1: Get some new ideas

You think, breathe, eat, sleep and dream about the products and services you advertise and market. So sometimes it helps to step out of your box and get a new perspective.

For example, MarketingExperiments’ parent company, MECLABS Institute, uses Peer Review Sessions to foster idea collection and collaboration from new and unique viewpoints.

To get some new ideas for VolunteerMatch, we launched the public experiment with a contest on the MarketingExperiments Blog as well as on The Moz Blog, where we asked marketers to comment with their ideas for effective subject lines for a chance to win tickets to Email Summit and a stay at the event’s host hotel, the ARIA Resort & Casino. We received subject line ideas from 224 marketers.

However, this is only one way to step outside the box and get a fresh perspective on your products and services. You could also:

  • Talk to people in departments you don’t normally engage with (e.g., customer service, sales, product development, IT, accounting, legal … keep your options open)
  • Conduct surveys or focus groups with potential customers
  • Read reviews, feedback forms, forum conversations and social media to learn the language the customers use when talking about your products
  • Get on the phone and interview customers (and even people who chose not to be customers)
  • Read websites, magazines and newspapers aimed at your buyer and see what language they use and values they emphasize
  • Go to a museum, national park, art fair, farmer’s market, the symphony or some other creative endeavor to help spark some new thinking

My point is: cast a wide net. Get a lot of ideas at this stage.

 

Step #2: Coalesce these ideas around key points of value

Once you have all of these ideas, they will likely naturally fall into a few main categories of value around your products or services.

When conducting this public experiment with VolunteerMatch, we started with three elements of value (listed below) to help focus marketers who were entering the contest. When they entered, they would leave a comment on the blog post with their suggested subject line and which category of value that subject line was intended to communicate.

Defining the value upfront will help you know what elements of value you already consider important to your product or service when conducting Step #1.

However, it is important to stay open-minded. When you sort the feedback you’ve received into different categories of value, you may find that some of it doesn’t fit into the categories you’re using. You can find gold in these outliers — new value categories for your product that you had not considered before.

The three categories of value we focused on for VolunteerMatch were:

  • Category #1: Proof, recognition, credibility
  • Category #2: Better, more opportunities to choose from
  • Category #3: Ease of use

We also gave marketers an opportunity to come up with a category of value we may have overlooked.

From the suggestions we received on the blog post, I picked a new category to test along with the categories of value we had already identified. Suzanne suggested, “I would argue that true volunteers are motivated by something more profound from within: dedicated volunteers are passionate about a particular cause.”

Based on this response, we added one more category of value:

  • Category #4: Passion

 

Step #3: Identify the best expressions of these categories of value

Now that you’ve identified a few areas of value to focus on, look through all of the messaging for the value from the suggestions you received and identify a few examples of wording that you think is the most effective.

I read through each and every subject line suggested in the comments on the MarketingExperiments Blog, and Cyrus Shepard, Head of SEO and Content, Moz, read through all the subject lines proposed by marketers through The Moz Blog.

We settled on these seven subject lines:

Category #1: Proof

  • Attention Business Leaders: How to Increase your ROI through Employee Volunteer Initiatives
  • Volunteering matters. We have the proof.

Category #2: Network size

  • CC Your Boss: 1,000+ Ways To Make A Difference (Inside)
  • Does your company care? Thousands of ways to prove it.

Category #3: Ease of use (app)

  • The volunteer app your coworkers will talk about
  • The One App That Can Change The Way Your Company Gives Back

Category #4: Passion (no feature)

  • Spread the Only “Good” Office Virus
  • Spread the Only “Good” Office Virus (I’ll tell you why this subject line is listed twice in the next step)

 

Step #4: Test with your audience to see which value and messaging combination is the most effective

In this case, my colleague, Jon Powell, Senior Manager, Executive Research and Development, MECLABS Institute, ran a split test with VolunteerMatch’s email list to see which subject lines would be most effective and which value is most appealing to potential customers.

Testing with your potential customers is another way to break down that fourth wall with customers and discover what is really most valuable about your product to inform and improve your copywriting.

Here was the email that was sent. (Note: The last, bolded line was changed for different treatments to correspond to the value expressed in the subject line that was tested.)

 

I listed the “passion” subject line twice because Jon used it as a double treatment. Essentially, this is a way to make sure the results that you see from an experiment are valid.

There should not be a significant difference between those two treatments since the subject line was the same. If there is a significant difference, it could be an indication of a validity threat, and you must question your data even further before trusting it (an issue we fortunately did not have with this test).
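The double-treatment (A/A) check described here is straightforward to automate: run the same significance test you would use between real treatments, and flag the experiment if the two identical sends differ significantly. The open counts below are hypothetical; the article does not report VolunteerMatch's actual numbers.

```python
# Sketch of the double-treatment (A/A) validity check: two identical
# subject lines should not differ significantly. Counts are hypothetical.
from math import erf, sqrt

def significant_difference(opens_1, sends_1, opens_2, sends_2,
                           threshold=0.95):
    """True if the two open rates differ at the given confidence level."""
    p1, p2 = opens_1 / sends_1, opens_2 / sends_2
    pooled = (opens_1 + opens_2) / (sends_1 + sends_2)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_1 + 1 / sends_2))
    z = (p2 - p1) / se
    return erf(abs(z) / sqrt(2)) > threshold

# Two sends of the same "passion" subject line (hypothetical counts):
if significant_difference(1040, 10_000, 1015, 10_000):
    print("Validity threat: identical treatments diverged; audit the test.")
else:
    print("A/A check passed: no significant difference between duplicates.")
```

A significant gap between two identical treatments cannot be caused by the subject line, so it points at the test mechanics themselves: uneven sampling, timing differences, or tracking errors.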

Read more…


Testing and Optimization: How to get that “ultimate lift”

January 19th, 2015 2 comments

What would you rather have: a 32-inch flat screen TV for $100 or a 72-inch flat screen TV for $150? After considering the first 32 inches cost $100, you would probably pay the additional $50 for another 40 inches.

This same principle can be applied to testing and optimization, with one caveat — you have to buy the 32-inch TV first.

 

A discovery, not a lift

Many marketers attempting to optimize and test webpages want big lifts; however, here at MECLABS Institute, we always say the goal of a test is not to get a lift but to gain discoveries about customer behavior. This makes sense at face value, but to be honest, when I first heard the expression, I thought to myself, “Well sure, that sounds like a good backstop in case you don’t get a lift.” However, I soon learned that it is more than a backstop — or, worse, an excuse.

As the curator for Dr. Flint McGlaughlin’s personal website, I often come across insightful observations. This next excerpt speaks particularly well to this topic of optimization and testing to obtain more than just a lift:

Too often, marketers are focused on results instead of reasons. We need to move deeper than ‘how much,’ into ‘why so,’ to answer an even more important question: What does this tell me about my customer or prospect? And so the goal of an optimization test transcends the notion of a lift and asks for learning. With sufficient insights we can obtain the ultimate lift. The more you know about the customer, the easier it is to predict their behavior. The easier it is to predict their behavior, the more you know about your value proposition. — Flint McGlaughlin

I have bolded what I think is the most important part of that quote for the sake of our discussion today. I am going to repeat it because it is so significant: “The goal of an optimization test transcends the notion of a lift and asks for learning. With sufficient insights we can obtain the ultimate lift.” — Flint McGlaughlin

Now we may ask ourselves, “What is the ultimate lift?” Some may think it is the biggest lift, or the most important by some arbitrary criterion. In my opinion, the “ultimate” lift is gaining insight about your customer and your value proposition that can be leveraged across all marketing channels.

 

Value Proposition 101

Before we go any further, if you are reading this article and do not know what I mean when I say “value proposition,” I urge you to investigate our research specifically around value proposition. However, for the sake of brevity (and this blog post), here is the oversimplified crash course:

A company’s value proposition essentially answers the question, “If I am your ideal prospect, why should I buy from you rather than from your competitors?”

The answer should be a “because” statement that stresses the appeal and exclusivity of the offer in a clear and credible way, supported by factual claims that add to its credibility.

 

Testing for the “ultimate lift”

Now that we have a basic understanding of a value proposition, here is an example from a past MECLABS research partner. In this experiment, we achieved the “ultimate lift” because of customer discoveries gained through value proposition testing.

 

Experiment ID: TP1306
Background: The research partner provides end-to-end market solutions for small and medium-sized businesses.
Primary Research Question: Which page will obtain the most form submissions?

First, here is the control:

 

CONTROL

 

After analyzing the offer on the page, MECLABS analysts identified the following value proposition for the offer.

Read more…
