Archive for the ‘Analytics & Testing’ Category

Website Analytics: How to use data to determine where to test

May 28th, 2015

At MarketingExperiments, we use patented heuristics to evaluate websites, emails and other digital media.

Often people think that a heuristic evaluation is a purely qualitative approach to a problem. It can be, but when you combine quantitative analytics with that qualitative knowledge, you increase your power to make meaningful change.

This post will show relevant metrics for three of these heuristic elements — motivation, value proposition and friction — that any marketer, from beginner to advanced, can use to discover opportunities for improvement.

 

Step #1. Look at the qualitative elements of your website

Often people just ask for data dumps. To make matters worse, they want them in a very short time. On top of that, most data scientists stick to what they are comfortable with: numbers.

To add context and save time, you must evaluate the site to see where you should focus your data analysis.

Put yourself in the customer’s mindset and go to your site. If you own creative or design, try to remove those biases as best as possible. What makes sense to you or feels like the right amount of information may be completely overwhelming to a customer who isn’t familiar with your product or industry.

 

Look for things that are broken, functions that are clunky, images that don’t make sense or add value and difficulty in completing the conversion. You must objectively look at all the pages in the conversion path and be familiar enough with them to make sense of the data that you pull.

Then pull data to illuminate the issues you observed and give validity to your theory. Key heuristic elements and supporting data together prove the problem.

 

Step #2. Understand motivation

Motivation cannot be affected, but it can be matched. Use the traffic reports below to determine who is coming, how they are coming and why they are coming.

Sources and mediums

The source is the specific place where the traffic originated and the medium is the type of traffic. Google is an example of a source, and the medium would segment this traffic by direct, organic, paid, email, etc.

Knowing where the traffic is coming from tells us about:

  • The types of users — for example: young, old or tech savvy
  • User brand awareness — having more organic than paid traffic can mean users already know you and your products.
  • Email engagement — users arriving from email are on a list and have engaged with you before.
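The source/medium report described above is essentially a grouped count of sessions. Here is a minimal sketch in plain Python with hypothetical session data (a real report would come from your analytics tool, not a hand-built log):

```python
from collections import Counter

# Hypothetical session log: one (source, medium) pair per session
sessions = [
    ("google", "organic"), ("google", "cpc"), ("google", "organic"),
    ("newsletter", "email"), ("(direct)", "(none)"), ("google", "organic"),
]

# Count sessions per source/medium pair, most common first
report = Counter(sessions).most_common()
for (source, medium), count in report:
    print(f"{source} / {medium}: {count}")

# A higher organic than paid share hints at existing brand awareness
organic = sum(1 for _, medium in sessions if medium == "organic")
paid = sum(1 for _, medium in sessions if medium == "cpc")
```

With this toy data, organic search outweighs paid three to one, which (per the bullets above) would suggest users already know the brand.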

Keywords

Match the motivation of search traffic by looking at what keywords were used to find the link to your site.

If the majority of keywords are branded, you can spend less real estate on your site telling people who you are with primary-level value proposition. If the majority are nonbranded, you know you must put value proposition about your company and products directly in the customer’s eye-path.

Next level

Look at the landing pages people visit most often, and look at the bounce and exit rates. If these are high, you haven’t aligned with the motivation of the visitors.
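Bounce rate (single-page sessions as a share of entrances) and exit rate (exits as a share of pageviews) can both be computed from raw session paths. A minimal sketch with hypothetical clickstream data:

```python
from collections import defaultdict

# Hypothetical clickstream: each session is an ordered list of pages viewed
sessions = [
    ["/landing-a"],                       # bounce on /landing-a
    ["/landing-a", "/pricing"],           # exit on /pricing
    ["/landing-b", "/pricing", "/cart"],  # exit on /cart
    ["/landing-a"],                       # another bounce
]

landings = defaultdict(lambda: {"entries": 0, "bounces": 0})
views = defaultdict(int)
exits = defaultdict(int)

for path in sessions:
    landings[path[0]]["entries"] += 1
    if len(path) == 1:                  # single-page session = bounce
        landings[path[0]]["bounces"] += 1
    for page in path:
        views[page] += 1
    exits[path[-1]] += 1                # last page of the session = exit

bounce_rate = {p: d["bounces"] / d["entries"] for p, d in landings.items()}
exit_rate = {p: exits[p] / views[p] for p in views}
```

A landing page with a high bounce rate, like `/landing-a` here (two bounces in three entrances), is exactly the kind of motivation mismatch this step is meant to surface.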

 

Step #3. Insert value proposition

There are four main levels of value proposition:

  • Primary — about your company
  • Product — about your available offerings
  • Process — what steps need to be taken to convert or what happens post-conversion
  • Prospect — telling potential customers why they should be doing business with you

 

It is important to use value proposition in the right place to increase the perceived value of the offer as you increase the perceived cost — which could be taking action or giving personal information.

Previous page

Look at pages such as “About Us,” “Contact Us” and “FAQ” and run a previous page report. This will show you where visitors need more value proposition to keep them in the funnel.

You can take it a step further and see where visitors went next by running a Next Page Report, which shows the areas where customers needed more information.

Exit Page Report

Find your top exit pages and then look at them to see if there is value proposition on those pages. If there isn’t, then add the appropriate value proposition. If there is, maybe the value proposition needs updating. This update could be as simple as changing where it is and what it says.

To understand what goes into a proper value proposition, review these five questions from a MarketingExperiments interview with Michael Lanning, the inventor of the term.

Scroll metrics

Often I see websites where the value proposition is on the page, but it sits too low or too far outside the eye-path to ever be seen.

See how far people are scrolling on your pages to determine the placement of your value proposition.
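Scroll tracking is usually instrumented as events that fire when a visitor passes depth thresholds (25%, 50%, 75%, 100%). A minimal sketch of aggregating such events, with hypothetical data:

```python
# Hypothetical scroll events: deepest threshold reached per pageview
depths = [25, 100, 50, 25, 75, 50, 100, 25]

pageviews = len(depths)
# Share of pageviews that reached each threshold
reach = {t: sum(1 for d in depths if d >= t) / pageviews
         for t in (25, 50, 75, 100)}
```

If `reach[75]` is low, value proposition placed in the bottom quarter of the page is rarely being seen, and moving it up is a reasonable test hypothesis.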

Next level

Look at responses of customers to determine the best and worst aspects of your product(s) so that you can highlight the best and message against the worst. Also, look at your competitors and compare products so that you can highlight why your product is the best for the potential customer.

 

Step #4. Address friction

Friction is the amount of effort someone has to give to complete a conversion. Look at the key funnel steps and run these reports.

 

Every website has many friction possibilities, so focus on those specific to you. Look at form fields, product pages, carts, calls-to-action and other places that may be difficult to navigate.

“Previous Page” and “Next Page” reports are a great way to isolate friction. If a high rate of visitors is bouncing between steps, or between the cart and shopping, some element of friction is most likely discouraging them from completing the step. If people repeatedly go to the “Contact Us” page at the same point, they have most likely had enough of trying to complete the purchase online.
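A previous-page report is just a count of the pages viewed immediately before the page of interest. A minimal sketch with hypothetical session paths:

```python
from collections import Counter

# Hypothetical session paths through the funnel
sessions = [
    ["/product", "/cart", "/contact-us"],
    ["/product", "/contact-us"],
    ["/home", "/product", "/cart", "/checkout"],
    ["/product", "/cart", "/contact-us"],
]

TARGET = "/contact-us"
# Count the page visited immediately before each view of TARGET
previous_pages = Counter(
    path[i - 1]
    for path in sessions
    for i, page in enumerate(path)
    if page == TARGET and i > 0
)
```

Here the cart most often precedes “Contact Us,” which flags the cart as the likely friction point in the purchase path.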

Form tracking

Set up events or click tracking to help identify friction, especially at places where people enter information. If you see a high drop-off rate at a particular part of the checkout, the friction there has overwhelmed the visitor. You can test shortening the form, removing fields or explaining why the information is necessary.
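Step-by-step drop-off falls directly out of those event counts. A minimal sketch with hypothetical checkout events (the step names are illustrative):

```python
# Hypothetical event counts: visitors reaching each checkout step, in order
steps = [
    ("view_cart", 1000),
    ("enter_shipping", 620),
    ("enter_payment", 310),
    ("confirm_order", 280),
]

# Share of visitors lost between each step and the next
dropoff = []
for (name, count), (_, next_count) in zip(steps, steps[1:]):
    dropoff.append((name, 1 - next_count / count))
```

With this toy data, half of the visitors who reach the shipping form never reach payment, so the shipping form is where a friction-reducing test would start.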

Internal search traffic can also help identify friction in the purchase process. If common search terms are product related, it is too difficult for users to find the information they are seeking. If they are mostly process related, you can address those elements in the purchase path.

 

You can follow Benjamin Filip, Manager of Data Sciences, MECLABS on Twitter @benjamin_filip.

 

You might also like

Gain actionable ideas for optimizing conversion on your landing pages from Benjamin Filip at the MECLABS Institute Marketing Lab at IRCE in Chicago

Beginner’s Guide to Web Data Analysis: Ten Steps to Love & Success [From Occam’s Razor]

Digital Analytics: How to use data to tell your marketing story [More from the MarketingExperiments blog]

Value Proposition: 4 key questions to help you slice through hype [More from the MarketingExperiments blog]

Value Proposition Development: 5 insights to help you discover your value prop [More from the MarketingExperiments blog]


Understanding Your Customer’s Story: How one company increased conversion 104% by identifying motivation

May 21st, 2015

Every time someone wants to buy something from your brand, there’s a story that explains why they want what you’re selling. Identifying that story is key to making the sale.

How do we know this is true? Because when we know someone’s story, we know their motivation. If someone is highly motivated to get a solution, they’ll put up with almost anything — a poorly written email, a slow website or even a convoluted sales flow — to get it.

Consider this patented heuristic:
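The heuristic shown here (the image does not reproduce in this archive) is the MECLABS Conversion Sequence, which the company publishes as:

```
C = 4m + 3v + 2(i - f) - 2a
```

where C is the probability of conversion.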

 

This isn’t a math formula. It’s a guide that MarketingExperiments and its parent company, MECLABS Institute, derived from analyzing tens of thousands of sales flows. This heuristic reflects what it takes to convert (C) a prospect into a customer and shows how the five variables — motivation (m), value (v), incentive (i), friction (f) and anxiety (a) — relate to each other. The numbers next to the variables identify how powerfully they affect conversion. Note that motivation is the most heavily weighted variable.

If formulas make your eyes cross, all you need to know is this: if a customer is highly motivated, none of the other elements (such as friction, anxiety or a poorly communicated value proposition) can stop them from moving forward in the sales process.

The most recent Web clinic looked at clues that revealed customers’ stories and, consequently, their motivation. Watch it and, within 30 minutes, you’ll get critical information that you can use immediately to drive an impressive lift in conversions.

Consider the experience of the second company outlined during the Web clinic: a Canadian window manufacturer whose owner was a student of MarketingExperiments. He called on MECLABS to help him increase conversions on his website.

 

The Control

  Read more…


The Power of a Specific Offer to a Specific Prospect

May 7th, 2015

Specificity converts. In marketing, there should be no such thing as a general message. The marketer communicates with an aim. This aim should dictate everything else we say. This aim should influence, even constrain, every word we say.

— Flint McGlaughlin, Managing Director and CEO, MECLABS Institute

Specificity converts. A specific offer to a specific person will outperform a general offer to a general person.

This concept relates to a recent email test we ran with our MarketingSherpa audience and ran again (with a slight twist) with our MarketingExperiments audience.

First, in case you’re not familiar with MarketingSherpa, allow me to briefly explain our sister company.

MarketingSherpa’s content is geared toward exploring general marketing principles. This is also where companies and marketers can share specific marketing stories, such as Mellow Mushroom’s social media strategy and Red Bull’s content marketing advice.

Alternatively, the MarketingExperiments audience delves more specifically into the tactics of marketing strategy. MarketingExperiments is about specific tests that the reader can apply to their own marketing.

Now that you understand the difference in content related to the tested audiences, let’s get into the test itself.

 

The test

We tested an email invitation for a recent follow-up Web clinic MarketingSherpa hosted. The clinic’s objective was to examine the results of a live optimized email test, run by the MarketingSherpa audience at Email Summit 2015 alongside Flint McGlaughlin.

The test consisted of two treatments:

  1. Treatment A focused on the Email Summit follow-up test, only mentioning live optimization from the MECLABS team.
  2. Treatment B switched the emphasis by focusing on the live optimization from the MECLABS team, only mentioning the Email Summit follow-up test.

In essence, both emails were an invite to the same Web clinic and messaged the same two offers, just with different expressions of focus.

 

Treatment A: Email Summit follow-up

Subject line: Does peer review work? See the results of the audience optimized email from the Email Summit 2015

Preheader: Plus live email optimization from the MECLABS research team. 

Read more…


Here’s Why Most A/B Testing is Boring and Not Really Worth the Trouble

April 6th, 2015

Do a quick Google search on “things to a/b test on a website,” scan the results for a moment, then come back and read the rest of this article.

Most of you reading this are marketers, so you know I’m taking a big risk by telling you to go do something else before you read my article.

In fact if you’re reading this now, you’re probably one of the very few who made it back from that incredibly distracting activity I had you do. Thank you. You are exactly the person I want to be reading this. The others can go on their merry way. They are not the ones who need to hear this.

I had you do that search because the Internet is full of people telling you to test things on your website such as color, button size, layouts, forms, etc. I wanted you to get an idea for what’s out there.

Now, I want you to understand why almost everyone writing those articles is wrong

… or at the very least, missing the point.

Please don’t view this as me putting down the people who wrote those articles. I know a few of them personally, and I highly respect the work they are doing. This is not about whether their work is good or bad.

I’ve personally written many articles exactly like the ones they’re writing. In fact, they have one up on me because at least their articles are ranking in Google for popular search terms.

The reason they are missing the point is that most of those articles are focused on the elements of a page rather than the serving of a customer.

I get why they do it.

Webpages are far easier to understand than people. Webpages are a collection of 0s and 1s. People are a collection of who knows what.

And most of you, readers, are looking for webpage fixes — not a deeper, fuller way to serve your customer.

There is nothing necessarily wrong with you; we naturally focus on our own self-interest, and that isn’t wrong in itself.

What is wrong is the methods we use to achieve our own goals. I don’t mean morally wrong. I mean practically wrong.

 

Our objective should always be: Make as much money as possible.

MECLABS Institute has found, after more than 15 years of research, that the best method for achieving this objective is to spend as much money as possible on serving your customer.

Until we can view every A/B test we run as an opportunity to better serve our customers, we will just be running (ultimately) ineffective tests on page elements.

It doesn’t really matter in the long run which color, layout or page element is going to perform well.

The Internet is constantly changing. Design trends are always going to influence how we look at webpages and their elements. What matters for marketers in the long run is how well we understand and, consequently, how well we can serve our customers.

Flint McGlaughlin, Managing Director and CEO, MECLABS, calls this understanding of our customers “customer wisdom.”

This is also why he often says, “The goal of a test is not to get a lift, but rather to get a learning.”

However, it’s one thing to hear this, another to really understand what it means.

It really means we want to conduct research, not run a test.

We want to learn a tangible lesson about our customer so that we can apply it to other areas of our marketing and achieve a maximum return on the amount of time and energy we spend on testing.

Let me show you what I mean with a real-world example. Here’s what happens when you just run an A/B test that is focused on a page element. Let’s take color for instance.

You have two treatments. The only thing changed is the background color. 

 

You also have a result. In this case, the result was a 19.5% increase in clickthrough at a 92% level of confidence. But here’s where things get tricky.
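That confidence figure can be reproduced with a standard two-proportion z-test. A minimal sketch with hypothetical counts (the actual sample sizes were not published; the numbers below are chosen only to yield a 19.5% relative lift):

```python
from math import erf, sqrt

# Hypothetical counts for control (A) and treatment (B)
clicks_a, views_a = 200, 5000   # control: 4.0% clickthrough
clicks_b, views_b = 239, 5000   # treatment: 4.78% clickthrough

p_a, p_b = clicks_a / views_a, clicks_b / views_b
lift = (p_b - p_a) / p_a        # relative lift in clickthrough

# Two-proportion z-test using a pooled clickthrough rate
p_pool = (clicks_a + clicks_b) / (views_a + views_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
z = (p_b - p_a) / se
confidence = erf(abs(z) / sqrt(2))  # two-sided confidence level
```

At these (made-up) sample sizes the lift is 19.5% and the confidence comes out in the low 90s, which is the situation described above: a promising result that has not quite cleared a 95% bar.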

Read more…


Digital Analytics: How to use data to tell your marketing story

March 12th, 2015

When it comes to being a data-driven marketing team, there is not as much opposition between content and data as once thought.

Two central themes that highlight this idea came out of the Opening Session of The Adobe Summit — The Digital Marketing Conference. They are:

  • Use data correctly to support a story
  • Ensure the story you’re telling can be relayed to a wider audience

Marketers need to quit treating their data analysts as number-crunching minions and start seeing them as contributors with a vital perspective of the greater customer story.

Nate Silver, Founder and Editor in Chief, FiveThirtyEight.com, spoke about how useless data can be if you can’t communicate it to a wider audience. The practice of collecting, analyzing and interpreting data can be very costly, and marketers need to maximize ROI by making sure they tell the correct story and that it can be spread across their organization.

  Read more…


Measuring Success: The distance between a test and the conversion point

March 9th, 2015

There’s a misconception that I’ve encountered among our research teams lately.

The idea is that the distance between the page being split tested and a specified conversion point may be too great to attribute the conversion rate impact to the change made in the test treatment.

An example of this idea is that, when testing on the homepage, using the sale as the conversion or primary success metric is unreliable because the homepage is too far from the sale and too dependent on the performance of the pages or steps between the test and the conversion point.

This is only partially true, depending on the state of the funnel.

Theoretically, if traffic is randomly sampled between the control and treatment with all remaining aspects of the funnel consistent between the two, we can attribute any significant difference in performance to the changes made to the treatment, regardless of the number of steps between the test and the conversion point.
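Random sampling of this kind is commonly implemented as deterministic, hash-based bucketing, so a returning visitor always lands in the same arm and the funnel stays controlled. A minimal sketch (the experiment name and visitor IDs are hypothetical):

```python
import hashlib

def assign(visitor_id: str, experiment: str = "homepage-test") -> str:
    """Deterministically bucket a visitor into control or treatment."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 else "control"

# The same visitor always gets the same arm, and arms split roughly 50/50
arms = [assign(f"visitor-{i}") for i in range(1000)]
treatment_share = arms.count("treatment") / len(arms)
```

Because the bucket is a pure function of the visitor and experiment IDs, no cookies or shared state are needed to keep the sample consistent across the whole funnel.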

More often than not, however, practitioners do not take the steps necessary to properly control the experiment. Other departments may launch new promotions, or test other channels or parts of the site simultaneously, leading to unclear, mixed results.

So I wanted to share a few quick tips for controlling your testing:

 

Tip #1. Run one test at a time

Running multiple split tests in a single funnel results in a critical validity threat that prevents us from evaluating test performance because the funnel is uncontrolled and prospects may have entered a combination of split tests.

Employing a unified testing queue or schedule may provide transparency across multiple departments and prevent prospects from entering multiple split tests within the same funnel.

 

Tip #2. Choose the right time to launch a test

 

External factors such as advertising campaigns and market changes can impact the reliability or predictability of your results. Launching a test during a promotion or holiday season, for example, may bias prospects toward a treatment that may not be relevant during “normal” times.

Being aware of upcoming promotions or marketing campaigns as well as having an understanding of yearly seasonality trends may help indicate the ideal times to launch a test.

Read more…
