Author Archive

Marketing Analytics: Frequently asked questions about misunderstood and misinterpreted metrics

July 29th, 2013 2 comments

At MECLABS, we begin the test planning process by looking at the available metrics. I’ve seen our discussions about metrics sometimes turn into debates over the definition of a particular metric, rather than attempts to piece together a data story and understand what that metric is telling us about the customer.

I’ve found that if you ask three people how bounce rate is calculated, for example, you’ll get three different answers. That made me realize there are a lot of metrics people are confused about.

Not only are the definitions misunderstood, but the differences in how these metrics are calculated across data platforms (Google Analytics and Adobe SiteCatalyst, specifically) are debated as well.

So, I set out to solve these metric mysteries, and I’ve simplified my findings in today’s MarketingExperiments blog post. There is also a handy chart at the end for you to print. The chart goes into more detail about Google Analytics and Adobe SiteCatalyst specifically. I chose those two platforms because, according to the most recent data I could find, they are the most widely adopted analytics platforms.

Before I begin, I want to note that the definitions discussed here are the defaults for these platforms. Most platforms allow you to create custom metrics, but for simplicity’s sake, I have not gone into detail on those.

So without further ado, let’s get started.

 

How is Time on Page calculated?

Time on Page, put plainly, is the amount of time from when the visitor loads the page to when the visitor moves on to another page (or interacts with the page, if you’re using SiteCatalyst) on your site.

It is important to note that visitors who bounce are not included in Time on Page (or, when looking at metrics for an entire website, Time on Site). This is because time is calculated from the load of Page 1 to the load of Page 2, so if there is no Page 2, the interval cannot be calculated.
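To make the mechanics concrete, here is a minimal sketch in Python (with made-up timestamps, not any platform’s actual implementation) of how these time metrics fall out of a pageview log: each page’s time is the gap to the next page load, so the last page of a visit gets no time, and a one-page visit contributes nothing at all.

```python
from datetime import datetime

# Hypothetical pageview log for one visit: (page, load time).
pageviews = [
    ("/home",     datetime(2013, 7, 29, 10, 0, 0)),
    ("/products", datetime(2013, 7, 29, 10, 0, 45)),
    ("/checkout", datetime(2013, 7, 29, 10, 2, 15)),
]

# Time on Page: the gap between consecutive page loads.
# The last page has no "next load," so it gets no time --
# which is also why a bounced, one-page visit records none.
time_on_page = {}
for (page, loaded), (_, next_loaded) in zip(pageviews, pageviews[1:]):
    time_on_page[page] = (next_loaded - loaded).total_seconds()

# Time on Site is simply the sum of the measurable gaps.
time_on_site = sum(time_on_page.values())

print(time_on_page)  # {'/home': 45.0, '/products': 90.0}
print(time_on_site)  # 135.0
```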

 

What’s the difference between Visit Duration and Time on Page?

Visit Duration (also referred to as Time on Site or Total Time Spent) is the time the visitor spent on your entire site during one session. Time on Page is the amount of time a visitor spent on a single, specific page.

 

How is Bounce Rate calculated? And, how does it differ between Google Analytics and SiteCatalyst?

In Google Analytics (GA), Bounce Rate is the percentage of people who see one page, and then leave the site.

In SiteCatalyst (SC), there are two metrics you can look at – Single Access and Bounces. Single Access has the same definition as GA: the number of people who see one page and then leave the site. Bounces, on the other hand, take into account any in-page link event or interaction with the page.

Consider this scenario – you have a landing page with a video on it and a visitor enters your site, watches the video and leaves. In Google Analytics, this visitor would be considered a bounce because they did not visit a second page. In SiteCatalyst, this visitor would be considered a Single Access for the same reason, but would not be considered a Bounce because they watched the video (a link event).
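As a rough sketch of that distinction (in Python, with a made-up event log; the platforms’ actual internals differ), a visit counts as a Single Access when it contains exactly one pageview, but it is only a Bounce when it contains no other tracked hits either:

```python
# Hypothetical visit logs: each visit is a list of tracked hits.
visits = [
    ["pageview:/landing"],                       # one page, no interaction
    ["pageview:/landing", "event:video_play"],   # one page, watched the video
    ["pageview:/landing", "pageview:/pricing"],  # two pages
]

def is_single_access(visit):
    # SC's Single Access (and GA's default bounce): exactly one pageview.
    return sum(hit.startswith("pageview:") for hit in visit) == 1

def is_sc_bounce(visit):
    # SC's Bounce: a single pageview and nothing else -- no link
    # events or other page interactions at all.
    return len(visit) == 1 and visit[0].startswith("pageview:")

for v in visits:
    print(is_single_access(v), is_sc_bounce(v))
# True True   -> a bounce in both GA and SC
# True False  -> a GA bounce and an SC Single Access, but not an SC Bounce
# False False -> neither
```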

A widely believed myth is that bounce rate is partially determined by how much time the visitor spends on the page. I often hear something like, “If the visitor is there for less than 10 seconds, it’s a bounce, but if they are there for 11 seconds, it is not.” This is false for the default settings of Google Analytics and SiteCatalyst. It does not matter how much time the visitor spends on the site. If they do not go to another page or interact with the page (SC), they are counted as a bounce.

However, I believe this myth stems from the fact that you can create custom metrics in SiteCatalyst that will keep a visit from counting as a bounce if the visitor spends a specified amount of time on the page.

 

What’s the difference between Bounce Rate and Exit Rate?

Bounce Rate tells you, of the visits that entered on the specified page, what percentage left without going to another page (or, in SiteCatalyst, interacting with the site). Exit Rate tells you, of all the visits that included the specified page, what percentage saw that page last – and then exited the site.

The difference is where the visitor starts. If a visitor entered at Page B and left from Page A, that visit would be counted as an exit for Page A, but not as a bounce, because the visitor saw two pages. If a visitor entered at Page A, did nothing and left the site from Page A, that visit would be counted as both a bounce and an exit.

The people who bounced on a certain page will be included in the Exit Rate of that page.
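Here is a minimal sketch of the two rates side by side (Python again, with hypothetical visits recorded as ordered page lists and the SiteCatalyst interaction nuance ignored), which also shows why the bounces on a page are a subset of its exits:

```python
# Hypothetical visits, each an ordered list of pages viewed.
visits = [
    ["A"],       # entered on A, left from A: a bounce AND an exit for A
    ["B", "A"],  # entered on B, left from A: an exit for A, not a bounce
    ["A", "B"],  # entered on A, left from B: neither for A
]

def rates_for(page, visits):
    entries  = [v for v in visits if v[0] == page]     # visits entering here
    bounces  = [v for v in entries if len(v) == 1]     # ...that saw nothing else
    included = [v for v in visits if page in v]        # visits that saw the page
    exits    = [v for v in included if v[-1] == page]  # ...that saw it last
    bounce_rate = len(bounces) / len(entries) if entries else 0.0
    exit_rate   = len(exits) / len(included) if included else 0.0
    return bounce_rate, exit_rate

print(rates_for("A", visits))
# (0.5, 0.666...): 1 bounce out of the 2 visits that entered on Page A,
# but 2 exits out of the 3 visits that included Page A.
```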

Read more…

A/B Testing: Example of a good hypothesis

July 11th, 2013 2 comments

Want to know the secret to always running successful tests?

The answer is to formulate a hypothesis.

Now when I say it’s always successful, I’m not talking about always increasing your Key Performance Indicator (KPI). You can “lose” a test, but still be successful.

That sounds like an oxymoron, but it’s not. If you set up your test strategically, even if the test decreases your KPI, you gain a learning, which is a success! And, if you win, you simultaneously achieve a lift and a learning. Double win!

The way you ensure you have a strategic test that will produce a learning is by centering it around a strong hypothesis.

 

So, what is a hypothesis?

By definition, a hypothesis is a proposed statement made on the basis of limited evidence that can be proved or disproved and is used as a starting point for further investigation.

Let’s break that down:

It is a proposed statement.

  • A hypothesis is not fact, and should not be argued as right or wrong until it is tested and proven one way or the other.

It is made on the basis of limited (but hopefully some) evidence.

  • Your hypothesis should be informed by as much knowledge as you have. This should include the data you have gathered, any research you have done, and any analysis you have performed of the current problems.

It can be proved or disproved.

  • A hypothesis pretty much says, “I think by making this change, it will cause this effect.” So, based on your results, you should be able to say “this is true” or “this is false.”

It is used as a starting point for further investigation.

  • The key words here are starting point. Your hypothesis should be formed and agreed upon before you make any wireframes or designs, as it is what guides the design of your test. It helps you focus on what elements to change, how to change them, and which to leave alone.

 

How do I write a hypothesis?

The structure of your basic hypothesis follows a CHANGE: EFFECT framework.

 

While this is a truly scientific and testable template, it is very open-ended. Even though this hypothesis, “Changing an English headline into a Spanish headline will increase clickthrough rate,” is perfectly valid and testable, if your visitors are English-speaking, it probably doesn’t make much sense.

So now the question is …

 

How do I write a GOOD hypothesis?

To quote my boss Tony Doty, “This isn’t Mad Libs.”

We can’t just start plugging in nouns and verbs and conclude that we have a good hypothesis. Your hypothesis needs to be backed by a strategy. And, your strategy needs to be rooted in a solution to a problem.

So, a more complete version of the above template would be something like this:

 

In order to have a good hypothesis, you don’t necessarily have to follow this exact sentence structure, as long as it is centered around three main things:

  1. Presumed problem
  2. Proposed solution
  3. Anticipated result
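For example (a hypothetical illustration, not a hypothesis from an actual test): “Because our value proposition is buried below the fold (presumed problem), moving it to the top of the page (proposed solution) will increase clickthrough rate (anticipated result).”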

Read more…

Marketing Strategy: 4 steps to developing an effective and strategic test

March 11th, 2013 3 comments

Twenty-four conversations, countless emails and three-and-a-half Red Bulls later, I walked away from my last coaching clinic session at MarketingSherpa Email Summit 2013.

I realized a lot of the questions I was asked were deeper than surface-level tactics.

At the root of many of them was a thirst for advice on strategic thinking and test planning.

A common problem I see among marketers is a lack of sound strategy behind testing. We are asking the wrong questions. We’re asking, “What headline should we test?” when we need to begin by asking, “Why is the current headline underperforming?” We need to ask “Why?” before “What?”

As an optimization manager, one of my responsibilities is guiding my coworkers through planning an effective test series for our Research Partners. Each time I go through this exercise, there is a thought process I follow, which is what today’s MarketingExperiments blog post will teach.

But before we begin, I want to emphasize the most important point of this post – the reason I follow this process. If you leave this page with nothing else, please remember:

We test solutions to problems, not ideas.

Okay, now we can get down to the nitty-gritty. Let’s talk about the thought process you should follow to craft a strategy-centered test.

 

STEP #1: STATE YOUR GOAL

Start by asking yourself, “What am I trying to accomplish? What is the objective of this test?”

 

This will lay the foundation for your test, and you will need to continuously refer to your goal. It helps me to write the goal on the whiteboard to remind everyone what we are working towards. This way, we all stay focused.

An example of your goal could be to:

Increase the average order value of customer transactions.

 

STEP #2: IDENTIFY THE PROBLEM – The “Why”

Now that you have identified your goal, it’s time to start thinking about how to achieve that goal. But before you can craft a successful solution, you need to understand the problem. Ask yourself, “Why?”

If your goal is to increase average order value, you may begin by asking:

Why are users only spending an average of $50 on my site?

Once you have asked “Why?” you should then build your hypothesis by formulating possible answers to your question – the “Because.”

 

Users are only spending an average of $50 because:

     A. We place the most emphasis on the product that costs $50.

     B. The only product that is relevant to our customers is the one that costs $50.

     C. They are unaware of the extra benefits of our more expensive products.

As you can see, you can quickly identify multiple possible problems. A trap many people fall into at this point is trying to solve all of these problems at once. This will put you on the fast track to failure with no tangible learnings from your test. Choose one problem to solve at a time.

Sometimes, the biggest problem may not be feasible to solve within your project scope (possibly Problem B). You must choose the problem to solve by weighing the benefits of solving it against the costs of testing and implementing the solution.

Let’s move forward with Problem A – we place the most emphasis on the $50 product.

Read more…

A/B Testing: Working with a very small sample size is difficult, but not impossible

November 26th, 2012 5 comments

A thought for future Web clinics:

There are millions of small businesses like mine. (Think small and local: your dentist, dry cleaner, pizza delivery). We are, in the grand picture, very small. My website generates, on average, 400 visitors in a month. (That’s around 14 a day. It works for me.) 

We run tests and split tests all the time, but it is hard to draw any real conclusions about what is working and what is not with really small amounts of data.

Is there something small businesses can do to better interpret small amounts of data? Thanks for your help and insight.

– Chris

 

Thanks for the question, Chris. After having a mini-brainstorm session with one of our data analysts, Anuj Shrestha, I’ve written up some tips for dealing with a small sample size:
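To make one piece of that statistical reality concrete, here is a sketch of my own (in Python, using SciPy’s Fisher’s exact test, which suits small counts better than the usual normal-approximation test; the traffic numbers are invented) for checking whether a difference between two page versions is statistically meaningful:

```python
from scipy.stats import fisher_exact

# Hypothetical month of traffic split between two page versions.
# Rows: version A, version B. Columns: converted, did not convert.
table = [
    [12, 188],  # A: 12 conversions out of 200 visits
    [22, 178],  # B: 22 conversions out of 200 visits
]

odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.3f}")
# With samples this small, p often stays above 0.05 even when a real
# difference exists -- one reason small sites need to run tests longer
# or test bigger, bolder changes to reach a trustworthy conclusion.
```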

Read more…

Marketing Analytics: 6 simple steps for interpreting your data

November 7th, 2012 No comments

You’ve finally set up tracking on your site and have gathered weeks of information. You are now staring at your data saying, “Now what?”

Objectively interpreting your data can be extremely overwhelming and very difficult to do correctly … but it is essential.

The only thing worse than having no insights is having incorrect insights. The latter can be extremely costly to your business.

Use these six simple steps to help you effectively and correctly interpret your data.

Read more…

Shopping Cart Abandonment: 7 simple steps to completing the sale

November 5th, 2012 4 comments

You spent years creating a valuable email list that gets Kim Johnson to opt in. Then, you craft an amazing email that inspires Kim Johnson to click to the landing page, where your marketing prowess is again on display, and Kim Johnson adds your product to her cart. And then… And then… Nothing. But why? And, what can you do to avoid this scenario as much as possible?

Well, at least you’re not alone – 88% of consumers have abandoned an online shopping cart without completing their transaction, according to a Forrester study. While you cannot eliminate cart abandonment, and many factors are out of your control (some customers just weren’t ready to purchase), you do have the ability to reduce abandonment.

 

If you want to reduce your shopping cart abandonment rates, follow these seven simple steps:

Read more…