Archive

Posts Tagged ‘metrics’

Measuring Success: The distance between a test and the conversion point

March 9th, 2015 2 comments

There’s a misconception that I’ve encountered among our research teams lately.

The idea is that the distance between the page being split tested and a specified conversion point may be too great to attribute the conversion rate impact to the change made in the test treatment.

An example of this idea is that, when testing on the homepage, using the sale as the conversion or primary success metric is unreliable because the homepage is too far from the sale and too dependent on the performance of the pages or steps between the test and the conversion point.

This is only partially true, depending on the state of the funnel.

Theoretically, if traffic is randomly sampled between the control and treatment with all remaining aspects of the funnel consistent between the two, we can attribute any significant difference in performance to the changes made to the treatment, regardless of the number of steps between the test and the conversion point.
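To make that concrete, here is a minimal sketch of the underlying statistics. The numbers and the statsmodels-based approach are illustrative assumptions, not data from an actual test: with a clean random split and an otherwise-consistent funnel, a simple two-proportion test on the end-of-funnel conversion counts is all the attribution requires.

```python
# Illustrative only: hypothetical conversion counts from a homepage test
# where the sale, several steps downstream, is the primary success metric.
from statsmodels.stats.proportion import proportions_ztest

conversions = [410, 468]    # sales for control, treatment (made-up numbers)
visitors = [25000, 25000]   # randomly sampled sessions entering each arm

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    # With random assignment and an otherwise-identical funnel, this
    # difference is attributable to the treatment, regardless of how many
    # steps sit between the homepage and the sale.
    print("Statistically significant difference in conversion rate.")
else:
    print("No significant difference detected.")
```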

More often than not, however, practitioners do not take the steps necessary to properly control the experiment. Other departments may launch new promotions or test other channels or parts of the site at the same time, producing unclear, mixed results.

So I wanted to share a few quick tips for controlling your testing:

 

Tip #1. Run one test at a time

Running multiple split tests in a single funnel introduces a critical validity threat: because the funnel is uncontrolled and prospects may have entered a combination of split tests, we cannot cleanly evaluate each test’s performance.

Employing a unified testing queue or schedule may provide transparency across multiple departments and prevent prospects from entering multiple split tests within the same funnel.
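For teams that manage that queue in a spreadsheet or a script, a check along these lines can flag collisions before launch. This is a hypothetical sketch (the funnel labels and dates are made up), not a prescribed tool:

```python
# Hypothetical sketch of a unified testing queue check: flag any scheduled
# test that overlaps the new test in both time and funnel.
from dataclasses import dataclass
from datetime import date

@dataclass
class ScheduledTest:
    name: str
    funnel: str   # e.g., "checkout", "lead-gen"
    start: date
    end: date

def conflicts(new_test: ScheduledTest, queue: list) -> list:
    """Return scheduled tests that share a funnel and overlap in time."""
    return [
        t for t in queue
        if t.funnel == new_test.funnel
        and new_test.start <= t.end
        and t.start <= new_test.end
    ]

queue = [ScheduledTest("Homepage hero test", "checkout", date(2015, 3, 1), date(2015, 3, 21))]
new = ScheduledTest("Cart page copy test", "checkout", date(2015, 3, 15), date(2015, 4, 5))
print(conflicts(new, queue))  # overlap found -- reschedule one of the two
```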

 

Tip #2. Choose the right time to launch a test

 

External factors such as advertising campaigns and market changes can impact the reliability or predictability of your results. Launching a test during a promotion or holiday season, for example, may bias prospects toward a treatment that may not be relevant during “normal” times.

Being aware of upcoming promotions or marketing campaigns as well as having an understanding of yearly seasonality trends may help indicate the ideal times to launch a test.

Read more…

Website Optimization: Testing your navigation

September 11th, 2014 No comments

As we test our websites, we often focus on homepages, landing pages and funnels. These are the pages that “move the needle” and get results. However, there is one aspect of many sites that goes unnoticed by optimizers — the site navigation.

Site navigation is important because it gets your visitors where they need to be. It’s also one of the few static elements of your site: visible on every page and often the one constant throughout the website.

It simply makes sense to focus your efforts on such a high visibility area that has such a great impact on your customers’ experience.

Now, you may be asking yourself, “What can I test in my navigation?”

To answer that question, I’ve constructed a short guide to help you start optimizing your navigation.

Potential navigation testing opportunities include:

  • Changing link names that may be confusing
  • Optimizing subcopy (if you give details in your navigation)
  • Changing hierarchies or organizations
  • Adding or deleting links
  • Optimizing visual features (icons)
  • Optimizing navigation indicators (hover and click functionality, lines, highlights, etc.)

 

Begin with goals and objectives 

It’s important to have clearly defined goals and objectives when testing your navigation.

While you want your site navigation to drive conversions, you should always remember that this is ultimately a tool for your site visitors.

It should guide them where they need to go in a clear, concise manner. So how do you measure your navigation’s success? What would be your primary KPI? In many tests, our KPIs are conversions or clickthroughs. However, much more thought must go into defining navigation KPIs.

Read more…

Marketing Analytics: 4 tips for productive conversations with your data analyst

September 19th, 2013 No comments

Every week, I encounter difficult-to-understand work orders from research analysts requesting data analysis of our Research Partners’ test results.

I think there’s a reason for this – poor communication.

I’ve noticed a lack of understanding of how data analysts think. Research analysts and many marketers do not define projects and goals the same way data analysts approach a data challenge.

Data analysis takes time and resources, so the less time an analyst spends deciphering what you want from the data, the more room there is in the budget for the analysis itself. Better, clearer communication means everybody wins.

I wanted to share with you four tips to boost your team’s communication that will hopefully save you a little time and money in the process.

 

Tip #1. The more specific you are, the faster we can help you

If you’ve had experience working with data analysts, then you may know the conversation can sometimes feel like asking someone for the time of day and having them explain how their entire watch works, even though you just wanted to know the time.

But, who is really responsible for the failure to communicate here?

Is it the timekeeper for being overly detailed, or the person who needed the time for being vague?

My point is that these kinds of communication mishaps ring especially true in the analytics world. I can attest that an analyst with clear objectives and goals will be able to perform analysis at an accelerated rate, with fewer revisions and meetings necessary to achieve results.

So, instead of asking for general analysis of a webpage, email campaign or other initiative, try asking for the specifics you want to know.

When an analyst hears general analysis, it’s like giving us a set of Legos and expecting us to instinctively know you wanted a plane constructed instead of the impressive 40-story futuristic building.

 

For example, let’s look at the following requests that highlight how just a few more details can make all the difference:

  • Request #1: “I need to know the clickthrough rate for new visitors compared to returning visitors.”

This question is going to get you what you need faster than asking for a general analysis of a webpage.

  • Request #2: “I need to know the clickthrough rate for new visitors compared to returning visitors for the second to third step of the checkout funnel.”

The second request would likely deliver the rate from steps two and three for the different visitor types.

Now, if I only had the first request to work off of, I would deliver the clickthrough rate for every step of the funnel, which takes substantially longer and costs more.

This is also because data analysts have a tendency to flex their advanced analytics muscles when given the opportunity. We want to deliver quality work.

But the time and effort spent achieving those impressive results is a waste when something quick and easy would have met your needs equally well.
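To show what that specificity buys, here is a rough sketch of what Request #2 might translate to on the analyst’s side. The file and column names are assumptions for illustration, not a real export:

```python
# Hypothetical event log: one row per visitor per funnel step reached.
import pandas as pd

events = pd.read_csv("checkout_funnel.csv")
# assumed columns: visitor_id, visitor_type ("new"/"returning"), funnel_step (1-4)

# Visitors who reached step 2, and which of them went on to step 3
reached_step2 = events.loc[events.funnel_step >= 2,
                           ["visitor_id", "visitor_type"]].drop_duplicates()
reached_step3 = set(events.loc[events.funnel_step >= 3, "visitor_id"])
reached_step2["clicked_through"] = reached_step2.visitor_id.isin(reached_step3)

# Step 2 -> 3 clickthrough rate, new vs. returning visitors
print(reached_step2.groupby("visitor_type")["clicked_through"].mean())
```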

 

Tip #2. Knowing how you’re going to use the data helps

To better help you with a project, we need to know how you will use the data. 

So, when starting a new project, take some time beforehand to sit down with your analyst (it’s not as bad as you think) and discuss which specific topics or characteristics will help you gain the knowledge you need quickly.

If the data will be used for internal discovery, analysts will likely approach analysis, especially the final reporting, somewhat differently than for external reporting.

 

Tip #3. Creating fancy charts should be the exception, not the rule

Knowing how the data is going to be presented will help your analysts avoid wasting precious time making fancy charts and graphs if you only need the information for internal use.

Formatting of charts and graphs can end up taking way more time than one would imagine, so an analyst should worry about pretty charts only when needed.

Another reason it is important to discuss how the data will be used is that your analyst might be able to choose a more efficient reporting structure. They may default to the graph and chart types you ask for when, in fact, they could have used a more effective technique had they known what the final report needed to show the audience visually.

For instance, conversations that ask, “Do you need bar graphs for each individual variable?” should happen a lot more often than they do.  

These details can feel cumbersome and meticulous in the lead-up to final presentations, but if the information is represented with clarity and efficiency using the right combination of charts, everyone wins.

Read more…

Marketing Metrics: Can you have one number to rule them all?

June 6th, 2013 3 comments

One of the common questions I receive from Research Partners focuses on what metric they should use to track and evaluate tests. The tendency is often to want a single metric that defines the measure of success.

While it is important to gather consensus early on which key performance indicators, or KPIs, will be used to evaluate tests, you should never rely solely on a single metric as the gatekeeper of success, given that your secondary metrics can provide just as much – if not more – insight into your visitors’ behavior.

 

In the land of testing, the marketer with one metric is not king…

If you are only using one metric, you are not seeing the full picture. Each of your KPIs tells a part of the story of performance. Relying on one alone can mislead marketers into making poorly informed decisions.

For example, let’s say you’re testing a PPC ad. As you know, the sole purpose of an ad is to get the click and let the landing page do the selling. For this reason, you determine  your KPI is clickthrough rate since that is what the ad directly affects.

Makes sense, right?

Now let’s say that your results come back and show that both ads receive the same number of clicks and that there is no statistically significant difference in clickthrough rate.

So what happens now?

Since clickthrough rate was the only metric measured, you may conclude that both ads perform the same and that either could be used to achieve the same result. In some cases, you may be right …

However, making this assumption is a big risk, closely related to the risk of assumptions derived from artificial optimization.
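As a quick illustration (with made-up numbers), laying a secondary metric next to clickthrough rate shows how different a “tied” test can look:

```python
# Made-up results: the ads tie on clickthrough rate, but a secondary metric
# such as post-click conversion rate tells a different story.
import pandas as pd

results = pd.DataFrame({
    "ad": ["Ad A", "Ad B"],
    "impressions": [120_000, 120_000],
    "clicks": [2_400, 2_410],
    "conversions": [168, 121],
})

results["ctr"] = results["clicks"] / results["impressions"]
results["conversion_rate"] = results["conversions"] / results["clicks"]
print(results[["ad", "ctr", "conversion_rate"]])
# CTR is effectively identical; conversion rate suggests Ad A attracts
# better-qualified clicks -- a difference a single-metric view would miss.
```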

Read more…

Analytics & Testing: 3 statistical testing methods for building an advanced customer theory

May 30th, 2013 1 comment

When I was in college, I took a class on complex analysis and after all the lectures, studying and nerve-racking exams, I learned one important thing about customer behavior – some characteristics of a person will likely contribute to their future behavior.

In other words, my grandparents are not likely to start buying iPods, but at the same time my younger sister and her friends are not going to go out and start buying rotary telephones either.

Many times, variables such as gender, age, income, education and geographic location will likely play a role in why your customers say yes to your offers. This brings me to my point that selecting a test methodology robust enough to explore statistical relationships among variables is more important than ever to your marketing efforts.

In today’s MarketingExperiments blog post, we will simplify three basic testing models you can use to build an advanced customer theory.

Our goal is not to give you a Ph.D. in statistics, but rather to provide a few test methods, simplified and free of as much mathspeak as possible, that you can use to aid your team’s next discussion on test selection.

 

Test Method #1. ANOVA (Analysis of Variance)

Marketers can use the ANOVA testing method to determine whether a statistically significant difference exists between groups. Landing page optimization is a good example of how ANOVA testing can be used to analyze a customer’s response to different treatments based on variables of interest.

For example, suppose you’re testing landing pages and you want to determine whether the income or education level of new and returning customers has a statistically significant effect on the probability of conversion on the landing pages you’re testing. ANOVA would be the optimal test method to consider using.
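As a rough sketch of what that looks like in practice, the example below runs a factorial ANOVA on a hypothetical visit log. The file and column names are assumptions for illustration, not real data:

```python
# Hypothetical landing page data: did income, education or visitor type
# produce statistically significant differences in conversion?
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

visits = pd.read_csv("landing_page_visits.csv")
# assumed columns: converted (0/1), income_level, education_level, visitor_type

model = smf.ols(
    "converted ~ C(income_level) + C(education_level) + C(visitor_type)",
    data=visits,
).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-tests for each factor
```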

 

Test Method #2. Logistic regressions

Logistic regression is a testing method for predictive analysis. In other words, a logistic regression test can help you discover the statistical likelihood of a conversion for customers in demographic A versus customers in demographic B.

With logistic regressions, however, there is only one catch …

Membership in demographic groups A and B has to be a known, significant contributor to the likelihood of conversion.
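Here is a brief, hypothetical sketch of a logistic regression along those lines (the dataset and column names are assumed for illustration):

```python
# Hypothetical customer data: how does demographic group membership shift
# the likelihood of conversion, controlling for age and income?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

customers = pd.read_csv("customers.csv")
# assumed columns: converted (0/1), demographic ("A"/"B"), age, income

model = smf.logit("converted ~ C(demographic) + age + income", data=customers).fit()
print(model.summary())

# Odds ratios are usually easier to discuss than raw coefficients
print(np.exp(model.params))
```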

 

Test Method #3. Time series analysis

Time series analysis is a test method similar to logistic regression in that it has a basis in predictive analysis, but time series analysis focuses on what you can learn from historical data trends.

Understanding the seasonality behavior of your website traffic is a perfect example of when you would use a time series analysis.
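For instance, a hedged sketch of that seasonality check might look like the following (the daily traffic file is a hypothetical export):

```python
# Hypothetical daily traffic: decompose sessions into trend and weekly
# seasonality to see when "normal" really is normal.
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

traffic = pd.read_csv("daily_sessions.csv", parse_dates=["date"], index_col="date")
# assumed column: sessions (daily visit counts)

decomposition = seasonal_decompose(traffic["sessions"], model="additive", period=7)
print(decomposition.seasonal.head(14))      # the repeating weekly pattern
print(decomposition.trend.dropna().tail())  # the underlying trend
```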

These are just a few of the testing methods available to help you learn more about your customers, but ultimately no marketer is an island. So, if you have a testing method that you use to build your customer theory, feel free to share it with us in the comments below.

 

Related Resources:

Marketing Optimization: How to design split tests and multi-factorial tests

Marketing Metrics: Why all numbers aren’t created equal

How to Predict, with 90% Accuracy, Who Your Best Customers Will Be

Online Marketing Tests: A data analyst’s view of balancing risk and reward

Marketing Analytics: 6 simple steps for interpreting your data

November 7th, 2012 No comments

You’ve finally set up tracking on your site and have gathered weeks of information. You are now staring at your data saying, “Now what?”

Objectively interpreting your data can be extremely overwhelming and very difficult to do correctly … but it is essential.

The only thing worse than having no insights is having incorrect insights. The latter can be extremely costly to your business.

Use these six simple steps to help you effectively and correctly interpret your data.

Read more…