Archive

Posts Tagged ‘analytics’

Call-to-Action Button Copy: How to reduce clickthrough rate by 26%

March 31st, 2014

“Start Free Trial” | “Get Started Now” | “Try Now”

One of the above phrases reduced clickthrough rate by 26%.

 

DON’T SCROLL DOWN JUST YET

Take a look at those three phrases. Try to guess which phrase underperformed and why. Write it down. Heck, force yourself to tell a colleague so you’ve really got some skin in the game.

Then, read the rest of today’s MarketingExperiments Blog post to see which call-to-action button copy reduced clickthrough, and how you can use split testing to avoid having to blindly guess about your own button copy.

 

How much does call-to-action button copy matter anyway?

The typical call-to-action button is small, giving you only one to four words to encourage a prospect to click.

There are so few words in a CTA. How much could they really matter?

Besides, they come at the end of a landing page or email, or are paired with a powerful headline that has already sold the prospect on the value of taking action. People have already decided whether they will click or not, and that button is a mere formality, right?

To answer these questions and more, let’s go to a machine more impressive than the Batmobile … to the splitter!

 

A/B/C/D/E split test

The following experiment was conducted with a MECLABS Research Partner. The Research Partner is a large global media company seeking to sell premium software to businesses.

The button was tested on a banner along the top of a webpage. Take a look at that banner below. 

[Image: banner featuring the “Start Free Trial” call-to-action button]

 

Five different text phrases were tested in that button. Since I’ve already teased you on the front-end, without further ado, let me jump right into the findings.

 

Results

[Image: clickthrough results for the five call-to-action phrases tested]

 

Those few words in that teeny little rectangular button can have a huge impact on clickthrough.

As you can see, “Get Started Now” drove significantly more clicks than “Try Now.” Let’s look at the relative changes in clickthrough rate so you can see the relationship between the calls-to-action.
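To make the idea of relative change concrete, here is a quick sketch of the arithmetic (the click counts below are hypothetical, chosen only to illustrate a roughly 26% gap – they are not the Research Partner’s data):

```python
# Relative change in clickthrough rate (CTR) between two button phrases.
# All numbers are hypothetical, for illustration only.

def ctr(clicks, impressions):
    return clicks / impressions

control_ctr = ctr(260, 10_000)    # e.g., "Get Started Now"
treatment_ctr = ctr(192, 10_000)  # e.g., "Try Now"

# Relative change: (treatment - control) / control
relative_change = (treatment_ctr - control_ctr) / control_ctr
print(f"Relative change in CTR: {relative_change:.1%}")  # about -26%
```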

Read more…

Analytics: How metrics can help your inner marketing detective

March 17th, 2014

Why are there so many different metrics for the same component of a website? For example, page views, visits, unique visitors and instances, just to name a few. Which metrics should you use? Are all of these metrics really helpful?

Well, they exist because these metrics are tracked differently and yield different interpretations.

However, much more knowledge can be gained by analyzing these metrics with respect to one another, combined with segments – that is, characteristics of users.

For example, suppose we look at page views on your site with respect to the number of visits for two browsers, such as Chrome versus Firefox, and discover that Chrome users actually average more page views per visit than Firefox users.

That could indicate that customers using Firefox belong to a different demographic, with a different level of motivation, than Chrome users. Or, it could mean that the site functionality or user experience differs between Chrome and Firefox. For example, I know on my own computer, Chrome displays website content in a smaller font than Firefox does.
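If you want to run this kind of segment comparison yourself, here is a minimal sketch, assuming a simple visit-level export (the column names and data are hypothetical – your analytics tool’s export will look different):

```python
import pandas as pd

# Hypothetical export: one row per visit, with the visitor's browser
# and the number of page views recorded during that visit.
visits = pd.DataFrame({
    "browser":    ["Chrome", "Chrome", "Firefox", "Firefox", "Chrome"],
    "page_views": [5, 7, 3, 2, 6],
})

# Average page views per visit within each browser segment.
print(visits.groupby("browser")["page_views"].mean())
# browser
# Chrome     6.0
# Firefox    2.5
```

A gap like this is a prompt for investigation, not a conclusion – the next step is forming hypotheses about demographics, motivation or rendering differences like the font-size example above.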

In today’s blog post, learn how just these two metrics, combined, can put you in a detective mindset.

 

Don’t just investigate what customers do, investigate where they go

There are plenty of great free and paid analytics tools out there to help you investigate customer behavior, but for the sake of this example, I’m going to talk specifically about Adobe’s analytics tool.

An interesting advanced feature in the platform allows you to set custom tracking on link clicks on a page, which, when combined with other metrics, can reveal great findings.

For example, let’s say you have a call-to-action that takes a visitor to the cart. By using custom tracking, you can track how many visitors interact with the CTA on the page and compare that to the next page flow.

If you see a significantly higher number of visitors clicking the CTA, but not as many cart page views on the next page report, it could mean that there is some technical issue preventing the visitors from going to the next page.

There could also be a lot of “accidental” or “unintentional” clicks from visitors clicking the back button before the next page even loads, which can be very common on mobile sites.

If significantly fewer visitors are clicking the CTA but there are more cart page views in the next page flow, what would that indicate?

Perhaps people are using the forward button frequently because they came back to the page after they had already seen the cart.
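Here is a minimal sketch of that comparison (the counts and the 15% threshold are hypothetical – real numbers would come from your custom link tracking and cart page view reports):

```python
# Hypothetical daily counts pulled from two reports:
# custom link tracking on the CTA, and page views for the cart page.
cta_clicks = 1_840
cart_page_views = 1_260

drop_off = (cta_clicks - cart_page_views) / cta_clicks
print(f"Clicks that never became cart views: {drop_off:.1%}")  # 31.5%

if drop_off > 0.15:  # arbitrary threshold, for illustration
    print("Investigate: possible technical issue or accidental clicks.")
elif drop_off < 0:
    print("Investigate: visitors reaching the cart without clicking the CTA.")
```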

  Read more…

Customer Theory: What do you blame when prospects do not buy?

February 10th, 2014

The effort and money that you’re investing in your marketing is predicated on one thing – that you understand your customer.

What good is a print ad, an email or a marketing automation investment if it doesn’t deliver a message that alleviates a customer pain point or helps a customer achieve a goal? They won’t act if the message doesn’t hit them square between the eyes.

Let me give you an example of faulty customer theory. Uber, a mobile car hailing service, is coming to Jacksonville. I recently received a push poll phone call clearly supported by the frightened taxi industry.

The main message seemed to be that Uber is cheaper because it uses unregulated (and, therefore, unsafe) drivers.

 

How often are you delighted by cab drivers?

What struck me was how far off their customer theory was from my actual wants and needs. I, for example, chose to take the BART from the airport to the hotel for Lead Gen Summit 2013 – not because it was cheaper (MECLABS was paying the bill either way, so it was free for me), but because riding in a cab is a miserable experience.

Plus, I’m putting my life in the hands of someone who will cut across three lanes of rush hour traffic with no turn signal to drop a passenger off 45 seconds quicker. Goodbye, safety argument.

The reason Uber, Lyft and other car hailing mobile apps are gaining traction is that they’ve found a way to create a better customer experience. Think about it. When was the last time you were delighted by a cab ride? In fairness, there was one time for me in Los Angeles. A kind driver gave me a quick tour of Bel Air during what limited free time I had on a business trip.

Here’s why the taxi industry struggles to realize the true threat.

 

We tend to blame external rather than internal reasons when customers don’t buy

You put in so much time marketing your company and your clients that it becomes difficult to see, with unbiased eyes, the flaws customers see.

This is why A/B testing can be so valuable.

It is essential to actually form hypotheses, test them in real situations with real customers, and then build a customer theory over time that informs everyone in your company about what customers really want.

When you have your customer theory right, marketing can focus on clearly communicating how it can fulfill customers’ needs and wants.

 

Discover what customers want

Of course, A/B testing is only one way to gain customer intelligence. So to gain a perspective beyond my own, I asked Lindsay Bayuk, Senior Product Marketing Manager, Infusionsoft, for her perspective.

“Understanding what your customers want starts with understanding the problem they are trying to solve. First, define who your best customers are and then ask them about their challenges. Next, ask them why they love you,” Lindsay said.

Lindsay said some great ways to collect both quantitative and qualitative data on your target customers include:

  • Surveys
  • Interviews
  • A/B tests (see, I was telling you)
  • Sales calls
  • Feedback loops

The email registration process is another opportunity for learning more about your customers.

Ali Swerdlow, Vice President, Channel Sales & Marketing, LeadSpend, added, “Preference centers are a great way to gather data about your customers. Then, use that data to segment your list and message different groups accordingly.”

Read more…

LPO: How many columns should you use on a landing page?

February 6th, 2014

What is the highest performing number of columns for your webpages?

The question is deceptively simple, but the answer is difficult to determine unless you test your way to the optimal layout for your needs.

During a recent Web clinic, Jon Powell, Senior Executive Content Writer, MECLABS, revealed how a large tech company decided to test its column layout in an effort to increase sales from its branded search efforts.

So, let’s review the research notes for some background information on the test.

 

Background: A large technology company selling software to small businesses.

Goal: To significantly increase the number of software purchases from paid search traffic (branded terms).

Primary Research Question: Which column layout will generate the highest rate of software purchases?

Approach: A/B multifactor split test

 

Here’s a screenshot of the control, which utilized a two-column layout – one main column and a right sidebar – featuring separate content and CTAs.

 

In the treatment, the team eliminated the sidebar and focused on a single-column layout.

What you need to know

The one-column design increased branded search orders by 680.6% and revenue per visit by 606.7% when tested against the two-column design.

To learn more about why the single-column layout outperformed the two-column design, watch the free on-demand Web clinic replay of “How Many Columns Should I Use?” to see the results of an aggregate column research study you can use to aid your own conversion rate optimization efforts.

  Read more…

A/B Testing: Is responsive design worth the investment?

February 3rd, 2014

Is responsive design worth the investment?

It depends on whom you ask.

You can ask the experts, receive a variety of replies and hopefully draw some conclusions from their answers. Or, you can look to your customer data for insights through a little testing and optimization.

During a recent Web clinic, Austin McCraw, Senior Director, Content Production, and Jon Powell, Senior Executive Content Writer, both of MECLABS, revealed how marketers at a news media organization decided to forego relying on expert opinions and put responsive design to the test.

First, let’s review the research notes for some background information on the test.

Background: A large news media organization trying to determine whether it should invest in responsive mobile design.

Goal: To significantly increase the number of free trial sign-ups.

Primary Research Question: Which design will generate the highest rate of free trial sign-ups?

Approach: A/B multifactor split test

 

Here’s a screenshot depicting both design approaches as they render responsively and unresponsively on desktop, tablet and mobile devices.

Read more…

Online Testing: Defining type I and type II testing errors

January 30th, 2014

When it comes to website testing, maybe you’ve heard of the terms type I and type II errors, but never really understood what they mean.

Perhaps you’re thinking they might have something to do with personality types, like a type I personality that is perhaps more aggressive than a type II personality. Or maybe blood type comes to mind?

These are, of course, all wrong.

In the context of testing, what we are really referring to are type I and type II statistical errors. These are concepts that two prominent statisticians, Jerzy Neyman and Egon Sharpe Pearson, first developed in the 1930s (which are great names by the way – makes me picture a Hell’s Angel and a Ghostbuster teaming up to do some stats).

Here’s how these errors are commonly defined in statistics:

  • Type I: Rejecting the null hypothesis when it is in fact true.
  • Type II: Not rejecting the null hypothesis when in fact the alternate hypothesis is true.

Anuj Shrestha, a fellow data scientist at MECLABS, had a great interpretation for these concepts: “Type I is sending an innocent person to jail and type II is letting a guilty person go.”

Most people would probably agree that sending an innocent person to jail is much more egregious than letting a guilty person walk. When it comes to making business decisions based on website testing, the equivalent is also true.

It is arguably much more damaging to make a decision based on a false positive – for example, pushing a webpage live that you incorrectly believe performed better than an alternative – versus a false negative, which is like declaring that treatments were not statistically different from one another when they were, resulting in a missed opportunity.
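To ground this in website testing terms, here is a minimal sketch of a standard two-sided, two-proportion z-test, using only the Python standard library (the click counts are hypothetical, and your testing tool may use a different procedure). The key point is that the significance level, alpha, is by definition the type I error rate you are willing to accept:

```python
import math

def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for a difference between two clickthrough rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical A/B results.
z, p = two_proportion_z_test(clicks_a=200, n_a=10_000, clicks_b=245, n_b=10_000)

alpha = 0.05  # alpha caps the type I error rate: if the null is true,
              # we falsely declare a winner about 5% of the time.
if p < alpha:
    print(f"p = {p:.3f}: reject the null (type I error risk capped at {alpha:.0%}).")
else:
    print(f"p = {p:.3f}: do not reject (a real difference may be missed - type II).")
```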

For this reason, we will focus on a few specific scenarios concerning type I errors to put this concept into perspective.

 

The Chicago Tribune and Gallup accidentally defeat Truman

One of the most famous examples of a type I error in politics involved pollsters and the Chicago Tribune during the 1948 presidential election. At the time, the accepted sampling methods in polling were not as refined as they are today.

Non-random sampling, coupled with ending polling prematurely before a representative sample was collected, led pollsters to incorrectly conclude that Thomas E. Dewey would defeat Harry S. Truman.

Based on this information, the Tribune’s headline declaring Dewey the victor was a huge blunder and was forever captured in the famous photograph of Truman holding a copy of the Tribune next to his gigantic smile.

Pollsters and the Tribune committed a type I error by stating Dewey would win (a false positive) when in fact, he did not.

  Read more…