
Posts Tagged ‘analytics’

Marketing Analytics: Show your work

August 14th, 2014

Data handling and analytics can sometimes yield shocking results, as global B2B company National Instruments discovered after a surprising drop in an email campaign’s conversion rate.


Key Obstacle: Concern about the new numbers

“When I first saw the number change, I was a bit freaked out,” said Stephanie Logerot, Database Marketing Specialist, National Instruments.

Stephanie, as a strategist, felt her greatest challenge was communicating the new way of looking at the data to National Instruments’ stakeholders outside of the database marketing team. This meant making certain everyone understood why the numbers dropped after implementing the new, more stringent data criteria.


A little background

A recent MarketingSherpa Email Marketing case study – “Marketing Analytics: How a drip email campaign transformed National Instruments’ data management” – detailed this marketing analytics challenge at National Instruments.

The data challenge arose from a drip email campaign set around its signature product.

The campaign was beta tested in some of National Instruments’ key markets: United States, United Kingdom and India. After the beta test was completed, the program rolled out globally.

The data issue came up when the team looked into the conversion metrics.

The beta test converted at 8%, the global rollout at 5%, and when a new analyst came in to parse the same data sets without any documentation on how the 5% figure was determined, the conversion rate dropped to 2%.
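
To see how three analysts can get three different rates out of the same campaign, consider a minimal sketch (the counts below are invented; only the resulting percentages are matched to the story) in which one conversion total is divided by progressively broader definitions of the eligible audience:

```python
# Hypothetical illustration only: these are not National Instruments'
# actual figures or criteria. The same conversion count yields very
# different "conversion rates" depending on who is counted in the base.

conversions = 400

denominators = {
    "engaged recipients only": 5_000,    # -> 8.0%
    "all delivered recipients": 8_000,   # -> 5.0%
    "full targeted list": 20_000,        # -> 2.0%
}

for label, base in denominators.items():
    print(f"{label:>25}: {conversions / base:.1%}")
```

Stricter criteria on the numerator – what counts as a conversion – can move the rate the same way, which is why documenting both definitions is as important as the analysis itself.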

While interviewing the team for the case study, as often happens in these detailed discussions, I ended up with some great material that didn’t make it into the case study, and I wanted to share that material with you.


The team

For the case study, I interviewed Ellen Watkins, Manager, Global Database Marketing Programs, Stephanie, the database marketing specialist, and Jordan Hefton, Global Database Marketing Analyst, all of National Instruments at the time. Jordan was the new analyst who calculated the 2% conversion rate.

In this MarketingExperiments Blog post, you’ll learn how the team dealt with the surprising drop in conversion and how they communicated why data management and analytics would be held to a new standard going forward.

The team overcame this obstacle with a little internal marketing.

Read more…

Online Testing: How to use A/A testing to break through the noise

June 30th, 2014

Getting a lift from your testing efforts can be satisfying and rewarding.

Not to mention, increases in conversion have changed the fortunes of entire enterprises and the careers of the marketers who advocated testing.

But is a lift truly a lift, or is it simply a false positive resulting from natural variation?

In this MarketingExperiments Blog post, I wanted to share an excellent example of using A/A testing (and yes, you are reading that correctly) from Emily Emmer, Senior Interactive Marketing Manager, Extra Space Storage, presented at Web Optimization Summit 2014.


What does variance in testing look like?

[Image: variance-testing-homepage – identical control and treatment of the Extra Space Storage homepage]

Here’s the example Emily shared with the audience to help put variance in context using a control and treatment of Extra Space Storage’s homepage.

There is absolutely no difference between these pages except for the 15% difference in conversion.

According to Emily, that’s when you need to start investigating how variance is potentially impacting your testing efforts, because there should be little to no difference in performance between identical pages.

“A 15% lift is more concerning,” Emily explained, “because there should be no difference with the same experience.”


A/A testing is not A/B testing

[Image: variance-testing-explanation – the distinction between A/A and A/B testing]

Emily also noted a key distinction between A/A and A/B testing that is really important to grasp:

  • A/A testing – Can help you measure the natural variability (noise) of a website by testing an identical experience.
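
To make that ‘noise’ concrete, here’s a minimal simulation sketch (invented traffic and conversion numbers, not Extra Space Storage’s data) that splits visitors between two identical experiences and reports the observed ‘lift’ between them:

```python
import random

random.seed(7)

TRUE_RATE = 0.05  # both arms share the same underlying conversion rate
VISITORS = 2000   # visitors per arm; small samples make noise obvious

def simulate_arm(n, rate):
    """Count conversions among n visitors who all convert at the same rate."""
    return sum(random.random() < rate for _ in range(n))

a = simulate_arm(VISITORS, TRUE_RATE) / VISITORS
b = simulate_arm(VISITORS, TRUE_RATE) / VISITORS

# The pages are identical, so any "lift" below is pure sampling noise.
print(f"Arm A: {a:.2%}  Arm B: {b:.2%}  observed lift: {(b - a) / a:+.1%}")
```

Re-run it with different seeds and apparent lifts of 10% to 20% show up regularly at this sample size – exactly the kind of phantom result an A/A test is designed to expose before you trust an A/B result.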

Read more…

Call-to-Action Button Copy: How to reduce clickthrough rate by 26%

March 31st, 2014

“Start Free Trial” | “Get Started Now” | “Try Now”

One of the above phrases reduced clickthrough rate by 26%.


DON’T SCROLL DOWN JUST YET

Take a look at those three phrases. Try to guess which phrase underperformed and why. Write it down. Heck, force yourself to tell a colleague so you’ve really got some skin in the game.

Then, read the rest of today’s MarketingExperiments Blog post to see which call-to-action button copy reduced clickthrough, and how you can use split testing to avoid having to blindly guess about your own button copy.


How much does call-to-action button copy matter anyway?

The typical call-to-action button is small, leaving you only one to four words to encourage a prospect to click.

There are so few words in a CTA. How much could they really matter?

Besides, they come at the end of a landing page or email, or are paired with a powerful headline that has already sold the prospect on the value of taking action. People have already decided whether they will click or not, and that button is a mere formality, right?

To answer these questions and more, let’s go to a machine more impressive than the Batmobile … to the splitter!


A/B/C/D/E split test

The following experiment was conducted with a MECLABS Research Partner. The Research Partner is a large global media company seeking to sell premium software to businesses.

The button was tested on a banner along the top of a webpage. Take a look at that banner below. 

[Image: cta-experiment-start-free-trial – banner containing the tested call-to-action button]

Five different text phrases were tested in that button. Since I’ve already teased you on the front-end, without further ado, let me jump right into the findings.


Results

[Image: cta-test-results – clickthrough results for the five button phrases]

Those few words in that teeny little rectangular button can have a huge impact on clickthrough.

As you can see, “Get Started Now” drove significantly more clicks than “Try Now.” Let’s look at the relative changes in clickthrough rate so you can see the relationship between the calls-to-action.
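
If you’re wondering how to check that a gap like that is ‘significant’ rather than the natural variance discussed earlier, a standard approach is a two-proportion z-test. Here’s a sketch with invented click and impression counts (the experiment’s raw numbers aren’t published in this excerpt):

```python
from math import erf, sqrt

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for a difference between two clickthrough rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approx.
    return z, p_value

# Hypothetical counts for illustration only:
z, p = two_proportion_z(clicks_a=520, n_a=10_000, clicks_b=410, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value suggests a real gap
```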

Read more…

Analytics: How metrics can help your inner marketing detective

March 17th, 2014

Why are there so many different metrics for the same component of a website? For example, page views, visits, unique visitors and instances, just to name a few. Which metrics should you use? Are all of these metrics really helpful?

Well, they exist because these metrics are tracked differently and yield different interpretations.

However, much more knowledge can be gained by analyzing these metrics with respect to one another, combined with segments – groupings of users by shared characteristics.

For example, suppose we look at page views on your site with respect to the number of visits across two browsers, such as Chrome versus Firefox, and discover that Chrome users actually average more page views per visit than Firefox users.

That could indicate that the customers using Firefox belong to a different demographic with a different level of motivation than Chrome users. Or, it could mean that the site functionality or user experience differs between Chrome and Firefox. For example, I know on my own computer, Chrome displays website content in a smaller font than Firefox does.
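
Here’s a minimal sketch of that kind of segment comparison (the column names and counts are assumptions for illustration; any analytics export with one row per page view, a visit ID and a browser field would work the same way):

```python
import pandas as pd

# Hypothetical export: one row per page view.
hits = pd.DataFrame({
    "visit_id": [1, 1, 1, 2, 2, 3, 4, 4, 4, 4],
    "browser":  ["Chrome", "Chrome", "Chrome", "Firefox", "Firefox",
                 "Firefox", "Chrome", "Chrome", "Chrome", "Chrome"],
})

# Page views per visit, averaged within each browser segment.
per_visit = (
    hits.groupby(["browser", "visit_id"]).size()  # page views per visit
        .groupby("browser").mean()                # segment average
        .rename("avg_page_views_per_visit")
)
print(per_visit)  # Chrome: 3.5, Firefox: 1.5 with the sample data above
```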

In today’s blog post, learn how just these two metrics combined together can put you in a detective mindset.


Don’t just investigate what customers do, investigate where they go

There are plenty of great free and paid analytics tools out there to help you investigate customer behavior, but for the sake of this example, I’m going to talk specifically about Adobe’s analytics tool.

An interesting advanced feature in the platform allows you to set custom tracking on link clicks on a page, which, when combined with other metrics, can reveal great findings.

For example, let’s say you have a call-to-action that takes a visitor to the cart. By using custom tracking, you can track how many visitors interact with the CTA on the page and compare that to the next page flow.

If you see a significantly higher number of visitors clicking the CTA, but not as many cart page views on the next page report, it could mean that there is some technical issue preventing the visitors from going to the next page.

There could also be a lot of “accidental” or “unintentional” clicks from visitors clicking the back button before the next page even loads, which can be very common on mobile sites.

If there are significantly fewer visitors clicking the CTA but more cart page views in the next page flow, what would that indicate?

Perhaps people are using the forward button frequently because they came back to the page after they have seen the cart.
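
Whichever tool collects the data, the underlying detective work is a simple comparison. Here’s a minimal sketch (with made-up counts, not Adobe’s API) that flags when CTA clicks and next-page views diverge in either direction:

```python
def diagnose_flow(cta_clicks, next_page_views, tolerance=0.10):
    """Compare CTA clicks against views of the page the CTA leads to.

    A large gap in either direction is a prompt to investigate,
    not proof of any single cause.
    """
    ratio = next_page_views / cta_clicks
    if ratio < 1 - tolerance:
        return ("Clicks far exceed next-page views: check for pages failing "
                "to load, or accidental clicks (common on mobile).")
    if ratio > 1 + tolerance:
        return ("Next-page views exceed clicks: visitors may be reaching "
                "the page another way, e.g., with the forward button.")
    return "Clicks and next-page views roughly agree."

# Hypothetical daily counts:
print(diagnose_flow(cta_clicks=1200, next_page_views=900))
```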

Read more…

Customer Theory: What do you blame when prospects do not buy?

February 10th, 2014

The effort and money that you’re investing in your marketing is predicated on one thing – that you understand your customer.

What good is a print ad, an email or a marketing automation investment if it doesn’t deliver a message that alleviates a customer pain point or helps a customer achieve a goal? They won’t act if the message doesn’t hit them square between the eyes.

Let me give you an example of faulty customer theory. Uber, a mobile car-hailing service, is coming to Jacksonville. I recently received a push-poll phone call clearly backed by the frightened taxi industry.

The main message seemed to be that Uber is cheaper because it uses unregulated (and, therefore, unsafe) drivers.


How often are you delighted by cab drivers?

What struck me was how far off their customer theory was from my actual wants and needs. I, for example, chose to take the BART from the airport to the hotel for Lead Gen Summit 2013 – not because it was cheaper (MECLABS was paying the bill either way, so it was free for me), but because riding in a cab is a miserable experience.

Plus, I’m putting my life in the hands of someone who will cut across three lanes of rush hour traffic with no turn signal to drop a passenger off 45 seconds quicker. Goodbye, safety argument.

The reason Uber, Lyft and other car-hailing mobile apps are gaining traction is that they’ve found a way to create a better customer experience. Think about it. When was the last time you were delighted by a cab ride? In fairness, there was one time for me in Los Angeles. A kind driver gave me a quick tour of Bel Air during what limited free time I had on a business trip.

Here’s why the taxi industry struggles to realize the true threat.


We tend to blame external rather than internal reasons when customers don’t buy

You put so much time into marketing your company and your clients that it becomes difficult to see, with unbiased eyes, the flaws your customers see.

This is why A/B testing can be so valuable.

Actually forming hypotheses, testing these hypotheses in real situations with real customers, and then building a customer theory over time that informs everyone in your company about what customers really want is essential.

When you have your customer theory right, marketing can focus on clearly communicating how it can fulfill customers’ needs and wants.


Discover what customers want

Of course, A/B testing is only one way to gain customer intelligence. So to gain a perspective beyond my own, I asked Lindsay Bayuk, Senior Product Marketing Manager, Infusionsoft, for her perspective.

“Understanding what your customers want starts with understanding the problem they are trying to solve. First, define who your best customers are and then ask them about their challenges. Next, ask them why they love you,” Lindsay said.

Lindsay said some great ways to collect both quantitative and qualitative data on your target customers include:

  • Surveys
  • Interviews
  • A/B tests (see, I was telling you)
  • Sales calls
  • Feedback loops

The email registration process is another opportunity for learning more about your customers.

Ali Swerdlow, Vice President, Channel Sales & Marketing, LeadSpend, added, “Preference centers are a great way to gather data about your customers. Then, use that data to segment your list and message different groups accordingly.”

Read more…

LPO: How many columns should you use on a landing page?

February 6th, 2014

What is the highest performing number of columns for your webpages?

The question is deceptively simple, yet the answer is difficult to determine unless you test your way to the optimal layout for your needs.

During a recent Web clinic, Jon Powell, Senior Executive Content Writer, MECLABS, revealed how a large tech company decided to test its column layout in an effort to increase sales from its branded search efforts.

So, let’s review the research notes for some background information on the test.


Background: A large technology company selling software to small businesses.

Goal: To significantly increase the number of software purchases from paid search traffic (branded terms).

Primary Research Question: Which column layout will generate the highest rate of software purchases?

Approach: A/B multifactor split test


Here’s a screenshot of the control, which used a two-column layout – one main column and a right sidebar – featuring separate content and CTAs.


In the treatment, the team eliminated the sidebar and focused on a single-column layout.

What you need to know

The one-column design increased branded search orders by 680.6% and revenue per visit by 606.7% when tested against the two-column design.
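
For reference when reading numbers like that, relative lift is computed against the control’s rate. Here’s a quick sketch of the arithmetic (the rates below are invented, chosen only so the math reproduces the published lift):

```python
def relative_lift(control_rate, treatment_rate):
    """Relative change of the treatment over the control, in percent."""
    return (treatment_rate - control_rate) / control_rate * 100

# Invented conversion rates for illustration:
print(f"{relative_lift(0.010, 0.07806):.1f}%")  # -> 680.6%
```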

To learn more about why the single-column layout outperformed the two-column design, watch the free on-demand Web clinic replay of “How Many Columns Should I Use?” to see the results of an aggregate column research study you can use to aid your own conversion rate optimization efforts.

Read more…