Archive for the ‘Analytics & Testing’ Category

Lead Generation: Great results don’t always have to be complicated

April 14th, 2014

To discover what works best for generating leads in your organization, at some point you have to do two things:

  • Wade through enough trial and error until success is the only destination left
  • Keep the process as simple as possible on that journey to reach success

For Shawn Burns, Global Vice President of Digital Marketing, SAP, keeping the company’s testing simple was instrumental in helping SAP reach some of its goals of maximizing the ROI on existing marketing.

“We can complicate everything,” Shawn explained, “and when you’re in a testing environment and you start to think about navigation, templates, images, copy, colors and buttons, you just have to sort of stop and say ‘whoa’ – clear away the madness, take it step-by-step, do simple things and see what has an impact.”

Shawn’s focus on keeping SAP’s testing simple was also influenced by a need to apply those discoveries to growth in areas like mobile marketing. A simple testing approach and the lessons learned from the process would be highly beneficial in aiding SAP’s efforts to optimize its marketing in what is literally a pocket-sized medium.

“There are 2 million smartphones being activated every day on the planet, so all of us as marketers are having to deal with this incredibly [physically] tiny media channel,” Shawn said. “And so, you start to look at testing as, ‘how simple can I be?’”

In this brief excerpt from Shawn’s MarketingSherpa and MarketingExperiments Optimization Summit 2013 presentation, “5 Optimization Discoveries from the SAP Website Test Lab,” you can learn how small changes can make a big difference in your testing and lead generation optimization efforts.

Read more…


Online Testing: 5 steps to launching tests and being your own teacher

April 10th, 2014

Testing is the marketer’s ultimate tool. It allows us to not just guess what coulda, woulda, shoulda worked, but to know what actually works. But more than that, it gives us the power to choose what we want to know about our customers.

“As a tester, you get to be your own teacher, if you will, and pick tests that make you want to learn. And structure tests that give you the knowledge you’re trying to gain,” said Benjamin Filip, Senior Manager of Data Sciences, MECLABS.

So what steps do we take if we want to be our own teacher?

While conducting interviews about the live test run at MarketingSherpa Email Summit 2014, I recently had the chance to discuss testing processes with Ben, as well as Lauren Pitchford, Optimization Manager, and Steve Beger, Senior Development Manager, both also of MECLABS. The three of them worked together with live test sponsor BlueHornet to plan, design and execute the A/B split test they validated in less than 24 hours.

Read on to learn what they shared about the testing process and what marketers can take away from this email live test. We’ll break down each step of the live test and help you apply it to your own testing efforts.

 

Step #1. Uncover gaps in customer insights and behavior

As Austin McCraw, Senior Director of Content Production, MECLABS, said at Email Summit, “We all have gaps in our customer theory. Which gap do we want to fill? What do we want to learn about our customer?”

What do you wish you knew about your customers? Do they prefer letter-style emails or design-heavy promotional emails? Do they prefer a certain day of the week to receive emails? Or time of day? Does one valuable incentive incite more engagement than three smaller incentives of the same combined value?

Think about what you know about your customers, and then think about what knowledge could help you better market to them and their needs and wants.

 

Step #2. Craft possible research questions and hypotheses

When forming research questions and hypotheses, Ben said, “You have to have some background info. A hypothesis is an educated guess, it’s not just completely out of the blue.”

Take a look at your past data to interpret what customers are doing in your emails or on your webpages.

Lauren wrote a great post on what makes a good hypothesis, so I won’t dive too deeply here. Basically, your hypothesis needs three parts:

  • Presumed problem
  • Proposed solution
  • Anticipated result
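As an illustration (not from Lauren’s post), those three parts can be captured in a small structure so every test hypothesis is stated the same way. The example problem, solution and result below are entirely hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """The three parts of a test hypothesis, stated as one sentence."""
    presumed_problem: str
    proposed_solution: str
    anticipated_result: str

    def statement(self) -> str:
        return (f"Because {self.presumed_problem}, we believe "
                f"{self.proposed_solution} will {self.anticipated_result}.")

# Hypothetical example -- the problem, solution and result are made up
h = Hypothesis(
    presumed_problem="the button copy doesn't convey the value of clicking",
    proposed_solution="stating the offer directly in the call-to-action",
    anticipated_result="increase clickthrough rate",
)
print(h.statement())
```

Writing the hypothesis as a single sentence forces all three parts to be present before a test is designed.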

 

Step #3. Brainstorm ways to answer those questions

While brainstorming will start with you and your group, don’t stop there. At MECLABS, we use peer review sessions (PRS) to receive feedback on anything from test ideas and wireframes, to value proposition development and results analysis.

“As a scientist or a tester, you have a tendency to put blinders on and you test similar things or the same things over and over. You don’t see problems,” Ben said.

Having potential problems pointed out is certainly not what any marketer wants to hear, but it’s no reason to skip this part of the process.

“That’s why some people don’t like to do PRS, but it’s better to find out earlier than to present it to [decision-makers] who stare at you blinking, thinking, ‘What?’” Lauren explained.

However, peer review is about more than discovering problems; it’s also about discovering great ideas you might otherwise miss.

“It’s very easy for us to fall into our own ideas. One thing for testers, there is the risk of thinking that something that is so important to you is the most important thing. It might bother you that this font is hard to read, but I don’t read anyway because I’m a math guy, so I just want to see the pretty pictures. So I’m going to sit there and optimize pictures all day long. That’s going to be my great idea. So unless you listen to other people, you’re not going to get all the great ideas,” Ben said.

Read more…


Web Optimization: How to get your customers to say heck yes!

April 7th, 2014

For e-commerce marketers, and many marketers with a subscription-based business, the value of the products they sell on the Internet is intangible when the purchase decision is made.

So who better to gain some conversion optimization advice from than an A/B tester who specializes in nonprofit marketing, the industry that must communicate the most intangible value of all: goodwill?

We brought Tim Kachuriak, Founder and Chief Innovation & Optimization Officer, Next After, into the studio and discussed:

  • The power of the value proposition
  • Creating a scarce resource
  • Commitment building
  • The value proposition train

I’ve known Tim for several years through his attendance at MarketingSherpa Summits, and am glad to have him as a featured speaker at the upcoming Web Optimization Summit in New York City. In fact, his Web Optimization Summit session was one of the things we worked on while he was in Jacksonville, Fla.

 

Below is a full transcript of our interview if you would prefer to read instead of watch or listen.

Read more…


Web Optimization: Traffic without conversion doesn’t matter

April 3rd, 2014

At Web Optimization Summit 2014 in New York City, Michael Aagaard, Founder, ContentVerve.com, will present, “How, When and Why Minor Changes Have a Major Impact on Conversions,” based on four years of research and dozens of case studies.

To provide you with a few quick test ideas, we reached across the miles to Copenhagen, Denmark, and interviewed Michael from our studios here in Jacksonville, Fla.

In this video interview, Michael discussed:

  • Why he’s so passionate about conversion optimization (and why you should be, too)
  • A pop-up test that generated 142% more newsletter signups
  • The one-word change of call-to-action button copy that consistently produces results (in several languages)

 

Below is a full transcript of our interview if you would prefer to read instead of watch or listen.

Read more…


Call-to-Action Button Copy: How to reduce clickthrough rate by 26%

March 31st, 2014

“Start Free Trial” | “Get Started Now” | “Try Now”

One of the above phrases reduced clickthrough rate by 26%.

 

DON’T SCROLL DOWN JUST YET

Take a look at those three phrases. Try to guess which phrase underperformed and why. Write it down. Heck, force yourself to tell a colleague so you’ve really got some skin in the game.

Then, read the rest of today’s MarketingExperiments Blog post to see which call-to-action button copy reduced clickthrough, and how you can use split testing to avoid having to blindly guess about your own button copy.

 

How much does call-to-action button copy matter anyway?

The typical call-to-action button is small, leaving you only one to four words to encourage a prospect to click.

With so few words in a CTA, how much could they really matter?

Besides, they come at the end of a landing page or email or paired with a powerful headline that has already sold the value of taking action to the prospect. People have already decided whether they will click or not, and that button is a mere formality, right?

To answer these questions and more, let’s go to a machine more impressive than the Batmobile … to the splitter!

 

A/B/C/D/E split test

The following experiment was conducted with a MECLABS Research Partner. The Research Partner is a large global media company seeking to sell premium software to businesses.

The button was tested on a banner along the top of a webpage. Take a look at that banner below. 

[Image: the banner, featuring a “Start Free Trial” call-to-action button]

 

Five different text phrases were tested in that button. Since I’ve already teased you on the front-end, without further ado, let me jump right into the findings.

 

Results

[Image: clickthrough results for the five button phrases tested]

 

Those few words in that teeny little rectangular button can have a huge impact on clickthrough.

As you can see, “Get Started Now” drove significantly more clicks than “Try Now.” Let’s look at the relative changes in clickthrough rate so you can see the relationship between the calls-to-action.
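If you want to run this kind of comparison on your own buttons, the standard tool is a two-proportion z-test on clicks versus views. Here is a minimal Python sketch; all counts below are hypothetical (the raw numbers from this experiment aren’t published here), chosen only to illustrate a 26% relative drop:

```python
from math import sqrt, erf

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """Two-sided z-test comparing two clickthrough rates."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    # Normal-approximation two-sided p-value via the error function
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    relative_change = (p_a - p_b) / p_b  # e.g., -0.26 means a 26% drop
    return relative_change, p_value

# Hypothetical counts chosen to mirror a 26% relative difference
rel, p = two_proportion_z(clicks_a=370, views_a=10_000,
                          clicks_b=500, views_b=10_000)
print(f"relative change: {rel:.0%}")  # prints: relative change: -26%
print(f"p-value: {p:.2g}")
```

A small p-value (conventionally below 0.05) is what lets you call a difference between two treatments “validated” rather than noise.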

Read more…


Analytics: How metrics can help your inner marketing detective

March 17th, 2014

Why are there so many different metrics for the same component of a website? For example, page views, visits, unique visitors and instances, just to name a few. Which metrics should you use? Are all of these metrics really helpful?

Well, they exist because these metrics are tracked differently and yield different interpretations.

However, you can gain much more knowledge by analyzing these metrics in relation to one another and combining them with segments, or characteristics of users.

For example, we might look at page views on your site with respect to the number of visits across two browsers, such as Chrome versus Firefox, and discover that Chrome users actually have more page views per visit on average than Firefox users.

That could indicate that the customers using Firefox may be of a different demographic with a different level of motivation compared to Chrome users. Or, it could mean that the site functionality or user experience on Chrome differs from that on Firefox. For example, I know on my own computer, Chrome displays website content in a smaller font than Firefox does.
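The segmented comparison described above takes only a few lines to compute once you have visit-level data. This sketch uses made-up visit records (the browsers and page-view counts are purely illustrative, not real analytics output):

```python
from collections import defaultdict

# Hypothetical visit-level data: (browser, page views in that visit)
visits = [
    ("Chrome", 8), ("Chrome", 6), ("Chrome", 7),
    ("Firefox", 4), ("Firefox", 3), ("Firefox", 5),
]

# Accumulate total page views and visit counts per browser segment
totals = defaultdict(lambda: [0, 0])
for browser, page_views in visits:
    totals[browser][0] += page_views
    totals[browser][1] += 1

# Pages per visit, segmented by browser
pages_per_visit = {b: pv / n for b, (pv, n) in totals.items()}
print(pages_per_visit)  # prints: {'Chrome': 7.0, 'Firefox': 4.0}
```

The same ratio-by-segment pattern works for any pair of metrics and any segment: device type, traffic source, new versus returning visitors, and so on.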

In today’s blog post, learn how just these two metrics combined together can put you in a detective mindset.

 

Don’t just investigate what customers do, investigate where they go

There are plenty of great free and paid analytics tools out there to help you investigate customer behavior, but for the sake of this example, I’m going to talk specifically about Adobe’s analytics tool.

An interesting advanced feature in the platform allows you to set custom tracking on link clicks on a page, which, when combined with other metrics, can reveal great findings.

For example, let’s say you have a call-to-action that takes a visitor to the cart. By using custom tracking, you can track how many visitors interact with the CTA on the page and compare that to the next page flow.

If you see a significantly higher number of visitors clicking the CTA, but not as many cart page views on the next page report, it could mean that there is some technical issue preventing the visitors from going to the next page.

There could also be a lot of “accidental” or “unintentional” clicks from visitors clicking the back button before the next page even loads, which can be very common on mobile sites.

If there are significantly fewer visitors clicking the CTA but more cart page views on the next page flow, what would that indicate?

Perhaps people are using the forward button frequently because they came back to the page after they have seen the cart.
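The click-versus-page-view reasoning above can be turned into a simple diagnostic check. This is a sketch of the logic only; the threshold and counts are illustrative assumptions, not a feature of Adobe’s tool:

```python
def diagnose_cta_flow(cta_clicks, next_page_views, tolerance=0.10):
    """Flag large gaps between custom-tracked CTA clicks and next-page views."""
    gap = (cta_clicks - next_page_views) / max(cta_clicks, 1)
    if gap > tolerance:
        # e.g., load failures, or accidental clicks followed by the back button
        return "clicks exceed next-page views: check for technical issues"
    if gap < -tolerance:
        # e.g., visitors returning to the cart via the browser forward button
        return "next-page views exceed clicks: visitors may arrive by other paths"
    return "clicks and next-page views roughly agree"

# Hypothetical counts: 1,200 CTA clicks but only 950 cart page views
print(diagnose_cta_flow(1200, 950))
```

A 10% tolerance is an arbitrary starting point; small gaps are normal because of timing differences in how clicks and page views are recorded.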

Read more…
