Author Archive

Email Testing Pitfalls: 7 Common Mistakes That Can Hurt Your Test Strategy

August 9th, 2010 1 comment

Editor’s Note: In a recent interview with MarketingSherpa Editor Sean Donahue, Research Analyst Corey Trent outlined errors even experienced email marketers make when conducting tests (my personal favorite is #5). We thought this information was valuable and wanted to share it right here on the blog for those who do not have a MarketingSherpa membership. Special thanks to our sister company for allowing us to republish the article below…

SUMMARY: Before you conduct your next email test, make sure you’re not falling into a trap that can muddy your results or limit the gains you might otherwise achieve.

We spoke with an email testing expert from our sister company, MarketingExperiments, to uncover common mistakes marketers make when running email tests. Read why good analytics and segmentation are crucial forerunners to testing, and why a blockbuster discovery from one test actually can be a risky thing for a marketing team.

by Sean Donahue, Editor, MarketingSherpa

Testing is an essential component of a strong email marketing strategy. But only if the tests are conducted and analyzed properly to ensure you’re helping – not hurting – your email performance.

“There is a cost for bad testing,” says Corey Trent, Research Analyst, MarketingExperiments. “Bad assumptions based on bad tests can cost you a lot of money and cause you to lose out on a lot of business.”

Trent routinely conducts email tests as a member of the MarketingExperiments sales and marketing optimization research team. Through this work, he’s seen how mistakes, misconceptions and simple oversights can derail a well-meaning marketer’s testing strategy.

We asked him to share his advice for avoiding testing pitfalls, so you can achieve your goal of improving email performance. Here are seven common mistakes he’s observed: Read more…

Marketing Optimization Technology: Be careful of shooting yourself (and your test) in the foot

May 28th, 2010 1 comment

As a presenter on our recent technology-focused web clinic, I had the pleasure of learning about an experiment devised by my colleague, Jon Powell, that illustrates why we must never assume we test in a vacuum, free of external factors that can skew our data (including external factors we create ourselves).

If you’d like to learn about this experiment in its entirety, you can hear it firsthand from Jon on the web clinic replay. SPOILER ALERT: If you choose to keep reading, be warned that I am now giving away the ending.

According to the testing platform Jon was using, the aggregate results came up inconclusive: none of the treatments outperformed the control with any significant difference. What was interesting, however, was that the data indicated a pretty large difference in performance for a couple of the treatments.

So after reanalyzing the data and adjusting the test duration to exclude the period when an unintended (by our researchers, at least) promotional email had been sent out, Jon saw that each of the treatments significantly outperformed the control with conclusive validity.

In other words, if Jon had blindly trusted his testing tool, he would have missed a 31% gain. Even worse, this happened at the beginning of a six-month testing-optimization cycle. If Jon had drawn conclusions from the inaccurate data, it more than likely would have sent him down a path of optimizing under false findings and assumptions.
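For readers who want to try this kind of re-analysis on their own data, here is a minimal sketch in Python. It is not the testing platform Jon used; the daily log, field names, dates, and numbers are all hypothetical, and it uses a standard two-proportion z-test to re-check significance once the contaminated days are removed.

```python
# Minimal sketch (hypothetical data, not MarketingExperiments' tooling):
# re-check a split test after excluding days contaminated by an unrelated
# promotional email sent to the same audience.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score and two-sided p-value for conversion rate A vs. B."""
    p_pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

def totals(rows, variant, exclude=frozenset()):
    """Sum conversions and visitors for one variant, skipping excluded dates.

    rows: list of (date, variant, visitors, conversions) tuples.
    """
    n = sum(v for d, var, v, c in rows if var == variant and d not in exclude)
    conv = sum(c for d, var, v, c in rows if var == variant and d not in exclude)
    return conv, n

# Hypothetical daily log; the last day is when the promo email went out.
rows = [
    ("2010-04-10", "control", 5000, 110), ("2010-04-10", "treatment", 5000, 150),
    ("2010-04-11", "control", 5200, 115), ("2010-04-11", "treatment", 5100, 148),
    ("2010-04-12", "control", 9000, 400), ("2010-04-12", "treatment", 9100, 410),
]
contaminated = {"2010-04-12"}

conv_c, n_c = totals(rows, "control", exclude=contaminated)
conv_t, n_t = totals(rows, "treatment", exclude=contaminated)
z, p = two_proportion_z(conv_c, n_c, conv_t, n_t)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 -> conclusive difference
```

The key design point is the exclude set: rather than discarding the whole test, you drop only the window the unintended campaign touched, re-total both arms over the same remaining days, and re-validate.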

To use a simple pre-GPS-era analogy: if you make a wrong turn at the beginning of a 600-mile road trip and keep heading in the wrong direction, you will end up much farther off the mark than if you take the wrong road when you’re just a mile away. For many of the businesses we work with, those wrong turns and misdirections can cost thousands to millions of dollars in lost time and revenue.

Worst of all, this email came from the Research Partner itself. As we see many times, they unwittingly sabotaged their own test. With the Internet being a dynamic place, it is next to impossible to avoid every external validity threat to your test, but at the very least we need to make sure we are not introducing threats ourselves by running internal campaigns to the same audience.

This is not to say we should stop those campaigns, but we do need to be aware of their potential effects on testing. That awareness, at least until computers become sentient beings, requires human involvement. Of course, that’s just one area where a little human curiosity is essential… Read more…

Categories: Clinic Notes

Google Analytics: New browser-based, data-privacy opt out important, but what consumers really need is education

May 3rd, 2010 1 comment

Back in March, Google got more serious about protecting user data privacy (as it should), and to that end announced plans for a browser-based opt-out for Google Analytics.

In typical Internet fashion, the blogs and Twitter lit up with doom-and-gloom news that web tracking was dead; run for the hills, web analysts. In fact, it was very reminiscent of the reaction when Germany announced it was going to investigate the legality of Google Analytics collecting data on its citizens. But, as with past incidents, people calmed down and life went on.

Personally, I am all for Internet privacy. We as businesses and marketers need to respect users’ wishes if they decide they do not want to be tracked (even if our retail counterparts do not always honor this). If a significant number of people choose to opt out, then we need to adapt and find other ways to determine what our users want and need. Heaven forbid we talk to them or engage them more personally (see my earlier blog post on different ways to do this online).

But in talking with people who have concerns about being tracked online (especially by Google Analytics), I typically find that they simply misunderstand what the tool does. Most people with concerns feel it is a Big Brother tool that tells us exactly who they are, tracks them after they leave our site, and relays to us every website they visit.

They lighten up significantly when I tell them that the tool is really used to anonymously look at users of our website and help us understand how to make our process, products, or websites better.

In fact, we cannot even see (at a personally identifiable level) who these people are if we are following Google Analytics’ terms of service. Once they hear this, most skeptics see the value and how it can really make the Internet a better place without negatively impacting privacy. Read more…

PPC Innovation: How will Google’s new lead capture extension affect your pay-per-click campaigns?

March 29th, 2010 8 comments

We have been quite busy at the labs here, but I wanted to cover a PPC development that blipped on our radar earlier this year. For many of us, PPC is a critical source of traffic, and it can be quite the task to manage. Well, to add to the list of things to consider, Google is beta-testing the collection of phone lead information directly from SERPs (search engine results pages).

Google generates roughly 97% of its revenue from online advertising, so it makes sense that it would delve into new areas of online marketing – which now seems to include part of the sales process as well.

Given the huge potential (or threat) this represents to you, the Internet marketer, I think this is a vital development to cover on this blog (and I even reached out to a search engine marketing firm to get their ideas for you as well). While this will not affect all verticals, for some niches it might pour gasoline (or, more correctly, napalm) on already very competitive areas. Read more…

Conversion Window: How to find the right time to ask your customer to act

March 3rd, 2010 2 comments

Many marketers I talk to are quite interested in optimizing the content of their email messages. They test images, calls to action, subject lines, and the tone of the email. However, how many companies test the timing of email sends and how this affects readership?

Proper timing = greater relevance

To illustrate how timing might affect open and click-through rates, think about how you read email. In the afternoon, when the day is dragging on and you need a break, do you give each email message a little more time than when you first get into the office in the morning and are confronted with 20 hot items bursting from your inbox?

So would an email with a more complex conversion goal (such as signing up for a recurring subscription) do better with you in the afternoon, while a simple conversion goal (like signing up for a free web clinic) might have a better chance in the morning, when you’re plugging and chugging and not putting as much thought (and perhaps doubt) into your actions? Read more…
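To make the idea concrete, here is a minimal sketch in Python of the kind of aggregation you might run before designing a timing test. The send log and its fields are entirely hypothetical: each row records the hour a message went out and whether it was opened or clicked.

```python
# Minimal sketch (hypothetical send log): compare email performance by
# send window as a first step toward a timing test.
from collections import defaultdict

MORNING, AFTERNOON = range(6, 12), range(12, 18)

def rates_by_window(send_log):
    """send_log: list of (send_hour, opened, clicked) tuples with 0/1 flags."""
    stats = defaultdict(lambda: [0, 0, 0])  # sends, opens, clicks per window
    for hour, opened, clicked in send_log:
        window = ("morning" if hour in MORNING
                  else "afternoon" if hour in AFTERNOON
                  else "other")
        s = stats[window]
        s[0] += 1
        s[1] += opened
        s[2] += clicked
    return {w: {"open_rate": o / n, "ctr": c / n}
            for w, (n, o, c) in stats.items()}

# Made-up rows: (send_hour, opened?, clicked?)
log = [(9, 1, 0), (9, 0, 0), (10, 1, 0), (14, 1, 1), (15, 1, 1), (16, 0, 0)]
print(rates_by_window(log))
# e.g. a higher afternoon CTR would make that window the candidate
# for the more complex offer
```

From there you would send the same message across windows and validate any difference the same way you would any other A/B test.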

Categories: Email Marketing, Research Topics

To Tweet or Not to Tweet: Social media is a great way to get customer feedback…just be wary of potential blowback

February 5th, 2010 8 comments

In my last blog post, I challenged you (and myself as well) to be more proactive in approaching customers for feedback. I recently found an excellent example on Twitter of an auto detailing supply company tying in the New Year with an offer to give feedback on things they can do better in the coming year.

Finding the right incentive

Notice they also offer a small incentive for providing feedback. However, it is important to note that the incentive is not a brand-new car or a Neil Diamond cruise trip. It is just enough to pique the interest of followers, but probably not enough to cloud the feedback with nonsense from people angling for a shot at winning the car wax. Read more…