The Tremendous Price You Pay for Experimenting With Your Customer

Why are so many of us unhappy with our website conversion rates? We all want our digital content and designs to capture the attention of our audience. Secretly, we probably even want them to perform at a level that beats our expectations. So we do everything we’re “supposed to” do in our pre-live workflow: design, coding, ads, and SEO. All the textbook best practices are there. But what are we really missing?

For starters, our “tech stack” is heavily weighted toward what happens after we publish our content. As a result, we are unprepared and largely uninformed before the most critical moment: launching our website. So we launch it, measure the results, and when they fail to meet expectations, take a guess at why and iterate from there. Lather, rinse, repeat.

This incremental path to success, keeping what sticks to the wall and dropping what doesn’t, is how we have come to expect experimentation to work. Throwing on a white lab coat and testing our hypotheses is certainly a sound methodology. But I think we’re missing the point. Experiments are supposed to be done before the product reaches its customers! And by making our customers the guinea pigs, we’re not only missing the point, we’re also paying for it in many ways, and far more than we think.

 

How Many Visitors Can We Afford to Lose?

All the A/B and multivariate testing in the world is not a justification or excuse to go to market with subpar customer experiences. Iteration only works if one of the options is actually good. Remember that old joke about how fast you need to be to outrun a hungry lion? Just because you hightail it a little faster than the guy running next to you doesn’t exactly make you Usain Bolt. And the same is true when version A beats out version B in a split-test duel: A doing better than B doesn’t mean A is actually good.

Sure, over time, experimentation should weed out conversion blockers, improving the customer experience and losing fewer and fewer visitors along the way. And by all means, we must do it. But how much time can we realistically afford to spend? How many of the visitors who see the “losing version” can we afford to lose? The gap in effectiveness between the loser in A/B test #1 and the final winner in test N is an opportunity cost we’re paying. Our duty must be to reduce that cost as much as possible and to optimize before we go live.
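
To make that opportunity cost concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it (traffic, test duration, conversion rates, value per conversion) is a made-up assumption for illustration, not data from any real test.

    # Rough sketch of the opportunity cost of iterating toward a winner in production.
    # Every figure below is an assumption invented for illustration only.

    monthly_visitors = 100_000        # assumed traffic while the tests are running
    months_of_testing = 6             # assumed time to get from test #1 to test N
    winning_conversion_rate = 0.040   # assumed conversion rate of the eventual winner
    blended_conversion_rate = 0.025   # assumed average rate across the losing variants shown meanwhile
    revenue_per_conversion = 50       # assumed dollar value of a single conversion

    lost_conversions = monthly_visitors * months_of_testing * (
        winning_conversion_rate - blended_conversion_rate
    )
    lost_revenue = lost_conversions * revenue_per_conversion

    print(f"Conversions forfeited while iterating: {lost_conversions:,.0f}")
    print(f"Estimated opportunity cost: ${lost_revenue:,.0f}")

Even with modest assumptions like these, the gap between “eventually optimized” and “optimized before launch” adds up fast.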

 

Temporary Mistakes Can Leave Permanent Damage

In today’s fast-paced digital landscape, content can seem short-lived, perhaps so transient that mistakes will go unnoticed or even be acceptable. Unlike an ad printed on the front page of The New York Times, a digital ad can be removed in seconds. Is this the right way to think about it? Sure, you can replace any content that doesn’t sit well with your target audience. But that doesn’t mean they can “unsee” it. When we come across a poor experience, we’re clearly less likely to return for more.

According to a report published by Forrester and Accenture Interactive, “while 52% of respondents believe that they are ahead of their competitors or ‘best in class’ in their industries, only 7% are exceeding customers’ expectations.” That’s a hell of a lot of conversion loss, damaged brand loyalty, and diminished customer lifetime value. Compromising your repeat customer business is playing with fire. And assuming you can sweep a mistake under the rug is even worse.

 

Go Bold or Go Home

So why risk all that on disappointing digital experiences? You shouldn’t. Nor should you play it safe, especially when many a future CMO career has been launched by an enterprising marketer making a big-bet move that shows originality, vision, and initiative. Those qualities matter when you’re releasing a new product or a meaningful update – remember Dollar Shave Club and its “our blades are f***ing awesome” messaging?

Risk-taking moves the needle, gets headlines in Adweek, builds buzz and followers, rises above the noise – and secretly inspires envy in our peers. But we can’t afford for our content to bomb. So there’s got to be prudent risk-taking. Innovation that is pre-proven. Dramatic changes backed by customer-centric, pre-live data.

And preferably data not tainted by analysis paralysis: endless deliberation ended only by the most senior person in the room making the call. (Side note: did you know that a McKinsey survey found that 72% of senior executives thought bad strategic decisions either were about as frequent as good ones or “were the prevailing norm” in their organization?) We can change that. By coming prepared with real audience data when we propose a bold idea, we improve our chances of making an impact tremendously.

 

It’s All About the Benjamins

A couple of weeks ago, I made the argument that we’re in the Dark Ages of digital marketing. As marketers, we’re not well informed, we lack time and budget, and we don’t have the tools to listen to our audience before launching a webpage. The impact is that we’re leaving money on the table.

Many of us are measuring the wrong things. We think “digital” automatically equates to what Eric Ries astutely calls “vanity metrics.” Page views, trial signups, and social media “likes” are great, but they don’t really count. They’re all means to a single end: revenue. Revenue is what drives our KPIs, and it is rapidly becoming the primary yardstick by which digital marketers are measured. Driving visitors to spend more time on site and fill out contact forms matters, but these are just tactics.

Launching a page based on ill-informed, or even knowingly raw, guesses is bad form; it’s not what a professional marketer does, and we’re paying for it. What we do matters. So does when we do it. The pre-live time frame, before a webpage goes live to the public, is an opportunity. To pick the most relatable hero image. The most eye-catching headline. And to design the best damn webpage possible!
