3 Common A/B Testing Workarounds and How to Avoid Them

A/B testing is meant to be quick, relatively inexpensive, and measurable, but marketers often find it is anything but. In interviews conducted by WEVO, digital marketers identified the most common A/B testing workarounds they’ve seen across the industry, and explained why these shortcuts should be avoided even though they deliver some short-term benefits.

So, why are these workarounds needed in the first place? It all comes down to reducing the time and resources it takes to get results. The marketers interviewed identified three pain points in the A/B testing process that these workarounds try to address:

  1. Brand and legal reviews – Reviews of new test designs can significantly slow down the testing process, leaving many marketers seeking ways to skip this step.
  2. Coding and development – Engineering resources can be hard to come by, or it can be difficult to prioritize marketing tests over other business needs.
  3. Reaching significance – Once launched, it can take weeks or even months for a test to reach statistical significance (a quick sketch of this check follows the list). Not all tests produce a significant improvement, and some actually lower conversion.
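
To make the third pain point concrete, here is a minimal sketch of the kind of significance check testing tools run under the hood, assuming a standard two-proportion z-test; the visitor and conversion counts are hypothetical placeholders, not figures from the interviews.

```python
import math

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return math.erfc(abs(z) / math.sqrt(2))        # normal approximation

# Hypothetical month of traffic: control converts 5.0%, variant 5.3%
p = ab_test_p_value(conv_a=500, n_a=10_000, conv_b=530, n_b=10_000)
print(f"p = {p:.3f} -> {'significant' if p < 0.05 else 'keep waiting'}")
```

Even with 20,000 visitors in this example, a 0.3-point lift lands nowhere near the conventional p < 0.05 threshold, which is exactly why smaller tests can run for months.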

For all of these reasons, new design tests require significant effort, from marketing and brand approvals to legal and copy approvals to full page development and testing, all without any sense of how well (or how badly) the test might perform in market.

As marketers look to overcome some of these challenges, the following common workarounds emerge:

1.  Testing Small

The most common workaround identified is the small test. Many marketers run a series of small tests of details such as button colors, minor copy edits, and single image changes instead of focusing on larger, high-impact changes. In fact, our research found that 90% of A/B tests can be characterized as small, detail-focused changes.

The Benefit:

Smaller tests are more likely to show exactly which changes contributed to any conversion lift or decline, which can inform future testing. These smaller changes often bypass the hurdles of brand and legal approvals, reducing the time to test. Thanks to the enhanced functionality of testing tools like Optimizely and Google Optimize, these smaller tests often require no development resources and can be created by the marketers themselves. We have found that these changes often produce positive results, although most frequently small conversion changes in the range of ±1%.

The Problem:

Incremental tests, incremental results. While there are benefits to continuous small improvements, companies may not achieve their desired conversion gains any time soon, if at all. Bigger design tests may fail more often, but they are more likely to significantly increase conversion when they work. Those interviewed reported that their larger tests often result in 10-20% increases in conversion.
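
The traffic math behind this trade-off is worth spelling out. The sketch below applies the standard two-proportion sample-size formula to hypothetical numbers (a 5% baseline conversion rate, 95% confidence, 80% power) to compare a small 1% relative lift against a big 10% one; none of these figures come from the interviews.

```python
import math

def sample_size_per_variant(p1, p2, z_alpha=1.96, z_power=0.8416):
    """Visitors needed in EACH variant to detect a shift from p1 to p2
    at 95% confidence (two-sided, z_alpha) with 80% power (z_power)."""
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return math.ceil(n)

baseline = 0.05  # hypothetical 5% conversion rate
for lift in (0.01, 0.10):  # "small" 1% lift vs. "big" 10% lift
    n = sample_size_per_variant(baseline, baseline * (1 + lift))
    print(f"{lift:.0%} relative lift: ~{n:,} visitors per variant")
```

Under these assumptions the 1% lift needs roughly 3 million visitors per variant, while the 10% lift needs about 31,000: nearly a 100x difference, which is the quantitative version of “incremental tests, incremental results.”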

2.  Relying on the Brainstorm

The most common source of new ideas for landing pages is the internal brainstorm, according to those surveyed. Sometimes behavioral data and analytics are leveraged for these brainstorms, but often they rely only on best practices and brand guidelines and require many assumptions.

The Benefit:

Digital marketers can review behavioral data, then recommend and implement changes that reflect what they think will generate improvements, and in fact, these changes often do produce incremental gains.

The Problem:

One of the biggest challenges marketers face is understanding why visitors do or don’t do what they want on a landing page. While behavioral data is very useful, it doesn’t answer the “why,” so internal brainstorms lack the insight needed to direct the highest-impact modifications to the website.

3.  Enlisting Agencies for a Website Overhaul

While it’s more of a leap than a workaround, some marketers report saving all of their larger tests for full website redesigns or external agency designs. Once these designs are selected, the organization is often committed to them.

The Benefit:  

It’s refreshing to start anew. Complete website overhauls and new agency designs generally come with major changes in layout, navigation, and design. The redesigns are often based on user feedback and review, whether gathered through focus groups or drawn from past performance.

The Problem:

Full redesigns require commitment to the design and architecture before any A/B testing. Worse, while agencies generally create 2-3 new designs for their clients, the client must serve as a proxy for their visitors and guess which design will convert best.


So, let’s say we want to combat some of the A/B testing pain points, but don’t want to lean on these workarounds. Here are three solutions that can help marketers achieve their full conversion potential:

  1. More “big” tests, more often
  2. Get real audience insight to inform new designs
  3. Test new, agency-created pages before coding and pushing live

Luckily, the WEVO solution addresses all of these. With WEVO, marketers can test new design ideas before committing them to their website. Test designs are submitted to our A/B test simulator, which provides conversion estimates, audience insight, and sentiment analysis. Armed with this information, marketers can gain faster, more confident approval of designs and A/B test fewer versions with more informed hypotheses.

Interested in learning more about WEVO? Contact us here.
