By Lars Johansson
By using a tool for A/B and multivariate testing, you can see what your visitors prefer and let the numbers speak. This allows you to find out which website tweaks make your visitors take action and increase sales. Don't have a tool for testing? There are numerous tools from which to choose, but I recommend that you first try Google Website Optimizer. It is free (get it here: https://goo.gl/hnid), and with it, you can find out which features you really need and which are just nice to have.
When you get started with testing, you need to keep some important guidelines in mind.
Remember that "best practice" is somebody else's practice
You can look at what others have been doing successfully, but don't assume that what worked for them will work for you. Results will vary depending on your audience and context. For example, some experts say that one-step forms are always more efficient, whereas others claim that multi-step forms are superior. So who's right? They are equally right and equally wrong: tests have found that sometimes one-step forms work better, and sometimes multi-step forms do. Don't assume. Test what works best for your audience and context. And if someone says that red buttons are better than purple ones, don't believe them. There is no universally superior button color, but you might want to test contrasting colors.
Be bold and break rules
You might have heard about Google testing 41 shades of blue for the toolbar on Google pages, but unless you work for Google, forget about it. Minor changes typically lead to minor improvements, and to find out whether those minor improvements are statistically valid, you will need massive amounts of traffic. If you want results, you will need to make more drastic changes than that. You will often have to bend, if not break, company design guidelines in order to achieve substantial improvements. While doing so, it's important to work closely with brand managers so you do not step on anyone's toes. Examples of what to test include images, tone and functionality.
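To see why small effects demand huge traffic, here is a minimal sketch of the standard normal-approximation sample-size formula for comparing two conversion rates. The baseline rate, lift values and significance settings are illustrative assumptions, not figures from any particular tool.

```python
from math import ceil, sqrt
from statistics import NormalDist

def visitors_per_variation(base_rate: float, relative_lift: float,
                           alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors needed per variation to detect a relative lift in
    conversion rate (two-sided two-proportion z-test approximation)."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# A 2% relative lift (a minor tweak) vs. a 20% lift (a bold change),
# both from a 4% baseline conversion rate:
print(visitors_per_variation(0.04, 0.02))  # roughly 950,000 per variation
print(visitors_per_variation(0.04, 0.20))  # roughly 10,300 per variation
```

Because the required sample grows with the inverse square of the effect size, a tenfold larger lift cuts the traffic you need by roughly a factor of a hundred. That is the arithmetic behind "be bold."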
Don't be greedy (limit your test)
Unfortunately, not everyone has the benefit of large, steady streams of visitors, and traffic volumes may limit you to running an A/B split test. Maybe you think changing the primary headline, main copy text, an image, the call to action and a button will improve conversions. Do not, however, test all the changes at once in a single A/B test if you want to know exactly what causes the improvement (if any) and to find the best combination (for example, what call to action works best with which image). In the previous example, there are five page sections (elements) with two variations each, which makes 2^5 = 32 possible combinations. If you run that as an A/B test, you will have no idea what made the difference.
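A quick way to see the combinatorics is to enumerate the variations. The element names below are hypothetical stand-ins for the example above.

```python
from itertools import product

# Hypothetical stand-ins for the five elements in the example, each
# with the original and one challenger:
elements = {
    "headline":       ["original", "variant"],
    "copy":           ["original", "variant"],
    "image":          ["original", "variant"],
    "call_to_action": ["original", "variant"],
    "button":         ["original", "variant"],
}

combinations = list(product(*elements.values()))
print(len(combinations))  # 32 == 2 ** 5

# A single A/B test compares just two of these 32 pages, so a win
# tells you nothing about which individual change (or interaction
# between changes) produced it.
```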
Calculate estimated time to completion
It is possible to create a test that would take years to finish. Say you have six elements and three variations per element (3^6 = 729 combinations), your test page gets 5,000 page views per day, your conversion rate is 4 percent, and you expect an improvement of 10 percent. That multivariate test would take 30 years to complete even if you include all visitors in your test. Use a duration calculator to find out whether the test you're planning to run is actually reasonable. Google's calculator can be found at https://goo.gl/g8VQ.
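For a back-of-the-envelope check before you reach for a calculator, the sketch below uses a common rule of thumb (roughly 16 * p * (1 - p) / delta^2 visitors per combination, for about 95% confidence and 80% power). It is an approximation; different calculators make different assumptions, so expect estimates in the same ballpark rather than identical figures.

```python
from math import ceil

def test_duration_days(combinations: int, daily_views: int,
                       base_rate: float, relative_lift: float) -> int:
    """Crude duration estimate: about 16 * p * (1 - p) / delta**2
    visitors per combination, for ~95% confidence and 80% power."""
    delta = base_rate * relative_lift
    per_combination = 16 * base_rate * (1 - base_rate) / delta ** 2
    return ceil(per_combination * combinations / daily_views)

# The example from the text: 6 elements x 3 variations each gives
# 3 ** 6 = 729 combinations; 5,000 daily views, 4% conversion, 10% lift.
days = test_duration_days(3 ** 6, 5_000, 0.04, 0.10)
print(days, round(days / 365, 1))  # ~5,599 days, about 15 years
```

This crude estimate lands at about 15 years; more conservative assumptions push it toward the 30 years quoted above. Either way, the order of magnitude tells you the test is not reasonable.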
Assess risks
While you might think of testing as gambling, rest assured that the odds are on your side. What you risk is a smaller number of conversions in the short term; what you can gain is an overall increase in conversions in the long run. Even when your hypothesis turns out to be wrong, you will learn something: how to avoid making costly mistakes in the future. To minimize risk, you might wish to expose a smaller share of your visitors to the test. But bear in mind that if fewer visitors are included, the test will take longer to complete.
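The trade-off is linear, as this toy calculation shows; the 28-day baseline is a hypothetical figure.

```python
# Hypothetical: a test that needs 28 days at full exposure.
full_exposure_days = 28

# Calendar time scales inversely with the share of traffic included.
for fraction in (1.0, 0.5, 0.25, 0.10):
    print(f"{fraction:.0%} of traffic -> {full_exposure_days / fraction:.0f} days")
```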
Validate your implementation
Before your test goes live, make sure to test the implementation. Factors that could skew results include the placement and accuracy of test scripts (for instance, ensure that the goal script is implemented only on the actual goal page) and the loss of referral data. Some tools, Google Website Optimizer included, automatically validate the test for you. Do not blindly trust this, because your implementation can be flawed regardless of what the automatic validation indicates.
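A simple smoke test can complement the automatic validation. The sketch below fetches a list of pages and checks where the goal snippet appears; the URLs and the MARKER string are placeholders you would replace with your own pages and a distinctive string from your actual goal script.

```python
from urllib.request import urlopen

# Placeholders: substitute your own pages and a distinctive string
# from your actual goal script.
MARKER = "goal-script-snippet"
GOAL_PAGES = ["https://www.example.com/thank-you"]
OTHER_PAGES = ["https://www.example.com/", "https://www.example.com/signup"]

def has_marker(url: str) -> bool:
    """Fetch a page and report whether the goal snippet appears in it."""
    with urlopen(url) as response:
        return MARKER in response.read().decode("utf-8", errors="replace")

for url in GOAL_PAGES:
    assert has_marker(url), f"goal script missing from {url}"
for url in OTHER_PAGES:
    assert not has_marker(url), f"goal script leaked onto {url}"
print("Goal script placement looks correct.")
```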
Avoid conflicting tests
If you set up two or more simultaneous tests sharing the same goal, you risk inaccurate results. Say you want to increase conversions by running one test to find out which headline works best, and a second test to find out which image works best. The two tests won't share data, so you won't know which combination of image and headline a converting visitor was exposed to. If you run several tests at once, make sure they don't add noise and uncertainty to each other.
Look out for side effects
Wanting improvement is good. Being too eager to achieve results can, however, inadvertently lead to costly mistakes, ones that are difficult to spot. While changes might increase conversions for one goal, they could decrease conversions for another, and typically a test is set up for only one goal. If you run an A/B test with Google Website Optimizer, use Google Analytics and custom variables to "tag" visitors. Then you will be able to find out how other goals and general visitor behavior are affected by your test and its different variations. (Read more about the use of custom variables at https://goo.gl/NjLV.) It's also possible to include the goal script for Google Website Optimizer on multiple goal pages, but if you do that, the goals will be lumped together and you won't know how each individual goal performed.
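Once visits are tagged with the variation they saw, you can break conversions down per goal. This sketch works on a hypothetical export of tagged visits; the data and field names are illustrative, not a Google Analytics API.

```python
from collections import Counter

# Hypothetical export of visits, each tagged (e.g. via a custom
# variable) with the variation seen and the goals completed.
visits = [
    {"variation": "A", "goals": {"purchase"}},
    {"variation": "A", "goals": {"newsletter"}},
    {"variation": "A", "goals": set()},
    {"variation": "B", "goals": {"purchase", "newsletter"}},
    {"variation": "B", "goals": {"purchase"}},
    {"variation": "B", "goals": set()},
]

totals: Counter[str] = Counter()
conversions: Counter[tuple[str, str]] = Counter()
for visit in visits:
    totals[visit["variation"]] += 1
    for goal in visit["goals"]:
        conversions[(visit["variation"], goal)] += 1

# Report each goal separately, so a lift on one goal cannot hide
# a drop on another.
for (variation, goal), count in sorted(conversions.items()):
    print(f"{variation} / {goal}: {count / totals[variation]:.0%}")
```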
Try segmented testing
The standard way of running a test assumes that one size fits all. While running any test is better than not testing at all, it is possible that one alternative or combination is better for direct traffic while another is better for, say, traffic from Google AdWords. Viewing the results for different segments will therefore give you deeper insights. You can use custom variables to keep track of variations in Google Analytics and advanced segments to see how the different variations performed for different segments. You can also have a look at page 25 in The Techie Guide for Google Website Optimizer (https://goo.gl/oXr7) or find a tool more suitable for segmented testing at www.whichmvt.com.
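Conceptually, segmenting just means computing conversion rates per (variation, segment) pair instead of per variation alone. This sketch shows how a winner can flip between segments; the visit log is a hypothetical example.

```python
from collections import Counter

# Hypothetical visit log: (variation shown, traffic segment, converted?)
visits = [
    ("A", "direct", True), ("A", "direct", True), ("A", "direct", False),
    ("B", "direct", False), ("B", "direct", True), ("B", "direct", False),
    ("A", "adwords", False), ("A", "adwords", False), ("A", "adwords", True),
    ("B", "adwords", True), ("B", "adwords", True), ("B", "adwords", False),
]

totals: Counter[tuple[str, str]] = Counter()
wins: Counter[tuple[str, str]] = Counter()
for variation, segment, converted in visits:
    totals[(variation, segment)] += 1
    wins[(variation, segment)] += converted

for key in sorted(totals):
    variation, segment = key
    print(f"{variation} / {segment}: {wins[key]}/{totals[key]} converted")
```

In this toy data, A wins for direct traffic (2/3 vs. 1/3) while B wins for AdWords traffic (2/3 vs. 1/3); the overall totals (3/6 each) would hide that entirely.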
Challenge the winner
So you've reached a statistically valid result and announced the winner. Are you done now? No. Testing is not a one-time exercise. Rather, it should be part of your process for continuous improvement. It's not only possible but likely that some yet-untested variation will perform even better than the current winner. That's why a winner should be challenged from time to time. Sooner or later, there will be a new winner.
About the Author: Lars Johansson is the co-founder of Ampliofy (Web analytics products) and inUse Insights (Web analytics consultancy). inUse Insights is a Google Analytics Certified Partner and Google Website Optimizer Certified Partner. Lars blogs about Web analytics and testing at www.WebAnalysts.info.