It's hard to talk about testing without hearing a statistic that sounds truly ridiculous, like "Company XYZ runs a zillion tests a year - no, really!" Even if these statistics are true, they miss the most important thing: whether the tests themselves are high-quality.
If you are running tests just to run tests, you're missing the mark. Running a certain number of tests per week, month or year does not mean that you are testing in the best ways possible. In fact, it does not even guarantee you're running the right tests at all. Organizations should first focus on developing and running only high-quality tests. Once they have accomplished that, they can increase the quantity to test a wider variety of factors.
It's not uncommon to attend a quarterly business review and be asked for facts and figures that prove your business is doing better than it was in the previous quarter. Too often, people try to demonstrate their company's progress simply by showing that they've run three times more tests than they did in the previous period. However, a raw count of tests says nothing about the results of those tests.
Instead, look at what your tests are telling you. Tracking your return on investment (ROI), win rate, and implementation rate can help you understand whether you are running tests that actually matter to your business.
Testing can help you understand whether you are actually seeing a return on your investment. Whether that investment is the time you've put into a campaign or the ad dollars you've spent on it is immaterial. It's crucial to use testing to understand what you're getting out of your campaigns. Optimizing any campaign is an iterative process, and testing is a key component of understanding what works and what doesn't.
Your win rate is the percentage of your tests that were statistically significant as well as conclusively positive. Not every test clears that bar. Sometimes you run a test to determine whether A is better than B, and the test does not give you a conclusive answer. It happens, and as an organization, you should know how often it is happening with your tests. If it is happening frequently, it could be an indication that your tests aren't differentiated enough or that you are testing too many variations for your property's traffic and conversion levels. Not every test is going to be a winner, but every test should be an opportunity to learn. If none of your tests are winners, it may be time to evaluate your ideation strategy. The bonus? Tracking win rate lets you measure how much better you're getting at testing over time.
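To make that concrete, here is a minimal sketch in Python of how a team might compute win rate from its own test log. The records and field names (`significant`, `positive`) are hypothetical, not part of any particular testing tool:

```python
# Hypothetical test log: each record notes whether the test reached
# statistical significance and whether the variation beat the control.
tests = [
    {"name": "checkout_upsell",  "significant": True,  "positive": True},
    {"name": "hero_banner_copy", "significant": False, "positive": True},
    {"name": "price_placement",  "significant": True,  "positive": True},
    {"name": "nav_redesign",     "significant": True,  "positive": False},
]

# A "win" must be both statistically significant and conclusively positive.
wins = sum(1 for t in tests if t["significant"] and t["positive"])
win_rate = wins / len(tests)
print(f"Win rate: {win_rate:.0%}")  # 2 of 4 tests -> 50%
```

Note that inconclusive and negative tests still count in the denominator - that is what makes the metric honest about how often your ideas actually pan out.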
Imagine an online retailer who runs 10 A/B tests in a quarter, six of which show statistically significant results. That's a pretty good rate of success! Knowing that recommending related products on the checkout page (B) - say, a shirt to go with the jeans your customer bought - beats no up-sells at all (A) is only half of the battle, though. To reap the rewards, checkout up-sells must become the new default site experience, where they become the new control to be tested against, and the cycle goes on and on. If the IT team implements only three of the six winning experiments, the business captures only half of its potential ROI.
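Deciding whether a test like that is conclusive in the first place is a statistics question. Here is a sketch of one common approach, a two-proportion z-test, using made-up conversion counts - the numbers and the 5% significance threshold are assumptions for illustration, not figures from the retailer example:

```python
# Sketch: is variation B (checkout up-sells) conclusively better than
# control A (no up-sells)? Two-proportion z-test on hypothetical counts.
from statsmodels.stats.proportion import proportions_ztest

conversions = [310, 370]     # conversions for A and B (made-up numbers)
visitors = [10_000, 10_000]  # visitors exposed to each experience

# alternative="smaller" tests whether A's conversion rate is below B's.
z_stat, p_value = proportions_ztest(conversions, visitors,
                                    alternative="smaller")
if p_value < 0.05:
    print(f"B wins (p = {p_value:.3f}) - promote it to the new control.")
else:
    print(f"Inconclusive (p = {p_value:.3f}) - log the learning and iterate.")
```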
Your implementation rate, on the other hand, is the rate at which you actually act on the tests that produce conclusively positive results. One could argue that implementation rate is the most important metric to track with regard to testing, because implementation is often lost in the testing cycle. Let's say, for instance, that your online retailer runs an A/B test and finds that placing the product price and purchase button on the left (B) beats placing those details on the right (A). That result is not worth much if you never implement option B. Without using your newly discovered knowledge to improve the customer experience, you are - again - testing only for testing's sake. Tracking your implementation rate can help you understand which tests you acted on, why you acted on them, and what the results of your actions were in both the short and long term. Brands need comprehensive personalization solutions to help bridge the gap between testing and implementation by pushing winners live for you until you can integrate them into your core site experience. Your checklist should include capabilities such as auto allocate, automated personalization, and recommendations, which can be "always on" to eliminate the need to hardcode a winner! Sometimes, though, you just have to implement your results - so keep track of them!
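Continuing the hypothetical sketch above, implementation rate looks only at the winners and asks how many of them actually made it into the default experience. The `implemented` flag is again an assumption about how you log your tests:

```python
# Winners from the earlier log, with a flag for whether each one has
# actually been rolled out as the new default site experience.
winners = [
    {"name": "checkout_upsell", "implemented": True},
    {"name": "price_placement", "implemented": False},  # still in the backlog
]

rolled_out = sum(1 for w in winners if w["implemented"])
implementation_rate = rolled_out / len(winners)
print(f"Implementation rate: {implementation_rate:.0%}")  # 1 of 2 -> 50%
```

A low number here is the retailer example in miniature: wins you paid to discover but never cashed in.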
Not every test you run is going to lead you to actionable results. But if you do not know your ROI, win rate, and implementation rate, it's hard to determine whether your tests are high-quality and how efficient and impactful your test-and-learn program is. True testing requires organizations to understand not only what data the tests spit out, but also what they did about it - and, in turn, what effect those actions had on the organization. That is the true lifecycle of testing. Otherwise, you are just running tests with no real meaning - well, other than to prove to someone that you have run them.
Jason Hickey is a Senior Product Marketing Manager for Adobe Target and is highly passionate about data-driven decision-making and conversion rate optimization.