I have been around conversion rate optimization (CRO) for a long time, and many of those years were spent crying in the wilderness.
It seemed so obvious that making landing pages and websites more efficient was the most direct route to getting more value from online marketing, but most companies just focused on getting more traffic to their site. Instead of plugging the holes in their leaky bucket, they chose to pour more water into the top of it.
Recently there has been a huge change in the popularity of CRO, and it is finally getting the recognition it deserves. There are many books on the subject, a vast array of powerful tools, conferences devoted exclusively to the topic, and people whose job titles are devoted solely to optimization - all of which is, of course, very gratifying to me.
One of the key activities of CRO is landing page testing (also known as split testing or A/B testing). By testing different versions of content on a website, professionals can see which ones an audience prefers. This allows marketers, analysts, and the like to validate whether a particular change has a positive effect on their business, and that is important: it replaces subjective opinions with hard data.
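To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical traffic numbers) of the kind of significance check a split-testing tool applies before declaring a winner:

```python
from math import erfc, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                  # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical traffic: control converts 120 of 4,000 visitors, variant 160 of 4,000.
p_a, p_b, z, p = two_proportion_z_test(120, 4000, 160, 4000)
print(f"control {p_a:.2%}, variant {p_b:.2%}, z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers, the variant's lift is statistically significant (p below 0.05), which is exactly the sort of evidence that settles an argument a subjective opinion never could.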
As more companies get the "testing religion," they focus on generating ideas for new tests, testing technology, and the speed with which tests can be reliably cranked out. In fact, testing velocity becomes a goal of its own. Testing is important, but it has been taken to extremes; some testing-only agencies and practitioners treat it as the only way to make changes to online experiences. Some have even stated that enterprises should never redesign their site, and should advance its performance only by continual testing.
This is where I must draw the line.
Testing will lead to some improvements on poorly performing pages, but what is often broken is not the individual elements but the whole Web experience. Tinkering with individual pieces of it will not get the results a company wants. It is like hoping to turn an old-fashioned biplane into a high-performance jet fighter while flying it; it simply can't be done.
Anyone who says that a brand should never redesign its site is either very confused or is fighting with one hand tied behind their back.
That idea takes hold only when someone equates CRO with testing and is not a fully empowered optimizer (in other words, small-scale tinkering via testing is the only activity he or she is allowed to do within the company). It is absolute poison and a very limiting view, and anyone who propagates it is doing an enterprise a big disservice. Site redesigns should be done when appropriate, for a number of reasons, and tactical improvements should be made in between via split testing on high-value, high-traffic parts of the site.
It is true that the stakes are much higher with a site redesign, and that poor redesign processes often result in lower conversion (since their motivation is not to better align with visitor intent; in fact, they are often not focused on the needs of visitors at all). Advocates of CRO should be the ones leading the site redesign effort to get the best possible outcomes.
So what significant reasons could a company have for making the leap? The top reasons include:
A visual refresh should not be the main motivation for an expensive full redesign unless the look and feel is dated and is impacting a visitor's perception of trust and site quality. Stakeholders should take a hard look at the other important motivations above in order to accomplish as much as they can when they do redesign, but the look and feel should be the icing on the cake and not the main course.
Judging the success of a redesign is tricky unless all a company has done is a visual facelift (without touching any of the conversion points or how they are tracked).
Even then, changing colors, the amount of visual clutter, or the relative emphasis of page elements can make direct comparisons difficult. In any case, CROs should only be looking at new visitors (never return visitors, whose behavior is polluted by exposure to a mix of the two site versions). Even that, however, is not possible with complete accuracy, since many people wipe their cookies.
Return visitors will always be affected by the fact that something changed between visits; they are not necessarily reacting to whether that change was actually for the better. In other words, they have to expend expensive conscious thought to get their minds around the new site experience, and doing that mental work is tough for them - often leading to lower conversions compared to what they were so fluently able to accomplish before.
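A minimal sketch of that new-visitor segmentation, assuming hypothetical session records that flag visitor type and conversion, might look like this:

```python
def new_visitor_conversion_rate(sessions):
    """Conversion rate among new visitors only. Return visitors are excluded
    because their behavior mixes reactions to both site versions."""
    new = [s for s in sessions if s["visitor_type"] == "new"]
    return sum(s["converted"] for s in new) / len(new) if new else 0.0

# Hypothetical before/after session logs.
before = [{"visitor_type": "new", "converted": True},
          {"visitor_type": "return", "converted": True},
          {"visitor_type": "new", "converted": False}]
after = [{"visitor_type": "new", "converted": True},
         {"visitor_type": "new", "converted": True},
         {"visitor_type": "return", "converted": False}]

print(f"new-visitor rate before: {new_visitor_conversion_rate(before):.0%}")
print(f"new-visitor rate after:  {new_visitor_conversion_rate(after):.0%}")
```

Even this clean-looking comparison inherits the cookie problem noted above: a "new" visitor may simply be a return visitor whose cookies were wiped, so the segmentation is an approximation, not a guarantee.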
There are a number of things a CRO should do to measure the effectiveness of a redesign:
If an enterprise does all of these things right, it may still see only a negligible improvement in conversion, or it may fundamentally reset its baseline performance to a much higher level. Therein lies the real reason that website redesigns are disparaged by testing-only advocates: the stakes are higher, and so are the potential rewards. Taking on a full redesign becomes a question of a company's risk tolerance and how ambitious its goals are. Testing is good, and important, but Web professionals shouldn't kid themselves that they can achieve an excellent user experience by evolutionary testing alone. Sometimes they still have to redesign their site.