When an organization redesigns a website or a section of it, user satisfaction will usually drop at first. If marketers did their job correctly, however (and we will get to that shortly), the dip should be temporary, and the new scores should soon beat the old ones.
This scenario is the "who moved my cheese" phase of relaunching. Returning users will tend to hate even good changes: even if a company's old design was not optimal, it was something they were used to.
That is a tough sell for most organizations: spending months working on new layouts and features, paying designers, developers and conversion experts, and then watching user satisfaction fall as a result. (At least, initially.)
So, redesigns and new features are actions marketers should not take lightly. The counterbalance organizations need to understand is that as they run more split tests and make small optimization tweaks along the way, they will likely see diminishing returns for their efforts. Conversion improvements from A/B tests will get smaller, page changes based on survey observations will have only minor effects, and teams will eventually run into the ceiling of what that version of the design can score.
The key is to understand when marketers are approaching that limit, and to minimize the risks a redesign brings.
When do you need a redesign?
Redesigns introduce costs and risks, so before enterprises conduct them, they need to be at one of two points:
+ Scenario 1: The current design is bad enough that split tests, basic optimization from traffic data and modifications based on survey data will not make the site significantly more profitable.
+ Scenario 2: The current design is acceptable, but marketers have reached the point where the kinds of changes that used to yield significant conversion boosts no longer make a difference.
Organizations are better off with the second scenario because teams will have more time to get the design right. (It is tough to get the design right if marketers are fighting fires every day on top of having to redesign a website.)
That said, professionals usually fall back on the same metrics, mostly quantitative, to tell if they have hit one of those two scenarios:
+ Bounce Rate. Either it is terrible (scenario 1) or it will not budge any more (scenario 2). Marketers can get this metric from standard traffic monitoring tools like WebTrends, Google Analytics or Adobe Analytics. A high enough figure means most people are leaving the website without doing anything.
+ Satisfaction/Task Success Rate. If marketers have a problem with user satisfaction or task success rates, it may be time to consider a redesign. Yes, it is risky, but making only incremental changes to a site where users cannot complete their tasks is equally risky. Teams can measure satisfaction and task success rates with website survey solutions like iPerception, ForeSee or Google Consumer Surveys.
+ Conversion Rate. If conversion rates are not moving despite split or multivariate tests, it might be time to consider a redesign. Split tests are usually run with offerings including Optimizely, Google Experiments (within Google Analytics) or VWO (Visual Website Optimizer). (A rough sketch of the kind of significance check these tools perform follows this list.)
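To make the "not moving" judgment concrete, here is a minimal sketch, with made-up visitor and conversion counts, of the two-proportion z-test behind most split-test reporting. It illustrates the statistics involved, not the exact method any particular tool uses.

```python
# A minimal sketch of checking whether a split test shows a meaningful lift.
# The visitor and conversion counts below are hypothetical placeholders.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the observed lift and two-sided p-value for variant B vs. A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# Hypothetical example: 40,000 visitors per variant, a tiny difference in conversions.
lift, p = two_proportion_z_test(conv_a=1200, n_a=40000, conv_b=1230, n_b=40000)
print(f"Observed lift: {lift:.4%}, p-value: {p:.3f}")
# A p-value well above 0.05 on test after test is the pattern described in
# Scenario 2: the page has hit the ceiling for that version of the design.
```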
Thus far, the tools and data marketers are dealing with fall on the quantitative side of the table. If a team decides, based on the tools described above, that they do need a redesign, they now need to get comfortable with the qualitative side of things as well.
What do the new designs entail?
Say it has been decided that it is time for a redesign. How does one actually start? It is not like teams can fire up InDesign or Balsamiq to get a new design going and hope for the best.
Web professionals would be surprised how many organizations skip the research phase because they cannot wait to "get their hands dirty."
If teams want their redesign to have a decent shot at success, they will need a strong qualitative research phase, familiarizing themselves with and executing the following:
+ Reverse Card Sort. This is a test methodology that asks people to place terms under "buckets," which can do wonders for an information architecture. This is especially helpful for websites where part of the core complaint is that it is difficult to navigate. If marketers ask 20-30 people to sort navigation options under a core set of sections, their prototypes are far more likely to succeed.
+ Customer Interviews. Teams do not need a lot; 12-15 interviews will suffice. The idea is that if they get a few customers to talk about their specific pains, they will be more likely to create something that directly addresses those needs. Marketers should not treat this with the same statistical rigor they use for website surveys. They want a few responses that help them design something, not the statistical significance of something like an iPerception website survey.
+ Prototype Usability Tests. If marketers are short on time or customer respondents, they can combine these with the customer interviews. Three batches of tests with five people per batch will uncover most of the problems in a new design. Teams should have people perform tasks on the new interface, see whether they can complete them, and note where groups of people get stuck. (See the sketch after this list for why small batches go a long way.)
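The "five people per batch" guidance is not arbitrary: usability research (the often-cited Nielsen/Landauer model) suggests each additional tester uncovers a shrinking share of new problems. The short sketch below uses an assumed average per-tester discovery rate of about 0.31; the numbers are illustrative, not from the article.

```python
# A rough illustration of diminishing returns in usability testing: the
# Nielsen/Landauer model estimates the share of problems found after n
# testers as 1 - (1 - L)^n, where L is the probability that a single tester
# hits a given problem (assumed here to be ~0.31, a commonly cited average).
L = 0.31  # assumed average per-tester discovery rate

for n in range(1, 16):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} testers -> ~{found:.0%} of problems found")
# Around n = 5 the curve already sits near 85%, which is why short batches
# of five, with fixes between batches, are usually enough.
```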
After they have run those tests, marketers can start development on the overhaul or on the new website features. Remember, it is not expensive to fix a prototype, but it is very expensive to fix something that has already been developed, so organizations should not skimp on the research phase of a redesign.
Are you done after that?
Once the development phase completes, it is time to test a few things:
+ User Acceptance Testing. This ensures that the features marketers want are working as expected. Teams should have a checklist of functions and steps to test against, then record a pass or fail for each item before the code reaches the live version of the website. (A sketch of a simple checklist runner follows this list.)
+ Split Tests, Redux. After the redesign is complete, marketers still want those incremental gains. The idea is that they have broken through the ceiling and can start making significant gains with split tests again.
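For teams that want the user acceptance checklist to be repeatable rather than ad hoc, here is a minimal sketch. The staging URLs and expected page snippets are hypothetical placeholders; the point is simply to record the checklist as explicit pass/fail checks that run before code reaches the live site.

```python
# A minimal sketch of turning a UAT checklist into a repeatable pass/fail run.
# The URLs and expected strings below are hypothetical placeholders.
import urllib.request

CHECKLIST = [
    # (description, url, text that must appear in the response)
    ("Homepage loads", "https://staging.example.com/", "Welcome"),
    ("Contact form present", "https://staging.example.com/contact", "<form"),
    ("Pricing page loads", "https://staging.example.com/pricing", "per month"),
]

def run_checklist(items):
    failures = 0
    for description, url, expected in items:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                body = resp.read().decode("utf-8", errors="replace")
                ok = resp.status == 200 and expected in body
        except OSError:  # covers connection errors and non-2xx HTTP errors
            ok = False
        print(f"[{'PASS' if ok else 'FAIL'}] {description}")
        failures += 0 if ok else 1
    return failures

if __name__ == "__main__":
    # Exit code equals the number of failed checks, so the run can gate a release.
    raise SystemExit(run_checklist(CHECKLIST))
```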
Risky but Useful
Do not treat redesigns lightly, but remember that risk runs both ways. Sometimes, it is less risky to overhaul something than it is to lose out on the opportunity costs of doing nothing.
If you use the right tools and the right data, redesigns can be relatively low-risk, high-reward initiatives.
About the Author
Martin Greif brings 25-plus years of sales and marketing experience to SiteTuners (host of DigitalGrowth Unleashed), where he is responsible for driving revenue growth, establishing and nurturing partner relationships, and creating value for its broad customer base.