I've been reflecting on a phenomenon I've observed: a team comes up with a significantly improved design, which is then met with strong negative feedback from users. What's puzzling is that the new design is clearly far superior to the previous one, yet users react negatively to it.
Facebook, for example, recently launched a brand new design that, in my view, is clearly superior in many ways, yet I've witnessed predominantly negative feedback on it. This is despite the fact that, as I pointed out earlier in this blog, the Facebook design team sought out and received extensive user feedback on particular design issues using its own tool. Similarly, Microsoft's new ribbon user interface represented a bold attempt to redesign the Office suite. It is clearly a significantly better design, but when I use it, I often spend valuable time hunting for a function that was second nature to find in the old design. There are many more examples like this.
This seems to be happening more frequently lately. I suspect we didn't see it as much years ago, when visual and interaction design disciplines weren't as influential as they often are today. With that influence, though, comes the responsibility of determining how much improvement is just right: not too little and not too much. I'd like to propose that there is a "design delta threshold" beyond which teams shouldn't push their designs. This threshold applies mostly to products and systems used by many users whose design hasn't changed in a long time.
The best way to know whether you're exceeding the design delta threshold is probably not isolated design feedback sessions or user studies; these may not convey an overall sense of the changed user experience. An Agile development approach that delivers fully functioning subsets of the product or system and gathers user feedback on these progressive milestone versions is likely the best way to determine whether the changed design exceeds the magic threshold. Teams should also be vigilant in looking for evidence that the threshold has been exceeded and then ratchet the design back to a level users find acceptable. At times, it may take an extra release or two to complete the transformation of the design.
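To make the idea a bit more concrete, here is a minimal sketch in Python of what pacing design changes against a threshold across releases might look like. Everything here is a hypothetical illustration: the per-change "disruption" scores, the threshold value, and the function names are assumptions standing in for whatever measures a team actually derives from milestone feedback, not a real metric or tool.

```python
# Hypothetical sketch: pacing design changes across releases so that the
# cumulative change users experience per release stays under a
# "design delta threshold". All names, scores, and the threshold value
# are illustrative assumptions.

DESIGN_DELTA_THRESHOLD = 0.3  # assumed: maximum tolerable change per release (0..1)

def plan_releases(proposed_changes, threshold=DESIGN_DELTA_THRESHOLD):
    """Group proposed design changes into releases whose summed
    estimated disruption stays under the threshold."""
    releases, current, current_delta = [], [], 0.0
    for change, delta in proposed_changes:  # (change name, estimated disruption 0..1)
        if current and current_delta + delta > threshold:
            releases.append(current)        # ratchet back: defer this change to a later release
            current, current_delta = [], 0.0
        current.append(change)
        current_delta += delta
    if current:
        releases.append(current)
    return releases

if __name__ == "__main__":
    changes = [("new navigation", 0.2), ("reworked toolbar", 0.15),
               ("relocated settings", 0.1), ("new visual theme", 0.25)]
    for i, release in enumerate(plan_releases(changes), start=1):
        print(f"Release {i}: {release}")
```

The point of the sketch is simply the shape of the loop: rather than shipping the complete transformation at once, changes that would push a release past the threshold are deferred, which is the "extra release or two" mentioned above.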
Those who have been around user experience design circles for many years are, I'm sure, pleased to see designers having a significant impact, and likely welcome, as I do, the new challenge of fine-tuning the design throttle in the ways outlined here.