
May 23, 2011

Disjoint thoughts on productivity

Fried goes on to suggest that the perceived distractions of Facebook and web surfing at work are false, with "M&Ms" (managers & meetings) making up greater, involuntary, more disruptive and expensive distractions. -- http://gigaom.com/collaboration/jason-fried-why-work-doesnt-happen-at-work/

Disparate non-narrative thoughts:

I've lately been thinking a lot about designers' productivity, and my own now that I'm in a large, busy office with both lead responsibilities (talking to lots of people; making sure communication happens; the "manager's schedule") and IC responsibilities (long stretches of uninterrupted time to concentrate).

A month or two ago my manager and I cleared my schedule. I just wasn't getting work done on the big project I've been working on -- I was interrupted constantly and could rarely manage even a half hour or an hour to work, unless it was from 4pm until midnight. I now have three days a week that are blocked off from meetings. I wear huge noise-canceling earmuffs (from Peltor) and will ignore people who try to interrupt me when I'm wearing them. Some people are respectful of this -- if they're interrupting me, it's actually important. Other people prefer to transact minor information exchanges disruptively -- in person.

Best productivity book I've ever read: Never Check Email in the Morning by Julie Morgenstern.

Designers don't have a good way of sharing design work. I suspect an easy-to-use source control system (one of the biggies, plus a simple front end like TortoiseSVN) would make it easier for designers to stay in sync with constantly moving targets. I've heard Adobe is working on this, but I've generally been unhappy with their collaboration products, so I don't trust that they'll solve the problem.

If you're doing most of your work in Fireworks, you're probably not working fast enough.


May 12, 2011

Damn the metrics, full speed ahead!

A/B testing is useful, important, and a valuable part of the software production process. (Note that I didn't say part of the "design" process.) You can use A/B testing to compare two radically different versions of an idea or to optimize within a single design.
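For concreteness, here's a minimal sketch (Python, with made-up numbers) of what that comparison boils down to mechanically: did variant B's conversion rate beat variant A's by more than chance would explain? A real experiment would also worry about sample-size planning, run length, and multiple comparisons.

    # Two-sided z-test for the difference between two conversion rates.
    # All numbers here are hypothetical.
    from math import erf, sqrt

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed
        return p_a, p_b, z, p_value

    p_a, p_b, z, p = two_proportion_z(conv_a=412, n_a=10000, conv_b=451, n_b=10000)
    print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p:.3f}")

Note that a statistically significant lift only tells you which variant won, not why.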

A/B testing can move the needle a bit, and it can serve as a cover-your-ass sanity check before launching something.
But it alone won't get you to an entirely new zone of user adoption, happiness, or conversion rates. Unless you're a design thinker and understand why a particular configuration did better (or can posit a solid hypothesis), you're not going to be able to synthesize that data and jump to a new maximum. 52 Weeks of UX and Andrew Chen have some great discussions of A/B testing and the concept of a local design maximum.

Click to download high-res PNG versions: DamnTheMetrics_Pink.png | DamnTheMetrics_green.png

Google's search UI has long relied on A/B testing -- the infamous "41 shades of blue" incident being the best-known example -- for a number of reasons, including sanity-checking / CYA and the internal function of settling disputes with data rather than office politics. However -- and this, I believe, is what Doug Bowman was complaining about -- it was also because at Google, Marissa's squadron of PMs owned the interface: they made the design decisions and decided what did and did not get tested. And because these PMs largely did not have a background in design, human cognition, color theory, and what have you, they lacked a solid theoretical ground from which to narrow the possible decisions down to a couple of most reasonable options.

Frankly, any reasonably cogent designer familiar with Google's long-term usability results could have predicted the shade of blue that was eventually selected: the lightest background shade of blue.
  • An outstanding usability issue with Google's results is that many people mistake the top-of-page ads for actual search results.
  • To be fair, Google only allows an ad in the top-of-page slot if it is in fact incredibly relevant to the user's query.
  • Over time, Google's ads have become formatted more and more like its search results -- largely along typographic dimensions. As that visual treatment has converged, ad clickthrough rates have risen.
  • Banner blindness / ad blindness is a well-understood phenomenon on the internet.

Thus, it makes perfect sense that a light(1) ad background would garner higher clickthroughs -- since it makes it more difficult for users to visually differentiate an ad from a result. Reductio ad absurdum: if Google wanted VERY high clickthrough rates (short term), it would make its ads look exactly like results and intersperse them with the organic results. Of course, over time this would damage the brand's credibility, so it would be a short-term win that would eventually drive users to other services.

To make matters worse, many companies aren't tracking the right metrics. Google is good about this: they have several metrics for User Happiness. But those stats are hard to track -- my understanding is that it generally requires a lot of custom coding and log analysis. Off-the-shelf stats packages (say, Google Analytics, KISSmetrics, ClickTale) do a good job of counting simple clicks, but they won't tell you that while initial clicks went up in experiment B, users in that group dropped off after a week and were never heard from again.
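To make that gap concrete, here's a small hypothetical sketch of the kind of log analysis you'd have to do yourself: of the users in each experiment group, how many were still around a week after their first visit? (The event format, field names, and dates below are all invented.)

    # Week-one retention per experiment group, computed from raw event logs.
    # Event shape and all data are hypothetical.
    from collections import defaultdict
    from datetime import date, timedelta

    events = [  # (user_id, experiment_group, event_date)
        ("u1", "A", date(2011, 5, 1)), ("u1", "A", date(2011, 5, 9)),
        ("u2", "B", date(2011, 5, 1)),  # clicked once, never came back
        ("u3", "B", date(2011, 5, 2)), ("u3", "B", date(2011, 5, 12)),
    ]

    first_seen, last_seen, group_of = {}, {}, {}
    for user, group, day in events:
        group_of[user] = group
        first_seen[user] = min(first_seen.get(user, day), day)
        last_seen[user] = max(last_seen.get(user, day), day)

    # Per group: how many users came back at least a week after their first visit?
    totals, retained = defaultdict(int), defaultdict(int)
    for user, group in group_of.items():
        totals[group] += 1
        if last_seen[user] - first_seen[user] >= timedelta(days=7):
            retained[group] += 1

    for group in sorted(totals):
        print(f"group {group}: {retained[group]}/{totals[group]} retained past one week")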

A/B testing is useful and appropriate, but for a professional designer, relying on A/B testing to develop an interface is often like watching a 5-year-old learn to read. It's true that every design is a hypothesis and that designers don't always come up with genius ideas. And it's also true that there's a particular set of cognitive skills that lets designers skip past the queue of minute optimization experiments. It's like flying first class (or, hell, premium economy) and getting to skip the security line at the airport.

So: Yes, you could A/B test 41 different button colors, or -- because you probably have bigger things to worry about than a 0.01% uptick in conversions, and don't have an army of slave PMs and engineers to do your bidding -- you could just listen to your designer and get yourself optimizing on a much higher mountain than the one you're currently on.


(1) The lightness is specific to the dominant color scheme of the page. Really, "light" means "low contrast". On a black page, a pale blue background would divide the ads from the results and make it easier for users to visually skip over them. Since Google's page background is white, a low-contrast color is light blue.