Friday, June 29, 2007

Speedbird: "Lessons from experience design"

File this one under "Damn! I wish I were smart enough to have written this!" - Adam Greenfield has a great post about experience design on his blog, Speedbird. This opus makes me look pithy, but it's worth reading the whole thing.

The clear intention was to ensure that the customer interaction inscribed in each of these phases was designed to the same high standards as an IDEO mouse or shopping cart. But with the best of intentions, this way of thinking led Acela into error.

The assumptions embedded in the plan are too tightly coupled to one another. They feed from one to the next - remember the word - seamlessly, like brittle airline timetables so tightly scheduled that a delay anywhere in the densely-interwoven mesh of connections cascades through the entire system. When it all succeeds, it’s magnificent, but if any aspect of it fails, the whole thing falls apart.

I could almost hear the clicking sound when several ideas bouncing around my head fell into place as I read this.

Wednesday, June 27, 2007

Functioning Form: "Design for the Edges"

Functioning Form has a new entry about Design for the Edges: Managing Edge Cases. This is a fascinating topic for me, because in my opinion this is one of the core ways in which designing "enterprise software" differs from designing "consumer software". Enterprise software is not about achieving ubiquity, it's not about being "good enough", it's about satisfying the often very complex requirements for enterprises beyond what the "good enough" products can do. The edge cases aren't something to avoid... they are the core value proposition of the software. For example, Jamie Hoover in the article advises "Make your software as simple as possible. Less complexity decreases the possibility of edge cases in the first place." While this is obviously good advice in general, when it comes to most enterprise software, this is not useful whatsoever. The complexity of the system into which the enterprise software is plugging already exists. That complexity is the source of the requirements. By making the software "simpler", we'd simply be reducing the number of systems in which our software would be useful.

It's unfortunate that so much of our attention as designers has been focused on ways to simplify our own jobs: reducing the complexity of the design problem by using techniques that try to find commonality across roles and scenarios. That makes our lives easier, but it often doesn't help our end users, who experience a much richer life than the one we've forced upon them.

Tuesday, June 26, 2007

A List Apart: "Testability Costs Too Much"

Gian Sampson-Wild has penned a deliciously provocative article at A List Apart titled "Testability Costs Too Much." It is specifically about accessibility and the W3C, and asks the question - is it possible to have a good guideline that isn't testable? Sampson-Wild emphatically argues "yes". It's definitely worth a read.

But what got me thinking is how often this same question is brought up outside of the accessibility domain. Specifically, I've dealt with this in both setting usability objectives and when creating design guidelines.

It goes without saying that setting usability objectives for a product or a release is a good thing, right? I'm not convinced. Usability objectives can do as much harm as good. First, who decides whether the objectives are met? And how do they decide? Is it useful to have an objective that "The product will be easier to use than the previous release"? Probably not. But what if we operationally define the objective as "User satisfaction will increase from 3.2 to 4.0"? That seems testable. But if that is a release objective then it needs to be tested before the release is shipped. That means a late-cycle design validation. That means creating meaningful, representative scenarios for users to perform (often in a lab setting). And do you test the new features or do you use the same scenarios as the previous release to have an apples-to-apples comparison? If you just test the new stuff, is that valid? If you just test the previous scenarios, are you really testing the new release? What does it cost in time and resources to conduct the testing? If the objectives aren't met, do you delay the release? How much resource do you need to apply before you've got enough testing data to make a delay-the-release decision? Is it worth it?

Another option is to set objectives that don't require user testing. For example, you can use quantitative measures of user experience like step counts. An objective might be, "Completing this task will go from 27 steps to 10 steps." But this has a lot of problems as well. First, defining a "step" is harder than it sounds. Second, defining a "task" is harder than it sounds. Third, and I think most importantly, reducing the number of steps does NOT mean the task is more usable. You can improve usability by reducing steps, but it's not a guarantee. These quantitative measures of user experience are almost always secondary effects, which can lead to all kinds of problems. And in the end, these objectives still need to be "tested", usually by UXers, and although it is cheaper than user testing, there's still an ROI problem.

Compare this to just having a smart UXer on your team whom you trust, and asking her, "Do you think we've done what we intended to do in this release from a usability perspective?" Very high ROI.

Usability objective testing costs too much.

Now what about design guidelines? In this case, the "test" relates closely to Sampson-Wild's description "reliably human testable—which means that eight out of ten human testers must agree on whether the site passes or fails each success criterion." Now I'm going to extend this slightly to make it harder... a design guideline is testable if eight out of ten developers agree on whether the guideline has been met. In many cases, guidelines are used because UXers don't have the resources to design everything in the product, and developers forced into design work need guidelines to help them make decisions. Of course, for UXers every guideline is "It depends". "It depends" is not a good guideline to give to developers. But damn it, design is HARD and the Truth is that the right answer really does depend on a bunch of factors that don't lend themselves to pithy guidelines that a non-professional-designer can consume and understand at a shallow level.

But if your developers are doing design anyway, then fighting that reality is just tilting at windmills. Dumbing down your guidelines so that they are testable by developers is the right thing to do. In this case, I think the cost of making them testable is worth it. Coming up with guidelines that require expert interpretation when you know non-experts need to follow them is not useful, it's arrogance.

Monday, June 25, 2007

Book review: "Small is the New Big" by Seth Godin

The first question is, why would anyone buy a book that is actually a compilation of blog posts? Blog posts that are still available for free online for anyone who wants to read them. To answer that, a little background is in order.

Seth Godin is one of the new marketing gurus of the internet age, and runs an extremely popular blog (aptly titled Seth's Blog), as well as being the author of several books that extol the virtues of permission marketing and innovation. But pigeon-holing Godin as a marketer doesn't really tell the whole story. Godin is part marketer, part guru, part motivational speaker, part pundit, part critic... basically he has a strong belief in his own opinions, loves to share them, and enough people find enough of them to be insightful that an entire Seth Godin cottage industry has sprung up around them.

Since a colleague of mine pointed me to his blog a while back, I've read pretty much everything he posts. What I find most interesting is that I only find value in maybe half of what he says. Another 25% is what I would consider to be "motivational speaker crap". And the other 25% I simply disagree with. But holy cow, there's just so much of it, that even a 50% hit rate produces a lot of quality content. His book is the same way (not surprisingly, since it is from his blog originally). So why did I buy "Small is the New Big"? Basically, it was my way of supporting Seth's blog. It provided me a simple way of reading his archived blog entries in handy book form (for example, I could take it along to the pool during my kid's swim class), but mostly I just felt Godin had earned it.

Fortunately for Godin, I bought it online, because if I had picked the book up and turned it over and read, "you're smarter than they think" in big text at the top of the back cover I probably would not have been able to face the cashier at Barnes and Noble. Outside of Stuart Smalley, that's the kind of motivational speaker crap for which I have a very small tolerance. But here's the thing: I think Godin expects exactly this kind of reaction. In his introduction to the book, he says:

I guarantee you'll find some [blog entries] that don't work for you. But I'm certain that you're smart enough to recognize the stuff you've always wanted to do buried deep inside one of these riffs. And I'm betting that once you're inspired you'll actually make something happen.

Why is Godin convinced that I'm smart enough (and doggone it, people like me)? Apparently because I was smart enough to buy his book. But more importantly, Godin knows that his hit rate isn't 100%. He's fine with that. He's fine with being all over the map. But I'm also guessing that the parts that I like are not the same parts that another person would like. It's about taste, not about being right and wrong. He also hits on another point, and to his credit he's self-aware enough to recognize it - many of his posts say what people already know, but there's value in being reminded of it. Reading the book, I repeatedly had the reaction, "Yeah, that's true, I knew that... I wonder how I can apply that?"

The best thing about Godin is he makes you think. He has a knack for speaking in just enough generality that most of what he says feels applicable to your job, while not being so general that it's not useful. Even the act of disagreeing with him (like my disagreement with most of his riff on website design) is useful, because disagreement requires thought. And he's engaging enough that even disagreement feels friendly and personable. I get the feeling that I could have lunch with him and we could argue the entire time... and yet we'd both leave the meal having enjoyed ourselves with no hard feelings.

I recommend the book. You won't like all of it, but I bet by the time you finish it you'll feel it was time well spent.

Thursday, June 21, 2007

From UPA: The Future of Usability

A colleague sent me a link to the proceedings for the latest UPA conference that was held in Austin, Texas last week. One of the items was a panel discussion entitled, "Looking in the Crystal Ball: Future of Usability". I particularly enjoyed the charts by Daniel Szuc from Apogee. One of his charts asks the question, "Who/What do we want to be?" followed by these bullets:

  • User Tester v Designer (or both)
  • Closer (issues) v Opener (innovations)
  • Loner v Collaborator
  • Critic v Creator
  • Silo v Holistic

Good questions, because other than the third bullet, the answers are not clear. Well, I have clear opinions on each one, but as an industry the answers are not clear. For example, the last question might seem obvious... but one of the panel speakers (Robert Schumacher from User Centric, Inc.) warned about the danger of diversity and the cheapening of our skills by being a blend of so many different disciplines, while advocating for the need for a clear UX certification process to separate the real professionals from the, um, amateurs. Basically, Schumacher wants to create more crisply defined UX roles because he believes that the "holistic" blending of our field with every field we come into contact with cheapens our value. He has a valid point, though I think it's a bit like trying to prop up local shops by legislating against Wal-Mart - it might be a noble goal, but it's ultimately useless because it fights against reality. We need to make things work in spite of the blending of roles, because there is no alternative, IMO.

Anyway, in the spirit of the exercise, here are a few of my thoughts on the future of usability, in easy-to-read bullet form:

  • In the category of "Well, duh!", specialization within the field will increase. One specialization that I think will emerge into a common category is the "Designer Developer" - not a developer who learns some design skills, but a UXer who learns some development skills.
  • Focus on user testing will decrease over time.
  • The importance of patterns and particularly patterns-enabled development tools (basically 4GL GUI tooling) will increase. Creating good design is too hard today... there will be a lot of incentives in the near future to make good design easier to implement.
  • There's going to be a lot of drama in the community in the near future as gurus begin to really differentiate... and think the other gurus are full of crap. And say it out loud. I think it'll be a good thing for our profession, but it'll be ugly while it happens.
  • Returning to the "Well, duh!" category, the UX community is going to experience an explosion of vibrancy thanks to blogs and wonderful article-based sites like Boxes & Arrows, A List Apart, UXmatters, and others. I think there's plenty of room for growth in this area.
  • Because of the previous bullet, the importance of ridiculously expensive professional organizations who publish ridiculously expensive professional journals will decrease... though not fast enough for my taste.

Am I the only one who doesn't love the iPod UI?

I love my iPod. There's something magical about ripping several drawers full of CDs onto a device I can stick in my pocket. I even use iTunes to buy music, though not frequently.

(Side note: Hey, music industry, I have 1000s of songs on my iPod and NOT ONE of them is pirated. But I have two laptops, a desktop, two iPod shuffles, one iPod, one Disney MP3 player, and I'd love to find a car stereo that lets me store my music directly on the stereo instead of trying to remember my iPod whenever I drive somewhere. I have reason to copy my music between all these places. Stop making my life difficult.)

(Side note: Hey, Apple, that 99 cents-a-song thing was a great gimmick when you launched iTunes. But that's what it was... a gimmick. It's over, man. First of all, 99 cents per song isn't cheap. Second of all, there's no real reason why all music should cost the same amount. Here's the deal - there's a whole bunch of music on iTunes that I'd pay a quarter to buy, but I won't pay a buck to buy. For a quarter, I'll try new things. I'll experiment. I'll get some older nostalgic songs from my misspent youth. For a buck, I'll get stuff I know I'll really like, and I'll be careful. If you took your music catalog and made half of it 25 cents a pop, you'd make more money.)
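The pricing argument in that side note is easy to sketch with a little arithmetic. The purchase counts below are purely hypothetical assumptions for illustration, not real iTunes data; the point is just that a cheap tier can add revenue on top of (rather than cannibalizing) the careful full-price purchases:

```python
def revenue(tiers):
    """Sum price * expected purchases across pricing tiers."""
    return sum(price * purchases for price, purchases in tiers)

# Flat pricing: suppose a buyer makes 100 careful, "sure thing"
# purchases a year at 99 cents each.
flat = revenue([(0.99, 100)])

# Tiered pricing: the same 100 careful purchases, plus impulse and
# nostalgia buys that would only ever happen at the 25-cent price.
tiered = revenue([(0.99, 100), (0.25, 300)])

print(f"flat:   ${flat:.2f}")    # flat:   $99.00
print(f"tiered: ${tiered:.2f}")  # tiered: $174.00
```

Under these made-up numbers, the experimental quarter-a-song purchases are pure upside.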

Anyway, back to the iPod and its UI. I had heard all about how "simple" the UI was. I expected to love it. Instead, I'm a bit shocked by some of the fundamental flaws in the UI that irritate me on a regular basis. Here's my top 3:

  • Multi-modal inputs stink. It annoys me that I need to press-and-hold the Play button to turn the thing off. And it turns on when you touch it. And the screen goes dark to save energy while it's playing. So how do I know that the thing is really off? I don't. And I've repeatedly experienced times when I have shut the thing down and then stuck it in my pocket or my backpack only to discover hours or days later that it must've turned on by mistake and the battery ran down to nothing. Hey, I have an idea! How about a fricking ON/OFF switch?
  • I have a ton of "specialty" playlists that I listen to on very rare occasions. In practice, I only have 3 playlists that I listen to on a regular basis (1. "3 or more stars" which translates to "every song that I like." 2. "Relax" which translates to "mellow songs that I listen to when I'm falling asleep." 3. "Wake Up" which translates to "rocking songs that I listen to when I need an adrenaline boost.") Why can't I bookmark those 3 playlists at the top of the iPod hierarchy so I can quickly get to each of them? I configured it so that I have "Playlists" at the top of my hierarchy, but considering that I have about 50 playlists, it's still not that easy to get to the exact playlist I want using the stupid spinwheel.
  • And the biggie. There are two ways I listen to music. I either listen to a playlist or I listen to an album. When I listen to a playlist, I want it to be in random order. When I listen to an album, I want it to be in track order. Who wants to listen to "American Idiot" in random order? But changing from random to ordered is a pain on the iPod. I can understand, perhaps, why they don't want another physical control on the iPod (though it works fine on the iPod Shuffle, which has a lot less room to work with), but at least make it a toggle button at the top of the software hierarchy. It's not a "preference". It's a frequently changed setting. Don't make me dig around for it. And hey, as long as we're talking, how about having a separate setting for albums, where you default to "track order" when listening to an album? I'm sure I'm not the only person who treats an album differently than a playlist.

Wednesday, June 20, 2007

Who cares about finding new usability problems?

I interact with a lot of UXers in my company and elsewhere. Based on those interactions I have to ask, "Why do we spend so much time talking about the best way to evaluate usability or find usability problems?" It's clear that the biggest obstacles to overcome in order to deliver a usable product have very little to do with knowing what the usability problems are, and everything to do with figuring out a way to fix the usability problems you already know about.

I'm not saying that running usability tests is a worthless activity. I'm not saying we shouldn't think about running tests in new ways (particularly more efficient ways). But I think most products very quickly reach the point where they know about more usability problems than they have resources to fix. At this point the return on investment for discovering new problems is very low.

Obviously, most usability testing is done on NEW design going into a product (or website or whatever), so one might argue that in this case usability testing is still obviously needed. But again, in my experience this isn't true -- by the time the usability test is run, so many compromises have been made to meet schedules that the usability tester already knows what the user is going to complain about. It's just busywork at that point.

In my career, I've run precious few usability tests where I honestly did not know going in what the best design was, and the usability testing provided exactly the right insight that I needed to make the right design decision. I love those moments. But in my opinion they are the exception to the rule.

(Note: This opinion does not apply to user research, which I find is almost always valuable and worthwhile)

Monday, June 18, 2007

Buyer's remorse and the Sony PS3

I'm a video game junkie. For the most part, I think video game systems are really good investments. Because my favorite genre is RPGs, and good RPGs are notoriously long, all I need is one or two great games to essentially justify the purchase of the system. For me, the comparison point for entertainment value is pulp fiction. I can go on Amazon and buy a paperback pulp fiction book for about $10. For example, I'm a big fan of Charlaine Harris, and I can pop over to Amazon and buy "Dead Until Dark" for $7.99 plus shipping. It probably takes me 5 hours to read a typical Harris book, so I'm paying about $2 an hour for the entertainment of the book. This is roughly the same price I pay to watch a Netflix movie (Netflix should actually be cheaper, but I can never remember to return the damn movies after I watch them). Without concessions, going to the movies in the theater is a bit more expensive -- more like $4 an hour, but still fairly reasonable.

On the other hand, a video game system seems really expensive - we paid about $400 for our XBox360, and it didn't even include a game (or a second controller... grrr). If you include a game and a controller, the XBox360 was $500. Pricy, right? But let me pick one example from the XBox360 -- Elder Scrolls IV: Oblivion. This is one of my favorite games of all time. I am confident that I have played it for at least 200 hours. That means that if the only game I ever play on the XBox360 is Oblivion, I paid about $2.50 an hour for the entertainment value of the system. And considering that I've played many other games already on the 360 and that Fable 2 is supposed to come out this Christmas (the sequel to my favorite game ever), it's safe to say that eventually the entertainment value of the 360 will be pennies on the dollar.
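The dollars-per-hour comparison above boils down to one division; here it is as a sketch, using the rough prices and hour counts from this post:

```python
def cost_per_hour(price, hours):
    """Entertainment cost in dollars per hour."""
    return price / hours

# Rough figures from the post.
paperback = cost_per_hour(10.00, 5)    # ~$10 book, ~5 hours to read
console = cost_per_hour(500.00, 200)   # XBox360 + game, 200 hours of Oblivion

print(f"paperback: ${paperback:.2f}/hour")  # paperback: $2.00/hour
print(f"console:   ${console:.2f}/hour")    # console:   $2.50/hour
```

And every additional hour of play after that drives the console's rate below the paperback's.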

We bought the Sony PS3 at Christmas for $600 and I have yet to play a game on it. Not one. Admittedly, one of the main reasons we bought it was the Blu-Ray support (high definition DVD), and we use that occasionally, but so far there just aren't any games I'm interested in playing.

And yet I STILL did not have buyer's remorse... until last night. I had a Blu-Ray movie I wanted to watch. I fired up the PS3 and it told me there was a required System Update that I needed to install. It had told me this a couple times and I ignored it, but I decided to bite the bullet last night and just install the thing before watching my movie.

An HOUR AND A HALF later the update was still not installed, there was no progress report (the PS3 would just periodically try to reboot itself to let me know it was still doing something), I was afraid to hard-reboot it (which is not a great idea during system updates) and I gave up and went to bed. This morning I checked on it and it told me... get ready for it... that it was now ready to install the update. That's right. During the hour and a half that it was doing whatever last night, it wasn't actually installing the update... it was just getting ready to install it. And it wasn't downloading it either, because it did that first (with a progress indicator, thank goodness, though it was a fake progress indicator) and we have broadband.

I wanted to throw the thing out the window.

What is Sony thinking?

Thursday, June 14, 2007

The user testing rope-a-dope

It's important to distinguish between situations when a development team is not implementing your design because they don't have the resources to do it and when they are not implementing your design because they disagree with it (or, perhaps, think that design doesn't matter). Obviously the correct response to these situations is different. The trouble is that sometimes it's hard to tell the difference between them, because developers sometimes use resources as an excuse to not do designs that they really don't believe in.

I was talking to a colleague yesterday who was having an issue with her development team. The developers on the project had created a design that (I'm not making this up) utilized multiple levels of tabs, as well as a bizarrely placed "action button" that served as essentially a menu item. She provided an alternative design that fixed these problems. The response? The developers want her to run a series of usability sessions to verify with users that their original design is actually a problem, because they are short on resources and don't want to fix this if it isn't necessary. I call this the user testing rope-a-dope. Amazingly, this meeting happened after I wrote yesterday's blog entry about most design issues not requiring user input... using tabs-within-tabs as an example! We don't need to talk to users to know this is bad design. Heck, if the users told us they liked the tabs-within-tabs, we'd ignore them as statistical anomalies. The developers are trying to use user testing as a way to delay the decision long enough to make changing the design impossible.

It's amazing to me that a team would request UX support and then not listen to the UXer's guidance on design.

Wednesday, June 13, 2007

Is there such a thing as too much technical knowledge?

When I started my career in User Experience, back before it was called User Experience, I proudly avoided becoming too advanced in my technical knowledge of the product I was supporting. There was a perception at the time that if a UXer became a domain expert they'd lose touch with how users perceive the product (or at least novice users). We even coined a term for when a UXer would get a little too close to the development team -- "going native". If we caught a UXer saying things like, "Well, if users don't understand shell scripts, they shouldn't be using the product", they were quickly admonished for going native.

Obviously every UXer had to have some domain knowledge, but there was a largely unspoken agreement that we should avoid going too far with it. Bear in mind, when it comes to enterprise software, technical domain mastery is very expensive. For example, my first job was working on DB2 on the mainframe... becoming a domain expert could take a decade. But the issue wasn't laziness, it was philosophical -- a UXer should be an expert in design and usability methodologies, not in whatever product domain they happen to be supporting. AND not only is domain knowledge not necessary, it could also be dangerous, because it could cause the UXer to not see usability problems that a non-expert would encounter.

I now think I was completely wrong.

Well, not completely wrong. I still believe that many design issues require no domain knowledge to recognize and fix, and most product designs are so bad that a decent UXer can spend many releases just trying to fix standard design problems without ever needing more in-depth knowledge about the product. In other words, I don't need to know anything about a product to point out that the developer's grandiose tabs-within-tabs-within-tabs design might be flawed.

But at some point you start to hit the really tough design decisions, where the answers are not obvious and it becomes unreasonable and expensive to always address those questions with "let's run another usability test." This is where the intersection of design abilities and technical domain knowledge becomes truly valuable. The technical architects have the domain knowledge and they care about the users, but they (usually) aren't design experts, and unlike the UX professional, they have many competing incentives. The UXer only cares about the user.

I now believe that technical domain knowledge is one of the most important tools in a UXer's toolbox, and should be pursued with vigor.

Monday, June 11, 2007

Book Review: "The Myths of Innovation" by Scott Berkun

Have you ever been to a party and met someone with a great job and a great sense of humor and ended up spending the entire party drinking beer and swapping interesting stories? That's what Scott Berkun's new book, "The Myths of Innovation", felt like to me. There are lots of books on my shelf that I know I ought to read, and many of them I struggle through and afterwards feel like it was a valuable investment of my time, however painful. This wasn't one of them - this is one of those rare books that feels like reading for pleasure, and yet you learn something along the way.

And I might add that the colophon alone is worth the price of the book (a sentence that perhaps has never been written).

I wonder how much time and research Berkun did on this book before he came up with the idea of orienting the book around myths? Was that the idea all along? Or did it emerge over time? Because it turns out to be a perfect way of presenting the material. First, everyone loves to feel like they know something that other people don't - the truth behind the myths. This "peeking behind the curtain" approach is a great way to keep the material interesting. Second, innovation is such a complex area that it would be very difficult to write a book about what innovation is -- it's a lot easier to talk about what it isn't. But by providing the boundaries via the myths, it inevitably provides great insight into how innovation really happens. And third, myth debunking seems to fit Berkun's auctorial voice. His casual, conversational tone is not only funny and engaging, but it naturally allows the type of speculation and interpretation that is necessary for the topic. In other words, a textbook-style examination of innovation would be a very poor choice.

While I enjoyed the entire book, I particularly enjoyed the section on the myth of "the best idea wins". In it, Berkun describes the many factors that are involved in whether an innovation succeeds, and how being the "best" is only one of many factors. When it comes to design innovation in established software, the impact of "dominant design" is always a challenge - what is the cost of moving to something better when you have a large customer base who already knows how to use the product? One example in the book is the QWERTY keyboard that we all know and loathe. But to a lesser degree this is always the case - I can't convince my wife to move from Paint to Photoshop for editing pictures because she knows how to use Paint. Whenever I try to tell her about how many great features there are in Photoshop, all she hears is "blah... blah... blah... [it will take lots of time to learn]... blah... blah... blah."

I recommend this book highly to anyone who has a job where innovation matters... which is just about everyone.

Friday, June 8, 2007

The Joy of Grids

Yet another reason why the internet is a wonderful thing. I found this presentation by Khoi Vinh on his Subtraction blog. It's a walkthrough for how to design web pages using grids, and it's a great tutorial for the uninitiated.

What did people do before the internet? Talk to each other? *shudder*

Thursday, June 7, 2007

Agile development and user experience

There have been a couple of recent articles on agile development and UX:
Four Factors of Agile UX (on UXMatters)
Lessons from Google Mobile (on Boxes and Arrows)

I think this is an interesting topic. I have a colleague working in an agile development team, and it has definitely been a mixed blessing. On the one hand, the frequent iterations allow frequent opportunities to fix problems. On the other hand, the frequent iterations make it difficult to fix BIG problems, particularly if the big problems require significant user testing to find the solution.

Here's what I think are some basic guidelines for how to do UX well in an agile environment:
1. Make sure the frequent iterations are available to users as beta code - this will likely be the best opportunity to get user feedback, even if it is informal.
2. Focus more on the "professional designer" role rather than "usability tester" role - most design problems do not need user feedback, they just need a smart designer.
3. Embrace the positives, accept the negatives - if the process does not allow big problems to be fixed, then focus on fixing the little things rather than tilting at windmills.
4. Set a user experience vision - it's easy to miss the forest for the trees in agile teams, so make sure that you're constantly evolving towards a well-understood goal, rather than constantly playing whack-a-mole with whatever the design defect du jour is.

Friday, June 1, 2007

SaaS as a user experience tool

Software as a Service (SaaS) is an intriguing way to solve the problem I describe below. First, one of the reasons enterprise users want to avoid change is the cost of deploying the changes. SaaS allows a product to make changes without any deployment cost. Next, it allows incremental improvements to be made to the software. One of the great things about websites is the ability to make small, frequent changes. These small changes are not disruptive to regular users, yet over time they allow substantial improvements to be made. Application middleware that requires user installation and maintenance doesn't inherit this benefit. But SaaS can solve this problem - buying the middleware doesn't mean you have to take it and install it and host it and maintain it... it just means you have access to it.

There are problems with this approach, of course. The biggest of which, to me, is the ability to extend a product with third-party software, which is critical in building a viable ecosystem for middleware. How do you extend something that you don't host yourself? This is tricky, but it's solvable.

Although SaaS is a new buzzword, it's not a new concept. But given the amount of time and expense that customers spend on installing and maintaining middleware, not to mention the user experience benefits, it might be useful to push the boundaries of SaaS to include traditional middleware offerings.

On release cycles, change, and stability

Users hate change. Once they invest in learning a product, they don't want it to change, even if the changes are theoretically for the better. In enterprise software, they are particularly interested in the stability of the code base AND the stability of training. If they have a large organization trained on one version of a product and we tell them, "We've changed everything in the new version! It's way better!!", they do not get excited. They get nervous and start calculating migration costs. Users want to settle on a stable release of the product and stay there until something happens that forces them to move (like the product going End of Service). Users tell us to stop putting out new releases... slow down... extend the release cycles. Users hate change.

Users want their new features added to the product, and they want them added NOW. Sure, the product is pretty good, but without features X, Y, and Z, it simply isn't good enough. And with enterprise software, when a customer tells us they want X, Y, and Z, chances are they are paying a LOT of money for the software, so we damn well better listen. Of course, each customer has a different list of what X, Y, and Z are, they don't care about the other customers' requirements, and they hate unnecessary change. "Unnecessary change" means "changes that my company doesn't need." Users do not want to wait for their favorite features to be added.

This dilemma is particularly difficult for user experience practitioners. Customers don't want to retrain, yet we've only got, perhaps, one opportunity every two years (depending on the length of the release cycle) to make improvements in the product. By essentially saving up all the improvements for two years we inevitably introduce fairly major changes with each major release - which is bigger change than customers want AND slower change than customers want.

What's the solution? It's possible that the answer is "there isn't one", but I'm wondering whether Software as a Service isn't a pretty good way of getting around the problem. More on that in my next post.