Review: Understanding Interactivity (book)
Developer: Chris Crawford
Trial: Chapters 1-5 available on the Web site.
Let me be totally up front with you: I want you to read this book. Understanding Interactivity is about just what the title leads you to believe. It gets you thinking about what interactivity is, how it should be used, and how little it is used to good effect in most programs and Web sites that you see today. Rather than being a programming book that tells you how to write code, it tries to give you an idea of how programs should work in order to give users an enjoyable and productive experience.
At times the book seems almost philosophical, examining ideas behind interactivity, how media over the course of time has become increasingly less interactive, and the value of play. Where it points out specific examples of things that need improvement, it does so to illustrate a bigger point—dare I say it—to try to encourage programmers to think about their users as they write their programs. If you’d like a solid understanding of what makes a program seem good or bad, this book is for you.
As I read Understanding Interactivity, I often caught myself saying, “Yeah, I know that already”; at times, the book seems to be preaching common sense. Unfortunately, the sense it preaches doesn’t seem particularly common in the real world.
To take an example used in the book, most of the time when you want to print a document, you want one copy of it. Nothing fancy. But you hit Command-P, and instead of printing, you get a dialog box full of options—options that nine times out of ten you don’t want. Wouldn’t it make more sense for Command-P to actually print one copy, and some other combination, Command-Shift-P perhaps, to bring up the dialog box?
One of the astonishing things about bad interactivity is how easy it is to fix, in many instances. In the case of this example, some programs have taken a step in the right direction by adding Command-Shift-P to “Print One Copy” (leaving Command-P to bring up the dialog box), so at least you have the option to print quickly. But wouldn’t it make even more sense for the most commonly desired action to be activated by the simpler key sequence? Programs get it right with the “Save” command: Command-S saves the document, and no dialog box comes up as long as the document you’re saving already has a name. If you want the dialog box, you can do a “Save As…” from the File menu or with a more complicated key sequence.
Tempting as it must have been to write a book chock full of examples where user interfaces need help, that’s not what Crawford did. There is one chapter, “Bloopers,” which does that. Both entertaining and instructive, it comes about a third of the way through the book, offering a sort of intermission from weightier material, as well as practical examples of interactivity gone wrong. Such examples can be found scattered about in other parts of the book, especially in the earlier chapters, before things start getting too theoretical, but they’re always used to illustrate a point, or to provide a framework for discussing a particular aspect of interactivity.
So What’s the Problem?
All of us, as users of computer programs, visitors to Web sites, etc., have an idea of what constitutes good versus bad interactivity. When the Mac refuses to empty the trash because “an error of type -110 occurred,” we are uniformly annoyed at the worthlessness of the error message. It doesn’t explain what’s wrong or offer up a solution. (I finally gave up and restarted.) Wouldn’t it be a great improvement if the error message said explicitly, “Sorry, there’s a problem. You’ll have to restart before the trash can be emptied”? That sort of error message would surely cut down on tech support calls from confused users.
Crawford doesn’t just recommend that error messages be worded more clearly to give the user useful information. He goes to the root of the problem: programmers are programmers, not interactivity designers. They’re perfectly happy with an error message giving a number designating the kind of error. They know what it means, and can track down the problem or respond accordingly. Maybe they think the problem won’t occur, and so they don’t see a need to bother with a user-readable error message. Programmers know what their program is designed to do and what its limitations are, and they accept those limitations without thinking about it.
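Crawford’s point about the gap between programmer-facing and user-facing errors can be sketched in a few lines of code. This is a hypothetical illustration, not from the book or from any real system: a lookup table that translates a raw error code into a message telling the user what to do, falling back to the familiar useless default when no translation exists.

```python
# Hypothetical sketch: mapping programmer-facing error codes to
# user-facing messages. The codes and wording are illustrative only
# (apart from the Mac's -110 example quoted in the review).

USER_MESSAGES = {
    -110: "Sorry, there's a problem. You'll have to restart "
          "before the trash can be emptied.",
    -34: "The disk is full. Free up some space and try again.",
}

def explain(error_code: int) -> str:
    """Prefer a plain-language message; fall back to the raw code."""
    return USER_MESSAGES.get(
        error_code,
        # The bad old default: meaningful to the programmer, useless
        # to the user.
        f"An error of type {error_code} occurred.",
    )
```

The design point is Crawford’s: the numeric code is fine as an internal detail, but the string that reaches the user should tell them what to do next.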
Users, on the other hand, aren’t nearly as intimately familiar with the programs they interact with. The result is that users see problems that programmers never encounter and that, even if they did, they wouldn’t think problematic. There are a few solutions to this kind of problem. One of Crawford’s favorites: learn to program. This is admittedly a long-term solution, but people with some artistic sensibilities, who understand that a good program is more than a haphazard collection of technically cool features, would do well to start learning to design programs themselves, programs that work as they should. In a lot of cases, such people end up as shareware authors: programming isn’t their main job, but they do it astoundingly well, because they put a lot of thought into their work.
Which brings me to another suggestion offered in the book. A program goes from development to alpha to beta to sale. The beta stage is used to work out bugs, either in-house or by letting users mess around with the program and give feedback. But usually, feedback about anything other than bugs is ignored: the company isn’t interested in hearing that it’d be nice if you could skip the dialog box when you want to print a document, or that an error message is worded in a confusing way. Fixing such problems would delay the product getting to market, and hey, there’s already an error message there, so what’s the problem? The beta is feature-complete as it is; making the Print command easier to use would be a new feature.
Crawford suggests that software companies start paying attention to these kinds of comments, either during the beta period or in a separate stage after the bugs have been squashed, when programmers are open to last-minute tweaks of the user interface before the product goes to market.
So What Is Interactivity, Anyway?
Hey, my degree is in philosophy. If I’m going to read a book about a vague computer term that I’ve seen used in all kinds of different ways in countless different contexts, I want a definition of the word—clear, robust, and precise—preferably with references to Kantian metaphysics.
The definition of interactivity given here, however, is simple and straightforward, a nice little sound bite of a definition, bolstered later in the book by discussion and examples, but left somewhat hazy around the edges. I’m going to take it on, but first let me point out explicitly what’s going on here: I started out with only a vague idea of what interactivity is and a handful of dislikes about how programs work; then I read Understanding Interactivity, and now I’m thinking about how to come up with a better definition of interactivity. This is a good thing: whether or not I (or you) fall to my knees and praise Crawford’s definition, the point is that both users and programmers should be thinking about this term, because we are all heavily affected by it.
“Interaction: a cyclic process in which two actors alternately listen, think, and speak.”
Sounds good to me. It uses metaphor, and the metaphor comes from the earliest form of interactivity: two human beings having a face-to-face verbal discussion. Ideally, a computer should respond to us as well as a person would, and that expectation is stressed by saying “listen” rather than, say, “accept input.” Crawford simply points out that a computer listens through the keyboard, speaks through the monitor, and thinks with the CPU. The problem, of course, is the gray areas. When you open your refrigerator door and the light goes on, are you interacting? Well, technically yes, Crawford admits, but not in any particularly interesting way. There isn’t much thinking (“delivered content”) going on there on the part of the refrigerator.
Trying to avoid the stigma of having presented a subjective definition, Crawford introduces the idea of degrees of interactivity. That seems on the face of it to work well, but it still leaves the definition subjective: my opinion of whether something represents relatively high interactivity may differ from yours. Let’s muddy the waters a bit with an example from the book. “To a child, a bouncing ball appears to possess its own free will, and therefore in the child’s mind, the ball is an active agent. To an adult, the same ball is merely a rubber sphere, obeying simple laws of physics, and therefore the adult sees no active agent in it.” This places the entire question of whether something is interactive, at all, in the mind of the interactor.
The objection that leapt into my mind when I first read this was, well, to a really good computer programmer who knows his way around hardware and software, a computer program is just a bunch of electrons jumping around according to laws of physics. Clearly we can’t place the question of whether something is interactive in the mind of the beholder. It would also be really nice, and useful towards a real discussion of interactivity, if we could eliminate from consideration the extremely uninteresting cases of stimulus and response. A more objective and picky definition is required. To say that a ball obeying the laws of gravity is “thinking,” even metaphorically, even on a really low level, is a pretty long stretch.
So where do you draw the line between where the metaphor works and where it doesn’t, if you want to do so objectively? I’d propose this: for an agent to be an interactive agent, it must be capable of interacting with itself as well as with another agent. That clearly rules out the ball. It still allows a computer to be an interactive agent: when you tell the computer to save a file, it asks itself, is the file already saved? If so, where and what is it called? What do I do with all these new bytes of data? If not, I need to get a name and a place to put it from the user. In other words, the computer does something that might acceptably be called “thinking,” at least metaphorically. You can interact with animals: we’ve all seen them do things on their own. But you cannot interact with a stuffed animal, even one that says a random phrase when you pull a string. Even if you believe in your heart of hearts that Teddy Ruxpin is alive, Mommy, he really is. What this does, then, is rule out certain examples of what Crawford would consider extremely low-level interactivity, the uninteresting, unmeaty stuff, from consideration. It draws a line in the sand, insisting that a toy company may not market its basketball as “interactive.”
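The computer’s “asking itself” in the save example can be sketched as ordinary branching logic. This is a hypothetical minimal sketch, not any real program’s Save command: the `Document` class and the prompt callback are my own inventions, standing in for an editor and its Save dialog.

```python
from typing import Callable, Optional

class Document:
    """A hypothetical document: `path` is None until the first save."""
    def __init__(self, text: str, path: Optional[str] = None):
        self.text = text
        self.path = path

def save(doc: Document, ask_user_for_path: Callable[[], str]) -> str:
    # The computer "asks itself": has this file been saved before?
    if doc.path is None:
        # No name yet, so it must talk to the user (the Save dialog).
        doc.path = ask_user_for_path()
    # A named document is written straight to disk, no dialog needed,
    # just as Command-S works in a well-behaved program.
    with open(doc.path, "w") as f:
        f.write(doc.text)
    return doc.path

doc = Document("hello")
first = save(doc, lambda: "untitled.txt")   # prompts for a name
second = save(doc, lambda: "never-asked")   # resaves silently
```

On the second call the prompt callback is never invoked: the self-interrogation has already been answered, which is exactly the kind of internal decision-making a bouncing ball cannot perform.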
There may be some flaws in this add-on of mine to what can constitute an active agent, and I’d love to see some discussion on the matter. But what it does do, I think, is make the question of whether something is interactive an objective one. It also suggests a partial criterion for deciding whether the remaining interactions are high- or low-level: how much self-interaction is going on in the minds of the two interacting agents? (I’ll spare you my argument as to why this isn’t circular.) I say partial because good interactive design requires good speaking and listening on the parts of the agents as well; thinking alone won’t cut it.
I’ve only touched on a few of the issues discussed in Understanding Interactivity. It is an insightful and thorough book that makes many observations and suggestions from which software and Web site designers would benefit greatly. Ordinary computer users are encouraged to think about what makes a program enjoyable versus bothersome to use, and are given an idea of where the problems come from. While some of the book, towards the end, gets fairly abstract, it is very readable throughout, written in an enjoyable style that makes it engaging and understandable to a reader without programming experience.
If Crawford is right (and I’m convinced he is) that interactivity is “the essence of the computer revolution,” then this book deserves a place on the bookshelves of all us revolutionaries.