In the mid-to-late '90s, something was spreading across the internet like herpes. It promised freedom from the tyranny of table-based layouts, rich animations, vector graphics that could scale to any size, and pixel-perfect reproduction on any machine, regardless of browser, OS, or platform.
That infection was (and is) Flash (now Adobe's, then Macromedia's, and originally FutureSplash).
What made this technology so appealing to designers was the promise of complete and utter control over the presentation of their designs. No more worrying about how IE4 would render that table versus Netscape 3. No more sticking with Arial, Times New Roman, and Comic Sans. Build your Flash file at 400x600 and everything will always be exactly where you want it. But more than that, you are free to completely re-imagine the entire concept of web navigation. Forget about the back button, forget about users deep-linking to a specific page: your website is now a black box within which you, the designer, are god - usability be damned. To paraphrase Jeff Goldblum in Jurassic Park, we were so preoccupied with whether we could that we didn't stop to think about whether we SHOULD.
With the growing ubiquity of touch-based interfaces, we're seeing the first real paradigm shift in user interfaces since Steve Jobs visited Xerox PARC back in 1979. While Flash taught us that interfaces could be fluid, living, changing things, touch is teaching us new lessons.
What makes touch such an interesting development is where it's primarily being used: mobile devices. In the mouse-and-cursor world, the interface can do anything, as long as it can be manipulated by a single point traveling across the screen. Those who carry that thinking into the touch world do so at their peril. Sure, there will always be software that needs only a series of clicks (now taps) to function, but in the mobile world, even those are multi-touch apps.
Why? Because possibly more important than incorporating more than one finger on the screen is remembering a touch point that many seem to forget: the hand holding the device. On smartphone handsets, where you can effectively hold the device in one hand and operate it with the thumb of that same hand, this is less of an issue than it is on the new, larger devices like the iPad and Galaxy Tab. On those devices, planning for how users will hold them in physical space is non-trivial.
The quintessential multi-touch experience on the iPad is Uzu, a particle/acid-trip generator that can track all ten fingers simultaneously. Obviously, if you're using the app by yourself, the only way to go two-handed is to lay the device on your lap or a table. Once you do, its two-handed nature is a wonder to behold. Yet as fun as it is to play with, it becomes awkward when there's no convenient place to set the iPad down. This becomes even more apparent if you try to thumb-type while holding an iPad in landscape orientation.
Then look at a game like Snood, which has historically been played with everything from controllers to mice and keyboards. The touch-and-drag mechanic works for aiming, but firing requires you to tap directly on the cannon. During development, the assumption was probably that most people would hold the device in one hand and manipulate the game with the other. In practice, I've found that firing with an index finger is far less accurate than firing with a thumb. Why? Because when the device is held as you see in the second photo below, the thumb is anchored to it, while an index finger essentially floats over it. As you move in to tap, your aim can drift, and you tap (or even tap-and-drag) in a way you didn't intend. Most attribute this to some sort of "fat finger syndrome". Another way to put it is that touch interfaces have no state: when you stop moving a mouse, the cursor stays where you left it; when you finish a tap, the cursor disappears (if it ever existed in the first place).
I often play simple games like Snood while I "watch" TV, and I can tell you, holding the device like this for an hour leads to quite the cramp in my "firing hand". The designers of Snood probably don't think of their game as "multi-touch", and that's why it's a game I can only play in short bursts. They've forgotten (or failed to learn) that in the world of mobile devices, EVERY app is a multi-touch app.
Congratulations - you are now a hardware designer
What this all means for the future of software interface design is that the lines between software and hardware are going to become VERY blurry. The world of Flash began to teach us that just because you CAN put the navigation in a spiral around the center of the screen doesn't mean you should. Similarly, the touch world is beginning to teach us that EVERY piece of software is multi-touch, even if it's just a series of single taps, because the hand holding the device is just another touch point.
This is why it's so awkward to do full typing on the iPad. Apple (paragons of usability though they may sometimes be) completely failed to plan for MOBILE typing on their MOBILE device. When it came time to tackle typing, maybe in an effort to avoid the "big iPod touch" moniker, maybe because it just didn't occur to them, they threw out everything they had learned about thumb typing from the iPhone and instead tried to build a touch-based laptop keyboard. If you're in portrait orientation and need to type something on your iPad, your options are simple: double the length of your thumbs, find a table, or contort your body into what I call the "iPad crunch" (knees together, back hunched - see below).
In a world where the software designer has planned for the hardware, you instead get something like this:
As we move into 2011, there will undoubtedly be a number of cool innovations in the multi-touch space. But the most important innovation has already happened, and it's simply time for everyone involved in interface design to remember -
Every App is Multi-Touch.