Couplett 1 of 3: Conception and Planning (and delays)

Couplett is finally coming out tomorrow, February 8th. For those who are interested, I thought I'd give a little insight into the year-long process that has brought Couplett to the App Store. A whole year, you say? Indeed. So how does such a simple concept take a year to come to fruition? Read on to find out.

In January 2011, I was having lunch with my daughter. We were having one of our favorites, Panera mac and cheese, while taking a break from some errands. I find that my favorite times with my children are when I get to spend time with them individually. My daughter had requested “the mac and cheese restaurant” and I was happy to oblige.

This being the age of cell phone cameras, I was taken by an impulse we all seem to have from time to time: “I want to capture this moment.” So there she was, across the table from me, and I began the typical dance:

  • Launch the camera app
  • Lean over the table
  • Extend my arm
  • Realize that I’m not sure of the framing
  • Weigh the option of using the front-facing camera despite its lower quality...

Wait a second.

Why am I switching cameras? This device has two cameras. If I could just drop the phone between us and use both cameras, that would be so much simpler!

Thus began an App Store search and I found that there wasn’t much available (or my search skills weren’t up to the task). I filed the idea away and set a RE.minder to talk to John about it on Monday.

After relaying the idea, John started playing around with the camera system and we made some interesting discoveries, the most important of which is that, on the iPhone 4, there is no way to have both cameras active at the same time. No matter how we sliced it, the best we could hope for was two images, captured one immediately after the other. We took this prototype with us to WWDC 2011 and confirmed this with Apple. Barring a redesign of the underlying camera system (or a significant change deep within the OS), we could not have both cameras active at once.

Armed with this knowledge, it was time to dive into active development. Oh if only.

After WWDC in June, we were knee-deep in the development of Uncle Slam. In addition, we now had a bright and shiny (if borderline alpha) new iOS version to contend with. But as we all know, Apple is never exactly forthcoming with projected release dates. Could Couplett be iOS 5 only? Should we shift Uncle Slam's focus to iOS 5? Decisions, decisions.

(read part 2 and part 3)

Every App is Multi-touch (even if it's not)

Back then…

In the mid to late 90's, there was something spreading across the internet like herpes. It promised freedom from the tyranny of table-based layouts, rich animations, vector graphics that could scale to any size and pixel perfect reproduction on any machine, regardless of browser, OS or platform.

That infection was (and is) Flash (now Adobe's, then Macromedia's, originally FutureSplash).

What made this technology so appealing to designers was the promise that they could have complete and utter control over the presentation of their designs. No more worrying about how IE4 would render that table versus Netscape. No more sticking with Arial, Times New Roman and Comic Sans. Build your Flash file at 400x600 and everything will always be exactly where you want it. But more than that, you are free to completely re-imagine the entire concept of web navigation. Forget about that back button, forget about users deep-linking to a specific page; your website is now a black box within which you, the designer, are god - usability be damned. In the immortal words of Jeff Goldblum in Jurassic Park, "We were so busy figuring out if we could, we didn't stop to think about whether we SHOULD."

As with most new technologies, it took some time for people to learn what Flash was good at and what it wasn't, when to use it and when it was overkill, and probably most importantly, WHY to use Flash (some are still fighting to learn this lesson). Flash brought a bunch of new functionality to interface design. For instance, JavaScript offered rollovers, but now Flash could give you animated rollovers with dynamic hit areas. What this meant for the overall goal of usable interfaces is still up for debate, but one thing that DIDN'T change through this r/evolution was the method of interaction - an onscreen cursor, driven by a mouse.


With the growing ubiquity of touch-based interfaces, we're seeing the first real paradigm shift in user interfaces since Steve Jobs visited Xerox PARC back in 1979. While Flash helped us learn that interfaces could be fluid, living and changing things, touch is teaching us new lessons.

What makes touch such an interesting development is where it's being used primarily - mobile devices. In the mouse and cursor world, the interface can do anything, as long as it can be manipulated with a single point traveling across the screen. Those who maintain this thinking moving into the touch world do so at their peril. Sure, there will always be software that just needs a series of clicks (now taps) to function, but in the mobile world, those too are multi-touch apps.

Why? Because possibly more important than simply incorporating more than one finger on the screen is remembering a touch point that many seem to forget - the hand holding the device. On smartphone handsets, where it's possible to effectively hold the device in one hand and operate it with the thumb of that same hand, this is less of an issue than it is with the new, larger devices like the iPad and Galaxy Tab. On these devices, it's non-trivial to plan for how users will hold it in physical space.

The quintessential multi-touch experience for the iPad is Uzu, a particle/acid trip generator that can track all 10 fingers simultaneously. Obviously, if you are using this app by yourself, the only way to do so is to lay it on your lap or a table. Once you do so, its two-handed nature is a wonder to behold. Yet as fun as it is to play with, it can be awkward if there's no convenient place to lay it down. This becomes even more apparent if you try to thumb-type while holding an iPad in landscape orientation.

Then look at a game like Snood, one that has historically used interfaces from controllers to mice and keyboards. The touch-and-drag mechanic works for aiming, but the firing mechanic requires you to tap directly on the cannon. During development, it was probably assumed that most people would hold the device with one hand and manipulate the game with the other. But in practice, I have found that firing with an index finger is far less accurate than with a thumb. Why? Because when the device is held as you see in the second photo below, the thumb is anchored to it. An index finger is essentially floating over the device. As you then move in to tap, your aim can shift and you tap (or even tap and drag) in a way you didn't intend. Most attribute this to some sort of "fat finger syndrome". Another way to say this is that touch interfaces have no state. When you stop moving a mouse, the cursor stays where you left it. When you finish a tap, the cursor disappears (if it ever existed in the first place).

I often play simple games like Snood while I "watch" TV and I can tell you, holding the device like this for an hour leads to quite the cramp in my "firing hand". The designers of Snood probably don't think of that game as "multi-touch" and that is why it's a game I can only play in short bursts. They've forgotten (or failed to learn) that in the world of mobile devices, EVERY app is a multi-touch app.

Congratulations - you are now a hardware designer

What this all means for the future of software interface design is that the lines between software and hardware are going to become VERY blurry. The world of flash began to teach us that just because you CAN put the navigation in a spiral around the center of the screen, that doesn't mean you should. Similarly, the touch world is beginning to teach us that EVERY piece of software is multi-touch, even if it's just a series of single taps because the hand holding the device is just another touch point.

This is why it's so awkward to do full typing on the iPad. Apple (paragons of usability though they may sometimes be) completely failed to plan for MOBILE typing on their MOBILE device. When it came time to tackle typing, maybe in an effort to avoid the "big iPod touch" moniker, maybe because it just didn't occur to them, they completely threw out everything they learned about thumb typing from the iPhone and instead tried to build a touch-based laptop keyboard. If you are in portrait and need to type something on your iPad, your options are simple: double the length of your thumbs, find a table, or contort your body into what I call the "iPad crunch" (knees together, back hunched; see below).

In a world where the software designer has planned for the hardware, you instead get something like this:

As we move into 2011, there will undoubtedly be a number of cool innovations in the multi-touch space. But the most important innovation has already happened, and it's simply time for everyone involved in interface design to remember -

Every App is Multi-Touch.

Redefining productivity

I used to work exclusively as a server in restaurants. It's pretty easy to define productivity in such a setting: if you have a second to breathe, you're not being as productive as you can be. As a server, your job is to be the conduit between the kitchen and the floor. The kitchen can make food at a certain rate, and you need to make sure that the kitchen is fed orders at a rate it can keep up with, and then deliver the food at a rate that allows people to enjoy their meal without feeling rushed.

When I started a desk job, productivity began to be measured in output: the amount of "stuff" that could be held in your hand, printed out, emailed to a client or posted to a webpage. Without "something to show for it", you might as well have been watching YouTube for 8 hours. After doing that for years, I found I was having a hard time redefining productivity from the point of view of an entrepreneur.

In my new role, productivity has a whole different meaning. You see, EVERYTHING I do now can be made to be productive, depending on how you look at it. I spend at least an hour every morning just reading: following links from Twitter streams, seeing what's being dugg, reading press releases, etc. Do I have any "output" after this time? Maybe not. But as the "head cheese" or "frontman", I have to measure productivity in more than just "product". Maybe what I've produced is the knowledge that a competitor is going to beat us to market. Maybe I've discovered a new market and I'm going to produce a brief for a new app that might serve it.

One thing that does keep me grounded, though, is the fact that I still take some time each day to "produce" in the more traditional sense. I make graphics, set up UIs, build models, etc. Having some time each day dedicated to "something to show for it" style productivity keeps me from losing my mind during the rest of my "non-traditional productivity" time.

What is good UI?

While most of the information garnered at WWDC is strictly confidential, one thing I can talk about is the fact that Handelabra got a chance to sit down with some Apple interface designers to go over our apps from a usability and UI point of view. Specifically with regards to StyleAssist, some of the insights reaped were terrific.

What we've constantly struggled with while building StyleAssist is the balance between design and usability. The simple fact is that making a thing supremely usable oftentimes means making it ugly. This is a personal opinion and I know there are MANY people who would disagree with me. Yes, usability offers its own beauty, but StyleAssist is meant to appeal to a certain type of person - someone who appreciates a certain amount of style (it's right there in the name, after all). That being said, the one resounding insight that came from our session was this:

More pictures, less UI.

That spoke to the simple brilliance in this quote:

"Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away." -Antoine de Saint-Exupéry

It's a constant push and pull between usability, design, simplicity and power. The goal is finding the right amount of power to make a thing worth using while still being simple enough to remain usable, but at the same time not ugly. It's a tough row to hoe, and it was priceless getting another set of eyes to look at it - specifically, eyes that have done it before, for a company that oftentimes values usability, design and simplicity over power.

What we needed to do was find the "center" of the app and let the UI design itself.  In our case, the center of the app is pictures.  All we needed to do then was find the smallest amount of UI that would facilitate the functionality we wanted to accomplish.  Only then did we start to "design" those UI elements.  The new look of StyleAssist is, to me, a lot more focused and, hopefully, easy to use.

We'll see when we launch it later this summer!