Steven Hoober is the co-author of Designing Mobile Interfaces: Patterns for Interaction Design. He is also President of Design at 4ourth, a design studio focused on mobile design.
We interviewed him about mobile devices, interaction design and … thumbs.
In case you missed it, here’s part one of our conversation.
How Can We Account for Touch Inaccuracy in Mobile Design?
We should stop talking about errors and failure and start talking about tolerances. Things are going to fail, so let’s plan as though failure will happen.
It’s much like planning your website: you don’t tweak it until every pixel is identical in every browser. You just get it good enough that nobody notices the broken bits.
I’ve even seen people freak out about how text wraps. “No, it’s supposed to break and go to the next line after this word.” No, it’s just text and it flows.
You need to be a little bit like that.
So we plan for failure. Make sure the touch targets are big enough, and space them out so that things are hardly ever touching each other. And don’t put critical things next to each other.
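The sizing and spacing advice above can be sketched as a simple layout check. Everything here is a hypothetical illustration: the `Target` shape, the 48-unit minimum, and the 8-unit gap are assumptions drawn from common platform guidance, not figures from the interview.

```typescript
// A sketch of a layout lint for touch targets: flag anything smaller
// than a minimum hit size, or two targets packed too close together.
// MIN_SIZE and MIN_GAP are assumed thresholds (roughly Android's 48dp
// guidance), not values given by the interviewee.
interface Target { name: string; x: number; y: number; w: number; h: number; }

const MIN_SIZE = 48; // assumed minimum touch-target edge
const MIN_GAP = 8;   // assumed minimum clearance between targets

function tooSmall(t: Target): boolean {
  return t.w < MIN_SIZE || t.h < MIN_SIZE;
}

function gap(a: Target, b: Target): number {
  // Clearance between two rectangles along each axis (0 if they overlap
  // on that axis); the larger of the two is the effective separation.
  const dx = Math.max(0, Math.max(a.x - (b.x + b.w), b.x - (a.x + a.w)));
  const dy = Math.max(0, Math.max(a.y - (b.y + b.h), b.y - (a.y + a.h)));
  return Math.max(dx, dy);
}

function lint(targets: Target[]): string[] {
  const issues: string[] = [];
  for (const t of targets) {
    if (tooSmall(t)) issues.push(`${t.name}: smaller than ${MIN_SIZE}`);
  }
  for (let i = 0; i < targets.length; i++) {
    for (let j = i + 1; j < targets.length; j++) {
      if (gap(targets[i], targets[j]) < MIN_GAP) {
        issues.push(`${targets[i].name}/${targets[j].name}: closer than ${MIN_GAP}`);
      }
    }
  }
  return issues;
}
```

A check like this catches the “View Next Page next to Delete Account” problem mechanically, before anyone taps the wrong thing.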
Plan for failure
So a thing that people touch all the time, like, “View Next Page,” should not have “Delete Account” right next to it. And if it has to be, then provide ways out.
Never have anything catastrophic happen on an immediate click. If something bad happens, make sure they have a way out of it. And not just the normal “Are you sure?” dialog.
Because you aren’t thinking in terms of errors, but in terms of tolerances. That’s the way the world works.
Do it organically.
It happens a lot with email. If you try to multi-select in Gmail, you press and hold and instead it opens the email. This is annoying, but you are still in the email, so you can press the trash can and keep moving.
Now imagine if, instead, it opened a dialog asking, “What do you want to do?” You try not to annoy people, but to let them continue with the process even if they click the wrong item.
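The “way out” idea can be sketched as an undoable delete: the item disappears immediately, but the deletion only becomes permanent after a grace period, so a wrong tap is recoverable in place. This is a hypothetical illustration, not code from the interview; the `UndoableList` class and its method names are invented for the example.

```typescript
// A sketch of a recoverable delete: staged, undoable, then committed.
// This avoids blocking the user with an "Are you sure?" dialog.
class UndoableList<T> {
  private items: T[];
  private pending: Map<number, T> = new Map();

  constructor(items: T[]) {
    this.items = [...items];
  }

  // Hide the item immediately, but remember it so "Undo" can restore it.
  stageDelete(index: number): void {
    const [removed] = this.items.splice(index, 1);
    this.pending.set(index, removed);
  }

  // The user tapped "Undo": put the item back where it was.
  undo(index: number): void {
    const item = this.pending.get(index);
    if (item !== undefined) {
      this.items.splice(index, 0, item);
      this.pending.delete(index);
    }
  }

  // Grace period elapsed: the deletion becomes permanent.
  commit(index: number): void {
    this.pending.delete(index);
  }

  get visible(): T[] {
    return [...this.items];
  }
}

const inbox = new UndoableList(["invoice", "newsletter", "meeting"]);
inbox.stageDelete(1); // wrong tap — "newsletter" disappears from view
inbox.undo(1);        // user taps Undo; nothing is lost
```

In a real app the `commit` call would be wired to a timer behind an “Undo” toast; the point is that the user keeps moving instead of answering a dialog.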
How Can We Make Sure to Design for Every User in Different Contexts and Scenarios?
Design from the center outward. Design it at the very highest level to be touch friendly. Have big enough buttons and space things out.
Make sure that when things are tapped, you can tell they are tapped. We have too many tiny icons where only the icon changes; your finger covers it, so you can’t see around it and you don’t know what you’ve clicked.
Don’t use web paradigms on apps. Don’t try to use too many links in your app because that doesn’t necessarily mean anything. And likewise don’t use too many app paradigms in the website.
A bottom iPhone-like tab bar on a website doesn’t make a lot of sense to people. Or building back buttons into your website. And yes, I’ve seen all of these things. The browser has a back button; you don’t need to add one to your website. So use the proper paradigms for things like that.
We just have to be cautious. Make sure it’s big enough that people can see it. Make sure it’s visible under all lighting conditions. Make sure there is high enough contrast so you can read even if you are color blind or are outside and the sun is glaring on it.
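The “high enough contrast” advice has a concrete, checkable form: the WCAG contrast ratio, computed from the relative luminance of the foreground and background colors. The sketch below implements that standard formula; the function names are my own.

```typescript
// WCAG 2.x relative luminance and contrast ratio, for checking that
// text stays readable under glare or for color-blind users.
function channel(c: number): number {
  // Linearize one sRGB channel (0–255).
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function luminance(r: number, g: number, b: number): number {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number],
): number {
  const l1 = luminance(...fg);
  const l2 = luminance(...bg);
  const [hi, lo] = l1 > l2 ? [l1, l2] : [l2, l1];
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white is 21:1, the maximum; WCAG AA asks for at least
// 4.5:1 for normal body text.
const ratio = contrastRatio([0, 0, 0], [255, 255, 255]); // 21
```

Running every text/background pair in a palette through a check like this is a cheap way to catch the combinations that disappear in sunlight.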
Accessibility standards guide us a lot on this stuff. This is something I’ve only started talking about lately. Some people think I’m terrible and mean and don’t care about people with disabilities, whereas in fact I think accessibility principles have to be applied for everybody.
Color blindness is a great example. I design every system as though every one of my users is color blind, not just because a lot of people are, but because essentially everyone is: on mobile you will likely have glare on the screen. You carry the phone around, so there is light from overhead fixtures, from the sun, and from everywhere else.
For example, think about somebody whose hands shake a little, or who, like me, can’t see the phone well: how big do the targets need to be, and how big do the words need to be?
If you default to that size of item and “deconflict” buttons so that clicking the wrong one doesn’t cause a catastrophe, then it works better for everyone.
You’ll have the occasional fourteen-year-old with high-resolution vision who can see tiny letters. They can get by, or they can drop the size of everything on their phone; they’ll choose to make it tiny and get a different experience.
But if you start by making it work for everybody, then it tends to work not just for people with vision or stability problems, but for literally everybody, in all of these weird environments.
Can You Tell Us About Your Design Process? What Does It Look Like?
Functionally, I really try to start with architecture and task flow, which almost no one does. It’s really sad how few people do this.
I tend to start with diagrams: we draw a box, this box is a feature, from here we go to that box, and so on. Once we’ve got all of that worked out, I turn it into a user task flow. We make sure we map in where the data is stored. Is it local or remote? How long does that take?
We get all of that junk worked out, and then we start designing grids and templates. We design the headers and the footers and the basic structure. We figure out what’s going to go in the menu versus what’s going to go in tabs or lists, and how we get to everything.
In an ideal world, all I do then is build components out. Much of the time I’m working with client libraries; I like to build widget libraries and pattern libraries specific to each group or client we are working with. Then you can say, for your product, “here is a map and here are the five basic ways of displaying the information. You can now go off and build it yourself.”
However, we’re still in a world where even when they insist they’re on Agile projects, they really want giant wireframes, giant specifications.
So I still end up making 120 page documents of every view of every state of every screen way too often.
Voice interfaces could take over the world tomorrow
But my ideal state like I said, would be structure, principles, and components. Then build it and we collaborate on how it should actually work. We build prototypes, show them to people, and fix them, in a cycle.
So you start with something that’s kind of dumb; it’s not working well, and it’s not what you wanted. But that’s no problem, because it took you 20 minutes to do it. Once we get feedback, we’ll do a different UI layer. It’s no big deal, because we can iterate quickly by reusing all of these components.
Is Voice Command Really Something that Will Change How We Interact with Mobile Devices?
It feels a lot like I’m back 15-20 years ago to the start of my world, where everybody is talking about AI and voice and virtual reality and all this stuff.
To my cynical eye, having seen these technologies pop up two or three times in my professional career, it’s hard to believe it’s real this time. I keep being cynical about it.
I think maybe VR will catch on a little this time. But maybe not. The hardware is kind of expensive.
The other problem is that it hardly matters how good they are, unlike, say, 20 years ago, when an awesome technology could have taken over the world just by being awesome.
The Oculus stuff is super cool. I’ve tried it out, and it’s really cool. But it can’t go anywhere outside Android and iOS, Windows and Mac, and a handful of console systems. Nothing else matters, because the other platforms are such small markets.
And the same goes for anything else. Say you were to launch a whole new platform, like the Mozilla Firefox phone: it went nowhere, because no matter how cool it is and how much backing you have, good luck breaking into the marketplace.
So if anything cool comes out it’s essentially going to have to be integrated into one of these existing platforms.
So let’s say Google jumps ahead and makes an awesome voice-command product. It’s not just voice-to-text; it gets even better. You can literally talk to your phone and have it do things, even more than it does now.
Does that mean Apple tries to catch up with its own product? Or does it do something unique with it, so it becomes a differentiating feature? And which one becomes the market standard?
I have these terrible fears that people are thinking about things way too narrowly and way too competitively. And so in my cynical mind I’m not sure we are going to see the awesome things that we could build.
So voice interfaces could take over the world tomorrow. But they could be better: the kind of thing where you can yell across the room.
If we had that, I would use it constantly. I could get up in the morning and say, “Pull up my email.”
When my phone beeps, I could yell at it from across the room, “Why did you just beep?” And it would say, “You have a new meeting.” “OK, tell me what the meeting is.”
I don’t know why they haven’t built it yet. So we are waiting for people just to build this cool stuff.
Steven Hoober is President of Design at 4ourth, a design studio focused on mobile design, and co-author of Designing Mobile Interfaces: Patterns for Interaction Design. He is widely known for his ongoing research into how people really use touchscreen phones and tablets.
Head down to part three for more talk about UX research.
For More …
Contact Misael Leon at [email protected].