THE MOST CONVINCING lie Steve Jobs ever told was “you already know how to use it.” For years, Apple crowed about its ability to build gadgets so simple and obvious they were practically ingrained—as if you could emerge from decades of cryogenic freeze and instinctively understand how to 3D-Touch the camera app icon and snap a selfie. Need proof? Just look at this adorable video of a two-year-old, already playing games on her iPad!
That notion, of course, is false. Apple’s very good at teaching people to use products through ads, videos, even the setup process on their devices, but nobody’s born knowing how to use an iPhone. It never mattered, because Apple included one feature everyone did already know how to use: the home button. The home button was the thing you pressed when you didn’t know what else to do. It let you feel free to explore, smashing and swiping on the screen, because if it all went haywire you could quickly undo it all and start over. It was like Ted Danson snapping his fingers in The Good Place, starting the whole thing over with no one the wiser.
The companies making the next wave of tech products are increasingly forcing users to figure things out for themselves.
But in 2017, Apple did away with the home button. It’s not the only company to do so, but it’s the most prominent one. For the iPhone X, Apple replaced the obvious thing—the button you subconsciously want to push just because it’s right there—with a small horizontal line you’re supposed to know to swipe up on. Or swipe up halfway on. Or swipe down on. Nothing has buttons anymore, really. The future of tech, at least from what we saw this year, is filled with products much more versatile and complex than anything that came before. And the companies making those products are increasingly forcing users to figure things out for themselves.
Voice assistants like Alexa and Siri are probably the best example of this phenomenon. Every company that makes one of these assistants grapples with the same problem: users don’t know how to use them. They figure out the wake-word thing pretty fast, and sure, they know how to ask their Echo to play music or set timers. But Google, Amazon, and the rest want these assistants to be a constant companion, helping and participating in your entire life. They need to find ways to explain their capabilities, so users can discern the difference between “that didn’t work” and “it can’t do that.” Those distinctions are hard to communicate, especially on a device with no screen. And yet the people making Siri, Cortana, and the others are surprisingly relaxed about the process. They all say essentially the same thing: since the interface is so simple, people will just try stuff, and getting it wrong doesn’t really hurt. It’s just a matter of trial and error.
So far, though, that approach doesn’t work. A study in early 2017 found that nearly 70 percent of Alexa skills were “Zombie Skills” that next to nobody was using. Even when users found a skill, there was only a three percent chance they’d still be using it a week later. Another study found that only 14 percent of people reported using their assistant more than a few times a day, across all platforms. Users are discovering the very basics of their assistants, and enjoying them, but not digging much deeper.
The computer interfaces of 40 years ago were fundamentally unlike anything people had seen before, so of course their makers had to help users figure stuff out. Lots of early computers were made to look like people’s actual desks, so they’d know what to do. Notepads looked like notepads, torn yellow paper and all. Desktops looked like desktops, save buttons looked like floppy disks. Computers were an alien species, camouflaged to look a little more familiar.
Fast forward four decades, though, and users understand their computers better. So companies can try new, more experimental and useful things. But as new interfaces like AR and voice come into vogue, we seem to be skipping the hand-holding part. Because in theory these interfaces are so much more natural and human-like, the companies behind them think everyone will instantly be a power user. But that won’t be true. Sure, the start-up environment in Microsoft’s mixed reality system looks like a house. But the only way to move around your VR “house” is to teleport, which is probably not how you currently navigate your two-bedroom. The controllers are kind of like your hands, but you can’t just grab or poke stuff. You still have to learn the rules, and figure out for yourself what you can and can’t touch. Which is fine, except nobody’s doing much to teach you, because it’s supposed to be so natural that you already know how to use it.
It’s the same with voice, by the way. Yeah, voice is natural, we all know how to speak, sure. But nobody instinctively says “Hey Google, let me talk to Todoist.” It’s OK that we have to learn these things, and that tech companies need to teach them. But they’re not teaching them. They’re just hoping you’ll try a bunch of stuff and figure it out. That, or the tech will eventually get so good you will be able to use it however you want. Either one’s going to take a while, and neither one is certain to work. Until then, we’re stuck in a trough of user interaction, where you can sort of sense how things should work, but they don’t work that way.
Here’s the good news: After two months with the iPhone X, I don’t really miss the home button anymore. It took a while, but I figured it out. I’m OK with the trend in phones toward bigger devices with fuller screens, and slightly more ambiguous interfaces. Billions of users spent a decade learning how to use their phones, and now we know, so we can try new stuff. But as we move into new categories and interfaces, tech companies ought not to forget what made the home button great: it gave users permission and freedom to explore, because there was always a way home when things went wrong. And especially with early technology, no matter how supposedly smart or natural, things are going to go wrong.