Innovation is transforming every industry, changing our experiences in the grocery store, the shopping mall, the car, and, more recently, the living room. Users can now interact with connected devices by speaking a message or a search query aloud. As babies, we learn to communicate, and we learn through communication.

One of our most natural modes of communication is voice. Verbal communication is among the most celebrated milestones a child reaches, and its achievement is a stepping stone into toddlerhood. Just as a baby learns to coo and babble, to form vowel and consonant sounds, and eventually to communicate verbally and effectively, a machine learns and adapts. What's important to note, however, is that bad experiences with voice devices are usually not the result of poor technology, but of poorly designed user interfaces.

Natural language is difficult for a machine to understand. Imagine I ask my uncle, who is a teacher, "How was school today?" After any ordinary day, he might respond "hectic," "good," or "alright." Now imagine my uncle has just returned to teaching after invasive surgery. His response will almost certainly differ: he might say, "It was much better than I expected," or, "I was in immense pain." What humans grasp in conversation is context, conveyed through body language and intonation. What technology struggles to understand is intention, and this is what makes designing voice user interfaces so complex.

Connected devices are programmed to collect information, which means they lack context and may struggle with certain vernacular, accents, dialects, or languages. As technology continues to remove trouble, stress, and effort from people's lives, users who communicate naturally through voice and conversation develop higher expectations. Because a user interacts naturally with a connected device, developers cannot rely on the same methods, processes, and systems they would use when a person interacts with, for example, a mobile application.

Marc Paulina, a UX designer on Google Assistant for WEAR, emphasized during the panel "Voice UI: What's All the Noise About?" the value of understanding the user, and described how his team works to understand the user in order to create a more user-centric design. Paulina said he constantly returns to two questions:

  1. What’s the user’s motivation?

  2. What’s the user’s goal?

He went on to share some best practices for conversation design. Paulina recommended designing for failure, given the early stage the technology is in: as demonstrated above, ambiguity and misrecognition persist because of varying languages, vernaculars, and dialects, and a lack of context clues.

Your Device's Personality

Beyond interactions and conversations with your connected devices lies a device's character. I, for example, asked my device, "Hey Siri, what's your favorite food?" Siri responded, "I don't eat much, [name], I'll leave it up to you." Here, she demonstrates character by giving the user the power. The device wants to connect with the user on a human level while keeping its distance. Most importantly, the device wants to establish trust with the user. In the example above, the device builds trust and personalization by using my name. The voice is neutral and feminine, with an intelligent and friendly persona, a deliberate branding choice by Apple's designers in California. Voice User Interface Design advises that a brand consider three things when creating a persona: brand image, company role, and the importance of establishing familiarity.