Voice Navigation – What UX/UI Designers Need to Know


User experience has fast become one of the most meaningful differentiators between digital products. At the same time, users have far less tolerance for bad experiences.

We simply expect any digital platform to understand us and provide what we are looking for.

Up until now, UX/UI designers have primarily used graphical elements to support an excellent user experience.

But they are facing a new challenge, a new design realm full of possibility.

As strange as it may sound, the future of product design is no longer just about selecting or creating the right visuals. Instead, there is another realm to consider: audible tools.

Voice navigation is skyrocketing in popularity. Smart speakers are everywhere: Alexa, Siri, Google Assistant, the list goes on. The way users engage with voice user interfaces (VUI) is fundamentally different from how they navigate conventional graphical ones.

At its essence, voice navigation should feel like a conversation – the AI-powered listening device should just get what the user is trying to communicate, regardless of how they go about it.

Achieving this is an incredible challenge, but one that UX/UI designers must contend with.

In this article, you’ll find some practical advice on designing voice navigation. But first, let’s discuss the limitations of this increasingly popular medium.

The limitations of voice-based AI

Voice navigation is nothing like a visual interface. If, for example, we ask Alexa to play us a song, the only way that song can be retrieved and played is if its title exists as text in the underlying catalog.

VUI relies solely on text – design elements like graphics and photography are not translatable.

Here are some of the other critical limitations of voice-based AI:

  • It can only perform tasks it was explicitly programmed to handle, which often results in inaccurate responses.
  • Users are not usually aware of everything it was programmed to do.
  • Voice navigation is not suitable for tasks that require visuals.

Despite these limitations, the use of voice navigation is increasing worldwide.

Also check out our Mozilla Common Voice interface project, which helps make voice recognition open and accessible to everyone.

The graph below shows the number of digital voice assistants in use from 2019 to 2023 in billions.

As humans, we tend to take the easy way out, the lazy option. If we can speak instead of type, we’ll probably do that. It requires less effort and takes less time. It’s about content accessibility, too. And websites that already meet the WCAG (Web Content Accessibility Guidelines) 2.0 standard are better positioned to be found by voice search.

Designing for VUI

Designing for voice user interface means coming head-to-head with one indisputable fact: Users typically have unrealistic expectations about how they can communicate with voice-based AI. (Especially our youth. My 9-year-old has no patience for Siri’s limitations.)

Because the technology is still relatively new, users may overestimate voice-based AI’s ability, or at the very least, misunderstand it.

They might expect to have what feels like a very natural conversation, one in which their language quirks and colloquialisms are registered by the device.

Unfortunately, this isn’t usually the reality. And as UX/UI designers, it’s up to us to create user experiences that deliver on expectations within the confines of current-day technology.

Here are a few practical tips to get you started.

Give users information about what they can (and can’t) do

Think about a graphical user interface. When a user is prompted to take an action, they are given options to choose from. They can search by artist or album when browsing music. They can start a new game or load an old game when playing an app. They can reply to an email or forward it.

It’s essential that a voice user interface delivers this same kind of information – it must tell users what is possible to set realistic expectations in their minds. If users aren’t given this information, they might ask for something the system can’t do, leading to a poor user experience.
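The idea of stating options up front can be sketched in a few lines. This is a hypothetical illustration, not a real voice SDK; the names `SUPPORTED_ACTIONS` and `build_prompt` are assumptions for the example.

```python
# Hypothetical sketch: open a VUI interaction by telling the user
# what is possible, setting realistic expectations up front.

SUPPORTED_ACTIONS = ["play music", "set a timer", "check the weather"]

def build_prompt(actions):
    """Return an opening utterance that enumerates the available actions."""
    listed = ", ".join(actions[:-1]) + f", or {actions[-1]}"
    return f"You can ask me to {listed}. What would you like to do?"

print(build_prompt(SUPPORTED_ACTIONS))
# Speaks: "You can ask me to play music, set a timer, or check the
# weather. What would you like to do?"
```

In a real assistant this prompt would be handed to a text-to-speech engine, but the design principle is the same: the interface, not the user, is responsible for surfacing what is possible.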

Keep the amount of information given to a minimum

When users engage with visual content, they can look back over the information they missed or forgot. The same can’t be said for VUI. Users are tasked with keeping track of the information given in each subsequent interaction.

With this in mind, the amount of information given in any one answer or instruction should be kept to a minimum.

If, for example, you need to communicate a list, start with the most popular items and ask users if they’d like to hear more.

It’s not just about avoiding forgetfulness and confusion, either. Ever been frustrated by a droning phone bot? Press one to do this, press two to do that, press three to speak with a representative, and so on. Users don’t want to wait around all day to achieve a simple goal. They want answers – fast.
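The list-chunking advice above amounts to progressive disclosure. Here is a minimal sketch of one way it could work; the function name, chunk size, and sample playlist are illustrative assumptions, not part of any real assistant.

```python
# Hypothetical sketch of progressive disclosure for a VUI: speak a few
# items at a time and offer to continue, instead of droning through
# the whole list at once.

def speak_in_chunks(items, chunk_size=3):
    """Yield short utterances, each ending with an offer to hear more."""
    for start in range(0, len(items), chunk_size):
        chunk = items[start:start + chunk_size]
        utterance = "; ".join(chunk)
        if start + chunk_size < len(items):
            utterance += ". Would you like to hear more?"
        yield utterance

playlists = ["Top Hits", "Chill Mix", "Workout", "Focus", "Jazz Classics"]
for utterance in speak_in_chunks(playlists):
    print(utterance)
# "Top Hits; Chill Mix; Workout. Would you like to hear more?"
# "Focus; Jazz Classics"
```

Ordering the list by popularity before chunking, as suggested above, means most users get what they want from the first utterance and never need the follow-up.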

Provide visual feedback

A little visual feedback goes a long way in reassuring users that their voice is being heard. We’ve all had the experience of speaking on the phone with someone only to hear silence in response.

Are you still there? Are you listening?

The same kind of frustration can brew when using voice navigation.

If possible, use visual feedback to show users that the interface is registering their voice. For example, when you say ‘Alexa’ to the Amazon Echo Dot, a blue light swirls at the top of the device. Alexa is awake, and she is listening. However, there’s still a lot of room for improvement here; just watch any sci-fi movie from the ’50s that features a robot.
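One way to reason about this is to pair every stage of the interaction with a distinct visual cue. The sketch below is a hypothetical model; the state names and their cues are assumptions loosely inspired by the Echo Dot’s light ring, not an actual device API.

```python
# Hypothetical sketch: map each voice-interaction state to a visible
# cue so the user always knows whether the device is hearing them.

from enum import Enum

class VoiceState(Enum):
    IDLE = "light off"
    LISTENING = "blue swirl"
    THINKING = "pulsing"
    SPEAKING = "solid blue"

def set_light(state: VoiceState) -> str:
    """Translate the interaction state into a cue shown to the user."""
    return f"indicator: {state.value}"

# Wake word detected -> show the user they are being heard.
print(set_light(VoiceState.LISTENING))  # indicator: blue swirl
```

The key design point is that the silent gaps (thinking, waiting) get a cue too, so the user is never left asking “are you still there?”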

Establish the type of language to be used

People don’t always express what they are trying to say in a formal, proper manner. Instead, they use slang, cut corners, and make mistakes. The problem is that UX designers working on VUI cannot account for the many, many ways of expressing the same thing – it’s an almost impossible task.

To minimize the friction caused by this problem, establish the type of language users are to adopt when communicating with a voice-based AI. In examples and instructions, include as much detail as possible about a request. Make it clear that interactions must be communicated accurately for the best possible outcome.
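The problem of many phrasings mapping to one meaning can be sketched as a synonym table over canonical intents. This is a deliberately simplified, hypothetical illustration; real systems use trained language models, and the table here is an assumption for the example.

```python
# Hypothetical sketch: collapse common phrasings, slang included,
# onto one canonical intent, and ask for a rephrase otherwise.
# Real VUIs use statistical language understanding; a lookup table
# only illustrates why exhaustive coverage is impossible by hand.

INTENT_SYNONYMS = {
    "play some tunes": "play_music",
    "put on a song": "play_music",
    "hit me with music": "play_music",
    "what's the weather": "get_weather",
    "is it gonna rain": "get_weather",
}

def resolve_intent(utterance: str) -> str:
    """Normalize an utterance and look up its intent, if known."""
    normalized = utterance.lower().strip(" ?!.")
    return INTENT_SYNONYMS.get(normalized, "clarify_request")

print(resolve_intent("Play some tunes!"))  # play_music
print(resolve_intent("Order a pizza"))     # clarify_request
```

The fallback intent matters as much as the table: when the system can’t match an utterance, a clear request to rephrase teaches the user the kind of language that works.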

Looking forward

Voice navigation and voice user interface design open a whole new opportunity for leading UX/UI designers to innovate: to explore how voice and sound can contribute to a brand’s identity and evoke emotions and connections that visuals struggle to communicate.

Although voice navigation poses its fair share of risks and challenges, when executed well it lets businesses connect with their customers in exciting, more meaningful ways. Opportunities like this don’t pop up every day, so let’s hit this one out of the park.

Voice navigation will change the way we work and search forever, and in many ways, it already has!

Voice and traditional search methods will continue to overlap, with voice poised to take the lead in general usage. If you want to be found by these search engines, you need to rethink how you design and develop content going forward.


About Author

Marc Caposino
CEO, Marketing Director, Senior Strategist

Marc has over 20 years of senior-level creative experience, developing countless digital products, mobile and internet applications, and marketing and outreach campaigns for numerous public and private agencies across California, Maryland, Virginia, and D.C. In 2017, Marc co-founded Fuselab Creative with the hope of creating better user experiences online through human-centered design.