The objectives of an IUI are to increase productivity, decrease costs, and improve efficiency. An IUI needs to work with users, not against them: it has to be intuitive and able to anticipate a user's needs.
This includes the use of machine learning and adaptability. Email filters, automated replies, and spoken-word interfaces are all examples of how IUIs enhance the user experience.
In the world of user interfaces, many long-held best practices of platform design still apply when discussing IUI; you simply add a layer of intelligence.
Now, IUI seeks to perceive, interpret, use, and learn.
Those four capabilities are its core elements.
So how will these elements transition to IUI as its evolution veers toward voice? How will visual cues translate into speech?
To boost engagement, brands have had to evolve IUI elements. IUI was once based on what the user saw on a screen, whether on a desktop or in a smartphone app. That is no longer convenient enough: users want to simply use their voice, hence the rise of the digital assistant. In fact, one widely cited estimate projected that 50% of searches would be voice-based by 2020.
What makes the digital assistant interesting is how it shifts IUI into a new realm, stripping away the visual interface entirely. The IUI must instead center on the "conversation" between the user and the digital assistant.
Technology is so integrated into daily life that consumers expect ever more convenience. As a result, screenless user experiences are now what matters when designing a compelling IUI. The rise and implementation of the Internet of Things (IoT) have revolutionized IUI.
It probably seems strange to discuss "design" when there is no visual screen, but that is where the state of IUI lives now. The approach is framed in terms of functionality: it still calls upon the basics of effective IUI, it just doesn't take the form of buttons and text on a screen.
To start a design for voice, it’s critical to understand how people naturally communicate vocally. With a comprehension of voice interaction, you’ll have the foundation for creating IUI for voice.
Next, it’s important to know how the user will interact, considering if voice is an alternative to screen interaction or if it’s the only way to use the device or application.
Apple’s Siri is an example of voice IUI that coexists with a physical screen interface.
When you add a voice interface to IUI, be aware that it differs substantially from a product that uses screens; you cannot apply the same design guidelines. In voice user interfaces, there are no visuals or clear indicators of what action has taken place or what the options even are. Users are still unsure what to expect from voice IUI. Communication, after all, is something we are conditioned to consider human-to-human, not human-to-machine. That is changing, of course, as voice search grows, and voice may eventually supersede traditional text search.
Since this segment of UI is changing rapidly, these best practices are merely parameters to be expanded upon as technology improves and as users become more dependent on human-to-computer communication.
You’ll notice that these are influenced by traditional IUI elements.
Voice has no way to show users their options the way buttons on a screen do, so the app needs to offer helpful information: for a weather app, something like, "You can ask for today's forecast or a weekly one." This practice is not unlike offering AI-enabled how-to visuals on a screen.
Again, with no screen it is hard to know where you are in a flow, so the app needs to give specific guidance by responding, for example, "Today's forecasted high temperature is 70 degrees with a chance of rain." It is not unlike how you have to "reset" a user when they move to a new screen.
Most people speak in shorthand, and thoughts aren't always fully expressed. Voice IUI, however, needs full expression, so demonstrate how users should phrase requests by giving examples. It is intuition at work on an auditory level rather than a visual one.
Visually, users can move back and forth between options, but that is not how voice interaction works. For that reason, options need to be grouped. Just as you want to keep a screen "clean," voice should follow the same approach.
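The four practices above can be sketched in code. This is a minimal, hypothetical illustration (the weather skill, phrases, and function name are assumptions, not a real assistant API): recognized requests get specific answers, and anything else gets grouped options plus an example phrasing, since there are no buttons to discover.

```python
def handle_utterance(utterance: str) -> str:
    """Map a spoken request to a response. All data is illustrative."""
    text = utterance.lower()

    if "today" in text:
        # Respond with specifics, not just "here is the weather".
        return "Today's forecasted high is 70 degrees with a chance of rain."
    if "week" in text:
        return "This week: highs near 70, with rain expected Thursday."

    # Unrecognized request: surface the grouped options and model an
    # example phrasing, since voice offers no visual menu to scan.
    return ("You can ask for today's forecast or a weekly one. "
            "For example, say: what's the weather today?")
```

The fallback branch is doing the heaviest lifting: in a screen interface the layout itself teaches users what is possible, while in voice that teaching has to be spoken, briefly and in small groups.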
Ultimately, the elements used in IUI, whether on the screen or via voice, are at their foundation the same.
Most of these relate to guiding the user, which is AI's main role in improving the user experience.
As the use of voice IUI expands, any major player in the space will need to adapt or be rendered irrelevant. Taking what's on the screen and adapting it to voice is not a hard transition.
The most important elements relate to intuition and anticipation: if your platform delivers both, users will adopt it in high numbers.