Category: Machine Learning UI Design
Duration: 8 min read
Date: Sep 1, 2021

The Future of AI-Constructed Design


Is machine learning ready to take adaptive user interfaces into the future?

Ready, Player One?

Recently, Facebook announced the “next phase” of its expansion: an ambitious foray into what tech giants are calling “The Metaverse.”

If it sounds pretty sci-fi-ish, that’s because it is: the term first appeared in Neal Stephenson’s 1992 novel, Snow Crash, which presciently described a convergence of physical reality, virtual environments, and augmented “players” in a shared space online.

Given Facebook’s focus on the integration of products, social communities, creators, and commerce, this seems par for the course.

But as the company gears up to reinvent the very fabric of our realities, you have to wonder how this gargantuan effort is ever going to come about. The answer may be closer than you think — just look at your screen.

Screens and the interfaces they display have become the entry point into a fully realized digital world that exists beyond our singularly analog one. As Facebook progresses into the metaverse through Oculus Quest 2 headsets, the data being collected right now about user interactions, avatars, and digital-experience preferences will form the data skeleton of metaversality.

Before you can run, you’ve got to crawl — and adaptive UI powered by machine learning and AI is precisely the current stop-gap that’s setting the stage for our collective foray into the metaverse.

What is adaptive design?

Adaptive UI harnesses AI to track and learn from user actions.

At its core, adaptive UI harnesses AI — artificial intelligence — to track and learn from user actions while the user is interacting with the interface of a program.

An example of this is NX, Siemens’ PLM CAD software, which harnesses machine learning to drive productivity gains in a user’s workflow. Based on machine-learning algorithms, the adaptive UI predicts and presents the commands the user is most likely to need next.

A rough schematic model of a machine learning-based AUI might look like this:

[Figure: rough schematic model of a machine learning-based adaptive UI]
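To make that loop concrete, here is a minimal Python sketch of the observe-learn-adapt cycle, in the spirit of NX’s predictive commands: it counts which command tends to follow which, then surfaces the likeliest next steps. The class, the command names, and the simple bigram counting are illustrative assumptions, not Siemens’ actual implementation.

```python
from collections import defaultdict, Counter

class CommandPredictor:
    """Toy 'observe -> learn -> adapt' loop: watch which command a user issues
    after another, then surface the most likely next commands.
    (A real adaptive UI like NX uses far richer features and models.)"""

    def __init__(self):
        # transitions["sketch"] counts what the user tends to do right after "sketch"
        self.transitions = defaultdict(Counter)
        self.last_command = None

    def observe(self, command: str) -> None:
        # Observe: track the user's action stream as it happens.
        if self.last_command is not None:
            self.transitions[self.last_command][command] += 1
        self.last_command = command

    def suggest(self, top_n: int = 3) -> list[str]:
        # Adapt: reorder the command palette around the most likely next steps.
        if self.last_command is None:
            return []
        return [cmd for cmd, _ in self.transitions[self.last_command].most_common(top_n)]


predictor = CommandPredictor()
for cmd in ["sketch", "extrude", "fillet", "sketch", "extrude", "shell"]:
    predictor.observe(cmd)

predictor.observe("sketch")
print(predictor.suggest())  # ['extrude'], since 'extrude' usually follows 'sketch'
```

In a real product the features get much richer (active part, selection type, project history) and the model gets smarter, but the loop stays the same: observe, learn, adapt.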

It’s useful to think of AUIs as a more sophisticated version of a command-based digital assistant like Alexa.

Adaptive UI thinks, learns, and anticipates. It knows what you need before you even know that you need it. Imagine an overbearing, invasive, but highly effective Moneypenny, and you’ve sort of got it.

To power an adaptive UI, you’d need to integrate machine learning algorithms that watch your interactions with a screen, a browser, or a piece of software. The commands and sequences of actions you take set a context for your later use.

There’s a behavioral pattern there, and an adaptive UI powered by machine learning can recognize that pattern and reorganize itself around your preferred context.

Right now, there are multiple “dimensions” along which interfaces could adapt. These include:

  • Generating new knowledge — recommendations that draw the user’s attention to something related but not entirely known yet. Netflix’s recommendation list, “You Might Also Like…” is a great example of this.
  • Entering data or information — predictive keystrokes, form-filling, and commands. UX for search is a good example — a smart search engine will deploy machine learning to spot interaction patterns that then allow for a search experience based on filters, recent or related searches, or federated or group searches.
  • Filtering information — a program like NewsWeeder, which harnessed user ratings on the articles it presented to “learn” a user’s preferences for news, gradually building up a user profile, or “avatar” (a crude sketch of this idea follows the list).
  • Optimization — route advisors or layout and adaptive design elements reorganized based on a user’s interests, intended actions, and even visual or optical tracking.
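As a toy version of that filtering dimension, the snippet below “learns” which words appear in articles a user rated up or down and uses those counts to filter new headlines. NewsWeeder’s real approach was a proper text classifier; everything here, from the word counting to the sample headlines, is a deliberate simplification.

```python
from collections import Counter

# Crude, NewsWeeder-flavoured preference model: remember words from articles
# the user liked or disliked, then score new articles against those counts.
liked_words = Counter()
disliked_words = Counter()

def rate(article_text: str, liked: bool) -> None:
    words = article_text.lower().split()
    (liked_words if liked else disliked_words).update(words)

def score(article_text: str) -> float:
    # Positive score = the article resembles what the user liked before.
    words = article_text.lower().split()
    return sum(liked_words[w] - disliked_words[w] for w in words) / max(len(words), 1)

rate("new machine learning model beats benchmark", liked=True)
rate("weekend celebrity gossip roundup", liked=False)

articles = [
    "machine learning benchmark results announced",
    "celebrity gossip roundup continues this weekend",
]
# Filter: only surface articles the learned profile scores positively.
print([a for a in articles if score(a) > 0])
```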

Of course, you can’t talk about user interfaces without branching off into the actual consequences of those user interactions — otherwise known as user experience. Right now, we have plenty of examples of adaptive UX — and they’re only scraping the surface of what’s to come with the metaverse.

What is adaptive UX design?

Adaptive UX takes into account the devices or channels.

While adaptive UI focuses on the user’s preferences, adaptive UX takes into account the devices or channels through which a user interacts with a digital element; in practice, UI/UX designers are responsible for both.

Take, for example, sites optimized for certain browsers or screen sizes. Adaptive user experience sets parameters for graphic elements like layout and visibility, as well as information presentation, such as deciding what content to display for which type of context.

The Fuselab Creative site is a good example of this basic form of adaptive UX in motion: it scales and conforms to device screen size, but it also reformulates layouts based on previous mobile interactions versus actions taken on a desktop screen.
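In its basic, rule-based form (before any machine learning enters the picture), this kind of context-aware presentation can be as simple as a function that maps device and visit context to layout parameters. The context fields and layout options below are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical context and layout parameters; real adaptive UX systems
# combine many more signals (bandwidth, locale, accessibility settings, ...).
def layout_for(context: dict) -> dict:
    """Decide what to show, and how, for a given device and visit context."""
    is_mobile = context.get("screen_width", 1280) < 768
    returning = context.get("previous_visits", 0) > 0
    return {
        "columns": 1 if is_mobile else 3,
        "hero_image": not is_mobile,        # save bandwidth on small screens
        "show_case_studies": returning,     # lead with proof for returning visitors
        "cta_label": "Pick up where you left off" if returning else "See our work",
    }

print(layout_for({"screen_width": 390, "previous_visits": 3}))
print(layout_for({"screen_width": 1440, "previous_visits": 0}))
```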

Adaptive UX powered by machine learning would be an aid, not an agent. In other words, its prime objective is to help the user sort through the mountains of data and information or potentials and possibilities available to them by presenting only options that they’re likely to find relevant.

The way AUX does this is through machine learning algorithms that observe interactions, build patterns from the data they collect and “learn” from, and then re-prioritize information on the screen in a way that aligns with the user’s most likely goal.
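A minimal sketch of that observe-and-re-prioritize loop might look like the following: it tracks how often a user engages with hypothetical screen sections and reorders them accordingly, letting older evidence decay. The section names, half-life, and scoring rule are assumptions made for illustration, not any product’s actual ranking logic.

```python
import time

# Hypothetical screen sections and an arbitrary half-life, purely for illustration.
MODULES = ["continue_watching", "new_releases", "recommended_for_you", "trending"]
HALF_LIFE_DAYS = 14.0

engagement = {m: 0.0 for m in MODULES}
last_seen = {m: time.time() for m in MODULES}

def record_interaction(module: str) -> None:
    """Observe: bump a section's score, decaying its old evidence first."""
    now = time.time()
    age_days = (now - last_seen[module]) / 86_400
    engagement[module] = engagement[module] * 0.5 ** (age_days / HALF_LIFE_DAYS) + 1.0
    last_seen[module] = now

def prioritized_layout() -> list[str]:
    """Adapt: order the screen's sections by learned engagement."""
    return sorted(MODULES, key=lambda m: engagement[m], reverse=True)

record_interaction("continue_watching")
record_interaction("continue_watching")
record_interaction("recommended_for_you")
print(prioritized_layout())
# -> ['continue_watching', 'recommended_for_you', 'new_releases', 'trending']
```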


Where machine learning for content personalization is about spurring conversions and influencing user decisions, adaptive UX is the precursor. It sets the graphical and layout elements that presage a conversion by simplifying a user’s ability to interact with on-page elements in a personal way.

You’re probably well aware of adaptive user experience already through such uses as:

  • Recommendation lists (movies to watch next or products you might also want to purchase)
  • Deals and special offers personalized to a user’s clicks, past purchases, and browsing history
  • Personalized, one-click shortcuts to the most likely “next” or desired action
  • Ad visibility (choosing what you want to see and what you want to turn off)

So we’re already using AUI to inform AUX, and the two go hand in hand. With machine learning algorithms, the sky’s the limit when it comes to the overall goal of making digital environments — apps, the internet, and more — simple, enjoyable, and beneficial.

Or so it would seem. The fact is that users are still experiencing limitations when it comes to machine learning-powered adaptive user experiences and interfaces, and it’s not about a lack of power or accuracy — it’s about a lack of transparency.

The Black Box Model Holding Back Adaptive User Interfaces

When interacting with machine learning-powered user interfaces, the inputs are deceptively simple, but we’re not always sure which inputs matter or why.

Similarly, the outputs produced aren’t always useful, beneficial, or even consistent with user interaction.

And this raises the question: why?

Right now, tech giants like Uber, Google News, Facebook, Instagram, Netflix, and even Apple are facing issues with the consistency of their adaptive user interfaces because we’re all operating on the black-box model of machine learning.

Ostensibly, if users are looking for a change to their interface or digital environment, it’s because elements within that environment are irrelevant to them. On the surface, it may seem that this happens because the hidden layers of DNNs or proprietary algorithms are “still learning” and are still expanding those neural networks. Low precision, then, is par for the course, right?

Not quite.

A truly delightful user experience for adaptive interfaces builds certain expectations in users about how the algorithm works. Disruptions to those mental models result in confusion and experiences that are less than delightful or consistent.

A familiar example of obscure predictive layouts is Netflix’s low-precision or completely irrelevant suggestions on its “Because You Watched…” lists. Though the first few suggestions follow a clear pattern, scrolling further down the list often raises eyebrows, and the user is left wondering about particularly strange recommendations.

Black box models of machine learning also present notable biases in terms of “personalized” offers or seemingly decentralized or “democratic” search results that run counter to AI engineers’ initial efforts or intentions:

[Embedded tweet: David Heinemeier Hansson on Apple Card’s credit-limit disparities]

Which inputs mattered, and how were they weighted? How did David Heinemeier Hansson’s wife interact with Apple Card? Did it even matter? The answers to these questions are, of course, obscured. That may seem trivial here, but the issue becomes far more serious when the stakes are higher, like deciding who receives treatment in a hospital.

So, the next stage of AUIs isn’t only about operability or accuracy, but also about explainability. Truly adaptive user interfaces, linked to adaptive user experiences, don’t only present a pre-made iteration of a layout, content, recommendations, or ads. They also need to:

  • Make room for transparency into which of a user’s inputs count.
  • Allow users to take more control and make their own tweaks (both ideas are sketched in the snippet below).
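One way to picture both requirements is a scoring function that returns not just a recommendation score but the per-input contributions behind it, and lets the user dial individual inputs up or down. The feature names and weights below are invented for the sake of illustration; they are not how Netflix or anyone else actually scores content.

```python
# Illustrative feature names and weights; a production system would learn these.
WEIGHTS = {"watched_same_genre": 2.0, "watched_same_director": 1.5,
           "trending_overall": 0.5, "paid_promotion": 0.25}

def score_with_explanation(features: dict, user_weights=None):
    """Return a recommendation score plus the per-input contributions behind it,
    so the interface can show which of the user's signals actually counted."""
    weights = {**WEIGHTS, **(user_weights or {})}   # let the user tweak what matters
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

features = {"watched_same_genre": 1.0, "trending_overall": 1.0, "paid_promotion": 1.0}

score, why = score_with_explanation(features)
print(score, why)

# The user decides promotions shouldn't influence their recommendations at all:
score, why = score_with_explanation(features, user_weights={"paid_promotion": 0.0})
print(score, why)
```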

Yet even these mitigations interrupt the user experience, because personalization comes at the price of increased user effort.

To take the Netflix example once more, personalization of layout and recommendations based on user, session, and device provides a very specific and granular adaptive user experience — right down to the thumbnails.

But the actions of browsing, adding, and watching often create an environment riddled with “duplicates,” which increases interaction cost and degrades usability at its most fundamental level.


The Making of the Metaverse

Companies using sophisticated, ML-powered adaptive and intelligent interface design in their user experience still have issues to sort through.

Before we make it to the metaverse, even giants like Facebook will need to reconsider how to refine interactive elements of the user experience.

There are some clear ways forward because the black box model isn’t inevitable (no matter what tech companies preach):

  • Giving users clear and consistent insight into which of their actions contribute to the output of ML algorithms.
  • Allowing users to control, reorganize, and re-sort elements of the output in ways that work better for them, then tracking that reorganization as data and using it to inform future predictive layouts and recommendations.
  • Frontloading descriptions and headlines in ways that allow users to scan the data, gain context quickly, and decide which actions to take.
  • Personalizing based on a persistent user profile or “avatar” (which will be key in the metaverse), rather than varying the layout or experience by session or visit; a small sketch of these last two points follows the list.
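To sketch those last two points, the snippet below folds session signals into a persistent profile and treats an explicit reorganization by the user as a durable preference, instead of recomputing the experience from scratch each visit. The file path, signal names, and JSON structure are assumptions made purely for illustration.

```python
import json
from pathlib import Path

# Hypothetical storage path and signal names, purely to illustrate personalizing
# against a persistent profile rather than a single session.
PROFILE_PATH = Path("user_profile.json")

def load_profile() -> dict:
    if PROFILE_PATH.exists():
        return json.loads(PROFILE_PATH.read_text())
    return {"interests": {}, "preferred_order": None}

def save_profile(profile: dict) -> None:
    PROFILE_PATH.write_text(json.dumps(profile, indent=2))

def record_session(profile: dict, clicks: list, reordered_layout=None) -> None:
    # Session signals accumulate into the long-lived profile...
    for topic in clicks:
        profile["interests"][topic] = profile["interests"].get(topic, 0) + 1
    # ...and an explicit reorganization by the user is treated as a strong,
    # durable preference that future layouts should respect.
    if reordered_layout:
        profile["preferred_order"] = reordered_layout

profile = load_profile()
record_session(profile,
               clicks=["documentaries", "sci-fi", "documentaries"],
               reordered_layout=["my_list", "documentaries", "trending"])
save_profile(profile)
print(profile)
```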

Player One, hang tight.

Marc Caposino
CEO, Marketing Director


Marc has over 20 years of senior-level creative experience, developing countless digital products, mobile and internet applications, and marketing and outreach campaigns for numerous public and private agencies across California, Maryland, Virginia, and D.C. In 2017, Marc co-founded Fuselab Creative with the hope of creating better user experiences online through human-centered design.