5/6: UI Patterns to power your AI products

Delve into the world of UI patterns in AI-driven experiences and how they’re reshaping modern applications. We’ll look at the rise of intent-based interaction and the various UI patterns emerging in response to AI’s unique demands.

Dilip Jagadeesh
Bootcamp

--

Browse the full six-part series written by Dilip Jagadeesh and Kristina Rakestraw

The shape of a brain filled with lots of different bright patterns indicating a system of connections.
Image produced using DALL·E 3

One of the fundamental shifts we are seeing in user interfaces as AI gets better and better is a move away from traditional ‘command and control’ interactions toward ‘goal and intent’ based interactions (Source). The traditional command-and-control UI patterns, where people mostly toggle controls to achieve desired outcomes, are being challenged by new intent-based interactions. This new paradigm enables people to express what they want using natural language, not how to do it. However, in order to do so, they must, of course, know what it is that they want. Let’s be honest — most people don’t know how to express what they want.

Imagine typing the following message into your device: “I want to book a flight to Paris”. The next thing you know, the tool takes care of the rest — from finding the best flights to booking them with your correct information. This shift opens up a new realm of possibility for designing comprehensive and intuitive experiences. As simple as it sounds, this transition isn’t without hurdles. Modern large language models — while adept at understanding natural language — still place a high cognitive load on people by requiring them to articulate their needs in prose. Designing experiences that facilitate this new form of interaction — while remaining intuitive — is a significant challenge for designers.

In this article, we’ll delve into the world of UI patterns in AI-driven experiences, exploring how they’re reshaping modern applications. We’ll look at the rise of intent-based interaction, its implications for UX design, and the various UI patterns emerging in response to AI’s unique demands. From chatbots to AI-first applications, contextual to invisible workflows, we’ll explore how these patterns are being implemented to enhance experiences.

Make it obvious in the experience

Every business and product has its own philosophy for leveraging AI in user experiences and how it will be perceived by people. For example, AI can be positioned as a co-pilot or assistant, or as something invisible that still makes the experience better. Whatever your product’s approach to adopting AI in its interfaces, it is critical to be transparent about when AI is being leveraged.

Symbology matters
Designers are getting clever with how they introduce generative AI to people through intentional symbols.

Leveraging commonly used symbols can help make these high-tech capabilities more familiar and obvious. For example, a magic wand and sparkles are popping up a lot in generative AI. Why sparkles? Because this kind of AI can do things that seem magical. It can whip up creative ideas that we’d usually sweat over. Rather than try to reinvent the wheel and create a new innovative symbol, we recommend you lean into the obvious and meet customers where they are.

The image shows a person standing against an orange background. They are smiling and wearing an orange t-shirt that features a graphic print of a cartoon cat dressed as an astronaut in space, surrounded by stars and planets. On the left side of the image, there is a digital interface with the title “Orange astronaut cat” and a “Generate” button. Below this title, there are two smaller images showing variations of the astronaut cat character, one with a space helmet and the other in a full space
Adobe Firefly
The image is a screenshot of a video call, featuring a smiling man in a denim shirt against a virtual sci-fi city background. Two dialog boxes show prompts for creating sci-fi trivia and designing ancient sci-fi cities, with sample images provided. Icons for a microphone, camera, and other call controls are visible, and the Google Workspace logo is at the bottom.
Google Workspace: Hangouts Gen AI background creation

The robot symbol is another go-to for generative AI, hinting that some smart AI technology is working behind the scenes. Creepy and cool, right? While the robot icon is on trend, it may also make people feel like they are conversing with a cold, unfriendly machine.

We see it as a bit of a balancing act — using the robot icon to show off the power without making people feel like they are chatting with a heartless tin can.

The image displays a logo consisting of the word “Jasper” in bold black letters. To the left of the text is a stylized icon featuring a circle with a gradient of blue to pink colors, enclosing a simplistic, smiling face. The logo is set against a plain white background.
Jasper.ai — Logo

And then there are abstract icons. These are the more enigmatic elements we’ve seen. With their use of basic shapes and lines, they’re less about direct representation and more about sparking curiosity. They invite people to interpret and engage, adding a layer of intrigue. Who doesn’t like a bit of mystery? This approach encourages exploration and personal interpretation, enhancing the AI experience by making them feel more immersive.

The image features an icon of a white knot-like design with interlocking loops, centered on a teal green rounded square background. The design is abstract and appears modern, likely representing a logo or an app icon.
OpenAI — Logo
Microsoft Copilot — Logo

Colors have meaning

Colors play a crucial role in how people perceive and interact with generative AI features in applications. Designers often use vibrant colors to make AI elements pop, while others prefer muted tones for a sense of trustworthiness. While we don’t personally believe that color alone can ensure trust, we get it.

Purple is the standout color in AI experiences to date. According to Google Arts & Culture, purple “signifies both the stability of blue and the energy of red. The color purple is generally associated with royalty, luxury, nobility, power, and ambition. Purple is also used to represent creativity, extravagance, dignity, grandeur, independence, pride, peace, mystery, and magic.” (Source). It’s becoming the go-to choice to symbolize AI’s magic and power, often paired with gradients. Adobe’s insights reveal that the popularity of purple in AI isn’t surprising, given its association with the “magical” aspects of new AI technology. Now, purple is almost synonymous with AI, guiding people to AI-driven experiences.

Choosing colors isn’t just about following trends; it’s about aligning with your brand’s identity. It’s less about ‘to purple or not to purple’ and more about how to make AI recognizable yet true to your brand.

For instance, Notion uses brand-consistent purple icons for its AI capabilities across the tool, leaning on symbology and descriptive copy to convey AI elements. The essence lies in balancing brand identity with the role of color in enhancing AI features.

The image shows a user interface with an “Ask AI to edit or write…” prompt, accompanied by a toolbar offering options like “Generate from selection,” “Find action items,” “Translate,” “Explain this,” and “Summarize.” There’s a cursor icon indicating the “Write with AI” function, suggesting interaction with an AI-powered writing or productivity tool.
Notion AI

Other applications using purple to indicate AI capabilities include FigJam, Canva Magic Edit, and Framer.

The image is a snapshot of a digital workspace for a team’s weekly meeting. It includes sections for an icebreaker activity labeled “Two Truths & a Lie” and updates, with a time allotment for each. There’s a sidebar suggesting AI-generated assistance for creating meeting agendas and next steps. The workspace has a playful and colorful design, with purple and yellow note sections, and is part of the “World Peas / Team Weekly Sync.”
Figjam
The image is a screenshot of the Canva interface in ‘Magic Edit’ mode, where a user has brushed over a man’s face to indicate where to generate sunglasses. The AI has generated a pair of sunglasses, and the user can select the desired result from options presented. There are also options to share or add more elements to the image, with the Canva logo in the top left corner.
Canva Magic Edit
The image shows a text snippet from a chat interaction, possibly with a virtual assistant or a chatbot. The text reads: “A playful landing page for a startup called ‘Magic’. The landing page needs to introduce the company and its products, provide compelling reasons to s”
Framer start with AI

Labeling brings clarity

While designing AI experiences, clear and concise labeling is key, especially for complex generative AI features. These labels act as navigational guides — helping people understand what the features are, how they work, and how to use them effectively.

By demystifying AI functionalities with straightforward labeling, designers can make these advanced features more intuitive. Good labeling transforms the often-intimidating world of AI into an approachable and manageable experience for people, bridging the gap between sophisticated technology and everyday usability.

The image shows a user interface with an “Ask AI to edit or write…” prompt, accompanied by a toolbar offering options like “Generate from selection,” “Find action items,” “Translate,” “Explain this,” and “Summarize.” There’s a cursor icon indicating the “Write with AI” function, suggesting interaction with an AI-powered writing or productivity tool.
Notion AI
An email composition tool is offering suggestions to make the message more inspirational. The suggested text praises the team’s remarkable progress and boundary-pushing efforts. Options to replace, rephrase, improve, simplify, shorten, or adjust tone are available.
Grammarly AI

Location matters

Depending on your product approach, AI capabilities benefit from being contextually available as well as having a global entry point in a persistent and obvious location. If you are leveraging invisible AI to make a recommendation, it is critical that you indicate that within the recommendation.

The role of motion

Motion is playing a critical role in making generative AI features not just more interactive, but also more intuitive and engaging. Designers are increasingly using animation to demystify AI processes, allowing people to visually grasp how it works. This approach transforms complex operations into understandable narratives, making the technology easy to use and less intimidating. Animations that depict data processing or idea generation add a layer of transparency, helping people connect with and trust the system.

Moreover, motion is being employed to infuse a sense of playfulness into interactions. This tactic is particularly effective in making AI approachable for a broader audience, including those who might be new to AI or find it daunting. By integrating playful animations, designers can turn complex AI interactions into enjoyable experiences that enhance engagement and contribute to more user-friendly interactions. This helps make the technology feel less cold and machine-like and more playful and conversational.

A spreadsheet displaying an agenda for an Annual Sales Conference. It includes columns for time, session title, presenter, location, description, and status. Sessions include sales training, keynotes, and an awards ceremony, with various statuses from ‘Not Started’ to ‘Completed.’ Locations vary from Hotel Lobby to Grand Ballroom
Google Workspace
A screenshot of an image editing interface in Canva, with a feature labeled ‘Magic Replace’ activated. The sidebar shows options to brush over the image and describe what to generate. The main canvas area displays a colorful still life setup with a biscuit, a purple cup, and a green plate on a yellow background. A tag labeled ‘Maya’ is visible, suggesting a collaborative editing session.”
Canva Magic replace

The elements of trust

One of the foundational elements of AI-driven experiences is ensuring that people trust the workflow. Whether it is a simple recommendation or a complex data analysis, it is critical that people have a way to trust the output of the system. In order to do this, we recommend making some of the underlying data available to people. Let’s explore the elements that help make this possible.

Response confidence indicators

People are often in the dark about how an AI-driven system comes up with predictions, and they’re clueless about how sure the system is about its guesses. A simple solution is to let the machine ‘speak up’.

For example, adding a confidence indicator is like giving the AI a voice to say, “Hey, I’m pretty sure about this” or “Hmm, I’m not too certain.” This is helpful for people to determine how much they want to rely on what the AI suggests. It’s all about building trust. When people see how confident the AI is, they’re more likely to pitch in with more data, helping the system get even smarter.
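The idea above can be sketched in code as a small pure function that maps a model’s confidence score to a user-facing message. This is a minimal TypeScript illustration; the thresholds and wording are assumptions for the sake of the example, not any product’s real values:

```typescript
// Sketch: mapping a model confidence score to a user-facing label.
// The bands and messages below are illustrative assumptions.

type ConfidenceBand = "high" | "medium" | "low";

interface ConfidenceDisplay {
  band: ConfidenceBand;
  message: string; // the "voice" the AI uses to qualify its suggestion
}

function describeConfidence(score: number): ConfidenceDisplay {
  if (score < 0 || score > 1) {
    throw new RangeError("score must be in [0, 1]");
  }
  if (score >= 0.8) {
    return { band: "high", message: "I'm pretty sure about this." };
  }
  if (score >= 0.5) {
    return { band: "medium", message: "This looks likely, but double-check it." };
  }
  return { band: "low", message: "I'm not too certain about this one." };
}

// Example: qualify a recommendation before rendering it.
const display = describeConfidence(0.88);
console.log(`${display.band}: ${display.message}`);
```

Surfacing the band (rather than a raw probability) keeps the indicator readable for non-technical audiences while still setting honest expectations.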

A promotional graphic displaying three smartphones against a red and blue abstract background. Each phone showcases a different user interface feature of a streaming service app. The left phone displays a show description for ‘Stranger Things’ with options to play and download. The middle phone highlights a ‘Continue Watching’ section with a user’s current shows. The right phone features a pop-up asking users to rate shows for better recommendations, with response options tailored to their view
Netflix rating and recommendation system

Additional reading: How the Netflix recommendation system works

The image shows a user interface for a quality assessment tool, indicating a “Quality Confidence Score” of 88%. Suggestions for improving the score include incorporating visual context and adding an internal review. A radar chart visualizes factors influencing the score, with areas like “TM Leverage,” “Glossary Terms,” and “Workflow Steps.” A circular progress bar also displays the 88% score prominently.
Smartling — Translation score

Explainers

Similar to confidence indicators: when doing advanced data analysis, the logic behind a computation can easily get obscured. In order to build trust and allow people to audit the computation, they need a way to understand how the system computes: for example, what data was used, how it was filtered, and what function was applied to produce the end result.
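One way to structure this is to return an explanation payload alongside every computed metric, so the UI can always answer “where did this number come from?”. A hypothetical TypeScript sketch (the field names and the sample metric are illustrative, not any product’s real schema):

```typescript
// Sketch: pairing a computed metric with an "explainer" payload so people
// can audit how the number was produced. All field names are illustrative.

interface Explanation {
  dataSource: string;    // where the data came from
  filters: string[];     // how the rows were narrowed down
  aggregation: string;   // what function produced the result
  lastRefreshed: string; // when the underlying data was last updated
}

interface ExplainedMetric {
  label: string;
  value: number;
  explanation: Explanation;
}

function computeDeviceSales(
  rows: { product: string; units: number }[]
): ExplainedMetric {
  const filtered = rows.filter((r) => r.product === "Device");
  const total = filtered.reduce((sum, r) => sum + r.units, 0);
  return {
    label: "Device Sales",
    value: total,
    explanation: {
      dataSource: "sales_orders",
      filters: ['product = "Device"'],
      aggregation: "SUM(units)",
      lastRefreshed: "2023-09-12",
    },
  };
}
```

Because the explanation travels with the value, a “how is this calculated?” panel can be rendered anywhere the metric appears, without re-deriving the logic.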

The image is a screenshot of the Tableau Pulse analytics interface, displaying a “Device Sales” metric card. It shows 1,675 units sold and includes a trend graph indicating recent sales data. Details such as “Last Refreshed” date of Sep 12, 2023, data source, definition filters, and a brief description of the data are visible. The interface also includes interactive questions like “Which Products drove this sudden increase?” indicating user engagement features.
Tableau Pulse — explains how the data is being calculated

Tableau AI allows people to query their data in natural language and returns charts along with interpretations of them. If someone wants to know the logic behind the outcome, they can easily peek behind the curtain.

A screenshot showing a user asking for research on the economic impact of population growth on GDP. The response shows titles of academic papers, including one on renewable energy and another on GDP and unemployment in Bangladesh
ChatGPT — Paperpile plugin, showing the request sent and the response received

Disclaimers

No AI system can claim to be always accurate. Being upfront about the shortcomings of responses and the model’s potential to hallucinate is critical for building trust with people.

For example, both Bard and ChatGPT have a standing disclaimer next to the input field explaining the potential mistakes these models can make.

The image shows a message box with text that reads “Message ChatGPT. ChatGPT can make mistakes. Consider checking important information.”
ChatGPT — Disclaimer in the footer
The image displays a user interface element that seems to be a text input field with the placeholder text “Enter a prompt here”. Below the input field, there is a disclaimer that reads: “Gemini may display inaccurate info, including about people, so double-check its responses. Your privacy & Gemini Apps.” This seems to be a screenshot of a part of an application interface, likely related to a conversational AI or an application that uses machine learning to interpret and respond to user inputs.
Gemini — Disclaimer in the footer

Badges to set the expectation

Another approach that encourages people to engage with AI-driven experiences, while helping them understand that the models are still a work in progress, is the use of badges. A simple “Preview” or “Beta” helps set expectations and allows products to fail and learn without losing trust in the overall brand.

For example, when Canva Magic Design and Adobe Firefly launched their AI design capabilities, they added a “Beta” badge next to the features so people could understand they were still in their early days.

The image appears to showcase a feature called “Try Magic Design” which is currently in beta. This feature seems to generate creative designs from user-provided media. Displayed are three sample invitation cards for a “Palm Springs Party” and a “Birthday” event. Each card has a distinct layout and style, but they share common elements like tropical imagery, a retro aesthetic, and event details. The designs have a playful and vibrant look, using pastel colors and bold typography, evoking a sense
Canva’s Magic design feature
Screenshot of adobe firefly beta page
Adobe Firefly (Beta)

Feedback loop

In order to facilitate continuous learning for AI systems, there needs to be a way to receive feedback on responses. This enables underlying models to learn someone’s preferences and better recognize when the system’s responses do not meet expectations. Depending on the architecture, models can automatically adjust responses, or build a catalog of feedback that is later analyzed and fine-tuned with human intervention to increase accuracy.

Like / Dislike button

One of the most common patterns is a like / dislike button, thanks to its simplicity. These tend to get higher use because they require very little effort to express sentiment.

A screenshot of a digital interface, possibly a chat or feedback system, highlighting a message labeled ‘Bad response’ with a button to communicate further with ‘Message ChatGPT’. The message above suggests consulting economic and demographic databases for detailed analysis and reminds that many factors can influence population and economic growth.
ChatGPT — Response feedback mechanism

Contextual feedback

Another powerful way to help the system learn is to offer a simple, contextual feedback questionnaire immediately after the response is generated. It is often paired as the next action after a person responds with a like or dislike.

For example, Microsoft Copilot shows a simple contextual feedback form after a like/dislike in order to collect context about the expectation gap.

A user interface pop-up asking for feedback on a rating with the options ‘Offensive / Unsafe’, ‘Not factually correct’, and ‘Other’. A text box is provided to give additional feedback. A note at the bottom indicates that feedback will include up to the last 24 hours of conversation to help improve the service, with a ‘Learn more’ link
Gemini — Contextual feedback
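The like/dislike and contextual follow-up patterns above can be captured in a single feedback payload. A minimal TypeScript sketch; the field names and reason categories are illustrative assumptions, not any product’s real schema:

```typescript
// Sketch: a feedback event pairing a quick like/dislike with an optional
// contextual follow-up. Categories and fields are illustrative assumptions.

type Sentiment = "like" | "dislike";
type DislikeReason = "offensive-unsafe" | "not-factual" | "other";

interface FeedbackEvent {
  responseId: string;     // which AI response this feedback is about
  sentiment: Sentiment;
  reason?: DislikeReason; // only collected after a dislike
  comment?: string;       // free-text detail from the follow-up form
}

function buildFeedback(
  responseId: string,
  sentiment: Sentiment,
  reason?: DislikeReason,
  comment?: string
): FeedbackEvent {
  // Guard: the contextual follow-up only makes sense for negative feedback.
  if (sentiment === "like" && reason !== undefined) {
    throw new Error("contextual reasons only apply to dislikes");
  }
  return { responseId, sentiment, reason, comment };
}
```

Keeping the quick signal and the richer context in one event makes it easy to log both together for later analysis and fine-tuning.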

Offer controls to guide people to their intent

We believe that an intuitive AI-driven experience means providing controls that offer people guardrails for focusing on and understanding what is possible — as well as leveraging common GUI patterns within the context of prompt interfaces. Some examples include skill selection, response modifiers, multi-modal interaction, and suggested prompts.

Skill selection

Some products have started incorporating skill selection in order to narrow down a person’s intent. This allows people to explicitly state the type of task they are looking to engage in. For example, ClickUp requires people to specify their intent by selecting a skill from a dropdown list before engaging with the prompt.

ClickUp — AI tools / skill selection

Similarly, much more robust and open-ended tools such as Copilot let people choose a conversation style so the model is aware of how they intend to engage.

Screenshot of a chat interface with a message that says ‘Thanks for chatting. How can I help you today?’ and icons for thumbs up, thumbs down, flag, and download below it. There are options to choose a conversation style, labeled ‘More Creative’, ‘More Balanced’, and ‘More Precise’, with ‘More Precise’ highlighted. Below, a prompt for ‘New topic’ and a text box that says ‘Ask me anything…’ with example questions are displayed. There’s also a note prompting the user to sign in to ask more Qs
Microsoft Bing Chat — Response style selection
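Both flavors of skill selection boil down to mapping an explicit choice onto an instruction the model receives alongside the person’s input. A minimal TypeScript sketch, with made-up skill names and instruction text:

```typescript
// Sketch: turning an explicitly selected skill into a model instruction.
// Skill names and instruction strings are illustrative assumptions.

const skills = {
  "summarize": "Summarize the following text concisely.",
  "write-email": "Draft a professional email based on the following notes.",
  "brainstorm": "Generate a list of creative ideas for the following topic.",
} as const;

type Skill = keyof typeof skills;

function buildPrompt(skill: Skill, userInput: string): string {
  // The selected skill becomes an explicit instruction, so the model no
  // longer has to infer the person's intent from prose alone.
  return `${skills[skill]}\n\n${userInput}`;
}

console.log(buildPrompt("summarize", "Meeting notes from the weekly sync..."));
```

The dropdown in the UI is then just a view over the `skills` keys, which keeps the list of capabilities and the instructions they map to in one place.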

Response modifier

Though one might wish it worked like magic, understanding the person’s intent from one single prompt, we all know that is rarely the case. AI tools learn to understand a person’s intent and produce good results when there is a dialogue between the tool and the person. To achieve this more seamlessly, some tools have adopted a pattern that enables the person to modify the response using the initial output as the canvas. What does that mean? Let’s look at Copy.ai as an example.

Copy.ai is a generative AI tool that can create long-form text and then lets people highlight only the text they want rewritten. It also offers the option to choose from alternatives without the person having to prompt “I need you to rewrite this…”.
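The selective-rewrite pattern can be sketched as composing a preset modifier with the highlighted text, so nobody has to type the rewrite instruction by hand. A hypothetical TypeScript sketch; the modifier names and phrasing are illustrative:

```typescript
// Sketch: composing a preset response modifier with a selected text span
// into a rewrite instruction. Modifier phrasing is an illustrative assumption.

type Modifier = "shorter" | "simpler" | "more-casual" | "more-professional";

const instructions: Record<Modifier, string> = {
  "shorter": "Rewrite the following text to be shorter:",
  "simpler": "Rewrite the following text in simpler language:",
  "more-casual": "Rewrite the following text in a more casual tone:",
  "more-professional": "Rewrite the following text in a more professional tone:",
};

function rewriteRequest(modifier: Modifier, selectedText: string): string {
  if (selectedText.trim().length === 0) {
    throw new Error("nothing selected to rewrite");
  }
  return `${instructions[modifier]}\n\n"${selectedText}"`;
}
```

Because the modifiers are a closed set, the UI can present them as buttons or a menu over the selection, which is exactly the guardrail this pattern is about.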

A header for a blog post with a toolbar at the top showing text formatting options. With an AI option to rewrite
Copy.ai — Selective text rewrite
A header for a blog post with a toolbar at the top showing text formatting options. With an AI suggestions for renaming the part of the text
Copy.ai — Selective text rewrite, Choosing the response.

Google Bard allows people to modify the response with preset options.

A screenshot of an interactive content editing tool providing personalized advice for writing an introductory message to a former professor. Options include making the message shorter, simpler, more casual, or more professional.
Gemini — Response modifier

Many image generation tools offer explicit controls over the underlying model, so people can specify the exact style, relevance to a reference image, model type, negative prompt, and more.

A digital painting of two cats styled in the manner of Van Gogh, with vibrant colors and expressive brush strokes, displayed in a generative art interface.
Stablecog

Multi-modal interaction

As the field rapidly evolves, the underlying models’ ability to interpret content in different formats is getting better. This enables people to upload a file, image, or video, express the task intent through text or voice, and then run the prompt.

For example, Microsoft Copilot lets people upload a relevant file and add a prompt through an inline prompt input component.

A screenshot of a word processing software interface with a popup feature titled ‘Create content with Copilot’ suggesting to ‘draft a proposal from yesterday’s meeting notes’.
Microsoft Copilot

ChatGPT lets you upload an image, annotate it to direct the model, and ask questions about it.

A complex parking sign with multiple restrictions; the text conversation below confirms that parking is allowed for one hour starting at 4 pm on Wednesday.
ChatGPT — Multimodality with an image. Image credit — Dr. Ray on X

Adobe Firefly uses classic Photoshop-like annotation tools on an image, along with a prompt, to accurately understand what needs to be changed in the image.

Screenshot of a graphic editing software interface with a jaguar selected, and a ‘Generate’ button prompt suggesting the creation of similar images.

Suggested prompts

One of the best ways to onboard people and help them discover the capabilities of AI tools is by showing example prompts. This UI pattern also enables a person in the middle of a workflow to get a contextual recommendation on how they can use the AI tool.
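A simple way to implement this is a small prompt library filtered by category or context. A TypeScript sketch; the categories echo Gemini’s grouping, while the example prompts themselves are invented for illustration:

```typescript
// Sketch: a categorized library of suggested prompts that the UI can
// filter by context. The entries below are invented for illustration.

interface Suggestion {
  category: "understand" | "create" | "explore";
  prompt: string;
}

const library: Suggestion[] = [
  { category: "understand", prompt: "Explain this document in plain language" },
  { category: "create", prompt: "Draft an agenda for a weekly team sync" },
  { category: "create", prompt: "Write a product announcement email" },
  { category: "explore", prompt: "What are the trade-offs of remote work?" },
];

function suggestionsFor(category: Suggestion["category"]): string[] {
  // Return only the prompts relevant to the person's current context.
  return library.filter((s) => s.category === category).map((s) => s.prompt);
}
```

In a real product the `category` would likely be derived from where the person is in the workflow, so the suggestions stay contextual rather than generic.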

A screenshot of the Google Gemini chatbot with prompt suggestions grouped into three categories: understand, create, and explore
Google Gemini — Suggested prompts
A screenshot of a prompt library feature, which enables people to browse through various categories of prompts and select one
Copy.ai — Prompt library

Prompt builder

One of the most difficult barriers to writing prompts is learning how to write them effectively. Prompt builders create intentional guardrails that establish some structure for people to define their intent.
Additional reading: Nielsen Norman Group research on the articulation barrier

Examples:
Copy.ai has a prompt-improvement feature that gives people a structure for adding more details to their original prompt.

A screenshot of a text editor with an article outline on the importance of regularly updating an iPhone system. The outline includes sections for an introduction with a hook to grab attention about the significance of updates, a brief explanation of the article’s purpose, and a section on understanding iPhone system updates with a clear definition and function explanation.

ClickUp example:
Select the skill you want to engage and fill in the details.

A screenshot of AI tool selection menu.
A screenshot of an application capturing structured information about a product to be used as a prompt
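Under the hood, a prompt builder is essentially a template with named slots that the person fills in. A hypothetical TypeScript sketch (the template text and slot names are made up for illustration):

```typescript
// Sketch: a prompt builder as a template with named slots, giving people
// structure instead of a blank text box. Template content is illustrative.

interface PromptTemplate {
  name: string;
  template: string; // slots written as {slotName}
}

const productDescription: PromptTemplate = {
  name: "Product description",
  template:
    "Write a {tone} product description for {productName}, " +
    "aimed at {audience}. Highlight: {keyFeatures}.",
};

function fillTemplate(
  t: PromptTemplate,
  values: Record<string, string>
): string {
  return t.template.replace(/\{(\w+)\}/g, (match, slot) => {
    const value = values[slot];
    // Fail loudly if the person left a required slot empty.
    if (value === undefined) throw new Error(`missing value for slot: ${slot}`);
    return value;
  });
}

const filled = fillTemplate(productDescription, {
  tone: "playful",
  productName: "Magic Notes",
  audience: "busy students",
  keyFeatures: "instant summaries, flashcards",
});
console.log(filled);
```

Each slot can be rendered as a labeled input field, which is exactly how tools like the ClickUp example above guide people toward a well-formed prompt.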

Moving with the shift

As we shift into the new goal- and intent-based way of interacting, it’s vital to leverage all the tools that make this a more seamless and dynamic experience for people. As humans, we are not very good at nailing our intent on the first try, so it’s important to lean into this and provide mechanisms that help people get closer to what they are looking for.

📚 Continue reading the full series

  1. The resurgence of prompt interfaces in product design
  2. Exploring the mechanics of AI
  3. Design principles to consider when leveraging AI
  4. How AI is shaping the future of product experiences across industries
  5. UI Patterns to power your AI products
  6. Leveraging AI as a collaborator throughout the UX process

--


I have been a filmmaker, social entrepreneur, engineer, and now a soothsayer for SaaS businesses. I use design to communicate and have thoughtful conversations.