Bootcamp

From idea to product, one lesson at a time. To submit your story: https://tinyurl.com/bootspub1


Opinion: What Generative-AI-powered tools in the field of UX Research and Design need to get right to succeed

Photo by Faizur Rehman on Unsplash

The writing’s been on the wall for a while: for better or worse, Generative AI is here to stay. Over 100 million people use ChatGPT every week (Marr, n.d.). And now that OpenAI has run the industry’s four-minute mile, every tech company in the world wants in on the action.

Broadly speaking, there are two distinct categories of AI offerings currently in the market:

  1. General-purpose LLMs: These are primarily developed by big tech companies such as OpenAI, Google, Meta, and Amazon. They are often ‘multimodal’ in nature, i.e. they “train with and use video, audio, speech, images, text and a range of traditional numerical data sets” (What Is Multimodal AI?, n.d.), and can essentially do a little bit of everything. The resources required to build, train, and maintain these models are simply beyond the reach of most of the industry.
  2. Special-purpose Gen-AI models: These are purpose-built Gen-AI models that serve very specific niches and workflows. Need an assistant that suggests code as you program? What about highly specific icons or stock images that you can neither find nor afford? This is where startups and mid-sized tech companies are currently thriving, building products around such purpose-built AI models.

However, since Gen-AI is still in its infancy, these models are far from perfect. Problems such as ‘hallucinations’ (Generative AI Hallucinations, n.d.) and misinterpretation of tasks and prompts have long hindered the possibility of total automation of tasks. As such, the best way for current professionals to adapt to the new normal has been to integrate AI into their workflow.

Naturally, this also applies to the field of UX Design and research. But after six or so weeks of trying the bleeding edge of what the industry had to offer, I have identified certain areas for improvement or ‘principles’ that UX-specific AI tools need to focus on to achieve widespread adoption. In my opinion, Gen-AI-powered UX tools…

1. …should springboard, not take over

This primarily applies to the current state of Gen-AI technology. The University of Southern California’s research guide warns of the various limitations of Gen-AI, including the following: “The nature of generative AI models, particularly when given simple prompts, can be very reductive, resulting in content that is over-simplified, low quality, or very generic.” This applies to use cases outside of academia as well.

For instance: tools like Uizard’s Autodesigner aim to abstract the entire UX design process into simple prompting. By relying on them, one risks missing many of the details and nuances that a UX Designer going through all the motions of user research and iterative design might catch.

The prompt given to Uizard’s Autodesigner
A critique of one of the screens generated by Uizard’s Autodesigner

Gen-AI is far from perfect, but while we wait for the technology to catch up, today’s tools should focus on one of two things:

  • Providing an easily editable starting point from which UX Designers can quickly draw inspiration, or;
  • Automating only a relatively small part of the entire process (e.g. icon generation, typeface and color palette selection) to avoid the drop in output quality that comes with widening the scope of the task assigned to the AI model.
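As a toy illustration of the narrow-scope approach, the sketch below derives a small palette from a single base color instead of asking an AI to design a whole screen. The helper and its hue offsets are hypothetical, not any real tool’s API:

```python
# Illustrative sketch: automate one small, well-scoped design task
# (palette selection) rather than the whole design process.
# The function and offsets are hypothetical, for demonstration only.
import colorsys

def palette_from(hex_color, offsets=(0.0, 0.5, 1/12, -1/12)):
    """Return hex colors whose hues are shifted from the base color:
    the base itself, its complement, and two analogous hues."""
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (1, 3, 5))
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    palette = []
    for off in offsets:
        nr, ng, nb = colorsys.hls_to_rgb((h + off) % 1.0, l, s)
        palette.append("#%02x%02x%02x" % tuple(round(c * 255) for c in (nr, ng, nb)))
    return palette

print(palette_from("#3366cc"))  # base, complement, two analogous hues
```

A designer still makes the final call; the tool just proposes a deterministic, easily editable starting point.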

2. …should be extensible

For a product to remain the best offering in its space, it needs to constantly evolve and gain new capabilities. While a company might capture lightning in a bottle once with a particular offering, it’s not realistic for it to anticipate and cater to every way users wish to interact with a product. This is why so many companies turn to the community after gaining a critical mass of users. By letting the community contribute to and build upon an initial winning formula, you increase their loyalty to the product while incubating continued innovation within its ecosystem. There are successful examples of this model across product categories: WordPress’s plugin ecosystem among website builders, some of Valve’s biggest video games being born out of community-built mods for earlier titles, and Visual Studio Code’s market dominance among Integrated Development Environments (IDEs) thanks to a vast collection of community-made extensions. It’s a tried-and-tested method.

So it’s safe to say the same can be applied in the Gen-AI space. In fact, ChatGPT Plus, OpenAI’s paid subscription tier for ChatGPT, has a plugin store!

The ChatGPT Plus Plugin Store (via searchenginejournal.com)

Alternatively, several AI tools aimed toward UX Design and Research have a corresponding Figma plugin, which makes them easier to adopt and use out of the box in an environment familiar to the user base you’re trying to target.

Magician: A Figma plugin that uses Gen-AI to create iconography

From the users’ perspective: there’s never going to be a shortage of creativity in the community, and the positive feedback loop created from a good plugin ecosystem will allow you to extract much more value from a product than would otherwise be possible. Thus, it is vital for Gen-AI tools to be extensible.

At this point, the vision I’d had for my article was becoming clearer. I realized that my opinion was now mostly consistent, and I had a deadline looming over me. This made me turn to every writer’s friend in times like these: good ol’ ChatGPT.

I fired up a new conversation and provided the article draft (excluding this aside) to ChatGPT. I then asked it to ‘complete the article’ using brief explanations I had added next to the final three principles. Here is a link to that conversation.

After some edits and the inclusion of examples, here is the rest of the article, co-authored by ChatGPT and me.

3. …should be iterable

One of the pitfalls of many Gen-AI tools, including those in the UX Design and Research space, is their lack of flexibility after the initial output. Creativity is an iterative process, and the ability to refine and build upon generated designs is crucial for professionals striving for perfection. AI tools should not just generate a single output and call it a day; they need to empower designers to iterate and refine their creations.

Imagine trying to generate a set of wireframes for a mobile application using something like Wireframe Designer for Figma. You really like one of the generated outputs, but it’s not perfect yet. The problem is that, after a certain point, it’s more reliable to make the changes yourself than to roll the dice with a modified prompt, as it’s very hard to get consistent output from an AI that has no contextual memory of what was asked of it earlier.

However, one of the biggest selling points of ChatGPT and most other conversational LLMs is that they can remember and recall details from earlier prompts within a conversation. This lets the user ‘tweak’ the AI’s output iteratively: something that is essential in the field of UX.

A demonstration of how ChatGPT remembers key details (via openai.com)
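To make the mechanics concrete, here is a minimal sketch of how conversational memory typically works: every request carries the full message history, so a follow-up prompt can refine earlier output without restating it. The `ChatSession` class and `fake_model` stand-in are hypothetical, not any vendor’s actual API:

```python
# Minimal sketch of conversational memory (hypothetical, not a real API).
# The key idea: each new request includes ALL earlier turns, so the model
# can interpret "make the header sticky" in the context of the first brief.

class ChatSession:
    def __init__(self, model):
        self.model = model      # callable: list of message dicts -> reply string
        self.history = []       # accumulated conversation

    def ask(self, prompt):
        self.history.append({"role": "user", "content": prompt})
        reply = self.model(self.history)   # model sees every earlier turn
        self.history.append({"role": "assistant", "content": reply})
        return reply

def fake_model(history):
    # Stand-in "model": just proves earlier context is visible on later turns.
    return f"(seen {len(history)} message(s) so far)"

session = ChatSession(fake_model)
session.ask("Design a wireframe for a recipe app")
session.ask("Now make the header sticky")   # refines, doesn't start over
```

Tools that discard this history after each generation force the user to start from scratch; tools that keep it can support genuine iteration.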

4. …should be contextualized

Every field has technical terms and phrases that refer to certain related concepts. To be usable, Gen-AI tools in this domain must understand not only general language but also the specialized jargon inherent to the field.

For example, when using the Magician plugin for Figma, the terms ‘outlined’, ‘solid’, ‘stroke’, and ‘rounded’ have different meanings in designer-speak than they would have if interpreted in their literal sense. ‘Rounded’ in this case refers to icons with rounded corners. However, Magician’s AI fails to understand this context and interprets it as ‘circular’, often encasing regular icons within circles.

A screenshot of a rounded icon and the output generated by the Magician plugin for Figma
How AI sometimes misunderstands jargon

By training task- or domain-specific AI to pick up on related jargon and terminology, we can make AI tools more effective.
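Short of retraining a model, one stopgap is to expand ambiguous design terms before they ever reach it. The glossary and `disambiguate` helper below are purely illustrative:

```python
# Hypothetical mitigation for the jargon problem: append plain-language
# clarifications for ambiguous design terms before sending the prompt.
# The glossary entries and helper are illustrative only.
DESIGN_GLOSSARY = {
    "rounded": "with rounded corners, not enclosed in a circle",
    "outlined": "drawn with strokes only, no fill",
    "solid": "drawn with filled shapes, no strokes",
}

def disambiguate(prompt: str) -> str:
    # Naive substring matching for brevity; a real tool would match
    # word boundaries to avoid hits inside words like "consolidate".
    notes = [f'"{term}" means {gloss}'
             for term, gloss in DESIGN_GLOSSARY.items()
             if term in prompt.lower()]
    if notes:
        prompt += " (" + "; ".join(notes) + ")"
    return prompt

print(disambiguate("Generate a rounded settings icon"))
```

This doesn’t make the model smarter, but it removes the ambiguity that causes failures like the ‘rounded’-versus-‘circular’ mix-up above.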

5. …should attribute and be responsibly sourced

AI models learn from vast datasets, and what better dataset than the open internet? This was the approach OpenAI took with ChatGPT, and it regularly stokes debate over what information is fair game for a model to be freely trained on.

Several entities, such as nonfiction authors and The New York Times, are currently suing OpenAI over the unauthorized and uncompensated use of their intellectual works. According to AP News, “the legal team representing The Times wrote Tuesday that what OpenAI and Microsoft are doing is ‘not fair use by any measure’ because they’re taking from the newspaper’s investment in its journalism ‘to build substitutive products without permission or payment.’ ” This, combined with the misinformation that AI-generated content can spread, has organizations and law enforcement scrambling for a solution.

The first problem can, in my opinion, be tackled by greater transparency from the AI model about which sources it cited to produce a response. Microsoft’s new Bing with Copilot currently does this by actively searching the internet for answers and giving the user links to the source articles for further reading.

The second problem is being worked on by entities such as Adobe, which has created an icon to mark AI-generated content, complete with metadata on which AI tool was used and who owns the copyright to the image. Thus, by pushing for greater transparency in how an AI generates its output and by creating protocols for identifying AI-generated content on the web, we can make AI tools more reliable and effective.
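Conceptually, such provenance metadata might look like the sketch below. This is a hypothetical illustration loosely inspired by the idea behind Adobe’s effort, not its actual format:

```python
# Hypothetical provenance record for an AI-generated asset: who/what/when,
# plus a content hash so tampering is detectable. Illustrative only; not
# the real Content Credentials schema.
import hashlib
import json
from datetime import datetime, timezone

def tag_ai_output(content: bytes, tool: str, rights_holder: str) -> dict:
    return {
        "ai_generated": True,
        "generator": tool,                     # which AI tool produced it
        "rights_holder": rights_holder,        # who owns the copyright
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),  # tamper check
    }

record = tag_ai_output(b"<fake image bytes>", "ExampleDiffusion v1", "Jane Doe")
print(json.dumps(record, indent=2))
```

A record like this, embedded in or shipped alongside the asset, would let readers and platforms verify at a glance that a piece of content came from an AI tool.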

In conclusion, the success of generative-AI-powered tools in UX Research and Design lies in their ability to complement human creativity and adapt to the unique demands of the field. By adhering to these principles of springboarding, extensibility, iterability, contextual understanding, and responsible sourcing, Gen-AI tools can play a pivotal role in shaping the future of UX Design and Research as a profession.




Written by Aditya Menon

Just another engineering student penning his thoughts one keystroke at a time.