AI-driven dark patterns in UX design
How AI is expanding the boundaries of dark patterns in UX design and what we can do about it.

In 2010, when Harry Brignull coined the term “dark patterns,” he likely never imagined how quickly technology would evolve. Fast forward to today, more than a decade after those deceptive design practices were first named, and we find ourselves at a crossroads where artificial intelligence (AI) and user experience (UX) intersect.
AI undoubtedly brings many benefits. However, we shouldn’t overlook its potential to shape user experiences through manipulative means. As AI continues to reshape the way we interact with digital products, the boundaries of dark patterns need to be redrawn.
In this article, we’ll define AI-driven dark patterns, share what these dark patterns look like in a real-world scenario, and discuss best practices to avoid designing experiences that exploit users.
What Are Dark Patterns?
Dark patterns (also known as deceptive patterns) are design tricks used in websites and apps to manipulate users into making decisions that benefit the company, but not necessarily the user. They exploit our psychology and habits to push us to take actions that we ordinarily wouldn’t take, such as:
- Signing up for subscriptions we don’t intend to keep (“free trial” with confusing cancellation options)
- Sharing more personal data than we realize (pre-checked boxes for data collection)
- Spending more money than we planned (hidden fees or confusing pricing structures)
These tricks can be subtle, like using vague language or making it difficult to find the “unsubscribe” button. In other cases, they can be more blatant, like using urgency tactics or fake limited-time offers.
What all dark patterns share is intent: they are designed to mislead. They take advantage of our desire for convenience, our tendency to trust authority figures, and our fear of missing out.
What Are AI-Driven Dark Patterns?
How would we define AI-driven dark patterns? Luiza Jarovsky offers a useful starting point. She proposes that AI dark patterns “would be AI applications or features that attempt to make people:
- believe that a particular sound, text, picture, video, or any sort of media is real/authentic when it was AI-generated (false appearance)
- believe that a human is interacting with them when it’s an AI-based system (impersonation)”
In this definition, Luiza identifies two kinds of AI dark patterns: false appearance and impersonation. Let’s look at scenarios where each of these patterns plays out.
False Appearance
AI is getting so good that it’s difficult for people to differentiate between AI-generated content and content created by a human. For written text, various detection tools can help flag AI-generated content, but equivalent tools for images and audio are far less mature, and this is where the snare lies.
Scenario: An e-commerce company uses AI-generated images of its products on its website or app instead of real product photos. Maybe it wants to save the time and money it would take to photograph each product. Maybe AI produces a more flattering rendering of the product, one that could sway a user toward making a purchase. Whatever the case, this is a dark pattern: users might not realize they’re looking at AI renderings rather than real products, so they could end up buying something different from what they expected.
Impersonation
Impersonation is when an AI system poses as a human while interacting with a user. The conversational ability of AI is unquestionably remarkable: it’s very good at producing natural responses that make it sound human. Maybe even too good. And this is where it gets tricky for users.
Scenario: A user completes an inquiry form on a company’s website. Later, they receive an email from what appears to be a member of the company’s customer support team, requesting further details to assist with their issue. Unbeknownst to the user, this “person” is actually an AI chatbot employed by the company to provide virtual assistance.
This is deceptive because the user is unaware that they are interacting with an AI system and might share personal information that they otherwise would not have shared with an AI.
How to Avoid These Dark Patterns
The European Union’s AI Act proposal lays out essential principles for reducing dark patterns in AI-driven UX design. Following these principles helps designers and developers build ethical, user-centered AI implementations. Based on it, here are some best practices for weaving AI into UX ethically:
1. Always disclose when users are interacting with AI systems, whether through chatbots, content generation, or personalized recommendations.
“Natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.” — The European Union’s AI Act.
2. Give users control over their interactions with AI systems. Allow them to opt in to or out of AI-driven features, and provide settings to customize their preferences.
3. Ensure that AI-generated content is clearly labeled as such to distinguish it from authentic content. Be transparent about the origin and nature of AI-generated content to prevent misrepresentation and confusion. (A short code sketch after this list illustrates practices 1 and 3.)
“Users who use a system to generate or manipulate image, audio, or video content that appreciably resembles existing persons, places, or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labeling the artificial intelligence output accordingly and disclosing its artificial origin.” — The European Union’s AI Act.
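To make the disclosure and labeling practices concrete, here is a minimal sketch of how a chat widget might surface both. Everything in it is hypothetical: the `ChatMessage` shape and the function names are illustrative, not part of any real library or of the AI Act itself.

```typescript
// Hypothetical chat widget sketch: practices 1 and 3 from the list above.

interface ChatMessage {
  text: string;
  sender: "human" | "ai"; // the true origin travels with every message
}

// Practice 1: disclose AI participation before the conversation starts.
function renderDisclosureBanner(container: HTMLElement): void {
  const banner = document.createElement("p");
  banner.className = "ai-disclosure";
  banner.textContent = "You are chatting with an AI assistant, not a human agent.";
  container.prepend(banner);
}

// Practice 3: label each AI-generated message so it cannot pass as human.
function renderMessage(container: HTMLElement, message: ChatMessage): void {
  const bubble = document.createElement("div");
  bubble.className = `message message--${message.sender}`;
  bubble.textContent =
    message.sender === "ai" ? `[AI] ${message.text}` : message.text;
  container.appendChild(bubble);
}
```

The design choice worth noting is that the message’s origin is part of its data model rather than a styling afterthought, so the interface can never “forget” to disclose it.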
Conclusion
The future of AI in UX design is a double-edged sword: it holds immense potential to enhance user experiences, but it also carries the risk of manipulation. As UX designers, the responsibility to merge AI and UX ethically, avoid deceptive patterns, and prioritize user-centered design rests firmly on our shoulders. Let’s commit to designing a future where AI serves, not deceives.