Artificial Intelligence
October 18, 2024

AI in product design: what to expect beyond the hype

Gábor Szabó

About two years ago, when the AI boom started, like many of us, I was both excited and a bit troubled. I felt that something revolutionary was happening, but I also worried about how it would affect my career as a designer. Don Norman predicted that AI would create enormous opportunities for us, and I believed him. But even though I tried to envision it, I couldn’t quite see what it would actually look like. Now, having worked on an AI project at UX studio, I have some firsthand experience and it’s not what I expected. Here’s what I’ve learned:

1. Thinking of AI user experience as just an input box with answers is an oversimplification
2. AI is more than just large language models

Through this design work, I’ve also encountered some new challenges:

- Building trust with AI is crucial but requires careful attention to transparency and reliability.
- Defining the human-AI boundary is essential for balancing automation with human oversight.
- Incorporating a human in the loop is necessary for refining AI performance through user feedback and interaction.

Let’s dive into these one by one. 

Abstract illustration of AI and design, showing a brain, a UI element, and arrows to indicate moving forward

Thinking of AI user experience as just an input box with answers is an oversimplification

When people think about AI and design, they often picture something like ChatGPT or Copilot: a simple input box where users type their questions and receive responses. This is interesting and has its own challenges, such as how to move from purely prompt-based output generation to a more controlled interface, as seen in experiments by Midjourney and Adobe Firefly. But let’s be honest: the interface alone won’t provide enough challenge for product designers in the coming years.

AI projects go beyond interface design: they often take us into uncharted territory where best practices haven’t been established yet. With few guidelines to rely on, we have to experiment, adapt, and create new approaches from scratch, without a clear roadmap. We’re also getting used to collaborating with new roles, such as data scientists and machine learning engineers, whose expertise is crucial for making AI-driven products successful.

AI is more than just large language models

A common misconception is that AI is synonymous with tools like ChatGPT: AI that interacts directly with users. In most cases, however, AI operates silently in the background, driving decisions and processes that are invisible to the end user. This kind of behind-the-scenes AI is much older than LLMs: Spotify has been using it for years to create personalized playlists, and Google uses it to personalize your search results.

Spotify's AI-controlled recommendation feed

But AI’s role is even more critical in industries that don’t dominate your mobile screen. In fields like banking or healthcare, AI systems might be analyzing data, predicting outcomes, or making recommendations without the user ever being aware of it. Designing for these systems requires a different approach, as the focus shifts from creating an engaging interface to ensuring the AI’s decisions are trustworthy and understandable.

Building Trust with AI

One of the most significant challenges in AI-related UX design is building trust. At UX studio, we recognized this early on and even conducted our own research on it. Trust is essential because users need to feel confident in AI’s decisions and capabilities, especially when it’s handling tasks that impact their lives or business. Without trust, even the most advanced AI solutions can struggle to gain acceptance and deliver real value.

What I’m referring to, however, isn’t just about whether an AI like ChatGPT provides accurate answers. It’s about the extent to which business leaders, users, and stakeholders are willing to trust AI to handle critical aspects of their operations or reputations. The difference is crucial because we’re dealing with people’s lives or significant amounts of money.

When AI is used in high-stakes environments, the consequences of errors or misinterpretations can be severe. Our job as designers is to create interfaces and experiences that foster confidence in AI, ensuring that users feel secure in allowing AI to take on these tasks. Building trust with AI is a challenging task, and it’s also new territory—we don’t have established best practices yet. Here are some areas I found crucial:

Striving for transparency 

We can try to provide clear, understandable explanations of how the AI works, especially in decision-making processes. This can be done through dashboards that display data and trends, allowing users to see how decisions are made and what their consequences might be.

A familiar and simpler example of AI transparency can be found in Netflix’s recommendation system. 

Netflix's recommendation fields, with categories like "Top picks for you," "trending now," "Because you watched Narcos" and "New releases"

Netflix uses AI to suggest shows and movies based on a user’s viewing history and preferences. To enhance transparency, Netflix labels some recommendation rows with the reason behind them, such as “Because you watched Narcos.” This could go even further, with messages like, “Because you watched X, Y, and Z, we think you’ll enjoy this movie.” This kind of insight helps users understand the reasoning behind the AI’s suggestions, making the system feel more personalized and trustworthy.
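To make the idea concrete, here is a minimal sketch of how a recommendation could carry its own evidence so the interface can explain it. The data model and field names are hypothetical, not Netflix’s actual system.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """A recommended title bundled with the evidence that produced it."""
    title: str
    score: float                                    # model confidence for this user
    because_you_watched: list[str] = field(default_factory=list)

def explanation(rec: Recommendation) -> str:
    """Turn the evidence into the copy a user actually sees."""
    if not rec.because_you_watched:
        return f"Trending now: {rec.title}"
    watched = ", ".join(rec.because_you_watched)
    return f"Because you watched {watched}, we think you'll enjoy {rec.title}."

# Hypothetical recommender output, annotated with its evidence
rec = Recommendation(title="Ozark", score=0.87,
                     because_you_watched=["Narcos", "Breaking Bad"])
print(explanation(rec))
# Because you watched Narcos, Breaking Bad, we think you'll enjoy Ozark.
```

The design point is that the explanation travels with the recommendation, so the UI never has to reverse-engineer why something was suggested.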

Supporting interpretability

Interpretability refers to the ability to understand and explain how an AI system arrives at its decisions. We can implement features that allow users to see why the AI made a particular decision, such as highlighting the factors that influenced the output.


Google has been incorporating Explainable AI (XAI) techniques into its tools to help users understand why an AI model made a particular decision. For example, in Google Cloud, if an AI model predicts that a loan application should be rejected, the system can highlight the specific factors that contributed to this decision, such as low credit score or insufficient income. This helps users see the reasoning behind AI decisions, making the technology more trustworthy.
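The underlying idea can be illustrated with a hand-rolled linear score whose per-feature contributions are surfaced to the user. This is a simplified, hypothetical sketch, not Google Cloud’s Explainable AI API; the feature names and weights are made up.

```python
# Hypothetical feature attribution for a loan decision: each feature's
# contribution to the score can be shown to the user as a reason.

# Standardized applicant features (0 = typical applicant, negative = below typical)
features = {"credit_score": -1.8, "income": -1.2, "debt_ratio": +1.5}
# Model weights: a positive weight means the feature supports approval
weights = {"credit_score": 0.9, "income": 0.6, "debt_ratio": -0.7}

contributions = {name: weights[name] * features[name] for name in features}
decision = "approve" if sum(contributions.values()) > 0 else "reject"

# Sort so the UI can highlight the strongest pushes toward rejection first
reasons = sorted(contributions.items(), key=lambda kv: kv[1])
print(decision)                      # reject
for name, value in reasons:
    print(f"{name}: {value:+.2f}")   # credit_score: -1.62, debt_ratio: -1.05, ...
```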

Bias mitigation 

We can show users how we’re addressing biases in the AI system by using diverse datasets and regularly auditing the AI for fairness.

IBM Watson Studio provides a powerful example of how bias mitigation can be made transparent and actionable. In applications like mortgage approval models, it monitors key attributes, such as gender, to ensure fairness in AI-driven decisions. 

A slide from our study showing that the monitored female group received favorable outcomes 45.5% of the time.
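Behind a dashboard like that usually sits a simple fairness metric. Here is a back-of-the-envelope sketch of one such check, a disparate-impact ratio between a monitored group and a reference group; the decision counts are illustrative and only echo the 45.5% figure above.

```python
# Sketch of a fairness check a bias-monitoring dashboard might run on a
# mortgage-approval model. The decision counts below are illustrative.
def favorable_rate(outcomes: list[bool]) -> float:
    """Share of decisions that were favorable (e.g. approved)."""
    return sum(outcomes) / len(outcomes)

# Hypothetical decision logs for the monitored and reference groups
monitored = [True] * 455 + [False] * 545   # 45.5% favorable
reference = [True] * 600 + [False] * 400   # 60.0% favorable

disparate_impact = favorable_rate(monitored) / favorable_rate(reference)
print(f"Disparate impact ratio: {disparate_impact:.2f}")   # 0.76

# A common rule of thumb flags ratios below 0.8 for review
if disparate_impact < 0.8:
    print("Flag for review: the monitored group may be disadvantaged")
```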

Related to trust, it’s also important to define the boundaries of AI operations.

Defining the Human-AI Boundary

One of the most important decisions in AI projects is determining when human intervention is required. This issue has even caught the attention of lawmakers, with California proposing the SB-1047 bill, which would mandate a “kill switch” for AI systems. While some may view this as alarmist, it highlights a legitimate need to clearly define the boundaries of AI and identify the moments when humans must take control.

A great real-world example of this boundary can be seen in autonomous vehicles, like those from Tesla. These cars can manage routine driving tasks—such as staying in lanes, adjusting speed, and handling traffic—without human input. However, when the system encounters an unfamiliar situation, like a construction zone or unusual weather conditions, it prompts the driver to take over. This handoff is essential for ensuring safety in unpredictable circumstances.

The same principle applies in business settings. Deciding when AI should hand off tasks to humans isn’t always straightforward and varies by context. Take banking, for example: while an AI system may handle routine transactions autonomously, certain risk factors—like suspicious account activity—might require a human to intervene. 

The challenge for designers is to create a smooth transition between AI and human oversight, ensuring that the system remains efficient, reliable, and user-friendly. Whether it’s a simple “kill switch” or a more complex intervention process, giving people control when needed is key to building trust in AI.
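In code, such a handoff rule can be as simple as a few thresholds that route a case to a person instead of letting the model act. The sketch below uses a banking example; the thresholds and field names are assumptions made for illustration, not any real bank’s policy.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    risk_score: float        # 0.0-1.0 from a fraud model
    model_confidence: float  # how sure the model is about its own call

# Illustrative thresholds: in practice these come from risk assessment,
# regulation, and the team's tolerance for false positives.
AMOUNT_LIMIT = 10_000
RISK_LIMIT = 0.7
CONFIDENCE_FLOOR = 0.6

def route(tx: Transaction) -> str:
    """Decide whether the AI may act alone or a human must take over."""
    if tx.risk_score >= RISK_LIMIT or tx.amount >= AMOUNT_LIMIT:
        return "escalate_to_human"      # high stakes: a person decides
    if tx.model_confidence < CONFIDENCE_FLOOR:
        return "escalate_to_human"      # model unsure: a person decides
    return "process_automatically"

print(route(Transaction(amount=120, risk_score=0.1, model_confidence=0.95)))
# process_automatically
print(route(Transaction(amount=25_000, risk_score=0.2, model_confidence=0.9)))
# escalate_to_human
```

For designers, the interesting part is not the thresholds themselves but what the user sees when the handoff happens: why the case was escalated and what they are expected to do next.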

Creating Feedback Loops Between Humans and AI

Finally, one of the most important aspects of working on AI projects is establishing feedback mechanisms between users and the AI system. AI learns and improves over time, but it requires quality input from human interactions to do so. Designing these feedback loops is crucial for the continuous improvement of AI systems. For example, in a healthcare setting, an AI might assist doctors by suggesting potential diagnoses. However, the final decision remains with the doctor, who can then provide feedback on the AI’s suggestions, helping the system learn and refine its future recommendations. How can this be done?

• Identifying critical decision areas: Determine which areas of the AI process are critical enough to require human oversight. These might include decisions that have ethical implications, high risk, or where AI confidence is low.

• Risk assessment: Conduct a risk assessment to identify where the consequences of incorrect AI decisions are significant, thus necessitating human intervention.

• Monitoring AI decisions: Design interfaces that allow users to view the AI’s recommendations and provide input or corrections, and make that interaction as easy as possible.

A screenshot of GA4’s suggestions based on traffic geolocation

Google Analytics provides a great example of the “human in the loop” concept with its AI-generated insights feature. It automatically surfaces insights about user behavior, such as the geographic distribution of users and trends in user activity over time. These insights are suggested by the system based on patterns detected in the data.
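Pulling these pieces together, here is a small sketch of what the feedback loop itself might look like: the AI suggests, a person makes the final call, and any correction is stored as future training signal. The healthcare framing and field names are illustrative assumptions, not a real clinical system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    case_id: str
    ai_diagnosis: str
    confidence: float

@dataclass
class Feedback:
    case_id: str
    accepted: bool
    correction: Optional[str]   # what the doctor chose instead, if anything

training_queue: list[Feedback] = []

def record_review(suggestion: Suggestion, doctor_diagnosis: str) -> Feedback:
    """The doctor makes the final call; the delta becomes training signal."""
    accepted = doctor_diagnosis == suggestion.ai_diagnosis
    fb = Feedback(
        case_id=suggestion.case_id,
        accepted=accepted,
        correction=None if accepted else doctor_diagnosis,
    )
    training_queue.append(fb)   # later fed into model evaluation or retraining
    return fb

s = Suggestion(case_id="A-102", ai_diagnosis="pneumonia", confidence=0.74)
print(record_review(s, doctor_diagnosis="bronchitis"))
# Feedback(case_id='A-102', accepted=False, correction='bronchitis')
```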

Conclusion

Working on an AI project was completely new territory for me. I had to learn new terminology, work with new roles like data scientists and machine learning engineers, and face challenges that were totally unfamiliar. But it’s a journey worth taking—a journey that pushes us to think differently, collaborate more deeply, and constantly adapt as we shape the evolving relationship between AI and UX design.

Throughout this experience, I’ve learned that building trust is not just a goal, but a necessity. It requires designing transparent AI systems where users can understand and rely on the technology’s decisions. Defining the human-AI boundary is equally important to ensure the right balance between automation and human oversight, giving users a sense of control. Finally, establishing effective feedback loops allows AI to continuously learn and improve based on user input, making the technology smarter and more aligned with human needs.

If you’re looking for a partner to build better AI, I’d recommend checking out our UX/UI design and research service for AI.