As a design agency, we must keep usability heuristics aligned with emerging technologies like AI. This way, we can apply professional, up-to-date evaluation practices for clients developing AI products and experiences. Iterating on these heuristics, the broad rules of thumb evaluators use to judge an interface, ensures we keep pace with industry best practices.
This year marks the 30th anniversary of Jakob Nielsen’s 10 usability heuristics. Heuristic evaluation, according to Nielsen, is a method in which a small set of evaluators examines a digital interface and judges it against a set of usability principles. By revisiting these usability heuristics, we can maintain their usefulness as guides for creating human-centered AI interfaces. In this article, we will take a look at Nielsen’s 10 usability heuristics and give you a short review of how the principles hold up on 5 of the most used AI platforms: ChatGPT, Microsoft Copilot, Runway, DALL-E 2, and Gemini.
The 10 Usability Heuristics for Human-Computer Interaction provide a useful framework for evaluating AI systems.
When users interact with these AI systems, they should be kept informed of what the system is doing. Providing this real-time visibility into the status keeps users aware of progress, sets expectations about response times, and helps them perceive the causality between their inputs and the AI’s outputs.
Without proper status visibility, users may become confused about why the system is taking time to produce a response. They may retry their input, thinking it was not received correctly.
Adding typing indicators with explanations improves visibility. Overall, these systems usually aim to provide clear, transparent feedback at all stages to create a predictable interaction, build user trust over time, and reinforce that the AI is an understandable system and not a black box. This feedback loop is a large part of why these tools feel usable.
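As an illustration, the status feedback described above can be modeled as a small state machine. This is a minimal sketch with hypothetical states and messages, not the implementation of any specific platform:

```typescript
// Hypothetical status model for an AI chat interface: each phase of a
// request maps to a user-visible message, so the user always knows
// what the system is currently doing.
type Status = "idle" | "sending" | "thinking" | "streaming" | "done" | "error";

const statusMessage: Record<Status, string> = {
  idle: "",
  sending: "Sending your prompt…",
  thinking: "The assistant is thinking…",
  streaming: "Writing a response…",
  done: "",
  error: "Something went wrong. Please try again.",
};

// Returns the indicator text to render for the current status.
function renderStatus(status: Status): string {
  return statusMessage[status];
}

console.log(renderStatus("thinking")); // "The assistant is thinking…"
```

Even a mapping this simple guarantees that every phase of the interaction has an explicit, user-facing explanation, which is the core of the visibility heuristic.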
AI systems should use language, concepts, and examples that correspond to the real world and match the user’s context, avoiding internal jargon. It’s important to mimic real-world logic and patterns: systems are more intuitive and usable when they match the user’s mental models.
These systems are still evolving day by day, and their responses are not fully reliable yet: the information they provide can contain mistakes or be entirely false. Overall, AI systems should produce outputs that relate to the user’s prompt in an intuitive, natural way. The closer the match between the prompt and the response, the easier it is for the user to understand and continue interacting. Matching the real world builds trust and a sense that the AI understands the concepts involved; a mismatch confuses users and undermines the experience.
It’s essential that these platforms provide users with clear ways to exit or undo actions if they make a mistake or change their mind. Making it easy to reverse a step enhances user control and freedom, building confidence in using the full capabilities of the system.
People feel more comfortable exploring these AI systems knowing they can easily back out of unwanted outcomes. Without clear escape hatches, users can feel trapped once they initiate a process. They may avoid certain interactions for fear of being stuck, which undermines adoption. Most systems currently lack the undo or edit features that would give users more control, but they do offer options to start a new chat or provide suggestions for further actions. Some have voice control to add prompts without typing.
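The missing undo support mentioned above can be sketched as a simple history stack. This is a hypothetical illustration of the pattern, not how any of the reviewed platforms implement it:

```typescript
// Hypothetical undo support for prompt editing: a history stack lets
// the user reverse a step and restores the previous state, providing
// the "escape hatch" the heuristic calls for.
class PromptHistory {
  private states: string[] = [];

  // Record a new state after each user edit.
  push(state: string): void {
    this.states.push(state);
  }

  // Undo discards the current state and returns the previous one,
  // or undefined if there is nothing left to restore.
  undo(): string | undefined {
    this.states.pop();
    return this.states[this.states.length - 1];
  }

  canUndo(): boolean {
    return this.states.length > 1;
  }
}

const history = new PromptHistory();
history.push("first draft");
history.push("second draft");
console.log(history.undo()); // "first draft"
```

Exposing a control like this in the interface turns an irreversible interaction into an explorable one.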
Adopting common patterns and standards could improve consistency. Also, these platforms must ensure that terminologies, actions, and outcomes are consistent with user expectations across similar tools. Using familiar interaction patterns and outputs reduces the user’s cognitive load. Leveraging familiar mental models makes the AI feel like a natural extension rather than a foreign outlier. Aligning with expectations integrates the experience seamlessly.
Adhering to consistency and standards streamlines adoption and creates intuitive, easy-to-use experiences. Overall, leveraging established conventions improves usability and accessibility. It demonstrates awareness of how these AI systems fit into the broader technology landscape users are familiar with.
The main aim of this heuristic is that the system should minimize errors both by preventing error-prone conditions upfront and by detecting potential mistakes before users commit. Error prevention in AI involves not only avoiding user mistakes but also anticipating and mitigating errors in AI-generated content. These systems should incorporate guidance to help users frame requests effectively and offer real-time adjustments based on potential misunderstandings or inaccuracies in AI outputs. Another characteristic of AI tools is that it’s hard to predict all of the potential errors, so extensive testing is required to discover edge cases.
Where possible, the AI systems should constrain inputs and steer users away from known bad outcomes. When errors can’t be eliminated entirely, the system should give clear confirmation messages to prevent mistakes. Preventing errors not only avoids frustration, but builds user trust that the AI will act as an assistant, and not take harmful or unpredictable actions. This provides a layer of safety and control – however, users should always be careful and consider to what extent they trust the AI, as it is not yet capable of realizing its own mistakes.
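A minimal sketch of the "constrain inputs upfront" idea is pre-submission validation. The checks and the length limit below are assumptions chosen for illustration, not rules from any real platform:

```typescript
// Hypothetical pre-submission check: flag error-prone prompts
// (empty or over-long input) before the request is ever sent,
// instead of letting the user discover the problem afterwards.
interface ValidationResult {
  ok: boolean;
  warnings: string[];
}

const MAX_PROMPT_LENGTH = 4000; // assumed limit for illustration

function validatePrompt(prompt: string): ValidationResult {
  const warnings: string[] = [];
  if (prompt.trim().length === 0) {
    warnings.push("The prompt is empty.");
  }
  if (prompt.length > MAX_PROMPT_LENGTH) {
    warnings.push("The prompt exceeds the length limit and may be truncated.");
  }
  return { ok: warnings.length === 0, warnings };
}

console.log(validatePrompt("").warnings); // ["The prompt is empty."]
```

Surfacing these warnings inline, before submission, is cheaper for the user than recovering from a failed or truncated request.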
Overall, error prevention enhances the experience by guiding users, optimizing for positive outcomes, and minimizing unnecessary mistakes and backtracking. It demonstrates the thoughtfulness and care put into the AI design.
Applying this heuristic means that interfaces should minimize the need for users to memorize information while operating them. Relevant information should remain visible in the interface or conversation to guide users toward effective prompts and inputs. AI interfaces have to minimize the user’s memory load by making options, commands, and potential actions visible or easily retrievable.
Reducing dependence on memory lessens the user’s cognitive load. They can rely on recognition instead of having to recall details from previous steps. Minimizing recall makes AI systems easier to use for a wider range of users. Interfaces that require heavy memorization create accessibility barriers.
AI tools should cater to both inexperienced and expert users, offering shortcuts or advanced features that can speed up interactions for frequent, expert users without overwhelming novices.
Enabling efficiency customizations caters to both beginners and experts. Novices use the basic interfaces, while veterans can activate advanced options for accelerated workflows. Without shortcuts and tailoring, expert users may find the AI systems limiting. Providing flexibility allows a broader set of users to integrate the AI into their own processes. Overall, allowing power users to optimize interactions to their needs demonstrates thoughtful design. However, these advanced options should stay hidden until consciously activated to avoid confusing new users.
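The "hidden until consciously activated" idea can be sketched as simple feature gating. The commands listed here are hypothetical examples, not the feature set of any particular tool:

```typescript
// Hypothetical accelerator model: advanced options stay hidden until
// the user enables expert mode, so novices see a simple interface
// while power users get extra controls.
interface Command {
  name: string;
  advanced: boolean;
}

const commands: Command[] = [
  { name: "New chat", advanced: false },
  { name: "Regenerate response", advanced: false },
  { name: "Set system prompt", advanced: true },
  { name: "Adjust temperature", advanced: true },
];

// Returns only the command names the current user should see.
function visibleCommands(expertMode: boolean): string[] {
  return commands
    .filter((c) => expertMode || !c.advanced)
    .map((c) => c.name);
}

console.log(visibleCommands(false)); // ["New chat", "Regenerate response"]
```

The same command list serves both audiences; only the filter changes, which keeps the two experiences consistent.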
These interfaces are generally clean and minimalist and they adhere to this principle, but continuous evaluation is necessary to ensure that new features or information do not compromise design clarity.
Keeping the interfaces simplified and visually minimalist focuses user attention on key content and functionality. Removing irrelevant options reduces cognitive load. Overly dense interfaces overwhelm users, undercut usability, and make the AI feel opaque. An aesthetic, minimalist approach highlights what matters most. Well-designed interfaces should have the visual clarity and power of a sharp photograph: drawing the eye to the subject while fading unnecessary details into the background.
AI-specific errors, such as misunderstanding a prompt or generating inappropriate content, require clear, understandable feedback and straightforward correction paths. When errors inevitably occur, these AI systems should help users understand the problem and how to get back on track. Plain language error messages should explain what happened and why.
Visual treatments like color, icons, and animations should call attention to errors so users do not overlook them. Good error handling guides users to recognize, diagnose, and overcome errors. Without this support, people can feel confused and frustrated when issues arise. Putting care into error handling establishes trust and confidence that the AI can gracefully handle unpredictable situations. This way, users gain the resilience to productively move forward.
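One way to realize the plain-language guidance above is to pair every internal error code with a user-facing message and a recovery action. The codes and wording below are invented for illustration:

```typescript
// Hypothetical mapping from internal error codes to plain-language
// explanations paired with a concrete recovery suggestion, so the
// user learns both what happened and how to get back on track.
type ErrorCode = "rate_limited" | "content_filtered" | "timeout";

interface ErrorHelp {
  message: string;  // what happened, in the user's language
  recovery: string; // how to get back on track
}

const errorHelp: Record<ErrorCode, ErrorHelp> = {
  rate_limited: {
    message: "You have sent too many requests in a short time.",
    recovery: "Wait a moment, then resend your prompt.",
  },
  content_filtered: {
    message: "The request was blocked by the content policy.",
    recovery: "Rephrase the prompt and try again.",
  },
  timeout: {
    message: "The response took too long to generate.",
    recovery: "Retry, or simplify the prompt.",
  },
};

function describeError(code: ErrorCode): string {
  const help = errorHelp[code];
  return `${help.message} ${help.recovery}`;
}
```

Because the table is exhaustive over the error type, the compiler guarantees no error code ships without a message and a recovery path.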
While the ideal experience is fully intuitive without assistance, helpful documentation can support users and improve adoption. These AI systems should provide easy access to documentation explaining core capabilities, limitations, and best practices. When these assistants offer help, it’s often scattered, incomplete, or not specific to tasks. More context-sensitive documentation tailored to specific use cases and tutorials integrated into the workflow would be a game-changer.
Content should be written with the user’s goals and terminology in mind, not developer jargon. Instructions should break down tasks into clear, concise steps. Search should make it easy to find help for common use cases. Tutorials and examples can further build user competence. Thorough documentation increases safe, effective use of the AI. However, over-reliance on help indicates the interfaces could be more intuitive. Strive for self-evident interactions. Well-designed help emphasizes learning over troubleshooting. It demonstrates care for enabling long-term user success, not just resolving immediate confusion.
As we have seen, the 10 classic usability heuristics continue to provide a strong foundation for evaluating and enhancing the user experience of AI systems. While the core principles hold up well, some expansion and refining of the heuristics is likely needed. The keys will be accommodating AI’s emergent capabilities while centering human needs and ethics at every turn. With the rapid advancement of AI systems, some aspects of the 10 Usability Heuristics may need re-examining or amending to continue serving as effective evaluation guidelines. Here are a few thoughts on what may need updating:
Although Nielsen’s heuristics are universal and provide a strong foundation, we’ve concluded that in the world of AI, it might be valuable to consider new heuristics specifically tailored to AI user experiences.
By expanding our evaluation frameworks, we can strive to create AI experiences that are not just usable, but socially responsible. While the basics endure, adapting heuristics to new technologies is key to upholding human values amidst AI’s rise. Nielsen’s fundamentals get us started on the right foot, but may only partially cover AI’s expanding terrain. With care and foresight, we can walk further down the path of ethical, humanistic AI UX design. Here are some examples of what can be new elements of the heuristics to consider in the future.
As AI capabilities grow, evaluation guidelines must evolve to maintain human-centric design, trust, and ethical alignment. Refining heuristics like Nielsen’s for the AI context will enable continued assessment of usability and progress toward truly human-friendly AI systems.
This article explored how to update classic usability heuristics to responsibly guide AI’s rapid growth. By refining enduring principles and proposing new, ethically grounded measures, we can uphold human values as AI capabilities advance.
We have also tested the best AI tools for Design and Research. Check our website for the latest articles.
UX studio works with rising startups and established tech giants worldwide.
Should you want to improve the design and performance of your digital product, message us to book a consultation. We will walk you through our design processes and suggest the next steps!
Our experts would be happy to assist with UX strategy, product and user research, or UX/UI design. Check out our full list of services here.