How can we build trust in intelligent automation?
Trust is a relatively new design consideration in interactive intelligent systems, and it is especially important in building Artificial Intelligence (AI) and Machine Learning (ML) technologies. When it comes to the adoption of and reliance on these technologies, trust is essential to the relationship between the user and the system: even a single breach of trust can significantly damage the user’s perception of that technology.
With AI, we can train computers to perform specific tasks, augmenting human capabilities and performance. Despite the benefits this can bring, AI poses considerable risks and challenges to society, raising concerns about whether these systems are worthy of our trust. The United Nations, for instance, has noted an increase in cases where AI was found to be biased, discriminatory, manipulative, unlawful, or in violation of privacy.
At Emergn, we understand that trust is an important element of the design and development lifecycle. That’s why we’ve put together a set of perspectives to evaluate and encourage trust in intelligent systems to bring the concept to the forefront of the design process.
Why is trust important in AI?
By learning from human capabilities, AI has opened up the possibility of creating intelligent systems that can reason and behave similarly to human brains and substitute for human reasoning in certain tasks. Autonomous intelligent agents (agents designed to function without human intervention) can perform speech recognition, decision-making, visual perception, and translation between languages.
But these state-of-the-art AI capabilities often lack transparency and interpretability. Users want to understand the rationale behind a decision; if the technology can’t provide that, its applications will be limited in industries that rely heavily on trust, like healthcare and finance. It’s hard to trust the decisions of a system that one cannot observe and understand.
Defining reliable, safe and trustworthy systems
In the coming years, a huge effort will be underway to scale AI’s potential to augment human capabilities and to build successful partnerships between humans and AI. This synergy will only be possible if designers and engineers build trustworthy solutions that operate transparently and in people’s best interests.
Some businesses are failing to accomplish this because, although they have the necessary technological resources, they lack a holistic understanding of the end-to-end user experience. A system may have the best algorithm performance and user interface yet still fail to meet its user and business goals because there is no trusting relationship between the user and the tool.
With AI services, a usable experience is not necessarily a transparent and trustworthy one. It is difficult to manage expectations in any service, and particularly so with autonomous intelligent systems, where the user may not understand what the system can and cannot do.
Therefore, designers and engineers need to emphasize inclusivity and ethics to mitigate the consequences of bias and recalibrate trust. As Google’s PAIR (People + AI Research) guidance suggests, this can be done by explaining predictions, recommendations, and other AI output to users. Designers and engineers must enable users to understand when to trust a prediction and when to apply their own judgment. If they fail to design this balance, intelligent systems will simply add another layer of frustration to the user experience. Designers need to ensure users can understand the potentially unexpected output of the AI.
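To make “explaining predictions” concrete, here is a minimal sketch of one way to surface the reasoning behind a suggestion. It assumes a hypothetical linear model exposing a scikit-learn-style `coef_` attribute and named input features; the function and variable names are illustrative, not a prescribed implementation.

```python
# Minimal sketch: pair a prediction with the features that contributed
# most to it, so the user can judge whether the reasoning looks sound.
# Assumes a hypothetical linear model with a scikit-learn-style coef_.

def explain_prediction(model, feature_names, feature_values, top_n=3):
    """Return a plain-language summary of the top feature contributions."""
    contributions = [
        (name, weight * value)
        for name, weight, value in zip(feature_names, model.coef_[0], feature_values)
    ]
    # Largest absolute contribution first, whether it pushed the
    # prediction up or down.
    contributions.sort(key=lambda item: abs(item[1]), reverse=True)
    top = ", ".join(f"{name} ({score:+.2f})" for name, score in contributions[:top_n])
    return f"Main factors behind this suggestion: {top}"
```

Even a rough summary like this gives the user something to check the suggestion against, rather than a bare verdict.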
Establishing the right level of trust is an ongoing process. AI can change and adapt over time, and so will the user’s relationship with, and trust in, the product.
How can we encourage trust in intelligent automation?
Users often perceive AI outcomes as coming from a black box, which is why transparency and control improve users’ understanding of the actions the AI takes. Presenting confidence levels, for example, can be a strategy for informing users’ decisions and calibrating their trust. It’s important to build proper AI feedback loops that help the user understand why the system is making certain decisions on their behalf. Opportunities for user feedback include presenting “error” messages as well as “correct” system outputs, both of which help create a dialogue with users.
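Here is a minimal sketch of presenting a confidence level alongside a prediction, assuming a hypothetical classifier with a scikit-learn-style `predict_proba()` method; the threshold and names are illustrative assumptions, not a definitive implementation.

```python
CONFIDENCE_THRESHOLD = 0.80  # illustrative cut-off for deferring to the user

def present_prediction(model, features):
    """Pair the prediction with its confidence so the user can calibrate
    trust; below the threshold, surface the uncertainty and invite the
    user to apply their own judgment."""
    probabilities = model.predict_proba([features])[0]
    best = probabilities.argmax()
    confidence = probabilities[best]
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"Suggested: class {best} (confidence {confidence:.0%})"
    # Low confidence: show the uncertainty instead of hiding it.
    return (f"Unsure: best guess is class {best} at {confidence:.0%} confidence. "
            "Please review and decide.")
```

Handling the low-confidence path explicitly is what turns the output from a verdict into a dialogue.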
Designers’ role in AI services is to clearly communicate three things:
- How user inputs are related to the goals being pursued by the automated process
- How control is distributed between the automated system and the user
- How transparency and explainability ultimately lead users to trust your AI system
The value of an AI model’s accuracy quickly degrades if the user does not trust the solution: although the model might be 95% accurate, if it is failing to meet users’ goals, that accuracy is almost irrelevant without explainability and trust.
At the same time, if your algorithm consistently makes mistakes or poor predictions, it will erode any trust the user has built in your product. The good news is that this erosion of trust can be mitigated by involving the end user throughout the decision-making process; model accuracy will also improve, further maintaining user trust.
Explainable user interaction in the algorithm’s decision-making process is key to building user trust in your intelligent automated solution. Designers can support trust in AI systems in three ways:
- Increasing the available information: Why is the AI making those decisions? System feedback that explains the AI’s decision-making process gives users a clear understanding of whether they should accept a prediction.
- Taking human feedback loops into account: The AI system must be able to use user feedback to evolve and to improve its performance and accuracy based on users’ implicit and explicit needs (see the sketch after this list).
- Balancing control and automation: Like any relationship, trust is built on feedback and transparency. Through user control, designers can empower the user while maintaining the usefulness of the product.
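As a rough illustration of the second point, here is a minimal sketch of a human feedback loop, assuming a hypothetical retrainable model with a scikit-learn-style `fit()` method; the in-memory log and all names are illustrative simplifications.

```python
# Minimal sketch of a human feedback loop: record whether users accept
# or correct predictions, then fold corrections back into training data.

feedback_log = []  # entries: (features, predicted_label, user_correction or None)

def record_feedback(features, predicted, correction=None):
    """Capture explicit feedback: acceptance (correction=None) or the
    label the user says is actually right."""
    feedback_log.append((features, predicted, correction))

def retrain_with_feedback(model, base_X, base_y):
    """Refit the model on the original data plus user corrections, so
    the system improves from what users explicitly tell it."""
    X, y = list(base_X), list(base_y)
    for features, predicted, correction in feedback_log:
        if correction is not None and correction != predicted:
            X.append(features)
            y.append(correction)
    model.fit(X, y)
    return model
```

In practice, a production loop would also capture implicit signals (for example, how often users override the system) and gate retraining behind review, but the shape is the same: predict, collect feedback, fold it back in.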
Integrating AI into your intelligent automation solution means establishing a relationship that evolves as the user interacts with your AI system. When designing AI solutions, designers and engineers must build in opportunities for user feedback. Many AI errors or failures can be mitigated by user feedback that improves the system while building user trust. As the first generation to bring AI to the world at scale, we have a professional and personal responsibility, but also an opportunity, to help users calibrate their trust throughout the product experience.