Human-Centered AI: A Reliable Rudder in Rough Seas
Article - June 21, 2021
We interact with and make decisions using AI systems every day: we check the weather forecast to decide whether to carry an umbrella, and we scroll through social media feeds of AI-curated content. “As our decision-making environments have become more complex, we have become more dependent on the extensive data analytics offered by AI,” said Ujwal Gadiraju, Assistant Professor at the Web Information Systems group of the Faculty of Engineering, Mathematics and Computer Science (EEMCS), Delft University of Technology. “What we perhaps haven’t yet understood is how we, as humans, can best work together with AI.” We are starting to see the unwanted consequences of this: an increasingly polarised society as social media users perceive their feeds as the sole source of truth, and policies built on algorithmic output that discriminate against marginalised groups. How can we bring AI and humans to the same table, and build collaborative design methods that truly harness the virtues of both worlds?
The unspoken limits, or a glamorous byline?
Ujwal’s research shows that the design of AI systems has a significant influence on how we interact with them, and especially on how we come to trust them. “The design of an AI system can lead to varying levels of trust formation among users, resulting in over- or under-reliance on the system,” Ujwal explained. “Imagine that you follow the advice of an AI routing system while driving, and over time you find that it consistently gives you reliable advice, like suggesting a detour to avoid traffic. At some point, you develop a subconscious level of trust in the system, so much so that you may inadvertently follow its instructions even when they are incorrect, for example because the advice is based on outdated data.”
The issue of over-trust is aggravated by the focus on communicating the “shiny aspects” of innovative technologies: the new possibilities and enhanced capabilities. “In most academic literature, the limitations section is a small part of the paper, and for good reason: we want to communicate the progress we have made scientifically.” However, it is precisely the shortcomings and pitfalls of these systems that often cause the most dangerous and undesirable societal consequences. “For AI systems to better serve our society as a whole, we need to keep human needs at the core of our design and development processes.”
Human-centered explainable AI
Human-centered explainable AI has the potential to facilitate appropriate reliance on AI systems. Instead of portraying AI systems as magical “black boxes”, a set of explanation methods and workflows can be designed to make users aware of the limitations of relying on an AI system. “Take the car AI system example: when advising me on speed limits along a route, the AI system can also communicate the uncertainty of that advice in an intelligible manner,” Ujwal suggested. “For example, if I’m made aware that the data is more likely to be outdated in a rural area than in a big city, then I would be more cautious about how much I rely on the system.”
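To make this concrete, here is a minimal sketch of how a routing system might attach an intelligible uncertainty caveat to its advice, in the spirit of the example above. Everything in it (the `SpeedAdvice` structure, the `data_age_days` field, the freshness thresholds) is hypothetical and chosen only for illustration; it does not describe any particular system.

```python
from dataclasses import dataclass

@dataclass
class SpeedAdvice:
    """Hypothetical routing advice with provenance information attached."""
    road: str
    speed_limit_kmh: int
    data_age_days: int   # how old the underlying map data is
    area_type: str       # "urban" or "rural" (assumed categories)

def present_advice(advice: SpeedAdvice) -> str:
    """Turn raw advice into a message that also conveys its reliability.

    In this sketch, rural map data is assumed to be refreshed less often,
    so the same data age triggers a caveat sooner there. The thresholds
    are purely illustrative.
    """
    stale_after_days = 30 if advice.area_type == "urban" else 7
    message = f"Speed limit on {advice.road}: {advice.speed_limit_kmh} km/h"
    if advice.data_age_days > stale_after_days:
        message += (
            f" (note: based on {advice.data_age_days}-day-old data for a "
            f"{advice.area_type} area; please verify against road signs)"
        )
    return message

print(present_advice(SpeedAdvice("N215", 80, data_age_days=45, area_type="rural")))
```

The point of the pattern is not the thresholds themselves, but that the caveat is expressed in terms a user can act on (data age, area type) rather than as a raw confidence score.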
The attention on explainable AI has resulted in a surge of work on interpreting the decisions of complex machine learning models in order to explain their actions to humans. Yet little is known about what constitutes a sufficient explanation from a user’s vantage point, or how contextual settings shape that judgement. “Different individuals and user groups can have varying attitudes towards the same technology or narrative due to a range of factors, including their familiarity with the technology, individual traits, cultural differences, or contexts,” Ujwal elaborated. “To ensure that diverse users can effectively interact with AI systems and their explanations, it is vital to understand who needs the explanations, why they are needed, and how we can measure their effectiveness.”
Bringing more (diverse) hands on deck!
To explore how explanations can be adapted and personalized across diverse stakeholders, those stakeholders need to be involved in the design and development iterations of the corresponding AI systems. Crowd computing is a promising approach to allow citizens and others from diverse backgrounds to take part in the design of AI systems by contributing their knowledge and experience.
Ujwal and his colleagues at the Delft AI “Design@Scale” Lab are also looking at how design methods that are traditionally conducted with small groups, e.g., participatory design and ethnography, can be scaled up through the use of conversational agents: dialogue systems that can converse with a human. “Conversational agents can support designers by stimulating design thinking and guiding novice designers through various stages of the design lifecycle, while at the same time gathering relevant knowledge from many users, which enhances contextual understanding and informs design decisions,” Ujwal explained. Most crucially, conversational agents can lower the barrier to participation in design exercises by using a familiar conversational interface (a phone messaging app, for example) to facilitate a turn-taking exchange; a minimal sketch of this pattern follows below. As a result, designers can reach more users, improve the representativeness of the participants in the design process, and reduce biases in their contextual understanding of different user groups and their corresponding needs. “I see this as an iterative process that improves over time. The fact that we keep human values at the core of the AI system design and development process means that we have a better chance of not making the notoriously bad calls that affect many people, or affect a few people in a grave way.”
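As a rough illustration of that turn-taking exchange, the sketch below scripts a tiny design-interview agent. The questions and the follow-up rule are invented for this example; a real conversational agent of the kind described above would sit behind a messaging interface and use a proper dialogue manager rather than a fixed script.

```python
# Minimal sketch of a scripted, turn-taking design-interview agent.
# Questions and the follow-up heuristic are hypothetical, for illustration only.

QUESTIONS = [
    "Which task were you trying to complete when you last used the system?",
    "What, if anything, got in your way?",
    "If you could change one thing about the system, what would it be?",
]

def run_interview() -> dict:
    """Ask each scripted question in turn and collect free-text answers."""
    answers = {}
    for question in QUESTIONS:
        reply = input(f"Agent: {question}\nYou: ").strip()
        # Simple follow-up prompt when an answer seems too short to be useful.
        if len(reply) < 10:
            reply = input("Agent: Could you tell me a bit more?\nYou: ").strip()
        answers[question] = reply
    return answers

if __name__ == "__main__":
    transcript = run_interview()
    print("\nCollected responses:")
    for question, answer in transcript.items():
        print(f"- {question}\n  {answer}")
```

Even in this toy form, the structure shows why the approach scales: the same scripted exchange can run with hundreds of participants in parallel, producing comparable transcripts that designers can aggregate.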
This story is part of the Open AI research at TU Delft series; also read the introduction and the other stories in this series.