Many who work with AI have tried to build systems that can explain how they reach their conclusions. As AI becomes more widespread, explaining how these systems think could increase users' trust in them, which makes explainability an appealing goal for data scientists.
Trust in AI: Long Attempted
This is not a new effort. Researchers at the Bretagne Atlantique Research Center have been working on it for some time, as have scientists at the French National Center for Scientific Research. Their study explores whether explanations would actually help increase user trust. The researchers hope to better understand how explaining an AI's actions might reduce resistance to the technology, since resistance to change often accompanies a lack of understanding. Their paper, published in Nature Machine Intelligence, argues that an AI system's explanations might not be as truthful as some users assume.
This paper originates from our desire to explore an intuitive gap. As interacting humans, we are used to not always trusting provided explanations, yet as computer scientists, we continuously hear that explainability is paramount for the acceptance of AI by the general public. While we recognize the benefits of AI explainability in some contexts (e.g., an AI designer operating on a ‘white box’), we wanted to make a point on its limits from the user (i.e., ‘black box’) perspective.
Erwan Le Merrer and Gilles Trédan
AI Needs Accountability
Many researchers have argued that AI algorithms and other machine learning (ML) tools should be able to explain their rationale. Trédan and Le Merrer, though, say such an explanation only has value locally, that is, as feedback for developers trying to debug the system. The pair argue that explanations can be deceptive in remote contexts, because some AI systems are managed and trained by a specific provider, and their decisions are therefore delivered through a third party.
A user’s understanding of the decisions she faces is a core societal problem for the adoption of AI-based algorithmic decisions. We exposed that logical explanations from a provider can always be prone to attacks (i.e., lies), that are difficult or impossible to detect for an isolated user. We show that the space of features and possible attacks is very large, so that even if users collude to spot the problem, these lies remain difficult to detect.
Le Merrer and Trédan
Providing a Better Explanation
To illustrate their reasoning, the pair drew an analogy to a bouncer outside a club, who might lie to individual customers about exactly why they are being denied entry. Similarly, the scientists suggest that remote service providers could lie to users about an AI's reasoning behind its predictions and decisions. They refer to this as 'the bouncer problem.'
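The bouncer problem can be sketched in a few lines of code. The scenario below is purely illustrative (the decision rule, feature names, and explanation text are invented for this example, not taken from the paper): a remote provider's actual decision depends on one feature, while the explanation returned to the user cites a different, more palatable one. An isolated user, who sees only the decision and the explanation, has no way to check the two against each other.

```python
# Hypothetical sketch of the "bouncer problem": the provider's real
# decision rule and its reported explanation are inconsistent.
# All feature names and rules here are invented for illustration.

def true_decision(applicant: dict) -> str:
    # The provider's actual (hidden) rule: deny anyone under 25.
    return "deny" if applicant["age"] < 25 else "accept"

def reported_explanation(applicant: dict) -> str:
    # The explanation shown to the user blames a different, benign feature.
    return "denied: income below required threshold"

applicant = {"age": 22, "income": 85_000}

decision = true_decision(applicant)            # driven by age
explanation = reported_explanation(applicant)  # claims it was income

# The user receives only this pair; nothing in it reveals that the
# explanation contradicts the hidden rule that produced the decision.
print(decision, "-", explanation)
```

The point of the sketch is that the lie is locally consistent: for this user, the fabricated explanation is plausible on its face, so detecting it would require access to the provider's internals or to many other users' outcomes.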
Our work questions the widespread belief that explanations will increase users’ trust in AI systems. We rather conclude the opposite: From a user perspective and without pre-existing trust, explanations can easily be lies and therefore can explain anything anyhow. We believe that users’ trust should be sought using other approaches (e.g., in-premises white box algorithm auditing, cryptographic approaches, etc.).
Le Merrer and Trédan
Le Merrer and Trédan presented examples of how the explainability of AI systems can be undermined in this way. They hope their work inspires further studies into machine learning algorithms that can explain themselves truthfully, with the ultimate goal of increasing the average person's trust in AI.
We plan to continue studying AI systems from the users’ (i.e., ‘black box’) perspective, particularly exploring this question: What can regular users discover/learn/understand/infer about the AI systems that shape a growing part of their life? For instance, we are currently studying the phenomenon of user shadow banning (i.e., blocking or partially excluding a user from being able to reach an online community) on platforms that claim that they are not using this method.
Le Merrer and Trédan