Explainable AI (XAI), often overlapping with interpretable AI or explainable machine learning (XML), is a field of research within artificial intelligence (AI) that explores methods that provide humans with the ability of ''intellectual oversight'' over AI algorithms. The main focus is on the reasoning behind the decisions or predictions made by the AI algorithms, to make them more understandable and transparent. This addresses users' need to assess safety and to scrutinize automated decision-making in applications.
XAI counters the "black box" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.
XAI hopes to help users of AI-powered systems perform more effectively by improving their understanding of how those systems reason. XAI may be an implementation of the social
right to explanation.
Even if there is no such legal right or regulatory requirement, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. XAI aims to explain what has been done, what is being done, and what will be done next, and to unveil which information these actions are based on.
This makes it possible to confirm existing knowledge, challenge existing knowledge, and generate new assumptions.
Machine learning (ML) algorithms used in AI can be categorized as white-box or black-box. White-box models provide results that are understandable to experts in the domain. Black-box models, on the other hand, are extremely hard to explain and may not be understood even by domain experts. XAI algorithms follow the three principles of transparency, interpretability, and explainability.
* A model is transparent "if the processes that extract model parameters from training data and generate labels from testing data can be described and motivated by the approach designer."
* Interpretability describes the possibility of comprehending the ML model and presenting the underlying basis for decision-making in a way that is understandable to humans.
* Explainability is a concept that is recognized as important, but a consensus definition is not yet available;
one possibility is "the collection of features of the interpretable domain that have contributed, for a given example, to producing a decision (e.g., classification or regression)".
In summary, interpretability refers to the user's ability to understand model outputs, while model transparency includes simulatability (reproducibility of predictions), decomposability (intuitive explanations for parameters), and algorithmic transparency (explaining how algorithms work). Model functionality focuses on textual descriptions, visualization, and local explanations, which clarify specific outputs or instances rather than entire models. All these concepts aim to enhance the comprehensibility and usability of AI systems.
If algorithms fulfill these principles, they provide a basis for justifying decisions, tracking them and thereby verifying them, improving the algorithms, and exploring new facts.
Sometimes it is also possible to achieve a high-accuracy result with white-box ML algorithms. These algorithms have an interpretable structure that can be used to explain predictions.
Concept Bottleneck Models, which use concept-level abstractions to explain model reasoning, are examples of this and can be applied in both image
and text
prediction tasks. This is especially important in domains like
medicine, defense, finance, and law, where it is crucial to understand decisions and build trust in the algorithms.
Many researchers argue that, at least for
supervised machine learning, the way forward is
symbolic regression, where the algorithm searches the space of mathematical expressions to find the model that best fits a given dataset.
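As an illustration, a minimal symbolic-regression sketch using the third-party gplearn library; the library choice and the hidden target expression are illustrative assumptions:
<syntaxhighlight lang="python">
# Minimal symbolic-regression sketch (assumes numpy and gplearn are installed).
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] ** 2 - X[:, 1] + 0.5  # hidden ground-truth expression (illustrative)

est = SymbolicRegressor(population_size=1000, generations=20, random_state=0)
est.fit(X, y)

# The fitted model is itself a readable mathematical expression,
# which is what makes this approach inherently interpretable.
print(est._program)
</syntaxhighlight>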
AI systems optimize behavior to satisfy a mathematically specified goal system chosen by the system designers, such as the command "maximize the accuracy of
assessing how positive film reviews are in the test dataset." The AI may learn useful general rules from the test set, such as "reviews containing the word "horrible" are likely to be negative." However, it may also learn inappropriate rules, such as "reviews containing '
Daniel Day-Lewis
' are usually positive"; such rules may be undesirable if they are likely to fail to generalize outside the training set, or if people consider the rule to be "cheating" or "unfair." A human can audit rules in an XAI to get an idea of how likely the system is to generalize to future real-world data outside the test set.
Goals
Cooperation between
agents – in this case,
algorithms
and humans – depends on trust. If humans are to accept algorithmic prescriptions, they need to trust them. Incompleteness in formal trust criteria is a barrier to optimization. Transparency, interpretability, and explainability are intermediate goals on the road to these more comprehensive trust criteria.
This is particularly relevant in medicine, especially with
clinical decision support systems (CDSS), in which medical professionals should be able to understand how and why a machine-based decision was made in order to trust the decision and augment their decision-making process.
AI systems sometimes learn undesirable tricks that do an optimal job of satisfying explicit pre-programmed goals on the training data but do not reflect the more nuanced implicit desires of the human system designers or the full complexity of the domain data. For example, a 2017 system tasked with
image recognition
learned to "cheat" by looking for a copyright tag that happened to be associated with horse pictures rather than learning how to tell if a horse was actually pictured.
In another 2017 system, a supervised learning AI tasked with grasping items in a virtual world learned to cheat by placing its manipulator between the object and the viewer in a way such that it falsely appeared to be grasping the object.
One transparency project, the DARPA XAI program, aims to produce "glass box" models that are explainable to a "human-in-the-loop" without greatly sacrificing AI performance. Human users of such a system can understand the AI's cognition (both in real-time and after the fact) and can determine whether to trust the AI. Other applications of XAI are knowledge extraction from black-box models and model comparisons. In the context of monitoring systems for ethical and socio-legal compliance, the term "glass box" is commonly used to refer to tools that track the inputs and outputs of the system in question, and provide value-based explanations for their behavior. These tools aim to ensure that the system operates in accordance with ethical and legal standards, and that its decision-making processes are transparent and accountable. The term "glass box" is often used in contrast to "black box" systems, which lack transparency and can be more difficult to monitor and regulate.
The term is also used to name a voice assistant that produces counterfactual statements as explanations.
Explainability and interpretability techniques
There is a subtle difference between the terms explainability and interpretability in the context of AI.
Some explainability techniques do not involve understanding how the model works, and may work across various AI systems. Treating the model as a black box and analyzing how marginal changes to the inputs affect the result sometimes provides a sufficient explanation.
Explainability
Explainability is useful for ensuring that AI models are not making decisions based on irrelevant or otherwise unfair criteria. For classification and regression models, several popular techniques exist:
* ''Partial dependency plots'' show the marginal effect of an input feature on the predicted outcome.
* ''SHAP'' (SHapley Additive exPlanations) enables visualization of the contribution of each input feature to the output. It works by calculating Shapley values, which measure the average marginal contribution of a feature across all possible combinations of features.
* ''Feature importance'' estimates how important a feature is for the model. It is usually done using ''permutation importance'', which measures the performance decrease when the feature's values are randomly shuffled across all samples (see the sketch after this list).
* ''LIME'' locally approximates a model's outputs with a simpler, interpretable model.
* ''Multitask learning'' provides a large number of outputs in addition to the target classification. These other outputs can help developers deduce what the network has learned.
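As an illustration, a minimal permutation-importance sketch using scikit-learn; the dataset and model are illustrative assumptions rather than examples from any work cited here:
<syntaxhighlight lang="python">
# Minimal permutation-importance sketch (assumes scikit-learn is installed).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn on held-out data and measure the accuracy drop;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.4f}")
</syntaxhighlight>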
For images, saliency maps highlight the parts of an image that most influenced the result.
Expert systems and other knowledge-based systems are built with the help of domain experts. They encode domain knowledge, usually as production rules, in a knowledge base that the user can query. Because the knowledge is represented explicitly, such systems can explain their reasoning or problem-solving activity in terms the user can understand.
However, these techniques are not very suitable for language models like generative pretrained transformers. Since these models generate language, they can provide an explanation, but one that may not be reliable. Other techniques include attention analysis (examining how the model focuses on different parts of the input), probing methods (testing what information is captured in the model's representations), causal tracing (tracing the flow of information through the model) and circuit discovery (identifying specific subnetworks responsible for certain behaviors). Explainability research in this area overlaps significantly with interpretability and alignment
research.
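For example, a minimal attention-analysis sketch using the Hugging Face transformers library; the model choice and input sentence are illustrative assumptions:
<syntaxhighlight lang="python">
# Minimal attention-analysis sketch (assumes torch and transformers are installed).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The film was surprisingly good", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, shape (batch, heads, seq, seq).
# Average over heads in the last layer to see which token each token attends to.
last = outputs.attentions[-1].mean(dim=1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for i, tok in enumerate(tokens):
    top = last[i].argmax().item()
    print(f"{tok} -> {tokens[top]}")
</syntaxhighlight>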
Interpretability
Scholars sometimes use the term "mechanistic interpretability" to refer to the process of reverse-engineering artificial neural networks to understand their internal decision-making mechanisms and components, similar to how one might analyze a complex machine or computer program.
Interpretability research often focuses on generative pretrained transformers. It is particularly relevant for AI safety and alignment, as it may make it possible to identify signs of undesired behaviors such as sycophancy, deceptiveness, or bias, and to better steer AI models.
Studying the interpretability of the most advanced foundation models often involves searching for an automated way to identify "features" in generative pretrained transformers. In a neural network, a feature is a pattern of neuron activations that corresponds to a concept. A compute-intensive technique called "dictionary learning" makes it possible to identify features to some degree. Enhancing the ability to identify and edit features is expected to significantly improve the safety of frontier AI models.
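Dictionary learning of this kind is commonly implemented with sparse autoencoders trained on recorded activations. A minimal sketch, in which the dimensions, the synthetic activations, and the sparsity coefficient are illustrative assumptions:
<syntaxhighlight lang="python">
# Minimal sparse-autoencoder sketch for feature discovery (assumes PyTorch).
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_dict)  # overcomplete feature dictionary
        self.dec = nn.Linear(d_dict, d_model)

    def forward(self, x):
        f = torch.relu(self.enc(x))  # sparse feature activations
        return self.dec(f), f

acts = torch.randn(1024, 512)  # stand-in for recorded model activations
sae = SparseAutoencoder(d_model=512, d_dict=4096)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)

for step in range(200):
    recon, f = sae(acts)
    # Reconstruction loss plus an L1 penalty that pushes most features to zero,
    # so each remaining active feature can be inspected as a candidate concept.
    loss = ((recon - acts) ** 2).mean() + 1e-3 * f.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
</syntaxhighlight>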
For convolutional neural networks, DeepDream can generate images that strongly activate a particular neuron, providing a visual hint about what the neuron is trained to identify.
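A minimal activation-maximization sketch in the spirit of DeepDream, using torchvision; the network, layer index, and channel are illustrative assumptions:
<syntaxhighlight lang="python">
# Minimal activation-maximization sketch (assumes torch and torchvision).
import torch
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
for p in model.parameters():
    p.requires_grad_(False)  # optimize the input image, not the weights

target = model.features[10]  # an intermediate conv layer (illustrative choice)
captured = {}
target.register_forward_hook(lambda mod, inp, out: captured.update(act=out))

img = torch.rand(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(50):
    model(img)
    # Gradient ascent on the input to maximize one channel's mean activation.
    loss = -captured["act"][0, 7].mean()  # channel 7 is an illustrative choice
    opt.zero_grad()
    loss.backward()
    opt.step()
# `img` now contains patterns that strongly drive the chosen channel.
</syntaxhighlight>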
History and methods
During the 1970s to 1990s, symbolic reasoning systems, such as MYCIN, GUIDON, SOPHIE, and PROTOS, could represent, reason about, and explain their reasoning for diagnostic, instructional, or machine-learning (explanation-based learning) purposes. MYCIN, developed in the early 1970s as a research prototype for diagnosing bacteremia infections of the bloodstream, could explain which of its hand-coded rules contributed to a diagnosis in a specific case. Research in intelligent tutoring systems resulted in developing systems such as SOPHIE that could act as an "articulate expert", explaining problem-solving strategy at a level the student could understand, so they would know what action to take next. For instance, SOPHIE could explain the qualitative reasoning behind its electronics troubleshooting, even though it ultimately relied on the SPICE circuit simulator. Similarly, GUIDON added tutorial rules to supplement MYCIN's domain-level rules so it could explain the strategy for medical diagnosis. Symbolic approaches to machine learning relying on explanation-based learning, such as PROTOS, made use of explicit representations of explanations expressed in a dedicated explanation language, both to explain their actions and to acquire new knowledge.
In the 1980s through the early 1990s, truth maintenance systems (TMS) extended the capabilities of causal-reasoning, rule-based, and logic-based inference systems. A TMS explicitly tracks alternate lines of reasoning, justifications for conclusions, and lines of reasoning that lead to contradictions, allowing future reasoning to avoid these dead ends. To provide an explanation, they trace reasoning from conclusions to assumptions through rule operations or logical inferences, allowing explanations to be generated from the reasoning traces. As an example, consider a rule-based problem solver with just a few rules about Socrates that concludes he has died from poison:
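A minimal sketch of such a justification-tracking solver, with the rules and predicate names as illustrative assumptions:
<syntaxhighlight lang="python">
# Minimal justification-tracking solver; rules and predicates are illustrative.
facts = {"man(socrates)", "drank(socrates, hemlock)", "poison(hemlock)"}
rules = [
    ({"man(socrates)"}, "mortal(socrates)"),
    ({"drank(socrates, hemlock)", "poison(hemlock)"}, "drank_poison(socrates)"),
    ({"mortal(socrates)", "drank_poison(socrates)"}, "died_from_poison(socrates)"),
]

justification = {}  # conclusion -> premises that produced it
changed = True
while changed:  # forward-chain until no rule adds a new fact
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            justification[conclusion] = premises
            changed = True

def explain(fact, depth=0):
    """Trace a conclusion back through its justifications to the assumptions."""
    print("  " * depth + fact)
    for premise in justification.get(fact, []):
        explain(premise, depth + 1)

explain("died_from_poison(socrates)")
</syntaxhighlight>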
By the 1990s researchers began studying whether it is possible to meaningfully extract the non-hand-coded rules being generated by opaque trained neural networks. Researchers in clinical expert systems creating neural network-powered decision support for clinicians sought to develop dynamic explanations that allow these technologies to be more trusted and trustworthy in practice. In the 2010s, public concerns about racial and other bias in the use of AI for criminal sentencing decisions and findings of creditworthiness may have led to increased demand for transparent artificial intelligence. As a result, many academics and organizations are developing tools to help detect bias in their systems.
Marvin Minsky et al. raised the issue that AI can function as a form of surveillance, with the biases inherent in surveillance, suggesting HI (Humanistic Intelligence) as a way to create a more fair and balanced "human-in-the-loop" AI.
Explainable AI has recently become a new topic of research in the context of modern deep learning. Modern complex AI techniques, such as deep learning, are naturally opaque. To address this issue, methods have been developed to make new models more explainable and interpretable. This includes layerwise relevance propagation (LRP), a technique for determining which features in a particular input vector contribute most strongly to a neural network's output. Other techniques explain some particular prediction made by a (nonlinear) black-box model, a goal referred to as "local interpretability". Without such explanatory mechanisms, whether built into the network itself or supplied by external components, the output of today's deep neural networks cannot be explained. There is also research on whether the concepts of local interpretability can be applied to a remote context, where a model is operated by a third party.
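LRP itself relies on layer-specific relevance-propagation rules; as a simpler stand-in for local attribution, a gradient × input sketch, with the network and input as illustrative assumptions:
<syntaxhighlight lang="python">
# Minimal gradient-times-input attribution sketch (assumes PyTorch).
# Note: this is a simpler relative of LRP, not LRP itself.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
x = torch.randn(1, 4, requires_grad=True)

out = net(x).sum()
out.backward()

# Each feature's gradient scaled by its value approximates its local
# contribution to this particular prediction.
attribution = (x.grad * x).detach()
print(attribution)
</syntaxhighlight>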
There has been work on making glass-box models which are more transparent to inspection. This includes decision trees, Bayesian networks, sparse linear models, and more. The Association for Computing Machinery Conference on Fairness, Accountability, and Transparency (ACM FAccT) was established in 2018 to study transparency and explainability in the context of socio-technical systems, many of which include artificial intelligence.
Some techniques allow visualisations of the inputs to which individual software neurons respond most strongly. Several groups found that neurons can be aggregated into circuits that perform human-comprehensible functions, some of which reliably arise across different networks trained independently.
There are various techniques to extract compressed representations of the features of given inputs, which can then be analysed by standard clustering techniques. Alternatively, networks can be trained to output linguistic explanations of their behaviour, which are then directly human-interpretable. Model behaviour can also be explained with reference to training data—for example, by evaluating which training inputs influenced a given behaviour the most, or by approximating its predictions using the most similar instances from the training data.
Explainable artificial intelligence (XAI) has also been applied in pain research, specifically to understanding the role of electrodermal activity in automated pain recognition. The analysis highlighted that simple hand-crafted features can yield performance comparable to deep learning models, and that both traditional feature engineering and deep feature learning approaches rely on simple characteristics of the input time-series data.
Regulation
As regulators, official bodies, and general users come to depend on AI-based dynamic systems, clearer accountability will be required for automated decision-making processes to ensure trust and transparency. The first global conference exclusively dedicated to this emerging discipline was the 2017 International Joint Conference on Artificial Intelligence: Workshop on Explainable Artificial Intelligence (XAI). It has evolved over the years, with various workshops organised and co-located with many other international conferences, and it now has a dedicated global event, "The World Conference on eXplainable Artificial Intelligence", with its own proceedings.
The European Union introduced a right to explanation in the General Data Protection Regulation (GDPR) to address potential problems stemming from the rising importance of algorithms. The implementation of the regulation began in 2018. However, the right to explanation in GDPR covers only the local aspect of interpretability. In the United States, insurance companies are required to be able to explain their rate and coverage decisions. In France, the Loi pour une République numérique (Digital Republic Act) grants subjects the right to request and receive information pertaining to the implementation of algorithms that process data about them.
Limitations
Despite ongoing endeavors to enhance the explainability of AI models, several inherent limitations persist.
Adversarial parties
By making an AI system more explainable, we also reveal more of its inner workings. For example, the explainability method of feature importance identifies features or variables that are most important in determining the model's output, while the influential samples method identifies the training samples that are most influential in determining the output, given a particular input. Adversarial parties could take advantage of this knowledge.
For example, competitor firms could replicate aspects of the original AI system in their own product, thus reducing competitive advantage. An explainable AI system is also susceptible to being “gamed”—influenced in a way that undermines its intended purpose. One study gives the example of a predictive policing system; in this case, those who could potentially “game” the system are the criminals subject to the system's decisions. In this study, developers of the system discussed the issue of criminal gangs looking to illegally obtain passports, and they expressed concerns that, if given an idea of what factors might trigger an alert in the passport application process, those gangs would be able to “send guinea pigs” to test those triggers, eventually finding a loophole that would allow them to “reliably get passports from under the noses of the authorities”.
Adaptive integration and explanation
Many XAI approaches provide explanations in a generic form that does not take into account the diverse backgrounds and knowledge levels of users. This leads to challenges with accurate comprehension for all users. Expert users can find the explanations oversimplified and lacking in depth, while beginner users may struggle to understand explanations that are too complex. This limitation reduces the ability of XAI techniques to serve users with different levels of knowledge, which can affect user trust and adoption. The quality of explanations can also differ among users, as they have different levels of expertise and operate in different situations and conditions.
Technical complexity
A fundamental barrier to making AI systems explainable is the technical complexity of such systems. End users often lack the coding knowledge required to understand software of any kind. Current methods used to explain AI are mainly technical ones, geared toward machine learning engineers for debugging purposes, rather than toward the end users who are ultimately affected by the system, causing “a gap between explainability in practice and the goal of transparency”. Proposed solutions to address the issue of technical complexity include either promoting the coding education of the general public so technical explanations would be more accessible to end users, or providing explanations in layperson terms.
The solution must avoid oversimplification. It is important to strike a balance between accuracy – how faithfully the explanation reflects the process of the AI system – and explainability – how well end users understand the process. This is a difficult balance to strike, since the complexity of machine learning makes it difficult for even ML engineers to fully understand, let alone non-experts.
Understanding versus trust
The goal of explainability to end users of AI systems is to increase trust in the systems, even to “address concerns about lack of ‘fairness’ and discriminatory effects”. However, even with a good understanding of an AI system, end users may not necessarily trust the system. In one study, participants were presented with combinations of white-box and black-box explanations, and static and interactive explanations of AI systems. While these explanations served to increase both their self-reported and objective understanding, they had no impact on the participants' level of trust, which remained skeptical.
This outcome was especially true for decisions that impacted the end user in a significant way, such as graduate school admissions. Participants judged algorithms to be too inflexible and unforgiving in comparison to human decision-makers; instead of rigidly adhering to a set of rules, humans are able to consider exceptional cases as well as appeals to their initial decision. For such decisions, explainability will not necessarily cause end users to accept the use of decision-making algorithms. We will need to either turn to another method to increase trust and acceptance of decision-making algorithms, or question the need to rely solely on AI for such impactful decisions in the first place.
However, some emphasize that the purpose of explainability of artificial intelligence is not to merely increase users' trust in the system's decisions, but to calibrate the users' level of trust to the correct level. According to this principle, too much or too little user trust in the AI system will harm the overall performance of the human-system unit. When the trust is excessive, the users are not critical of possible mistakes of the system and when the users do not have enough trust in the system, they will not exhaust the benefits inherent in it.
Criticism
Some scholars have suggested that explainability in AI should be considered a goal secondary to AI effectiveness, and that encouraging the exclusive development of XAI may limit the functionality of AI more broadly. Critiques of XAI rely on developed concepts of mechanistic and empiric reasoning from evidence-based medicine
to suggest that AI technologies can be clinically validated even when their function cannot be understood by their operators.
Some researchers advocate the use of inherently interpretable machine learning models, rather than post-hoc explanations in which a second model is created to explain the first. This is partly because post-hoc models increase the complexity in a decision pathway and partly because it is often unclear how faithfully a post-hoc explanation can mimic the computations of an entirely separate model. However, another view is that what matters is that the explanation accomplishes the given task at hand, and whether it is pre- or post-hoc does not matter. If a post-hoc explanation method helps a doctor diagnose cancer better, it is of secondary importance whether the explanation is correct or incorrect.
The goals of XAI amount to a form of lossy compression that will become less effective as AI models grow in their number of parameters. Along with other factors, this leads to a theoretical limit for explainability.
Explainability in social choice
Explainability was also studied in social choice theory. Social choice theory aims at finding solutions to social decision problems that are based on well-established axioms. Ariel D. Procaccia explains that these axioms can be used to construct convincing explanations of the solutions. This principle has been used to construct explanations in various subfields of social choice.
Voting
Cailloux and Endriss present a method for explaining voting rules using the axioms that characterize them. They exemplify their method on the Borda voting rule.
Peters, Procaccia, Psomas and Zhou present an algorithm for explaining the outcomes of the Borda rule using O(''m''2) explanations, and prove that this is tight in the worst case.
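For context, a minimal sketch of the Borda rule that these explanations concern, with illustrative ballots; a candidate ranked in position ''r'' (0-indexed) on a ballot earns ''m'' − 1 − ''r'' points:
<syntaxhighlight lang="python">
# Minimal Borda-rule sketch; the ballots are illustrative assumptions.
ballots = [["a", "b", "c"], ["b", "c", "a"], ["b", "a", "c"]]
m = 3  # number of candidates

scores = {}
for ballot in ballots:
    for rank, candidate in enumerate(ballot):
        # A candidate ranked r-th (0-indexed) earns m - 1 - r points.
        scores[candidate] = scores.get(candidate, 0) + (m - 1 - rank)

print(scores)                       # {'a': 3, 'b': 5, 'c': 1}
print(max(scores, key=scores.get))  # 'b' wins
</syntaxhighlight>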
Participatory budgeting
Yang, Hausladen, Peters, Pournaras, Fricker and Helbing present an empirical study of explainability in participatory budgeting. They compared the greedy and the equal shares rules, and three types of explanations: ''mechanism explanation'' (a general explanation of how the aggregation rule works given the voting input), ''individual explanation'' (explaining how many voters had at least one approved project, or at least 10,000 CHF in approved projects), and ''group explanation'' (explaining how the budget is distributed among the districts and topics). They compared the perceived ''trustworthiness'' and ''fairness'' of greedy and equal shares, before and after the explanations. They found that, for equal shares, mechanism explanation yields the highest increase in perceived fairness and trustworthiness, with group explanation second-highest. For greedy, mechanism explanation increases perceived trustworthiness but not fairness, whereas individual explanation increases both perceived fairness and trustworthiness. Group explanation ''decreases'' the perceived fairness and trustworthiness.
Payoff allocation
Nizri, Azaria and Hazon present an algorithm for computing explanations for the Shapley value. Given a coalitional game, their algorithm decomposes it to sub-games, for which it is easy to generate verbal explanations based on the axioms characterizing the Shapley value. The payoff allocation for each sub-game is perceived as fair, so the Shapley-based payoff allocation for the given game should seem fair as well. An experiment with 210 human subjects shows that, with their automatically generated explanations, subjects perceive Shapley-based payoff allocation as significantly fairer than with a general standard explanation.
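For reference, the Shapley value averages each player's marginal contribution over all orders in which the players can join the coalition; a minimal brute-force sketch with an illustrative payoff table, unrelated to the cited experiment:
<syntaxhighlight lang="python">
# Minimal brute-force Shapley-value sketch; the payoff table is illustrative.
from itertools import permutations

def shapley(players, v):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: t / len(orders) for p, t in totals.items()}

payoff = {frozenset(): 0,
          frozenset("A"): 10, frozenset("B"): 20, frozenset("C"): 30,
          frozenset("AB"): 40, frozenset("AC"): 50, frozenset("BC"): 60,
          frozenset("ABC"): 90}

print(shapley("ABC", lambda s: payoff[frozenset(s)]))
</syntaxhighlight>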