Making Decision Support Systems Intelligible: Explainability or Interpretability?
Artificial intelligence (AI) systems and, more specifically, so-called Decision-Support Systems (DSSs), routinely assist humans in decision-making tasks. Because of their ability to ‘learn’ from enormous amounts of information extremely quickly and to provide insights and recommendations just as quickly, the use of DSSs as decision-making aids should seem especially attractive to professionals whose day-to-day activities include making hard, high-stakes decisions under substantial time pressure. However, the ‘deep learning’ algorithms powering state-of-the-art DSSs effectively render these systems almost completely opaque ‘black boxes’, with the unfortunate consequence that, without substantial knowledge of machine learning (ML), the operations of a DSS are essentially unintelligible to the user. In this paper, we argue that, if we want DSSs to be utilized by expert professionals across domains to assist with complex, high-stakes decisions, the outputs and inner workings of these systems must be made intelligible without requiring technical machine learning expertise. Instead, the required type of intelligibility should resemble as closely as possible the way in which human beings ordinarily give and ask each other for reasons. Furthermore, we contend that so-called “interpretable” machine learning is better suited to provide this kind of intelligibility than so-called “explainable” machine learning.
Bio:
Alessandra Buccella is Assistant Professor in the Department of Philosophy at the University at Albany.
Her current research focuses on articulating the ethical implications of Artificial Intelligence (AI) technologies and on understanding the broader social and political contexts in which AI is developed, deployed, and used. She has published papers on various topics in the philosophy of mind, the philosophy of cognitive neuroscience, AI ethics, and the philosophy of sport.