Toward an Explainable Framework for Intensional Defeasible Reasoning
Wednesday, September 7, 2022, 12:00pm Eastern Time
We discuss progress toward an explainable framework for intensional defeasible reasoning, in two main parts. In Part One, we recall Ruth Byrne’s Suppression Task, which showed that human reasoners can be led to suppress valid inferences in the presence of irrelevant propositions. We then present the Intensional Suppression Task, a new, explicitly intensional version of Byrne’s experiment. We expect that our task is not significantly more challenging for human reasoners than the original. It is, however, clearly more demanding from a formal-reasoning perspective: the purely extensional approaches that prior work used to model the original Suppression Task are insufficient for modeling our intensional version. We present a cognitive calculus for reasoning about the Intensional Suppression Task and implement it in the automated reasoner ShadowAdjudicator.

In Part Two, we note that the AI agent presented in Part One is perhaps uniquely capable of helping human reasoners understand the logical fallacy to which the majority of subjects fell victim in the Suppression Tasks. More generally, we envision that AI agents could someday serve as “thought partners” that supplement human reasoning and decision making in a variety of domains. To do so, these agents would need to be able to explain themselves to their human counterparts in natural language.
Hence, we implement a system for converting cognitive calculus proofs into natural-language explanations using a transformer language model. The model produced qualitatively satisfactory explanations in 60% of test cases.
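For readers unfamiliar with the suppression effect, the toy sketch below illustrates it using Byrne’s classic essay/library materials. It is a deliberately simple, purely extensional caricature (not the cognitive calculus or stimuli from the talk): a consequent is derived only when every conditional pointing to it has a satisfied antecedent, so adding a second conditional with the same consequent “suppresses” an otherwise valid modus ponens.

```python
def conclusions(facts, conditionals):
    """Naive forward chaining that mimics the suppression effect:
    a consequent is derived only when ALL antecedents that lead to it
    are already established."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for consequent in {c for _, c in conditionals}:
            antecedents = [a for a, c in conditionals if c == consequent]
            if consequent not in derived and all(a in derived for a in antecedents):
                derived.add(consequent)
                changed = True
    return derived

essay = "she has an essay to write"
library = "she will study late in the library"
open_ = "the library stays open"

# Plain modus ponens: the conclusion is drawn.
base = conclusions({essay}, [(essay, library)])
# An added conditional with the same consequent suppresses the inference.
suppressed = conclusions({essay}, [(essay, library), (open_, library)])

print(library in base)        # True
print(library in suppressed)  # False
```

The point of the sketch is only to show the behavioral pattern Byrne observed; capturing it in a logically principled, intensional way is precisely what the cognitive calculus in the talk addresses.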
Meeting link: https://rensselaer.webex.com/meet/giancm