Search results
-
Conference paper: Rago A, Gorur D, Toni F, 2023, "ArguCast: a system for online multi-forecasting with gradual argumentation", Knowledge Representation 2023, Publisher: CEUR-WS.org, Pages: 40-51.
Judgmental forecasting is a form of forecasting which employs (human) users to make predictions about specified future events. Judgmental forecasting has been shown to perform better than quantitative methods for forecasting, e.g. when historical data is unavailable or causal reasoning is needed. However, it has a number of limitations, arising from users' irrationality and cognitive biases. To mitigate these phenomena, we leverage computational argumentation, a field which excels in the representation and resolution of conflicting knowledge and human-like reasoning, and propose novel ArguCast frameworks (ACFs) and the novel online system ArguCast, integrating ACFs. ACFs and ArguCast accommodate multi-forecasting, by allowing multiple users to debate on multiple forecasting predictions simultaneously, each potentially admitting multiple outcomes. Finally, we propose a novel notion of user rationality in ACFs based on votes on arguments in ACFs, allowing the filtering out of irrational opinions before obtaining group forecasting predictions by means commonly used in judgmental forecasting.
-
Conference paper: Nguyen H-T, Satoh K, Goebel R, et al., 2023, "Black-box analysis: GPTs across time in legal textual entailment task", ISAILD symposium - International Symposium on Artificial Intelligence and Legal Documents, Publisher: IEEE.
-
Journal article: Toni F, Rago A, Cyras K, 2023, "Forecasting with jury-based probabilistic argumentation", Journal of Applied Non Classical Logics, Vol: 33, Pages: 224-243, ISSN: 1166-3081.
Probabilistic Argumentation supports a form of hybrid reasoning by integrating quantitative (probabilistic) reasoning and qualitative argumentation in a natural way. Jury-based Probabilistic Argumentation supports the combination of opinions by different reasoners. In this paper we show how Jury-based Probabilistic Abstract Argumentation (JPAA) and a form of Jury-based Probabilistic Assumption-Based Argumentation (JPABA) can naturally support forecasting, whereby subjective probability estimates are combined to make predictions about future occurrences of events. The form of JPABA we consider is an instance of JPAA and results from integrating Assumption-Based Argumentation (ABA) and probability spaces expressed by Bayesian networks, under the so-called constellation approach. It keeps the underlying structured argumentation and probabilistic reasoning modules separate while integrating them. We show how JPAA and (the considered form of) JPABA can be used to support forecasting by 1) supporting different forecasters (jurors) to determine the probability of arguments (and, in the JPABA case, sentences) with respect to their own probability spaces, while sharing arguments (and their components); and 2) supporting the aggregation of individual forecasts to produce group forecasts.
-
Conference paper: Ayoobi H, Potyka N, Toni F, 2023, "SpArX: Sparse Argumentative Explanations for Neural Networks", European Conference on Artificial Intelligence 2023.
Neural networks (NNs) have various applications in AI, but explaining their decisions remains challenging. Existing approaches often focus on explaining how changing individual inputs affects NNs' outputs. However, an explanation that is consistent with the input-output behaviour of an NN is not necessarily faithful to the actual mechanics thereof. In this paper, we exploit relationships between multi-layer perceptrons (MLPs) and quantitative argumentation frameworks (QAFs) to create argumentative explanations for the mechanics of MLPs. Our SpArX method first sparsifies the MLP while maintaining as much of the original structure as possible. It then translates the sparse MLP into an equivalent QAF to shed light on the underlying decision process of the MLP, producing global and/or local explanations. We demonstrate experimentally that SpArX can give more faithful explanations than existing approaches, while simultaneously providing deeper insights into the actual reasoning process of MLPs.
-
Conference paper: Paulino Passos G, Satoh K, Toni F, 2023, "A dataset of contractual events in court decisions", Logic Programming and Legal Reasoning Workshop @ ICLP 2023, Publisher: CEUR Workshop Proceedings, ISSN: 1613-0073.
The promise of automation of legal reasoning is developing technology that reduces the human time required for legal tasks or that improves human performance on such tasks. In order to do so, different methods and systems based on logic programming have been developed. However, in order to apply such methods to legal data, it is necessary to provide an interface between human users and the legal reasoning system, and the most natural interface in the legal domain is natural language, in particular written text. In order to perform reasoning over written text using logic programming methods, it is then necessary to map expressions in text to atoms and predicates in the formal language, a task generally referred to as information extraction. In this work, we propose a new dataset for the task of information extraction, in particular event extraction, in court decisions, focusing on contracts. Our dataset captures contractual relations and events that affect them in some way, such as negotiations preceding a (possible) contract, the execution of a contract, or its termination. We conducted text annotation with law students and graduates, resulting in a dataset with 207 documents, 3934 sentences, 4627 entities, and 1825 events. We describe this resource and the annotation process, evaluate it with inter-annotator agreement metrics, and discuss challenges encountered during its development and directions for future work.
-
Conference paper: De Angelis E, Proietti M, Toni F, 2023, "ABA learning via ASP", ICLP 2023, Publisher: Open Publishing Association, Pages: 1-8, ISSN: 2075-2180.
Recently, ABA Learning has been proposed as a form of symbolic machine learning for drawing Assumption-Based Argumentation frameworks from background knowledge and positive and negative examples. We propose a novel method for implementing ABA Learning using Answer Set Programming as a way to help guide Rote Learning and generalisation in ABA Learning.
-
Conference paper: Toni F, Potyka N, Ulbricht M, et al., 2023, "Understanding ProbLog as probabilistic argumentation", ICLP 2023, Publisher: Open Publishing Association, Pages: 183-189, ISSN: 2075-2180.
ProbLog is a popular probabilistic logic programming language and tool, widely used for applications that need to deal with inherent uncertainties in structured domains. In this paper we study some connections between ProbLog and a variant of another well-known formalism combining symbolic reasoning and reasoning under uncertainty, namely probabilistic argumentation. Specifically, we show that ProbLog is an instance of a form of Probabilistic Abstract Argumentation (PAA) under the constellation approach, which builds upon Assumption-Based Argumentation (ABA). The connections pave the way towards equipping ProbLog with a variety of alternative semantics, inherited from PAA/PABA, as well as obtaining novel argumentation semantics for PAA/PABA, leveraging existing connections between ProbLog and argumentation. Moreover, the connections pave the way towards novel forms of argumentative explanations for ProbLog's outputs.
-
Conference paper: Mihailescu I, Weng A, Sharma S, et al., 2023, "PySpArX - A Python library for generating Sparse Argumentative eXplanations for neural networks", ICLP 2023, Publisher: Open Publishing Association, Pages: 336-336, ISSN: 2075-2180.
-
Conference paper: Nguyen H-T, Toni F, Stathis K, et al., 2023, "Beyond logic programming for legal reasoning", Logic Programming and Legal Reasoning Workshop @ ICLP 2023, Publisher: CEUR-WS.org, ISSN: 1613-0073.
Logic programming has long been advocated for legal reasoning, and several approaches have been put forward relying upon explicit representation of the law in logic programming terms. In this position paper we focus on the PROLEG logic-programming-based framework for formalizing and reasoning with the Japanese presupposed ultimate fact theory. Specifically, we examine challenges and opportunities in leveraging deep learning techniques to improve legal reasoning using PROLEG, identifying four distinct options ranging from enhancing fact extraction with deep learning to end-to-end solutions for reasoning with textual legal descriptions. We assess the advantages and limitations of each option, considering their technical feasibility, interpretability, and alignment with the needs of legal practitioners and decision-makers. We believe that our analysis can serve as a guideline for developers aiming to build effective decision-support systems for the legal domain, while fostering a deeper understanding of the challenges and potential advancements offered by neuro-symbolic approaches in legal applications.
-
Conference paper: Proietti M, Toni F, 2023, "A roadmap for neuro-argumentative learning", 17th International Workshop on Neural-Symbolic Learning and Reasoning (NeSy 2023), Publisher: CEUR Workshop Proceedings, Pages: 1-8, ISSN: 1613-0073.
Computational argumentation (CA) has emerged, in recent decades, as a powerful formalism for knowledge representation and reasoning in the presence of conflicting information, notably when reasoning non-monotonically with rules and exceptions. Much existing work in CA has focused, to date, on reasoning with given argumentation frameworks (AFs) or, more recently, on using AFs, possibly automatically drawn from other systems, for supporting forms of XAI. In this short paper we focus instead on the problem of learning AFs from data, with a focus on neuro-symbolic approaches. Specifically, we overview existing forms of neuro-argumentative (machine) learning, resulting from a combination of neural machine learning mechanisms and argumentative (symbolic) reasoning. We include in our overview neuro-symbolic paradigms that integrate reasoners with a natural understanding in argumentative terms, notably those capturing forms of non-monotonic reasoning in logic programming. We also outline avenues and challenges for future work in this spectrum.
-
Conference paper: Jiang J, Leofante F, Rago A, et al., 2023, "Formalising the robustness of counterfactual explanations for neural networks", 37th AAAI Conference on Artificial Intelligence (AAAI 2023), Publisher: Association for the Advancement of Artificial Intelligence, Pages: 14901-14909, ISSN: 2374-3468.
The use of counterfactual explanations (CFXs) is an increasingly popular explanation strategy for machine learning models. However, recent studies have shown that these explanations may not be robust to changes in the underlying model (e.g., following retraining), which raises questions about their reliability in real-world applications. Existing attempts towards solving this problem are heuristic, and the robustness to model changes of the resulting CFXs is evaluated with only a small number of retrained models, failing to provide exhaustive guarantees. To remedy this, we propose the first notion to formally and deterministically assess the robustness (to model changes) of CFXs for neural networks, that we call ∆-robustness. We introduce an abstraction framework based on interval neural networks to verify the ∆-robustness of CFXs against a possibly infinite set of changes to the model parameters, i.e., weights and biases. We then demonstrate the utility of this approach in two distinct ways. First, we analyse the ∆-robustness of a number of CFX generation methods from the literature and show that they unanimously host significant deficiencies in this regard. Second, we demonstrate how embedding ∆-robustness within existing methods can provide CFXs which are provably robust.
-
Conference paper: Potyka N, Yin X, Toni F, 2023, "Explaining random forests using bipolar argumentation and Markov networks", AAAI 23, Pages: 9458-9460, ISSN: 2159-5399.
Random forests are decision tree ensembles that can be used to solve a variety of machine learning problems. However, as the number of trees and their individual size can be large, their decision making process is often incomprehensible. We show that their decision process can be naturally represented as an argumentation problem, which allows creating global explanations via argumentative reasoning. We generalize sufficient and necessary argumentative explanations using a Markov network encoding, discuss the relevance of these explanations and establish relationships to families of abductive explanations from the literature. As the complexity of the explanation problems is high, we present an efficient approximation algorithm with probabilistic approximation guarantees.
-
Conference paper: Nguyen H-T, Goebel R, Toni F, et al., 2023, "How well do SOTA legal reasoning models support abductive reasoning?", Logic Programming and Legal Reasoning Workshop @ ICLP 2023.
We examine how well the state-of-the-art (SOTA) models used in legal reasoning support abductive reasoning tasks. Abductive reasoning is a form of logical inference in which a hypothesis is formulated from a set of observations, and that hypothesis is used to explain the observations. The ability to formulate such hypotheses is important for lawyers and legal scholars as it helps them articulate logical arguments, interpret laws, and develop legal theories. Our motivation is to consider the belief that deep learning models, especially large language models (LLMs), will soon replace lawyers because they perform well on tasks related to legal text processing. But doing so, we believe, requires some form of abductive hypothesis formation. In other words, while LLMs become more popular and powerful, we want to investigate their capacity for abductive reasoning. To pursue this goal, we start by building a logic-augmented dataset for abductive reasoning with 498,697 samples and then use it to evaluate the performance of a SOTA model in the legal field. Our experimental results show that although these models can perform well on tasks related to some aspects of legal text processing, they still fall short in supporting abductive reasoning tasks.
-
Journal article: Lertvittayakumjorn P, Toni F, 2023, "Argumentative explanations for pattern-based text classifiers", Argument and Computation, Vol: 14, Pages: 163-234, ISSN: 1946-2174.
Recent works in Explainable AI mostly address the transparency issue of black-box models or create explanations for any kind of model (i.e., they are model-agnostic), while leaving explanations of interpretable models largely underexplored. In this paper, we fill this gap by focusing on explanations for a specific interpretable model, namely pattern-based logistic regression (PLR) for binary text classification. We do so because, albeit interpretable, PLR is challenging when it comes to explanations. In particular, we found that a standard way to extract explanations from this model does not consider relations among the features, making the explanations hardly plausible to humans. Hence, we propose AXPLR, a novel explanation method using (forms of) computational argumentation to generate explanations (for outputs computed by PLR) which unearth model agreements and disagreements among the features. Specifically, we use computational argumentation as follows: we see features (patterns) in PLR as arguments in a form of quantified bipolar argumentation frameworks (QBAFs) and extract attacks and supports between arguments based on specificity of the arguments; we understand logistic regression as a gradual semantics for these QBAFs, used to determine the arguments' dialectic strength; and we study standard properties of gradual semantics for QBAFs in the context of our argumentative re-interpretation of PLR, sanctioning its suitability for explanatory purposes. We then show how to extract intuitive explanations (for outputs computed by PLR) from the constructed QBAFs. Finally, we conduct an empirical evaluation and two experiments in the context of human-AI collaboration to demonstrate the advantages of our resulting AXPLR method.
-
Journal article: Rago A, Russo F, Albini E, et al., 2023, "Explaining classifiers' outputs with causal models and argumentation", Journal of Applied Logics, Vol: 10, Pages: 421-449, ISSN: 2631-9810.
We introduce a conceptualisation for generating argumentation frameworks (AFs) from causal models for the purpose of forging explanations for models' outputs. The conceptualisation is based on reinterpreting properties of semantics of AFs as explanation moulds, which are means for characterising argumentative relations. We demonstrate our methodology by reinterpreting the property of bi-variate reinforcement in bipolar AFs, showing how the extracted bipolar AFs may be used as relation-based explanations for the outputs of causal models. We then evaluate our method empirically when the causal models represent (Bayesian and neural network) machine learning models for classification. The results show advantages over a popular approach from the literature, both in highlighting specific relationships between feature and classification variables and in generating counterfactual explanations with respect to a commonly used metric.
-
Journal article: Albini E, Rago A, Baroni P, et al., 2023, "Achieving descriptive accuracy in explanations via argumentation: the case of probabilistic classifiers", Frontiers in Artificial Intelligence, Vol: 6, Pages: 1-18, ISSN: 2624-8212.
The pursuit of trust in and fairness of AI systems in order to enable human-centric goals has been gathering pace of late, often supported by the use of explanations for the outputs of these systems. Several properties of explanations have been highlighted as critical for achieving trustworthy and fair AI systems, but one that has thus far been overlooked is that of descriptive accuracy (DA), i.e., that the explanation contents are in correspondence with the internal working of the explained system. Indeed, the violation of this core property would lead to the paradoxical situation of systems producing explanations which are not suitably related to how the system actually works: clearly this may hinder user trust. Further, if explanations violate DA then they can be deceitful, resulting in unfair behavior toward the users. Crucial as the DA property appears to be, it has been somehow overlooked in the XAI literature to date. To address this problem, we consider the questions of formalizing DA and of analyzing its satisfaction by explanation methods. We provide formal definitions of naive, structural and dialectical DA, using the family of probabilistic classifiers as the context for our analysis. We evaluate the satisfaction of our given notions of DA by several explanation methods, amounting to two popular feature-attribution methods from the literature, variants thereof and a novel form of explanation that we propose. We conduct experiments with a varied selection of concrete probabilistic classifiers and highlight the importance, with a user study, of our most demanding notion of dialectical DA, which our novel method satisfies by design and others may violate. We thus demonstrate how DA could be a critical component in achieving trustworthy and fair systems, in line with the principles of human-centric AI.
-
Conference paper: Santhirasekaram A, Kori A, Winkler M, et al., 2023, "Robust Hierarchical Symbolic Explanations in Hyperbolic Space for Image Classification", Computer Vision and Pattern Recognition.
-
Conference paper: Jiang J, Lan J, Leofante F, et al., 2023, "Provably Robust and Plausible Counterfactual Explanations for Neural Networks via Robust Optimisation", Publisher: PMLR, Pages: 582-597.
-
Conference paper: Nguyen HT, Goebel R, Toni F, et al., 2023, "LawGiBa – Combining GPT, knowledge bases, and logic programming in a legal assistance system", JURIX 2023: The Thirty-sixth Annual Conference, Maastricht, the Netherlands, 18–20 December 2023, Publisher: IOS Press, Pages: 371-374, ISSN: 0922-6389.
We present LawGiBa, a proof-of-concept demonstration system for legal assistance that combines GPT, legal knowledge bases, and Prolog's logic programming structure to provide explanations for legal queries. This novel combination effectively and feasibly addresses the hallucination issue of large language models (LLMs) in critical domains, such as law. Through this system, we demonstrate how incorporating a legal knowledge base and logical reasoning can enhance the accuracy and reliability of legal advice provided by AI models like GPT. Though our work is primarily a demonstration, it provides a framework to explore how knowledge bases and logic programming structures can be further integrated with generative AI systems, to achieve improved results across various natural languages and legal systems.
-
Conference paper: Zhang K, Toni F, Williams M, 2022, "A federated Cox model with non-proportional hazards", The 6th International Workshop on Health Intelligence, Publisher: Springer, Pages: 171-185, ISSN: 1860-949X.
Recent research has shown the potential for neural networks to improve upon classical survival models such as the Cox model, which is widely used in clinical practice. Neural networks, however, typically rely on data that are centrally available, whereas healthcare data are frequently held in secure silos. We present a federated Cox model that accommodates this data setting and also relaxes the proportional hazards assumption, allowing time-varying covariate effects. In this latter respect, our model does not require explicit specification of the time-varying effects, reducing upfront organisational costs compared to previous works. We experiment with publicly available clinical datasets and demonstrate that the federated model is able to perform as well as a standard model.
-
Conference paper: Albini E, Rago A, Baroni P, et al., 2022, "Descriptive accuracy in explanations: the case of probabilistic classifiers", 15th International Conference on Scalable Uncertainty Management (SUM 2022), Publisher: Springer, Pages: 279-294.
A user receiving an explanation for outcomes produced by an artificially intelligent system expects that it satisfies the key property of descriptive accuracy (DA), i.e. that the explanation contents are in correspondence with the internal working of the system. Crucial as this property appears to be, it has been somehow overlooked in the XAI literature to date. To address this problem, we consider the questions of formalising DA and of analysing its satisfaction by explanation methods. We provide formal definitions of naive, structural and dialectical DA, using the family of probabilistic classifiers as the context for our analysis. We evaluate the satisfaction of our given notions of DA by several explanation methods, amounting to two popular feature-attribution methods from the literature and a novel form of explanation that we propose, and complement our analysis with experiments carried out on a varied selection of concrete probabilistic classifiers.
-
Conference paper: Proietti M, Toni F, 2022, "Learning assumption-based argumentation frameworks", 31st International Conference on Inductive Logic Programming (ILP 2022).
We propose a novel approach to logic-based learning which generates assumption-based argumentation (ABA) frameworks from positive and negative examples, using a given background knowledge. These ABA frameworks can be mapped onto logic programs with negation as failure that may be non-stratified. Whereas existing argumentation-based methods learn exceptions to general rules by interpreting the exceptions as rebuttal attacks, our approach interprets them as undercutting attacks. Our learning technique is based on the use of transformation rules, including some adapted from logic program transformation rules (notably folding) as well as others, such as rote learning and assumption introduction. We present a general strategy that applies the transformation rules in a suitable order to learn stratified frameworks, and we also propose a variant that handles the non-stratified case. We illustrate the benefits of our approach with a number of examples, which show that, on the one hand, we are able to easily reconstruct other logic-based learning approaches and, on the other hand, we can work out in a very simple and natural way problems that seem to be hard for existing techniques.
-
Conference paper: Potyka N, Yin X, Toni F, 2022, "On the tradeoff between correctness and completeness in argumentative explainable AI", 1st International Workshop on Argumentation for eXplainable AI, Publisher: CEUR Workshop Proceedings, Pages: 1-8, ISSN: 1613-0073.
Explainable AI aims at making the decisions of autonomous systems human-understandable. Argumentation frameworks are a natural tool for this purpose. Among them, bipolar abstract argumentation frameworks seem well suited to explain the effect of features on a classification decision, and their formal properties can potentially be used to derive formal guarantees for explanations. Two particularly interesting properties are correctness (if the explanation says that X affects Y, then X affects Y) and completeness (if X affects Y, then the explanation says that X affects Y). The reinforcement property of bipolar argumentation frameworks has been used as a natural correctness counterpart in previous work. Applied to the classification context, it basically states that attacking features should decrease and supporting features should increase the confidence of a classifier. In this short discussion paper, we revisit this idea, discuss potential limitations when considering reinforcement without a corresponding completeness property and how these limitations can potentially be overcome.
-
Conference paper: Ward F, Toni F, Belardinelli F, 2022, "A causal perspective on AI deception in games", AI Safety 2022 (IJCAI-ECAI-22), Publisher: CEUR Workshop Proceedings, Pages: 1-16.
Deception is a core challenge for AI safety and we focus on the problem that AI agents might learn deceptive strategies in pursuit of their objectives. We define the incentives one agent has to signal to and deceive another agent. We present several examples of deceptive artificial agents and show that our definition has desirable properties.
-
Conference paper: Rago A, Baroni P, Toni F, 2022, "Explaining causal models with argumentation: the case of bi-variate reinforcement", 19th International Conference on Principles of Knowledge Representation and Reasoning (KR 2022), Publisher: IJCAI Organisation, Pages: 505-509, ISSN: 2334-1033.
Causal models are playing an increasingly important role in machine learning, particularly in the realm of explainable AI. We introduce a conceptualisation for generating argumentation frameworks (AFs) from causal models for the purpose of forging explanations for the models' outputs. The conceptualisation is based on reinterpreting desirable properties of semantics of AFs as explanation moulds, which are means for characterising the relations in the causal model argumentatively. We demonstrate our methodology by reinterpreting the property of bi-variate reinforcement as an explanation mould to forge bipolar AFs as explanations for the outputs of causal models. We perform a theoretical evaluation of these argumentative explanations, examining whether they satisfy a range of desirable explanatory and argumentative properties.
-
Conference paper: Irwin B, Rago A, Toni F, 2022, "Forecasting argumentation frameworks", 19th International Conference on Principles of Knowledge Representation and Reasoning (KR 2022), Publisher: IJCAI Organisation, Pages: 533-543, ISSN: 2334-1033.
We introduce Forecasting Argumentation Frameworks (FAFs), a novel argumentation-based methodology for forecasting informed by recent judgmental forecasting research. FAFs comprise update frameworks which empower (human or artificial) agents to argue over time about the probability of outcomes, e.g. the winner of a political election or a fluctuation in inflation rates, whilst flagging perceived irrationality in the agents' behaviour with a view to improving their forecasting accuracy. FAFs include five argument types, amounting to standard pro/con arguments, as in bipolar argumentation, as well as novel proposal arguments and increase/decrease amendment arguments. We adapt an existing gradual semantics for bipolar argumentation to determine the aggregated dialectical strength of proposal arguments and define irrational behaviour. We then give a simple aggregation function which produces a final group forecast from rational agents' individual forecasts. We identify and study properties of FAFs and conduct an empirical evaluation which signals FAFs' potential to increase the forecasting accuracy of participants.
-
Conference paper: Jiang J, Rago A, Toni F, 2022, "Should counterfactual explanations always be data instances?", XLoKR 2022: The Third Workshop on Explainable Logic-Based Knowledge Representation.
Counterfactual explanations (CEs) are an increasingly popular way of explaining machine learning classifiers. Predominantly, they amount to data instances pointing to potential changes to the inputs that would lead to alternative outputs. In this position paper we question the widespread assumption that CEs should always be data instances, and argue instead that in some cases they may be better understood in terms of special types of relations between input features and classification variables. We illustrate how a special type of these relations, amounting to critical influences, can characterise and guide the search for data instances deemed suitable as CEs. These relations also provide compact indications of which input features - rather than their specific values in data instances - have counterfactual value.
-
Conference paper: Gaskell A, Miao Y, Toni F, et al., 2022, "Logically consistent adversarial attacks for soft theorem provers", 31st International Joint Conference on Artificial Intelligence and the 25th European Conference on Artificial Intelligence, Publisher: International Joint Conferences on Artificial Intelligence, Pages: 4129-4135.
Recent efforts within the AI community have yielded impressive results towards "soft theorem proving" over natural language sentences using language models. We propose a novel, generative adversarial framework for probing and improving these models' reasoning capabilities. Adversarial attacks in this domain suffer from the logical inconsistency problem, whereby perturbations to the input may alter the label. Our Logically consistent AdVersarial Attacker, LAVA, addresses this by combining a structured generative process with a symbolic solver, guaranteeing logical consistency. Our framework successfully generates adversarial attacks and identifies global weaknesses common across multiple target models. Our analyses reveal naive heuristics and vulnerabilities in these models' reasoning capabilities, exposing an incomplete grasp of logical deduction under logic programs. Finally, in addition to effective probing of these models, we show that training on the generated samples improves the target model's performance.
-
Conference paper: Sukpanichnant P, Rago A, Lertvittayakumjorn P, et al., 2022, "Neural QBAFs: explaining neural networks under LRP-based argumentation frameworks", International Conference of the Italian Association for Artificial Intelligence, Publisher: Springer International Publishing, Pages: 429-444, ISSN: 0302-9743.
In recent years, there have been many attempts to combine XAI with the field of symbolic AI in order to generate explanations for neural networks that are more interpretable and better align with human reasoning, with one prominent candidate for this synergy being the sub-field of computational argumentation. One method is to represent neural networks with quantitative bipolar argumentation frameworks (QBAFs) equipped with a particular semantics. The resulting QBAF can then be viewed as an explanation for the associated neural network. In this paper, we explore a novel LRP-based semantics under a new QBAF variant, namely neural QBAFs (nQBAFs). Since an nQBAF of a neural network is typically large, the nQBAF must be simplified before being used as an explanation. Our empirical evaluation indicates that the manner of this simplification is all important for the quality of the resulting explanation.
-
Conference paper: Ward F, Belardinelli F, Toni F, 2022, "Argumentative Reward Learning: Reasoning About Human Preferences", HMCaT 2022 (ICML).