Search results

  • Conference paper
    Vasileiou S, Kumar A, Yeoh W, Son TC, Toni F et al., 2024, Dialectical reconciliation via structured argumentative dialogues, KR 2024

    We present a novel framework designed to extend model reconciliation approaches, commonly used in human-aware planning, for enhanced human-AI interaction. By adopting a structured argumentation-based dialogue paradigm, our framework enables dialectical reconciliation to address knowledge discrepancies between an explainer (AI agent) and an explainee (human user), where the goal is for the explainee to understand the explainer's decision. We formally describe the operational semantics of our proposed framework, providing theoretical guarantees. We then evaluate the framework's efficacy "in the wild" via computational and human-subject experiments. Our findings suggest that our framework offers a promising direction for fostering effective human-AI interactions in domains where explainability is important.

  • Journal article
    Kampik T, Potyka N, Yin X, Čyras K, Toni F et al., 2024, Contribution functions for quantitative bipolar argumentation graphs: a principle-based analysis, International Journal of Approximate Reasoning, Vol: 173, ISSN: 0888-613X

    We present a principle-based analysis of contribution functions for quantitative bipolar argumentation graphs that quantify the contribution of one argument to another. The introduced principles formalise the intuitions underlying different contribution functions as well as expectations one would have regarding the behaviour of contribution functions in general. As none of the covered contribution functions satisfies all principles, our analysis can serve as a tool that enables the selection of the most suitable function based on the requirements of a given use case.
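
    For concreteness, one simple contribution function in this space is removal-based, scoring an argument by the change in the topic argument's strength when it is removed (the notation below is ours, for illustration, and need not match the paper's definitions):

    \[ \mathrm{Cont}(a, b) \;=\; \sigma_{G}(b) \;-\; \sigma_{G \setminus \{a\}}(b) \]

    where \(\sigma_G(b)\) is the strength of argument \(b\) in the quantitative bipolar argumentation graph \(G\) under a chosen gradual semantics, and \(G \setminus \{a\}\) is \(G\) with \(a\) and its incident attacks and supports removed. Principles can then be checked against such a definition, e.g. a directionality-style principle requiring \(\mathrm{Cont}(a, b) = 0\) whenever there is no directed path from \(a\) to \(b\).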

  • Conference paper
    Rapberger A, Toni F, 2024, On the robustness of argumentative explanations, 10th International Conference on Computational Models of Argument (COMMA 2024), Publisher: IOS Press, Inc., Pages: 217-228

    The field of explainable AI has grown exponentially in recent years. Within this landscape, argumentation frameworks have been shown to be helpful abstractions of some AI models towards providing explanations thereof. While existing work on argumentative explanations and their properties has focused on static settings, we focus on dynamic settings whereby the (AI models underpinning the) argumentation frameworks need to change. Specifically, for a number of notions of explanations drawn from abstract argumentation frameworks under extension-based semantics, we address the following questions: (1) Are explanations robust to extension-preserving changes, in the sense that they are still valid when the changes do not modify the extensions? (2) If not, are these explanations pseudo-robust, in that they can be tractably updated? In this paper, we frame these questions formally. We consider robustness and pseudo-robustness w.r.t. ordinary and strong equivalence and provide several results for various extension-based semantics.

  • Conference paper
    Lehtonen T, Rapberger A, Toni F, Ulbricht M, Wallner JP et al., 2024, On computing admissibility in ABA, 10th International Conference on Computational Models of Argument (COMMA 2024), Publisher: IOS Press, Inc., Pages: 121-132

    Most existing computational tools for assumption-based argumentation (ABA) focus on so-called flat frameworks, disregarding the more general case. Here, we study an instantiation-based approach for reasoning in possibly non-flat ABA. For complete-based semantics, an approach of this kind was recently introduced, based on a semantics-preserving translation between ABA and bipolar argumentation frameworks (BAFs). Admissible semantics, however, require us to consider an extension of BAFs which also makes use of premises of arguments (pBAFs). We explore basic properties of pBAFs which we require as a theoretical underpinning for our proposed instantiation-based solver for non-flat ABA under admissible semantics. As our empirical evaluation shows, depending on the ABA instances, the instantiation-based solver is competitive against an ASP-based approach implemented in the style of state-of-the-art solvers for hard argumentation problems.

  • Conference paper
    Ayoobi H, Potyka N, Toni F, 2024, Argumentative interpretable image classification, 2nd International Workshop on Argumentation for eXplainable AI co-located with the 10th International Conference on Computational Models of Argument (COMMA 2024), Publisher: CEUR Workshop Proceedings, Pages: 3-15, ISSN: 1613-0073

    We propose ProtoSpArX, a novel interpretable deep neural architecture for image classification in the spirit of prototypical-part-learning as found, e.g. in ProtoPNet. While earlier approaches associate every class with multiple prototypical-parts, ProtoSpArX uses super-prototypes that combine prototypical-parts into single class representations. Furthermore, while earlier approaches use interpretable classification layers, e.g. logistic regression in ProtoPNet, ProtoSpArX improves accuracy with multi-layer perceptrons while relying upon an interpretable reading thereof based on a form of argumentation. ProtoSpArX is customisable to user cognitive requirements by a process of sparsification of the multi-layer perceptron/argumentation component. Also, as opposed to other prototypical-part-learning approaches, ProtoSpArX can recognise spatial relations between different prototypical-parts that are from various regions in images, similar to how CNNs capture relations between patterns recognized in earlier layers.

  • Conference paper
    Rapberger A, Ulbricht M, Toni F, 2024, On the correspondence of non-flat assumption-based argumentation and logic programming with negation as failure in the head, 22nd International Workshop on Nonmonotonic Reasoning (NMR 24), Publisher: CEUR Workshop Proceedings, ISSN: 1613-0073

    The relation between (a fragment of) assumption-based argumentation (ABA) and logic programs (LPs) under stable model semantics is well-studied. However, for obtaining this relation, the ABA framework needs to be restricted to being flat, i.e., a fragment where the (defeasible) assumptions can never be entailed, only assumed to be true or false. Here, we remove this restriction and show a correspondence between non-flat ABA and LPs with negation as failure in their head. We then extend this result to so-called set-stable ABA semantics, originally defined for the fragment of non-flat ABA called bipolar ABA. We showcase how to define set-stable semantics for LPs with negation as failure in their head and show the correspondence to set-stable ABA semantics.
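
    As a toy illustration of the correspondence (ours, not drawn from the paper): in the standard reading of LPs into flat ABA, a rule \(p \leftarrow q, \mathit{not}\,r\) becomes the ABA rule \(p \leftarrow q, \alpha_r\), where \(\alpha_r\) is an assumption whose contrary is \(r\). A rule with negation as failure in its head, such as \(\mathit{not}\,r \leftarrow q\), would accordingly become \(\alpha_r \leftarrow q\), i.e. a rule deriving an assumption, which is exactly what flatness forbids and non-flat ABA permits.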

  • Conference paper
    Battaglia E, Baroni P, Rago A, Toni F et al., 2024, Integrating user preferences into gradual bipolar argumentation for personalised decision support, Scalable Uncertainty Management, 16th International Conference (SUM 2024), Publisher: Springer, ISSN: 1611-3349

    Gradual bipolar argumentation has been shown to be an effective means for supporting decisions across a number of domains. Individual user preferences can be integrated into the domain knowledge represented by such argumentation frameworks and should be taken into account in order to provide personalised decision support. This however requires the definition of a suitable method to handle user-provided preferences in gradual bipolar argumentation, which has not been considered in previous literature. Towards filling this gap, we develop a conceptual analysis on the role of preferences in argumentation and investigate some basic principles concerning the effects they should have on the evaluation of strength in gradual argumentation semantics. We illustrate an application of our approach in the context of a review aggregation system, which has been enhanced with the ability to produce personalised outcomes based on user preferences.

  • Conference paper
    Sukpanichnant P, Rapberger A, Toni F, 2024, PeerArg: argumentative peer review with LLMs, First International Workshop on Next-Generation Language Models for Knowledge Representation and Reasoning (NeLaMKRR 2024)

    Peer review is an essential process to determine the quality of papers submitted to scientific conferences or journals. However, it is subjective and prone to biases. Several studies have been conducted to apply techniques from NLP to support peer review, but they are based on black-box techniques and their outputs are difficult to interpret and trust. In this paper, we propose a novel pipeline to support and understand the reviewing and decision-making processes of peer review: the PeerArg system combining LLMs with methods from knowledge representation. PeerArg takes as input a set of reviews for a paper and outputs the paper acceptance prediction. We evaluate the performance of the PeerArg pipeline on three different datasets, in comparison with a novel end-2-end LLM that uses few-shot learning to predict paper acceptance given reviews. The results indicate that the end-2-end LLM is capable of predicting paper acceptance from reviews, but a variant of the PeerArg pipeline outperforms this LLM.

  • Conference paper
    Oluokun B, Paulino Passos G, Rago A, Toni F et al., 2024, Predicting Human Judgement in Online Debates with Argumentation, The 24th International Workshop on Computational Models of Natural Argument (CMNA’24)

  • Conference paper
    Yin X, Potyka N, Toni F, 2024, Applying attribution explanations in truth-discovery quantitative bipolar argumentation frameworks, 2nd International Workshop on Argumentation for eXplainable AI (ArgXAI) co-located with 10th International Conference on Computational Models of Argument (COMMA 2024), Publisher: CEUR Workshop Proceedings, ISSN: 1613-0073

  • Conference paper
    Yin X, Potyka N, Toni F, 2024, Explaining arguments’ strength: unveiling the role of attacks and supports, IJCAI 2024, the 33rd International Joint Conference on Artificial Intelligence, Publisher: International Joint Conferences on Artificial Intelligence, Pages: 3622-3630

    Quantitatively explaining the strength of arguments under gradual semantics has recently received increasing attention. Specifically, several works in the literature provide quantitative explanations by computing the attribution scores of arguments. These works disregard the importance of attacks and supports, even though they play an essential role when explaining arguments' strength. In this paper, we propose a novel theory of Relation Attribution Explanations (RAEs), adapting Shapley values from game theory to offer fine-grained insights into the role of attacks and supports in quantitative bipolar argumentation towards obtaining the arguments' strength. We show that RAEs satisfy several desirable properties. We also propose a probabilistic algorithm to approximate RAEs efficiently. Finally, we show the application value of RAEs in fraud detection and large language models case studies.
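
    To sketch the game-theoretic idea (our transcription of the standard Shapley value to relations; the paper's exact definition may differ): with \(\mathcal{R}\) the set of attacks and supports and \(\sigma_t(S)\) the strength of topic argument \(t\) when only the relations in \(S \subseteq \mathcal{R}\) are retained, the attribution of a relation \(r\) takes the form

    \[ \mathrm{RAE}_t(r) \;=\; \sum_{S \subseteq \mathcal{R} \setminus \{r\}} \frac{|S|!\,(|\mathcal{R}|-|S|-1)!}{|\mathcal{R}|!} \bigl( \sigma_t(S \cup \{r\}) - \sigma_t(S) \bigr). \]

    The sum ranges over exponentially many relation subsets, which is what motivates a probabilistic approximation such as the one mentioned in the abstract: sampling random permutations of \(\mathcal{R}\) and averaging marginal contributions gives an unbiased estimate.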

  • Conference paper
    Freedman G, Toni F, 2024, Detecting scientific fraud using argument mining, ArgMining@ACL2024, Publisher: Association for Computational Linguistics, Pages: 15-28

    The proliferation of fraudulent scientific research in recent years has precipitated a greater interest in more effective methods of detection. There are many varieties of academic fraud, but a particularly challenging type to detect is the use of paper mills and the faking of peer-review. To the best of our knowledge, there have so far been no attempts to automate this process. The complexity of this issue precludes the use of heuristic methods, like pattern-matching techniques, which are employed for other types of fraud. Our proposed method in this paper uses techniques from the Computational Argumentation literature (i.e. argument mining and argument quality evaluation). Our central hypothesis stems from the assumption that articles that have not been subject to the proper level of scrutiny will contain poorly formed and reasoned arguments, relative to legitimately published papers. We use a variety of corpora to test this approach, including a collection of abstracts taken from retracted papers. We show significant improvement compared to a number of baselines, suggesting that this approach merits further investigation.

  • Conference paper
    Leofante F, Ayoobi H, Dejl A, Freedman G, Gorur D, Jiang J, Paulino Passos G, Rago A, Rapberger A, Russo F, Yin X, Zhang D, Toni F et al., 2024, Contestable AI needs Computational Argumentation, KR 2024, Publisher: KR Organization

    AI has become pervasive in recent years, but state-of-the-art approaches predominantly neglect the need for AI systems to be contestable. Yet contestability is advocated by AI guidelines (e.g. by the OECD) and regulation of automated decision-making (e.g. GDPR). In this position paper we explore how contestability can be achieved computationally in and for AI. We argue that contestable AI requires dynamic (human-machine and/or machine-machine) explainability and decision-making processes, whereby machines can (i) interact with humans and/or other machines to progressively explain their outputs and/or their reasoning as well as assess grounds for contestation provided by these humans and/or other machines, and (ii) revise their decision-making processes to redress any issues successfully raised during contestation. Given that much of the current AI landscape is tailored to static AIs, the need to accommodate contestability will require a radical rethinking that, we argue, computational argumentation is ideally suited to support.

  • Conference paper
    Russo F, Rapberger A, Toni F, 2024, Argumentative Causal Discovery, The 21st International Conference on Knowledge Representation and Reasoning (KR-2024)

  • Conference paper
    Yin X, Potyka N, Toni F, 2024, CE-QArg: Counterfactual explanations for quantitative bipolar argumentation frameworks, 21st International Conference on Principles of Knowledge Representation and Reasoning, Publisher: International Joint Conferences on Artificial Intelligence Organization

    There is a growing interest in understanding arguments’ strength in Quantitative Bipolar Argumentation Frameworks (QBAFs). Most existing studies focus on attribution-based methods that explain an argument’s strength by assigning importance scores to other arguments but fail to explain how to change the current strength to a desired one. To solve this issue, we introduce counterfactual explanations for QBAFs. We discuss problem variants and propose an iterative algorithm named Counterfactual Explanations for Quantitative bipolar Argumentation frameworks (CE-QArg). CE-QArg can identify valid and cost-effective counterfactual explanations based on two core modules, polarity and priority, which help determine the updating direction and magnitude for each argument, respectively. We discuss some formal properties of our counterfactual explanations and empirically evaluate CE-QArg on randomly generated QBAFs.
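
    A minimal sketch of this kind of iterative counterfactual search (in Python; the update rule below is illustrative and is not the paper's CE-QArg algorithm: it estimates each argument's direction and magnitude of influence by finite differences, whereas CE-QArg has dedicated polarity and priority modules):

```python
# Illustrative counterfactual search over base scores in a QBAF.
# `strength(base, topic)` is any gradual-semantics evaluator mapping a dict
# of base scores to the topic argument's strength in [0, 1] (assumed given).

def counterfactual(base, topic, desired, strength,
                   step=0.05, tol=0.01, max_iter=500, eps=1e-3):
    base = dict(base)
    for _ in range(max_iter):
        cur = strength(base, topic)
        if abs(cur - desired) <= tol:
            return base  # base scores (approximately) realising the desired strength
        for a in base:
            if a == topic:
                continue
            # Finite-difference estimate of a's influence on the topic's strength.
            probe = dict(base)
            probe[a] = min(1.0, base[a] + eps)
            grad = (strength(probe, topic) - cur) / eps
            # Nudge a's base score so the topic's strength moves towards `desired`.
            base[a] = min(1.0, max(0.0, base[a] + step * grad * (desired - cur)))
    return None  # no counterfactual found within the iteration budget
```

    A cost-aware variant would additionally penalise the distance between original and updated base scores; this sketch omits any such consideration.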

  • Conference paper
    Gould A, Paulino Passos G, Dadhania S, Williams M, Toni F et al., 2024, Preference-Based Abstract Argumentation for Case-Based Reasoning, International Conference on Principles of Knowledge Representation and Reasoning

    In the pursuit of enhancing the efficacy and flexibility of interpretable, data-driven classification models, this work introduces a novel incorporation of user-defined preferences with Abstract Argumentation and Case-Based Reasoning (CBR). Specifically, we introduce Preference-Based Abstract Argumentation for Case-Based Reasoning (which we call AA-CBR-P), allowing users to define multiple approaches to compare cases with an ordering that specifies their preference over these comparison approaches. We prove that the model inherently follows these preferences when making predictions and show that previous abstract argumentation for case-based reasoning approaches are insufficient at expressing preferences over constituents of an argument. We then demonstrate how this can be applied to a real-world medical dataset sourced from a clinical trial evaluating differing assessment methods of patients with a primary brain tumour. We show empirically that our approach outperforms other interpretable machine learning models on this dataset.

  • Conference paper
    Proietti M, Toni F, De Angelis E, 2024, Learning Brave Assumption-Based Argumentation Frameworks via ASP, ECAI

  • Conference paper
    Vasileiou SL, Kumar A, Yeoh W, Son TC, Toni F et al., 2024, DR-HAI: argumentation-based dialectical reconciliation in human-AI interactions, IJCAI 2023

    In this paper, we introduce DR-HAI – a novel argumentation-based framework designed to extend model reconciliation approaches, commonly used in explainable AI planning, for enhanced human-AI interaction. By adopting a multi-shot reconciliation paradigm and not assuming a-priori knowledge of the human user’s model, DR-HAI enables interactive reconciliation to address knowledge discrepancies between an explainer and an explainee. We formally describe the operational semantics of DR-HAI, and provide theoretical guarantees related to termination and success.

  • Conference paper
    Jiang J, Rago A, Leofante F, Toni F et al., 2024, Recourse under model multiplicity via argumentative ensembling, The 23rd International Conference on Autonomous Agents and Multi-Agent Systems, Publisher: ACM, Pages: 954-963

    Model Multiplicity (MM) arises when multiple, equally performing machine learning models can be trained to solve the same prediction task. Recent studies show that models obtained under MM may produce inconsistent predictions for the same input. When this occurs, it becomes challenging to provide counterfactual explanations (CEs), a common means for offering recourse recommendations to individuals negatively affected by models’ predictions. In this paper, we formalise this problem, which we name recourse-aware ensembling, and identify several desirable properties which methods for solving it should satisfy. We demonstrate that existing ensembling methods, naturally extended in different ways to provide CEs, fail to satisfy these properties. We then introduce argumentative ensembling, deploying computational argumentation as a means to guarantee robustness of CEs to MM, while also accommodating customisable user preferences. We show theoretically and experimentally that argumentative ensembling is able to satisfy properties which the existing methods lack, and that the trade-offs are minimal wrt the ensemble’s accuracy.

  • Conference paper
    Jiang J, Leofante F, Rago A, Toni F et al., 2024, Robust Counterfactual Explanations in Machine Learning: A Survey, The 33rd International Joint Conference on Artificial Intelligence, IJCAI 2024

  • Journal article
    Domínguez J, Prociuk D, Marović B, Čyras K, Cocarascu O, Ruiz F, Mi E, Mi E, Ramtale C, Rago A, Darzi A, Toni F, Curcin V, Delaney B et al., 2024, ROAD2H: development and evaluation of an open-source explainable artificial intelligence approach for managing co-morbidity and clinical guidelines, Learning Health Systems, Vol: 8, ISSN: 2379-6146

    Introduction: Clinical decision support (CDS) systems (CDSSs) that integrate clinical guidelines need to reflect real-world co-morbidity. In patient-specific clinical contexts, transparent recommendations that allow for contraindications and other conflicts arising from co-morbidity are a requirement. In this work, we develop and evaluate a non-proprietary, standards-based approach to the deployment of computable guidelines with explainable argumentation, integrated with a commercial electronic health record (EHR) system in Serbia, a middle-income country in the West Balkans.

    Methods: We used an ontological framework, the Transition-based Medical Recommendation (TMR) model, to represent, and reason about, guideline concepts, and chose the 2017 International global initiative for chronic obstructive lung disease (GOLD) guideline and a Serbian hospital as the deployment and evaluation site, respectively. To mitigate potential guideline conflicts, we used a TMR-based implementation of the Assumption-Based Argumentation framework extended with preferences and Goals (ABA+G). Remote EHR integration of computable guidelines was via a microservice architecture based on HL7 FHIR and CDS Hooks. A prototype integration was developed to manage chronic obstructive pulmonary disease (COPD) with comorbid cardiovascular or chronic kidney diseases, and a mixed-methods evaluation was conducted with 20 simulated cases and five pulmonologists.

    Results: Pulmonologists agreed 97% of the time with the GOLD-based COPD symptom severity assessment assigned to each patient by the CDSS, and 98% of the time with one of the proposed COPD care plans. Comments were favourable on the principles of explainable argumentation; inclusion of additional co-morbidities was suggested in the future along with customisation of the level of explanation with expertise.

    Conclusion: An ontological model provided a flexible means of providing argumentation and explainable artificial intelligence for a long-term condition.

  • Conference paper
    Ulbricht M, Potyka N, Rapberger A, Toni F et al., 2024, Non-flat ABA is an instance of bipolar argumentation, The 38th Annual AAAI Conference on Artificial Intelligence, Publisher: AAAI, Pages: 10723-10731, ISSN: 2374-3468

    Assumption-based Argumentation (ABA) is a well-known structured argumentation formalism, whereby arguments and attacks between them are drawn from rules, defeasible assumptions and their contraries. A common restriction imposed on ABA frameworks (ABAFs) is that they are flat, i.e., each of the defeasible assumptions can only be assumed, but not derived. While it is known that flat ABAFs can be translated into abstract argumentation frameworks (AFs) as proposed by Dung, no translation exists from general, possibly non-flat ABAFs into any kind of abstract argumentation formalism. In this paper, we close this gap and show that bipolar AFs (BAFs) can instantiate general ABAFs. To this end we develop suitable, novel BAF semantics which borrow from the notion of deductive support. We investigate basic properties of our BAFs, including computational complexity, and prove the desired relation to ABAFs under several semantics.

  • Conference paper
    Zhang D, Williams M, Toni F, 2024, Targeted activation penalties help CNNs ignore spurious signals, The 38th Annual AAAI Conference on Artificial Intelligence, Publisher: AAAI, ISSN: 2159-5399

    Neural networks (NNs) can learn to rely on spurious signals in the training data, leading to poor generalisation. Recent methods tackle this problem by training NNs with additional ground-truth annotations of such signals. These methods may, however, let spurious signals re-emerge in deep convolutional NNs (CNNs). We propose Targeted Activation Penalty (TAP), a new method tackling the same problem by penalising activations to control the re-emergence of spurious signals in deep CNNs, while also lowering training times and memory usage. In addition, ground-truth annotations can be expensive to obtain. We show that TAP still works well with annotations generated by pre-trained models as effective substitutes of ground-truth annotations. We demonstrate the power of TAP against two state-of-the-art baselines on the MNIST benchmark and on two clinical image datasets, using four different CNN architectures.
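
    A minimal sketch of the general mechanism (assuming PyTorch; the penalty form is illustrative and not necessarily the paper's exact TAP formulation): the task loss is augmented with a penalty on feature-map activations falling inside regions annotated as spurious.

```python
import torch
import torch.nn.functional as F

def tap_style_loss(logits, targets, activations, spurious_mask, lam=1.0):
    """Cross-entropy plus a targeted penalty on masked activations (illustrative).

    logits:        (B, K) class scores.
    activations:   (B, C, H, W) feature maps from an intermediate CNN layer.
    spurious_mask: (B, 1, H, W) map marking spurious regions, resized to the
                   feature-map resolution (ground-truth or from a pre-trained model).
    """
    task = F.cross_entropy(logits, targets)
    penalty = (activations.abs() * spurious_mask).mean()
    return task + lam * penalty
```

    Penalising intermediate activations directly, rather than only input-level saliency, is what gives this style of method a handle on spurious signals re-emerging in deeper layers.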

  • Conference paper
    Kori A, Locatello F, De Sousa Ribeiro F, Toni F, Glocker B et al., 2024, Grounded Object-Centric Learning, International Conference on Learning Representations (ICLR)

  • Conference paper
    Paulino Passos G, Toni F, 2023, Learning case relevance in case-based reasoning with abstract argumentation, 36th International Conference on Legal Knowledge and Information Systems, Publisher: IOS Press, Pages: 95-100, ISSN: 0922-6389

    Case-based reasoning is known to play an important role in several legal settings. We focus on a recent approach to case-based reasoning, supported by an instantiation of abstract argumentation whereby arguments represent cases and attack between arguments results from outcome disagreement between cases and a notion of relevance. We explore how relevance can be learnt automatically with the help of decision trees, and explore the combination of case-based reasoning with abstract argumentation (AA-CBR) and learning of case relevance for prediction in legal settings. Specifically, we show that, for two legal datasets, AA-CBR with decision-tree-based learning of case relevance performs competitively in comparison with decision trees, and that AA-CBR with decision-tree-based learning of case relevance results in a more compact representation than its decision tree counterpart, which could facilitate cognitively tractable explanations.

  • Conference paper
    Jiang J, Lan J, Leofante F, Rago A, Toni F et al., 2023, Provably robust and plausible counterfactual explanations for neural networks via robust optimisation, The 15th Asian Conference on Machine Learning, Publisher: ML Research Press

    Counterfactual Explanations (CEs) have received increasing interest as a major methodology for explaining neural network classifiers. Usually, CEs for an input-output pair are defined as data points with minimum distance to the input that are classified with a different label than the output. To tackle the established problem that CEs are easily invalidated when model parameters are updated (e.g. retrained), studies have proposed ways to certify the robustness of CEs under model parameter changes bounded by a norm ball. However, existing methods targeting this form of robustness are not sound or complete, and they may generate implausible CEs, i.e., outliers wrt the training dataset. In fact, no existing method simultaneously optimises for closeness and plausibility while preserving robustness guarantees. In this work, we propose Provably RObust and PLAusible Counterfactual Explanations (PROPLACE), a method leveraging on robust optimisation techniques to address the aforementioned limitations in the literature. We formulate an iterative algorithm to compute provably robust CEs and prove its convergence, soundness and completeness. Through a comparative experiment involving six baselines, five of which target robustness, we show that PROPLACE achieves state-of-the-art performances against metrics on three evaluation aspects.

  • Conference paper
    Tirsi C-G, Proietti M, Toni F, 2023, ABALearn: an automated logic-based learning system for ABA frameworks, AIxIA 2023, Publisher: Springer Nature, ISSN: 1687-7470

    We introduce ABALearn, an automated algorithm that learns Assumption-Based Argumentation (ABA) frameworks from training data consisting of positive and negative examples, and a given background knowledge. ABALearn’s ability to generate comprehensible rules for decision-making promotes transparency and interpretability, addressing the challenges associated with the black-box nature of traditional machine learning models. This implementation is based on the strategy proposed in a previous work. The resulting ABA frameworks can be mapped onto logic programs with negation as failure. The main advantage of this algorithm is that it requires minimal information about the learning problem and it is also capable of learning circular debates. Our results show that this approach is competitive with state-of-the-art alternatives, demonstrating its potential to be used in real-world applications. Overall, this work contributes to the development of automated learning techniques for argumentation frameworks in the context of Explainable AI (XAI) and provides insights into how such learners can be applied to make predictions.

  • Conference paper
    Russo F, Toni F, 2023, Causal discovery and knowledge injection for contestable neural networks, 26th European Conference on Artificial Intelligence ECAI 2023, Publisher: IOS Press, Pages: 2025-2032, ISSN: 0922-6389

    Neural networks have proven to be effective at solving machine learning tasks but it is unclear whether they learn any relevant causal relationships, while their black-box nature makes it difficult for modellers to understand and debug them. We propose a novel method overcoming these issues by allowing a two-way interaction whereby neural-network-empowered machines can expose the underpinning learnt causal graphs and humans can contest the machines by modifying the causal graphs before re-injecting them into the machines, so that the learnt models are guaranteed to conform to the graphs and adhere to expert knowledge (some of which can also be given up-front). By building a window into the model behaviour and enabling knowledge injection, our method allows practitioners to debug networks based on the causal structure discovered from the data and underpinning the predictions. Experiments with real and synthetic tabular data show that our method improves predictive performance up to 2.4x while producing parsimonious networks, up to 7x smaller in the input layer, compared to SOTA regularised networks.

  • Conference paper
    Yin X, Potyka N, Toni F, 2023, Argument attribution explanations in quantitative bipolar argumentation frameworks, 26th European Conference on Artificial Intelligence ECAI 2023, Publisher: IOS Press, Pages: 2898-2905, ISSN: 0922-6389

    Argumentative explainable AI has been advocated by several in recent years, with an increasing interest on explaining the reasoning outcomes of Argumentation Frameworks (AFs). While there is a considerable body of research on qualitatively explaining the reasoning outcomes of AFs with debates/disputes/dialogues in the spirit of extension-based semantics, explaining the quantitative reasoning outcomes of AFs under gradual semantics has not received much attention, despite widespread use in applications. In this paper, we contribute to filling this gap by proposing a novel theory of Argument Attribution Explanations (AAEs) by incorporating the spirit of feature attribution from machine learning in the context of Quantitative Bipolar Argumentation Frameworks (QBAFs): whereas feature attribution is used to determine the influence of features towards outputs of machine learning models, AAEs are used to determine the influence of arguments towards topic arguments of interest. We study desirable properties of AAEs, including some new ones and some partially adapted from the literature to our setting. To demonstrate the applicability of our AAEs in practice, we conclude by carrying out two case studies in the scenarios of fake news detection and movie recommender systems.
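
    As a toy illustration of argument attribution (a simple removal-based variant in Python under the DF-QuAD gradual semantics; the paper's AAEs need not coincide with this, and the example QBAF is invented):

```python
# Removal-based attribution of arguments towards a topic in a toy acyclic QBAF.
base = {"t": 0.5, "a": 0.8, "b": 0.6, "c": 0.4}        # base scores
attackers = {"t": ["a"], "a": ["c"], "b": [], "c": []}  # attack relation
supporters = {"t": ["b"], "a": [], "b": [], "c": []}    # support relation

def strength(arg, removed=frozenset()):
    """DF-QuAD strength, computed recursively over the acyclic graph."""
    def agg(children):  # aggregate children's strengths: 1 - prod(1 - s)
        p = 1.0
        for x in children:
            if x not in removed:
                p *= 1.0 - strength(x, removed)
        return 1.0 - p
    va, vs, v0 = agg(attackers[arg]), agg(supporters[arg]), base[arg]
    # DF-QuAD combination: move v0 towards 0 (attack-heavy) or 1 (support-heavy).
    return v0 - v0 * (va - vs) if va >= vs else v0 + (1.0 - v0) * (vs - va)

def attribution(arg, topic="t"):
    """Influence of `arg` on the topic: strength change if `arg` is removed."""
    return strength(topic) - strength(topic, removed=frozenset({arg}))

print({x: round(attribution(x), 3) for x in ["a", "b", "c"]})
# Attacker "a" gets a negative score, supporter "b" a positive one, and "c"
# (which attacks the topic's attacker) a positive indirect one.
```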

  • Conference paper
    Rago A, Li H, Toni F, 2023, Interactive explanations by conflict resolution via argumentative exchanges, 20th International Conference on Principles of Knowledge Representation and Reasoning (KR2023), Publisher: IJCAI Organization, Pages: 582-592, ISSN: 2334-1033

    As the field of explainable AI (XAI) is maturing, calls for interactive explanations for (the outputs of) AI models are growing, but the state-of-the-art predominantly focuses on static explanations. In this paper, we focus instead on interactive explanations framed as conflict resolution between agents (i.e. AI models and/or humans) by leveraging on computational argumentation. Specifically, we define Argumentative eXchanges (AXs) for dynamically sharing, in multi-agent systems, information harboured in individual agents’ quantitative bipolar argumentation frameworks towards resolving conflicts amongst the agents. We then deploy AXs in the XAI setting in which a machine and a human interact about the machine’s predictions. We identify and assess several theoretical properties characterising AXs that are suitable for XAI. Finally, we instantiate AXs for XAI by defining various agent behaviours, e.g. capturing counterfactual patterns of reasoning in machines and highlighting the effects of cognitive biases in humans. We show experimentally (in a simulated environment) the comparative advantages of these behaviours in terms of conflict resolution, and show that the strongest argument may not always be the most effective.

