Search results

  • Journal article
    Kampik T, Potyka N, Yin X, Čyras K, Toni F et al., 2024,

    Contribution functions for quantitative bipolar argumentation graphs: a principle-based analysis

    International Journal of Approximate Reasoning, Vol: 173, ISSN: 0888-613X

    We present a principle-based analysis of contribution functions for quantitative bipolar argumentation graphs that quantify the contribution of one argument to another. The introduced principles formalise the intuitions underlying different contribution functions as well as expectations one would have regarding the behaviour of contribution functions in general. As none of the covered contribution functions satisfies all principles, our analysis can serve as a tool that enables the selection of the most suitable function based on the requirements of a given use case.
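
A hedged illustration (not taken from the paper above): the sketch below evaluates a tiny acyclic QBAF under a DF-QuAD-style gradual semantics and computes a simple removal-based contribution of one argument to a topic argument, i.e. the change in the topic's final strength when the contributor is removed. The example graph, base scores and all names are assumptions made for illustration.

```python
# Hedged sketch: removal-based contribution in a tiny acyclic QBAF,
# evaluated under a DF-QuAD-style gradual semantics. Illustrative only.

def aggregate(values):
    """Combine attacker or supporter strengths as 1 - prod(1 - v)."""
    result = 1.0
    for v in values:
        result *= (1.0 - v)
    return 1.0 - result

def strength(arg, base, attackers, supporters, cache=None):
    """Recursively evaluate an argument's final strength (acyclic QBAF)."""
    cache = {} if cache is None else cache
    if arg not in cache:
        va = aggregate(strength(a, base, attackers, supporters, cache)
                       for a in attackers.get(arg, []))
        vs = aggregate(strength(s, base, attackers, supporters, cache)
                       for s in supporters.get(arg, []))
        t = base[arg]
        cache[arg] = t - t * (va - vs) if va >= vs else t + (1 - t) * (vs - va)
    return cache[arg]

def removal_contribution(contributor, topic, base, attackers, supporters):
    """Contribution of `contributor` to `topic`: drop in strength on removal."""
    full = strength(topic, base, attackers, supporters)
    b = {a: v for a, v in base.items() if a != contributor}
    att = {a: [x for x in xs if x != contributor]
           for a, xs in attackers.items() if a != contributor}
    sup = {a: [x for x in xs if x != contributor]
           for a, xs in supporters.items() if a != contributor}
    return full - strength(topic, b, att, sup)

# Topic "a" (base score 0.5) is attacked by "b" (0.6) and supported by "c" (0.4).
base = {"a": 0.5, "b": 0.6, "c": 0.4}
attackers = {"a": ["b"]}
supporters = {"a": ["c"]}
print(removal_contribution("b", "a", base, attackers, supporters))  # ≈ -0.3 (attacker hurts the topic)
print(removal_contribution("c", "a", base, attackers, supporters))  # ≈ +0.2 (supporter helps the topic)
```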

  • Conference paper
    Russo F, Rapberger A, Toni F, 2024,

    Argumentative Causal Discovery

    The 21st International Conference on Knowledge Representation and Reasoning (KR-2024)
  • Conference paper
    Leofante F, Ayoobi H, Dejl A, Freedman G, Gorur D, Jiang J, Paulino-Passos G, Rago A, Rapberger A, Russo F, Yin X, Zhang D, Toni F et al., 2024,

    Contestable AI needs Computational Argumentation

    The 21st International Conference on Knowledge Representation and Reasoning (KR-2024)
  • Conference paper
    Gould A, Paulino Passos G, Dadhania S, Williams M, Toni F et al., 2024,

    Preference-Based Abstract Argumentation for Case-Based Reasoning

    International Conference on Principles of Knowledge Representation and Reasoning

    In the pursuit of enhancing the efficacy and flexibility of interpretable, data-driven classification models, this work introduces a novel incorporation of user-defined preferences with Abstract Argumentation and Case-Based Reasoning (CBR). Specifically, we introduce Preference-Based Abstract Argumentation for Case-Based Reasoning (which we call AA-CBR-P), allowing users to define multiple approaches to compare cases with an ordering that specifies their preference over these comparison approaches. We prove that the model inherently follows these preferences when making predictions and show that previous abstract argumentation for case-based reasoning approaches are insufficient at expressing preferences over constituents of an argument. We then demonstrate how this can be applied to a real-world medical dataset sourced from a clinical trial evaluating differing assessment methods of patients with a primary brain tumour. We show empirically that our approach outperforms other interpretable machine learning models on this dataset.

  • Conference paper
    Yin X, Potyka N, Toni F, 2024,

    CE-QArg: Counterfactual explanations for quantitative bipolar argumentation frameworks

    21st International Conference on Principles of Knowledge Representation and Reasoning, Publisher: International Joint Conferences on Artificial Intelligence Organization

    There is a growing interest in understanding arguments’ strength in Quantitative Bipolar Argumentation Frameworks (QBAFs). Most existing studies focus on attribution-based methods that explain an argument’s strength by assigning importance scores to other arguments but fail to explain how to change the current strength to a desired one. To solve this issue, we introduce counterfactual explanations for QBAFs. We discuss problem variants and propose an iterative algorithm named Counterfactual Explanations for Quantitative bipolar Argumentation frameworks (CE-QArg). CE-QArg can identify valid and cost-effective counterfactual explanations based on two core modules, polarity and priority, which help determine the updating direction and magnitude for each argument, respectively. We discuss some formal properties of our counterfactual explanations and empirically evaluate CE-QArg on randomly generated QBAFs.

  • Conference paper
    Marzari L, Leofante F, Cicalese F, Farinelli A et al., 2024,

    Rigorous probabilistic guarantees for robust counterfactual explanations

    27th European Conference on Artificial Intelligence (ECAI 2024), Publisher: IOS Press

    We study the problem of assessing the robustness of counterfactual explanations for deep learning models. We focus on plausible model shifts altering model parameters and propose a novel framework to reason about the robustness property in this setting. To motivate our solution, we begin by showing for the first time that computing the robustness of counterfactuals with respect to plausible model shifts is NP-complete. As this (practically) rules out the existence of scalable algorithms for exactly computing robustness, we propose a novel probabilistic approach which is able to provide tight estimates of robustness with strong guarantees while preserving scalability. Remarkably, and differently from existing solutions targeting plausible model shifts, our approach does not impose requirements on the network to be analyzed, thus enabling robustness analysis on a wider range of architectures. Experiments on four binary classification datasets indicate that our method improves the state of the art in generating robust explanations, outperforming existing methods on a range of metrics.
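
To make the estimated quantity concrete, here is a hedged sketch (not the authors' algorithm) of a plain Monte Carlo check of how often a counterfactual keeps its target label under bounded random parameter shifts, with a standard Hoeffding confidence bound. The tiny logistic model, the perturbation scheme and all numbers are assumptions for illustration only.

```python
# Hedged sketch: Monte Carlo estimate of counterfactual validity under
# bounded random parameter shifts, with a Hoeffding error bound.

import numpy as np

rng = np.random.default_rng(0)

def predict(w, b, x):
    """Binary prediction of a tiny logistic model."""
    return int(1.0 / (1.0 + np.exp(-(w @ x + b))) >= 0.5)

def validity_estimate(w, b, x_cf, target, radius=0.05, n_samples=2000, delta=0.05):
    """Estimate P(prediction on x_cf stays `target`) under bounded random
    parameter shifts; with probability >= 1 - delta the true value is
    within `eps` of the estimate (Hoeffding)."""
    hits = 0
    for _ in range(n_samples):
        dw = rng.uniform(-radius, radius, size=w.shape)
        db = rng.uniform(-radius, radius)
        hits += predict(w + dw, b + db, x_cf) == target
    p_hat = hits / n_samples
    eps = np.sqrt(np.log(2.0 / delta) / (2.0 * n_samples))
    return p_hat, eps

w = np.array([1.5, -2.0])
b = 0.1
x_cf = np.array([0.8, -0.4])  # hypothetical counterfactual point
p_hat, eps = validity_estimate(w, b, x_cf, target=1)
print(f"estimated validity {p_hat:.3f} +/- {eps:.3f} (95% confidence)")
```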

  • Conference paper
    Freedman G, Toni F, 2024,

    Detecting Scientific Fraud Using Argument Mining

    ArgMining@ACL2024, Publisher: The MIT Press
  • Conference paper
    Lehtonen T, Rapberger A, Toni F, Ulbricht M, Wallner JP et al., 2024,

    On Computing Admissibility in ABA

    10th International Conference on Computational Models of Argument (COMMA 2024)
  • Conference paper
    Rapberger A, Toni F, 2024,

    On the Robustness of Argumentative Explanations

    10th International Conference on Computational Models of Argument (COMMA 2024)
  • Conference paper
    Jiang J, Leofante F, Rago A, Toni F et al., 2024,

    Robust Counterfactual Explanations in Machine Learning: A Survey

    The 33rd International Joint Conference on Artificial Intelligence, IJCAI 2024
  • Conference paper
    Yin X, Potyka N, Toni F, 2024,

    Explaining arguments’ strength: unveiling the role of attacks and supports

    IJCAI 2024, Publisher: IJCAI
  • Journal article
    Domínguez J, Prociuk D, Marović B, Čyras K, Cocarascu O, Ruiz F, Mi E, Mi E, Ramtale C, Rago A, Darzi A, Toni F, Curcin V, Delaney B et al., 2024,

    ROAD2H: development and evaluation of an open-source explainable artificial intelligence approach for managing co-morbidity and clinical guidelines

    Learning Health Systems, Vol: 8, ISSN: 2379-6146

    Introduction: Clinical decision support (CDS) systems (CDSSs) that integrate clinical guidelines need to reflect real-world co-morbidity. In patient-specific clinical contexts, transparent recommendations that allow for contraindications and other conflicts arising from co-morbidity are a requirement. In this work, we develop and evaluate a non-proprietary, standards-based approach to the deployment of computable guidelines with explainable argumentation, integrated with a commercial electronic health record (EHR) system in Serbia, a middle-income country in West Balkans. Methods: We used an ontological framework, the Transition-based Medical Recommendation (TMR) model, to represent, and reason about, guideline concepts, and chose the 2017 International global initiative for chronic obstructive lung disease (GOLD) guideline and a Serbian hospital as the deployment and evaluation site, respectively. To mitigate potential guideline conflicts, we used a TMR-based implementation of the Assumptions-Based Argumentation framework extended with preferences and Goals (ABA+G). Remote EHR integration of computable guidelines was via a microservice architecture based on HL7 FHIR and CDS Hooks. A prototype integration was developed to manage chronic obstructive pulmonary disease (COPD) with comorbid cardiovascular or chronic kidney diseases, and a mixed-methods evaluation was conducted with 20 simulated cases and five pulmonologists. Results: Pulmonologists agreed 97% of the time with the GOLD-based COPD symptom severity assessment assigned to each patient by the CDSS, and 98% of the time with one of the proposed COPD care plans. Comments were favourable on the principles of explainable argumentation; inclusion of additional co-morbidities was suggested in the future along with customisation of the level of explanation with expertise. Conclusion: An ontological model provided a flexible means of providing argumentation and explainable artificial intelligence for a long-term condition. Exte
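
The abstract above mentions EHR integration via HL7 FHIR and CDS Hooks. Purely to illustrate that integration point, a CDS Hooks service answers a hook invocation with a JSON object containing "cards"; a minimal card, written here as a Python dict, might look like the sketch below. The field names follow the public CDS Hooks specification, but the clinical wording is invented and is not taken from ROAD2H.

```python
# Hedged sketch: the general shape of a CDS Hooks response card that a
# guideline service could return to an EHR. Field names follow the public
# CDS Hooks specification; the clinical content is invented for illustration.

example_response = {
    "cards": [
        {
            "summary": "COPD care plan suggested; potential conflict with a comorbidity flagged",
            "indicator": "warning",
            "source": {"label": "Computable COPD guideline service (illustrative)"},
            "detail": (
                "Recommendation derived from an encoded guideline; a conflict "
                "arising from a comorbid condition was resolved using stated "
                "preferences, with an argument-based explanation attached."
            ),
        }
    ]
}

print(example_response["cards"][0]["summary"])
```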

  • Conference paper
    Zhang D, Williams M, Toni F, 2024,

    Targeted activation penalties help CNNs ignore spurious signals

    The 38th Annual AAAI Conference on Artificial Intelligence, Publisher: AAAI, ISSN: 2159-5399

    Neural networks (NNs) can learn to rely on spurious signals in the training data, leading to poor generalisation. Recent methods tackle this problem by training NNs with additional ground-truth annotations of such signals. These methods may, however, let spurious signals re-emerge in deep convolutional NNs (CNNs). We propose Targeted Activation Penalty (TAP), a new method tackling the same problem by penalising activations to control the re-emergence of spurious signals in deep CNNs, while also lowering training times and memory usage. In addition, ground-truth annotations can be expensive to obtain. We show that TAP still works well with annotations generated by pre-trained models as effective substitutes of ground-truth annotations. We demonstrate the power of TAP against two state-of-the-art baselines on the MNIST benchmark and on two clinical image datasets, using four different CNN architectures.
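
As a hedged sketch of the general idea only (not the paper's exact formulation of TAP): a training loss that adds, to the usual cross-entropy, a penalty on feature-map activations that overlap an annotation mask marking spurious regions. The function names, tensor shapes and weighting are assumptions.

```python
# Hedged sketch: penalising activations inside annotated spurious regions,
# on top of the standard classification loss. Illustrative only.

import torch
import torch.nn.functional as F

def masked_activation_penalty(feature_map, spurious_mask):
    """Mean absolute activation inside the annotated spurious region.

    feature_map:   (batch, channels, h, w) activations from some CNN layer
    spurious_mask: (batch, 1, h, w) mask, 1 where the signal is spurious
    """
    return (feature_map.abs() * spurious_mask).mean()

def training_loss(logits, labels, feature_map, spurious_mask, lam=1.0):
    """Cross-entropy plus a weighted activation penalty on spurious regions."""
    return F.cross_entropy(logits, labels) + lam * masked_activation_penalty(
        feature_map, spurious_mask)

# Toy tensors standing in for a batch of 4 images and one CNN layer's output.
logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
fmap = torch.randn(4, 8, 16, 16)
mask = (torch.rand(4, 1, 16, 16) > 0.8).float()
print(training_loss(logits, labels, fmap, mask, lam=0.5))
```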

  • Conference paper
    Ulbricht M, Potyka N, Rapberger A, Toni F et al., 2024,

    Non-flat ABA is an instance of bipolar argumentation

    The 38th Annual AAAI Conference on Artificial Intelligence, Publisher: AAAI, Pages: 10723-10731, ISSN: 2374-3468

    Assumption-based Argumentation (ABA) is a well-known structured argumentation formalism, whereby arguments and attacks between them are drawn from rules, defeasible assumptions and their contraries. A common restriction imposed on ABA frameworks (ABAFs) is that they are flat, i.e., each of the defeasible assumptions can only be assumed, but not derived. While it is known that flat ABAFs can be translated into abstract argumentation frameworks (AFs) as proposed by Dung, no translation exists from general, possibly non-flat ABAFs into any kind of abstract argumentation formalism. In this paper, we close this gap and show that bipolar AFs (BAFs) can instantiate general ABAFs. To this end we develop suitable, novel BAF semantics which borrow from the notion of deductive support. We investigate basic properties of our BAFs, including computational complexity, and prove the desired relation to ABAFs under several semantics.

  • Conference paper
    Leofante F, Potyka N, 2024,

    Promoting Counterfactual Robustness through Diversity

    The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI24)
  • Conference paper
    Kori A, Locatello F, De Sousa Ribeiro F, Toni F, Glocker B et al., 2024,

    Grounded Object-Centric Learning

    International Conference on Learning Representations (ICLR)
  • Conference paper
    Jiang J, Rago A, Leofante F, Toni F et al., 2024,

    Recourse under model multiplicity via argumentative ensembling

    The 23rd International Conference on Autonomous Agents and Multi-Agent Systems, Publisher: ACM

    Model Multiplicity (MM) arises when multiple, equally performing machine learning models can be trained to solve the same prediction task. Recent studies show that models obtained under MM may produce inconsistent predictions for the same input. When this occurs, it becomes challenging to provide counterfactual explanations (CEs), a common means for offering recourse recommendations to individuals negatively affected by models’ predictions. In this paper, we formalise this problem, which we name recourse-aware ensembling, and identify several desirable properties which methods for solving it should satisfy. We demonstrate that existing ensembling methods, naturally extended in different ways to provide CEs, fail to satisfy these properties. We then introduce argumentative ensembling, deploying computational argumentation as a means to guarantee robustness of CEs to MM, while also accommodating customisable user preferences. We show theoretically and experimentally that argumentative ensembling is able to satisfy properties which the existing methods lack, and that the trade-offs are minimal wrt the ensemble’s accuracy.
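
The core requirement targeted here can be phrased very simply; the sketch below, a hypothetical illustration that is much simpler than the paper's argumentative ensembling, just checks that a counterfactual obtains the desired label under every model in an ensemble and picks the closest such candidate. The toy classifiers and candidate points are assumptions.

```python
# Hedged sketch: a counterfactual is robust to model multiplicity only if
# every model in the ensemble assigns it the desired label. Illustrative only.

import numpy as np

def valid_under_all(models, x_cf, target):
    """True if all models agree that x_cf has the target label."""
    return all(model(x_cf) == target for model in models)

def closest_robust_ce(models, x, target, candidates):
    """Pick the candidate CE closest to x that every model classifies as target."""
    robust = [c for c in candidates if valid_under_all(models, c, target)]
    return min(robust, key=lambda c: np.linalg.norm(c - x)) if robust else None

# Two hypothetical linear classifiers that disagree near the boundary.
m1 = lambda z: int(z @ np.array([1.0, 1.0]) > 0)
m2 = lambda z: int(z @ np.array([1.0, -0.2]) > 0)
x = np.array([-1.0, -1.0])  # currently classified 0 by both models
candidates = [np.array([0.1, 0.5]), np.array([0.6, 0.6]), np.array([2.0, 0.0])]
print(closest_robust_ce([m1, m2], x, target=1, candidates=candidates))
```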

  • Conference paper
    Jiang J, Lan J, Leofante F, Rago A, Toni F et al., 2023,

    Provably robust and plausible counterfactual explanations for neural networks via robust optimisation

    The 15th Asian Conference on Machine Learning, Publisher: ML Research Press

    Counterfactual Explanations (CEs) have received increasing interest as a major methodology for explaining neural network classifiers. Usually, CEs for an input-output pair are defined as data points with minimum distance to the input that are classified with a different label than the output. To tackle the established problem that CEs are easily invalidated when model parameters are updated (e.g. retrained), studies have proposed ways to certify the robustness of CEs under model parameter changes bounded by a norm ball. However, existing methods targeting this form of robustness are not sound or complete, and they may generate implausible CEs, i.e., outliers wrt the training dataset. In fact, no existing method simultaneously optimises for closeness and plausibility while preserving robustness guarantees. In this work, we propose Provably RObust and PLAusible Counterfactual Explanations (PROPLACE), a method leveraging on robust optimisation techniques to address the aforementioned limitations in the literature. We formulate an iterative algorithm to compute provably robust CEs and prove its convergence, soundness and completeness. Through a comparative experiment involving six baselines, five of which target robustness, we show that PROPLACE achieves state-of-the-art performances against metrics on three evaluation aspects.

  • Conference paper
    Tirsi C-G, Proietti M, Toni F, 2023,

    ABALearn: an automated logic-based learning system for ABA frameworks

    AIxIA 2023, Publisher: Springer Nature, ISSN: 1687-7470

    We introduce ABALearn, an automated algorithm that learns Assumption-Based Argumentation (ABA) frameworks from training data consisting of positive and negative examples, and a given background knowledge. ABALearn’s ability to generate comprehensible rules for decision-making promotes transparency and interpretability, addressing the challenges associated with the black-box nature of traditional machine learning models. This implementation is based on the strategy proposed in a previous work. The resulting ABA frameworks can be mapped onto logic programs with negation as failure. The main advantage of this algorithm is that it requires minimal information about the learning problem and it is also capable of learning circular debates. Our results show that this approach is competitive with state-of-the-art alternatives, demonstrating its potential to be used in real-world applications. Overall, this work contributes to the development of automated learning techniques for argumentation frameworks in the context of Explainable AI (XAI) and provides insights into how such learners can be applied to make predictions.

  • Conference paper
    Paulino Passos G, Toni F, 2023,

    Learning Case Relevance in Case-Based Reasoning with Abstract Argumentation

    The 36th International Conference on Legal Knowledge and Information Systems
  • Conference paper
    Yin X, Potyka N, Toni F, 2023,

    Argument attribution explanations in quantitative bipolar argumentation frameworks

    26th European Conference on Artificial Intelligence ECAI 2023, Publisher: IOS Press, Pages: 2898-2905, ISSN: 0922-6389

    Argumentative explainable AI has been advocated by several in recent years, with an increasing interest on explaining the reasoning outcomes of Argumentation Frameworks (AFs). While there is a considerable body of research on qualitatively explaining the reasoning outcomes of AFs with debates/disputes/dialogues in the spirit of extension-based semantics, explaining the quantitative reasoning outcomes of AFs under gradual semantics has not received much attention, despite widespread use in applications. In this paper, we contribute to filling this gap by proposing a novel theory of Argument Attribution Explanations (AAEs) by incorporating the spirit of feature attribution from machine learning in the context of Quantitative Bipolar Argumentation Frameworks (QBAFs): whereas feature attribution is used to determine the influence of features towards outputs of machine learning models, AAEs are used to determine the influence of arguments towards topic arguments of interest. We study desirable properties of AAEs, including some new ones and some partially adapted from the literature to our setting. To demonstrate the applicability of our AAEs in practice, we conclude by carrying out two case studies in the scenarios of fake news detection and movie recommender systems.

  • Conference paper
    Russo F, Toni F, 2023,

    Causal discovery and knowledge injection for contestable neural networks

    26th European Conference on Artificial Intelligence ECAI 2023, Publisher: IOS Press, Pages: 2025-2032, ISSN: 0922-6389

    Neural networks have proven to be effective at solving machine learning tasks but it is unclear whether they learn any relevant causal relationships, while their black-box nature makes it difficult for modellers to understand and debug them. We propose a novel method overcoming these issues by allowing a two-way interaction whereby neural-network-empowered machines can expose the underpinning learnt causal graphs and humans can contest the machines by modifying the causal graphs before re-injecting them into the machines, so that the learnt models are guaranteed to conform to the graphs and adhere to expert knowledge (some of which can also be given up-front). By building a window into the model behaviour and enabling knowledge injection, our method allows practitioners to debug networks based on the causal structure discovered from the data and underpinning the predictions. Experiments with real and synthetic tabular data show that our method improves predictive performance up to 2.4x while producing parsimonious networks, up to 7x smaller in the input layer, compared to SOTA regularised networks.

  • Conference paper
    Leofante F, Lomuscio A, 2023,

    Robust explanations for human-neural multi-agent systems with formal verification

    The 20th European Conference on Multi-Agent Systems (EUMAS 2023), Publisher: Springer, Pages: 244-262, ISSN: 1611-3349

    The quality of explanations in human-agent interactions is fundamental to the development of trustworthy AI systems. In this paper we study the problem of generating robust contrastive explanations for human-neural multi-agent systems and introduce two novel verification-based algorithms to (i) identify non-robust explanations generated by other methods and (ii) generate contrastive explanations equipped with formal robustness certificates. We present an implementation and evaluate the effectiveness of the approach on two case studies involving neural agents trained on credit scoring and traffic sign recognition tasks.

  • Conference paper
    Leofante F, Botoeva E, Rajani V, 2023,

    Counterfactual explanations and model multiplicity: a relational verification view

    The 20th International Conference on Principles of Knowledge Representation and Reasoning (KR2023), Publisher: IJCAI Organization, Pages: 763-768, ISSN: 2334-1033

    We study the interplay between counterfactual explanations and model multiplicity in the context of neural network classifiers. We show that current explanation methods often produce counterfactuals whose validity is not preserved under model multiplicity. We then study the problem of generating counterfactuals that are guaranteed to be robust to model multiplicity, characterise its complexity and propose an approach to solve this problem using ideas from relational verification.

  • Conference paper
    Rago A, Li H, Toni F, 2023,

    Interactive explanations by conflict resolution via argumentative exchanges

    20th International Conference on Principles of Knowledge Representation and Reasoning (KR2023), Publisher: IJCAI Organization, Pages: 582-592, ISSN: 2334-1033

    As the field of explainable AI (XAI) is maturing, calls for interactive explanations for (the outputs of) AI models are growing, but the state-of-the-art predominantly focuses on static explanations. In this paper, we focus instead on interactive explanations framed as conflict resolution between agents (i.e. AI models and/or humans) by leveraging on computational argumentation. Specifically, we define Argumentative eXchanges (AXs) for dynamically sharing, in multi-agent systems, information harboured in individual agents’ quantitative bipolar argumentation frameworks towards resolving conflicts amongst the agents. We then deploy AXs in the XAI setting in which a machine and a human interact about the machine’s predictions. We identify and assess several theoretical properties characterising AXs that are suitable for XAI. Finally, we instantiate AXs for XAI by defining various agent behaviours, e.g. capturing counterfactual patterns of reasoning in machines and highlighting the effects of cognitive biases in humans. We show experimentally (in a simulated environment) the comparative advantages of these behaviours in terms of conflict resolution, and show that the strongest argument may not always be the most effective.

  • Conference paper
    Rago A, Gorur D, Toni F, 2023,

    ArguCast: a system for online multi-forecasting with gradual argumentation

    Knowledge Representation 2023, Publisher: CEUR-WS.org, Pages: 40-51

    Judgmental forecasting is a form of forecasting which employs (human) users to make predictions about specified future events. Judgmental forecasting has been shown to perform better than quantitative methods for forecasting, e.g. when historical data is unavailable or causal reasoning is needed. However, it has a number of limitations, arising from users’ irrationality and cognitive biases. To mitigate against these phenomena, we leverage on computational argumentation, a field which excels in the representation and resolution of conflicting knowledge and human-like reasoning, and propose novel ArguCast frameworks (ACFs) and the novel online system ArguCast, integrating ACFs. ACFs and ArguCast accommodate multi-forecasting, by allowing multiple users to debate on multiple forecasting predictions simultaneously, each potentially admitting multiple outcomes. Finally, we propose a novel notion of user rationality in ACFs based on votes on arguments in ACFs, allowing the filtering out of irrational opinions before obtaining group forecasting predictions by means commonly used in judgmental forecasting.

  • Conference paper
    Nguyen H-T, Satoh K, Goebel R, Stathis K, Toni F et al., 2023,

    Black-box analysis: GPTs across time in legal textual entailment task

    ISAILD symposium - International Symposium on Artificial Intelligence and Legal Documents, Publisher: IEEE
  • Journal article
    Toni F, Rago A, Cyras K, 2023,

    Forecasting with jury-based probabilistic argumentation

    Journal of Applied Non Classical Logics, Vol: 33, Pages: 224-243, ISSN: 1166-3081

    Probabilistic Argumentation supports a form of hybrid reasoning by integrating quantitative (probabilistic) reasoning and qualitative argumentation in a natural way. Jury-based Probabilistic Argumentation supports the combination of opinions by different reasoners. In this paper we show how Jury-based Probabilistic Abstract Argumentation (JPAA) and a form of Jury-based Probabilistic Assumption-based Argumentation (JPABA) can naturally support forecasting, whereby subjective probability estimates are combined to make predictions about future occurrences of events. The form of JPABA we consider is an instance of JPAA and results from integrating Assumption-Based Argumentation (ABA) and probability spaces expressed by Bayesian networks, under the so-called constellation approach. It keeps the underlying structured argumentation and probabilistic reasoning modules separate while integrating them. We show how JPAA and (the considered form of) JPABA can be used to support forecasting by 1) supporting different forecasters (jurors) to determine the probability of arguments (and, in the JPABA case, sentences) with respect to their own probability spaces, while sharing arguments (and their components); and 2) supporting the aggregation of individual forecasts to produce group forecasts.
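
As a minimal, hedged stand-in for the aggregation step described above (not the paper's formal machinery): a linear opinion pool that combines jurors' subjective probability estimates into a group forecast. The weights and numbers are illustrative assumptions.

```python
# Hedged sketch: a linear opinion pool over jurors' subjective probabilities
# that a forecasted event (or argument) holds. Illustrative only.

def group_forecast(juror_probabilities, weights=None):
    """Weighted average of individual probability estimates."""
    n = len(juror_probabilities)
    if weights is None:
        weights = [1.0 / n] * n
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * p for w, p in zip(weights, juror_probabilities))

# Three jurors' estimates that the forecasted event will occur.
print(group_forecast([0.7, 0.55, 0.8]))                      # equal weights
print(group_forecast([0.7, 0.55, 0.8], [0.5, 0.25, 0.25]))   # weighted jurors
```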

  • Conference paper
    Ayoobi H, Potyka N, Toni F, 2023,

    SpArX: Sparse Argumentative Explanations for Neural Networks

    European Conference on Artificial Intelligence 2023

    Neural networks (NNs) have various applications in AI, but explaining their decisions remains challenging. Existing approaches often focus on explaining how changing individual inputs affects NNs' outputs. However, an explanation that is consistent with the input-output behaviour of an NN is not necessarily faithful to the actual mechanics thereof. In this paper, we exploit relationships between multi-layer perceptrons (MLPs) and quantitative argumentation frameworks (QAFs) to create argumentative explanations for the mechanics of MLPs. Our SpArX method first sparsifies the MLP while maintaining as much of the original structure as possible. It then translates the sparse MLP into an equivalent QAF to shed light on the underlying decision process of the MLP, producing global and/or local explanations. We demonstrate experimentally that SpArX can give more faithful explanations than existing approaches, while simultaneously providing deeper insights into the actual reasoning process of MLPs.
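
One plausible reading of the MLP-to-QAF direction, offered only as a hedged illustration and not as SpArX's actual construction: read positive weights as support edges and negative weights as attack edges between units of adjacent layers (sparsification is omitted). The weight matrices below are invented.

```python
# Hedged sketch: reading an MLP's weights as a bipolar graph, with positive
# weights inducing support edges and negative weights inducing attack edges.

import numpy as np

def mlp_to_bipolar_edges(weight_matrices, threshold=0.0):
    """Return (supporter, supported) and (attacker, attacked) edge lists.

    weight_matrices[l][i, j] is the weight from unit i of layer l
    to unit j of layer l + 1; units are named (layer, index).
    """
    supports, attacks = [], []
    for layer, W in enumerate(weight_matrices):
        for i in range(W.shape[0]):
            for j in range(W.shape[1]):
                src, dst = (layer, i), (layer + 1, j)
                if W[i, j] > threshold:
                    supports.append((src, dst))
                elif W[i, j] < -threshold:
                    attacks.append((src, dst))
    return supports, attacks

W1 = np.array([[0.8, -0.3], [-1.2, 0.5]])   # 2 inputs -> 2 hidden units
W2 = np.array([[1.0], [-0.7]])              # 2 hidden units -> 1 output
supports, attacks = mlp_to_bipolar_edges([W1, W2])
print("supports:", supports)
print("attacks:", attacks)
```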

  • Conference paper
    De Angelis E, Proietti M, Toni F, 2023,

    ABA learning via ASP

    ICLP 2023, Publisher: Open Publishing Association, Pages: 1-8, ISSN: 2075-2180

    Recently, ABA Learning has been proposed as a form of symbolic machine learning for drawing Assumption-Based Argumentation frameworks from background knowledge and positive and negative examples. We propose a novel method for implementing ABA Learning using Answer Set Programming as a way to help guide Rote Learning and generalisation in ABA Learning.

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
