Citation

BibTeX format

@inproceedings{Nguyen:2023,
author = {Nguyen, H-T and Goebel, R and Toni, F and Stathis, K and Satoh, K},
title = {How well do SOTA legal reasoning models support abductive reasoning?},
url = {http://hdl.handle.net/10044/1/105043},
year = {2023}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB  - We examine how well the state-of-the-art (SOTA) models used in legal reasoning support abductive reasoning tasks. Abductive reasoning is a form of logical inference in which a hypothesis is formulated from a set of observations, and that hypothesis is used to explain the observations. The ability to formulate such hypotheses is important for lawyers and legal scholars as it helps them articulate logical arguments, interpret laws, and develop legal theories. Our motivation is to consider the belief that deep learning models, especially large language models (LLMs), will soon replace lawyers because they perform well on tasks related to legal text processing. But to do so, we believe, requires some form of abductive hypothesis formation. In other words, while LLMs become more popular and powerful, we want to investigate their capacity for abductive reasoning. To pursue this goal, we start by building a logic-augmented dataset for abductive reasoning with 498,697 samples and then use it to evaluate the performance of a SOTA model in the legal field. Our experimental results show that although these models can perform well on tasks related to some aspects of legal text processing, they still fall short in supporting abductive reasoning tasks.
AU  - Nguyen, H-T
AU  - Goebel, R
AU  - Toni, F
AU  - Stathis, K
AU  - Satoh, K
PY  - 2023///
TI  - How well do SOTA legal reasoning models support abductive reasoning?
UR  - http://hdl.handle.net/10044/1/105043
ER -