- Perpetrator knowledge: a Bayesian account
First page: mgae009
Abstract: Perpetrator knowledge (also known as “guilty knowledge,” “insider knowledge,” “crime knowledge,” or “first-hand knowledge”) is an important but undertheorized type of criminal evidence. This article clarifies this concept in several ways. First, it offers a precise, probabilistic definition of what perpetrator knowledge is. Second, the article provides a taxonomy of arguments relating to perpetrator knowledge. This classification is based on an analysis of 438 Dutch criminal cases in which this concept was mentioned. Third, it models these arguments using Bayesian networks. Fourth, the article explains a potential reasoning error relating to perpetrator knowledge, namely the fallacy of appeal to probability.
PubDate: Fri, 09 Aug 2024 00:00:00 GMT
DOI: 10.1093/lpr/mgae009
Issue No: Vol. 23, No. 1 (2024)
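As a minimal illustration of the kind of Bayesian-network modelling the abstract mentions, the sketch below computes a posterior by plain enumeration over a three-node chain (suspect is the perpetrator, suspect holds a non-public crime detail, suspect reveals that detail). The network structure and every probability are illustrative assumptions, not values taken from the article.

```python
# Illustrative Bayesian-network calculation for a perpetrator-knowledge argument,
# done by plain enumeration; all probabilities are hypothetical placeholders.
from itertools import product

# Nodes: H = suspect is the perpetrator, K = suspect possesses the non-public
# crime detail, R = suspect reveals that detail during questioning.
p_h = {True: 0.01, False: 0.99}           # prior on H (illustrative)
p_k_given_h = {True: 0.95, False: 0.02}   # small chance of leakage via media or police
p_r_given_k = {True: 0.80, False: 0.001}  # guessing the detail is very unlikely

def joint(h, k, r):
    """Joint probability P(H=h, K=k, R=r) under the chain H -> K -> R."""
    pk = p_k_given_h[h] if k else 1 - p_k_given_h[h]
    pr = p_r_given_k[k] if r else 1 - p_r_given_k[k]
    return p_h[h] * pk * pr

# Posterior P(H = True | R = True), summing out K.
numerator = sum(joint(True, k, True) for k in (True, False))
evidence = sum(joint(h, k, True) for h, k in product((True, False), repeat=2))
print(f"P(perpetrator | reveals detail) = {numerator / evidence:.3f}")
```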
- Introduction
First page: mgae006
Funding: National Institute of Standards and Technology (Crossref Funder ID 10.13039/100000161), award 70NANB15H176
PubDate: Fri, 28 Jun 2024 00:00:00 GMT
DOI: 10.1093/lpr/mgae006
Issue No: Vol. 23, No. 1 (2024)
- How the work being done on statistical fingerprint models provides the basis for a much broader and greater impact affecting many areas within the criminal justice system
First page: mgae008
Abstract: In the process of developing and improving statistical models to address flaws in the examination and interpretation of highly selective fingermarks, the groundwork is being laid for a much broader and greater impact. This impact will arise from the use of these same improved statistical methods to exploit information from the examination of fingermarks with lower degrees of selectivity—those fingermarks traditionally considered to be devoid of evidentiary value. To the contrary, research has shown that fingermarks of lower selectivity have much to offer. They occur very frequently: much more often than those assessed to be sufficient for inclusion in existing fingerprint examination processes. In individual cases, they occur in locations and numbers that can provide important new information for investigators and additional routes to further investigation. As evidence contributing to proving a case, they can provide detailed activity-level information and new avenues to address the relevance and probative value of other direct and circumstantial evidence. The broader application of fingerprint models to these traditionally unused fingermarks of lower selectivity needs to be specifically developed and implemented to realize the contributions and to responsibly manage the risks and benefits.
PubDate: Fri, 21 Jun 2024 00:00:00 GMT
DOI: 10.1093/lpr/mgae008
Issue No: Vol. 23, No. 1 (2024)
- Decisionalizing the problem of reliance on expert and machine evidence
First page: mgae007
Abstract: This article analyzes and discusses the problem of reliance on expert and machine evidence, including Artificial Intelligence output, from a decision-analytic point of view. Machine evidence is broadly understood here as the result of computational approaches, with or without a human-in-the-loop, applied to the analysis and the assessment of the probative value of forensic traces such as fingermarks. We treat reliance as a personal decision for the factfinder; specifically, we define it as a function of the congruence between expert output in a given case and ground truth, combined with the decision-maker’s preferences among accurate and inaccurate decision outcomes. The originality of this analysis lies in its divergence from mainstream approaches that rely on standard, aggregate performance metrics for expert and AI systems, such as aggregate accuracy rates, as the defining criteria for reliance. Using fingermark analysis as an example, we show that our decision-theoretic criterion for the reliance on expert and machine output has a dual advantage. On the one hand, it focuses on what is really at stake in reliance on such output and, on the other hand, it has the ability to assist the decision-maker with the fundamentally personal problem of deciding to rely. In essence, our account represents a model- and coherence-based analysis of the practical questions and justificatory burden encountered by anyone required to deal with computational output in forensic science contexts. Our account provides a normative decision structure that is a reference point against which intuitive viewpoints regarding reliance can be compared, which complements standard and essentially data-centered assessment criteria. We argue that these considerations, although primarily a theoretical contribution, are fundamental to the discourses on how to use algorithmic output in areas such as fingerprint analysis.
PubDate: Tue, 18 Jun 2024 00:00:00 GMT
DOI: 10.1093/lpr/mgae007
Issue No: Vol. 23, No. 1 (2024)
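The decision-theoretic flavour of this criterion can be conveyed with a schematic expected-loss comparison. Every number below, the 0.90 case-specific congruence probability and the loss table, is an assumed placeholder chosen only to show the mechanics, not the article's own model.

```python
# Schematic expected-loss comparison for the reliance decision (illustrative only):
# the factfinder weighs the probability that the expert/machine output is correct
# in this case against personal losses attached to the possible outcomes.

p_output_correct = 0.90  # hypothetical case-specific probability that the reported
                         # identification is congruent with ground truth

# Hypothetical losses (0 = best outcome); the asymmetry encodes a preference that
# relying on an erroneous identification is the worst outcome.
loss = {
    ("rely", "correct"): 0.0,
    ("rely", "incorrect"): 100.0,
    ("ignore", "correct"): 10.0,
    ("ignore", "incorrect"): 1.0,
}

def expected_loss(action):
    return (p_output_correct * loss[(action, "correct")]
            + (1 - p_output_correct) * loss[(action, "incorrect")])

for action in ("rely", "ignore"):
    print(f"expected loss of '{action}': {expected_loss(action):.2f}")
# With these illustrative numbers, not relying minimizes expected loss even though
# 90% sounds high; that sensitivity to case-specific probabilities and personal
# preferences is what a blanket aggregate-accuracy threshold cannot capture.
```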
- Correction to: Reference populations for examining possible racial profiling
First page: mgae002
Abstract: This is a correction to: Douglas N VanDerwerken, Mary Santi Fowler, Joseph B Kadane, Reference populations for examining possible racial profiling, Law, Probability and Risk, Volume 22, Issue 1, 2023, mgad008, https://doi.org/10.1093/lpr/mgad008
PubDate: Mon, 20 May 2024 00:00:00 GMT
DOI: 10.1093/lpr/mgae002
Issue No: Vol. 23, No. 1 (2024)
- Presumed prior, contextual prior, and bizarre consequences—a reply to Ronald Meester and Lonneke Stevens
First page: mgae005
Abstract: The problem of the prior is a hotly debated issue in the literature on legal evidence. In a recent contribution to this debate, Ronald Meester and Lonneke Stevens argue that the prior must take ‘context’ into account (Meester and Stevens 2024: 8). They do not explain what this means or how it should be done, but their paper offers some examples that give a rough idea of what they are after. Taking account of context could, for example, mean that the prior probability is higher when someone is accused of a crime in a small village than when someone is accused of a crime in a large city with a greater number of alternative perpetrators. For two crimes committed in the same large city, context can likewise be taken into account by assigning a higher prior to a suspect living in the part of the city where the crime took place than to a suspect living in another part of the city. Meester and Stevens recognize that this approach is ‘very problematic’ (Meester and Stevens 2024: 4), but then go on to use it themselves in their examples of ‘applying context’.
PubDate: Fri, 26 Apr 2024 00:00:00 GMT
DOI: 10.1093/lpr/mgae005
Issue No: Vol. 23, No. 1 (2024)
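A small numerical sketch of how such a population-based, ‘contextual’ prior interacts with the strength of the evidence; the population sizes and the likelihood ratio below are invented for illustration and are not the authors' own figures.

```python
# Illustrative calculation of a 1/N 'contextual' prior combined with a given
# likelihood ratio; all numbers are made up.
def posterior_prob(n_possible_perpetrators, likelihood_ratio):
    """Posterior probability of guilt from a uniform 1/N prior and a given LR."""
    prior_odds = 1.0 / (n_possible_perpetrators - 1)  # prior probability 1/N
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

lr = 10_000.0  # hypothetical likelihood ratio of the forensic findings
for label, n in [("small village", 500), ("large city", 500_000)]:
    print(f"{label:13s} (N = {n:>7,}): posterior = {posterior_prob(n, lr):.4f}")
# The same evidence yields very different posteriors under the two priors, which
# is why the choice and justification of the prior is contested.
```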
- A probabilistic graphical model for assessing equivocal evidence
First page: mgae003
Abstract: Bayes’ theorem can be generalized to account for uncertainty on reported evidence. This has an impact on the value of the evidence, making the computation of the Bayes factor more demanding, as discussed by Taroni, Garbolino, and Bozza (2020). Probabilistic graphical models can, however, provide a suitable tool to assist the scientist in their evaluative task. A Bayesian network is proposed to deal with equivocal evidence, and its use is illustrated through examples.
PubDate: Wed, 24 Apr 2024 00:00:00 GMT
DOI: 10.1093/lpr/mgae003
Issue No: Vol. 23, No. 1 (2024)
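The sketch below shows, with invented numbers, one way a Bayes factor can account for uncertainty about the reported finding: condition on the analyst's report and sum out the unobserved true state of the trace. It is a simplified stand-in for, not a reproduction of, the Bayesian-network treatment proposed in the article.

```python
# Bayes factor for an equivocal report (illustrative numbers throughout):
# instead of conditioning on the true state E of the trace, condition on the
# analyst's report R and marginalize over E.
def bayes_factor_on_report(p_e_given_hp, p_e_given_hd, p_r_given_e):
    """BF = P(R | Hp) / P(R | Hd), summing out the unobserved true state E."""
    p_r_hp = sum(p_r_given_e[e] * p_e_given_hp[e] for e in p_r_given_e)
    p_r_hd = sum(p_r_given_e[e] * p_e_given_hd[e] for e in p_r_given_e)
    return p_r_hp / p_r_hd

# E in {"match", "no_match"}; all probabilities are hypothetical.
p_e_given_hp = {"match": 0.99, "no_match": 0.01}
p_e_given_hd = {"match": 0.001, "no_match": 0.999}
certain_report = {"match": 1.0, "no_match": 0.0}      # report perfectly tracks E
equivocal_report = {"match": 0.90, "no_match": 0.05}  # report may misstate E

print("BF, report taken at face value:",
      bayes_factor_on_report(p_e_given_hp, p_e_given_hd, certain_report))
print("BF, equivocal report:          ",
      bayes_factor_on_report(p_e_given_hp, p_e_given_hd, equivocal_report))
# Acknowledging uncertainty in the report moderates the Bayes factor.
```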
- Bi-Gaussianized calibration of likelihood ratios
First page: mgae004
Abstract: For a perfectly calibrated forensic evaluation system, the likelihood ratio of the likelihood ratio is the likelihood ratio. Conversion of uncalibrated log-likelihood ratios (scores) to calibrated log-likelihood ratios is often performed using logistic regression. The results, however, may be far from perfectly calibrated. We propose and demonstrate a new calibration method, “bi-Gaussianized calibration,” that warps scores toward perfectly calibrated log-likelihood-ratio distributions. Using both synthetic and real data, we demonstrate that bi-Gaussianized calibration leads to better calibration than does logistic regression, that it is robust to score distributions that violate the assumption of two Gaussians with the same variance, and that it is competitive with logistic-regression calibration in terms of performance measured using log-likelihood-ratio cost (Cllr). We also demonstrate advantages of bi-Gaussianized calibration over calibration using pool-adjacent violators (PAV). Based on bi-Gaussianized calibration, we also propose a graphical representation that may help explain the meaning of likelihood ratios to triers of fact.
PubDate: Thu, 11 Apr 2024 00:00:00 GMT
DOI: 10.1093/lpr/mgae004
Issue No: Vol. 23, No. 1 (2024)
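The abstract's opening property, “the likelihood ratio of the likelihood ratio is the likelihood ratio,” can be checked numerically for the idealized case of two equal-variance Gaussian log-LR distributions centred at plus and minus half the variance. The check below is a sketch of that self-consistency property only, not of the bi-Gaussianized calibration procedure itself; the variance value is arbitrary.

```python
# Numerical check of the calibration self-consistency property: if same-source
# log-LRs follow N(+sigma^2/2, sigma^2) and different-source log-LRs follow
# N(-sigma^2/2, sigma^2), then the log-LR computed from those two densities at a
# value x is x itself. The variance is an illustrative choice.
import math

def normal_logpdf(x, mean, var):
    """Log density of a normal distribution with the given mean and variance."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

sigma2 = 4.0  # illustrative variance of the calibrated log-LR distributions

for x in (-3.0, -1.0, 0.0, 0.5, 2.5):
    log_lr = normal_logpdf(x, +sigma2 / 2, sigma2) - normal_logpdf(x, -sigma2 / 2, sigma2)
    print(f"log-LR value {x:+.2f} -> log-LR of that log-LR {log_lr:+.2f}")
# Each printed pair is equal, which is the self-consistency condition that a
# perfectly calibrated system must satisfy.
```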
- Bayesian reasoning and the prior in court: not legally normative but unavoidable
First page: mgae001
Abstract: We introduce Bayesian reasoning in court not as a toolbox for doing computations, but as a way to assess evidence in a case. We argue that Bayesian reasoning comes naturally, even when the findings in a case cannot readily be translated into numbers. Not having numbers at one’s disposal is not an obstacle to using Bayesian reasoning. Although we present a coherent and complete view, we focus on the prior, since that seems to be the most problematic part of Bayesian reasoning. We explain that attempts to numerically express the prior fail in general, but also that a prior is necessary and cannot be dispensed with. Indeed, we explain in detail why decision-making should not be based on likelihood ratios alone. We next discuss two of the most delicate questions around the prior: (1) the possible conflict with the presumption of innocence, and (2) the idea that unwanted personal convictions (such as racism) might enter the decision procedure via the prior. We conclude that these alleged problems are not problematic after all, and we carefully explain this position.
PubDate: Mon, 18 Mar 2024 00:00:00 GMT
DOI: 10.1093/lpr/mgae001
Issue No: Vol. 23, No. 1 (2024)
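A short calculation, not taken from the article, that makes the “likelihood ratios alone are not enough” point concrete: any fixed LR threshold for deciding against the defendant presupposes some prior odds. The 0.95 decision standard and the prior-odds values are arbitrary illustrative choices.

```python
# A fixed likelihood-ratio threshold hides an implicit prior: the LR needed to
# push the posterior above a given standard depends entirely on the prior odds.
p_star = 0.95                                    # illustrative decision standard
required_posterior_odds = p_star / (1 - p_star)  # = 19

for prior_odds in (1 / 10, 1 / 1_000, 1 / 100_000):
    required_lr = required_posterior_odds / prior_odds
    print(f"prior odds {prior_odds:.6f} -> LR needed for posterior > {p_star}: {required_lr:,.0f}")
# Any rule of the form "decide against the defendant when LR exceeds T" therefore
# commits the decision-maker to some prior, whether it is stated or not.
```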
- Misuse of statistical method results in highly biased interpretation of forensic evidence in Guyll et al. (2023)
First page: mgad010
PubDate: Tue, 09 Jan 2024 00:00:00 GMT
DOI: 10.1093/lpr/mgad010
Issue No: Vol. 23, No. 1 (2024)