- Signal detection theory fails to account for real-world consequences of inconclusive decisions
Pages: 131 - 135
Abstract: Despite the ubiquity of forensic evidence in criminal cases over many decades, only recently have scholars begun in earnest to consider how to account for inconclusive decisions in error rate calculations for forensic feature-comparison methods (Dror and Langenburg, 2019; Dror and Scurich, 2020). Given the controversy and diverse viewpoints the issue continues to provoke (Biedermann and Kotsoglou, 2021; Hofmann et al., 2021; Dorfman and Valiant, 2022), Arkes and Koehler’s recent paper (Arkes and Koehler, 2022), which seeks to apply a signal detection theory approach to understanding the role of inconclusive decisions, is an important contribution. That said, we are concerned that Arkes and Koehler’s approach: (1) neglects to account for known differences in the way that inconclusive decisions are deployed by practitioners across various feature-comparison methods and (2) appears to posit blind proficiency testing as a near-complete solution to the host of complex problems associated with assessing the validity of such methods.
PubDate: Tue, 07 Feb 2023 00:00:00 GMT
DOI: 10.1093/lpr/mgad001
Issue No: Vol. 21, No. 2 (2023)
- Likelihood ratios for categorical count data with applications in digital forensics
Pages: 91 - 122
Abstract: We consider the forensic context in which the goal is to assess whether two sets of observed data came from the same source or from different sources. In particular, we focus on the situation in which the evidence consists of two sets of categorical count data: a set of event counts from an unknown source tied to a crime and a set of event counts generated by a known source. Using a same-source versus different-source hypothesis framework, we develop an approach to calculating a likelihood ratio. Under our proposed model, the likelihood ratio can be calculated in closed form, and we use this to theoretically analyse how the likelihood ratio is affected by how much data is observed, the number of event types being considered, and the prior used in the Bayesian model. Our work is motivated in particular by user-generated event data in digital forensics, a context in which relatively few statistical methodologies have yet been developed to support quantitative analysis of event data after it is extracted from a device. We evaluate our proposed method through experiments using three real-world event datasets, representing a variety of event types that may arise in digital forensics. The results of the theoretical analyses and experiments with real-world datasets demonstrate that while this model is a useful starting point for the statistical forensic analysis of user-generated event data, more work is needed before it can be applied for practical use.
PubDate: Fri, 23 Dec 2022 00:00:00 GMT
DOI: 10.1093/lpr/mgac016
Issue No: Vol. 21, No. 2 (2022)
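The abstract above does not reproduce the closed-form likelihood ratio, but the standard Bayesian model for this setting (two multinomial samples sharing a Dirichlet prior under the same-source hypothesis) admits one. As a hedged sketch — an assumption about the general modelling family, not the authors' exact specification — the multinomial coefficients cancel and the log-LR reduces to a ratio of multivariate beta functions:

```python
# Hedged sketch of a same-source vs different-source likelihood ratio for
# categorical count data, assuming a Dirichlet-multinomial model with a
# symmetric Dirichlet prior. This illustrates the general technique only;
# the published paper's exact model and prior may differ.
from math import lgamma

def log_mvbeta(alpha):
    """Log multivariate beta: sum_k lgamma(a_k) - lgamma(sum_k a_k)."""
    return sum(lgamma(a) for a in alpha) - lgamma(sum(alpha))

def log_lr_same_source(x, y, alpha=None):
    """Log LR for Hp (x and y drawn from one multinomial source) versus
    Hd (independent sources). The multinomial coefficients cancel, leaving
    log B(alpha+x+y) + log B(alpha) - log B(alpha+x) - log B(alpha+y)."""
    if alpha is None:
        alpha = [1.0] * len(x)  # uniform Dirichlet prior (an assumption)
    ax = [a + n for a, n in zip(alpha, x)]
    ay = [a + n for a, n in zip(alpha, y)]
    axy = [a + nx + ny for a, nx, ny in zip(alpha, x, y)]
    return log_mvbeta(axy) + log_mvbeta(alpha) - log_mvbeta(ax) - log_mvbeta(ay)

# Toy event counts over four hypothetical event types: a positive log-LR
# supports the same-source hypothesis, a negative one different sources.
x = [10, 3, 0, 2]
y = [12, 4, 1, 1]
print(round(log_lr_same_source(x, y), 2))
```

Because the closed form only involves log-gamma terms, the calculation stays numerically stable even for large counts, which is consistent with the abstract's point that the model supports theoretical analysis of how the amount of data, the number of event types, and the prior affect the LR.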
- Inconclusives in firearm error rate studies are not ‘a pass’
Pages: 123 - 127
Abstract: One question Professors Arkes and Koehler (2022) (hereinafter ‘A&K’) ask in their thoughtful paper is ‘What role should “inconclusives” play in the computation of error rates?’ (p. 5). The answer to this question is vital because the number of inconclusives in firearm error rate studies is staggering. For example, firearm examiners in the FBI/Ames Laboratory study made 8,640 comparisons, of which 3,922 (45%) were deemed inconclusive (Bajic et al., 2020, Table V). The most recent firearm error rate study reported that 51% of all comparisons were deemed inconclusive (Best and Gardner, 2022). Determining how to count half of the responses is the critical—perhaps even decisive—factor in interpreting the error rates from the study.
PubDate: Mon, 14 Nov 2022 00:00:00 GMT
DOI: 10.1093/lpr/mgac011
Issue No: Vol. 21, No. 2 (2022)
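The arithmetic behind this ‘critical factor’ is easy to make concrete. In the sketch below, the comparison and inconclusive totals are the FBI/Ames figures quoted in the abstract above (Bajic et al., 2020); the count of hard errors is a made-up placeholder, not a figure from any study, since the point is only how the treatment of inconclusives moves the reported rate:

```python
# Illustration of how the treatment of inconclusive responses changes a
# reported error rate. The 8,640 and 3,922 figures are quoted from the
# FBI/Ames study as cited above; `errors` is a hypothetical placeholder.
total = 8640          # comparisons in the FBI/Ames study (Bajic et al., 2020)
inconclusive = 3922   # ~45% of comparisons deemed inconclusive
errors = 100          # hypothetical count of false ID/elimination decisions

conclusive = total - inconclusive

# (a) Inconclusives excluded from the denominator ('a pass'):
rate_excluded = errors / conclusive
# (b) Inconclusives in the denominator but never counted as errors:
rate_in_denominator = errors / total
# (c) Inconclusives all counted as errors (the opposite extreme):
rate_as_errors = (errors + inconclusive) / total

for label, r in [("excluded", rate_excluded),
                 ("in denominator", rate_in_denominator),
                 ("counted as errors", rate_as_errors)]:
    print(f"{label:18s}: {r:.1%}")
```

With roughly half the responses inconclusive, the three conventions produce error rates that differ by more than an order of magnitude from the same raw data, which is why the counting question is decisive for interpreting such studies.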
- A plague on both your houses: The debate about how to deal with ‘inconclusive’ conclusions when calculating error rates
Pages: 127 - 129
Abstract: Research England’s Expanding Excellence in England Fund; Aston Institute for Forensic Linguistics 2019–2024
PubDate: Wed, 23 Nov 2022 00:00:00 GMT
DOI: 10.1093/lpr/mgac015
Issue No: Vol. 21, No. 2 (2022)