Decision
Journal Prestige (SJR): 1.687
Citation Impact (CiteScore): 2
Number of Followers: 7  
 
  Full-text available via subscription
ISSN (Print) 2325-9965 - ISSN (Online) 2325-9973
Published by APA
  • Introduction to the special issue on judgment and decision research on the
           wisdom of the crowds.

      Abstract: The articles in this special issue on the wisdom of the crowds contribute to outstanding questions regarding what to aggregate. One way to group the articles relates to an overarching question: To what extent should aggregation be left to the crowd members themselves? At one extreme, individuals work independently and opinions are combined by an algorithm such as simple averaging. At the other extreme, the crowd’s opinion emerges from a social process that may be structured or organized in some way. This special issue includes articles at both ends of this continuum, as well as articles “in between” that combine elements of both approaches. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
      PubDate: Mon, 22 Jan 2024 00:00:00 GMT
      DOI: 10.1037/dec0000228
       
  • Using selected peers to improve the accuracy of crowd sourced forecasts.

      Abstract: Crowd sourcing approaches accompanied by optimal aggregation algorithms of human forecasts are becoming increasingly popular and are used in many contexts. In situations where large numbers of judges are offered the opportunity to predict multiple events, one often encounters large numbers of “missing” forecasts. This article proposes a new approach to predicting the missing responses, based on the answers of other, similar forecasters. Based on every judge’s recorded forecasts, we identify a group of “peers,” and we impute the missing forecasts based on the median of one’s peers. We use data collected during a large-scale geopolitical forecasting tournament to illustrate the approach, test its feasibility, and quantify its benefits. Our analysis indicates that the proposed method can improve the performance of the crowd on most events, while preserving crowd diversity. Analysis of the selected peers suggests that the proposed method is successful because it overweights and propagates the responses of the most engaged and accurate forecasters. These influential peers tend to score higher on various measures of intelligence and are better calibrated. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
      PubDate: Mon, 22 Jan 2024 00:00:00 GMT
      DOI: 10.1037/dec0000226
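The peer-based imputation described above can be sketched in a few lines. This is a simplified illustration under assumed names (`impute_missing`, a mean-absolute-difference peer distance), not the authors' exact procedure:

```python
import statistics

def impute_missing(forecasts, k=2):
    """Fill each judge's missing forecasts with the median forecast of
    their k nearest 'peers': judges whose recorded forecasts are most
    similar on commonly answered questions.

    forecasts: dict judge -> dict question -> probability (may omit questions)
    """
    judges = list(forecasts)

    def distance(a, b):
        # Mean absolute difference over questions both judges answered.
        shared = set(forecasts[a]) & set(forecasts[b])
        if not shared:
            return float("inf")
        return sum(abs(forecasts[a][q] - forecasts[b][q]) for q in shared) / len(shared)

    completed = {j: dict(forecasts[j]) for j in judges}
    all_questions = {q for j in judges for q in forecasts[j]}
    for j in judges:
        peers = sorted((o for o in judges if o != j), key=lambda o: distance(j, o))[:k]
        for q in all_questions - set(forecasts[j]):
            answers = [forecasts[p][q] for p in peers if q in forecasts[p]]
            if answers:
                completed[j][q] = statistics.median(answers)
    return completed
```

Recorded forecasts are left untouched; only gaps are filled, so the imputation propagates the responses of similar (and, per the abstract, typically more engaged) forecasters.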
       
  • Automated update tools to augment the wisdom of crowds in geopolitical
           forecasting.

      Abstract: Despite the importance of predictive judgments, individual human forecasts are frequently less accurate than those of even simple prediction algorithms. At the same time, not all forecasts are amenable to algorithmic prediction. Here, we describe the evaluation of an automated prediction tool that enabled participants to create simple rules that monitored relevant indicators (e.g., commodity prices) to automatically update forecasts. We examined these rules in both a pool of previous participants in a geopolitical forecasting tournament (Study 1) and a naïve sample recruited from Mechanical Turk (Study 2). Across the two studies, we found that automated updates tended to improve forecast accuracy relative to initial forecasts and were comparable to manual updates. Additionally, making rules improved the accuracy of manual updates. Crowd forecasts likewise benefitted from rule-based updates. However, when presented with the choice of whether to accept, reject or adjust an automatic forecast update, participants showed little ability to discriminate between automated updates that were harmful versus beneficial to forecast accuracy. Simple prospective rule-based tools are thus able to improve forecast accuracy by offering accurate and efficient updates, but ensuring forecasters make use of tools remains a challenge. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
      PubDate: Mon, 22 Jan 2024 00:00:00 GMT
      DOI: 10.1037/dec0000227
       
  • The wisdom of the coherent: Improving correspondence with
           coherence-weighted aggregation.

      Abstract: Previous research shows that variation in coherence (i.e., degrees of respect for axioms of probability calculus), when used as a basis for performance-weighted aggregation, can improve the accuracy of probability judgments. However, many aspects of coherence-weighted aggregation remain a mystery, including both prescriptive issues (e.g., how best to use coherence measures) and theoretical issues (e.g., why coherence-weighted aggregation is effective). Using data from six experiments in two earlier studies (N = 58, N = 2,858) employing either general-knowledge or statistical information integration tasks, we addressed many of these issues. Of prescriptive relevance, we examined the effectiveness of coherence-weighted aggregation as a function of judgment elicitation method, group size, weighting function, and the bias of the function’s tuning parameter. Of descriptive relevance, we propose that coherence-weighted aggregation can improve accuracy via two distinct, task-dependent routes: a causal route in which the bases for scoring accuracy depend on conformity to coherence principles (e.g., Bayesian information integration) and a diagnostic route in which coherence serves as a cue to correct knowledge. The findings provide support for the efficacy of both routes, but they also highlight why coherence weighting, especially the most biased forms, sometimes imposes costs to accuracy. We conclude by sketching a decision–theoretic approach to how aggregators can sensibly leverage the wisdom of the coherent within the crowd. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
      PubDate: Mon, 19 Jun 2023 00:00:00 GMT
      DOI: 10.1037/dec0000211
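The core mechanic of coherence weighting can be illustrated with a toy additivity check on complementary judgments, P(A) + P(not A) = 1. The incoherence measure and the weighting function below are illustrative stand-ins; the paper explores several weighting functions and tuning parameters:

```python
def coherence_weight(p_a, p_not_a, tau=1.0):
    """Weight a judge by how closely their complementary judgments respect
    additivity. tau (illustrative tuning parameter) controls how strongly
    incoherence is penalized."""
    incoherence = abs(p_a + p_not_a - 1.0)
    return 1.0 / (1.0 + tau * incoherence)

def coherence_weighted_average(judgments):
    """judgments: list of (P(A), P(not A)) pairs, one per judge.
    Aggregate P(A), weighting each judge by their coherence."""
    weights = [coherence_weight(pa, pna) for pa, pna in judgments]
    return sum(w * pa for w, (pa, _) in zip(weights, judgments)) / sum(weights)
```

A perfectly coherent judge gets weight 1; judges whose probabilities fail to sum to 1 are downweighted, pulling the aggregate toward the coherent members of the crowd.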
       
  • Using cross-domain expertise to aggregate forecasts when within-domain
           expertise is unknown.

      Abstract: In recent years, a number of crowd aggregation approaches have been proposed to combine the judgments of different individuals in problems where decision-makers do not have records of the individuals’ past performance in that domain. However, it is often possible to obtain a measure of the individuals’ past performance in other domains. The current article explores the extent to which individuals’ relative expertise in one domain can be used to weight their judgments in another domain. Over three experiments comprising a range of decision problems from art, science, sport, and a test of emotional intelligence, we compare the performance of aggregation approaches that do not use individuals’ past performance to those that weight by individuals’ past performance on questions from the same domain (within-domain weighting) or from a different domain (cross-domain weighting). Our results show that although within-domain weighting generally outperforms all other aggregation approaches, cross-domain weighting can be as effective as within-domain weighting in some circumstances. We present a simple model of the relationship between within-domain and cross-domain performance and discuss the conditions under which cross-domain weighting is likely to be effective. Our results demonstrate the potential of cross-domain weighting in problems where records of individuals’ past performance in the domain of interest are unavailable. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
      PubDate: Thu, 18 May 2023 00:00:00 GMT
      DOI: 10.1037/dec0000212
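A minimal sketch of cross-domain weighting, assuming a simple inverse-error scheme (the function names and the specific scheme are illustrative, not the paper's exact method):

```python
def performance_weights(past_errors, floor=1e-9):
    """Convert past absolute errors (from any domain) into normalized
    weights: lower past error means higher weight."""
    inv = [1.0 / max(e, floor) for e in past_errors]
    total = sum(inv)
    return [w / total for w in inv]

def cross_domain_aggregate(target_judgments, other_domain_errors):
    """Weight each judge's judgment in the target domain by their past
    performance in a *different* domain, for use when within-domain
    performance records are unavailable."""
    weights = performance_weights(other_domain_errors)
    return sum(w * j for w, j in zip(weights, target_judgments))
```

Whether this helps depends, as the abstract notes, on how strongly expertise correlates across the two domains.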
       
  • Harnessing the wisdom of the confident crowd in medical image
           decision-making.

      Abstract: Improving the accuracy of medical image interpretation is critical to improving the diagnosis of many diseases. Using both novices (undergraduates) and experts (medical professionals), we investigated methods for improving the accuracy of a single decision maker and a group of decision makers by aggregating repeated decisions in different ways. Participants made classification decisions (cancerous vs. noncancerous) and confidence judgments on a series of cell images, viewing and classifying each image twice. We first examined whether it is possible to improve individual-level performance by using the maximum confidence slating (MCS) algorithm (Koriat, 2012b), which leverages metacognitive ability by using the most confident response for an image as the “final response.” We find MCS improves individual classification accuracy for both novices and experts. Building on these results, we show that aggregation algorithms based on confidence weighting scale to larger groups of participants, dramatically improving diagnostic accuracy, with the performance of groups of novices reaching that of individual experts. In sum, we find that repeated decision-making and confidence weighting can be a valuable way to improve accuracy in medical image decision-making and that these techniques can be used in conjunction with each other. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
      PubDate: Thu, 06 Apr 2023 00:00:00 GMT
      DOI: 10.1037/dec0000210
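Both aggregation steps in the abstract have simple forms: maximum confidence slating keeps a judge's more confident of two responses per image, and a confidence-weighted vote combines responses across judges. Function names are mine; the rules follow the description above (and Koriat, 2012):

```python
def max_confidence_slate(first_pass, second_pass):
    """For each image, keep the (decision, confidence) pair from whichever
    of the judge's two passes carried higher confidence."""
    return [a if a[1] >= b[1] else b for a, b in zip(first_pass, second_pass)]

def confidence_weighted_vote(responses):
    """Group decision: the label with the highest summed confidence
    across judges' (label, confidence) responses."""
    totals = {}
    for label, conf in responses:
        totals[label] = totals.get(label, 0.0) + conf
    return max(totals, key=totals.get)
```

The two can be chained: slate each judge's repeated decisions first, then feed the slated (label, confidence) pairs into the group vote.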
       
  • How expertise mediates the effects of numerical and textual communication
           on individual and collective accuracy.

      Abstract: Performance on difficult tasks such as forecasting generally benefits from the “wisdom of crowds,” but communication among individuals can harm performance by reducing independent information. Collective accuracy can be improved by weighting by expertise, but it may also be naturally improved within communicating groups by the tendency of experts to be more resistant to peer information, effectively upweighting their contributions. To elucidate precisely how experts resist peer information, and the downstream effects of that on individual and collective accuracy, we construct a set of event-prediction challenges and randomize the exchange of both numerical and textual information among individuals. This allows us to estimate a continuous nonlinear response function connecting signals and predictions, which we show is consistent with a novel Bayesian updating framework which unifies the tendencies of experts to discount all peer information, as well as information more distant from their priors. We show via our textual treatment that experts are similarly less responsive to textual information, where nonexperts are more affected and benefited overall, but experts are helped by the highest quality text. We apply our Bayesian framework to show that the collective benefits of expert nonresponsivity are highly sensitive to the variance in expertise, but that individual predictions can be “corrected” back toward their unobserved pretreatment states, boosting the collective accuracy of nonexperts close to the level of experts, and restoring much of the accuracy lost due to intragroup communication. We conclude by examining potential avenues for further improving collective accuracy by structuring communication within groups. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
      PubDate: Thu, 06 Apr 2023 00:00:00 GMT
      DOI: 10.1037/dec0000204
       
  • Incentives for self-extremized expert judgments to alleviate the
           shared-information problem.

      Abstract: A simple average of subjective forecasts is known to be effective in estimating uncertain quantities. However, the benefits of averaging can be limited when forecasters have shared information, resulting in overrepresentation of the shared information in the average forecast. This article proposes a simple incentive-based solution to the shared-information problem. Experts are grouped with nonexperts in forecasting crowds, and all are rewarded for the accuracy of the crowd average instead of their individual accuracy. In equilibrium, experts anticipate the overrepresentation of shared information and extremize their forecasts toward their private information to boost crowd accuracy. This self-extremization in individual expert forecasts alleviates the shared-information problem. Experimental evidence suggests that incentives for crowd accuracy can induce self-extremization even in small crowds, where winner-take-all contests (another incentive-based solution) are not effective. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
      PubDate: Thu, 17 Nov 2022 00:00:00 GMT
      DOI: 10.1037/dec0000198
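The behavior described above, pushing one's report away from the shared prior toward private information, can be written as a one-line rule. The linear form and the factor `alpha` are illustrative assumptions, not the paper's equilibrium strategy:

```python
def self_extremize(private_signal, shared_prior, alpha=1.5):
    """Report a forecast pushed away from the shared prior toward the
    forecaster's own private signal (alpha > 1 extremizes), so the crowd
    average underweights information everyone already shares.
    Clipped to the probability scale [0, 1]."""
    p = shared_prior + alpha * (private_signal - shared_prior)
    return min(max(p, 0.0), 1.0)
```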
       
  • Sequential collaboration: The accuracy of dependent, incremental
           judgments.

      Abstract: Online collaborative projects in which users contribute to extensive knowledge bases such as Wikipedia or OpenStreetMap have become increasingly popular while yielding highly accurate information. Collaboration in such projects is organized sequentially, with one contributor creating an entry and the following contributors deciding whether to adjust or to maintain the presented information. We refer to this process as sequential collaboration since individual judgments directly depend on the previous judgment. As sequential collaboration has not yet been examined systematically, we investigate whether dependent, sequential judgments become increasingly more accurate. Moreover, we test whether final sequential judgments are more accurate than the unweighted average of independent judgments from equally large groups. We conducted three studies with groups of four to six contributors who either answered general knowledge questions (Experiments 1 and 2) or located cities on maps (Experiment 3). As expected, individual judgments became more accurate across the course of sequential chains, and final estimates were as accurate as the unweighted average of independent judgments. These results show that sequential collaboration profits from dependent, incremental judgments, thereby shedding light on the contribution process underlying large-scale online collaborative projects. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
      PubDate: Thu, 15 Sep 2022 00:00:00 GMT
      DOI: 10.1037/dec0000193
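The sequential process can be sketched as a chain in which each contributor sees only the current entry and either maintains or adjusts it. The adjust-by-averaging decision rule and the tolerance are hypothetical simplifications of the contribution process studied:

```python
def sequential_chain(own_estimates, tolerance=5.0):
    """Each contributor in turn maintains the current entry if it lies
    within `tolerance` of their own estimate, and otherwise adjusts it
    incrementally (here: halfway) toward their own estimate."""
    current = own_estimates[0]  # the first contributor creates the entry
    for own in own_estimates[1:]:
        if abs(own - current) > tolerance:
            current = (current + own) / 2.0
    return current
```

Each judgment depends directly on the previous one, which is the defining feature of sequential collaboration the abstract contrasts with independent averaging.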
       
  • How people use information about the number and distribution of judgments
           when tapping into the wisdom of the crowds.

      Abstract: Using an advice-taking paradigm, we investigated how people use information about the wisdom of the crowds when revising their judgments. We focused on two types of information: information about the size of the advisor crowd and information about the distribution of the judgments within the crowd. To test whether judges use these two types of information, we varied the size of the advisor crowd (two, four, or eight advisors) and orthogonally manipulated whether judges received advice in the form of a crowd estimate or in the form of the separate individual judgments. In a third condition, participants received the crowd estimate, but it was labeled as stemming from an individual. We found no evidence that judges used information about the size of the crowd, but they considered information about the distribution of the advice when revising their opinions. Compared with crowd estimates, receiving the individual judgments as advice led to less advice taking but not to substantial differences in postadvice accuracy. Exploratory analyses showed that judges receiving multiple pieces of advice heeded advice less when their initial judgments were closer to the center of the distribution of judgments. In those instances, their initial judgments were also the most accurate, so they stood to gain less from the advice. Receiving multiple pieces of advice also led to smaller confidence gains, suggesting that judges receiving crowd judgments as advice might underestimate the variance of the underlying individual judgments. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
      PubDate: Thu, 18 Aug 2022 00:00:00 GMT
      DOI: 10.1037/dec0000194
       
  • A hypothesis test algorithm for determining when weighting individual
           judgments reliably improves collective accuracy or just adds noise.

      Abstract: The wisdom of a crowd can be extracted by simply averaging judgments, but weighting judges based on their past performance may improve accuracy. The reliability of any proposed weighting scheme depends on the estimation precision of the features that determine the weights, which in practice cannot be known perfectly. Therefore, we can never guarantee that any weighted average will be more accurate than the simple average. However, depending on the statistical properties of the judgments (i.e., their estimated biases, variances, and correlations) and the sample size (i.e., the number of judgments from each individual), we may be reasonably confident that a weighted average will outperform the simple average. We develop a general algorithm to test whether there are sufficiently many observed judgments for practitioners to reject using the simple average and instead trust a weighted average as a reliably more accurate judgment aggregation method. Using simulation, we find our test provides better guidance than cross validation. Using real data, we demonstrate how many judgments may be required to be able to trust commonly used weighted averages. Our algorithm can also be used for power analysis when planning data collection and as a decision tool given existing data to optimize crowd wisdom. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
      PubDate: Thu, 28 Jul 2022 00:00:00 GMT
      DOI: 10.1037/dec0000187
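A toy two-judge version of the decision the algorithm formalizes: trust weighting only when the observed performance gap between judges is large relative to its standard error. The paper's algorithm is more general; this paired-t-style check is only a sketch of the idea:

```python
import math

def enough_evidence_to_weight(errors_a, errors_b, z=1.96):
    """Given two judges' absolute errors on the same past questions,
    return True when the mean error difference is large relative to its
    standard error, i.e., when we can reject the simple average in favor
    of upweighting the better judge."""
    n = len(errors_a)
    diffs = [a - b for a, b in zip(errors_a, errors_b)]
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var / n)
    return abs(mean) > z * se
```

With few judgments the standard error is large and the test refuses to weight, which matches the abstract's point that weights estimated from small samples may just add noise.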
       
  • Skew-adjusted extremized-mean: A simple method for identifying and
           learning from contrarian minorities in groups of forecasters.

      Abstract: Recent work in forecast aggregation has demonstrated that paying attention to contrarian minorities among larger groups of forecasters can improve aggregated probabilistic forecasts. In those articles, the minorities are identified using “metaquestions” that ask forecasters about their forecasting abilities or those of others. In the present article, we explain how contrarian minorities can be identified without the metaquestions by inspecting the skewness of the distribution of the forecasts. Inspired by this observation, we introduce a new forecast aggregation tool called skew-adjusted extremized-mean and demonstrate its superior predictive power on a large set of geopolitical and general knowledge forecasting data. (PsycInfo Database Record (c) 2024 APA, all rights reserved)
      PubDate: Thu, 28 Jul 2022 00:00:00 GMT
      DOI: 10.1037/dec0000191
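A sketch of the idea: measure the skewness of the forecast distribution and extremize the mean in that direction, so a contrarian minority in the tail pulls the aggregate toward it. The adjustment strength `gamma` and the exact functional form are my illustration, not necessarily the published formula:

```python
def skew_adjusted_extremized_mean(probs, gamma=0.25):
    """Aggregate probability forecasts by shifting the mean in the
    direction of the distribution's skew: a tail of contrarian forecasts
    produces nonzero skew and pulls the aggregate toward the minority
    view. Result is clipped to [0, 1]."""
    n = len(probs)
    mean = sum(probs) / n
    sd = (sum((p - mean) ** 2 for p in probs) / n) ** 0.5
    if sd == 0:
        return mean  # unanimous crowd: nothing to adjust
    skew = sum(((p - mean) / sd) ** 3 for p in probs) / n
    return min(max(mean + gamma * skew * sd, 0.0), 1.0)
```

Unlike the metaquestion-based approaches the abstract cites, this uses only the shape of the forecast distribution itself.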
       
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
 


JournalTOCs © 2009-