- Explaining human sampling rates across different decision domains ---
Didrika S. van de Wouw --- Ryan T. McKay --- Bruno B. Averbeck --- Nicholas Furl
Undersampling biases are common in the optimal stopping literature, especially for economic full choice problems. Among these kinds of number-based studies, the moments of the distribution of values that generates the options (i.e., the generating distribution) seem to influence participants' sampling rate. However, a recent study reported an oversampling bias on a different kind of optimal stopping task: one where participants chose potential romantic partners from images of faces. The authors hypothesised that this oversampling bias might be specific to mate choice. We preregistered this hypothesis and so, here, we test whether sampling rates across different image-based decision-making domains a) reflect different over- or undersampling biases, or b) depend on the moments of the generating distributions (as shown for economic number-based tasks). In two studies (N = 208 and N = 96), we found evidence against the preregistered hypothesis. Participants oversampled to the same degree across domains (compared to a Bayesian ideal observer model), while their sampling rates depended on the generating distribution's mean and skewness in a similar way to number-based paradigms. Moreover, optimality model sampling to some extent depended on the skewness of the generating distribution in a similar way to participants. We conclude that oversampling is not instigated by the mate choice domain and that sampling rate in image-based paradigms, like number-based paradigms, depends on the generating distribution.
- The day after the disaster: Risk-taking following large- and small-scale
disasters in a microworld --- Garston Liang --- Tim Rakow --- Eldad Yechiam --- Ben R. Newell
Using data from seven microworld experiments (N = 841), we investigated how participants reacted to simulated disasters with different risk profiles. Our central focus was how the scale of a disaster affected participants’ choices and response times. We find that one-off large-scale disasters prompted stronger reactions to move away from the affected region than recurrent small-scale adverse events, despite the overall risk of a disaster remaining constant across both types of events. A subset of participants were persistent risk-takers who repeatedly put themselves in harm’s way, despite having all the experience and information required to avoid a disaster. Furthermore, while near-misses prompted a small degree of precautionary movement to reduce subsequent risk exposure, directly experiencing the costs of a disaster substantially increased the desire to move away from the affected region. Together, the results point to ways in which laboratory risk-taking tasks can inform the kinds of communication and interventions that seek to mitigate people’s exposure to risk.
- Susceptibility to misinformation is consistent across question framings
and response modes and better explained by myside bias and partisanship than analytical thinking --- Jon Roozenbeek --- Stefan M. Herzog --- Michael Geers --- Ralf Kurvers --- Mubashir Sultan --- Sander van der Linden
Misinformation presents a significant societal problem. To measure individuals’ susceptibility to misinformation and study its predictors, researchers have used a broad variety of ad-hoc item sets, scales, question framings, and response modes. Because of this variety, it remains unknown whether results from different studies can be compared (e.g., in meta-analyses). In this preregistered study (US sample; N = 2,622), we compare five commonly used question framings (eliciting perceived headline accuracy, manipulativeness, reliability, trustworthiness, and whether a headline is real or fake) and three response modes (binary, 6-point and 7-point scales), using the psychometrically validated Misinformation Susceptibility Test (MIST). We test 1) whether different question framings and response modes yield similar responses for the same item set, 2) whether people’s confidence in their primary judgments is affected by question framings and response modes, and 3) which key psychological factors (myside bias, political partisanship, cognitive reflection, and numeracy skills) best predict misinformation susceptibility across assessment methods. Different response modes and question framings yield similar (but not identical) responses for both primary ratings and confidence judgments. We also find a similar nomological net across conditions, suggesting cross-study comparability. Finally, myside bias and political conservatism were strongly positively correlated with misinformation susceptibility, whereas numeracy skills and especially cognitive reflection were less important (although we note potential ceiling effects for numeracy). We thus find more support for an “integrative” account than a “classical reasoning” account of misinformation belief.
- Maximize when valuable: The domain specificity of maximizing
decision-making style --- Minfan Zhu --- Jun Wang --- Xiaofei Xie
The maximizing decision-making style describes the style of one who pursues maximum utility in decision-making, in contrast to the satisficing style, which describes the style of one who is satisfied with good enough options. The current research concentrates on the within-person variation in the maximizing decision-making style and provides an explanation through three studies. Study 1 (N = 530) developed a domain-specific maximizing scale and found that individuals had different maximizing tendencies across different domains. Studies 2 (N = 162) and 3 (N = 106) further explored this mechanism from the perspective of subjective task value through questionnaires and experiments. It was found that the within-person variation of maximization in different domains is driven by the difference in the individuals’ subjective task value in the corresponding domains. People tend to maximize more in the domains they value more. Our research contributes to a comprehensive understanding of maximization and provides a new perspective for the study of the maximizing decision-making style.
- Combining white box models, black box machines and human interventions for
interpretable decision strategies --- Gregory Gadzinski --- Alessio Castello
Granting a short-term loan is a critical decision. A great deal of research has concerned the prediction of credit default, notably through Machine Learning (ML) algorithms. However, given that their black-box nature has sometimes led to unwanted outcomes, comprehensibility in ML-guided decision-making strategies has become more important. In many domains, transparency and accountability are no longer optional. In this article, instead of pitting white-box models against black-box models, we use a multi-step procedure that combines the Fast and Frugal Tree (FFT) methodology of Martignon et al. (2005) and Phillips et al. (2017) with the extraction of post-hoc explainable information from ensemble ML models. New interpretable models are then built through the inclusion of explainable ML outputs chosen by human intervention. Our methodology significantly improves the accuracy of the FFT predictions while preserving their explainable nature. We apply our approach to a dataset of short-term loans granted to borrowers in the UK, and show how complex machine learning can challenge simpler machines and help decision makers.
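The FFT methodology itself is specified in the cited papers; as an illustrative sketch only, the fragment below shows the general shape of a fast and frugal tree for a loan decision. The cues and thresholds are hypothetical, not taken from the article: each cue is checked in a fixed order and can trigger an immediate exit, which is what keeps the strategy interpretable.

```python
def fft_loan_decision(applicant):
    """Minimal fast-and-frugal tree with hypothetical cues:
    every non-final cue has one exit branch, so a decision can
    be reached after inspecting a single cue."""
    if applicant["missed_payments"] > 2:
        return "reject"   # first exit: poor payment history
    if applicant["debt_to_income"] > 0.5:
        return "reject"   # second exit: over-indebtedness
    if applicant["months_employed"] >= 12:
        return "accept"   # third exit: stable employment
    return "reject"       # final branch

print(fft_loan_decision({"missed_payments": 0,
                         "debt_to_income": 0.2,
                         "months_employed": 24}))  # → accept
```

In the article's procedure, outputs extracted from ensemble ML models inform which cues and cut-offs a human analyst includes in trees of this form.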
- Expectations of how machines use individuating information and base-rates
--- Sarah D. English --- Stephanie Denison --- Ori Friedman
Machines are increasingly used to make decisions. We investigated people’s beliefs about how they do so. In six experiments, participants (total N = 2664) predicted how computer and human judges would decide legal cases on the basis of limited evidence — either individuating information from witness testimony or base-rate information. In Experiments 1 to 4, participants predicted that computer judges would be more likely than human ones to reach a guilty verdict, regardless of which kind of evidence was available. Besides asking about punishment, Experiment 5 also included conditions where the judge had to decide whether to reward suspected helpful behavior. Participants again predicted that computer judges would be more likely than human judges to decide based on the available evidence, but also predicted that computer judges would be relatively more punitive than human ones. Also, whereas participants predicted the human judge would give more weight to individuating than base-rate evidence, they expected the computer judge to be insensitive to the distinction between these kinds of evidence. Finally, Experiment 6 replicated the finding that people expect greater sensitivity to the distinction between individuating and base-rate information from humans than computers, but found that the use of cartoon images, as in the first four studies, prevented this effect. Overall, the findings suggest people expect machines to differ from humans in how they weigh different kinds of information when deciding.
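The distinction participants were asked about has a normative benchmark: Bayes' rule prescribes how a judge should combine base-rate and individuating evidence. A minimal sketch with hypothetical numbers (not taken from the experiments):

```python
def posterior_guilt(base_rate, likelihood_ratio):
    """Combine a base rate with individuating evidence via Bayes' rule.
    likelihood_ratio = P(evidence | guilty) / P(evidence | innocent)."""
    prior_odds = base_rate / (1 - base_rate)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Individuating evidence with likelihood ratio 5 shifts a 10%
# base rate to about 36%; uninformative evidence (LR = 1)
# leaves the base rate unchanged.
print(round(posterior_guilt(0.1, 5), 3))  # → 0.357
print(round(posterior_guilt(0.1, 1), 3))  # → 0.1
```

A judge who ignores base rates responds only to the likelihood ratio; a judge who ignores individuating evidence never moves off the base rate. The experiments probe which of these profiles people attribute to computer versus human judges.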