Abstract: The additive utility theory of discounting is extended to probability and commodity discounting. Because the utility of a good and the disutility of its delay combine additively, increases in the utility of a good offset the disutility of its delay: Increasing the former slows the apparent discount even while the latter, the time-disutility, remains invariant, giving the magnitude effect. Conjoint measurement showed the subjective value of money to be a logarithmic function of its amount, and subjective probability—the probability weighting function—to be Prelec’s (1998). This general theory of discounting (GTD) explains why large amounts are probability discounted more quickly, giving the negative magnitude effect. Whatever enhances the value of a delayed asset, such as its ability to satisfy diverse desires, offsets its delay and reduces discounting. Money’s liquidity permits optimization of the portfolio of desired goods, providing added value that accounts for its shallow temporal discount gradient. GTD predicts diversification effects for delay but none for probability discounting. Operations such as episodic future thinking that increase the larder of potential expenditures—the portfolio of desirable goods—increase the value of the asset, flattening the discount gradient. States that decrease the larder, such as stress, depression, and overweening focus on a single substance like a drug, constrict the portfolio, decreasing its utility and thereby steepening the gradient. GTD provides a unified account of delay, probability, and cross-commodity discounting. It explains the effects of motivational states, dispositions, and cognitive manipulations on discount gradients. (PsycInfo Database Record (c) 2023 APA, all rights reserved) PubDate: Thu, 14 Sep 2023 00:00:00 GMT DOI: 10.1037/rev0000447
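Since the abstract specifies the key functional forms (additive combination of utility and delay disutility, a logarithmic value function for money, and Prelec's 1998 weighting function), a minimal sketch of how additive combination yields the magnitude effect is possible. The linear time-disutility and all parameter values below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def utility(amount):
    # Logarithmic value of money, per the conjoint-measurement result;
    # the +1 offset is an illustrative choice so that u(0) = 0.
    return np.log1p(amount)

def delay_disutility(delay, k=0.3):
    # Hypothetical linear time-disutility; GTD requires only that it
    # combine additively with the utility of the good.
    return k * delay

def prelec_weight(p, alpha=0.65, beta=1.0):
    # Prelec (1998) probability weighting function: w(p) = exp(-beta * (-ln p)**alpha).
    return np.exp(-beta * (-np.log(p)) ** alpha)

def discounted_value(amount, delay):
    # Additive combination: utility of the good minus the disutility of waiting.
    return utility(amount) - delay_disutility(delay)

# Magnitude effect: a fixed delay cost is a smaller *fraction* of a larger
# utility, so larger amounts appear to be discounted more slowly even though
# the time-disutility itself is unchanged.
for amount in (10, 1000):
    retained = discounted_value(amount, delay=5) / utility(amount)
    print(f"${amount} after a 5-unit delay retains {retained:.0%} of its utility")

print(f"w(0.5) = {prelec_weight(0.5):.3f}")  # the weighting used for probability discounting
```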
Abstract: A key component of humans’ striking creativity in solving problems is our ability to construct novel descriptions to help us characterize novel concepts. Bongard problems (BPs), which challenge the problem solver to come up with a rule for distinguishing visual scenes that fall into two categories, provide an elegant test of this ability. BPs are challenging for both human and machine category learners because only a handful of example scenes are presented for each category, and they often require the open-ended creation of new descriptions. A new type of BP called physical Bongard problems (PBPs) is introduced, which requires solvers to perceive and predict the physical spatial dynamics implicit in the depicted scenes. The perceiving and testing hypotheses on structures (PATHS) computational model, which can solve many PBPs, is presented and compared to human performance on the same problems. PATHS and humans are similarly affected by the ordering of scenes within a PBP. Spatially or temporally juxtaposing similar (relative to dissimilar) scenes promotes category learning when the scenes belong to different categories but hinders learning when the similar scenes belong to the same category. The core theoretical commitments of PATHS, which we believe to also exemplify open-ended human category learning, are (a) the continual perception of new scene descriptions over the course of category learning; (b) the context-dependent nature of that perceptual process, in which the perceived scenes establish the context for the perception of subsequent scenes; (c) hypothesis construction by combining descriptions into explicit rules; and (d) bidirectional interactions between perceiving new aspects of scenes and constructing hypotheses for the rule that distinguishes categories. (PsycInfo Database Record (c) 2023 APA, all rights reserved) PubDate: Thu, 13 Jul 2023 00:00:00 GMT DOI: 10.1037/rev0000433
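As a rough illustration of commitment (c), hypothesis construction by combining descriptions into explicit rules, the toy sketch below treats scenes as sets of perceived feature labels and searches for conjunctions that hold in one category but not the other. The feature names are hypothetical, and the incremental, context-dependent perception central to PATHS is omitted.

```python
from itertools import combinations

def construct_rules(left_scenes, right_scenes, max_conjuncts=2):
    """Toy hypothesis construction: combine perceived feature descriptions
    into conjunctive rules and keep those that hold for every left scene
    and no right scene. Scene descriptions are sets of feature labels;
    PATHS perceives such descriptions incrementally and context-dependently,
    which this sketch omits."""
    features = sorted(set().union(*left_scenes, *right_scenes))
    rules = []
    for k in range(1, max_conjuncts + 1):
        for combo in combinations(features, k):
            rule = set(combo)
            if all(rule <= s for s in left_scenes) and not any(rule <= s for s in right_scenes):
                rules.append(" AND ".join(combo))
    return rules

# Hypothetical feature labels for four scenes of a two-category problem.
left = [{"circle", "moves-left"}, {"square", "moves-left"}]
right = [{"circle", "stationary"}, {"square", "moves-right"}]
print(construct_rules(left, right))  # ['moves-left']
```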
Abstract: Most words have multiple meanings, but there are foundationally distinct accounts for this. Categorical theories posit that humans maintain discrete entries for distinct word meanings, as in a dictionary. Continuous ones eschew discrete sense representations, arguing that word meanings are best characterized as trajectories through a continuous state space. Both kinds of approach face empirical challenges. In response, we introduce two novel “hybrid” theories, which reconcile discrete sense representations with a continuous view of word meaning. We then report on two behavioral experiments, pairing them with an analytical approach relying on neural language models to test these competing accounts. The experimental results are best explained by one of the novel hybrid accounts, which posits both distinct sense representations and a continuous meaning space. This hybrid account accommodates both the dynamic, context-dependent nature of word meaning, as well as the behavioral evidence for category-like structure in human lexical knowledge. We further develop and quantify the predictive power of several computational implementations of this hybrid account. These results raise questions for future research on lexical ambiguity, such as why and when discrete sense representations might emerge in the first place. They also connect to more general questions about the role of discrete versus gradient representations in cognitive processes and suggest that at least in this case, the best explanation is one that integrates both factors: Word meaning is both categorical and continuous. (PsycInfo Database Record (c) 2023 APA, all rights reserved) PubDate: Thu, 09 Mar 2023 00:00:00 GMT DOI: 10.1037/rev0000420
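A synthetic sketch of the hybrid account's central claim: token-level representations of an ambiguous word can simultaneously show discrete cluster structure (distinct senses) and continuous variation within each cluster. The vectors below are simulated for illustration; the paper works with real neural language model representations and behavioral data.

```python
import numpy as np

rng = np.random.default_rng(3)
# Two sense centroids (categorical structure), orthogonalized here so the
# illustration is clean, with continuous context-driven variation around
# each (gradient structure). Purely synthetic data.
sense_a = rng.normal(size=8)
sense_b = rng.normal(size=8)
sense_b -= (sense_b @ sense_a) / (sense_a @ sense_a) * sense_a
tokens = np.vstack([sense_a + 0.3 * rng.normal(size=8) for _ in range(20)]
                   + [sense_b + 0.3 * rng.normal(size=8) for _ in range(20)])

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

within = np.mean([cos(tokens[i], tokens[j]) for i in range(20) for j in range(i + 1, 20)])
across = np.mean([cos(tokens[i], tokens[j]) for i in range(20) for j in range(20, 40)])
print(f"mean within-sense similarity {within:.2f} vs. across-sense {across:.2f}")
# Discrete clusters and graded within-cluster variation coexist, as the
# hybrid account predicts.
```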
Abstract: Mindsets of ability (i.e., “fixed” and “growth” mindsets) play a pivotal role in students’ academic trajectories. However, relatively little is known about the mechanisms underlying mindset development. Identifying these mechanisms is vital for understanding, and potentially influencing, how mindsets emerge and change over time. In this article, we formulate a comprehensive theoretical model that purports to account for the emergence and development of ability mindsets: the process model of mindsets (PMM). The PMM is rooted in complex dynamic systems and enactive perspectives, which allow for conceptualizing psychological phenomena as dynamic and socially situated. The PMM accounts for how mindset-related behaviors, action tendencies, beliefs, and social interactions can become codependent and robust over time. We discuss how the model helps to further our understanding of the efficacy of mindset interventions and the heterogeneity thereof. The PMM has a broad explanatory scope, is generative, and paves the way for future process studies of mindsets and mindset interventions. (PsycInfo Database Record (c) 2023 APA, all rights reserved) PubDate: Mon, 06 Mar 2023 00:00:00 GMT DOI: 10.1037/rev0000425
Abstract: Extensive research in the behavioral sciences has addressed people’s ability to learn stationary probabilities, which stay constant over time, but only recently have there been attempts to model the cognitive processes whereby people learn—and track—nonstationary probabilities. In this context, the old debate on whether learning occurs by the gradual formation of associations or by occasional shifts between hypotheses representing beliefs about distal states of the world has resurfaced. Gallistel et al. (2014) pitted the two theories against each other in a nonstationary probability learning task. They concluded that various qualitative patterns in their data were incompatible with trial-by-trial associative learning and could only be explained by a hypothesis-testing model. Here, we contest that claim and demonstrate that it was premature. First, we argue that their experimental paradigm consisted of two distinct tasks: probability tracking (an estimation task) and change detection (a decision-making task). Next, we present a model that uses the (associative) delta learning rule for the probability tracking task and bounded evidence accumulation for the change detection task. We find that this combination of two highly established theories accounts well for all qualitative phenomena and outperforms the alternative model proposed by Gallistel et al. (2014) in a quantitative model comparison. In the spirit of cumulative science, we conclude that current experimental data on human learning of nonstationary probabilities can be explained as a combination of associative learning and bounded evidence accumulation and does not require a new model. (PsycInfo Database Record (c) 2023 APA, all rights reserved) PubDate: Thu, 12 Jan 2023 00:00:00 GMT DOI: 10.1037/rev0000410
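Both proposed mechanisms have standard textbook forms, so their combination can be sketched directly: a delta rule tracks the probability estimate while bounded accumulation of prediction errors (here a two-sided CUSUM with a drift allowance, one simple instantiation of bounded evidence accumulation) flags change points. All parameter values are illustrative, not the authors' fits.

```python
import numpy as np

def track_and_detect(outcomes, alpha=0.05, drift=0.2, bound=3.0):
    """Track a nonstationary Bernoulli probability with the delta rule and
    flag change points by bounded accumulation of prediction errors
    (a two-sided CUSUM). All parameter values are illustrative."""
    p_hat = 0.5
    up = down = 0.0                      # evidence for an upward / downward shift
    estimates, change_points = [], []
    for t, x in enumerate(outcomes):
        error = x - p_hat
        p_hat += alpha * error           # delta-rule (associative) update
        # Accumulate errors beyond a drift allowance; hitting the bound
        # means the errors are too one-sided to be mere noise.
        up = max(0.0, up + error - drift)
        down = max(0.0, down - error - drift)
        if up > bound or down > bound:
            change_points.append(t)
            up = down = 0.0
        estimates.append(p_hat)
    return estimates, change_points

# A step change from p = 0.2 to p = 0.8 at trial 200.
rng = np.random.default_rng(4)
outcomes = (rng.random(400) < np.repeat([0.2, 0.8], 200)).astype(int)
estimates, changes = track_and_detect(outcomes)
print(f"final estimate: {estimates[-1]:.2f}; changes detected near trials: {changes}")
```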
Abstract: Memory should make more available things that are more likely to be needed. Across multiple environmental domains, it has been shown that such a system would match qualitatively the memory effects involving repetition, delay, and spacing (Schooler & Anderson, 2017). To obtain data of sufficient size to study how detailed patterns of past appearance predict the probability of being needed again, we examined the patterns with which words appear in two large data sets: tweets from popular sources and comments on popular subreddits. The two data sets show remarkably similar statistics, which are also consistent with earlier, smaller studies of environmental statistics. None of a candidate set of mathematical models of memory does well at predicting the observed patterns in these environments. A new model of human memory based on the environmental model proposed by Anderson and Milson (1989) did better at predicting the environmental data and a wide range of behavioral studies that measure memory availability by probability of recall and speed of retrieval. A critical variable in this model was range, the span of time over which an item occurs, which was discovered in mining the environmental data. These results suggest that theories of memory can be guided by mining of the statistical structure of the environment. (PsycInfo Database Record (c) 2023 APA, all rights reserved) PubDate: Thu, 22 Dec 2022 00:00:00 GMT DOI: 10.1037/rev0000409
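The abstract does not give the model's equations, but the history variables it turns on are easy to compute. The sketch below extracts frequency, recency, and the critical range variable from an item's appearance history; the definitions are the obvious ones and should be read as assumptions rather than the paper's exact formulation.

```python
def history_features(timestamps, now):
    """Summary statistics of an item's appearance history that environmental
    analyses of memory relate to its probability of being needed again.
    The definitions are illustrative; the paper's model is not fully
    specified in the abstract."""
    frequency = len(timestamps)               # how often the item has occurred
    recency = now - max(timestamps)           # time since its last occurrence
    span = max(timestamps) - min(timestamps)  # "range": the span of time over which it occurs
    return frequency, recency, span

# A word seen on these days in, e.g., a stream of tweets (hypothetical data).
frequency, recency, span = history_features([1, 3, 4, 10, 40], now=50)
print(f"frequency={frequency}, recency={recency}, range={span}")
# Long-range items have recurred over an extended period, which the new model
# treats as evidence they will be needed again; recency and frequency matter too.
```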
Abstract: Self-control describes the processes by which individuals control their habits, desires, and impulses in the service of long-term goals. Research has identified important components of self-control and proposed theoretical frameworks integrating these components (e.g., Inzlicht et al., 2021; Kotabe & Hofmann, 2015). In our view, however, these frameworks do not yet fully incorporate important metacognitive aspects of self-control. We therefore introduce a framework explicating the role of metacognition for self-control. This framework extends existing frameworks, primarily from the domains of self-regulated learning and problem-solving (e.g., Schraw & Moshman, 1995; Zimmerman, 2000), and integrates past and contemporary research and theorizing on self-control that involves aspects of metacognition. It considers two groups of metacognitive components, namely, (a) individual metacognitive characteristics, that is, a person’s declarative, procedural, and conditional metacognitive knowledge about self-control, as well as their self-awareness (or metacognitive awareness), and (b) metacognitive regulatory processes that unfold before a self-control conflict (forethought and prevention), when a self-control conflict is identified, during a self-control conflict (regulation and monitoring), and after a self-control conflict (reflection and evaluation). The proposed framework integrates existing research and will be useful for highlighting new directions for research on the role of metacognition in self-control success and failure. (PsycInfo Database Record (c) 2023 APA, all rights reserved) PubDate: Thu, 15 Dec 2022 00:00:00 GMT DOI: 10.1037/rev0000406
Abstract: Free association among words is a fundamental and ubiquitous memory task. Although distributed semantics (DS) models can predict the association between pairs of words, and semantic network (SN) models can describe transition probabilities in free association data, there have been few attempts to apply established cognitive process models of memory search to free association data. Thus, researchers are currently unable to explain the dynamics of free association using memory mechanisms known to be at play in other retrieval tasks, such as free recall from lists. We address this issue using a popular neural network model of free recall, the context maintenance and retrieval (CMR) model, which we fit using stochastic gradient descent on a large data set of free association norms. Special cases of CMR mimic existing DS and SN models of free association, and we find that CMR outperforms these models on out-of-sample free association data. We also show that training CMR on free association data generates improved predictions for free recall from lists, demonstrating the value of free association for the study of many different types of memory phenomena. Overall, our analysis provides a new account of the dynamics of free association, predicts free association with increased accuracy, integrates theories of free association with established models of memory, and shows how large data sets and neural network training methods can be used to model complex cognitive processes that operate over thousands of representations. (PsycInfo Database Record (c) 2023 APA, all rights reserved) PubDate: Thu, 06 Oct 2022 00:00:00 GMT DOI: 10.1037/rev0000396
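The mechanism at the heart of CMR is a drifting context vector. The sketch below implements the standard CMR context recurrence, in which context moves toward the current item's input context while staying at unit length; the drift rate beta and the dimensionality are illustrative, and the full model's item-context association matrices and retrieval rule are omitted.

```python
import numpy as np

def update_context(c_prev, c_in, beta=0.6):
    """Standard CMR context recurrence: c_t = rho * c_{t-1} + beta * c_in,
    with rho chosen so the updated context stays at unit length.
    beta (the drift rate) is illustrative."""
    c_in = c_in / np.linalg.norm(c_in)
    dot = c_prev @ c_in
    rho = np.sqrt(1.0 + beta**2 * (dot**2 - 1.0)) - beta * dot
    c_new = rho * c_prev + beta * c_in
    return c_new / np.linalg.norm(c_new)  # guard against rounding drift

# Drift the context through three successive "items" (hypothetical vectors).
rng = np.random.default_rng(0)
c = rng.normal(size=16)
c /= np.linalg.norm(c)
for _ in range(3):
    c = update_context(c, rng.normal(size=16))
print(f"context norm after drift: {np.linalg.norm(c):.3f}")  # stays ~1.000
```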
Abstract: The general theory of deception (GTD) aims to unify and complete the various sparse theoretical units proposed in the deception literature to date within a comprehensive framework that describes, end to end, how deceptive messages are produced and how this can inform more effective prevention and detection. As part of the elaboration of the theory, the different ways people elaborate deceptive messages were first tracked by the authors daily, over 3 years, resulting in the identification, description, and naming of 99 “elementary deception modes” (87 verbal, 12 nonverbal) that can all be combined during one deceptive episode, thus leading to a total estimate of 10³⁰ different ways to lie. Central to the GTD is the “five forces model,” which explains precisely when deceptive messages occur and what factors compete to determine the types of messages most likely to be produced (truthful, refusal to answer, or deceptive—and with which deception modes). Finally, the process by which deceptive messages come to mind and are compared, both against each other and against the option of disclosing the truth, given memory’s capacity and time limits, is described in the form of a dynamic, continuous, and testable algorithm called the “deception decision algorithm” (DDA). The practical insights derived from this new disruptive theory of lie production are discussed, and a theory-based lie prevention and detection enhancement method is introduced. (PsycInfo Database Record (c) 2023 APA, all rights reserved) PubDate: Thu, 25 Aug 2022 00:00:00 GMT DOI: 10.1037/rev0000389
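The 10³⁰ figure is presumably combinatorial. Under the assumption that each of the 99 elementary modes can independently be present or absent in a single episode (the abstract does not state the counting rule), the arithmetic checks out:

```python
# If each of the 99 elementary deception modes can independently be present
# or absent in an episode (an assumed counting rule; the abstract does not
# state one), the number of possible combinations is 2**99.
print(f"2**99 = {2**99:.3e}")  # ~6.338e+29, i.e., on the order of 10^30
```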
Abstract: Central to perceptual dehumanization theory (PDT) is the claim that full engagement of a putative module for the visual analysis of faces is necessary in order to recognize the humanity or personhood of observed individuals. According to this view, the faces of outgroup members do not engage domain-specific face processing fully or typically and are instead processed in a manner akin to how the brain processes objects. Consequently, outgroup members are attributed less humanity than ingroup members. To the extent that groups are perceptually dehumanized, they are hypothesized to be vulnerable to harm. In our article, we challenge several of the fundamental assumptions underlying this theory and question the empirical evidence in its favor. We begin by illustrating the extent to which the existence of domain-specific face processing is contested within the vision science literature. Next, we interrogate empirical evidence that appears to support PDT and suggest that alternative explanations for prominent findings in the field are more likely. In the closing sections of the article, we reflect on the broader logic of the theory and highlight some underlying inconsistencies. (PsycInfo Database Record (c) 2023 APA, all rights reserved) PubDate: Thu, 28 Jul 2022 00:00:00 GMT DOI: 10.1037/rev0000388
Abstract: The open science framework has garnered increased visibility and has been partially implemented in recent years. Open science underscores the importance of transparency and reproducibility to conduct rigorous science. Recently, several journals published by the American Psychological Association have begun adopting the open science framework. At the same time, the field of psychology has been reckoning with the current sociopolitical climate regarding anti-Blackness and White supremacy. As psychology begins to adopt the open science framework into its journals, the authors underscore the importance of embracing and aligning open science with frameworks and theories that have the potential to move the field toward antiracism and away from the embedded White supremacy value systems and ideals. The present article provides an overview of the open science framework; an examination of White supremacy ideology in research and publishing; guidance on how to move away from these pernicious values; and a proposal on alternate value systems to center equity, diversity, and inclusion with the aim of establishing an antiracist open science framework. (PsycInfo Database Record (c) 2023 APA, all rights reserved) PubDate: Thu, 14 Jul 2022 00:00:00 GMT DOI: 10.1037/rev0000386
Abstract: Continuous-outcome decisions, in which responses are made on continuous scales, are increasingly used to study perception and memory for stimulus attributes like color, orientation, and motion. This interest has led to the development of models of continuous-outcome decision processes like the circular diffusion model that predict joint distributions of decision outcomes and response times (RTs). We use the circular diffusion model and a new spherical generalization of it to model performance in a continuous-outcome version of the random-dot motion task. The task is a benchmark test of decision models because it yields bimodal distributions of decision outcomes: In addition to a peak or mode in the true direction of motion, there is a secondary, antipodal, mode at 180° to the true direction. Models like the circular diffusion model, in which evidence is accumulated by a single process, are thought to be unable to predict bimodality. We compared diffusion models for the continuous motion task in which evidence is accumulated in either a two-dimensional (2D) or a three-dimensional (3D) representational space. We found that performance was well described by a spherical (3D) diffusion model in which the drift rate encodes perceived motion direction and strength and the points on the bounding sphere representing the decision criterion are projected onto a 2D circle to make a response. A model with an antipodal component of drift rate and drift-rate variability successfully predicted bimodal distributions of decision outcomes and the joint distributions of decision outcomes and RT for individual participants. (PsycInfo Database Record (c) 2023 APA, all rights reserved) PubDate: Mon, 04 Jul 2022 00:00:00 GMT DOI: 10.1037/rev0000377
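A minimal simulation of the winning model's accumulation process, under simplifying assumptions: constant drift, isotropic Gaussian noise, a unit bounding sphere, and projection of the hitting point to its azimuthal angle on the response circle. The antipodal drift component and drift-rate variability that produce the bimodal outcome distributions are omitted, and all parameter values are illustrative.

```python
import numpy as np

def spherical_diffusion_trial(drift, bound=1.0, dt=0.001, sigma=0.1, rng=None):
    """One trial of a 3D (spherical) diffusion: evidence accumulates from the
    origin with constant drift plus isotropic Gaussian noise until it reaches
    the bounding sphere; the hitting point's azimuth is taken as the reported
    direction. Parameters are illustrative, not fitted values."""
    rng = rng or np.random.default_rng()
    x = np.zeros(3)
    t = 0.0
    while np.linalg.norm(x) < bound:
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal(size=3)
        t += dt
    azimuth = np.degrees(np.arctan2(x[1], x[0])) % 360  # project onto the 2D response circle
    return azimuth, t

# The drift vector encodes perceived motion direction (here 0 degrees) and strength.
rng = np.random.default_rng(2)
drift = np.array([1.5, 0.0, 0.0])
for _ in range(5):
    angle, rt = spherical_diffusion_trial(drift, rng=rng)
    print(f"response: {angle:6.1f} deg, decision time: {rt:.2f} s")
```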
Abstract: Overprecision is the excessive certainty in the accuracy of one’s judgment. This article proposes a new theory to explain it. The theory holds that overprecision in judgment results from neglect of all the ways in which one could be wrong. When there are many ways to be wrong, it can be difficult to consider them all. Overprecision is the result of being wrong and not knowing it. This explanation can account for why question formats have such a dramatic influence on the degree of overprecision people report. It also explains the ubiquity of overprecision not only among people but also among artificially intelligent agents. (PsycInfo Database Record (c) 2023 APA, all rights reserved) PubDate: Thu, 05 May 2022 00:00:00 GMT DOI: 10.1037/rev0000370