Authors: Sara J. Weston, Ian Shryock, Ryan Light, Phillip A. Fisher
Abstract: Topic modeling is a type of text analysis that identifies clusters of co-occurring words, or latent topics. A challenging step of topic modeling is determining the number of topics to extract. This tutorial describes tools researchers can use to identify the number and labels of topics in topic modeling. First, we outline the procedure for narrowing down a large range of models to a select number of candidate models. This procedure involves comparing the large set on fit metrics, including exclusivity, residuals, variational lower bound, and semantic coherence. Next, we describe the comparison of a small number of models using project goals as a guide and information about topic representativeness and solution congruence. Finally, we describe tools for labeling topics, including frequent and exclusive words, key examples, and correlations among topics.
Citation: Advances in Methods and Practices in Psychological Science, Vol. 6, No. 2, April-June 2023. DOI: 10.1177/25152459231160105. Published online 2023-05-25.
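The model-comparison step the abstract describes lends itself to a short illustration. Tutorials of this kind are commonly run in R (e.g., the stm package), but as a minimal, language-neutral sketch, the snippet below uses Python's gensim to score candidate topic counts on one of the listed fit metrics, semantic coherence. The toy corpus and candidate range are invented for illustration; this is not the article's code.

```python
# Minimal sketch (not the article's code): compare candidate numbers of
# topics on semantic coherence, one of the fit metrics the tutorial lists.
# A real analysis would also weigh exclusivity, residuals, and the
# variational lower bound before shortlisting candidate models.
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

texts = [  # stand-in for a real tokenized corpus
    ["topic", "model", "cluster", "word", "latent"],
    ["fit", "metric", "coherence", "model", "bound"],
    ["label", "topic", "exclusive", "word", "frequent"],
    ["residual", "fit", "bound", "metric", "coherence"],
]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(doc) for doc in texts]

for k in range(2, 5):  # candidate topic counts
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k,
                   random_state=0, passes=10)
    coherence = CoherenceModel(model=lda, corpus=corpus,
                               coherence="u_mass").get_coherence()
    print(f"k={k}: semantic coherence (u_mass) = {coherence:.3f}")
```

In practice these scores are plotted across a wide range of k, and the handful of elbows or local maxima become the candidate models compared against project goals in the next step.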
Citation: Advances in Methods and Practices in Psychological Science, Vol. 6, No. 2, April-June 2023. DOI: 10.1177/25152459231175075. Published online 2023-05-23. No abstract available for this entry.
Authors: Jessica L. Fossum, Amanda K. Montoya
Abstract: Several options exist for conducting inference on indirect effects in mediation analysis. Although methods that use bootstrapping are the preferred inferential approach for testing mediation, they are time-consuming when the test must be performed many times for a power analysis. Alternatives that are more computationally efficient are not as robust, meaning accuracy of the inferences from these methods is more affected by nonnormal and heteroskedastic data. Previous research has shown that different sample sizes are needed to achieve the same amount of statistical power for different inferential approaches with data that meet all the statistical assumptions of linear regression. By contrast, we explore how similar power estimates are at the same sample size, including when assumptions are violated. We compare the power estimates from six inferential methods for between-subjects mediation using a Monte Carlo simulation study. We varied the path coefficients, inferential methods for the indirect effect, and degree to which assumptions are met. We found that when the assumptions of linear regression are met, three inferential methods consistently perform similarly: the joint significance test, the Monte Carlo confidence interval, and the percentile bootstrap confidence interval. When the assumptions were violated, the nonbootstrapping methods tended to have vastly different power estimates compared with the bootstrapping methods. On the basis of these results, we recommend using the more computationally efficient joint significance test for power analysis only when no assumption violations are hypothesized a priori. We also recommend the joint significance test to pick an optimal starting sample size value for power analysis using the percentile bootstrap confidence interval when assumption violations are suspected.
Citation: Advances in Methods and Practices in Psychological Science, Vol. 6, No. 2, April-June 2023. DOI: 10.1177/25152459231156606. Published online 2023-05-11.
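Two of the compared methods are compact enough to sketch. The following Python snippet (not the authors' simulation code) estimates the indirect effect a*b on simulated data and builds both a Monte Carlo confidence interval and a percentile bootstrap confidence interval; the sample size, path values, resample counts, and the helper ols_slope_se are all invented for illustration.

```python
# Sketch of two inferential methods for the indirect effect a*b:
# the Monte Carlo CI and the percentile bootstrap CI. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)              # a path
y = 0.3 * m + 0.1 * x + rng.normal(size=n)    # b path and direct effect c'

def ols_slope_se(pred, covs, outcome):
    """OLS slope and standard error for `pred`, controlling for `covs`."""
    X = np.column_stack([np.ones(len(outcome)), pred] + covs)
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    resid = outcome - X @ beta
    sigma2 = resid @ resid / (len(outcome) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1], np.sqrt(cov[1, 1])

a, se_a = ols_slope_se(x, [], m)       # regress M on X
b, se_b = ols_slope_se(m, [x], y)      # regress Y on M, controlling X

# Monte Carlo CI: sample a and b from their estimated sampling distributions.
draws = rng.normal(a, se_a, 10_000) * rng.normal(b, se_b, 10_000)
print("Monte Carlo 95% CI:", np.percentile(draws, [2.5, 97.5]))

# Percentile bootstrap CI: re-estimate a*b on resampled rows.
boot = np.empty(2_000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    a_i, _ = ols_slope_se(x[idx], [], m[idx])
    b_i, _ = ols_slope_se(m[idx], [x[idx]], y[idx])
    boot[i] = a_i * b_i
print("Bootstrap 95% CI:", np.percentile(boot, [2.5, 97.5]))
```

The joint significance test, by contrast, simply declares mediation when both a and b are individually significant, which is why it is so much cheaper inside a power-analysis loop: no resampling at all.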
Authors: Zhicheng Lin, Qi Ma, Yang Zhang
Abstract: Studies in vision, psychology, and neuroscience often present visual stimuli on digital screens. Crucially, the appearance of visual stimuli depends on properties such as luminance and color, making it critical to measure them. Yet conventional luminance-measuring equipment is not only expensive but also onerous to operate (particularly for novices). Building on previous work, here we present an open-source integrated software package, PsyCalibrator (https://github.com/yangzhangpsy/PsyCalibrator), that takes advantage of consumer hardware (SpyderX, Spyder5) and makes luminance/color measurement and gamma calibration accessible and flexible. Gamma calibration based on visual methods (without photometers) is also implemented. PsyCalibrator requires MATLAB (or its free alternative, GNU Octave) and works in Windows, macOS, and Linux. We first validated measurements from SpyderX and Spyder5 by comparing them with professional, high-cost photometers (ColorCAL MKII Colorimeter and Photo Research PR-670 SpectraScan). Validation results show (a) excellent accuracy in linear correction and luminance/color measurement and (b) for practical purposes, low measurement variances. We offer a detailed tutorial on using PsyCalibrator to measure luminance/color and calibrate displays. Finally, we recommend reporting templates to describe simple (e.g., computer-generated shapes) and complex (e.g., naturalistic images and videos) visual stimuli.
Citation: Advances in Methods and Practices in Psychological Science, Vol. 6, No. 2, April-June 2023. DOI: 10.1177/25152459221151151. Published online 2023-04-20.
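The core math behind gamma calibration is simple to show. The Python sketch below (conceptual only; PsyCalibrator itself is MATLAB/Octave, and this is not its API) fits a standard gamma curve to measured luminances and derives a linearizing lookup table. The measured values, gamma_fn helper, and starting guesses are invented.

```python
# Conceptual sketch of gamma calibration: fit
#   L(v) = L_min + (L_max - L_min) * (v / 255) ** gamma
# to photometer readings, then invert the curve to linearize the display.
# Measurements below are made up; scipy is assumed available.
import numpy as np
from scipy.optimize import curve_fit

def gamma_fn(v, l_min, l_max, gamma):
    return l_min + (l_max - l_min) * (v / 255.0) ** gamma

levels = np.array([0, 32, 64, 96, 128, 160, 192, 224, 255], dtype=float)
measured = np.array([0.3, 2.1, 7.8, 17.5, 31.9, 51.2, 76.0, 106.5, 143.0])  # cd/m^2

(l_min, l_max, gamma), _ = curve_fit(gamma_fn, levels, measured,
                                     p0=[0.5, 140.0, 2.2])
print(f"fitted gamma = {gamma:.2f}")

# Inverse lookup table: for each desired (linear) luminance step, find the
# input value whose predicted luminance is closest to it.
target = np.linspace(measured.min(), measured.max(), 256)
v_grid = np.arange(256, dtype=float)
predicted = gamma_fn(v_grid[:, None], l_min, l_max, gamma)
lut = v_grid[np.argmin(np.abs(predicted - target[None, :]), axis=0)]
print("LUT endpoints:", lut[0], lut[-1])
```

Visual (photometer-free) methods replace the `measured` vector with perceptual matches, but the fit-then-invert logic is the same.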
Authors: Yotam Erel, Katherine Adams Shannon, Junyi Chu, Kim Scott, Melissa Kline Struhl, Peng Cao, Xincheng Tan, Peter Hart, Gal Raz, Sabrina Piccolo, Catherine Mei, Christine Potter, Sagi Jaffe-Dax, Casey Lew-Williams, Joshua Tenenbaum, Katherine Fairchild, Amit Bermano, Shari Liu
Abstract: Technological advances in psychological research have enabled large-scale studies of human behavior and streamlined pipelines for automatic processing of data. However, studies of infants and children have not fully reaped these benefits because the behaviors of interest, such as gaze duration and direction, still have to be extracted from video through a laborious process of manual annotation, even when these data are collected online. Recent advances in computer vision raise the possibility of automated annotation of these video data. In this article, we built on a system for automatic gaze annotation in young children, iCatcher, by engineering improvements and then training and testing the system (referred to hereafter as iCatcher+) on three data sets with substantial video and participant variability (214 videos collected in U.S. lab and field sites, 143 videos collected in Senegal field sites, and 265 videos collected via webcams in homes; participant age range = 4 months–3.5 years). When trained on each of these data sets, iCatcher+ performed with near human-level accuracy on held-out videos at distinguishing "LEFT" versus "RIGHT" and "ON" versus "OFF" looking behavior across all data sets. This high performance was achieved at the level of individual frames, experimental trials, and study videos; held across participant demographics (e.g., age, race/ethnicity), participant behavior (e.g., movement, head position), and video characteristics (e.g., luminance); and generalized to a fourth, entirely held-out online data set. We close by discussing next steps required to fully automate the life cycle of online infant and child behavioral studies, representing a key step toward enabling robust and high-throughput developmental research.
Citation: Advances in Methods and Practices in Psychological Science, Vol. 6, No. 2, April-June 2023. DOI: 10.1177/25152459221147250. Published online 2023-04-19.
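To make the frame/trial/video evaluation levels concrete, here is a small illustrative Python sketch (not iCatcher+'s code) of the kind of roll-up involved: per-frame "LEFT"/"RIGHT"/"OFF" labels aggregated into trial-level looking measures. The trial_summary helper, labels, and frame rate are all hypothetical.

```python
# Illustrative roll-up from per-frame gaze labels to trial-level measures.
# Labels and frame rate are made up; this is not iCatcher+'s pipeline.
from collections import Counter

FPS = 30  # assumed video frame rate
frame_labels = ["LEFT"] * 45 + ["OFF"] * 5 + ["RIGHT"] * 40  # one toy trial

def trial_summary(labels, fps=FPS):
    counts = Counter(labels)
    on_frames = sum(v for k, v in counts.items() if k != "OFF")
    return {
        "looking_time_s": on_frames / fps,           # total "ON" looking
        "prop_left": counts["LEFT"] / on_frames if on_frames else 0.0,
        "majority_label": counts.most_common(1)[0][0],
    }

print(trial_summary(frame_labels))
# {'looking_time_s': 2.833..., 'prop_left': 0.529..., 'majority_label': 'LEFT'}
```

Accuracy that holds at the frame level but also after this kind of aggregation is what makes automated annotation usable as a drop-in replacement for human coders.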
Authors: Zachary J. Kunicki, Meghan L. Smith, Eleanor J. Murray
Abstract: Many psychological researchers use some form of a visual diagram in their research processes. Model diagrams used with structural equation models (SEMs) and causal directed acyclic graphs (DAGs) can guide causal-inference research. SEM diagrams and DAGs share visual similarities, often leading researchers familiar with one to wonder how the other differs. This article is intended to serve as a guide for researchers in the psychological sciences and psychiatric epidemiology on the distinctions between these methods. We offer high-level overviews of SEMs and causal DAGs using a guiding example. We then compare and contrast the two methodologies and describe when each would be used. In brief, SEM diagrams are both a conceptual and statistical tool in which a model is drawn and then tested, whereas causal DAGs are exclusively conceptual tools used to help guide researchers in developing an analytic strategy and interpreting results. Causal DAGs are explicitly tools for causal inference, whereas the results of an SEM are only sometimes interpreted causally. A DAG may be thought of as a "qualitative schematic" for some SEMs, whereas SEMs may be thought of as an "algebraic system" for a causal DAG. As psychology begins to adopt more causal-modeling concepts and psychiatric epidemiology begins to adopt more latent-variable concepts, the ability of researchers to understand and possibly combine both of these tools is valuable. Using an applied example, we provide sample analyses, code, and write-ups for both SEM and causal DAG approaches.
Citation: Advances in Methods and Practices in Psychological Science, Vol. 6, No. 2, April-June 2023. DOI: 10.1177/25152459231156085. Published online 2023-04-13.
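The "DAG as conceptual tool" point can be illustrated in a few lines. The Python sketch below (not the article's sample code; the variable names and the use of networkx are invented for illustration) encodes a toy DAG and reads off shared ancestors of exposure and outcome, a simple adjustment heuristic that suffices in this toy graph, though not a full back-door analysis.

```python
# Toy causal DAG: encode assumed causal structure, then use it to guide the
# analytic strategy (here, which variable to adjust for). Variable names
# are hypothetical; this is not the article's code.
import networkx as nx

dag = nx.DiGraph([
    ("SES", "stress"),        # SES causes the exposure...
    ("SES", "depression"),    # ...and the outcome: a confounder
    ("stress", "depression"), # exposure -> outcome, the effect of interest
])
assert nx.is_directed_acyclic_graph(dag)

exposure, outcome = "stress", "depression"
confounders = nx.ancestors(dag, exposure) & nx.ancestors(dag, outcome)
print("adjust for:", confounders)  # {'SES'}
```

Note that the DAG itself is never "fit" to data; it only tells the researcher which regression (here, depression on stress adjusting for SES) supports a causal reading, whereas an SEM would turn the same diagram into estimable equations with testable fit.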