Authors:Xi Song, Yu Xie Abstract: Sociological Methods & Research, Ahead of Print. In this paper, we propose a method for constructing an occupation-based socioeconomic index that can easily incorporate changes in occupational structure. The resulting index is the occupational percentile rank for a given cohort, based on contemporaneous information pertaining to educational composition and the number of workers at the occupation level. An occupation may experience an increase or decrease in its occupational rank due to changes in relative sizes and educational compositions across occupations. The method is flexible in dealing with changes in occupational and educational measurements over time. Applying the method to U.S. history from the mid-nineteenth century to the present day, we derive the index using IPUMS U.S. Census microdata from 1850 to 2000 and the American Community Surveys (ACSs) from 2001 to 2018. Compared to previous occupational measures, this new measure takes into account the evolution of occupational status caused by long-term secular changes in occupational size and educational composition. The resulting percentile rank measure can be easily merged with social surveys and administrative data that include occupational measures based on the U.S. Census occupation codes and crosswalks. Citation: Sociological Methods & Research PubDate: 2023-11-09T08:49:57Z DOI: 10.1177/00491241231207914
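As an illustration of the percentile-rank construction described in the abstract, here is a minimal sketch. The occupations, mean-education values, and worker counts are hypothetical, and the scoring rule (order occupations by mean education, then assign each the midpoint of its cumulative worker-share interval) is one plausible reading of the method, not the authors' exact implementation.

```python
# Sketch: percentile ranks for occupations within one cohort.
# Occupations are ordered by mean education; each gets the midpoint
# of its cumulative worker-share interval as its percentile rank.

def percentile_ranks(occupations):
    """occupations: list of (name, mean_education, n_workers) tuples."""
    total = sum(n for _, _, n in occupations)
    ranked = sorted(occupations, key=lambda o: o[1])  # low to high education
    ranks, below = {}, 0.0
    for name, _, n in ranked:
        share = n / total
        ranks[name] = 100 * (below + share / 2)  # midpoint of the interval
        below += share
    return ranks

# Hypothetical cohort: (occupation, mean years of schooling, workers).
cohort = [("laborer", 8.0, 500), ("clerk", 12.0, 300), ("engineer", 16.0, 200)]
ranks = percentile_ranks(cohort)
```

Because the rank depends only on contemporaneous shares, recomputing it cohort by cohort automatically absorbs changes in occupational size and educational composition.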
Authors:Edoardo Costantini, Kyle M. Lang, Tim Reeskens, Klaas Sijtsma Abstract: Sociological Methods & Research, Ahead of Print. Including a large number of predictors in the imputation model underlying a multiple imputation (MI) procedure is one of the most challenging tasks imputers face. A variety of high-dimensional MI techniques can help, but there has been limited research on their relative performance. In this study, we investigated a wide range of extant high-dimensional MI techniques that can handle a large number of predictors in the imputation models and general missing data patterns. We assessed the relative performance of seven high-dimensional MI methods with a Monte Carlo simulation study and a resampling study based on real survey data. The performance of the methods was defined by the degree to which they facilitate unbiased and confidence-valid estimates of the parameters of complete data analysis models. We found that using lasso penalty or forward selection to select the predictors used in the MI model and using principal component analysis to reduce the dimensionality of auxiliary data produce the best results. Citation: Sociological Methods & Research PubDate: 2023-09-16T11:16:56Z DOI: 10.1177/00491241231200194
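One of the best-performing strategies in the study, reducing a high-dimensional auxiliary block to a few principal components before imputation, can be sketched as follows. The data, dimensions, and regression-based fill-in are hypothetical illustrations; a real application would embed this inside a proper multiple-imputation procedure rather than the single deterministic imputation shown here.

```python
# Sketch: principal components of a large auxiliary block used as
# imputation-model predictors (dimensions and data are hypothetical).
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 50, 3                      # cases, auxiliary variables, components
latent = rng.normal(size=(n, k))
aux = latent @ rng.normal(size=(k, p)) + 0.1 * rng.normal(size=(n, p))
y = latent @ np.array([1.0, -0.5, 0.3]) + 0.1 * rng.normal(size=n)
miss = rng.random(n) < 0.2                # flag ~20% of y as missing

# Principal-component scores of the auxiliary block via SVD.
centered = aux - aux.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
scores = U[:, :k] * s[:k]

# Fit the imputation model on observed rows, predict the missing ones.
X = np.column_stack([np.ones(n), scores])
beta, *_ = np.linalg.lstsq(X[~miss], y[~miss], rcond=None)
y_imp = y.copy()
y_imp[miss] = X[miss] @ beta
```

The appeal of the approach is that 50 correlated auxiliary variables enter the imputation model through only 3 component scores.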
Authors:Ari Decter-Frain, Pratik Sachdeva, Loren Collingwood, Hikari Murayama, Juandalyn Burke, Matt Barreto, Scott Henderson, Spencer Wood, Joshua Zingher Abstract: Sociological Methods & Research, Ahead of Print. We consider the cascading effects of researcher decisions throughout the process of quantifying racially polarized voting (RPV). We contrast three methods of estimating precinct racial composition, Bayesian Improved Surname Geocoding (BISG), fully Bayesian BISG, and Citizen Voting Age Population (CVAP), and two algorithms for performing ecological inference (EI), King’s EI and EI:RxC using eiCompare. Using data from two different elections we identify circumstances in which different combinations of methods produce divergent results, comparing against ground-truth data where available. We first find that BISG outperforms CVAP at estimating racial composition, though fully Bayesian BISG does not yield further improvements. Next, in a statewide election, we find that all combinations of methods yield similarly reliable estimates of RPV. However, county-level analyses and results from a non-partisan school board election reveal that BISG and CVAP produce divergent estimates of Black preferences in elections with low turnout and few precincts. Our results suggest that methodological choices can meaningfully alter conclusions about RPV, particularly in smaller, low-turnout elections. Citation: Sociological Methods & Research PubDate: 2023-08-29T04:41:39Z DOI: 10.1177/00491241231192383
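The BISG step compared above combines surname-based race probabilities with precinct composition via Bayes' rule. The sketch below uses the standard BISG posterior, P(race | surname, geo) ∝ P(race | surname) × P(race | geo) / P(race), which assumes surname and geography are independent given race; all probabilities are hypothetical toy values, not real census or surname-list figures.

```python
# Toy BISG update: combine surname-based race probabilities with the
# racial composition of a voter's precinct (all numbers hypothetical).

def bisg_posterior(p_race_given_surname, p_race_given_geo, p_race):
    unnorm = {r: p_race_given_surname[r] * p_race_given_geo[r] / p_race[r]
              for r in p_race}
    total = sum(unnorm.values())
    return {r: v / total for r, v in unnorm.items()}

p_surname = {"white": 0.70, "black": 0.20, "hispanic": 0.10}  # surname list
p_geo     = {"white": 0.30, "black": 0.60, "hispanic": 0.10}  # precinct shares
p_pop     = {"white": 0.60, "black": 0.25, "hispanic": 0.15}  # overall priors

post = bisg_posterior(p_surname, p_geo, p_pop)
```

Summing these posteriors over registered voters yields the precinct racial compositions that feed the ecological-inference step.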
Authors:John Ermisch Abstract: Sociological Methods & Research, Ahead of Print. Empirical analysis of variation in demographic events within the population is facilitated by using longitudinal survey data because of the richness of covariate measures in such data, but there is wave-on-wave dropout. When attrition is related to the event, it precludes consistent estimation of the impacts of covariates on the event and on event probabilities in the absence of additional assumptions. The paper introduces an adjustment procedure based on Bayes Theorem that directly addresses the problem of nonignorable dropout. It uses population information external to the survey sample to convert estimates of event probabilities and marginal effects of covariates on them that are conditional on retention in the longitudinal data to unconditional estimates of these quantities. In many plausible and verifiable circumstances, it produces estimates of the marginal effect of covariates closer to the true unconditional quantities than the conditional estimates obtained from estimation using the survey data alone. Citation: Sociological Methods & Research PubDate: 2023-08-17T06:16:58Z DOI: 10.1177/00491241231186659
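The Bayes-theorem identity at the core of the adjustment can be shown with a numeric toy. All three input numbers are hypothetical; the point is only that an external population benchmark lets one back out how strongly retention depends on the event.

```python
# Numeric sketch of the identity behind the adjustment:
# P(event) = P(event | retained) * P(retained) / P(retained | event),
# rearranged to recover differential retention from an external benchmark.
# All numbers are hypothetical.

p_event_given_retained = 0.12   # estimated from the longitudinal sample
p_retained = 0.80               # wave-on-wave retention rate
p_event_external = 0.15         # from official population statistics

# Differential retention implied by the external benchmark:
p_retained_given_event = p_event_given_retained * p_retained / p_event_external
# ~0.64 < 0.80: people who experience the event drop out more often,
# so sample-based event probabilities understate the population quantity.
```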
Authors:Markus Gangl Abstract: Sociological Methods & Research, Ahead of Print. Rating scales are ubiquitous in the social sciences, yet may present practical difficulties when response formats change over time or vary across surveys. To allow researchers to pool rating data across alternative question formats, the article provides a generalization of the ordered logit model that accommodates multiple scale formats in the measurement of a single rating construct. The resulting multiscale ordered logit model shares the interpretation as well as the proportional odds (or parallel lines) assumption with the standard ordered logit model. A further extension to relax the proportional odds assumption in the multiscale context is proposed, and the substitution of the logit with other convenient link functions is equally straightforward. The utility of the model is illustrated from an empirical analysis of the determinants of respondents’ confidence in democratic institutions that combines data from the European Social Survey, the General Social Survey, and the European and World Values Survey series. Citation: Sociological Methods & Research PubDate: 2023-08-09T07:06:48Z DOI: 10.1177/00491241231186655
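The multiscale idea, one latent rating construct with format-specific thresholds, can be sketched as ordinary ordered-logit category probabilities computed from the same linear index under two different cutpoint vectors. The index value and cutpoints below are hypothetical illustrations, not estimates from the article.

```python
# Sketch: one latent rating index, two response formats, each with its
# own cutpoints (values hypothetical). Category probabilities follow the
# usual ordered-logit construction.
import math

def logistic_cdf(z):
    return 1.0 / (1.0 + math.exp(-z))

def category_probs(eta, cutpoints):
    """Ordered-logit probabilities for latent index eta and a cutpoint vector."""
    cdf = [logistic_cdf(c - eta) for c in cutpoints]
    cdf = [0.0] + cdf + [1.0]
    return [cdf[j + 1] - cdf[j] for j in range(len(cdf) - 1)]

eta = 0.4  # same respondent-level index, expressed on two formats
probs_4pt  = category_probs(eta, [-1.5, 0.0, 1.5])            # 4-point scale
probs_10pt = category_probs(eta, [-2.0, -1.4, -0.8, -0.2,
                                  0.4, 1.0, 1.6, 2.2, 2.8])   # 10-point scale
```

Because the index eta is shared and only the cutpoints differ, data from both formats can identify the same regression coefficients under the proportional odds assumption.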
Authors:Sven Banisch, Hawal Shamon Abstract: Sociological Methods & Research, Ahead of Print. We combine empirical experimental research on biased argument processing with a computational theory of group deliberation to overcome the micro–macro problem of sociology and to clarify the role of biased processing in debates around energy. We integrate biased processing into the framework of argument communication theory in which agents exchange arguments about a certain topic and adapt opinions accordingly. Our derived mathematical model fits significantly better to the experimentally observed attitude changes than the neutral argument processing assumption made in previous models. Our approach provides new insight into the relationship between biased processing and opinion polarization. Our analysis reveals a sharp qualitative transition from attitude moderation to polarization at the individual level. At the collective level, we find that weak biased processing significantly accelerates group decision processes, whereas strong biased processing leads to a meta-stable conflictual state of bi-polarization that becomes persistent as the bias increases. Citation: Sociological Methods & Research PubDate: 2023-07-24T09:05:39Z DOI: 10.1177/00491241231186658
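A stylized agent-based sketch of argument exchange with biased processing is given below. The acceptance rule (a logistic function of attitude-argument congruence with bias strength beta, where beta = 0 recovers neutral processing) and all parameter values are illustrative toys, not the authors' exact model.

```python
# Stylized argument-communication toy: agents are likelier to accept
# arguments congruent with their current attitude (bias strength beta).
import math, random

random.seed(1)

def accept_prob(attitude, argument, beta):
    # argument is +1 (pro) or -1 (con); congruent arguments are favored.
    return 1.0 / (1.0 + math.exp(-beta * attitude * argument))

def simulate(n_agents=50, steps=2000, beta=4.0, step_size=0.05):
    attitudes = [random.uniform(-0.1, 0.1) for _ in range(n_agents)]
    for _ in range(steps):
        speaker, listener = random.sample(range(n_agents), 2)
        argument = 1 if attitudes[speaker] >= 0 else -1
        if random.random() < accept_prob(attitudes[listener], argument, beta):
            attitudes[listener] = max(-1.0, min(1.0,
                attitudes[listener] + step_size * argument))
    return attitudes

final = simulate()
```

Varying beta in such a toy is how one probes the transition the abstract describes, from moderation at weak bias to persistent bi-polarization at strong bias.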
Authors:Ulf Liebe, Sander van Cranenburgh, Caspar Chorus Abstract: Sociological Methods & Research, Ahead of Print. Empirical studies on individual behaviour often, implicitly or explicitly, assume a single type of decision rule. Other studies do not specify behavioural assumptions at all. We advance sociological research by introducing (random) regret minimization, which is related to loss aversion, into the sociological literature and by testing it against (random) utility maximization, which is the most prominent decision rule in sociological research on individual behaviour. With an application to neighbourhood choice, in a sample of four European cities, we combine stated choice experiment data and discrete choice modelling techniques and find a considerable degree of decision rule-heterogeneity, with a strong prevalence of regret minimization and hence loss aversion. We also provide indicative evidence that decision rules can affect expected neighbourhood demand at the macro level. Our approach allows identifying heterogeneity in decision rules, that is, the degree of regret/loss aversion, at the level of choice attributes such as the share of foreigners when comparing neighbourhoods, and can improve sociological practice related to linking theories and social research on decision-making. Citation: Sociological Methods & Research PubDate: 2023-07-19T06:37:03Z DOI: 10.1177/00491241231186657
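A regret-based choice rule in the spirit of random regret minimization can be sketched as follows: the regret of an alternative sums, over competitors and attributes, ln(1 + exp(beta * (x_competitor − x_own))), and choice probabilities are a logit over negative regret. The neighbourhood attributes and coefficients below are hypothetical.

```python
# Sketch of a regret-minimization choice rule (attribute values made up).
import math

def regret(i, alternatives, betas):
    r = 0.0
    for j, alt in enumerate(alternatives):
        if j == i:
            continue
        for x_own, x_other, b in zip(alternatives[i], alt, betas):
            r += math.log(1.0 + math.exp(b * (x_other - x_own)))
    return r

def choice_probs(alternatives, betas):
    regrets = [regret(i, alternatives, betas) for i in range(len(alternatives))]
    weights = [math.exp(-r) for r in regrets]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical neighbourhoods: (green space, safety) on arbitrary scales.
neighbourhoods = [(0.8, 0.4), (0.5, 0.7), (0.2, 0.9)]
probs = choice_probs(neighbourhoods, betas=(1.0, 1.5))
```

The asymmetry of ln(1 + exp(·)) is what encodes loss aversion: being outperformed on an attribute generates more regret than outperforming generates rejoicing.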
Authors:Adam N. Glynn, Miguel R. Rueda, Julian Schuessler Abstract: Sociological Methods & Research, Ahead of Print. Post-instrument covariates are often included as controls in instrumental variable (IV) analyses to address a violation of the exclusion restriction. However, we show that such analyses are subject to biases unless strong assumptions hold. Using linear constant-effects models, we present asymptotic bias formulas for three estimators (with and without measurement error): IV with post-instrument covariates, IV without post-instrument covariates, and ordinary least squares. In large samples, and when the model provides a reasonable approximation, these formulas sometimes allow the analyst to bracket the parameter of interest with two estimators and to choose the estimator with the least asymptotic bias. We illustrate these points with a discussion of the settler mortality IV used by Acemoglu, Johnson, and Robinson. Citation: Sociological Methods & Research PubDate: 2023-06-11T08:06:18Z DOI: 10.1177/00491241231156965
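A small simulation illustrates how OLS and IV acquire asymptotic biases of different sizes in a linear constant-effects setting. The data-generating values, a confounder that biases OLS upward and a small direct effect of the instrument that violates exclusion and biases IV, are hypothetical and chosen only for illustration; they do not reproduce the paper's formulas.

```python
# Simulation sketch: endogenous treatment d, instrument z with a small
# direct effect on y (exclusion violation). All values hypothetical.
import random

random.seed(7)
n, true_effect = 50_000, 2.0
y, d, z = [], [], []
for _ in range(n):
    zi = random.gauss(0, 1)
    u  = random.gauss(0, 1)                    # unobserved confounder
    di = 0.8 * zi + u + random.gauss(0, 1)
    yi = true_effect * di + 1.0 * u + 0.1 * zi + random.gauss(0, 1)
    z.append(zi); d.append(di); y.append(yi)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (w - mb) for x, w in zip(a, b)) / len(a)

beta_ols = cov(d, y) / cov(d, d)   # biased upward by the confounder u
beta_iv  = cov(z, y) / cov(z, d)   # biased by the direct effect of z
```

Here both estimators are inconsistent, but with different probability limits (about 2.41 and 2.13 for a true effect of 2.0 under this DGP); comparing such limits is what the paper's bias formulas make possible analytically.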
Authors:Anders Holm, Anders Hjorth-Trolle, Robert Andersen Abstract: Sociological Methods & Research, Ahead of Print. Lagged dependent variables (LDVs) are often used as predictors in ordinary least squares (OLS) models in the social sciences. Although several estimators are commonly employed, little is known about their relative merits in the presence of classical measurement error and different longitudinal processes. We assess the performance of four commonly used estimators: (1) the standard OLS estimator, (2) an average of past measures (AVG), (3) an instrumental variable (IV) measured at one period previously (IV), and (4) an IV derived from information from more than one time before (IV2). We also propose a new estimator for fixed effects models—the first difference instrumental variable (FDIV) estimator. After exploring the consistency of these estimators, we demonstrate their performance using an empirical application predicting primary school test scores. Our results demonstrate that for a Markov process with classic measurement error (CME), IV and IV2 estimators are generally consistent; LDV and AVG estimators are not. For a semi-Markov process, only the IV2 estimator is consistent. On the other hand, if fixed effects are included in the model, only the FDIV estimator is consistent. We end with advice on how to select the appropriate estimator. Citation: Sociological Methods & Research PubDate: 2023-06-08T06:42:49Z DOI: 10.1177/00491241231176845
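The attenuation-and-instrument logic can be shown in a minimal AR(1) simulation with classical measurement error. The parameter values are hypothetical, and the sketch covers only the Markov/CME case (where, per the abstract, the IV estimators are consistent and the naive LDV estimator is not), not the semi-Markov or fixed-effects cases.

```python
# Sketch: AR(1) process x observed as w = x + noise (classical measurement
# error). OLS on the observed lag is attenuated; instrumenting the lag with
# the twice-lagged observation recovers rho. Values are hypothetical.
import random

random.seed(3)
rho, n = 0.6, 100_000
x = [random.gauss(0, 1)]
for _ in range(n - 1):
    x.append(rho * x[-1] + random.gauss(0, 1))
w = [xi + random.gauss(0, 0.7) for xi in x]   # observed with CME

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / len(a)

y_t, w_lag, w_lag2 = w[2:], w[1:-1], w[:-2]
beta_ldv = cov(w_lag, y_t) / cov(w_lag, w_lag)    # attenuated below rho
beta_iv2 = cov(w_lag2, y_t) / cov(w_lag2, w_lag)  # consistent for rho
```

The twice-lagged observation works as an instrument because its measurement error is independent of the error contaminating the lagged regressor.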
Authors:Patricia Hadler Abstract: Sociological Methods & Research, Ahead of Print. Probes are follow-ups to survey questions used to gain insights on respondents’ understanding of and responses to these questions. They are usually administered as open-ended questions, primarily in the context of questionnaire pretesting. Due to the decreased cost of data collection for open-ended questions in web surveys, researchers have argued for embedding more open-ended probes in large-scale web surveys. However, there are concerns that this may cause reactivity and impact survey data. The study presents a randomized experiment in which identical survey questions were run with and without open-ended probes. Embedding open-ended probes resulted in higher levels of survey break off, as well as increased backtracking and answer changes to previous questions. In most cases, there was no impact of open-ended probes on the cognitive processing of and response to survey questions. Implications for embedding open-ended probes into web surveys are discussed. Citation: Sociological Methods & Research PubDate: 2023-06-01T06:42:47Z DOI: 10.1177/00491241231176846
Authors:Julian Schuessler, Peter Selb Abstract: Sociological Methods & Research, Ahead of Print. Directed acyclic graphs (DAGs) are now a popular tool to inform causal inferences. We discuss how DAGs can also be used to encode theoretical assumptions about nonprobability samples and survey nonresponse and to determine whether population quantities including conditional distributions and regressions can be identified. We describe sources of bias and assumptions for eliminating it in various selection scenarios. We then introduce and analyze graphical representations of multiple selection stages in the data collection process, and highlight the strong assumptions implicit in using only design weights. Furthermore, we show that the common practice of selecting adjustment variables based on correlations with sample selection and outcome variables of interest is ill-justified and that nonresponse weighting when the interest is in causal inference may come at severe costs. Finally, we identify further areas for survey methodology research that can benefit from advances in causal graph theory. Citation: Sociological Methods & Research PubDate: 2023-05-31T06:14:22Z DOI: 10.1177/00491241231176851
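The simplest selection structure a DAG exposes is the collider X → S ← Y: conditioning on being sampled (S) induces an association between variables that are independent in the population. The simulation below is a generic textbook illustration of that mechanism, not an analysis from the article, and all values are hypothetical.

```python
# Simulation sketch of collider selection bias: x and y are independent in
# the population, but both raise the chance of responding to the survey,
# so a spurious (negative) association appears among respondents.
import random

random.seed(5)
pop = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]
respondents = [(x, y) for x, y in pop if x + y + random.gauss(0, 1) > 1.0]

def corr(pairs):
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in pairs)
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / (sxx * syy) ** 0.5

r_pop, r_resp = corr(pop), corr(respondents)
```

A selection DAG makes visible exactly which adjustment sets do, and do not, remove such induced associations, which is why selecting weighting variables by correlation alone can fail.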
Authors:Myoung-jae Lee, Goeun Lee, Jin-young Choi Abstract: Sociological Methods & Research, Ahead of Print. A linear model is often used to find the effect of a binary treatment D on a noncontinuous outcome Y with covariates X. Particularly, a binary Y gives the popular “linear probability model (LPM),” but the linear model is untenable if X contains a continuous regressor. This raises the question: what kind of treatment effect does the ordinary least squares (OLS) estimator of the LPM estimate? This article shows that the OLS estimates a weighted average of the X-conditional heterogeneous effect plus a bias. Under the condition that E(D|X) is equal to the linear projection of D on X, the bias becomes zero, and the OLS estimates the “overlap-weighted average” of the X-conditional effect. Although the condition does not hold in general, counter-intuitively, specifying the X-part of the LPM such that the X-part predicts D well, not Y, minimizes the bias. This article also shows how to estimate the overlap-weighted average without the condition by using the “propensity-score residual” D − E(D|X). An empirical analysis demonstrates our points. Citation: Sociological Methods & Research PubDate: 2023-05-30T03:48:00Z DOI: 10.1177/00491241231176850
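The propensity-score-residual idea can be checked numerically: with a binary treatment, a known propensity score pi(x), and a heterogeneous effect tau(x), the no-intercept slope of the outcome on the residual (treatment minus pi(x)) targets the overlap-weighted average of tau(x), i.e. weights proportional to pi(x)(1 − pi(x)). The data-generating process below is hypothetical and uses the true (not estimated) propensity score.

```python
# Simulation sketch of the propensity-score-residual estimator
# (hypothetical DGP, true propensity score assumed known).
import math, random

random.seed(11)
n = 200_000
num = den = s_dy = s_dd = 0.0
for _ in range(n):
    x = random.gauss(0, 1)
    pi = 1.0 / (1.0 + math.exp(-x))          # propensity score
    d = 1 if random.random() < pi else 0
    tau = 1.0 + 0.5 * x                      # x-conditional effect
    y = tau * d + 2.0 * x + random.gauss(0, 1)
    resid = d - pi
    s_dy += resid * y
    s_dd += resid * resid
    w = pi * (1 - pi)                        # overlap weights
    num += w * tau
    den += w

beta_resid = s_dy / s_dd                     # residual-based estimator
overlap_avg = num / den                      # its target
```

The residual d − pi(x) is uncorrelated with any function of x, which is why the strong confounding term 2x drops out of the slope.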
Authors:Juan F. Muñoz, Pablo J. Moya-Fernández, Encarnación Álvarez-Verdejo Abstract: Sociological Methods & Research, Ahead of Print. The Gini index is probably the most commonly used indicator to measure inequality. For continuous distributions, the Gini index can be computed using several equivalent formulations. However, this is not the case with discrete distributions, where controversy remains regarding the expression to be used to estimate the Gini index. We attempt to bring a better understanding of the underlying problem by regrouping and classifying the most common estimators of the Gini index proposed in both infinite and finite populations, and focusing on the biases. We use Monte Carlo simulation studies to analyse the bias of the various estimators under a wide range of scenarios. Extremely large biases are observed in heavy-tailed distributions with high Gini indices, and bias corrections are recommended in this situation. We propose the use of some (new and traditional) bootstrap-based and jackknife-based strategies to mitigate this bias problem. Results are based on continuous distributions often used in the modelling of income distributions. We describe a simulation-based criterion for deciding when to use bias corrections. Various real data sets are used to illustrate the practical application of the suggested bias corrected procedures. Citation: Sociological Methods & Research PubDate: 2023-05-25T08:34:43Z DOI: 10.1177/00491241231176847
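Two of the discrete-data estimators at issue, and a standard jackknife bias correction, can be sketched directly. The formulas shown (mean absolute difference over 2n²μ, the n/(n − 1) rescaling, and the usual jackknife n·G − (n − 1)·mean of leave-one-out estimates) are common textbook variants offered for illustration; the incomes are hypothetical draws from a heavy-tailed Pareto distribution, the setting where the paper reports the largest biases.

```python
# Two common discrete-data Gini estimators plus a jackknife correction.
import random

def gini(x):
    n, mu = len(x), sum(x) / len(x)
    mad = sum(abs(a - b) for a in x for b in x)   # includes i == j pairs
    return mad / (2 * n * n * mu)

def gini_rescaled(x):
    n = len(x)
    return gini(x) * n / (n - 1)                  # small-sample rescaling

def gini_jackknife(x):
    n = len(x)
    loo = [gini(x[:i] + x[i + 1:]) for i in range(n)]
    return n * gini(x) - (n - 1) * sum(loo) / n

random.seed(2)
incomes = [random.paretovariate(3.0) for _ in range(100)]   # heavy-tailed
g1, g2, gj = gini(incomes), gini_rescaled(incomes), gini_jackknife(incomes)
```

With a perfectly equal distribution all three estimators return zero, and the rescaled version is always larger than the plain ratio, which is exactly the kind of systematic discrepancy the paper's simulations quantify.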
Authors:Eric W. Schoon Abstract: Sociological Methods & Research, Ahead of Print. This article explores how researchers adapt to disruptions that cost them access to their field sites, advancing a uniquely sociological perspective on the dynamics of flexibility and adaptation in qualitative methods. Through interviews with 31 ethnographers whose access was preempted or eliminated, I find that adaptation varied systematically based on when during the fieldwork process researchers' access was disrupted. The timing of the disruption shaped the relevance and implications of common conditions that affect fieldwork, such as funding availability, institutionalized time constraints, and sunk costs. Consequently, despite a lack of common conventions or training in how to adapt to losing access, adaptations took one of three general forms, which I refer to as turning home, pivoting, and following. I highlight specific challenges associated with each of these forms and offer insights for navigating them. Building from my findings, I make the case that the logistics of being flexible and adapting are part of a hidden curriculum in qualitative methods, and I discuss how interrogating the conditions that structure these aspects of fieldwork advances research and pedagogy in qualitative methodology. Citation: Sociological Methods & Research PubDate: 2023-05-19T06:55:20Z DOI: 10.1177/00491241231156961
Authors:Christoph Niessen Abstract: Sociological Methods & Research, Ahead of Print. In the wake of the methodological developments that aim to render qualitative comparative analysis (QCA) “time sensitive”, I propose a new procedure for carrying out QCA longitudinally. More specifically, I show first why longitudinal case disaggregation should be carried out with change-based intervals (CBIs) rather than with fixed intervals. Second, I develop a flexible lag condition (FLC) that (i) resolves two types of temporal contradictions and outcome redundancies that can result from temporal case disaggregation and (ii) makes it possible to measure the average duration it takes for a combination of conditions to translate to an outcome. Since temporal contradictions and outcome redundancies are most likely with an increasing number of time points and conditions, as well as with CBIs in general, the FLC procedure is most useful in these cases. The fact that the value of longitudinal analyses increases with the number of disaggregated cases underlines the usefulness of the proposed methodological innovation. Despite its suitability for mid-n and large-n analyses, longitudinal QCA with an FLC preserves a strong case-oriented and qualitative perspective and thereby remains faithful to QCA's original foundations. Citation: Sociological Methods & Research PubDate: 2023-04-25T07:24:17Z DOI: 10.1177/00491241231156967
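The change-based interval (CBI) disaggregation step can be sketched mechanically: a case's yearly configuration of crisp conditions is collapsed into one sub-case per spell during which the configuration stays constant. The years and condition values below are hypothetical, and the sketch omits the flexible lag condition itself.

```python
# Sketch of change-based interval (CBI) disaggregation (hypothetical data).

def change_based_intervals(years, configs):
    """configs: one tuple of crisp condition values per year; returns
    (start_year, end_year, configuration) spells."""
    spells = []
    start = years[0]
    for i in range(1, len(years)):
        if configs[i] != configs[i - 1]:
            spells.append((start, years[i - 1], configs[i - 1]))
            start = years[i]
    spells.append((start, years[-1], configs[-1]))
    return spells

years = [2000, 2001, 2002, 2003, 2004, 2005]
configs = [(1, 0), (1, 0), (1, 1), (1, 1), (1, 1), (0, 1)]
spells = change_based_intervals(years, configs)
```

Unlike fixed intervals, this yields exactly one sub-case per configuration spell, which is what avoids artificial duplication of unchanged configurations.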
Authors:Yuanmo He, Milena Tsvetkova Abstract: Sociological Methods & Research, Ahead of Print. The rise of social media has opened countless opportunities to explore social science questions with new data and methods. However, research on socioeconomic inequality remains constrained by limited individual-level socioeconomic status (SES) measures in digital trace data. Following Bourdieu, we argue that the commercial and entertainment accounts Twitter users follow reflect their economic and cultural capital. Adapting a political science method for inferring political ideology, we use correspondence analysis to estimate the SES of 3,482,652 Twitter users who follow the accounts of 339 brands in the United States. We validate our estimates with data from the Facebook Marketing application programming interface, self-reported job titles on users’ Twitter profiles, and a small survey sample. The results show reasonable correlations with the standard proxies for SES, alongside much weaker or nonsignificant correlations with other demographic variables. The proposed method opens new opportunities for innovative social research on inequality on Twitter and similar online platforms. Citation: Sociological Methods & Research PubDate: 2023-04-17T04:36:04Z DOI: 10.1177/00491241231168665
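The scaling step, correspondence analysis of a users-by-brands follow matrix, can be sketched in a few lines. The follow matrix below is a tiny hypothetical toy, and taking the first dimension as the SES axis is an illustrative assumption; the actual study works with millions of users and validates the recovered dimension externally.

```python
# Minimal correspondence-analysis sketch on a hypothetical users-by-brands
# follow matrix: SVD of standardized residuals; leading row scores give a
# one-dimensional placement of users.
import numpy as np

follows = np.array([[1, 1, 0, 0],    # users x brands (1 = follows)
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 1],
                    [1, 1, 1, 0]], dtype=float)

P = follows / follows.sum()
r, c = P.sum(axis=1), P.sum(axis=0)
expected = np.outer(r, c)
S = (P - expected) / np.sqrt(expected)       # standardized residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
row_scores = (U[:, 0] * sv[0]) / np.sqrt(r)  # principal coordinates, dim 1
```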
Authors:Thomas Suesse, David Steel, Mark Tranmer Abstract: Sociological Methods & Research, Ahead of Print. Multilevel models are often used to account for the hierarchical structure of social data and the inherent dependencies to produce estimates of regression coefficients, variance components associated with each level, and accurate standard errors. Social network analysis is another important approach to analysing complex data that incorporate the social relationships between a number of individuals. Extended linear regression models, such as network autoregressive models, have been proposed that include the social network information to account for the dependencies between persons. In this article, we propose three types of models that account for both the multilevel structure and the social network structure together, leading to network autoregressive multilevel models. We investigate theoretically and empirically, using simulated data and a data set from the Dutch Social Behavior study, the effect of omitting the levels and the social network on the estimates of the regression coefficients, variance components, network autocorrelation parameter, and standard errors. Citation: Sociological Methods & Research PubDate: 2023-03-15T10:58:29Z DOI: 10.1177/00491241231156972
Authors:Richard A. Berk, Arun Kumar Kuchibhotla, Eric Tchetgen Tchetgen Abstract: Sociological Methods & Research, Ahead of Print. In the United States and elsewhere, risk assessment algorithms are being used to help inform criminal justice decision-makers. A common intent is to forecast an offender’s “future dangerousness.” Such algorithms have been correctly criticized for potential unfairness, and there is an active cottage industry trying to make repairs. In this paper, we use counterfactual reasoning to consider the prospects for improved fairness when members of a disadvantaged class are treated by a risk algorithm as if they are members of an advantaged class. We combine a machine learning classifier trained in a novel manner with an optimal transport adjustment for the relevant joint probability distributions, which together provide a constructive response to claims of bias-in-bias-out. A key distinction is made between fairness claims that are empirically testable and fairness claims that are not. We then use confusion tables and conformal prediction sets to evaluate achieved fairness for estimated risk. Our data are a random sample of 300,000 offenders at their arraignments for a large metropolitan area in the United States during which decisions to release or detain are made. We show that substantial improvement in fairness can be achieved consistently with a Pareto improvement for legally protected classes. Citation: Sociological Methods & Research PubDate: 2023-03-13T08:51:09Z DOI: 10.1177/00491241231155883
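A group-wise confusion-table comparison of the kind used to evaluate achieved fairness can be sketched directly. The (group, true label, predicted label) records below are hypothetical; real evaluations would also attach uncertainty (e.g., the paper's conformal prediction sets) to these rates.

```python
# Sketch: false-positive and false-negative rates by protected class
# from confusion-table counts (records are hypothetical).

def error_rates(records, group):
    fp = fn = neg = pos = 0
    for g, truth, pred in records:
        if g != group:
            continue
        if truth == 0:
            neg += 1
            fp += (pred == 1)
        else:
            pos += 1
            fn += (pred == 0)
    return {"fpr": fp / neg, "fnr": fn / pos}

records = [("a", 0, 0), ("a", 0, 1), ("a", 1, 1), ("a", 1, 1),
           ("b", 0, 1), ("b", 0, 1), ("b", 1, 1), ("b", 1, 0)]
rates_a, rates_b = error_rates(records, "a"), error_rates(records, "b")
```

Gaps between the two dictionaries are the empirically testable fairness claims; equalizing them without worsening outcomes for either class is the Pareto improvement the abstract refers to.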
Authors:Rosanna Cole Abstract: Sociological Methods & Research, Ahead of Print. The use of inter-rater reliability (IRR) methods may provide an opportunity to improve the transparency and consistency of qualitative case study data analysis in terms of the rigor of how codes and constructs have been developed from the raw data. Few articles on qualitative research methods in the literature conduct IRR assessments, or they neglect to report them, despite some disclosure of multiple researcher teams and coding reconciliation in the work. The article argues that the in-depth discussion and reconciliation initiated by IRR may enhance the findings and theory that emerges from qualitative case study data analysis, where the main data source is often interview transcripts or field notes. To achieve this, the article provides a missing link in the literature between data gathering and analysis by expanding an existing process model from five to six stages. The article also identifies seven factors that researchers can consider to determine the suitability of IRR to their work and it offers an IRR checklist, thereby providing a contribution to the broader literature on qualitative research methods. Citation: Sociological Methods & Research PubDate: 2023-02-23T07:14:56Z DOI: 10.1177/00491241231156971
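For concreteness, the most common IRR statistic for two coders, Cohen's kappa, can be computed in a few lines. The coded transcript segments below are hypothetical; the article's checklist concerns when and how to deploy such statistics, not this particular formula.

```python
# Minimal inter-rater reliability sketch: Cohen's kappa for two coders
# over a shared set of codes (coded segments are hypothetical).
from collections import Counter

def cohens_kappa(coder1, coder2):
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    f1, f2 = Counter(coder1), Counter(coder2)
    expected = sum(f1[c] * f2[c] for c in f1) / (n * n)
    return (observed - expected) / (1 - expected)

coder1 = ["trust", "risk", "trust", "cost", "risk", "trust"]
coder2 = ["trust", "risk", "cost", "cost", "risk", "trust"]
kappa = cohens_kappa(coder1, coder2)
```

Kappa discounts raw agreement by the agreement expected from each coder's marginal code frequencies, which is why it is preferred to simple percent agreement.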
Authors:Costanza Tortú, Irene Crimaldi, Fabrizia Mealli, Laura Forastiere Abstract: Sociological Methods & Research, Ahead of Print. Policy evaluation studies, which assess the effect of an intervention, face statistical challenges: in real-world settings treatments are not randomly assigned and the analysis might be complicated by the presence of interference among units. Researchers have started to develop methods that make it possible to manage spillovers in observational studies; recent works focus primarily on binary treatments. However, many studies deal with more complex interventions. For instance, in political science, evaluating the impact of policies implemented by administrative entities often implies a multi-valued approach, as a policy towards a specific issue operates at many levels and can be defined along multiple dimensions. In this work, we extend the statistical framework about causal inference under network interference in observational studies, allowing for a multi-valued individual treatment and an interference structure shaped by a weighted network. The estimation strategy relies on a joint multiple generalized propensity score and allows one to estimate direct effects, controlling for both individual and network covariates. We follow this methodology to analyze the impact of the national immigration policy on the crime rate, analyzing data of 22 OECD countries over a thirty-year time frame. We define a multi-valued characterization of political attitude towards migrants and we assume that the extent to which each country can be influenced by another country is modeled by an indicator, summarizing their cultural and geographical proximity. Results suggest that implementing a highly restrictive immigration policy leads to an increase in the crime rate and the estimated effect is larger if we account for interference. Citation: Sociological Methods & Research PubDate: 2023-01-09T08:24:55Z DOI: 10.1177/00491241221147503
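The generalized-propensity-score building block for a multi-valued treatment can be sketched with inverse-probability weighting over a discrete covariate. The data-generating process is hypothetical, P(T = t | X) is estimated by simple cell frequencies, and network interference is ignored; the paper's joint multiple GPS handles both individual and network-level treatments, which this toy does not attempt.

```python
# Sketch: generalized propensity score for a multi-valued treatment,
# estimated by cell frequencies, with IPW estimates of E[Y(t)].
# Data are hypothetical; interference is ignored in this toy.
import random
from collections import defaultdict

random.seed(4)
data = []
for _ in range(30_000):
    x = random.choice([0, 1])                       # discrete covariate
    # treatment in {0, 1, 2}; distribution depends on x (confounding)
    t = random.choices([0, 1, 2], weights=[3, 2, 1] if x == 0 else [1, 2, 3])[0]
    y = 1.0 * t + 2.0 * x + random.gauss(0, 1)      # true effect: +1 per level
    data.append((x, t, y))

# Generalized propensity score: P(T = t | X = x) by cell frequency.
counts, totals = defaultdict(int), defaultdict(int)
for x, t, _ in data:
    counts[(x, t)] += 1
    totals[x] += 1
gps = {key: counts[key] / totals[key[0]] for key in counts}

# IPW estimates of the mean potential outcome at each treatment level.
means = {}
for level in (0, 1, 2):
    num = sum(y / gps[(x, t)] for x, t, y in data if t == level)
    means[level] = num / len(data)
```

Despite the confounding (higher x makes both restrictive treatment and high y more likely), the weighted means recover the one-unit-per-level effect.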
Authors:Yuan Hsiao, Lee Fiorio, Jonathan Wakefield, Emilio Zagheni Abstract: Sociological Methods & Research, Ahead of Print. Obtaining reliable and timely estimates of migration flows is critical for advancing migration theory and guiding policy decisions, but it remains a challenge. Digital data provide granular information on time and space, but do not draw from representative samples of the population, leading to biased estimates. We propose a method for combining digital data and official statistics by using the official statistics to model the spatial and temporal dependence structure of the biases of digital data. We use simulations to demonstrate the validity of the model, then empirically illustrate our approach by combining geo-located Twitter data with data from the American Community Survey (ACS) to estimate state-level out-migration probabilities in the United States. We show that our model, which combines unbiased and biased data, produces predictions that are more accurate than predictions based solely on unbiased data. Our approach demonstrates how digital data can be used to complement, rather than replace, official statistics. Citation: Sociological Methods & Research PubDate: 2023-01-02T11:37:55Z DOI: 10.1177/00491241221140144
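The core combination idea, the digital series is biased but its bias evolves smoothly enough to be modeled against past official statistics, can be shown with a deliberately simplified stand-in: fit a linear trend to the historical bias and subtract the extrapolated bias from the newest digital observation. The series values are hypothetical, and this trend extrapolation is only a caricature of the authors' model of spatial and temporal bias dependence.

```python
# Stylized sketch: correct a timely-but-biased digital series using the
# bias trend learned from overlap with official statistics.
# All values are hypothetical.

official = [0.020, 0.021, 0.022, 0.024]        # e.g. yearly out-migration rates
digital  = [0.035, 0.037, 0.039, 0.042, 0.045] # biased but has one extra year

# Model the bias as a linear trend over the overlapping years.
biases = [d - o for d, o in zip(digital, official)]
t = list(range(len(biases)))
mt, mb = sum(t) / len(t), sum(biases) / len(biases)
slope = (sum((ti - mt) * (bi - mb) for ti, bi in zip(t, biases))
         / sum((ti - mt) ** 2 for ti in t))

# Extrapolate the bias to the newest year and subtract it.
pred_bias = mb + slope * (len(biases) - mt)
corrected = digital[-1] - pred_bias
```

The corrected value inherits the timeliness of the digital source and the level of the official source, which is the sense in which digital data complement rather than replace official statistics.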