Please help us test our new pre-print finding feature by giving the pre-print link a rating. A 5 star rating indicates the linked pre-print has the exact same content as the published article.
Authors:Thomas Suesse, David Steel, Mark Tranmer Abstract: Sociological Methods & Research, Ahead of Print. Multilevel models are often used to account for the hierarchical structure of social data and the inherent dependencies to produce estimates of regression coefficients, variance components associated with each level, and accurate standard errors. Social network analysis is another important approach to analysing complex data that incorporate the social relationships among a number of individuals. Extended linear regression models, such as network autoregressive models, have been proposed that include the social network information to account for the dependencies between persons. In this article, we propose three types of models that account for both the multilevel structure and the social network structure together, leading to network autoregressive multilevel models. We investigate theoretically and empirically, using simulated data and a data set from the Dutch Social Behavior study, the effect of omitting the levels and the social network on the estimates of the regression coefficients, variance components, network autocorrelation parameter, and standard errors. Citation: Sociological Methods & Research PubDate: 2023-03-15T10:58:29Z DOI: 10.1177/00491241231156972
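The core of a network autoregressive specification can be sketched in a few lines: with a row-normalized adjacency matrix W and autocorrelation parameter rho, the outcome satisfies y = (I - rho*W)^{-1} u, which for |rho| < 1 can be computed by simple fixed-point iteration. The two-node network and all values below are invented for illustration, not taken from the article.

```python
def network_autoregressive(W, u, rho, iters=200):
    """Solve y = u + rho * W y by fixed-point iteration (converges for |rho| < 1)."""
    y = list(u)
    for _ in range(iters):
        y = [u[i] + rho * sum(W[i][j] * y[j] for j in range(len(y)))
             for i in range(len(y))]
    return y

# Two mutually connected actors (row-normalized adjacency matrix).
W = [[0.0, 1.0], [1.0, 0.0]]
u = [1.0, 2.0]   # stands in for X*beta + e
rho = 0.5        # network autocorrelation parameter
y = network_autoregressive(W, u, rho)
# Closed form for two nodes: y1 = (u1 + rho*u2) / (1 - rho^2)
```

Each actor's outcome thus absorbs a rho-weighted share of its neighbors' outcomes, which is the dependence structure the proposed multilevel extensions account for.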
Authors:Richard A. Berk, Arun Kumar Kuchibhotla, Eric Tchetgen Tchetgen Abstract: Sociological Methods & Research, Ahead of Print. In the United States and elsewhere, risk assessment algorithms are being used to help inform criminal justice decision-makers. A common intent is to forecast an offender’s “future dangerousness.” Such algorithms have been correctly criticized for potential unfairness, and there is an active cottage industry trying to make repairs. In this paper, we use counterfactual reasoning to consider the prospects for improved fairness when members of a disadvantaged class are treated by a risk algorithm as if they are members of an advantaged class. We combine a machine learning classifier trained in a novel manner with an optimal transport adjustment for the relevant joint probability distributions, which together provide a constructive response to claims of bias-in-bias-out. A key distinction is made between fairness claims that are empirically testable and fairness claims that are not. We then use confusion tables and conformal prediction sets to evaluate achieved fairness for estimated risk. Our data are a random sample of 300,000 offenders at their arraignments for a large metropolitan area in the United States during which decisions to release or detain are made. We show that substantial improvement in fairness can be achieved consistently with a Pareto improvement for legally protected classes. Citation: Sociological Methods & Research PubDate: 2023-03-13T08:51:09Z DOI: 10.1177/00491241231155883
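In one dimension, the optimal transport idea of treating members of one group "as if" they belonged to another can be illustrated by quantile matching: each score in the source distribution is replaced by the equally ranked score in the target distribution. This is only a toy sketch of the general idea, not the article's estimator; the risk scores below are invented.

```python
def quantile_map(source, target):
    """Map each source value to the equally ranked value in target."""
    src_sorted = sorted(source)
    tgt_sorted = sorted(target)
    n, m = len(src_sorted), len(tgt_sorted)
    mapped = {}
    for rank, v in enumerate(src_sorted):
        # proportional rank in source -> index into the sorted target
        j = min(m - 1, int(rank / n * m))
        mapped[v] = tgt_sorted[j]
    return [mapped[v] for v in source]

# Hypothetical risk scores for two groups.
risk_disadvantaged = [0.9, 0.7, 0.8, 0.6]
risk_advantaged = [0.2, 0.4, 0.3, 0.1]
adjusted = quantile_map(risk_disadvantaged, risk_advantaged)
```

Mapping a distribution onto itself leaves it unchanged, so the adjustment only acts where the two groups' score distributions actually differ.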
Authors:Rosanna Cole Abstract: Sociological Methods & Research, Ahead of Print. The use of inter-rater reliability (IRR) methods may provide an opportunity to improve the transparency and consistency of qualitative case study data analysis in terms of the rigor of how codes and constructs have been developed from the raw data. Few articles on qualitative research methods in the literature conduct IRR assessments, or they neglect to report them, despite some disclosure of multiple researcher teams and coding reconciliation in the work. The article argues that the in-depth discussion and reconciliation initiated by IRR may enhance the findings and theory that emerge from qualitative case study data analysis, where the main data source is often interview transcripts or field notes. To achieve this, the article provides a missing link in the literature between data gathering and analysis by expanding an existing process model from five to six stages. The article also identifies seven factors that researchers can consider to determine the suitability of IRR to their work, and it offers an IRR checklist, thereby providing a contribution to the broader literature on qualitative research methods. Citation: Sociological Methods & Research PubDate: 2023-02-23T07:14:56Z DOI: 10.1177/00491241231156971
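A common IRR statistic for two coders is Cohen's kappa, which corrects raw agreement for the agreement expected by chance from each coder's label frequencies. A minimal sketch, with invented theme codes (the article itself discusses IRR more broadly, not this specific statistic):

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    c1, c2 = Counter(coder1), Counter(coder2)
    labels = set(coder1) | set(coder2)
    # Chance agreement: product of each coder's marginal label proportions.
    expected = sum(c1[l] / n * c2[l] / n for l in labels)
    return (observed - expected) / (1 - expected)

coder1 = ["theme_a", "theme_b", "theme_a", "theme_c", "theme_b"]
coder2 = ["theme_a", "theme_b", "theme_b", "theme_c", "theme_b"]
kappa = cohens_kappa(coder1, coder2)
```

Here the coders agree on 4 of 5 segments (0.8), chance agreement is 0.36, so kappa = (0.8 - 0.36) / 0.64 = 0.6875.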
Authors:John D. McCluskey, Craig D. Uchida Abstract: Sociological Methods & Research, Ahead of Print. Video data analysis (VDA) represents an important methodological framework for contemporary research approaches to the myriad footage available from cameras, devices, and phones. Footage from police body-worn cameras (BWCs) is anticipated to be a widely available platform for social science researchers to scrutinize the interactions between police and citizens. First, we examine issues of validity and reliability as related to BWCs in the context of VDA, based on an assessment of the quality of audio and video obtained from that platform. Second, we compare the coding of BWC footage obtained from a sample of police-citizen encounters to coding of the same events by on-scene coders using an instrument adapted from in-person systematic social observations (SSOs). Findings show that there are substantial and systematic audio and video gaps present in BWC footage as a source of data for social science investigation that likely impact the reliability of measures. Despite these problems, BWC data have substantial capacity for judging sequential developments, causal ordering, and the duration of events. Thus, the technology should open up theoretical frames that would be too cumbersome to pursue through in-person observation. Theoretical development with VDA in mind is suggested as an important pathway for future researchers in terms of framing data collection from BWCs and also suggesting areas where triangulation is essential. Citation: Sociological Methods & Research PubDate: 2023-02-20T08:53:07Z DOI: 10.1177/00491241231156968
Authors:Yoav Goldstein, Nicolas M. Legewie, Doron Shiffer-Sebba Abstract: Sociological Methods & Research, Ahead of Print. Video data offer important insights into social processes because they enable direct observation of real-life social interaction. Though such data have become abundant and increasingly accessible, they pose challenges to scalability and measurement. Computer vision (CV), i.e., software-based automated analysis of visual material, can help address these challenges, but existing CV tools are not sufficiently tailored to analyze social interactions. We describe our novel approach, “3D social research” (3DSR), which uses CV and 3D camera footage to study kinesics and proxemics, two core elements of social interaction. Using eight videos of a scripted interaction and five real-life street scene videos, we demonstrate how 3DSR expands sociologists’ analytical toolkit by facilitating a range of scalable and precise measurements. We specifically emphasize 3DSR's potential for analyzing physical distance, movement in space, and movement rate – important aspects of kinesics and proxemics in interactions. We also assess data reliability when using 3DSR. Citation: Sociological Methods & Research PubDate: 2023-02-15T05:51:02Z DOI: 10.1177/00491241221147495
Authors:Iddo Tavory Abstract: Sociological Methods & Research, Ahead of Print.
Citation: Sociological Methods & Research PubDate: 2023-02-03T08:44:51Z DOI: 10.1177/00491241221140431
Authors:Costanza Tortú, Irene Crimaldi, Fabrizia Mealli, Laura Forastiere Abstract: Sociological Methods & Research, Ahead of Print. Policy evaluation studies, which assess the effect of an intervention, face statistical challenges: in real-world settings treatments are not randomly assigned and the analysis might be complicated by the presence of interference among units. Researchers have started to develop methods for managing spillovers in observational studies; recent works focus primarily on binary treatments. However, many studies deal with more complex interventions. For instance, in political science, evaluating the impact of policies implemented by administrative entities often implies a multi-valued approach, as a policy towards a specific issue operates at many levels and can be defined along multiple dimensions. In this work, we extend the statistical framework about causal inference under network interference in observational studies, allowing for a multi-valued individual treatment and an interference structure shaped by a weighted network. The estimation strategy relies on a joint multiple generalized propensity score and allows one to estimate direct effects, controlling for both individual and network covariates. We follow this methodology to analyze the impact of national immigration policy on the crime rate, using data on 22 OECD countries over a thirty-year time frame. We define a multi-valued characterization of political attitude towards migrants and we assume that the extent to which each country can be influenced by another country is modeled by an indicator, summarizing their cultural and geographical proximity. Results suggest that implementing a highly restrictive immigration policy leads to an increase in the crime rate and the estimated effect is larger if we account for interference. Citation: Sociological Methods & Research PubDate: 2023-01-09T08:24:55Z DOI: 10.1177/00491241221147503
Authors:Yuan Hsiao, Lee Fiorio, Jonathan Wakefield, Emilio Zagheni Abstract: Sociological Methods & Research, Ahead of Print. Obtaining reliable and timely estimates of migration flows is critical for advancing migration theory and guiding policy decisions, but it remains a challenge. Digital data provide granular information on time and space, but do not draw from representative samples of the population, leading to biased estimates. We propose a method for combining digital data and official statistics by using the official statistics to model the spatial and temporal dependence structure of the biases of digital data. We use simulations to demonstrate the validity of the model, then empirically illustrate our approach by combining geo-located Twitter data with data from the American Community Survey (ACS) to estimate state-level out-migration probabilities in the United States. We show that our model, which combines unbiased and biased data, produces predictions that are more accurate than predictions based solely on unbiased data. Our approach demonstrates how digital data can be used to complement, rather than replace, official statistics. Citation: Sociological Methods & Research PubDate: 2023-01-02T11:37:55Z DOI: 10.1177/00491241221140144
Authors:Anna-Carolina Haensch, Jonathan Bartlett, Bernd Weiß Abstract: Sociological Methods & Research, Ahead of Print. Discrete-time survival analysis (DTSA) models are a popular way of modeling events in the social sciences. However, the analysis of discrete-time survival data is challenged by missing data in one or more covariates. Negative consequences of missing covariate data include efficiency losses and possible bias. A popular approach to circumventing these consequences is multiple imputation (MI). In MI, it is crucial to include outcome information in the imputation models. As there is little guidance on how to incorporate the observed outcome information into the imputation model of missing covariates in DTSA, we explore different existing approaches using fully conditional specification (FCS) MI and substantive-model compatible (SMC)-FCS MI. We extend SMC-FCS for DTSA and provide an implementation in the smcfcs R package. We compare the approaches using Monte Carlo simulations and demonstrate a good performance of the new approach compared to existing approaches. Citation: Sociological Methods & Research PubDate: 2022-12-22T07:01:04Z DOI: 10.1177/00491241221140147
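The person-period expansion underlying discrete-time survival analysis can be shown in a few lines: each spell becomes one record per interval at risk, with a binary outcome marking the interval in which the event occurs, after which the expanded data can be fed to a logistic model. Field names below are illustrative, not the article's.

```python
def person_period(person_id, duration, event):
    """Expand one spell into person-period records for a discrete-time model."""
    rows = []
    for t in range(1, duration + 1):
        # The outcome is 1 only in the final interval, and only if the
        # spell ended in an event rather than censoring.
        occurred = 1 if (event and t == duration) else 0
        rows.append({"id": person_id, "period": t, "y": occurred})
    return rows

rows = person_period("p1", duration=3, event=True)
```

A censored spell (`event=False`) yields the same rows with `y = 0` throughout, which is why missing covariates on any row of the expansion can bias the fitted hazards.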
Authors:Sandra Wankmüller Abstract: Sociological Methods & Research, Ahead of Print. Transformer-based models for transfer learning have the potential to achieve high prediction accuracies on text-based supervised learning tasks with relatively few training data instances. These models are thus likely to benefit social scientists who seek text-based measures that are as accurate as possible but have only limited resources for annotating training data. To enable social scientists to leverage these potential benefits for their research, this article explains how these methods work, why they might be advantageous, and what their limitations are. Additionally, three Transformer-based models for transfer learning, BERT, RoBERTa, and the Longformer, are compared to conventional machine learning algorithms on three applications. Across all evaluated tasks, textual styles, and training data set sizes, the conventional models are consistently outperformed by transfer learning with Transformers, thereby demonstrating the benefits these models can bring to text-based social science research. Citation: Sociological Methods & Research PubDate: 2022-12-20T11:59:34Z DOI: 10.1177/00491241221134527
Authors:Alina Arseniev-Koehler Abstract: Sociological Methods & Research, Ahead of Print. Measuring meaning is a central problem in cultural sociology and word embeddings may offer powerful new tools to do so. But like any tool, they build on and exert theoretical assumptions. In this paper, I theorize the ways in which word embeddings model three core premises of a structural linguistic theory of meaning: that meaning is coherent, relational, and may be analyzed as a static system. In certain ways, word embeddings are vulnerable to the enduring critiques of these premises. In other ways, word embeddings offer novel solutions to these critiques. More broadly, formalizing the study of meaning with word embeddings offers theoretical opportunities to clarify core concepts and debates in cultural sociology, such as the coherence of meaning. Just as network analysis specified the once vague notion of social relations, formalizing meaning with embeddings can push us to specify and reimagine meaning itself. Citation: Sociological Methods & Research PubDate: 2022-12-08T07:14:56Z DOI: 10.1177/00491241221140142
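The relational premise, that a word's meaning is its position relative to other words, is operationalized in embedding models by comparing vectors, most often with cosine similarity. A minimal sketch with hand-made 3-dimensional vectors (toy values, not trained embeddings):

```python
import math

def cosine(u, v):
    """Cosine similarity: angle-based closeness of two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Invented vectors standing in for trained word embeddings.
embeddings = {
    "doctor": [0.9, 0.3, 0.1],
    "nurse":  [0.8, 0.4, 0.2],
    "banana": [0.1, 0.0, 0.9],
}
sim_related = cosine(embeddings["doctor"], embeddings["nurse"])
sim_unrelated = cosine(embeddings["doctor"], embeddings["banana"])
```

Nothing about an individual vector is interpretable on its own; only these pairwise relations carry meaning, which is exactly the structural-linguistic premise the article examines.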
Authors:Alexandru Cernat, Joseph Sakshaug, Pablo Christmann, Tobias Gummer Abstract: Sociological Methods & Research, Ahead of Print. Mixed-mode surveys are popular as they can save costs and maintain (or improve) response rates relative to single-mode surveys. Nevertheless, it is not yet clear how design decisions like survey mode or questionnaire length impact measurement quality. In this study, we compare measurement quality in an experiment of three distinct survey designs implemented in the German sample of the European Values Study: a single-mode face-to-face design, a mixed-mode mail/web design, and a shorter (matrix) questionnaire in the mixed-mode design. We compare measurement quality in different ways, including differences in distributions across several data quality indicators as well as equivalence testing over 140 items in 25 attitudinal scales. We find similar data quality across the survey designs, although the mixed-mode survey shows more item nonresponse compared to the single-mode survey. Using equivalence testing we find that most scales achieve metric equivalence and, to a lesser extent, scalar equivalence across the designs. Citation: Sociological Methods & Research PubDate: 2022-12-05T05:48:52Z DOI: 10.1177/00491241221140139
Authors:Colin Jerolmack Abstract: Sociological Methods & Research, Ahead of Print. Ethnographic and interview research have made significant contributions to cumulative social science and influenced the public conversation around important social issues. However, debates rage over whether the standards of positivistic social science can or should be used to judge the rigor of interpretive methods. I begin this essay by briefly delineating the problem of developing evaluative criteria for qualitative research. I then explore the extent to which Small and Calarco's Qualitative Literacy helps advance a set of standards attuned to the distinct epistemology of interview and ethnographic methods. I argue that “qualitative literacy” is necessary but not sufficient to help readers decide whether a particular study is high quality. The reader also needs access to enough information about the researcher's data, field site, or subjects that she can independently reanalyze the researcher's interpretations and consider alternative explanations. I also touch on some important differences between ethnography and interviewing that matter for how we evaluate them. Citation: Sociological Methods & Research PubDate: 2022-12-05T05:02:51Z DOI: 10.1177/00491241221140429
Authors:Stefanie DeLuca Abstract: Sociological Methods & Research, Ahead of Print. Increasingly, the broader public, media and policymakers are looking to qualitative research to provide answers to our most pressing social questions. While an exciting and perhaps overdue moment for qualitative researchers, it is also a time when the method is coming under increasing scrutiny for a lack of reliability and transparency. The question of how to assess the quality of qualitative research is therefore paramount, but the field still lacks clear standards to evaluate qualitative work. In their new book, Qualitative Literacy, Mario Luis Small and Jessica McCrory Calarco aim to fill this gap. I argue that Qualitative Literacy offers a compelling set of standards for consumers to assess whether an in-depth interview or participant observation was of sufficient quality and, to an extent, whether sufficient time was spent in the field. However, by ignoring the vital importance of employing systematic, well-justified, and transparent sampling strategies, the implication is that such essential criteria can be ignored, undermining the potential contribution of qualitative research to a more cumulative creation of scientific knowledge. Citation: Sociological Methods & Research PubDate: 2022-12-05T04:57:30Z DOI: 10.1177/00491241221140425
Authors:Salomé Do, Étienne Ollion, Rubing Shen Abstract: Sociological Methods & Research, Ahead of Print. The last decade witnessed a spectacular rise in the volume of available textual data. With this new abundance came the question of how to analyze it. In the social sciences, scholars mostly resorted to two well-established approaches, human annotation on sampled data on the one hand (either performed by the researcher, or outsourced to microworkers), and quantitative methods on the other. Each approach has its own merits - a potentially very fine-grained analysis for the former, a very scalable one for the latter - but the combination of these two properties has not yielded highly accurate results so far. Leveraging recent advances in sequential transfer learning, we demonstrate via an experiment that an expert can train a precise, efficient automatic classifier in a very limited amount of time. We also show that, under certain conditions, expert-trained models produce better annotations than humans themselves. We demonstrate these points using a classic research question in the sociology of journalism, the rise of a “horse race” coverage of politics. We conclude that recent advances in transfer learning help us augment ourselves when analyzing unstructured data. Citation: Sociological Methods & Research PubDate: 2022-12-05T04:56:51Z DOI: 10.1177/00491241221134526
Authors:Jack Katz Abstract: Sociological Methods & Research, Ahead of Print. Taking a sociological view, we can investigate the empirical consequences of variations in the rhetoric of sociological methodology. The standards advocated in Qualitative Literacy divide communities of qualitative researchers, as they are not explicitly connected to an understanding of social ontology, unlike previous qualitative methodologies; they continue the long-growing segregation of the rhetorical worlds of qualitative and quantitative research methodology; and they draw attention to the personal competencies of the researcher. I compare a rhetoric of qualitative methodology that: derives evaluation criteria from perspectives on social ontology that have been developing progressively since the early twentieth century; applies the discipline-wide evaluation criteria of reactivity, reliability, representativeness, and replicability; and asks evaluators to focus on the adequacy of the textual depiction of research subjects. Citation: Sociological Methods & Research PubDate: 2022-11-30T07:38:28Z DOI: 10.1177/00491241221140427
Authors:John Levi Martin Abstract: Sociological Methods & Research, Ahead of Print. Small and Calarco have done the field a great service; we must go further and arm readers with better understandings of when authors have in fact fulfilled Small and Calarco’s strictures. Citation: Sociological Methods & Research PubDate: 2022-11-29T06:00:50Z DOI: 10.1177/00491241221140426
Authors:Rosa W. Runhardt Abstract: Sociological Methods & Research, Ahead of Print. This article uses the interventionist theory of causation, a counterfactual theory taken from philosophy of science, to strengthen causal analysis in process tracing research. Causal claims from process tracing are re-expressed in terms of so-called hypothetical interventions, and concrete evidential tests are proposed which are shown to corroborate process tracing claims. In particular, three steps are prescribed for an interventionist investigation, and each step in turn is shown to make the causal analysis more robust, among other things by disambiguating causal claims and clarifying or strengthening the existing methodological advice on counterfactual analysis. The article's claims are then illustrated using a concrete example, Haggard and Kaufman's analysis of the Argentinian transition to democracy. It is shown that interventionism could have strengthened the authors’ conclusions. The article concludes with a short Bayesian analysis of its key methodological proposals. Citation: Sociological Methods & Research PubDate: 2022-11-24T10:25:39Z DOI: 10.1177/00491241221134523
Authors:O. Smallenbroek, F. Hertel, C. Barone Abstract: Sociological Methods & Research, Ahead of Print. In social stratification research, the most frequently used social class schemes are based on employment relations (EGP and ESEC). These schemes have become paradigms for research on social mobility and educational inequalities and have been applied in cross-national research for both genders. Using the European Working Conditions Survey, we examine their criterion and construct validity across 31 countries and for both genders. We investigate whether classes are well delineated by the theoretically assumed dimensions of employment relations, and we assess how several measures of occupational advantage differ across classes. We find broad similarity in the criterion validity of EGP and ESEC across genders and countries as well as satisfactory levels of construct validity. However, the salariat classes are too heterogeneous and their boundaries with the intermediate classes are blurred. To improve the measurement of social class, we propose to differentiate managerial and professional occupations within the lower and higher salariat, respectively. We show that implementing these distinctions in ESEC and EGP improves their criterion validity and allows one to better identify privileged positions. Citation: Sociological Methods & Research PubDate: 2022-11-11T09:08:53Z DOI: 10.1177/00491241221134522
Authors:Weihua An Abstract: Sociological Methods & Research, Ahead of Print. Egocentric networks represent a popular research design for network research. However, to what extent and under what conditions egocentric centrality measures can serve as reasonable substitutes for their sociocentric counterparts are important questions to study. The answers to these questions are uncertain simply because of the large variety of networks. Hence, this paper aims to provide exploratory answers to these questions by analyzing both empirical and simulated data. Through analyses of various empirical networks (including some classic albeit small ones), this paper shows that egocentric betweenness approximates sociocentric betweenness quite well (the correlation is high across almost all the networks being examined) while egocentric closeness approximates sociocentric closeness only reasonably well (the correlation is a bit lower on average with a larger variance across networks). Simulations also confirm this finding. Analyses further show that egocentric approximations of betweenness and closeness seem to work well in different types of networks (as featured by network size, density, centralization, reciprocity, transitivity, and geodistance). Lastly, the paper briefly presents three ideas to help improve egocentric approximations of centrality measures. Citation: Sociological Methods & Research PubDate: 2022-09-22T05:05:56Z DOI: 10.1177/00491241221122606
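Egocentric betweenness has a simple closed form: within an ego network, non-adjacent alters can only reach each other via length-2 paths, so ego's betweenness is the sum over non-adjacent alter pairs of 1 divided by the number of their shared neighbors in the ego network (ego included). This follows the commonly cited Everett-Borgatti formulation, which is one way (not necessarily the article's) to compute the egocentric side of the comparison; the toy graphs are invented.

```python
from itertools import combinations

def ego_betweenness(alters, edges):
    """Ego's betweenness in its ego network.

    alters: list of alter ids; edges: set of frozenset pairs among alters.
    """
    score = 0.0
    for i, j in combinations(alters, 2):
        if frozenset((i, j)) in edges:
            continue  # adjacent alters do not route through ego
        shared = 1  # ego is always a shared neighbor of any two alters
        shared += sum(
            1 for k in alters
            if k not in (i, j)
            and frozenset((i, k)) in edges
            and frozenset((j, k)) in edges
        )
        score += 1.0 / shared
    return score

# Star around ego: three mutually unconnected alters, so ego brokers all pairs.
alters = ["a", "b", "c"]
star_score = ego_betweenness(alters, set())
```

For the star, ego lies on the only path between each of the 3 alter pairs, giving a score of 3.0; adding an alter-alter tie removes that pair from ego's brokerage.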
Authors:Nathaniel Josephs, Dennis M. Feehan, Forrest W. Crawford Abstract: Sociological Methods & Research, Ahead of Print. The network scale-up method (NSUM) is a survey-based method for estimating the number of individuals in a hidden or hard-to-reach subgroup of a general population. In NSUM surveys, sampled individuals report how many others they know in the subpopulation of interest (e.g. “How many sex workers do you know?”) and how many others they know in subpopulations of the general population (e.g. “How many bus drivers do you know?”). NSUM is widely used to estimate the size of important sociological and epidemiological risk groups, including men who have sex with men, sex workers, HIV+ individuals, and drug users. Unlike several other methods for population size estimation, NSUM requires only a single random sample and the estimator has a conveniently simple form. Despite its popularity, there are no published guidelines for the minimum sample size calculation to achieve a desired statistical precision. Here, we provide a sample size formula that can be employed in any NSUM survey. We show analytically and by simulation that the sample size controls error at the nominal rate and is robust to some forms of network model mis-specification. We apply this methodology to study the minimum sample size and relative error properties of several published NSUM surveys. Citation: Sociological Methods & Research PubDate: 2022-09-14T05:18:57Z DOI: 10.1177/00491241221122576
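The "conveniently simple form" of the basic scale-up estimator can be sketched directly: degrees are estimated from the known-population questions, and the hidden-group size is then N times the ratio of total hidden-group reports to total estimated degrees. The respondents, probe-group sizes, and counts below are invented for illustration.

```python
N = 1_000_000                  # total population size
known_sizes = [5_000, 20_000]  # sizes of two probe groups (e.g. bus drivers)

# Each respondent reports how many people they know in each probe group
# and how many they know in the hidden group.
respondents = [
    {"known": [1, 3], "hidden": 2},
    {"known": [0, 2], "hidden": 1},
]

# Degree estimate: scale total probe-group contacts by N / (total probe size).
total_known_size = sum(known_sizes)
degrees = [N * sum(r["known"]) / total_known_size for r in respondents]

# Basic scale-up estimate of the hidden subpopulation size.
hidden_reports = sum(r["hidden"] for r in respondents)
hidden_size = N * hidden_reports / sum(degrees)
```

With these toy numbers the degree estimates are 160 and 80, so the hidden-group estimate is 1,000,000 * 3 / 240 = 12,500; the article's contribution is a sample size formula governing the precision of exactly this kind of ratio.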
Authors:Giuseppe Arena, Joris Mulder, Roger Th. A.J. Leenders Abstract: Sociological Methods & Research, Ahead of Print. In relational event networks, the tendency for actors to interact with each other depends greatly on the past interactions between the actors in a social network. Both the volume of past interactions and the time that has elapsed since the past interactions affect the actors’ decision-making to interact with other actors in the network. Events that occurred recently may have a stronger influence on current interaction behavior than events from long ago, a phenomenon known as “memory decay”. Previous studies either predefined a short-run and long-run memory or fixed a parametric exponential memory decay using a predefined half-life period. In real-life relational event networks, however, it is generally unknown how the influence of past events fades as time goes by. For this reason, it is not advisable to fix memory decay in an ad hoc manner; instead, the shape of memory decay should be learned from the observed data. In this paper, a novel semi-parametric approach based on Bayesian Model Averaging is proposed for learning the shape of the memory decay without requiring any parametric assumptions. The method is applied to relational event history data among socio-political actors in India, and a comparison with other relational event models based on predefined memory decays is provided. Citation: Sociological Methods & Research PubDate: 2022-08-16T05:36:56Z DOI: 10.1177/00491241221113875
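The fixed parametric decay that the article argues against hard-coding looks like this: each past event is down-weighted exponentially in elapsed time, with weight 0.5 at exactly one half-life. The half-life and event times below are invented for illustration; the article's contribution is to learn this shape from data rather than assume it.

```python
import math

def memory_weight(elapsed, half_life):
    """Exponential memory decay: weight 1 now, 0.5 after one half-life."""
    return math.exp(-math.log(2.0) * elapsed / half_life)

# Weighted "volume" of past interactions for one dyad, given the time
# elapsed since each past event and a predefined half-life.
elapsed_times = [1.0, 5.0, 20.0]
half_life = 5.0
weighted_volume = sum(memory_weight(t, half_life) for t in elapsed_times)
```

Choosing a different half-life changes every weight, which is precisely why a predefined value can distort inferences when the true decay shape is unknown.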
Authors:Xin Guo, Qiang Fu Abstract: Sociological Methods & Research, Ahead of Print. Grouped and right-censored (GRC) counts have been used in a wide range of attitudinal and behavioural surveys, yet they cannot be readily analyzed or assessed by conventional statistical models. This study develops a unified regression framework for the design and optimality of GRC counts in surveys. To process infinitely many grouping schemes for the optimum design, we propose a new two-stage algorithm, the Fisher Information Maximizer (FIM), which utilizes estimates from generalized linear models to find a global optimal grouping scheme among all possible [math]-group schemes. After we define, decompose, and calculate different types of regressor-specific design errors, our analyses from both simulation and empirical examples suggest that: 1) the optimum design of GRC counts is able to reduce the grouping error to zero, 2) the performance of modified Poisson estimators using GRC counts can be comparable to that of Poisson regression, and 3) the optimum design is usually able to achieve the same estimation efficiency with a smaller sample size. Citation: Sociological Methods & Research PubDate: 2022-08-08T07:28:38Z DOI: 10.1177/00491241221113877
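The quantity a Fisher-information-maximizing design compares across grouping schemes can be sketched for a single Poisson count. This is an illustrative computation under assumed notation, not the paper's FIM algorithm: for a partition of {0, 1, 2, …} into groups, the information about the rate is FI = Σ_g p_g′(λ)² / p_g(λ), using d/dλ pmf(k) = pmf(k−1) − pmf(k):

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def grc_fisher_info(lam, cuts):
    """Fisher information about a Poisson rate `lam` carried by a grouped,
    right-censored report. `cuts` lists group upper bounds, e.g. cuts=[0, 2]
    means groups {0}, {1, 2}, and the censored tail {3 or more}."""
    groups, lo = [], 0
    for hi in cuts:
        groups.append(range(lo, hi + 1))
        lo = hi + 1
    p, dp = [], []
    for g in groups:
        p.append(sum(poisson_pmf(k, lam) for k in g))
        dp.append(sum((poisson_pmf(k - 1, lam) if k > 0 else 0.0)
                      - poisson_pmf(k, lam) for k in g))
    # Right-censored top group: probabilities sum to 1, derivatives to 0.
    tail_p, tail_dp = 1.0 - sum(p), -sum(dp)
    if tail_p > 0:
        p.append(tail_p)
        dp.append(tail_dp)
    return sum(d * d / q for d, q in zip(dp, p))
```

Fine groupings recover nearly the full-data information 1/λ, while coarse groupings lose information; comparing schemes on this scale is what makes an "optimum design" well defined.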
Authors:Roderick J. Little, James R. Carpenter, Katherine J. Lee Abstract: Sociological Methods & Research, Ahead of Print. Missing data are a pervasive problem in data analysis. Three common methods for addressing the problem are (a) complete-case analysis, where only units that are complete on the variables in an analysis are included; (b) weighting, where the complete cases are weighted by the inverse of an estimate of the probability of being complete; and (c) multiple imputation (MI), where missing values of the variables in the analysis are imputed as draws from their predictive distribution under an implicit or explicit statistical model, the imputation process is repeated to create multiple filled-in data sets, and analysis is carried out using simple MI combining rules. This article provides a non-technical discussion of the strengths and weaknesses of these approaches, and when each of the methods might be adopted over the others. The methods are illustrated on data from the Youth Cohort (Time) Series (YCS) for England, Wales and Scotland, 1984–2002. Citation: Sociological Methods & Research PubDate: 2022-08-05T07:15:18Z DOI: 10.1177/00491241221113873
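The "simple MI combining rules" mentioned in (c) are Rubin's rules: average the m point estimates, and combine within- and between-imputation variance. A minimal sketch (generic formulas, not the article's code):

```python
def pool_mi(estimates, variances):
    """Combine point estimates and their within-imputation variances from
    m imputed data sets using Rubin's rules."""
    m = len(estimates)
    qbar = sum(estimates) / m                              # pooled estimate
    w = sum(variances) / m                                 # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation variance
    total = w + (1 + 1 / m) * b                            # total variance
    return qbar, total
```

The between-imputation term is what propagates uncertainty about the missing values into the final standard error, which complete-case analysis and naive single imputation cannot do.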
Authors:Xiang Zhou Abstract: Sociological Methods & Research, Ahead of Print. A growing body of social science research investigates whether the economic payoff to a college education is heterogeneous — in particular, whether disadvantaged youth can benefit more from attending and completing college relative to their more advantaged peers. Scholars, however, have employed different analytical strategies and reported mixed findings. To shed light on this literature, I propose a causal mediation approach to conceptualizing, evaluating, and unpacking the causal effects of college on earnings. By decomposing the total effect of attending a four-year college into several direct and indirect components, this approach not only clarifies the mechanisms through which college attendance boosts earnings, but illuminates the ways in which the postsecondary system may be both an equalizer and a stratifier. The total effect of college attendance, its direct and indirect components, and their heterogeneity across different subpopulations are all identified under the assumption of sequential ignorability. I introduce a debiased machine learning (DML) method for estimating all quantities of interest, along with a set of bias formulas for sensitivity analysis. I illustrate the proposed framework and methodology using data from the National Longitudinal Survey of Youth, 1997 cohort. Citation: Sociological Methods & Research PubDate: 2022-08-01T08:01:51Z DOI: 10.1177/00491241221113876
Authors:Guanglei Hong, Ha-Joon Chung Abstract: Sociological Methods & Research, Ahead of Print. The impact of a major historical event on child and youth development has been of great interest in the study of the life course. This study is focused on assessing the causal effect of the Great Recession on youth disconnection from school and work. Building on the insights offered by the age-period-cohort research, econometric methods, and developmental psychology, we innovatively develop a causal inference strategy that takes advantage of the multiple successive birth cohorts in the National Longitudinal Study of Youth 1997. The causal effect of the Great Recession is defined in terms of counterfactual developmental trajectories and can be identified under the assumption of short-term stable differences between the birth cohorts in the absence of the Great Recession. A meta-analysis aggregates the estimated effects over six between-cohort comparisons. Furthermore, we conduct a sensitivity analysis to assess the potential consequences if the identification assumption is violated. The findings contribute new evidence on how precipitous and pervasive economic hardship may disrupt youth development by gender and class of origin. Citation: Sociological Methods & Research PubDate: 2022-07-27T06:37:23Z DOI: 10.1177/00491241221113871
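Aggregating estimated effects over six between-cohort comparisons is, in the simplest case, inverse-variance pooling. A generic fixed-effect sketch (the paper's meta-analysis may use a different weighting scheme):

```python
def fixed_effect_meta(estimates, ses):
    """Fixed-effect (inverse-variance) pooling of per-comparison effect
    estimates with their standard errors."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se
```

More precise comparisons get more weight, and the pooled standard error shrinks as comparisons accumulate.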
Authors:Carlos Cinelli, Andrew Forney, Judea Pearl Abstract: Sociological Methods & Research, Ahead of Print. Many students of statistics and econometrics express frustration with the way a problem known as “bad control” is treated in the traditional literature. The issue arises when the addition of a variable to a regression equation produces an unintended discrepancy between the regression coefficient and the effect that the coefficient is intended to represent. Avoiding such discrepancies presents a challenge to all analysts in the data intensive sciences. This note describes graphical tools for understanding, visualizing, and resolving the problem through a series of illustrative examples. By making this “crash course” accessible to instructors and practitioners, we hope to avail these tools to a broader community of scientists concerned with the causal interpretation of regression models. Citation: Sociological Methods & Research PubDate: 2022-05-20T08:30:25Z DOI: 10.1177/00491241221099552
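The canonical "bad control" is a collider: a variable caused by both treatment and outcome. The discrepancy is easy to demonstrate by simulation (an illustrative sketch under an assumed data-generating process, not an example from the article):

```python
import random

def simulate_bad_control(n=20000, seed=1):
    """Monte Carlo illustration of a collider (X -> C <- Y): adjusting
    for C distorts the regression coefficient on X. All variables are
    mean-zero, so no intercept is needed."""
    rng = random.Random(seed)
    xs, ys, cs = [], [], []
    for _ in range(n):
        x = rng.gauss(0, 1)
        y = x + rng.gauss(0, 1)          # true effect of X on Y is 1
        c = x + y + rng.gauss(0, 1)      # collider
        xs.append(x); ys.append(y); cs.append(c)
    # Unadjusted OLS slope of Y on X.
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    unadjusted = sxy / sxx
    # OLS of Y on (X, C): solve the 2x2 normal equations.
    scc = sum(c * c for c in cs)
    sxc = sum(x * c for x, c in zip(xs, cs))
    scy = sum(c * y for c, y in zip(cs, ys))
    det = sxx * scc - sxc * sxc
    adjusted = (sxy * scc - sxc * scy) / det
    return unadjusted, adjusted
```

In this particular setup the population value of the adjusted coefficient is exactly zero, so "controlling for more" erases a real effect; a causal graph makes the reason visible at a glance.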
Authors:Jose M. Pavía, Rafael Romero Abstract: Sociological Methods & Research, Ahead of Print. The estimation of RxC ecological inference contingency tables from aggregate data is one of the most salient and challenging problems in the field of quantitative social sciences, with major solutions proposed from both the ecological regression and the mathematical programming frameworks. In recent decades, there has been a drive to find solutions stemming from the former, with the latter being less active. From the mathematical programming framework, this paper suggests a new direction for tackling this problem. For the first time in the literature, a procedure based on linear programming is proposed to attain estimates of local contingency tables. Based on this and the homogeneity hypothesis, we suggest two new ecological inference algorithms. These two new algorithms represent an important step forward in the ecological inference mathematical programming literature. In addition to generating estimates for local ecological inference contingency tables and amending the tendency to produce extreme transfer probability estimates previously observed in other mathematical programming procedures, these two new algorithms prove to be quite competitive and more accurate than the current linear programming baseline algorithm. Their accuracy is assessed using a unique dataset with almost 500 elections, where the real transfer matrices are known, and their sensitivity to assumptions and limitations is gauged through an extensive simulation study. The new algorithms place the linear programming approach once again in a prominent position in the ecological inference toolkit. Interested readers can use these new algorithms easily with the aid of the R package lphom. Citation: Sociological Methods & Research PubDate: 2022-05-16T07:43:06Z DOI: 10.1177/00491241221092725
Authors:Shiyu Zhang, James Wagner Abstract: Sociological Methods & Research, Ahead of Print. Adaptive survey design refers to using targeted procedures to recruit different sampled cases. This technique strives to reduce bias and variance of survey estimates by trying to recruit a larger and more balanced set of respondents. However, it is not well understood how adaptive design can improve data and survey estimates beyond the well-established post-survey adjustment. This paper reports the results of an experiment that evaluated the additional effect of adaptive design beyond post-survey adjustments. The experiment was conducted in the Detroit Metro Area Communities Study in 2021. We evaluated the adaptive design on five outcomes: 1) response rates, 2) demographic composition of respondents, 3) bias and variance of key survey estimates, 4) changes in significant results of regression models, and 5) costs. The most significant benefit of the adaptive design was its ability to generate more efficient survey estimates with smaller variances and smaller design effects. Citation: Sociological Methods & Research PubDate: 2022-05-13T03:12:18Z DOI: 10.1177/00491241221099550
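The "smaller design effects" reported here have a standard approximation: Kish's formula, which ties the variance inflation of a weighted estimate to the variability of the weights. A generic sketch (a textbook formula, not this study's computation):

```python
def kish_deff(weights):
    """Kish's approximate design effect due to unequal weighting:
    deff = 1 + CV^2 = n * sum(w^2) / (sum w)^2."""
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2
```

A more balanced respondent pool needs less extreme post-survey weights, so the design effect moves toward 1 and effective sample size grows.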
Authors:Wim Bernasco, Evelien M. Hoeben, Dennis Koelma, Lasse Suonperä Liebst, Josephine Thomas, Joska Appelman, Cees G. M. Snoek, Marie Rosenkrantz Lindegaard Abstract: Sociological Methods & Research, Ahead of Print. Social scientists increasingly use video data, but large-scale analysis of its content is often constrained by scarce manual coding resources. Upscaling may be possible with the application of automated coding procedures, which are being developed in the field of computer vision. Here, we introduce computer vision to social scientists, review the state-of-the-art in relevant subfields, and provide a working example of how computer vision can be applied in empirical sociological work. Our application involves defining a ground truth by human coders, developing an algorithm for automated coding, testing the performance of the algorithm against the ground truth, and running the algorithm on a large-scale dataset of CCTV images. The working example concerns monitoring social distancing behavior in public space over more than a year of the COVID-19 pandemic. Finally, we discuss prospects for the use of computer vision in empirical social science research and address technical and ethical challenges. Citation: Sociological Methods & Research PubDate: 2022-05-09T03:35:10Z DOI: 10.1177/00491241221099554
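Once the detection step has mapped persons to ground-plane coordinates, the social distancing measurement itself reduces to a pairwise distance check. An illustrative sketch only; the authors' CCTV pipeline (calibration, detection, tracking) is far more involved, and the 1.5 m threshold is an assumption:

```python
def distancing_violations(positions, min_dist=1.5):
    """Count pairs of detected persons closer than `min_dist` meters,
    given (x, y) ground-plane positions for one frame."""
    n = len(positions)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if (dx * dx + dy * dy) ** 0.5 < min_dist:
                count += 1
    return count
```

Aggregating such per-frame counts over time is what turns automated coding into a longitudinal behavioral measure.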
Authors:Bart Meuleman, Tomasz Żółtak, Artur Pokropek, Eldad Davidov, Bengt Muthén, Daniel L. Oberski, Jaak Billiet, Peter Schmidt Abstract: Sociological Methods & Research, Ahead of Print. Welzel et al. (2021) claim that non-invariance of instruments is inconclusive and inconsequential in the field of cross-cultural value measurement. In this response, we contend that several key arguments on which Welzel et al. (2021) base their critique of invariance testing are conceptually and statistically incorrect. First, Welzel et al. (2021) claim that value measurement follows a formative rather than reflective logic. Yet they do not provide sufficient theoretical arguments for this conceptualization, nor do they discuss the disadvantages of this approach for validation of instruments. Second, their claim that strong inter-item correlations cannot be retrieved when means are close to the endpoint of scales ignores the existence of factor-analytic approaches for ordered-categorical indicators. Third, Welzel et al. (2021) propose that rather than relying on invariance tests, comparability can be assessed by studying the connection with theoretically related constructs. However, their proposal ignores that external validation through nomological linkages hinges on the assumption of comparability. By means of two examples, we illustrate that violating the assumptions of measurement invariance can distort conclusions substantially. Following the advice of Welzel et al. (2021) implies discarding a tool that has proven to be very useful for comparativists. Citation: Sociological Methods & Research PubDate: 2022-04-22T06:54:37Z DOI: 10.1177/00491241221091755
Authors:Christian Welzel, Stefan Kruse, Lennart Brunkert Abstract: Sociological Methods & Research, Ahead of Print. Our original 2021 SMR article “Non-Invariance? An Overstated Problem with Misconceived Causes” disputes the conclusiveness of non-invariance diagnostics in diverse cross-cultural settings. Our critique targets the increasingly fashionable use of Multi-Group Confirmatory Factor Analysis (MGCFA), especially in its mainstream version. We document—both by mathematical proof and an empirical illustration—that non-invariance is an arithmetic artifact of group mean disparity on closed-ended scales. Precisely this artifactualness renders standard non-invariance markers inconclusive of measurement inequivalence under group-mean diversity. Using the Emancipative Values Index (EVI), OA-Section 3 of our original article demonstrates that such artifactual non-invariance is inconsequential for multi-item constructs’ cross-cultural performance in nomological terms, that is, explanatory power and predictive quality. Given these limitations of standard non-invariance diagnostics, we challenge the unquestioned authority of invariance tests as a tool of measurement validation. Our critique provoked two teams of authors to launch counter-critiques. We are grateful to the two comments because they give us a welcome opportunity to restate our position in greater clarity. Before addressing the comments one by one, we reformulate our key propositions more succinctly. Citation: Sociological Methods & Research PubDate: 2022-04-08T06:01:22Z DOI: 10.1177/00491241221091754
Authors:Han Zhang, Yilang Peng Abstract: Sociological Methods & Research, Ahead of Print. Automated image analysis has received increasing attention in social scientific research, yet existing scholarship has mostly covered the application of supervised learning to classify images into predefined categories. This study focuses on the task of unsupervised image clustering, which aims to automatically discover categories from unlabelled image data. We first review the steps to perform image clustering and then focus on one key challenge in this task—finding intermediate representations of images. We present several methods of extracting intermediate image representations, including the bag-of-visual-words model, self-supervised learning, and transfer learning (in particular, feature extraction with pretrained models). We compare these methods using various visual datasets, including images related to protests in China from Weibo, images about climate change on Instagram, and profile images of the Russian Internet Research Agency on Twitter. In addition, we propose a systematic way to interpret and validate clustering solutions. Results show that transfer learning significantly outperforms the other methods. The dataset used in the pretrained model critically determines what categories the algorithms can discover. Citation: Sociological Methods & Research PubDate: 2022-04-07T12:35:21Z DOI: 10.1177/00491241221082603
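After intermediate representations are extracted (e.g. activations from a pretrained network, which requires a deep learning library and is not shown here), the clustering step itself is standard. A self-contained k-means sketch on toy feature vectors (illustrative; the study's pipelines and datasets are far larger):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means on feature vectors (lists of floats)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[j].append(p)
        # Recompute centers as cluster means (keep old center if empty).
        centers = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centers[j]
            for j, c in enumerate(clusters)
        ]
    return centers, clusters
```

The paper's finding is that what goes into `points` matters most: features from transfer learning yield far better clusters than bag-of-visual-words features, and the pretraining dataset shapes which categories can emerge.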
Authors:Ronald Fischer, Johannes Alfons Karl, Johnny R. J. Fontaine, Ype H. Poortinga Abstract: Sociological Methods & Research, Ahead of Print. We comment on the argument by Welzel, Brunkert, Kruse and Inglehart (2021) that theoretically expected associations in nomological networks should take priority over invariance tests in cross-national research. We agree that narrow application of individual tools, such as multi-group confirmatory factor analysis with data that violates the assumptions of these techniques, can be misleading. However, findings that fit expectations of nomological networks may not be free of bias. We present supporting evidence of systematic bias affecting nomological network relationships from a) previous research on intelligence and response styles, b) two simulation studies, and c) data on the choice index from the World Value Survey (WVS). Our main point is that nomological network analysis by itself is insufficient to rule out systematic bias in data. We point out how a thoughtful exploration of sources of biases in cross-national data can contribute to stronger theory development. Citation: Sociological Methods & Research PubDate: 2022-04-06T03:21:25Z DOI: 10.1177/00491241221091756
Authors:David Kuehn, Ingo Rohlfing Abstract: Sociological Methods & Research, Ahead of Print. The debate about the characteristics and advantages of quantitative and qualitative methods is decades old. In their seminal monograph, A Tale of Two Cultures (2012, ATTC), Gary Goertz and James Mahoney argue that methods and research design practices for causal inference can be distinguished as two cultures that systematically differ from each other along 25 specific characteristics. ATTC’s stated goal is a description of empirical patterns in quantitative and qualitative research. Yet, it does not include a systematic empirical evaluation as to whether the 25 characteristics are relevant and valid descriptors of applied research. In this paper, we derive five observable implications from ATTC and test the implications against a stratified random sample of 90 qualitative and 90 quantitative articles published in six journals between 1990 and 2012. Our analysis provides little support for the two-cultures hypothesis. Quantitative methods are largely implemented as described in ATTC, whereas qualitative methods are much more diverse than ATTC suggests. While some practices do indeed conform to the qualitative culture, many others are implemented in a manner that ATTC characterizes as constitutive of the quantitative culture. We find very little evidence for ATTC's anchoring of qualitative research with set-theoretic approaches to empirical social science research. The set-theoretic template only applies to a fraction of the qualitative research that we reviewed, with the majority of qualitative work incorporating different method choices. Citation: Sociological Methods & Research PubDate: 2022-04-01T06:36:15Z DOI: 10.1177/00491241221082597
Authors:Jean Philippe Décieux Abstract: Sociological Methods & Research, Ahead of Print. The risk of multitasking is high in online surveys. However, knowledge on the effects of multitasking on answer quality is sparse and based on suboptimal approaches. Research reports inconclusive results concerning the consequences of multitasking on task performance. However, studies suggest that especially sequential-multitasking activities are expected to be critical. Therefore, this study focuses on sequential-on-device-multitasking activities (SODM) and its consequences for data quality. Based on probability-based data, this study aims to reveal the prevalence of SODM based on the JavaScript onBlur event, to reflect on its determinants, and to examine the consequences for data quality. Results show that SODM was detected for 25% of all respondents and that respondent attributes and the device used to answer the survey are related to SODM. Moreover, it becomes apparent that SODM is significantly correlated with data quality measures. Therefore, I propose SODM behavior as a new instrument for researching suboptimal response behavior. Citation: Sociological Methods & Research PubDate: 2022-03-07T04:36:24Z DOI: 10.1177/00491241221082593
Authors:Philip Dawid, Macartan Humphreys, Monica Musio Abstract: Sociological Methods & Research, Ahead of Print. Suppose X and Y are binary exposure and outcome variables, and we have full knowledge of the distribution of Y, given application of X. We are interested in assessing whether an outcome in some case is due to the exposure. This “probability of causation” is of interest in comparative historical analysis where scholars use process tracing approaches to learn about causes of outcomes for single units by observing events along a causal path. The probability of causation is typically not identified, but bounds can be placed on it. Here, we provide a full characterization of the bounds that can be achieved in the ideal case that X and Y are connected by a causal chain of complete mediators, and we know the probabilistic structure of the full chain. Our results are largely negative. We show that, even in these very favorable conditions, the gains from positive evidence on mediators are modest. Citation: Sociological Methods & Research PubDate: 2022-03-03T09:08:07Z DOI: 10.1177/00491241211036161
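For the baseline case with no mediator information, the classic bounds on the probability of causation (often attributed to Tian and Pearl) follow directly from the two experimental probabilities. A minimal sketch of these standard formulas; the paper's chain-of-mediators bounds generalize this:

```python
def prob_causation_bounds(p_y_given_x, p_y_given_notx):
    """Bounds on the probability that Y = 1 was caused by X = 1, for a case
    with X = 1 and Y = 1, given P(Y=1 | do(X=1)) and P(Y=1 | do(X=0))."""
    lower = max(0.0, (p_y_given_x - p_y_given_notx) / p_y_given_x)
    upper = min(1.0, (1 - p_y_given_notx) / p_y_given_x)
    return lower, upper
```

The width of this interval is what observing mediators along the causal chain can (only modestly, per the paper) narrow.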
Authors:Verónica Pérez Bentancur, Lucía Tiscornia Abstract: Sociological Methods & Research, Ahead of Print. Experimental designs in the social sciences have received increasing attention due to their power to produce causal inferences. Nevertheless, experimental research faces limitations, including limited external validity and unrealistic treatments. We propose combining qualitative fieldwork and experimental design iteratively—moving back-and-forth between elements of a research design—to overcome these limitations. To properly evaluate the strength of experiments, researchers need information about the context, data, and previous knowledge used to design the treatment. To support our argument, we analyze 338 pre-analysis plans submitted to the Evidence in Governance and Politics repository in 2019 and the design of a study on public opinion support for punitive policing practices in Montevideo, Uruguay. The paper provides insights about using qualitative fieldwork to enhance the external validity, transparency and replicability of experimental research, and a practical guide for researchers who want to incorporate iteration into their research designs. Citation: Sociological Methods & Research PubDate: 2022-03-03T08:21:13Z DOI: 10.1177/00491241221082595
Authors:Anders Vassenden, Marte Mangset Abstract: Sociological Methods & Research, Ahead of Print. In Goffman's terms, qualitative interviews are social encounters with their own realities. Hence, the ‘situational critique’ holds that interviews cannot produce knowledge about the world beyond these encounters, and that other methods, ethnography in particular, render lived life more accurately. The situational critique cannot be dismissed; yet interviewing remains an indispensable sociological tool. This paper demonstrates the value that situationalism holds for interviewing. We examine seemingly contradictory findings from interview studies of middle-class identity (cultural hierarchies and/or egalitarianism?). We then render these contradictions comprehensible by interpreting data excerpts through ‘methodological situationalism’: Goffman's theories of interaction order, ritual, and frontstage/backstage. In ‘situationalist interviewing,’ we suggest that sociologists be attentive to the ‘imagined audiences’ and ‘imagined communities’. These are key to identifying the situations, interaction orders, and cultural repertoires that lie beyond the interview encounter, but to which it refers. In sum, we argue for greater situational awareness among sociologists who must rely on interviews. We also discuss techniques and measures that can facilitate situational awareness. A promise of situational interviewing is that it helps us make sense of contradictions, ambiguities, and disagreements within and between interviews. Citation: Sociological Methods & Research PubDate: 2022-03-02T01:32:50Z DOI: 10.1177/00491241221082609
Authors:Luis Vila-Henninger, Claire Dupuy, Virginie Van Ingelgom, Mauro Caprioli, Ferdinand Teuber, Damien Pennetreau, Margherita Bussi, Cal Le Gall Abstract: Sociological Methods & Research, Ahead of Print. Qualitative secondary analysis has generated heated debate regarding the epistemology of qualitative research. We argue that shifting to an abductive approach provides a fruitful avenue for qualitative secondary analysts who are oriented towards theory-building. However, the concrete implementation of abduction remains underdeveloped—especially for coding. We address this key gap by outlining a set of tactics for abductive analysis that can be applied for qualitative analysis. Our approach applies Timmermans and Tavory's (Timmermans and Tavory 2012; Tavory and Timmermans 2014) three stages of abduction in three steps for qualitative (secondary) analysis: Generating an Abductive Codebook, Abductive Data Reduction through Code Equations, and In-Depth Abductive Qualitative Analysis. A key contribution of our article is the development of “code equations”—defined as the combination of codes to operationalize phenomena that span individual codes. Code equations are an important resource for abduction and other qualitative approaches that leverage qualitative data to build theory. Citation: Sociological Methods & Research PubDate: 2022-02-15T02:17:52Z DOI: 10.1177/00491241211067508
Authors:Alisa Remizova, Maksim Rudnev, Eldad Davidov Abstract: Sociological Methods & Research, Ahead of Print. Individual religiosity measures are used by researchers to describe and compare individuals and societies. However, the cross-cultural comparability of the measures has often been questioned but rarely empirically tested. In the current study, we examined the cross-national measurement invariance properties of generalized individual religiosity in the sixth wave of the World Values Survey. For the analysis, we used multiple group confirmatory factor analysis and alignment. Our results demonstrated that a theoretically driven measurement model was not invariant across all countries. We suggested four unidimensional measurement models and four overlapping groups of countries in which these measurement models demonstrated approximate invariance. The indicators that covered praying practices, importance of religion, and confidence in its institutions were more cross-nationally invariant than other indicators. Citation: Sociological Methods & Research PubDate: 2022-02-09T04:19:09Z DOI: 10.1177/00491241221077239
Authors:Qiong Wu, Liping Gu Abstract: Sociological Methods & Research, Ahead of Print. Family income questions in general purpose surveys are usually collected with either a single-question summary design or a multiple-question disaggregation design. It is unclear how estimates from the two approaches agree with each other. The current paper takes advantage of a large-scale survey that has collected family income with both methods. With data from 14,222 urban and rural families in the 2018 wave of the nationally representative China Family Panel Studies, we compare the two estimates, and further evaluate factors that might contribute to the discrepancy. We find that the two estimates are loosely matched in only a third of all families, and most of the matched families have a simple income structure. Although the mean of the multiple-question estimate is larger than that of the single-question estimate, the pattern is not monotonic. At lower percentiles up till the median, the single-question estimate is larger, whereas the multiple-question estimate is larger at higher percentiles. Larger family sizes and more income sources contribute to higher likelihood of inconsistent estimates from the two designs. Families with wage income as the main income source have the highest likelihood of giving consistent estimates compared with all other families. In contrast, families with agricultural income or property income as the main source tend to have very high probability of larger single-question estimates. Omission of certain income components and rounding can explain over half of the inconsistencies with higher multiple-question estimates and a quarter of the inconsistencies with higher single-question estimates. Citation: Sociological Methods & Research PubDate: 2022-02-08T11:26:36Z DOI: 10.1177/00491241221077238
Authors:Katharina Meitinger, Tanja Kunz Abstract: Sociological Methods & Research, Ahead of Print. Previous research reveals that the visual design of open-ended questions should match the response task so that respondents can infer the expected response format. Based on a web survey including specific probes in a list-style open-ended question format, we experimentally tested the effects of varying numbers of answer boxes on several indicators of response quality. Our results showed that using multiple small answer boxes instead of one large box had a positive impact on the number and variety of themes mentioned, as well as on the conciseness of responses to specific probes. We found no effect on the relevance of themes and the risk of item non-response. Based on our findings, we recommend using multiple small answer boxes instead of one large box to convey the expected response format and improve response quality in specific probes. This study makes a valuable contribution to the field of web probing, extends the concept of response quality in list-style open-ended questions, and provides a deeper understanding of how visual design features affect cognitive response processes in web surveys. Citation: Sociological Methods & Research PubDate: 2022-02-08T05:05:33Z DOI: 10.1177/00491241221077241
Authors:Aprile D. Benner, Shanting Chen, Celeste C. Fernandez, Mark D. Hayward Abstract: Sociological Methods & Research, Ahead of Print. Discrimination is associated with numerous psychological health outcomes over the life course. The nine-item Everyday Discrimination Scale (EDS) is one of the most widely used measures of discrimination; however, this nine-item measure may not be feasible in large-scale population health surveys where a shortened discrimination measure would be advantageous. The current study examined the construct validity of a combined two-item discrimination measure adapted from the EDS by Add Health (N = 14,839) as compared to the full nine-item EDS and a two-item EDS scale (parallel to the adapted combined measure) used in the National Survey of American Life (NSAL; N = 1,111) and National Latino and Asian American Study (NLAAS) studies (N = 1,055). Results identified convergence among the EDS scales, with high item-total correlations, convergent validity, and criterion validity for psychological outcomes, thus providing evidence for the construct validity of the two-item combined scale. Taken together, the findings provide support for using this reduced scale in studies where the full EDS scale is not available. Citation: Sociological Methods & Research PubDate: 2022-02-07T05:48:20Z DOI: 10.1177/00491241211067512
Authors:Natalja Menold, Vera Toepoel Abstract: Sociological Methods & Research, Ahead of Print. Research on mixed devices in web surveys is in its infancy. Using a randomized experiment, we investigated device effects (desktop PC, tablet, and mobile phone) for six response formats and four different numbers of scale points. N = 5,077 members of an online access panel participated in the experiment. An exact test of measurement invariance and Composite Reliability were investigated. The results showed full data comparability across devices and formats, with the exception of the continuous Visual Analog Scale (VAS), but limited comparability across different numbers of scale points. There were device effects on reliability in the interactions with formats and numbers of scale points: the VAS, mobile phones, and five-point scales consistently yielded lower reliability. We suggest technically less demanding implementations as well as a unified design for mixed-device surveys. Citation: Sociological Methods & Research PubDate: 2022-02-07T05:47:55Z DOI: 10.1177/00491241221077237
Authors:Fernando Rios-Avila, Michelle Lee Maroto Abstract: Sociological Methods & Research, Ahead of Print. Quantile regression (QR) provides an alternative to linear regression (LR) that allows for the estimation of relationships across the distribution of an outcome. However, as highlighted in recent research on the motherhood penalty across the wage distribution, different procedures for conditional and unconditional quantile regression (CQR, UQR) often result in divergent findings that are not always well understood. In light of such discrepancies, this paper reviews how to implement and interpret a range of LR, CQR, and UQR models with fixed effects. It also discusses the use of Quantile Treatment Effect (QTE) models as an alternative to overcome some of the limitations of CQR and UQR models. We then review how to interpret results in the presence of fixed effects based on a replication of Budig and Hodges’s work on the motherhood penalty using NLSY79 data. Citation: Sociological Methods & Research PubDate: 2022-02-01T10:28:19Z DOI: 10.1177/00491241211036165
Authors:Sarah K. Cowan, Michael Hout, Stuart Perrett Abstract: Sociological Methods & Research, Ahead of Print. Long-running surveys need a systematic way to reflect social change and to keep items relevant to respondents, especially when the items ask about controversial subjects and social change threatens their validity. We propose a protocol for updating measures that preserves content and construct validity. First, substantive experts articulate the current and anticipated future terms of debate. Then survey experts use this substantive input and their knowledge of existing measures to develop and pilot a large battery of new items. Third, researchers analyze the pilot data to select items for the survey of record. Finally, the items appear on the survey of record, available to the whole user community. Surveys of record have procedures for changing content that determine whether the new items appear just once or become part of the core. We provide the example of developing new abortion attitude measures in the General Social Survey. Current questions ask whether abortion should be legal under varying circumstances. The new abortion items ask about morality, access, state policy, and interpersonal dynamics. They improve content and construct validity and add new insights into Americans' abortion attitudes. Citation: Sociological Methods & Research PubDate: 2022-01-27T02:43:11Z DOI: 10.1177/00491241211043140
Authors:Tim Haesebrouck Abstract: Sociological Methods & Research, Ahead of Print. The field of qualitative comparative analysis (QCA) is witnessing a heated debate on which of QCA's main solution types should be at the center of substantive interpretation. This article argues that the different QCA solutions have complementary strengths. Therefore, researchers should interpret the three solution types in an integrated way, in order to get as much information as possible on the causal structure behind the phenomenon under investigation. The parsimonious solution is capable of identifying causally relevant conditions, the conservative solution of identifying contextually irrelevant conditions. In addition to conditions for which the data provide evidence that they are causally relevant or contextually irrelevant, there will be conditions for which the data suggest neither that they are causally relevant nor that they are contextually irrelevant. In line with the procedure for crafting the intermediate solution, it is possible to make clear for which of these ambiguous conditions it is not plausible that they are relevant in the context of the research. Citation: Sociological Methods & Research PubDate: 2022-01-25T09:39:21Z DOI: 10.1177/00491241211036153
Authors:Julia Meisters, Adrian Hoffmann, Jochen Musch Abstract: Sociological Methods & Research, Ahead of Print. Indirect questioning techniques such as the randomized response technique aim to control social desirability bias in surveys of sensitive topics. To improve upon previous indirect questioning techniques, we propose the new Cheating Detection Triangular Model. Similar to the Cheating Detection Model, it includes a mechanism for detecting instruction non-adherence, and similar to the Triangular Model, it uses simplified instructions to improve respondents' understanding of the procedure. Based on a comparison with the known prevalence of a sensitive attribute serving as external criterion, we report the first individual-level validation of the Cheating Detection Model, the Triangular Model, and the Cheating Detection Triangular Model. Moreover, the sensitivity and specificity of all models were assessed, as were the respondents' subjective evaluations of all questioning technique formats. Based on our results, the Cheating Detection Triangular Model appears to be the best choice among the investigated indirect questioning techniques. Citation: Sociological Methods & Research PubDate: 2022-01-19T11:49:08Z DOI: 10.1177/00491241211055764
Authors:Fabiola Reiber, Donna Bryce, Rolf Ulrich Abstract: Sociological Methods & Research, Ahead of Print. Randomized response techniques (RRTs) are applied to reduce response biases in self-report surveys on sensitive research questions (e.g., on socially undesirable characteristics). However, there is evidence that they cannot completely eliminate self-protecting response strategies. To address this problem, there are RRTs specifically designed to measure the extent of such strategies. Here we assessed the recently devised unrelated question model—cheating extension (UQMC) in a preregistered online survey on intimate partner violence (IPV) victimization and perpetration during the first contact restrictions imposed to contain the coronavirus disease 2019 pandemic in Germany in early 2020. The UQMC, which accounts for self-protecting responses, described the data better than its predecessor model, which assumes instruction adherence. The resulting three-month prevalence estimates were about 10%, and we found a high proportion of self-protecting responses in the group of female participants queried about IPV victimization. However, unexpected differences in prevalence estimates between the groups queried about victimization and perpetration highlight the difficulty of investigating sensitive research questions even with methods that guarantee anonymity, and the importance of interpreting the respective estimates with caution. Citation: Sociological Methods & Research PubDate: 2022-01-17T04:14:03Z DOI: 10.1177/00491241211043138
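The moment-based estimation logic shared by randomized response techniques can be illustrated with the classic Warner design, which is simpler than the Triangular and UQMC models discussed in the abstracts above but rests on the same principle: randomization hides each individual answer while the prevalence remains identifiable in aggregate. The simulation below is a hypothetical sketch, not any of the authors' designs.

```python
import numpy as np

def warner_estimate(yes_prop, p):
    """Moment estimator of the sensitive-trait prevalence pi under Warner's
    classic randomized response design: with probability p the respondent
    answers the sensitive question, with probability 1 - p its negation,
    so P(yes) = p*pi + (1 - p)*(1 - pi)."""
    if p == 0.5:
        raise ValueError("p = 0.5 makes pi unidentifiable")
    return (yes_prop - (1 - p)) / (2 * p - 1)

# Simulate: true prevalence 10%, randomization probability p = 0.7
rng = np.random.default_rng(1)
n, pi_true, p = 100_000, 0.10, 0.7
trait = rng.random(n) < pi_true          # who actually has the trait
ask_sensitive = rng.random(n) < p        # outcome of each respondent's spinner
answers = np.where(ask_sensitive, trait, ~trait)  # truthful answers

pi_hat = warner_estimate(answers.mean(), p)
print(f"estimated prevalence: {pi_hat:.3f}")
```

No single "yes" reveals anything about an individual respondent, yet inverting the known mixture recovers the population prevalence; the models studied above extend this idea with mechanisms for detecting respondents who do not follow the instructions.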
Authors:Ian Lundberg Abstract: Sociological Methods & Research, Ahead of Print. Disparities across race, gender, and class are important targets of descriptive research. But rather than only describe disparities, research would ideally inform interventions to close those gaps. The gap-closing estimand quantifies how much a gap (e.g., incomes by race) would close if we intervened to equalize a treatment (e.g., access to college). Drawing on causal decomposition analyses, this type of research question yields several benefits. First, gap-closing estimands place categories like race in a causal framework without making them play the role of the treatment (which is philosophically fraught for non-manipulable variables). Second, gap-closing estimands empower researchers to study disparities using new statistical and machine learning estimators designed for causal effects. Third, gap-closing estimands can directly inform policy: if we sampled from the population and actually changed treatment assignments, how much could we close gaps in outcomes? I provide open-source software (the R package gapclosing) to support these methods. Citation: Sociological Methods & Research PubDate: 2022-01-13T08:55:12Z DOI: 10.1177/00491241211055769
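The gap-closing idea can be sketched with a simple plug-in estimate on simulated data. This is an illustrative Python toy, not the `gapclosing` R package: the data-generating process is hypothetical, and the sketch assumes treatment is unconfounded within groups, so the stratified treated-group mean identifies the mean outcome under an intervention assigning treatment to everyone.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
group = rng.integers(0, 2, n)  # 0 = advantaged, 1 = disadvantaged
# Treatment (e.g., college access) is less common in the disadvantaged group
treat = (rng.random(n) < np.where(group == 0, 0.6, 0.3)).astype(int)
# Outcome depends on both group and treatment (hypothetical process)
y = 10 + 5 * treat - 2 * group + rng.normal(0, 1, n)

# Observed disparity in mean outcomes between the two groups
observed_gap = y[group == 0].mean() - y[group == 1].mean()

def mean_under_treatment(g):
    """Plug-in estimate of the mean outcome in group g if everyone were
    treated: the treated-group mean, valid here because treatment is
    randomly assigned within group in this simulation."""
    return y[(group == g) & (treat == 1)].mean()

# Gap that would remain after equalizing treatment at T = 1
counterfactual_gap = mean_under_treatment(0) - mean_under_treatment(1)
print(f"observed gap: {observed_gap:.2f}, "
      f"gap if all treated: {counterfactual_gap:.2f}")
```

Because part of the observed disparity flows through the unequal treatment rates, equalizing treatment closes some, but not all, of the gap: the remainder reflects the direct group difference built into the simulation.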
Authors:Michael Schultz Abstract: Sociological Methods & Research, Ahead of Print. This paper presents a model of recurrent multinomial sequences. Although there is a considerable literature on modeling autocorrelation in numerical data and in sequences of categorical outcomes, there is currently no systematic method for modeling patterns of recurrence in categorical sequences. This paper develops a means of discovering recurrent patterns by employing a more restrictive Markov assumption. The resulting model, which I call the recurrent multinomial model, provides a parsimonious representation of recurrent sequences, enabling the investigation of recurrences on longer time scales than existing models. The utility of recurrent multinomial models is demonstrated by applying them to the case of conversational turn-taking in meetings of the Federal Open Market Committee (FOMC). The analyses are able to discover norms around turn-reclaiming, participation, and suppression and to evaluate how these norms vary over the course of the meeting. Citation: Sociological Methods & Research PubDate: 2022-01-11T10:47:10Z DOI: 10.1177/00491241211067513
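The first-order Markov building block underlying such categorical sequence models can be sketched as a maximum-likelihood transition-matrix estimate from a single sequence. This is a generic illustration, not the recurrent multinomial model itself; the turn-taking sequence below is hypothetical.

```python
import numpy as np

def transition_matrix(seq, states):
    """Maximum-likelihood estimate of a first-order Markov transition
    matrix from one categorical sequence (rows: from-state, cols: to-state)."""
    idx = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for a, b in zip(seq, seq[1:]):
        counts[idx[a], idx[b]] += 1
    # Row-normalize; rows for states never left stay all-zero
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Hypothetical turn-taking sequence: which speaker holds the floor each turn
seq = list("ABABBACABBA")
P = transition_matrix(seq, ["A", "B", "C"])
print(P)
```

Each row of `P` is the estimated distribution over who speaks next given the current speaker; a recurrence-oriented model restricts attention to patterns of return to previously visited states, allowing longer time scales than this one-step matrix captures.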
Authors:Soojin Park, Xu Qin, Chioun Lee Abstract: Sociological Methods & Research, Ahead of Print. In the field of disparities research, there has been growing interest in developing a counterfactual-based decomposition analysis to identify underlying mediating mechanisms that help reduce disparities in populations. Despite rapid development in the area, most prior studies have been limited to regression-based methods, undermining the possibility of addressing complex models with multiple mediators and/or heterogeneous effects. We propose a novel estimation method that effectively addresses complex models. Moreover, we develop a sensitivity analysis for possible violations of an identification assumption. The proposed method and sensitivity analysis are demonstrated with data from the Midlife Development in the US study to investigate the degree to which disparities in cardiovascular health at the intersection of race and gender would be reduced if the distributions of education and perceived discrimination were the same across intersectional groups. Citation: Sociological Methods & Research PubDate: 2022-01-11T03:56:06Z DOI: 10.1177/00491241211067516