Sociological Methods & Research
Journal Prestige (SJR): 2.35
Citation Impact (citeScore): 3
Number of Followers: 45  
 
  Hybrid journal (It can contain Open Access articles)
ISSN (Print) 0049-1241 - ISSN (Online) 1552-8294
Published by Sage Publications
  • Comparing Egocentric and Sociocentric Centrality Measures in Directed
           Networks

      Authors: Weihua An
      Abstract: Sociological Methods & Research, Ahead of Print.
      Egocentric networks represent a popular research design for network research. However, to what extent and under what conditions egocentric centrality measures can serve as reasonable substitutes for their sociocentric counterparts are important questions to study. The answers to these questions are uncertain simply because of the large variety of networks. Hence, this paper aims to provide exploratory answers to these questions by analyzing both empirical and simulated data. Through analyses of various empirical networks (including some classic albeit small ones), this paper shows that egocentric betweenness approximates sociocentric betweenness quite well (the correlation is high across almost all the networks being examined) while egocentric closeness approximates sociocentric closeness only reasonably well (the correlation is a bit lower on average with a larger variance across networks). Simulations also confirm this finding. Analyses further show that egocentric approximations of betweenness and closeness seem to work well in different types of networks (as featured by network size, density, centralization, reciprocity, transitivity, and geodistance). Lastly, the paper briefly presents three ideas to help improve egocentric approximations of centrality measures.
      Citation: Sociological Methods & Research
      PubDate: 2022-09-22T05:05:56Z
      DOI: 10.1177/00491241221122606
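      A minimal sketch of the comparison described above, using networkx on a toy directed network (not the author's code or data); egocentric betweenness is operationalized here as the ego's betweenness within its own ego network, which may differ from the article's exact definition:
      import networkx as nx
      import numpy as np

      G = nx.gnp_random_graph(200, 0.03, directed=True, seed=42)  # toy directed network

      # Sociocentric betweenness: computed on the full network.
      socio = nx.betweenness_centrality(G)

      # Egocentric betweenness: the ego's betweenness within its ego network
      # (ego plus direct neighbors), one common operationalization.
      ego_btw = {}
      for n in G.nodes():
          ego_net = nx.ego_graph(G, n, radius=1, undirected=True)
          ego_btw[n] = nx.betweenness_centrality(ego_net)[n]

      nodes = list(G.nodes())
      r = np.corrcoef([socio[v] for v in nodes], [ego_btw[v] for v in nodes])[0, 1]
      print(f"egocentric vs. sociocentric betweenness correlation: {r:.3f}")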
       
  • A Sample Size Formula for Network Scale-up Studies

      Authors: Nathaniel Josephs, Dennis M. Feehan, Forrest W. Crawford
      Abstract: Sociological Methods & Research, Ahead of Print.
      The network scale-up method (NSUM) is a survey-based method for estimating the number of individuals in a hidden or hard-to-reach subgroup of a general population. In NSUM surveys, sampled individuals report how many others they know in the subpopulation of interest (e.g. “How many sex workers do you know?”) and how many others they know in subpopulations of the general population (e.g. “How many bus drivers do you know?”). NSUM is widely used to estimate the size of important sociological and epidemiological risk groups, including men who have sex with men, sex workers, HIV+ individuals, and drug users. Unlike several other methods for population size estimation, NSUM requires only a single random sample and the estimator has a conveniently simple form. Despite its popularity, there are no published guidelines for the minimum sample size calculation to achieve a desired statistical precision. Here, we provide a sample size formula that can be employed in any NSUM survey. We show analytically and by simulation that the sample size controls error at the nominal rate and is robust to some forms of network model mis-specification. We apply this methodology to study the minimum sample size and relative error properties of several published NSUM surveys.
      Citation: Sociological Methods & Research
      PubDate: 2022-09-14T05:18:57Z
      DOI: 10.1177/00491241221122576
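      As background to the abstract above, a hedged sketch of the basic network scale-up point estimator (a Killworth-style ratio estimator) on made-up numbers; the article's sample size formula itself is not reproduced here:
      import numpy as np

      N = 1_000_000                                      # general population size (assumed known)
      known_sizes = np.array([20_000, 35_000, 50_000])   # sizes of known subpopulations

      # Each row: one respondent's reported counts for the known subpopulations;
      # reports_hidden holds the same respondent's count for the hidden group.
      reports_known = np.array([[2, 3, 5], [0, 1, 2], [1, 4, 6], [3, 2, 4]])
      reports_hidden = np.array([1, 0, 2, 1])

      # Estimate each respondent's personal network size (degree) from known groups.
      degrees = N * reports_known.sum(axis=1) / known_sizes.sum()

      # Scale-up estimate of the hidden subpopulation size.
      N_hidden = N * reports_hidden.sum() / degrees.sum()
      print(f"Estimated hidden subpopulation size: {N_hidden:,.0f}")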
       
  • The Extended Computational Case Method: A Framework for Research Design

      Authors: Juan Pablo Pardo-Guerra, Prithviraj Pahwa
      Abstract: Sociological Methods & Research, Ahead of Print.
      This paper considers the adoption of computational techniques within research designs modeled after the extended case method. Echoing calls to augment the power of contemporary researchers through the adoption of computational text analysis methods, we offer a framework for thinking about how such techniques can be integrated into quasi-ethnographic workflows to address broad, structural sociological claims. We focus, in particular, on how this adoption of novel forms of evidence impacts corpus design and interpretation (which we tie to matters of casing), theoretical elaboration (which we associate with moving empirical claims across scales and empirical domains), and verification (which we see as a process of reflexive scaffolding of theoretical claims). We provide an example of the use of this framework through a study of the marketization of social scientific knowledge in the United Kingdom.
      Citation: Sociological Methods & Research
      PubDate: 2022-09-09T11:56:45Z
      DOI: 10.1177/00491241221122616
       
  • From Text Signals to Simulations: A Review and Complement to Text as Data
           by Grimmer, Roberts & Stewart (PUP 2022)

      Authors: James Evans
      Abstract: Sociological Methods & Research, Ahead of Print.
      Text as Data represents a major advance for teaching text analysis in the social sciences, digital humanities and data science by providing an integrated framework for how to conceptualize and deploy natural language processing techniques to enrich descriptive and causal analyses of social life in and from text. Here I review achievements of the book and highlight complementary paths not taken, including discussion of recent computational techniques like transformers, which have come to dominate automated language understanding and are just beginning to find their way into the careful research designs showcased in the book. These new methods not only highlight text as a signal from society, but textual models as simulations of society, which could fuel future advances in causal inference and experimentation. Text as Data's focus on textual discovery, measurement and inference points us toward this new frontier, cautioning us not to ignore, but build upon social scientific interpretation and theory.
      Citation: Sociological Methods & Research
      PubDate: 2022-08-30T07:06:51Z
      DOI: 10.1177/00491241221123086
       
  • A Bayesian Semi-Parametric Approach for Modeling Memory Decay in Dynamic
           Social Networks

      Authors: Giuseppe Arena, Joris Mulder, Roger Th. A.J. Leenders
      Abstract: Sociological Methods & Research, Ahead of Print.
      In relational event networks, the tendency for actors to interact with each other depends greatly on the past interactions between the actors in a social network. Both the volume of past interactions and the time that has elapsed since the past interactions affect the actors’ decision-making to interact with other actors in the network. Events that occurred recently may have a stronger influence on current interaction behavior than past events that occurred a long time ago–a phenomenon known as “memory decay”. Previous studies either predefined a short-run and long-run memory or fixed a parametric exponential memory decay using a predefined half-life period. In real-life relational event networks, however, it is generally unknown how the influence of past events fades as time goes by. For this reason, it is not advisable to fix memory decay in an ad-hoc manner; instead, we should learn the shape of memory decay from the observed data. In this paper, a novel semi-parametric approach based on Bayesian Model Averaging is proposed for learning the shape of the memory decay without requiring any parametric assumptions. The method is applied to relational event history data among socio-political actors in India and a comparison with other relational event models based on predefined memory decays is provided.
      Citation: Sociological Methods & Research
      PubDate: 2022-08-16T05:36:56Z
      DOI: 10.1177/00491241221113875
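      For readers unfamiliar with the baseline the authors move beyond, a tiny illustration of a fixed parametric exponential memory decay with a predefined half-life (all numbers hypothetical); the article's contribution is to learn the decay shape from the data instead:
      import numpy as np

      half_life = 30.0                             # assumed half-life, in days
      elapsed = np.array([1.0, 15.0, 30.0, 90.0])  # days since each past event

      weights = np.exp(-np.log(2) * elapsed / half_life)
      print(weights)  # an event exactly one half-life old gets weight 0.5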
       
  • The Design and Optimality of Survey Counts: A Unified Framework Via the
           Fisher Information Maximizer

      Authors: Xin Guo, Qiang Fu
      Abstract: Sociological Methods & Research, Ahead of Print.
      Grouped and right-censored (GRC) counts have been used in a wide range of attitudinal and behavioural surveys, yet they cannot be readily analyzed or assessed by conventional statistical models. This study develops a unified regression framework for the design and optimality of GRC counts in surveys. To process infinitely many grouping schemes for the optimum design, we propose a new two-stage algorithm, the Fisher Information Maximizer (FIM), which utilizes estimates from generalized linear models to find a globally optimal grouping scheme among all possible schemes with a given number of groups. After we define, decompose, and calculate different types of regressor-specific design errors, our analyses from both simulation and empirical examples suggest that: 1) the optimum design of GRC counts is able to reduce the grouping error to zero, 2) the performance of modified Poisson estimators using GRC counts can be comparable to that of Poisson regression, and 3) the optimum design is usually able to achieve the same estimation efficiency with a smaller sample size.
      Citation: Sociological Methods & Research
      PubDate: 2022-08-08T07:28:38Z
      DOI: 10.1177/00491241221113877
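      A simplified sketch (not the FIM algorithm) of the kind of model the design question above concerns: the likelihood of a Poisson regression observed only as grouped and right-censored counts. The grouping scheme and data below are hypothetical:
      import numpy as np
      from scipy.stats import poisson
      from scipy.optimize import minimize

      # Hypothetical GRC scheme: responses recorded as 0, 1, 2, 3-5, or 6+.
      groups = [(0, 0), (1, 1), (2, 2), (3, 5), (6, np.inf)]   # (lower, upper) per group

      rng = np.random.default_rng(0)
      X = np.column_stack([np.ones(500), rng.normal(size=500)])
      lam_true = np.exp(0.5 + 0.8 * X[:, 1])
      y_group = np.digitize(rng.poisson(lam_true), [1, 2, 3, 6])  # group index 0..4

      def neg_loglik(beta):
          lam = np.exp(X @ beta)
          ll = 0.0
          for g, (lo, hi) in enumerate(groups):
              mask = y_group == g
              upper = 1.0 if np.isinf(hi) else poisson.cdf(hi, lam[mask])
              lower = poisson.cdf(lo - 1, lam[mask]) if lo > 0 else 0.0
              ll += np.sum(np.log(np.clip(upper - lower, 1e-12, None)))
          return -ll

      fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
      print(fit.x)   # estimates should land near the true values (0.5, 0.8)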
       
  • A Comparison of Three Popular Methods for Handling Missing Data:
           Complete-Case Analysis, Inverse Probability Weighting, and Multiple
           Imputation

      Authors: Roderick J. Little, James R. Carpenter, Katherine J. Lee
      Abstract: Sociological Methods & Research, Ahead of Print.
      Missing data are a pervasive problem in data analysis. Three common methods for addressing the problem are (a) complete-case analysis, where only units that are complete on the variables in an analysis are included; (b) weighting, where the complete cases are weighted by the inverse of an estimate of the probability of being complete; and (c) multiple imputation (MI), where missing values of the variables in the analysis are imputed as draws from their predictive distribution under an implicit or explicit statistical model, the imputation process is repeated to create multiple filled-in data sets, and analysis is carried out using simple MI combining rules. This article provides a non-technical discussion of the strengths and weaknesses of these approaches, and when each of the methods might be adopted over the others. The methods are illustrated on data from the Youth Cohort (Time) Series (YCS) for England, Wales and Scotland, 1984–2002.
      Citation: Sociological Methods & Research
      PubDate: 2022-08-05T07:15:18Z
      DOI: 10.1177/00491241221113873
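      A schematic illustration of the three approaches on simulated data with y missing at random given x; the variable names and model choices are illustrative, and the statsmodels MICE call is one possible implementation rather than the authors' own:
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf
      from statsmodels.imputation import mice

      rng = np.random.default_rng(1)
      n = 2000
      x = rng.normal(size=n)
      y = 1.0 + 2.0 * x + rng.normal(size=n)
      observed = rng.random(n) < 1 / (1 + np.exp(-x))       # completeness depends on x
      df = pd.DataFrame({"x": x, "y": np.where(observed, y, np.nan),
                         "observed": observed.astype(int)})

      # (a) Complete-case analysis: keep only fully observed units.
      cc = smf.ols("y ~ x", data=df.dropna()).fit()

      # (b) Inverse probability weighting: weight complete cases by 1 / P(complete | x).
      p_complete = smf.logit("observed ~ x", data=df).fit(disp=0).predict(df)
      cc_df = df.dropna().copy()
      cc_df["w"] = 1.0 / p_complete[cc_df.index]
      ipw = smf.wls("y ~ x", data=cc_df, weights=cc_df["w"]).fit()

      # (c) Multiple imputation by chained equations, pooled with the MI combining rules.
      mi = mice.MICE("y ~ x", sm.OLS, mice.MICEData(df[["x", "y"]])).fit(10, 10)

      print(cc.params["x"], ipw.params["x"])
      print(mi.summary())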
       
  • Attendance, Completion, and Heterogeneous Returns to College: A Causal
           Mediation Approach

      Authors: Xiang Zhou
      Abstract: Sociological Methods & Research, Ahead of Print.
      A growing body of social science research investigates whether the economic payoff to a college education is heterogeneous — in particular, whether disadvantaged youth can benefit more from attending and completing college relative to their more advantaged peers. Scholars, however, have employed different analytical strategies and reported mixed findings. To shed light on this literature, I propose a causal mediation approach to conceptualizing, evaluating, and unpacking the causal effects of college on earnings. By decomposing the total effect of attending a four-year college into several direct and indirect components, this approach not only clarifies the mechanisms through which college attendance boosts earnings, but illuminates the ways in which the postsecondary system may be both an equalizer and a stratifier. The total effect of college attendance, its direct and indirect components, and their heterogeneity across different subpopulations are all identified under the assumption of sequential ignorability. I introduce a debiased machine learning (DML) method for estimating all quantities of interest, along with a set of bias formulas for sensitivity analysis. I illustrate the proposed framework and methodology using data from the National Longitudinal Survey of Youth, 1997 cohort.
      Citation: Sociological Methods & Research
      PubDate: 2022-08-01T08:01:51Z
      DOI: 10.1177/00491241221113876
       
  • Assessing the Impact of the Great Recession on the Transition to Adulthood

      Authors: Guanglei Hong, Ha-Joon Chung
      Abstract: Sociological Methods & Research, Ahead of Print.
      The impact of a major historical event on child and youth development has been of great interest in the study of the life course. This study is focused on assessing the causal effect of the Great Recession on youth disconnection from school and work. Building on the insights offered by the age-period-cohort research, econometric methods, and developmental psychology, we innovatively develop a causal inference strategy that takes advantage of the multiple successive birth cohorts in the National Longitudinal Study of Youth 1997. The causal effect of the Great Recession is defined in terms of counterfactual developmental trajectories and can be identified under the assumption of short-term stable differences between the birth cohorts in the absence of the Great Recession. A meta-analysis aggregates the estimated effects over six between-cohort comparisons. Furthermore, we conduct a sensitivity analysis to assess the potential consequences if the identification assumption is violated. The findings contribute new evidence on how precipitous and pervasive economic hardship may disrupt youth development by gender and class of origin.
      Citation: Sociological Methods & Research
      PubDate: 2022-07-27T06:37:23Z
      DOI: 10.1177/00491241221113871
       
  • A Crash Course in Good and Bad Controls

      Authors: Carlos Cinelli, Andrew Forney, Judea Pearl
      Abstract: Sociological Methods & Research, Ahead of Print.
      Many students of statistics and econometrics express frustration with the way a problem known as “bad control” is treated in the traditional literature. The issue arises when the addition of a variable to a regression equation produces an unintended discrepancy between the regression coefficient and the effect that the coefficient is intended to represent. Avoiding such discrepancies presents a challenge to all analysts in the data intensive sciences. This note describes graphical tools for understanding, visualizing, and resolving the problem through a series of illustrative examples. By making this “crash course” accessible to instructors and practitioners, we hope to avail these tools to a broader community of scientists concerned with the causal interpretation of regression models.
      Citation: Sociological Methods & Research
      PubDate: 2022-05-20T08:30:25Z
      DOI: 10.1177/00491241221099552
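      A small simulation in the spirit of the note (not taken from it): conditioning on a collider, one canonical "bad control", pulls the regression coefficient away from the true causal effect of 2:
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 100_000
      x = rng.normal(size=n)               # treatment
      y = 2 * x + rng.normal(size=n)       # outcome; the true effect of x is 2
      z = x + y + rng.normal(size=n)       # collider: caused by both x and y

      good = sm.OLS(y, sm.add_constant(x)).fit()
      bad = sm.OLS(y, sm.add_constant(np.column_stack([x, z]))).fit()
      print("without the collider:", round(good.params[1], 2))   # close to 2
      print("with the bad control:", round(bad.params[1], 2))    # biased away from 2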
       
  • Improving Estimates Accuracy of Voter Transitions. Two New Algorithms for
           Ecological Inference Based on Linear Programming

      Authors: Jose M. Pavía, Rafael Romero
      Abstract: Sociological Methods & Research, Ahead of Print.
      The estimation of RxC ecological inference contingency tables from aggregate data is one of the most salient and challenging problems in the field of quantitative social sciences, with major solutions proposed from both the ecological regression and the mathematical programming frameworks. In recent decades, there has been a drive to find solutions stemming from the former, with the latter being less active. From the mathematical programming framework, this paper suggests a new direction for tackling this problem. For the first time in the literature, a procedure based on linear programming is proposed to attain estimates of local contingency tables. Based on this and the homogeneity hypothesis, we suggest two new ecological inference algorithms. These two new algorithms represent an important step forward in the ecological inference mathematical programming literature. In addition to generating estimates for local ecological inference contingency tables and amending the tendency to produce extreme transfer probability estimates previously observed in other mathematical programming procedures, these two new algorithms prove to be quite competitive and more accurate than the current linear programming baseline algorithm. Their accuracy is assessed using a unique dataset with almost 500 elections, where the real transfer matrices are known, and their sensitivity to assumptions and limitations are gauged through an extensive simulation study. The new algorithms place the linear programming approach once again in a prominent position in the ecological inference toolkit. Interested readers can use these new algorithms easily with the aid of the R package lphom.
      Citation: Sociological Methods & Research
      PubDate: 2022-05-16T07:43:06Z
      DOI: 10.1177/00491241221092725
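      To make the linear-programming idea concrete, a simplified formulation (illustrative only, not the lphom algorithms evaluated in the article): estimate a row-stochastic transfer matrix P that minimizes the total absolute error of y_i ≈ x_i P across units, under the homogeneity assumption:
      import numpy as np
      from scipy.optimize import linprog

      def estimate_transfer_matrix(X, Y):
          # X: unit-level shares in election 1 (units x R); Y: shares in election 2 (units x C).
          n_units, R = X.shape
          C = Y.shape[1]
          n_p, n_e = R * C, n_units * C
          # Variable order: vec(P) row-major, then e_plus, then e_minus (absolute-error slacks).
          cost = np.concatenate([np.zeros(n_p), np.ones(2 * n_e)])

          A_eq = np.zeros((n_units * C + R, n_p + 2 * n_e))
          b_eq = np.zeros(n_units * C + R)
          for i in range(n_units):
              for c in range(C):
                  row = i * C + c
                  for r in range(R):
                      A_eq[row, r * C + c] = X[i, r]      # (x_i P)_c ...
                  A_eq[row, n_p + row] = -1.0             # ... minus e_plus
                  A_eq[row, n_p + n_e + row] = 1.0        # ... plus e_minus
                  b_eq[row] = Y[i, c]                     # equals y_ic
          for r in range(R):                              # rows of P sum to one
              A_eq[n_units * C + r, r * C:(r + 1) * C] = 1.0
              b_eq[n_units * C + r] = 1.0

          bounds = [(0, 1)] * n_p + [(0, None)] * (2 * n_e)
          res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
          return res.x[:n_p].reshape(R, C)

      # Toy example: 3 units, 2 origin options, 2 destination options (shares sum to 1).
      X = np.array([[0.6, 0.4], [0.3, 0.7], [0.5, 0.5]])
      Y = np.array([[0.55, 0.45], [0.35, 0.65], [0.48, 0.52]])
      print(estimate_transfer_matrix(X, Y))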
       
  • A Language-Based Method for Assessing Symbolic Boundary Maintenance
           between Social Groups

      Authors: Anjali M. Bhatt, Amir Goldberg, Sameer B. Srivastava
      Abstract: Sociological Methods & Research, Ahead of Print.
      When the social boundaries between groups are breached, the tendency for people to erect and maintain symbolic boundaries intensifies. Drawing on extant perspectives on boundary maintenance, we distinguish between two strategies that people pursue in maintaining symbolic boundaries: boundary retention—entrenching themselves in pre-existing symbolic distinctions—and boundary reformation—innovating new forms of symbolic distinction. Traditional approaches to measuring symbolic boundaries—interviews, participant-observation, and self-reports—are ill-suited to detecting fine-grained variation in boundary maintenance. To overcome this limitation, we use the tools of computational linguistics and machine learning to develop a novel approach to measuring symbolic boundaries based on interactional language use between group members before and after they encounter one another. We construct measures of boundary retention and reformation using random forest classifiers that quantify group differences based on pre- and post-contact linguistic styles. We demonstrate this method's utility by applying it to a corpus of email communications from a mid-sized financial services firm that acquired and integrated two smaller firms. We find that: (a) the persistence of symbolic boundaries can be detected for up to 18 months after a merger; (b) acquired employees exhibit more boundary reformation and less boundary retention than their counterparts from the acquiring firm; and (c) individuals engage in more boundary retention, but not reformation, when their local work environment is more densely populated by ingroup members. We discuss implications of these findings for the study of culture in a wide range of intergroup contexts and for computational approaches to measuring culture.
      Citation: Sociological Methods & Research
      PubDate: 2022-05-13T07:05:12Z
      DOI: 10.1177/00491241221099555
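      An illustrative sketch (not the authors' pipeline) of the core measurement move: train a random forest to distinguish the two groups' language and read out-of-sample classifiability as a proxy for the strength of the symbolic boundary. The texts and labels below are toy data:
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.model_selection import cross_val_score

      texts = ["quarterly targets and client onboarding", "synergy review with the PMO",
               "risk models and the trading desk", "derivatives book reconciliation"]
      group = [0, 0, 1, 1]   # 0 = acquiring firm, 1 = acquired firm (hypothetical labels)

      features = TfidfVectorizer(min_df=1).fit_transform(texts)
      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      scores = cross_val_score(clf, features, group, cv=2)
      print("Mean cross-validated accuracy (boundary-strength proxy):", scores.mean())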
       
  • Who Does What to Whom? Making Text Parsers Work for Sociological
           Inquiry

      Authors: Oscar Stuhler
      Abstract: Sociological Methods & Research, Ahead of Print.
      Over the past decade, sociologists have become increasingly interested in the formal study of semantic relations within text. Most contemporary studies focus either on mapping concept co-occurrences or on measuring semantic associations via word embeddings. Although conducive to many research goals, these approaches share an important limitation: they abstract away what one can call the event structure of texts, that is, the narrative action that takes place in them. I aim to overcome this limitation by introducing a new framework for extracting semantically rich relations from text that involves three components. First, a semantic grammar structured around textual entities that distinguishes six motif classes: actions of an entity, treatments of an entity, agents acting upon an entity, patients acted upon by an entity, characterizations of an entity, and possessions of an entity; second, a comprehensive set of mapping rules, which make it possible to recover motifs from predictions of dependency parsers; third, an R package that allows researchers to extract motifs from their own texts. The framework is demonstrated in empirical analyses on gendered interaction in novels and constructions of collective identity by U.S. presidential candidates.
      Citation: Sociological Methods & Research
      PubDate: 2022-05-13T03:12:42Z
      DOI: 10.1177/00491241221099551
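      A rough sketch of recovering two of the six motif classes (actions of an entity and patients acted upon by an entity) from a dependency parse with spaCy; it assumes the small English model is installed and does not reproduce the article's full mapping rules or its R package:
      import spacy

      nlp = spacy.load("en_core_web_sm")
      doc = nlp("The senator praised the workers and criticized the banks.")

      for token in doc:
          if token.pos_ == "VERB":
              agents = [c.text for c in token.children if c.dep_ == "nsubj"]
              patients = [c.text for c in token.children if c.dep_ in ("dobj", "obj")]
              for a in agents:
                  print(f"action of entity:   {a} -> {token.lemma_}")
                  for p in patients:
                      print(f"patient acted upon: {a} -> {token.lemma_} -> {p}")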
       
  • The Additional Effects of Adaptive Survey Design Beyond Post-Survey
           Adjustment: An Experimental Evaluation

      Authors: Shiyu Zhang, James Wagner
      Abstract: Sociological Methods & Research, Ahead of Print.
      Adaptive survey design refers to using targeted procedures to recruit different sampled cases. This technique strives to reduce bias and variance of survey estimates by trying to recruit a larger and more balanced set of respondents. However, it is not well understood how adaptive design can improve data and survey estimates beyond the well-established post-survey adjustment. This paper reports the results of an experiment that evaluated the additional effect of adaptive design beyond post-survey adjustments. The experiment was conducted in the Detroit Metro Area Communities Study in 2021. We evaluated the adaptive design on five outcomes: 1) response rates, 2) demographic composition of respondents, 3) bias and variance of key survey estimates, 4) changes in significant results of regression models, and 5) costs. The most significant benefit of the adaptive design was its ability to generate more efficient survey estimates with smaller variances and smaller design effects.
      Citation: Sociological Methods & Research
      PubDate: 2022-05-13T03:12:18Z
      DOI: 10.1177/00491241221099550
       
  • Promise Into Practice: Application of Computer Vision in Empirical
           Research on Social Distancing

      Authors: Wim Bernasco, Evelien M. Hoeben, Dennis Koelma, Lasse Suonperä Liebst, Josephine Thomas, Joska Appelman, Cees G. M. Snoek, Marie Rosenkrantz Lindegaard
      Abstract: Sociological Methods & Research, Ahead of Print.
      Social scientists increasingly use video data, but large-scale analysis of its content is often constrained by scarce manual coding resources. Upscaling may be possible with the application of automated coding procedures, which are being developed in the field of computer vision. Here, we introduce computer vision to social scientists, review the state-of-the-art in relevant subfields, and provide a working example of how computer vision can be applied in empirical sociological work. Our application involves defining a ground truth by human coders, developing an algorithm for automated coding, testing the performance of the algorithm against the ground truth, and running the algorithm on a large-scale dataset of CCTV images. The working example concerns monitoring social distancing behavior in public space over more than a year of the COVID-19 pandemic. Finally, we discuss prospects for the use of computer vision in empirical social science research and address technical and ethical challenges.
      Citation: Sociological Methods & Research
      PubDate: 2022-05-09T03:35:10Z
      DOI: 10.1177/00491241221099554
       
  • Why Measurement Invariance is Important in Comparative Research. A
           Response to Welzel et al. (2021)

      Authors: Bart Meuleman, Tomasz Żółtak, Artur Pokropek, Eldad Davidov, Bengt Muthén, Daniel L. Oberski, Jaak Billiet, Peter Schmidt
      Abstract: Sociological Methods & Research, Ahead of Print.
      Welzel et al. (2021) claim that non-invariance of instruments is inconclusive and inconsequential in the field of cross-cultural value measurement. In this response, we contend that several key arguments on which Welzel et al. (2021) base their critique of invariance testing are conceptually and statistically incorrect. First, Welzel et al. (2021) claim that value measurement follows a formative rather than reflective logic. Yet they do not provide sufficient theoretical arguments for this conceptualization, nor do they discuss the disadvantages of this approach for validation of instruments. Second, their claim that strong inter-item correlations cannot be retrieved when means are close to the endpoint of scales ignores the existence of factor-analytic approaches for ordered-categorical indicators. Third, Welzel et al. (2021) propose that rather than relying on invariance tests, comparability can be assessed by studying the connection with theoretically related constructs. However, their proposal ignores that external validation through nomological linkages hinges on the assumption of comparability. By means of two examples, we illustrate that violating the assumptions of measurement invariance can distort conclusions substantially. Following the advice of Welzel et al. (2021) implies discarding a tool that has proven to be very useful for comparativists.
      Citation: Sociological Methods & Research
      PubDate: 2022-04-22T06:54:37Z
      DOI: 10.1177/00491241221091755
       
  • Against the Mainstream: On the Limitations of Non-Invariance Diagnostics:
           Response to Fischer et al. and Meulemann et al.

      Authors: Christian Welzel, Stefan Kruse, Lennart Brunkert
      Abstract: Sociological Methods & Research, Ahead of Print.
      Our original 2021 SMR article “Non-Invariance? An Overstated Problem with Misconceived Causes” disputes the conclusiveness of non-invariance diagnostics in diverse cross-cultural settings. Our critique targets the increasingly fashionable use of Multi-Group Confirmatory Factor Analysis (MGCFA), especially in its mainstream version. We document—both by mathematical proof and an empirical illustration—that non-invariance is an arithmetic artifact of group mean disparity on closed-ended scales. Precisely this artifactualness renders standard non-invariance markers inconclusive of measurement inequivalence under group-mean diversity. Using the Emancipative Values Index (EVI), OA-Section 3 of our original article demonstrates that such artifactual non-invariance is inconsequential for multi-item constructs’ cross-cultural performance in nomological terms, that is, explanatory power and predictive quality. Given these limitations of standard non-invariance diagnostics, we challenge the unquestioned authority of invariance tests as a tool of measurement validation. Our critique provoked two teams of authors to launch counter-critiques. We are grateful to the two comments because they give us a welcome opportunity to restate our position in greater clarity. Before addressing the comments one by one, we reformulate our key propositions more succinctly.
      Citation: Sociological Methods & Research
      PubDate: 2022-04-08T06:01:22Z
      DOI: 10.1177/00491241221091754
       
  • Image Clustering: An Unsupervised Approach to Categorize Visual Data in
           Social Science Research

      Authors: Han Zhang, Yilang Peng
      Abstract: Sociological Methods & Research, Ahead of Print.
      Automated image analysis has received increasing attention in social scientific research, yet existing scholarship has mostly covered the application of supervised learning to classify images into predefined categories. This study focuses on the task of unsupervised image clustering, which aims to automatically discover categories from unlabelled image data. We first review the steps to perform image clustering and then focus on one key challenge in this task—finding intermediate representations of images. We present several methods of extracting intermediate image representations, including the bag-of-visual-words model, self-supervised learning, and transfer learning (in particular, feature extraction with pretrained models). We compare these methods using various visual datasets, including images related to protests in China from Weibo, images about climate change on Instagram, and profile images of the Russian Internet Research Agency on Twitter. In addition, we propose a systematic way to interpret and validate clustering solutions. Results show that transfer learning significantly outperforms the other methods. The dataset used in the pretrained model critically determines what categories the algorithms can discover.
      Citation: Sociological Methods & Research
      PubDate: 2022-04-07T12:35:21Z
      DOI: 10.1177/00491241221082603
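      A sketch of the transfer-learning route the study finds most effective: extract features from a pretrained CNN and cluster them with k-means. It assumes a recent torchvision; the image paths and the number of clusters are placeholders:
      import numpy as np
      import torch
      import torchvision.models as models
      import torchvision.transforms as T
      from PIL import Image
      from sklearn.cluster import KMeans

      model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
      model.fc = torch.nn.Identity()        # keep the 512-d penultimate features
      model.eval()

      preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                              T.Normalize(mean=[0.485, 0.456, 0.406],
                                          std=[0.229, 0.224, 0.225])])

      paths = ["img_0001.jpg", "img_0002.jpg", "img_0003.jpg"]   # hypothetical files
      with torch.no_grad():
          feats = np.vstack([model(preprocess(Image.open(p).convert("RGB"))
                                   .unsqueeze(0)).numpy() for p in paths])

      labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(feats)
      print(labels)   # discovered cluster assignment for each image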
       
  • Evidence of Validity Does not Rule out Systematic Bias: A Commentary on
           Nomological Noise and Cross-Cultural Invariance

      Authors: Ronald Fischer, Johannes Alfons Karl, Johnny R. J. Fontaine, Ype H. Poortinga
      Abstract: Sociological Methods & Research, Ahead of Print.
      We comment on the argument by Welzel, Brunkert, Kruse and Inglehart (2021) that theoretically expected associations in nomological networks should take priority over invariance tests in cross-national research. We agree that narrow application of individual tools, such as multi-group confirmatory factor analysis with data that violates the assumptions of these techniques, can be misleading. However, findings that fit expectations of nomological networks may not be free of bias. We present supporting evidence of systematic bias affecting nomological network relationships from a) previous research on intelligence and response styles, b) two simulation studies, and c) data on the choice index from the World Value Survey (WVS). Our main point is that nomological network analysis by itself is insufficient to rule out systematic bias in data. We point out how a thoughtful exploration of sources of biases in cross-national data can contribute to stronger theory development.
      Citation: Sociological Methods & Research
      PubDate: 2022-04-06T03:21:25Z
      DOI: 10.1177/00491241221091756
       
  • Do Quantitative and Qualitative Research Reflect two Distinct
            Cultures? An Empirical Analysis of 180 Articles Suggests “no”

      Authors: David Kuehn, Ingo Rohlfing
      Abstract: Sociological Methods & Research, Ahead of Print.
      The debate about the characteristics and advantages of quantitative and qualitative methods is decades old. In their seminal monograph, A Tale of Two Cultures (2012, ATTC), Gary Goertz and James Mahoney argue that methods and research design practices for causal inference can be distinguished as two cultures that systematically differ from each other along 25 specific characteristics. ATTC’s stated goal is a description of empirical patterns in quantitative and qualitative research. Yet, it does not include a systematic empirical evaluation as to whether the 25 are relevant and valid descriptors of applied research. In this paper, we derive five observable implications from ATTC and test the implications against a stratified random sample of 90 qualitative and 90 quantitative articles published in six journals between 1990–2012. Our analysis provides little support for the two-cultures hypothesis. Quantitative methods are largely implemented as described in ATTC, whereas qualitative methods are much more diverse than ATTC suggests. While some practices do indeed conform to the qualitative culture, many others are implemented in a manner that ATTC characterizes as constitutive of the quantitative culture. We find very little evidence for ATTC's anchoring of qualitative research with set-theoretic approaches to empirical social science research. The set-theoretic template only applies to a fraction of the qualitative research that we reviewed, with the majority of qualitative work incorporating different method choices.
      Citation: Sociological Methods & Research
      PubDate: 2022-04-01T06:36:15Z
      DOI: 10.1177/00491241221082597
       
  • Sequential On-Device Multitasking within Online Surveys: A Data Quality
           and Response Behavior Perspective

      Authors: Jean Philippe Décieux
      Abstract: Sociological Methods & Research, Ahead of Print.
      The risk of multitasking is high in online surveys. However, knowledge on the effects of multitasking on answer quality is sparse and based on suboptimal approaches. Research reports inconclusive results concerning the consequences of multitasking on task performance. However, studies suggest that especially sequential-multitasking activities are expected to be critical. Therefore, this study focusses on sequential-on-device-multitasking activities (SODM) and their consequences for data quality. Based on probability-based data, this study aims to reveal the prevalence of SODM based on the JavaScript OnBlur event, to reflect on its determinants, and to examine the consequences for data quality. Results show that SODM was detected for 25% of all respondents and that respondent attributes and the device used to answer the survey are related to SODM. Moreover, it becomes apparent that SODM is significantly correlated with data quality measures. Therefore, I propose SODM behavior as a new instrument for researching suboptimal response behavior.
      Citation: Sociological Methods & Research
      PubDate: 2022-03-07T04:36:24Z
      DOI: 10.1177/00491241221082593
       
  • Bounding Causes of Effects With Mediators

      Authors: Philip Dawid, Macartan Humphreys, Monica Musio
      Abstract: Sociological Methods & Research, Ahead of Print.
      Suppose X and Y are binary exposure and outcome variables, and we have full knowledge of the distribution of Y, given application of X. We are interested in assessing whether an outcome in some case is due to the exposure. This “probability of causation” is of interest in comparative historical analysis where scholars use process tracing approaches to learn about causes of outcomes for single units by observing events along a causal path. The probability of causation is typically not identified, but bounds can be placed on it. Here, we provide a full characterization of the bounds that can be achieved in the ideal case that X and Y are connected by a causal chain of complete mediators, and we know the probabilistic structure of the full chain. Our results are largely negative. We show that, even in these very favorable conditions, the gains from positive evidence on mediators are modest.
      Citation: Sociological Methods & Research
      PubDate: 2022-03-03T09:08:07Z
      DOI: 10.1177/00491241211036161
       
  • Iteration in Mixed-Methods Research Designs Combining Experiments and
           Fieldwork

      Authors: Verónica Pérez Bentancur, Lucía Tiscornia
      Abstract: Sociological Methods & Research, Ahead of Print.
      Experimental designs in the social sciences have received increasing attention due to their power to produce causal inferences. Nevertheless, experimental research faces limitations, including limited external validity and unrealistic treatments. We propose combining qualitative fieldwork and experimental design iteratively—moving back-and-forth between elements of a research design—to overcome these limitations. To properly evaluate the strength of experiments, researchers need information about the context, data, and previous knowledge used to design the treatment. To support our argument, we analyze 338 pre-analysis plans submitted to the Evidence in Governance and Politics repository in 2019 and the design of a study on public opinion support for punitive policing practices in Montevideo, Uruguay. The paper provides insights about using qualitative fieldwork to enhance the external validity, transparency and replicability of experimental research, and a practical guide for researchers who want to incorporate iteration into their research designs.
      Citation: Sociological Methods & Research
      PubDate: 2022-03-03T08:21:13Z
      DOI: 10.1177/00491241221082595
       
  • Social Encounters and the Worlds Beyond: Putting Situationalism to Work
           for Qualitative Interviews

      Authors: Anders Vassenden, Marte Mangset
      Abstract: Sociological Methods & Research, Ahead of Print.
      In Goffman's terms, qualitative interviews are social encounters with their own realities. Hence, the ‘situational critique’ holds that interviews cannot produce knowledge about the world beyond these encounters, and that other methods, ethnography in particular, render lived life more accurately. The situational critique cannot be dismissed; yet interviewing remains an indispensable sociological tool. This paper demonstrates the value that situationalism holds for interviewing. We examine seemingly contradictory findings from interview studies of middle-class identity (cultural hierarchies and/or egalitarianism?). We then render these contradictions comprehensible by interpreting data excerpts through ‘methodological situationalism’: Goffman's theories of interaction order, ritual, and frontstage/backstage. In ‘situationalist interviewing,’ we suggest that sociologists be attentive to the ‘imagined audiences’ and ‘imagined communities’. These are key to identifying the situations, interaction orders, and cultural repertoires that lie beyond the interview encounter, but to which it refers. In sum, we argue for greater situational awareness among sociologists who must rely on interviews. We also discuss techniques and measures that can facilitate situational awareness. A promise of situational interviewing is that it helps us make sense of contradictions, ambiguities, and disagreements within and between interviews.
      Citation: Sociological Methods & Research
      PubDate: 2022-03-02T01:32:50Z
      DOI: 10.1177/00491241221082609
       
  • Abductive Coding: Theory Building and Qualitative (Re)Analysis

      Authors: Luis Vila-Henninger, Claire Dupuy, Virginie Van Ingelgom, Mauro Caprioli, Ferdinand Teuber, Damien Pennetreau, Margherita Bussi, Cal Le Gall
      Abstract: Sociological Methods & Research, Ahead of Print.
      Qualitative secondary analysis has generated heated debate regarding the epistemology of qualitative research. We argue that shifting to an abductive approach provides a fruitful avenue for qualitative secondary analysts who are oriented towards theory-building. However, the concrete implementation of abduction remains underdeveloped—especially for coding. We address this key gap by outlining a set of tactics for abductive analysis that can be applied for qualitative analysis. Our approach applies Timmermans and Tavory's (Timmermans and Tavory 2012; Tavory and Timmermans 2014) three stages of abduction in three steps for qualitative (secondary) analysis: Generating an Abductive Codebook, Abductive Data Reduction through Code Equations, and In-Depth Abductive Qualitative Analysis. A key contribution of our article is the development of “code equations”—defined as the combination of codes to operationalize phenomena that span individual codes. Code equations are an important resource for abduction and other qualitative approaches that leverage qualitative data to build theory.
      Citation: Sociological Methods & Research
      PubDate: 2022-02-15T02:17:52Z
      DOI: 10.1177/00491241211067508
       
  • In Search of a Comparable Measure of Generalized Individual Religiosity in
           the World Values Survey

      Authors: Alisa Remizova, Maksim Rudnev, Eldad Davidov
      Abstract: Sociological Methods & Research, Ahead of Print.
      Individual religiosity measures are used by researchers to describe and compare individuals and societies. However, the cross-cultural comparability of the measures has often been questioned but rarely empirically tested. In the current study, we examined the cross-national measurement invariance properties of generalized individual religiosity in the sixth wave of the World Values Survey. For the analysis, we used multiple group confirmatory factor analysis and alignment. Our results demonstrated that a theoretically driven measurement model was not invariant across all countries. We suggested four unidimensional measurement models and four overlapping groups of countries in which these measurement models demonstrated approximate invariance. The indicators that covered praying practices, importance of religion, and confidence in its institutions were more cross-nationally invariant than other indicators.
      Citation: Sociological Methods & Research
      PubDate: 2022-02-09T04:19:09Z
      DOI: 10.1177/00491241221077239
       
  • Comparing Single- and Multiple-Question Designs of Measuring Family Income
           in China Family Panel Studies

      Authors: Qiong Wu, Liping Gu
      Abstract: Sociological Methods & Research, Ahead of Print.
      Family income questions in general purpose surveys are usually collected with either a single-question summary design or a multiple-question disaggregation design. It is unclear how estimates from the two approaches agree with each other. The current paper takes advantage of a large-scale survey that has collected family income with both methods. With data from 14,222 urban and rural families in the 2018 wave of the nationally representative China Family Panel Studies, we compare the two estimates, and further evaluate factors that might contribute to the discrepancy. We find that the two estimates are loosely matched in only a third of all families, and most of the matched families have a simple income structure. Although the mean of the multiple-question estimate is larger than that of the single-question estimate, the pattern is not monotonic. At lower percentiles up till the median, the single-question estimate is larger, whereas the multiple-question estimate is larger at higher percentiles. Larger family sizes and more income sources contribute to higher likelihood of inconsistent estimates from the two designs. Families with wage income as the main income source have the highest likelihood of giving consistent estimates compared with all other families. In contrast, families with agricultural income or property income as the main source tend to have very high probability of larger single-question estimates. Omission of certain income components and rounding can explain over half of the inconsistencies with higher multiple-question estimates and a quarter of the inconsistencies with higher single-question estimates.
      Citation: Sociological Methods & Research
      PubDate: 2022-02-08T11:26:36Z
      DOI: 10.1177/00491241221077238
       
  • Visual Design and Cognition in List-Style Open-Ended Questions in Web
           Probing

      Authors: Katharina Meitinger, Tanja Kunz
      Abstract: Sociological Methods & Research, Ahead of Print.
      Previous research reveals that the visual design of open-ended questions should match the response task so that respondents can infer the expected response format. Based on a web survey including specific probes in a list-style open-ended question format, we experimentally tested the effects of varying numbers of answer boxes on several indicators of response quality. Our results showed that using multiple small answer boxes instead of one large box had a positive impact on the number and variety of themes mentioned, as well as on the conciseness of responses to specific probes. We found no effect on the relevance of themes and the risk of item non-response. Based on our findings, we recommend using multiple small answer boxes instead of one large box to convey the expected response format and improve response quality in specific probes. This study makes a valuable contribution to the field of web probing, extends the concept of response quality in list-style open-ended questions, and provides a deeper understanding of how visual design features affect cognitive response processes in web surveys.
      Citation: Sociological Methods & Research
      PubDate: 2022-02-08T05:05:33Z
      DOI: 10.1177/00491241221077241
       
  • The Potential for Using a Shortened Version of the Everyday Discrimination
           Scale in Population Research with Young Adults: A Construct Validation
           Investigation

      Authors: Aprile D. Benner, Shanting Chen, Celeste C. Fernandez, Mark D. Hayward
      Abstract: Sociological Methods & Research, Ahead of Print.
      Discrimination is associated with numerous psychological health outcomes over the life course. The nine-item Everyday Discrimination Scale (EDS) is one of the most widely used measures of discrimination; however, this nine-item measure may not be feasible in large-scale population health surveys where a shortened discrimination measure would be advantageous. The current study examined the construct validity of a combined two-item discrimination measure adapted from the EDS by Add Health (N = 14,839) as compared to the full nine-item EDS and a two-item EDS scale (parallel to the adapted combined measure) used in the National Survey of American Life (NSAL; N = 1,111) and National Latino and Asian American Study (NLAAS) studies (N = 1,055). Results identified convergence among the EDS scales, with high item-total correlations, convergent validity, and criterion validity for psychological outcomes, thus providing evidence for the construct validity of the two-item combined scale. Taken together, the findings provide support for using this reduced scale in studies where the full EDS scale is not available.
      Citation: Sociological Methods & Research
      PubDate: 2022-02-07T05:48:20Z
      DOI: 10.1177/00491241211067512
       
  • Do Different Devices Perform Equally Well with Different Numbers of Scale
           Points and Response Formats? A test of measurement invariance and
           reliability

      Authors: Natalja Menold, Vera Toepoel
      Abstract: Sociological Methods & Research, Ahead of Print.
      Research on mixed devices in web surveys is in its infancy. Using a randomized experiment, we investigated device effects (desktop PC, tablet and mobile phone) for six response formats and four different numbers of scale points. N = 5,077 members of an online access panel participated in the experiment. An exact test of measurement invariance and Composite Reliability were investigated. The results provided full data comparability for devices and formats, with the exception of the continuous Visual Analog Scale (VAS), but limited comparability for different numbers of scale points. There were device effects on reliability when looking at the interactions with formats and number of scale points. The VAS, the use of mobile phones, and five-point scales consistently yielded lower reliability. We suggest technically less demanding implementations as well as a unified design for mixed-device surveys.
      Citation: Sociological Methods & Research
      PubDate: 2022-02-07T05:47:55Z
      DOI: 10.1177/00491241221077237
       
  • Moving Beyond Linear Regression: Implementing and Interpreting Quantile
           Regression Models With Fixed Effects

      Authors: Fernando Rios-Avila, Michelle Lee Maroto
      Abstract: Sociological Methods & Research, Ahead of Print.
      Quantile regression (QR) provides an alternative to linear regression (LR) that allows for the estimation of relationships across the distribution of an outcome. However, as highlighted in recent research on the motherhood penalty across the wage distribution, different procedures for conditional and unconditional quantile regression (CQR, UQR) often result in divergent findings that are not always well understood. In light of such discrepancies, this paper reviews how to implement and interpret a range of LR, CQR, and UQR models with fixed effects. It also discusses the use of Quantile Treatment Effect (QTE) models as an alternative to overcome some of the limitations of CQR and UQR models. We then review how to interpret results in the presence of fixed effects based on a replication of Budig and Hodges’s work on the motherhood penalty using NLSY79 data.
      Citation: Sociological Methods & Research
      PubDate: 2022-02-01T10:28:19Z
      DOI: 10.1177/00491241211036165
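      A sketch of the two procedures contrasted above on simulated data: conditional quantile regression (CQR) and unconditional quantile regression (UQR) via a recentered influence function, each with fixed effects entered as dummies. The variable names are illustrative, not the NLSY79 setup used in the article:
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 5000
      df = pd.DataFrame({"mother": rng.integers(0, 2, n),
                         "person": rng.integers(0, 50, n)})
      df["lnwage"] = 2.5 - 0.05 * df["mother"] + rng.normal(scale=0.5, size=n)
      tau = 0.5

      # Conditional quantile regression with person fixed effects as dummies.
      cqr = smf.quantreg("lnwage ~ mother + C(person)", data=df).fit(q=tau)

      # Unconditional quantile regression: RIF(y; q) = q + (tau - 1{y <= q}) / f(q),
      # then OLS of the RIF on the same covariates and fixed effects.
      q = df["lnwage"].quantile(tau)
      kde = sm.nonparametric.KDEUnivariate(df["lnwage"])
      kde.fit()
      f_q = np.interp(q, kde.support, kde.density)
      df["rif"] = q + (tau - (df["lnwage"] <= q)) / f_q
      uqr = smf.ols("rif ~ mother + C(person)", data=df).fit()

      print(cqr.params["mother"], uqr.params["mother"])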
       
  • Updating a Time-Series of Survey Questions: The Case of Abortion Attitudes
           in the General Social Survey

      Authors: Sarah K. Cowan, Michael Hout, Stuart Perrett
      Abstract: Sociological Methods & Research, Ahead of Print.
      Long-running surveys need a systematic way to reflect social change and to keep items relevant to respondents, especially when they ask about controversial subjects, or they threaten the items’ validity. We propose a protocol for updating measures that preserves content and construct validity. First, substantive experts articulate the current and anticipated future terms of debate. Then survey experts use this substantive input and their knowledge of existing measures to develop and pilot a large battery of new items. Third, researchers analyze the pilot data to select items for the survey of record. Finally, the items appear on the survey-of-record, available to the whole user community. Surveys-of-record have procedures for changing content that determine if the new items appear just once or become part of the core. We provide the example of developing new abortion attitude measures in the General Social Survey. Current questions ask whether abortion should be legal under varying circumstances. The new abortion items ask about morality, access, state policy, and interpersonal dynamics. They improve content and construct validity and add new insights into Americans’ abortion attitudes.
      Citation: Sociological Methods & Research
      PubDate: 2022-01-27T02:43:11Z
      DOI: 10.1177/00491241211043140
       
  • Relevant, Irrelevant, or Ambiguous? Toward a New Interpretation of
           QCA’s Solution Types

    • Free pre-print version: Loading...

      Authors: Tim Haesebrouck
      Abstract: Sociological Methods & Research, Ahead of Print.
      The field of qualitative comparative analysis (QCA) is witnessing a heated debate on which one of QCA's main solution types should be at the center of substantive interpretation. This article argues that the different QCA solutions have complementary strengths. Therefore, researchers should interpret the three solution types in an integrated way, in order to get as much information as possible on the causal structure behind the phenomenon under investigation. The parsimonious solution is capable of identifying causally relevant conditions, the conservative solution of identifying contextually irrelevant conditions. In addition to conditions for which the data provide evidence that they are causally relevant or contextually irrelevant, there will be conditions for which the data suggest neither that they are causally relevant nor that they are contextually irrelevant. In line with the procedure for crafting the intermediate solution, it is possible to make clear for which of these ambiguous conditions it is not plausible that they are relevant in the context of the research.
      Citation: Sociological Methods & Research
      PubDate: 2022-01-25T09:39:21Z
      DOI: 10.1177/00491241211036153
       
  • A New Approach to Detecting Cheating in Sensitive Surveys: The Cheating
           Detection Triangular Model

      Authors: Julia Meisters, Adrian Hoffmann, Jochen Musch
      Abstract: Sociological Methods & Research, Ahead of Print.
      Indirect questioning techniques such as the randomized response technique aim to control social desirability bias in surveys of sensitive topics. To improve upon previous indirect questioning techniques, we propose the new Cheating Detection Triangular Model. Similar to the Cheating Detection Model, it includes a mechanism for detecting instruction non-adherence, and similar to the Triangular Model, it uses simplified instructions to improve respondents’ understanding of the procedure. Based on a comparison with the known prevalence of a sensitive attribute serving as external criterion, we report the first individual-level validation of the Cheating Detection Model, the Triangular Model and the Cheating Detection Triangular Model. Moreover, the sensitivity and specificity of all models was assessed, as well as the respondents’ subjective evaluation of all questioning technique formats. Based on our results, the Cheating Detection Triangular Model appears to be the best choice among the investigated indirect questioning techniques.
      Citation: Sociological Methods & Research
      PubDate: 2022-01-19T11:49:08Z
      DOI: 10.1177/00491241211055764
       
  • Self-protecting responses in randomized response designs: A survey on
           intimate partner violence during the coronavirus disease 2019 pandemic

      Authors: Fabiola Reiber, Donna Bryce, Rolf Ulrich
      Abstract: Sociological Methods & Research, Ahead of Print.
      Randomized response techniques (RRTs) are applied to reduce response biases in self-report surveys on sensitive research questions (e.g., on socially undesirable characteristics). However, there is evidence that they cannot completely eliminate self-protecting response strategies. To address this problem, there are RRTs specifically designed to measure the extent of such strategies. Here we assessed the recently devised unrelated question model—cheating extension (UQMC) in a preregistered online survey on intimate partner violence (IPV) victimization and perpetration during the first contact restrictions as containment measures for the outbreak of the coronavirus disease 2019 pandemic in Germany in early 2020. The UQMC accounting for self-protecting responses described the data better than its predecessor model which assumes instruction adherence. The resulting three-month prevalence estimates were about 10% and we found a high proportion of self-protecting responses in the group of female participants queried about IPV victimization. However, unexpected results concerning the differences in prevalence estimates across the groups queried about victimization and perpetration highlight the difficulty of investigating sensitive research questions even using methods that guarantee anonymity and the importance of interpreting the respective estimates with caution.
      Citation: Sociological Methods & Research
      PubDate: 2022-01-17T04:14:03Z
      DOI: 10.1177/00491241211043138
       
  • The Gap-Closing Estimand: A Causal Approach to Study Interventions That
           Close Disparities Across Social Categories

      Authors: Ian Lundberg
      Abstract: Sociological Methods & Research, Ahead of Print.
      Disparities across race, gender, and class are important targets of descriptive research. But rather than only describe disparities, research would ideally inform interventions to close those gaps. The gap-closing estimand quantifies how much a gap (e.g., incomes by race) would close if we intervened to equalize a treatment (e.g., access to college). Drawing on causal decomposition analyses, this type of research question yields several benefits. First, gap-closing estimands place categories like race in a causal framework without making them play the role of the treatment (which is philosophically fraught for non-manipulable variables). Second, gap-closing estimands empower researchers to study disparities using new statistical and machine learning estimators designed for causal effects. Third, gap-closing estimands can directly inform policy: if we sampled from the population and actually changed treatment assignments, how much could we close gaps in outcomes? I provide open-source software (the R package gapclosing) to support these methods.
      Citation: Sociological Methods & Research
      PubDate: 2022-01-13T08:55:12Z
      DOI: 10.1177/00491241211055769
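      A schematic illustration of the gap-closing idea on simulated data (not the estimators implemented in the R package gapclosing mentioned above): compare the factual gap between two categories with the gap predicted under an intervention that equalizes the treatment:
      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      n = 10_000
      category = rng.integers(0, 2, n)                     # e.g. two social categories
      treated = rng.binomial(1, 0.3 + 0.3 * category)      # unequal access to the treatment
      income = 20 + 10 * treated + 5 * category + rng.normal(scale=5, size=n)
      df = pd.DataFrame({"category": category, "treated": treated, "income": income})

      # Factual gap in mean outcomes between the categories.
      factual_gap = df.groupby("category")["income"].mean().diff().iloc[-1]

      # Outcome model, then predictions under an intervention setting treatment to 1 for all.
      model = LinearRegression().fit(df[["treated", "category"]], df["income"])
      df["income_do"] = model.predict(df.assign(treated=1)[["treated", "category"]])
      counterfactual_gap = df.groupby("category")["income_do"].mean().diff().iloc[-1]

      print(f"factual gap: {factual_gap:.2f}; gap under equalized treatment: {counterfactual_gap:.2f}")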
       
  • Recurrent Multinomial Models for Categorical Sequences

      Authors: Michael Schultz
      Abstract: Sociological Methods & Research, Ahead of Print.
      This paper presents a model of recurrent multinomial sequences. Though there exists a quite considerable literature on modeling autocorrelation in numerical data and sequences of categorical outcomes, there is currently no systematic method of modeling patterns of recurrence in categorical sequences. This paper develops a means of discovering recurrent patterns by employing a more restrictive Markov assumption. The resulting model, which I call the recurrent multinomial model, provides a parsimonious representation of recurrent sequences, enabling the investigation of recurrences on longer time scales than existing models. The utility of recurrent multinomial models is demonstrated by applying them to the case of conversational turn-taking in meetings of the Federal Open Market Committee (FOMC). Analyses are effectively able to discover norms around turn-reclaiming, participation, and suppression and to evaluate how these norms vary throughout the course of the meeting.
      Citation: Sociological Methods & Research
      PubDate: 2022-01-11T10:47:10Z
      DOI: 10.1177/00491241211067513
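      For orientation, the simple first-order Markov baseline that the recurrent multinomial model generalizes, estimated here from a toy sequence of speaker turns (the FOMC transcripts themselves are not used):
      import pandas as pd

      turns = ["chair", "member_a", "chair", "member_b", "member_b", "chair",
               "member_a", "chair", "member_b", "chair"]

      pairs = pd.DataFrame({"current": turns[:-1], "next": turns[1:]})
      transition = pd.crosstab(pairs["current"], pairs["next"], normalize="index")
      print(transition.round(2))   # estimated P(next speaker | current speaker)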
       
  • Estimation and sensitivity analysis for causal decomposition in health
           disparity research

      Authors: Soojin Park, Xu Qin, Chioun Lee
      Abstract: Sociological Methods & Research, Ahead of Print.
      In the field of disparities research, there has been growing interest in developing a counterfactual-based decomposition analysis to identify underlying mediating mechanisms that help reduce disparities in populations. Despite rapid development in the area, most prior studies have been limited to regression-based methods, undermining the possibility of addressing complex models with multiple mediators and/or heterogeneous effects. We propose a novel estimation method that effectively addresses complex models. Moreover, we develop a sensitivity analysis for possible violations of an identification assumption. The proposed method and sensitivity analysis are demonstrated with data from the Midlife Development in the US study to investigate the degree to which disparities in cardiovascular health at the intersection of race and gender would be reduced if the distributions of education and perceived discrimination were the same across intersectional groups.
      Citation: Sociological Methods & Research
      PubDate: 2022-01-11T03:56:06Z
      DOI: 10.1177/00491241211067516
       
 