Authors: Scott W. Duxbury
Abstract: Sociological Methodology, Ahead of Print. How do individuals’ network selection decisions create unique network structures? Despite broad sociological interest in the micro-level social interactions that create macro-level network structure, few methods are available to statistically evaluate micro-macro relationships in social networks. This study introduces a general methodological framework for testing the effect of (micro) network selection processes, such as homophily, reciprocity, or preferential attachment, on unique (macro) network structures, such as segregation, clustering, or brokerage. The approach uses estimates from a statistical network model to decompose the contributions of each parameter to a node, subgraph, or global network statistic specified by the researcher. A flexible parametric algorithm is introduced to estimate variances, confidence intervals, and p values. Prior micro-macro network methods can be regarded as special cases of the general framework. Extensions to hypothetical network interventions, joint parameter tests, and longitudinal and multilevel network data are discussed. An example is provided analyzing the micro foundations of political segregation in a crime policy collaboration network.
Citation: Sociological Methodology
PubDate: 2023-11-08T12:29:57Z
DOI: 10.1177/00811750231209040
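The simulate-and-compare logic described in this abstract can be illustrated with a minimal sketch. This is not the article's estimator: the tie model, coefficients, covariance, and macro statistic (share of within-group ties) below are invented for illustration. The sketch compares the simulated statistic under the full parameter vector against a counterfactual in which the homophily parameter is set to zero, with parametric draws of the coefficients supplying a rough confidence interval.

```python
# Hypothetical sketch: contribution of a homophily parameter to network-level
# segregation, via simulation from a dyad-independent tie model.
import numpy as np

rng = np.random.default_rng(0)
n = 100
group = rng.integers(0, 2, n)                    # two node groups
same = (group[:, None] == group[None, :]).astype(float)

# Assumed fitted coefficients (intercept, homophily) and their covariance.
beta_hat = np.array([-3.0, 1.5])
beta_cov = np.diag([0.02, 0.05])

def simulate_segregation(beta):
    """Simulate a network from the tie model and return the share of
    within-group ties (a simple macro segregation statistic)."""
    logit = beta[0] + beta[1] * same
    p = 1 / (1 + np.exp(-logit))
    ties = rng.random((n, n)) < p
    np.fill_diagonal(ties, False)
    return ties[same == 1].sum() / ties.sum()

# Parametric draws: full model vs. homophily parameter set to zero.
draws = rng.multivariate_normal(beta_hat, beta_cov, size=200)
full = np.array([simulate_segregation(b) for b in draws])
no_homophily = np.array([simulate_segregation(b * np.array([1, 0])) for b in draws])

contrib = full - no_homophily
print("estimated contribution of homophily to segregation:",
      contrib.mean().round(3),
      "95% CI:", np.percentile(contrib, [2.5, 97.5]).round(3))
```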
Authors: Loring J. Thomas, Peng Huang, Xiaoshuang Iris Luo, John R. Hipp, Carter T. Butts
Abstract: Sociological Methodology, Ahead of Print. Geospatial population data are typically organized into nested hierarchies of areal units, in which each unit is a union of units at the next lower level. There is increasing interest in analyses at fine geographic detail, but these lowest rungs of the areal unit hierarchy are often incompletely tabulated because of cost, privacy, or other considerations. Here, the authors introduce a novel algorithm to impute crosstabs of up to three dimensions (e.g., race, ethnicity, and gender) from marginal data combined with data at higher levels of aggregation. This method exactly preserves the observed fine-grained marginals, while approximating higher-order correlations observed in more complete higher level data. The authors show how this approach can be used with U.S. census data via a case study involving differences in exposure to crime across demographic groups, showing that the imputation process introduces very little error into downstream analysis, while depicting social processes at the more fine-grained level.
Citation: Sociological Methodology
PubDate: 2023-11-08T12:00:16Z
DOI: 10.1177/00811750231203218
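One standard way to combine fine-grained marginals with a higher-level joint distribution is iterative proportional fitting (IPF). The sketch below is not the authors' algorithm (which handles up to three dimensions and additional structure); it only illustrates the basic idea of seeding a block-level crosstab with a tract-level joint table and rescaling until the block's observed marginals are matched exactly. All numbers are invented.

```python
# Hypothetical sketch: iterative proportional fitting (IPF) that seeds a
# block-level race-by-gender table with the tract-level joint distribution
# and then matches the block's observed marginals exactly.
import numpy as np

tract_joint = np.array([[120., 130.],        # higher-level crosstab
                        [ 40.,  35.],        # (rows = race, cols = gender)
                        [ 25.,  30.]])
block_race   = np.array([30., 12., 8.])      # observed fine-grained marginals
block_gender = np.array([26., 24.])

table = tract_joint / tract_joint.sum()      # seed with higher-level structure
for _ in range(100):                         # alternate row/column scaling
    table *= (block_race / table.sum(axis=1))[:, None]
    table *= (block_gender / table.sum(axis=0))[None, :]

print(table.round(2))
print("row margins:", table.sum(axis=1), "col margins:", table.sum(axis=0))
```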
Authors: Kenneth R. Hanson, Nicholas Theis
Abstract: Sociological Methodology, Ahead of Print. Researchers can use data visualization techniques to explore, analyze, and present data in new ways. Although quantitative data are visualized most often, recent innovations have brought attention to the potential benefits of visualizing qualitative data. In this article, the authors demonstrate one way researchers can use networks to analyze and present ethnographic interview data. The authors suggest that because many respondents know one another in ethnographic research, networks are a useful tool for analyzing the implications of respondents’ familiarity with one another. Moreover, respondents often share familiar cultural references that can be visualized. The authors show how visualizing respondents’ ties in conjunction with their shared cultural references sheds light on the different systems of meaning that respondents within a field site use to make sense of the social phenomena under investigation.
Citation: Sociological Methodology
PubDate: 2023-09-07T11:54:33Z
DOI: 10.1177/00811750231195338
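A minimal sketch of the kind of two-mode structure the authors describe, assuming hypothetical coded interview data: respondents are tied to the cultural references they invoke, and respondent-to-respondent ties record who knows whom. The networkx and matplotlib calls are generic; the node names are invented.

```python
# Hypothetical sketch: a two-mode (respondent-by-cultural-reference) network
# built from coded interview data and drawn alongside respondent ties.
import networkx as nx
import matplotlib.pyplot as plt

# Toy coded data: which cultural references each respondent invoked.
references = {
    "R1": ["reference_A", "reference_B"],
    "R2": ["reference_B"],
    "R3": ["reference_B", "reference_C"],
    "R4": ["reference_C"],
}
knows = [("R1", "R2"), ("R2", "R3"), ("R3", "R4")]   # respondents who know each other

G = nx.Graph()
G.add_nodes_from(references, bipartite=0)
G.add_nodes_from({r for refs in references.values() for r in refs}, bipartite=1)
G.add_edges_from((person, ref) for person, refs in references.items() for ref in refs)
G.add_edges_from(knows)

colors = ["lightblue" if node in references else "salmon" for node in G]
nx.draw(G, with_labels=True, node_color=colors)
plt.show()
```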
Authors: Donghui Wang, Yu Xie, Junming Huang
Abstract: Sociological Methodology, Ahead of Print. The use of pooled data from different repeated survey series to study long-term trends is handicapped by a measurement difficulty: different survey series often use different scales to measure the same attitude and thus generate scale-incomparable data. In this article, the authors propose the latent attitude method (LAM) to address this scale-incomparability problem, on the basis of the assumption that attitudes measured by ordinal categories reflect a latent attitude with cut points. The method extends the latent variable method in the case of a single survey series to the case of multiple survey series and leverages overlapping years for identification. The authors first assess the validity of the method with simulated data. The results show that the method yields accurate estimates of mean attitudes and cut point values. The authors then apply the method to an empirical study of Americans’ attitudes toward China from 1974 to 2019.
Citation: Sociological Methodology
PubDate: 2023-09-05T09:53:25Z
DOI: 10.1177/00811750231193641
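A small simulation of the measurement assumption underlying this approach, not the authors' estimator: a continuous latent attitude is discretized by survey-specific cut points, so two series measuring the same attitude on different scales yield different ordinal distributions. The cut points, latent mean, and sample size below are invented.

```python
# Hypothetical sketch of the measurement model behind the latent attitude
# method: a continuous latent attitude is discretized by survey-specific
# cut points, so two series can yield different ordinal distributions even
# when the underlying attitude distribution is identical.
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(loc=0.2, scale=1.0, size=10_000)   # latent attitude in one year

cuts_series_a = [-0.5, 0.5]             # 3-point scale
cuts_series_b = [-1.0, -0.2, 0.6, 1.2]  # 5-point scale

resp_a = np.digitize(latent, cuts_series_a)
resp_b = np.digitize(latent, cuts_series_b)

print("series A category shares:", np.bincount(resp_a) / latent.size)
print("series B category shares:", np.bincount(resp_b) / latent.size)
# In overlapping years both sets of shares are generated by the same latent
# mean, which is what allows means and cut points to be identified jointly.
```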
Authors: Soojin Park, Suyeon Kang, Chioun Lee
Abstract: Sociological Methodology, Ahead of Print. Causal decomposition analysis is among the rapidly growing number of tools for identifying factors (“mediators”) that contribute to disparities in outcomes between social groups. An example of such mediators is college completion, which explains later health disparities between Black women and White men. The goal is to quantify how much a disparity would be reduced (or remain) if we hypothetically intervened to set the mediator distribution equal across social groups. Despite increasing interest in estimating disparity reduction and the disparity that remains, various estimation procedures are not straightforward, and researchers have scant guidance for choosing an optimal method. In this article, the authors evaluate the performance, in terms of bias, variance, and coverage, of three approaches that use different modeling strategies: (1) regression-based methods that impose restrictive modeling assumptions (e.g., linearity) and (2) weighting-based and (3) imputation-based methods that rely on the observed distribution of variables. The authors find a trade-off between the modeling assumptions required in the method and its performance. In terms of performance, regression-based methods operate best as long as the restrictive assumption of linearity is met. Methods relying on mediator models without imposing any modeling assumptions are sensitive to the ratio of the group-mediator association to the mediator-outcome association. These results highlight the importance of selecting an appropriate estimation procedure considering the data at hand.
Citation: Sociological Methodology
PubDate: 2023-07-17T11:24:24Z
DOI: 10.1177/00811750231183711
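A toy sketch of the disparity-reduction idea, not any of the three estimators compared in the article: with a known data-generating model, the mediator distribution of the disadvantaged group is reset to match the advantaged group and the disparity is recomputed. All coefficients and prevalences are invented.

```python
# Hypothetical sketch: disparity reduction estimated by resampling the
# mediator (college completion) of the disadvantaged group from the
# advantaged group's distribution, holding the outcome model fixed.
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
group = rng.integers(0, 2, n)                       # 1 = advantaged group
college = rng.random(n) < np.where(group == 1, 0.6, 0.3)
health = 1.0 * group + 2.0 * college + rng.normal(size=n)

observed_disparity = health[group == 1].mean() - health[group == 0].mean()

# Intervention: give the disadvantaged group the advantaged group's mediator
# distribution and re-generate the outcome from the same model.
college_cf = np.where(group == 0, rng.random(n) < 0.6, college)
health_cf = 1.0 * group + 2.0 * college_cf + rng.normal(size=n)
residual_disparity = health_cf[group == 1].mean() - health_cf[group == 0].mean()

print("observed disparity:  ", round(observed_disparity, 2))
print("residual disparity:  ", round(residual_disparity, 2))
print("disparity reduction: ", round(observed_disparity - residual_disparity, 2))
```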
Authors: Ozan Aksoy, Sinan Yıldırım
Abstract: Sociological Methodology, Ahead of Print. The flow of resources across nodes over time (e.g., migration, financial transfers, peer-to-peer interactions) is a common phenomenon in sociology. Standard statistical methods are inadequate to model such interdependent flows. We propose a hierarchical Dirichlet-multinomial regression model and a Bayesian estimation method. We apply the model to analyze 25,632,876 migration instances that took place between Turkey’s 81 provinces from 2009 to 2018. We then discuss the methodological and substantive implications of our results. Methodologically, we demonstrate the predictive advantage of our model compared to its most common alternative in migration research, the gravity model. We also discuss our model in the context of other approaches, mostly developed in the social networks literature. Substantively, we find that population, economic prosperity, the spatial and political distance between the origin and destination, the strength of the AKP (Justice and Development Party) in a province, and the network characteristics of the provinces are important predictors of migration, whereas the proportion of ethnic minority Kurds in a province has no positive association with in- and out-migration.
Citation: Sociological Methodology
PubDate: 2023-07-11T10:29:44Z
DOI: 10.1177/00811750231184460
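A minimal sketch of the Dirichlet-multinomial flow structure, not the authors' hierarchical Bayesian model: each origin's out-migrants are allocated across destinations with concentration parameters driven by toy covariates, which introduces the over-dispersion a plain multinomial would miss. The coefficients, covariates, and province count are invented.

```python
# Hypothetical sketch of a Dirichlet-multinomial flow model: each origin
# province distributes its migrants over destinations, with concentration
# parameters driven by destination covariates (e.g., population, distance).
import numpy as np

rng = np.random.default_rng(3)
n_prov = 5
log_pop = rng.normal(size=n_prov)                      # toy destination covariate
distance = np.abs(rng.normal(size=(n_prov, n_prov)))   # toy origin-destination distance
out_migrants = rng.integers(1_000, 5_000, n_prov)      # total out-flow per origin

beta_pop, beta_dist, phi = 0.8, -1.2, 50.0             # assumed coefficients, precision

flows = np.zeros((n_prov, n_prov), dtype=int)
for o in range(n_prov):
    dest = np.arange(n_prov) != o                      # all provinces except the origin
    eta = beta_pop * log_pop[dest] + beta_dist * distance[o, dest]
    alpha = phi * np.exp(eta) / np.exp(eta).sum()      # concentration parameters
    shares = rng.dirichlet(alpha)                      # over-dispersed destination shares
    flows[o, dest] = rng.multinomial(out_migrants[o], shares)

print(flows)
```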
Authors: Satu Helske, Jouni Helske, Guilherme K. Chihaya
Abstract: Sociological Methodology, Ahead of Print. Sequence analysis is increasingly used in the social sciences for the holistic analysis of life-course and other longitudinal data. The usual approach is to construct sequences, calculate dissimilarities, group similar sequences with cluster analysis, and use cluster membership as a dependent or independent variable in a regression model. This approach may be problematic, as cluster memberships are assumed to be fixed known characteristics of the subjects in subsequent analyses. Furthermore, it is often more reasonable to assume that individual sequences are mixtures of multiple ideal types rather than equal members of some group. Failing to account for uncertain and mixed memberships may lead to wrong conclusions about the nature of the studied relationships. In this article, the authors bring forward and discuss the problems of the “traditional” use of sequence analysis clusters as variables and compare four approaches for creating explanatory variables from sequence dissimilarities using different types of data. The authors conduct simulation and empirical studies, demonstrating the importance of considering how sequences and outcomes are related and the need to adjust analyses accordingly. In many typical social science applications, the traditional approach is prone to result in wrong conclusions, and similarity-based approaches such as representativeness should be preferred.
Citation: Sociological Methodology
PubDate: 2023-06-15T09:19:10Z
DOI: 10.1177/00811750231177026
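A sketch of one similarity-based alternative to hard cluster membership, assuming toy sequence data: pairwise Hamming dissimilarities are clustered, and each sequence's closeness to its cluster medoid is retained as a continuous covariate rather than a fixed label. This illustrates the general idea, not the specific representativeness measures evaluated in the article.

```python
# Hypothetical sketch: instead of using hard cluster membership, compute a
# continuous "representativeness" covariate, i.e., each sequence's similarity
# to its cluster medoid, based on simple Hamming dissimilarities.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(4)
states = ["E", "U", "P"]                      # employed / unemployed / parental leave
seqs = rng.choice(states, size=(60, 12))      # toy monthly sequences

# Pairwise Hamming dissimilarity (share of months spent in different states).
diss = (seqs[:, None, :] != seqs[None, :, :]).mean(axis=2)

clusters = fcluster(linkage(squareform(diss, checks=False), method="ward"),
                    t=3, criterion="maxclust")

# Representativeness: 1 minus distance to the medoid of one's cluster.
representativeness = np.empty(len(seqs))
for k in np.unique(clusters):
    members = np.where(clusters == k)[0]
    medoid = members[diss[np.ix_(members, members)].sum(axis=1).argmin()]
    representativeness[members] = 1 - diss[members, medoid]

print(representativeness[:10].round(2))       # use as a covariate, not a fixed label
```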
Authors: Maik Hamjediers, Maximilian Sprengholz
Abstract: Sociological Methodology, Ahead of Print. Decompositions make it possible to investigate whether gaps between groups in certain outcomes would remain if groups had comparable characteristics. In practice, however, such a counterfactual comparability is difficult to establish when common support is lacking, functional form is misspecified, or sample size is insufficient. In this article, the authors show how decompositions can be undermined by these three interrelated issues by comparing the results of a regression-based Kitagawa-Blinder-Oaxaca decomposition and matching decompositions applied to simulated and real-world data. The results show that matching decompositions are robust to issues of common support and functional-form misspecification but demand a large number of observations. Kitagawa-Blinder-Oaxaca decompositions provide consistent estimates even for smaller samples but require assumptions for model specification and, when common support is lacking, for model-based extrapolation. The authors recommend starting any decomposition with a matching approach to assess potential problems of common support and misspecification.
Citation: Sociological Methodology
PubDate: 2023-05-20T12:31:22Z
DOI: 10.1177/00811750231169729
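A minimal sketch of the regression-based Kitagawa-Blinder-Oaxaca twofold decomposition discussed above, using invented data: the mean outcome gap is split into a part explained by group differences in characteristics and an unexplained part attributable to differences in coefficients, with group B's coefficients as the reference.

```python
# Hypothetical sketch: a twofold Kitagawa-Blinder-Oaxaca decomposition of a
# wage gap into an "explained" (characteristics) and "unexplained"
# (coefficients) component.
import numpy as np

rng = np.random.default_rng(5)
n = 2_000
group = rng.integers(0, 2, n)                       # 0 = group A, 1 = group B
educ = rng.normal(12 + 2 * group, 2, n)             # groups differ in characteristics
wage = 5 + 1.5 * educ + 0.5 * group + rng.normal(0, 3, n)   # and in intercepts

def ols(y, X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

XA = np.column_stack([np.ones((group == 0).sum()), educ[group == 0]])
XB = np.column_stack([np.ones((group == 1).sum()), educ[group == 1]])
bA, bB = ols(wage[group == 0], XA), ols(wage[group == 1], XB)

gap = wage[group == 1].mean() - wage[group == 0].mean()
explained = (XB.mean(axis=0) - XA.mean(axis=0)) @ bB     # differences in characteristics
unexplained = XA.mean(axis=0) @ (bB - bA)                # differences in coefficients

print(f"gap={gap:.2f}  explained={explained:.2f}  unexplained={unexplained:.2f}")
```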
Authors: Angelo Moretti
Abstract: Sociological Methodology, Ahead of Print. Large-scale sample surveys are not designed to produce reliable estimates for small areas. Here, small area estimation methods can be applied to estimate population parameters of target variables at detailed geographic scales. Small area estimation for noncontinuous variables is a topic of great interest in the social sciences, where such variables are common. Generalized linear mixed models are widely adopted in the literature. Interestingly, the small area estimation literature shows that multivariate small area estimators, where correlations among outcome variables are taken into account, produce more efficient estimates than do the traditional univariate techniques. In this article, the author evaluates a multivariate small area estimator on the basis of a joint mixed model in which a small area proportion and the mean of a continuous variable are estimated simultaneously. Using this method, the author “borrows strength” across response variables. The author carried out a design-based simulation study to evaluate the approach, in which the indicators under study are income and a binary monetary poverty indicator. The author found that the multivariate approach produces more efficient small area estimates than does the univariate modeling approach. The method can be extended to a large variety of indicators on the basis of social surveys.
Citation: Sociological Methodology
PubDate: 2023-05-11T10:51:17Z
DOI: 10.1177/00811750231169726
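A heavily stylized sketch of why a multivariate estimator can borrow strength across outcomes, not the joint mixed model used in the article: direct area estimates of a mean and a proportion are shrunk jointly using an assumed between-area covariance, so the cross-outcome correlation lets each estimate inform the other. All variance components are invented and treated as known.

```python
# Heavily stylized sketch: a bivariate shrinkage (Fay-Herriot-style) predictor
# for (mean income, poverty rate) per area, with assumed covariance components.
import numpy as np

rng = np.random.default_rng(6)
n_areas = 40
between_cov = np.array([[4.0, -0.06], [-0.06, 0.002]])    # assumed between-area covariance
samp_cov = np.array([[2.0, -0.01], [-0.01, 0.003]])       # assumed sampling covariance

truth = rng.multivariate_normal([20.0, 0.15], between_cov, n_areas)   # (income, poverty)
direct = truth + rng.multivariate_normal([0.0, 0.0], samp_cov, n_areas)

overall = direct.mean(axis=0)                              # synthetic part (no covariates here)
gain = between_cov @ np.linalg.inv(between_cov + samp_cov) # bivariate shrinkage matrix
blup = overall + (direct - overall) @ gain.T               # joint predictor per area

# The off-diagonal terms of `gain` are what let the income estimate inform the
# poverty rate (and vice versa) in each area.
print("MSE of direct estimates:", ((direct - truth) ** 2).mean(axis=0).round(4))
print("MSE of joint shrinkage: ", ((blup - truth) ** 2).mean(axis=0).round(4))
```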
Authors: Gilbert Ritschard, Tim F. Liao, Emanuela Struffolino
Abstract: Sociological Methodology, Ahead of Print. Multidomain/multichannel sequence analysis has become widely used in social science research to uncover the underlying relationships between two or more observed trajectories in parallel. For example, life-course researchers use multidomain sequence analysis to study the parallel unfolding of multiple life-course domains. In this article, the authors conduct a critical review of the approaches most used in multidomain sequence analysis. The parallel unfolding of trajectories in multiple domains is typically analyzed by building a joint multidomain typology and by examining how domain-specific sequence patterns combine with one another within the multidomain groups. The authors identify four strategies to construct the joint multidomain typology: proceeding independently of domain costs and distances between domain sequences, deriving multidomain costs from domain costs, deriving distances between multidomain sequences from within-domain distances, and combining typologies constructed for each domain. The second and third strategies are prevalent in the literature and typically proceed additively. The authors show that these additive procedures assume between-domain independence, and they make explicit the constraints these procedures impose on between-multidomain costs and distances. Regarding the fourth strategy, the authors propose a merging algorithm to avoid scarce combined types. As regards the first strategy, the authors demonstrate, with a real example based on data from the Swiss Household Panel, that using edit distances with data-driven costs at the multidomain level (i.e., independent of domain costs) remains easily manageable with more than 200 different multidomain combined states. In addition, the authors introduce strategies to enhance visualization by types and domains.
Citation: Sociological Methodology
PubDate: 2023-04-25T01:16:45Z
DOI: 10.1177/00811750231163833
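A minimal sketch of the first strategy (analysis on combined multidomain states), assuming toy two-domain sequences: domain states are concatenated into an expanded alphabet and distances are computed directly on the combined sequences. The simple position-wise distance below stands in for the edit distances with data-driven costs discussed in the article.

```python
# Hypothetical sketch of the "expanded alphabet" strategy: build combined
# multidomain states from two domain sequences (e.g., work and family) and
# compute distances directly on the combined sequences.
import numpy as np

rng = np.random.default_rng(7)
n, t = 50, 20
work   = rng.choice(["employed", "unemployed"], size=(n, t))
family = rng.choice(["single", "partnered", "parent"], size=(n, t))

# Combined multidomain states, e.g. "employed+parent" (alphabet of 2 x 3 = 6 states).
combined = np.char.add(np.char.add(work, "+"), family)

# Simple position-wise (Hamming) distance at the multidomain level.
dist = (combined[:, None, :] != combined[None, :, :]).sum(axis=2)
print("number of distinct combined states:", np.unique(combined).size)
print("distance matrix shape:", dist.shape)
```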
Authors: Lisa Avery, Michael Rotondi
Abstract: Sociological Methodology, Ahead of Print. Respondent-driven sampling (RDS) is used to measure trait or disease prevalence in populations that are difficult to reach and often marginalized. The authors evaluated the performance of RDS estimators under varying conditions of trait prevalence, homophily, and relative activity. They used large simulated networks (N = 20,000) derived from real-world RDS degree reports and an empirical Facebook network (N = 22,470) to evaluate estimators of binary and categorical trait prevalence. Variability in prevalence estimates is higher when network degree is drawn from real-world samples than from the commonly assumed Poisson distribution, resulting in lower coverage rates. Newer estimators perform well when the sample is a substantive proportion of the population, but bias is present when the population size is unknown. The choice of preferred RDS estimator needs to be study specific, considering both statistical properties and knowledge of the population under study.
Citation: Sociological Methodology
PubDate: 2023-04-21T11:14:00Z
DOI: 10.1177/00811750231163832
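For reference, a sketch of one commonly used RDS estimator, the Volz-Heckathorn (RDS-II) inverse-degree-weighted prevalence estimator, on invented data. Because the toy data are not generated by an actual recruitment process, the naive and weighted estimates are similar here; the point is only the estimator's form, not the comparisons conducted in the article.

```python
# Hypothetical sketch of the Volz-Heckathorn (RDS-II) prevalence estimator,
# which weights each respondent by the inverse of their reported network degree.
import numpy as np

rng = np.random.default_rng(8)
degree = rng.integers(1, 30, size=500)            # toy reported degrees
trait = rng.random(500) < 0.2                     # toy binary trait indicator

weights = 1.0 / degree
rds_ii = (weights * trait).sum() / weights.sum()  # inverse-degree weighted prevalence
naive = trait.mean()

print(f"naive sample proportion: {naive:.3f}   RDS-II estimate: {rds_ii:.3f}")
```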
Authors: Jackelyn Hwang, Nikhil Naik
Abstract: Sociological Methodology, Ahead of Print. Analysis of neighborhood environments is important for understanding inequality. Few studies, however, use direct measures of the visible characteristics of neighborhood conditions, despite their theorized importance in shaping individual and community well-being, because collecting data on the physical conditions of places across neighborhoods and cities and over time has required extensive time and labor. The authors introduce systematic social observation at scale (SSO@S), a pipeline for using visual data, crowdsourcing, and computer vision to identify visible characteristics of neighborhoods at a large scale. The authors implement SSO@S on millions of street-level images across three physically distinct cities—Boston, Detroit, and Los Angeles—from 2007 to 2020 to identify trash across space and over time. The authors evaluate the extent to which this approach can be used to assist with systematic coding of street-level imagery through cross-validation and out-of-sample validation, class-activation mapping, and comparisons with other sources of observed neighborhood characteristics. The SSO@S approach produces estimates with high reliability that correlate with some expected demographic characteristics but not others, depending on the city. The authors conclude with an assessment of this approach for measuring visible characteristics of neighborhoods and the implications for methods and research.
Citation: Sociological Methodology
PubDate: 2023-04-10T07:09:21Z
DOI: 10.1177/00811750231160781
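A generic sketch of the computer vision step, not the authors' SSO@S pipeline: a pretrained CNN is fine-tuned to flag images containing trash, assuming a hypothetical street_images/ directory of crowdsource-labeled images laid out for torchvision's ImageFolder.

```python
# Hypothetical sketch (not the authors' SSO@S pipeline): fine-tune a pretrained
# CNN to classify street-level images as trash vs. no trash, assuming labeled
# images in street_images/trash/... and street_images/no_trash/...
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("street_images", transform=transform)  # assumed path
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)           # binary output head
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                                   # short fine-tuning loop
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```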
Authors: Ethan Fosse, Christopher Winship
Abstract: Sociological Methodology, Ahead of Print. In a widely influential essay, Ryder argued that to understand social change, researchers should compare cohort careers, contrasting how different cohorts change over the life cycle with respect to some outcome. Ryder, however, provided few technical details on how to actually conduct a cohort analysis. In this article, the authors develop a framework for analyzing temporally structured data grounded in the construction, comparison, and decomposition of cohort careers. The authors begin by illustrating how one can analyze age-period-cohort (APC) data by constructing graphs of cohort careers. Although a useful starting point, the major problem with this approach is that the graphs are typically of sufficient complexity that it can be difficult, if not impossible, to discern the underlying trends and patterns in the data. To provide a more useful foundation for cohort analysis, the authors therefore introduce three distinct improvements over the purely graphical approach. First, they provide a mathematical definition of a cohort career, demonstrating how the underlying parameters of interest can be estimated using a reparameterized version of the conventional APC model. The authors call this the life cycle and social change (LC-SC) model. Second, they contrast the proposed model with two alternative three-factor APC models and all logically possible two-factor models, showing that none of these other models are adequate for fully representing Ryder’s ideas. Third, the authors present the article’s major accomplishment: using the LC-SC model, they show how a collection of cohort careers can be decomposed into just four basic components: a curve representing an overall intracohort trend (or life cycle change); a curve representing an overall intercohort trend (or social change); a set of common cross-period temporal fluctuations that permit variability across cohort careers; and, finally, a set of terms representing cell-specific heterogeneity (or, equivalently, interactions among age, period, and/or cohort). As the authors demonstrate, these parts can be reassembled into simpler versions of cohort careers, revealing underlying trends and patterns that may not be evident otherwise. The authors illustrate this approach by analyzing trends in political party strength in the General Social Survey.
Citation: Sociological Methodology
PubDate: 2023-03-29T06:01:17Z
DOI: 10.1177/00811750231151949
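A heavily stylized, purely descriptive sketch of the four components named above, not the LC-SC model itself: it ignores the APC identification problem that the reparameterization is designed to address and simply reads an age profile, cohort means, period deviations, and a remainder off a toy age-by-period table.

```python
# Heavily stylized sketch (not the authors' LC-SC estimator): a descriptive
# decomposition of an age-by-period table into a life cycle (age) profile,
# an intercohort trend, period fluctuations, and a cell-specific remainder.
import numpy as np

rng = np.random.default_rng(9)
ages, periods = np.arange(5), np.arange(8)
A, P = np.meshgrid(ages, periods, indexing="ij")
C = P - A                                             # cohort index

# Toy outcome: life cycle curve + change across cohorts + period shocks + noise.
y = (0.5 * A + 0.2 * C
     + rng.normal(0, 0.3, size=len(periods))[P]
     + rng.normal(0, 0.1, A.shape))

grand = y.mean()
life_cycle = y.mean(axis=1) - grand                   # intracohort (age) component
cohort_means = np.array([y[C == c].mean() for c in np.unique(C)])
social_change = cohort_means - grand                  # intercohort component
period_fluct = y.mean(axis=0) - grand                 # common cross-period fluctuations
remainder = y - (grand + life_cycle[:, None]
                 + social_change[np.searchsorted(np.unique(C), C)]
                 + period_fluct[None, :])             # cell-specific heterogeneity

print("life cycle profile:", life_cycle.round(2))
print("intercohort trend: ", social_change.round(2))
```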