- Comparing the Incomparable? Issues of Lacking Common Support, Functional-Form Misspecification, and Insufficient Sample Size in Decompositions
Authors: Maik Hamjediers, Maximilian Sprengholz
Abstract: Sociological Methodology, Ahead of Print. Decompositions make it possible to investigate whether gaps between groups in certain outcomes would remain if groups had comparable characteristics. In practice, however, such counterfactual comparability is difficult to establish in the presence of lacking common support, functional-form misspecification, and insufficient sample size. In this article, the authors show how decompositions can be undermined by these three interrelated issues by comparing the results of a regression-based Kitagawa-Blinder-Oaxaca decomposition and matching decompositions applied to simulated and real-world data. The results show that matching decompositions are robust to issues of common support and functional-form misspecification but demand a large number of observations. Kitagawa-Blinder-Oaxaca decompositions provide consistent estimates even for smaller samples but require assumptions for model specification and, when common support is lacking, for model-based extrapolation. The authors recommend first using a matching approach in any decomposition to assess potential problems of common support and misspecification.
Citation: Sociological Methodology. PubDate: 2023-05-20T12:31:22Z. DOI: 10.1177/00811750231169729
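For readers unfamiliar with the regression-based decomposition named in this abstract, its standard twofold form can be written as follows; this is a textbook rendering with group B supplying the reference coefficients, not necessarily the specification used in the article:

    \bar{Y}_A - \bar{Y}_B
      = (\bar{X}_A - \bar{X}_B)^{\top}\hat{\beta}_B      % part explained by characteristics
      + \bar{X}_A^{\top}(\hat{\beta}_A - \hat{\beta}_B)  % unexplained (coefficient) part

where \hat{\beta}_g is estimated from a group-specific regression of the outcome on the characteristics X. Lacking common support and functional-form misspecification both threaten the model-based extrapolation embedded in these regressions, which is the vulnerability the article examines.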
- Multivariate Small Area Estimation of Social Indicators: The Case of Continuous and Binary Variables
Authors: Angelo Moretti
Abstract: Sociological Methodology, Ahead of Print. Large-scale sample surveys are not designed to produce reliable estimates for small areas. Here, small area estimation methods can be applied to estimate population parameters of target variables at detailed geographic scales. Small area estimation for noncontinuous variables is a topic of great interest in the social sciences, where such variables are common. Generalized linear mixed models are widely adopted in the literature. Interestingly, the small area estimation literature shows that multivariate small area estimators, in which correlations among outcome variables are taken into account, produce more efficient estimates than do traditional univariate techniques. In this article, the author evaluates a multivariate small area estimator on the basis of a joint mixed model in which a small area proportion and the mean of a continuous variable are estimated simultaneously. Using this method, the author “borrows strength” across response variables. The author carried out a design-based simulation study to evaluate the approach, in which the indicators under study are income and a binary monetary-poverty indicator. The author found that the multivariate approach produces more efficient small area estimates than does the univariate modeling approach. The method can be extended to a large variety of indicators on the basis of social surveys.
Citation: Sociological Methodology. PubDate: 2023-05-11T10:51:17Z. DOI: 10.1177/00811750231169726
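As a rough illustration of the kind of joint mixed model described here (an assumed generic specification, not necessarily the author's exact model), one might write, for unit j in small area i,

    y_{1ij} = x_{ij}^{\top}\beta_1 + u_{1i} + e_{ij}                                              % continuous outcome, e.g., income
    \Pr(y_{2ij} = 1 \mid u_{2i}) = \mathrm{logit}^{-1}\bigl(x_{ij}^{\top}\beta_2 + u_{2i}\bigr)   % binary outcome, e.g., poverty
    (u_{1i}, u_{2i})^{\top} \sim N(0, \Sigma_u)                                                   % correlated area effects

where the borrowing of strength across responses enters through the off-diagonal element of \Sigma_u linking the two sets of area effects.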
- Strategies for Multidomain Sequence Analysis in Social Research
Authors: Gilbert Ritschard, Tim F. Liao, Emanuela Struffolino
Abstract: Sociological Methodology, Ahead of Print. Multidomain/multichannel sequence analysis has become widely used in social science research to uncover the underlying relationships between two or more observed trajectories in parallel. For example, life-course researchers use multidomain sequence analysis to study the parallel unfolding of multiple life-course domains. In this article, the authors conduct a critical review of the approaches most used in multidomain sequence analysis. The parallel unfolding of trajectories in multiple domains is typically analyzed by building a joint multidomain typology and by examining how domain-specific sequence patterns combine with one another within the multidomain groups. The authors identify four strategies to construct the joint multidomain typology: proceeding independently of domain costs and distances between domain sequences, deriving multidomain costs from domain costs, deriving distances between multidomain sequences from within-domain distances, and combining typologies constructed for each domain. The second and third strategies are prevalent in the literature and typically proceed additively. The authors show that these additive procedures assume between-domain independence, and they make explicit the constraints these procedures impose on between-multidomain costs and distances. Regarding the fourth strategy, the authors propose a merging algorithm to avoid scarce combined types. As regards the first strategy, the authors demonstrate, with a real example based on data from the Swiss Household Panel, that using edit distances with data-driven costs at the multidomain level (i.e., independent of domain costs) remains easily manageable with more than 200 different multidomain combined states. In addition, the authors introduce strategies to enhance visualization by types and domains.
Citation: Sociological Methodology. PubDate: 2023-04-25T01:16:45Z. DOI: 10.1177/00811750231163833
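The additive logic behind the second and third strategies can be stated compactly; a common form (an illustrative rendering, not a formula taken from the article) derives the multidomain distance from within-domain distances:

    d_{MD}(x, y) = \sum_{k=1}^{K} w_k \, d_k\bigl(x^{(k)}, y^{(k)}\bigr)   % weighted sum of domain-specific distances

where x^{(k)} is the trajectory of case x in domain k and d_k a domain-specific (edit) distance. The article's point is that such additive combinations implicitly assume between-domain independence, a constraint the first and fourth strategies avoid.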
- Evaluation of Respondent-Driven Sampling Prevalence Estimators Using Real-World Reported Network Degree
Authors: Lisa Avery, Michael Rotondi
Abstract: Sociological Methodology, Ahead of Print. Respondent-driven sampling (RDS) is used to measure trait or disease prevalence in populations that are difficult to reach and often marginalized. The authors evaluated the performance of RDS estimators under varying conditions of trait prevalence, homophily, and relative activity. They used large simulated networks (N = 20,000) derived from real-world RDS degree reports and an empirical Facebook network (N = 22,470) to evaluate estimators of binary and categorical trait prevalence. Variability in prevalence estimates is higher when network degree is drawn from real-world samples than from the commonly assumed Poisson distribution, resulting in lower coverage rates. Newer estimators perform well when the sample is a substantive proportion of the population, but bias is present when the population size is unknown. The choice of preferred RDS estimator needs to be study specific, considering both statistical properties and knowledge of the population under study.
Citation: Sociological Methodology. PubDate: 2023-04-21T11:14:00Z. DOI: 10.1177/00811750231163832
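One widely used design-based estimator in this literature, often labeled RDS-II, weights respondents by the inverse of their reported degree. A minimal sketch in Python, with illustrative inputs rather than the study's data or its full set of compared estimators:

    # Inverse-degree-weighted (RDS-II-style) prevalence estimator.
    # trait: list of 0/1 indicators; degree: self-reported network degrees.
    def rds_prevalence(trait, degree):
        weights = [1.0 / d for d in degree]              # down-weight high-degree respondents
        return sum(w * y for w, y in zip(weights, trait)) / sum(weights)

    # Example: the low-degree respondent counts for more, because RDS oversamples
    # well-connected people.
    estimate = rds_prevalence([1, 0, 1], [10, 2, 5])     # = 0.375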
- Systematic Social Observation at Scale: Using Crowdsourcing and Computer Vision to Measure Visible Neighborhood Conditions
Authors: Jackelyn Hwang, Nikhil Naik
Abstract: Sociological Methodology, Ahead of Print. Analysis of neighborhood environments is important for understanding inequality. Few studies, however, use direct measures of the visible characteristics of neighborhood conditions, despite their theorized importance in shaping individual and community well-being, because collecting data on the physical conditions of places across neighborhoods and cities and over time has required extensive time and labor. The authors introduce systematic social observation at scale (SSO@S), a pipeline for using visual data, crowdsourcing, and computer vision to identify visible characteristics of neighborhoods at a large scale. The authors implement SSO@S on millions of street-level images across three physically distinct cities (Boston, Detroit, and Los Angeles) from 2007 to 2020 to identify trash across space and over time. The authors evaluate the extent to which this approach can be used to assist with systematic coding of street-level imagery through cross-validation and out-of-sample validation, class-activation mapping, and comparisons with other sources of observed neighborhood characteristics. The SSO@S approach produces estimates with high reliability that correlate with some expected demographic characteristics but not others, depending on the city. The authors conclude with an assessment of this approach for measuring visible characteristics of neighborhoods and the implications for methods and research.
Citation: Sociological Methodology. PubDate: 2023-04-10T07:09:21Z. DOI: 10.1177/00811750231160781
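The computer-vision step of such a pipeline typically amounts to fine-tuning a pretrained image classifier on the crowdsourced labels. A hedged sketch under assumed choices (ResNet-18 backbone, a binary trash / no-trash label, Adam optimizer), not the authors' implementation:

    import torch
    import torch.nn as nn
    from torchvision import models

    # Fine-tune a pretrained classifier on crowdsourced street-level labels.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)        # 2 classes: no trash / trash

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(images, labels):
        """images: (N, 3, H, W) float tensor; labels: (N,) long tensor of human codes."""
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(images), labels)          # cross-entropy against crowdsourced labels
        loss.backward()
        optimizer.step()
        return loss.item()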
- The Anatomy of Cohort Analysis: Decomposing Comparative Cohort Careers
Authors: Ethan Fosse, Christopher Winship
Abstract: Sociological Methodology, Ahead of Print. In a widely influential essay, Ryder argued that to understand social change, researchers should compare cohort careers, contrasting how different cohorts change over the life cycle with respect to some outcome. Ryder, however, provided few technical details on how to actually conduct a cohort analysis. In this article, the authors develop a framework for analyzing temporally structured data grounded in the construction, comparison, and decomposition of cohort careers. The authors begin by illustrating how one can analyze age-period-cohort (APC) data by constructing graphs of cohort careers. Although this is a useful starting point, the major problem with the approach is that the graphs are typically of sufficient complexity that it can be difficult, if not impossible, to discern the underlying trends and patterns in the data. To provide a more useful foundation for cohort analysis, the authors therefore introduce three distinct improvements over the purely graphical approach. First, they provide a mathematical definition of a cohort career, demonstrating how the underlying parameters of interest can be estimated using a reparameterized version of the conventional APC model. The authors call this the life cycle and social change (LC-SC) model. Second, they contrast the proposed model with two alternative three-factor APC models and all logically possible two-factor models, showing that none of these other models are adequate for fully representing Ryder’s ideas. Third, the authors present the article’s major accomplishment: using the LC-SC model, they show how a collection of cohort careers can be decomposed into just four basic components: a curve representing an overall intracohort trend (or life cycle change); a curve representing an overall intercohort trend (or social change); a set of common cross-period temporal fluctuations that permit variability across cohort careers; and, finally, a set of terms representing cell-specific heterogeneity (or, equivalently, interactions among age, period, and/or cohort). As the authors demonstrate, these parts can be reassembled into simpler versions of cohort careers, revealing underlying trends and patterns that may not be evident otherwise. The authors illustrate this approach by analyzing trends in political party strength in the General Social Survey.
Citation: Sociological Methodology. PubDate: 2023-03-29T06:01:17Z. DOI: 10.1177/00811750231151949
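Schematically, the four components listed above can be arranged as follows (a schematic rendering only; the article's LC-SC model is a constrained reparameterization of the APC model whose identifying assumptions are not reproduced here):

    y_{a,p} = \mu + f(a) + g(c) + \pi_p + \delta_{a,p}, \qquad c = p - a

with f the overall intracohort (life-cycle) trend, g the overall intercohort (social-change) trend, \pi_p the common cross-period fluctuations, and \delta_{a,p} the cell-specific heterogeneity (age-period-cohort interactions).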
- Modeling Partitions of Individuals
Authors: Marion Hoffman, Per Block, Tom A. B. Snijders
First page: 1
Abstract: Sociological Methodology, Ahead of Print. Despite the central role of self-assembled groups in animal and human societies, statistical tools to explain their composition are limited. The authors introduce a statistical framework for cross-sectional observations of groups with exclusive membership to illuminate the social and organizational mechanisms that bring people together. Drawing from stochastic models for networks and partitions, the proposed framework introduces an exponential family of distributions for partitions. The authors derive its main mathematical properties and suggest strategies to specify and estimate such models. A case study on hackathon events applies the developed framework to the study of mechanisms underlying the formation of self-assembled project teams.
Citation: Sociological Methodology. PubDate: 2023-01-30T12:39:48Z. DOI: 10.1177/00811750221145166
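The generic form of an exponential family of distributions for partitions is shown below; the specific partition statistics and estimation strategy used by the authors are given in the article, not here:

    \Pr(P = p \mid \theta) = \frac{\exp\{\sum_k \theta_k \, s_k(p)\}}{\sum_{p' \in \mathcal{P}} \exp\{\sum_k \theta_k \, s_k(p')\}}

where p ranges over the admissible partitions \mathcal{P} of the individuals into exclusive groups and the s_k(p) are partition statistics (for example, counts of groups of a given size, or measures of within-group attribute similarity). As with exponential random graph models, the normalizing sum over \mathcal{P} is what makes specification and estimation nontrivial.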
- Evaluating Substitution as a Strategy for Handling U.S. Postal Service Drop Points in Self-Administered Address-Based Sampling Frame Surveys
Authors: Taylor Lewis, Joseph McMichael, Charlotte Looby
First page: 158
Abstract: Sociological Methodology, Ahead of Print. Most addresses on modern address-based sampling frames derived from the U.S. Postal Service’s Computerized Delivery Sequence file have a one-to-one relationship with a household. Some addresses, however, are associated with multiple households. These addresses are referred to as drop points, and the households therein are referred to as drop point units (DPUs). DPUs pose a challenge for self-administered surveys because no apartment number or unit designation is available, making it impossible to send targeted correspondence. The authors evaluate a method for substituting sampled DPUs with similar non-DPUs, which was implemented in the 2021 Healthy Chicago Survey alongside a concurrent survey of the originally sampled DPUs. Comparing aggregate distributions of DPUs and the non-DPU substitutes, the authors observe certain differences with respect to age, employment status, marital status, and housing tenure but no substantive differences in key health outcomes measured by the survey.
Citation: Sociological Methodology. PubDate: 2023-01-13T09:43:06Z. DOI: 10.1177/00811750221147525
- A New RCM Approach to Survival Analysis: The Conditional-Incidence-Rate Model
Authors: Kazuo Yamaguchi
First page: 42
Abstract: Sociological Methodology, Ahead of Print. This article introduces a new causal analytic method for survival analysis that retains the framework of Rubin’s causal model as an alternative to the marginal structural model (MSM). The major limitation of the MSM is a systematic bias in the effects of past treatments when the method is applied to the hazard rate analysis of nonrepeatable events in the presence of unobserved heterogeneity. This systematic bias is demonstrated in the article. The method introduced here assumes a semiparametric conditional-incidence-rate model and provides consistent estimates of the effects of present and past treatments on the conditional cumulative-incidence rate in the analysis of nonrepeatable events in the presence of unobserved heterogeneity. Unlike the MSM, which requires a sequential and cumulative use of the inverse-probability-of-treatment weighting many times for data with many time points, the new method uses the inverse-probability-of-treatment weighting only twice sequentially for estimation of the present and past treatment effects at each time of entry into treatment, and not cumulatively across different treatment entry times. Analysis of the conditional-incidence rate can also provide a more efficient parameter estimate for the treatment effect than the hazard rate model in cases where a majority of sample persons experience the event and thereby cease to be members of the risk set of the hazard rate during the period of observation. An application to an analysis of sexual initiation demonstrates that leaving home promotes sexual initiation, especially premarital sexual initiation, because it greatly increases the rate of premarital sexual initiation during the year after leaving home.
Citation: Sociological Methodology. PubDate: 2022-07-30T06:30:54Z. DOI: 10.1177/00811750221114857
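For context on the contrast drawn above, the "sequential and cumulative" weighting of the MSM refers to inverse-probability-of-treatment weights of the standard product form (a textbook expression, shown only to clarify the contrast; it is not the proposed model):

    w_i(t) = \prod_{k=0}^{t} \frac{1}{\Pr\bigl(A_{ik} = a_{ik} \mid \bar{A}_{i,k-1}, \bar{L}_{ik}\bigr)}

where A_{ik} is the treatment of person i at time k and \bar{A}, \bar{L} denote treatment and covariate histories. The conditional-incidence-rate model instead applies such weighting only twice, sequentially, at each time of entry into treatment rather than cumulatively across entry times.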
- Sparse Data Reconstruction, Missing Value and Multiple Imputation through Matrix Factorization
Authors: Nandana Sengupta, Madeleine Udell, Nathan Srebro, James Evans
First page: 72
Abstract: Sociological Methodology, Ahead of Print. Social science approaches to missing values predict avoided, unrequested, or lost information from dense data sets, typically surveys. The authors propose a matrix factorization approach to missing data imputation that (1) identifies underlying factors to model similarities across respondents and responses and (2) regularizes across factors to reduce their overinfluence for optimal data reconstruction. This approach may enable social scientists to draw new conclusions from sparse data sets with a large number of features, for example, historical or archival sources, online surveys with high attrition rates, or data sets created from Web scraping, which confound traditional imputation techniques. The authors introduce matrix factorization techniques and detail their probabilistic interpretation, and they demonstrate these techniques’ consistency with Rubin’s multiple imputation framework. The authors show, via simulations using artificial data and data from real-world subsets of the General Social Survey and National Longitudinal Survey of Youth, cases for which matrix factorization techniques may be preferred. These findings recommend the use of matrix factorization for data reconstruction in several settings, particularly when data are Boolean and categorical and when large proportions of the data are missing.
Citation: Sociological Methodology. PubDate: 2022-10-22T07:32:03Z. DOI: 10.1177/00811750221125799
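The core reconstruction step can be illustrated with a small regularized factorization fitted only to the observed cells; this is a generic numpy sketch under assumed rank and penalty settings, not the authors' estimator, and it yields a single reconstruction rather than the multiple completed data sets Rubin-style multiple imputation would require:

    import numpy as np

    def factorize_impute(X, rank=5, reg=0.1, lr=0.01, n_iter=2000, seed=0):
        """X: 2-D array with np.nan marking missing values."""
        rng = np.random.default_rng(seed)
        mask = ~np.isnan(X)                        # observed cells
        Xz = np.where(mask, X, 0.0)
        n, m = X.shape
        U = 0.1 * rng.standard_normal((n, rank))   # respondent factors
        V = 0.1 * rng.standard_normal((m, rank))   # item/response factors
        for _ in range(n_iter):
            R = mask * (U @ V.T - Xz)              # residuals on observed cells only
            U -= lr * (R @ V + reg * U)            # gradient step with L2 shrinkage
            V -= lr * (R.T @ U + reg * V)
        return np.where(mask, X, U @ V.T)          # fill only the missing cells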
- Data Quality and Recall Bias in Time-Diary Research: The Effects of Prolonged Recall Periods in Self-Administered Online Time-Use Surveys
Authors: Petrus te Braak, Theun Pieter van Tienoven, Joeri Minnen, Ignace Glorieux
First page: 115
Abstract: Sociological Methodology, Ahead of Print. Previous research has shown that a prolonged recall period is associated with lower data quality in time-diary research. In these studies, the recall period is roughly estimated on the basis of the period between the assigned diary day and the agreed collection day. Because this estimate is so rudimentary, little is known about the duration of the mean recall period and its consequences for data quality. Recent advances in online methodology now allow a better investigation of the recall period using time stamps. Using a refined indicator, the authors examine the duration of the recall period, to what extent this duration is related to socioeconomic characteristics, and how a prolonged recall period affects data quality. Using online time-diary data collected from 8,535 teachers in Belgium, the authors demonstrate that the mean recall period is less than 24 hr for most respondents, although respondents with many time constraints have extended recall periods. Additionally, a prolonged recall period indeed has negative consequences for data quality. Quality deterioration already arises several hours after an activity has been completed, much sooner than previous research has indicated.
Citation: Sociological Methodology. PubDate: 2022-10-05T12:17:19Z. DOI: 10.1177/00811750221126499
- Hyperscanning and the Future of Neurosociology
Authors: Warren TenHouten, Lorne Schussel, Maria F. Gritsch, Charles D. Kaplan
First page: 139
Abstract: Sociological Methodology, Ahead of Print. Because all aspects of social life have a mental component, sociology’s focus is not society alone but mind and society. Insofar as mind is an emergent level of brainwork, the description and measurement of mindwork amidst social interaction can be accomplished by neurometric measurement methodology. The authors’ topic, hyperscanning, involves the simultaneous recording of either hemodynamic or neuroelectric measures of brain activity in two (or more) interacting individuals. The authors consider two hyperscanning methods, functional magnetic resonance imaging and electroencephalography (EEG). Although functional magnetic resonance imaging provides excellent spatial resolution of brain-region activation, the temporal resolution of EEG is unmatched. EEG’s low spatial resolution has been overcome by low-resolution electromagnetic tomography. Hyperscanning studies show that interpersonal coordination of action includes mutual entrainment or synchronization of neural dynamics, flow of information between brains, and causal effects of one brain upon another with respect to social-signaling processes involving fairness, reciprocity, trust, competition, cooperation, and leadership.
Citation: Sociological Methodology. PubDate: 2022-10-15T12:23:43Z. DOI: 10.1177/00811750221128790
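One common way to quantify the inter-brain synchronization described above is the phase-locking value between band-limited EEG signals recorded from the two interacting participants; a minimal sketch (filtering, epoching, and artifact rejection omitted; not a pipeline taken from the article):

    import numpy as np
    from scipy.signal import hilbert

    def phase_locking_value(sig_a, sig_b):
        """sig_a, sig_b: equal-length 1-D arrays of band-limited EEG from two people."""
        phase_a = np.angle(hilbert(sig_a))         # instantaneous phase, participant A
        phase_b = np.angle(hilbert(sig_b))         # instantaneous phase, participant B
        return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))   # 1 = perfect phase locking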