Statistical Methods in Medical Research
Journal Prestige (SJR): 1.402
Citation Impact (citeScore): 2
Number of Followers: 30  
 
  Hybrid journal (can contain Open Access articles)
ISSN (Print) 0962-2802 - ISSN (Online) 1477-0334
Published by Sage Publications  [1176 journals]
  • Unit information prior for incorporating real-world evidence into
           randomized controlled trials

      Authors: Hengtao Zhang, Guosheng Yin
      Pages: 229 - 241
      Abstract: Statistical Methods in Medical Research, Volume 32, Issue 2, Page 229-241, February 2023.
      Randomized controlled trials (RCTs) are widely recognized as the gold standard for inferring treatment effects in clinical research. Recently, there has been growing interest in enhancing and complementing the results of an RCT by integrating real-world evidence from observational studies. The unit information prior (UIP) is a newly proposed technique that can effectively borrow information from multiple historical datasets. We extend this generic approach to synthesize non-randomized evidence into a current RCT. The UIP requires only summary statistics published from observational studies, which eases implementation; it also has a clear interpretation and can alleviate potential bias in the real-world evidence via weighting schemes. Extensive numerical experiments show that the UIP can improve the statistical efficiency of the treatment effect estimate for various types of outcome variables. The practical potential of the UIP approach is further illustrated with a real trial of hydroxychloroquine for treating COVID-19 patients. (A toy sketch of the borrowing idea follows this entry.)
      Citation: Statistical Methods in Medical Research
      PubDate: 2023-01-19T06:19:02Z
      DOI: 10.1177/09622802221133555
      Issue No: Vol. 32, No. 2 (2023)
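
      A minimal conjugate-normal sketch of the borrowing idea in this entry: the historical summary enters as a prior centred at the published estimate and carrying a chosen number of observations' worth of information. The function name, the known-variance assumption and the fixed number of borrowed units are simplifications for illustration, not the authors' exact UIP construction.

        # Toy unit-information-style borrowing for a normal mean.
        import numpy as np

        def uip_posterior(y_trial, sigma2, hist_mean, m_units):
            """Posterior mean/variance for a normal mean under a prior centred
            at a historical estimate worth m_units observations of information
            (prior variance sigma2 / m_units); sigma2 assumed known."""
            n = len(y_trial)
            prior_prec = m_units / sigma2      # pseudo-observations borrowed
            like_prec = n / sigma2             # information in the current RCT
            post_var = 1.0 / (prior_prec + like_prec)
            post_mean = post_var * (prior_prec * hist_mean
                                    + like_prec * np.mean(y_trial))
            return post_mean, post_var

        rng = np.random.default_rng(1)
        trial = rng.normal(0.3, 1.0, size=50)  # current trial observations
        print(uip_posterior(trial, 1.0, hist_mean=0.25, m_units=20))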
       
  • Estimation of the average treatment effect with variable selection and
           measurement error simultaneously addressed for potential confounders

      Authors: Grace Y. Yi, Li-Pang Chen
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      In the framework of causal inference, the inverse probability weighting estimation method and its variants have been commonly employed to estimate the average treatment effect. Such methods, however, are challenged by the presence of irrelevant pre-treatment variables and measurement error. Ignoring these features and naively applying the usual inverse probability weighting procedures typically yields biased inference results. In this article, we develop an inference method for estimating the average treatment effect with those features taken into account. We establish theoretical properties for the resulting estimator and carry out numerical studies to assess its finite sample performance. (The standard, uncorrected estimator is sketched after this entry for reference.)
      Citation: Statistical Methods in Medical Research
      PubDate: 2023-01-25T06:57:19Z
      DOI: 10.1177/09622802221146308
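
      For reference, a sketch of the standard (uncorrected) inverse probability weighting estimator that this entry builds on, with a logistic propensity model and Hajek-style normalisation; the simulated data and scikit-learn as the fitting tool are illustrative choices rather than anything from the paper.

        # Naive IPW estimate of the average treatment effect.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def ipw_ate(X, treat, y):
            ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
            w1, w0 = treat / ps, (1 - treat) / (1 - ps)   # inverse probability weights
            return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 3))
        treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
        y = treat + X[:, 0] + rng.normal(size=500)        # true ATE = 1
        print(ipw_ate(X, treat, y))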
       
  • Minimum sample size for developing a multivariable prediction model using
           multinomial logistic regression

      Authors: Alexander Pate, Richard D Riley, Gary S Collins, Maarten van Smeden, Ben Van Calster, Joie Ensor, Glen P Martin
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Aims: Multinomial logistic regression models allow one to predict the risk of a categorical outcome with >2 categories. When developing such a model, researchers should ensure the number of participants (n) is appropriate relative to the number of events (E_k) and the number of predictor parameters (p_k) for each category k. We propose three criteria to determine the minimum n required in light of existing criteria developed for binary outcomes. Proposed criteria: The first criterion aims to minimise model overfitting. The second aims to minimise the difference between the observed and adjusted Nagelkerke R^2. The third aims to ensure the overall risk is estimated precisely. For criterion (i), we show the sample size must be based on the anticipated Cox-Snell R^2 of the distinct ‘one-to-one’ logistic regression models corresponding to the sub-models of the multinomial logistic regression, rather than on the overall Cox-Snell R^2 of the multinomial logistic regression. Evaluation of criteria: We tested the performance of proposed criterion (i) through a simulation study and found that it resulted in the desired level of overfitting. Criteria (ii) and (iii) are natural extensions of previously proposed criteria for binary outcomes and did not require evaluation through simulation. Summary: We illustrate how to implement the sample size criteria through a worked example on developing a multinomial risk prediction model for tumour type when presented with an ovarian mass. Code is provided for the simulation and the worked example. We will embed our proposed criteria within the pmsampsize R library and Stata modules. (The binary-outcome form of criterion (i) is sketched after this entry.)
      Citation: Statistical Methods in Medical Research
      PubDate: 2023-01-20T07:18:39Z
      DOI: 10.1177/09622802231151220
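
      A sketch of the binary-outcome form of criterion (i), the closed-form minimum n of Riley et al. that targets an expected shrinkage factor S, which this entry extends to the pairwise sub-models of a multinomial model. The formula is the published binary criterion as I understand it, and the example numbers are invented.

        # Minimum n so that expected shrinkage >= S (binary criterion (i)).
        import math

        def min_n_criterion_i(p, r2_cs, shrinkage=0.9):
            """p predictor parameters, anticipated Cox-Snell R^2, target S."""
            return p / ((shrinkage - 1) * math.log(1 - r2_cs / shrinkage))

        # e.g. 15 parameters and anticipated Cox-Snell R^2 = 0.1 per sub-model:
        print(math.ceil(min_n_criterion_i(15, 0.10)))   # -> 1274 participants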
       
  • Divided-and-combined omnibus test for genetic association analysis with
           high-dimensional data

      Authors: Jinjuan Wang, Zhenzhen Jiang, Hongping Guo, Zhengbang Li
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Advances in biological technology enable researchers to obtain huge amounts of genetic and genomic data, whose dimensions are often quite high for both phenotypes and variants. Testing the association of variants with multiple phenotypes has been a hot topic in recent years. Traditional single-phenotype, multiple-variant analysis has to be adjusted for multiple testing and thus suffers from substantial power loss, because it ignores the correlation across phenotypes. The similarity-based method, which uses the trace of the product of two similarity matrices as a test statistic, has emerged as a useful tool to handle this problem. However, it loses power when the correlation within the multiple phenotypes is moderate or strong, because some signals, represented by the eigenvalues of the phenotypic similarity matrix, are masked by others. We propose a divided-and-combined omnibus test to address this drawback of the similarity-based method. Following the divide-and-combine strategy, we first divide the signals into two groups at a series of cut points according to the eigenvalues of the phenotypic similarity matrix, and then combine the analysis results via the Cauchy combination method to reach a final statistic. Extensive simulations and an application to pig data demonstrate that the proposed statistic is much more powerful and robust than the original test under most of the considered scenarios, with power increases sometimes exceeding 0.6. The divided-and-combined omnibus test facilitates genetic association analysis with high-dimensional data and achieves much higher power than the existing similarity-based method; in fact, it can be used whenever an association analysis between two multivariate variables is needed. (The Cauchy combination step is sketched after this entry.)
      Citation: Statistical Methods in Medical Research
      PubDate: 2023-01-18T07:29:51Z
      DOI: 10.1177/09622802231151204
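
      A generic sketch of the Cauchy combination rule used above to merge the p-values from the series of eigenvalue cut points into one omnibus p-value; equal weights and the example p-values are assumptions of the sketch.

        # ACAT-style Cauchy combination of p-values.
        import numpy as np

        def cauchy_combine(pvals, weights=None):
            p = np.asarray(pvals, dtype=float)
            w = np.full(p.size, 1.0 / p.size) if weights is None else np.asarray(weights)
            t = np.sum(w * np.tan((0.5 - p) * np.pi))   # heavy-tailed transform
            return 0.5 - np.arctan(t) / np.pi           # back to the p-value scale

        print(cauchy_combine([0.01, 0.20, 0.47]))       # ~0.029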
       
  • Robust weights that optimally balance confounders for estimating marginal
           hazard ratios

      Authors: Michele Santacatterina
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Covariate balance is crucial for obtaining unbiased estimates of treatment effects in observational studies. Methods that target covariate balance have been successfully proposed and widely applied to estimate treatment effects on continuous outcomes. However, in many medical and epidemiological applications, the interest lies in estimating treatment effects on time-to-event outcomes. With this type of data, one of the most common estimands of interest is the marginal hazard ratio of the Cox proportional hazards model. In this article, we start by presenting robust orthogonality weights, a set of weights obtained by solving a constrained quadratic optimization problem that maximizes precision while constraining covariate balance, defined as the correlation between confounders and treatment. By doing so, robust orthogonality weights optimally handle both binary and continuous treatments. We then evaluate the performance of the proposed weights in estimating marginal hazard ratios of binary and continuous treatments with time-to-event outcomes in a simulation study. We finally apply robust orthogonality weights to evaluate the effect of hormone therapy on time to coronary heart disease and the effect of red meat consumption on time to colon cancer among 24,069 postmenopausal women enrolled in the Women’s Health Initiative observational study. (A simplified version of the optimization is sketched after this entry.)
      Citation: Statistical Methods in Medical Research
      PubDate: 2023-01-12T08:21:00Z
      DOI: 10.1177/09622802221146310
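
      A simplified sketch of the kind of constrained quadratic problem described above: minimise the dispersion of the weights (a precision proxy) subject to a cap on each weighted confounder-treatment correlation. The delta tolerance, the standardisation and the SLSQP solver are choices made for this sketch; the paper's exact objective and constraints differ.

        import numpy as np
        from scipy.optimize import minimize

        def balance_weights(X, treat, delta=0.01):
            n = X.shape[0]
            Xs = (X - X.mean(0)) / X.std(0)             # standardised confounders
            ts = (treat - treat.mean()) / treat.std()   # standardised treatment

            def dispersion(w):                          # favour small, even weights
                return np.sum((w - 1.0 / n) ** 2)

            cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
                    {"type": "ineq",                    # |weighted corr| <= delta
                     "fun": lambda w: delta - np.abs(Xs.T @ (w * ts))}]
            res = minimize(dispersion, np.full(n, 1.0 / n), method="SLSQP",
                           bounds=[(0.0, None)] * n, constraints=cons)
            return res.x

        rng = np.random.default_rng(2)
        X = rng.normal(size=(200, 2))
        treat = 0.5 * X[:, 0] + rng.normal(size=200)    # continuous treatment
        w = balance_weights(X, treat)
        print(round(w.sum(), 3), round(float(w.max()), 4))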
       
  • Taking a chance: How likely am I to receive my preferred treatment in a
           clinical trial?

      Authors: Stephen D Walter, Ondrej Blaha, Denise Esserman
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Researchers should ideally conduct clinical trials under a presumption of clinical equipoise, but in practice trial patients will often prefer one or the other of the treatments being compared. Receiving an unblinded preferred treatment may affect the study outcome, possibly beneficially, but receiving a non-preferred treatment may induce ‘reluctant acquiescence’ and poorer outcomes. Even in blinded trials, patients’ primary motivation to enrol may be the chance of receiving a desirable experimental treatment that is otherwise unavailable. Study designs with a higher probability of receiving a preferred treatment (denoted ‘concordance’) will be attractive to potential participants, and to investigators, because they may improve recruitment and hence enhance study efficiency. It is therefore useful to consider the concordance rates associated with various study designs. We consider this question with a focus on comparing the standard, randomised, two-arm, parallel-group design with the two-stage randomised patient preference design and Zelen designs; we also mention the fully randomised and partially randomised patient preference designs. For each of these designs, we evaluate the concordance rate as a function of the proportions randomised to the alternative treatments, the distribution of preferences over treatments, and (for the Zelen designs) the proportion of patients who consent to receive their assigned treatment. We also examine the equity of each design, which we define as the similarity between the concordance rates for participants with different treatment preferences. Finally, we contrast each of the alternative designs with the standard design in terms of gain in concordance and change in equity. (The concordance calculation for the standard design is sketched after this entry.)
      Citation: Statistical Methods in Medical Research
      PubDate: 2023-01-11T07:58:35Z
      DOI: 10.1177/09622802221146305
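
      The concordance rate of the standard parallel-group design reduces to simple arithmetic: the preference distribution dotted with the allocation probabilities. The preference split below is invented for illustration; unequal allocation raises overall concordance but, as the entry notes, worsens equity across preference groups.

        def concordance(pref, alloc):
            """P(receive preferred arm) = sum_t P(prefer t) * P(allocated to t)."""
            return sum(p * a for p, a in zip(pref, alloc))

        pref = [0.7, 0.3]                      # 70% prefer the experimental arm
        print(concordance(pref, [0.5, 0.5]))   # 1:1 allocation -> 0.50
        print(concordance(pref, [2/3, 1/3]))   # 2:1 allocation -> ~0.57, but those
                                               # preferring control drop to 1/3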
       
  • Flexible modeling of multiple nonlinear longitudinal trajectories with
           censored and non-ignorable missing outcomes

      Authors: Tsung-I Lin, Wan-Lun Wang
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Multivariate nonlinear mixed-effects models (MNLMMs) have become a promising tool for analyzing multi-outcome longitudinal data that follow nonlinear trajectory patterns. However, such an analysis can be challenging due to censoring induced by detection limits of the quantification assay, or due to non-response occurring when participants miss scheduled visits intermittently or discontinue participation. This article proposes an extension of the MNLMM approach, called the MNLMM-CM, that accounts for censored and non-ignorable missing responses simultaneously. The non-ignorable missingness is described by the selection-modeling factorization to tackle the missing-not-at-random mechanism. A Monte Carlo expectation conditional maximization algorithm coupled with the first-order Taylor approximation is developed for parameter estimation. Techniques for the calculation of standard errors of fixed effects, estimation of unobservable random effects, imputation of censored and missing responses, and prediction of future values are also provided. The proposed methodology is motivated and illustrated by the analysis of a clinical HIV/AIDS dataset with censored RNA viral loads and missing CD4 and CD8 cell counts. The superiority of our method in providing more adequate estimation is validated by a simulation study.
      Citation: Statistical Methods in Medical Research
      PubDate: 2023-01-10T05:33:12Z
      DOI: 10.1177/09622802221146312
       
  • Saddlepoint approximation p-values of weighted log-rank tests based
           on censored clustered data under block Efron’s biased-coin design

      Authors: Haidy A. Newer
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Clustered survival data frequently occur in biomedical research fields and clinical trials. Log-rank tests are used to compare two independent samples of clustered data. We use the block Efron’s biased-coin design to assign patients to treatment groups in a clinical trial, forcing a sequential experiment to be balanced. In this article, the p-values of the null permutation distribution of log-rank tests for clustered data are approximated via the double saddlepoint approximation method. Comprehensive numerical studies are carried out to assess the accuracy of the saddlepoint approximation, which demonstrates much greater accuracy than the asymptotic normal approximation.
      Citation: Statistical Methods in Medical Research
      PubDate: 2023-01-10T05:31:42Z
      DOI: 10.1177/09622802221143498
       
  • A novel power prior approach for borrowing historical control data in
           clinical trials

      Authors: Yaru Shi, Wen Li, Guanghan (Frank) Liu
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      There has been increased interest in borrowing information from historical control data to improve the statistical power for hypothesis testing, thereby reducing the required sample sizes of clinical trials. To account for the heterogeneity between the historical and current trials, power priors are often considered to discount the information borrowed from the historical data. However, it can be challenging to choose a fixed power prior parameter in practice. The modified power prior approach, which places an initial prior on a random power parameter to control the amount of historical information borrowed, may not directly account for heterogeneity between the trials. In this paper, we propose a novel approach for choosing the power parameter based on direct measures of the distributional difference between historical and current control data under normal assumptions. Simulations are conducted to investigate the performance of the proposed approach compared with current approaches (e.g. commensurate prior, meta-analytic-predictive, and modified power prior). The results show that the proposed power prior improves the study power while controlling the type I error within a tolerable limit when the distribution of the historical control data is similar to that of the current control data. The method is developed for both superiority and non-inferiority trials and is illustrated with an example from vaccine clinical trials. (A conjugate-normal illustration of power prior discounting follows this entry.)
      Citation: Statistical Methods in Medical Research
      PubDate: 2023-01-05T07:17:25Z
      DOI: 10.1177/09622802221146309
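
      A conjugate-normal illustration of the power prior idea in this entry: raising the historical control likelihood to delta in (0, 1] makes it contribute delta * n0 observations' worth of information. Known variance and a flat initial prior are assumed, and delta is passed in directly, whereas choosing it from distributional differences is the paper's actual contribution.

        import numpy as np

        def power_prior_posterior(y_cur, y_hist, delta, sigma2=1.0):
            n, n0 = len(y_cur), len(y_hist)
            prec = (n + delta * n0) / sigma2              # posterior precision
            mean = (np.sum(y_cur) + delta * np.sum(y_hist)) / (n + delta * n0)
            return mean, 1.0 / prec

        rng = np.random.default_rng(3)
        cur = rng.normal(0.0, 1.0, 60)                    # current controls
        hist = rng.normal(0.1, 1.0, 200)                  # historical controls
        for d in (0.0, 0.5, 1.0):                         # none / partial / full borrowing
            print(d, power_prior_posterior(cur, hist, d))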
       
  • Intervention treatment distributions that depend on the observed treatment
           process and model double robustness in causal survival analysis

      Authors: Lan Wen, Julia L. Marcus, Jessica G. Young
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      The generalized g-formula can be used to estimate the probability of survival under a sustained treatment strategy. When treatment strategies are deterministic, estimators derived from the so-called efficient influence function (EIF) for the g-formula will be doubly robust to model misspecification. In recent years, several practical applications have motivated estimation of the g-formula under non-deterministic treatment strategies where treatment assignment at each time point depends on the observed treatment process. In this case, EIF-based estimators may or may not be doubly robust. In this paper, we provide sufficient conditions to ensure the existence of doubly robust estimators for intervention treatment distributions that depend on the observed treatment process for point treatment interventions, and we give a class of intervention treatment distributions, dependent on the observed treatment process, that guarantees doubly and multiply robust estimators in longitudinal settings. Motivated by an application to pre-exposure prophylaxis (PrEP) initiation studies, we propose a new treatment intervention dependent on the observed treatment process. We show there exist (1) estimators that are doubly and multiply robust to model misspecification, and (2) estimators that, when used with machine learning algorithms, can attain fast convergence rates for our proposed intervention. Finally, we explore the finite sample performance of our estimators via simulation studies.
      Citation: Statistical Methods in Medical Research
      PubDate: 2023-01-04T08:03:07Z
      DOI: 10.1177/09622802221146311
       
  • A generalization of moderated statistics to data adaptive semiparametric
           estimation in high-dimensional biology

      Authors: Nima S Hejazi, Philippe Boileau, Mark J van der Laan, Alan E Hubbard
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      The widespread availability of high-dimensional biological data has made the simultaneous screening of many biological characteristics a central problem in computational and high-dimensional biology. As the dimensionality of datasets continues to grow, so too does the complexity of identifying biomarkers linked to exposure patterns. The statistical analysis of such data often relies upon parametric modeling assumptions motivated by convenience, inviting opportunities for model misspecification. While estimation frameworks incorporating flexible, data adaptive regression strategies can mitigate this, their standard variance estimators are often unstable in high-dimensional settings, resulting in inflated Type-I error even after standard multiple testing corrections. We adapt a shrinkage approach compatible with parametric modeling strategies to semiparametric variance estimators of a family of efficient, asymptotically linear estimators of causal effects, defined by counterfactual exposure contrasts. Augmenting the inferential stability of these estimators in high-dimensional settings yields a data adaptive approach for robustly uncovering stable causal associations, even when sample sizes are limited. Our generalized variance estimator is evaluated against appropriate alternatives in numerical experiments, and an open source R/Bioconductor package, biotmle, is introduced. The proposal is demonstrated in an analysis of high-dimensional DNA methylation data from an observational study on the epigenetic effects of tobacco smoking. (The classical moderated-variance shrinkage being generalised is sketched after this entry.)
      Citation: Statistical Methods in Medical Research
      PubDate: 2022-12-27T06:27:43Z
      DOI: 10.1177/09622802221146313
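
      The classical parametric moderated statistic that this entry generalises: limma-style empirical-Bayes shrinkage pulls each unit's variance estimate toward a common prior value, stabilising tests when the per-unit degrees of freedom are small. The prior parameters are taken as given here rather than estimated from the data, which is a simplification.

        import numpy as np

        def moderated_variance(s2, df, s0_sq, d0):
            """Shrunken variances: a df-weighted average of each unit's sample
            variance s2 and the prior value s0_sq (on d0 degrees of freedom)."""
            return (d0 * s0_sq + df * s2) / (d0 + df)

        s2 = np.array([0.2, 1.5, 0.9, 4.0])   # per-biomarker variance estimates
        print(moderated_variance(s2, df=10, s0_sq=1.0, d0=4.0))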
       
  • Generalised pairwise comparisons for trend: An extension to the win ratio
           and win odds for dose-response and prognostic variable analysis with
           arbitrary statements of outcome preference

      Authors: Hannah Johns, Bruce Campbell, Julie Bernhardt, Leonid Churilov
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      The win ratio is a novel approach for handling complex patient outcomes that has seen considerable interest in the medical statistics literature; it operates by considering all-to-all pairwise statements of preference on outcomes. Recent extensions to the method have focused on the two-group case, with few developments for assessing the impact of a well-ordered explanatory variable, which would allow for dose-response analysis or the analysis of links between complex patient outcomes and prognostic variables. Where such methods have been developed, they are semiparametric methods that can only be applied to survival outcomes. In this article, we introduce the generalised pairwise comparison for trend, a modified form of Agresti’s generalised odds ratio. This approach can accommodate arbitrary statements of preference, enabling its use across all types of outcome data. We provide a simulation study validating the approach and illustrate it with three clinical applications in stroke research. (The pairwise counting idea is sketched after this entry.)
      Citation: Statistical Methods in Medical Research
      PubDate: 2022-12-27T06:26:23Z
      DOI: 10.1177/09622802221146306
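
      A toy sketch of the all-to-all pairwise counting that underlies the win ratio, win odds and Agresti's generalised odds ratio, here taken against an ordered dose variable: for each pair with different doses, ask whether the higher-dose patient had the preferred outcome. Scalar outcomes with "larger is better" stand in for the arbitrary preference statements the entry allows.

        from itertools import combinations

        def pairwise_trend_counts(dose, outcome):
            conc = disc = 0
            for i, j in combinations(range(len(dose)), 2):
                if dose[i] == dose[j] or outcome[i] == outcome[j]:
                    continue                    # ties contribute to neither count
                hi, lo = (i, j) if dose[i] > dose[j] else (j, i)
                if outcome[hi] > outcome[lo]:
                    conc += 1                   # higher dose preferred
                else:
                    disc += 1
            return conc, disc                   # generalised odds ratio: conc/disc

        dose = [1, 1, 2, 2, 3, 3]
        outcome = [2, 3, 3, 5, 4, 6]
        c, d = pairwise_trend_counts(dose, outcome)
        print(c, d, c / d)                      # 10 1 10.0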
       
  • Bivariate joint models for survival and change of cognitive function

      Authors: Shengning Pan, Ardo van den Hout
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Changes in cognitive function over time are of interest in ageing research, and a joint model is constructed to investigate them. Generally, cognitive function is measured through more than one test, and the test scores are integers. The aim is to investigate two test scores and use an extension of a bivariate binomial distribution to define a new joint model. This bivariate distribution models the correlation between the two test scores. To deal with attrition due to death, the Weibull hazard model and the Gompertz hazard model are used. A shared random-effects model is constructed, and the random effects are assumed to follow a bivariate normal distribution. It is shown how to incorporate random effects that link the bivariate longitudinal model and the survival model. The joint model is applied to data from the English Longitudinal Study of Ageing.
      Citation: Statistical Methods in Medical Research
      PubDate: 2022-12-27T05:45:44Z
      DOI: 10.1177/09622802221146307
       
  • An overview of propensity score matching methods for clustered data

      Authors: Benjamin Langworthy, Yujie Wu, Molin Wang
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Propensity score matching is commonly used in observational studies to control for confounding and estimate the causal effects of a treatment or exposure. Frequently, data in observational studies are clustered, which adds to the complexity of using propensity score techniques. In this article, we give an overview of propensity score matching methods for clustered data, and highlight how propensity score matching can be used to account for not just measured confounders but also unmeasured cluster-level confounders. We also consider using machine learning methods, such as generalized boosted models, to estimate the propensity score, and show that accounting for clustering when using these methods can greatly reduce performance, particularly when there are a large number of clusters and a small number of subjects per cluster. To get around this, we highlight scenarios where it may be possible to control for measured covariates using propensity score matching, while using fixed effects regression in the outcome model to control for cluster-level covariates. Using simulation studies, we compare the performance of different propensity score matching methods for clustered data across a number of settings. Finally, as an illustrative example, we apply propensity score matching methods for clustered data to study the causal effect of aspirin on hearing deterioration using data from the Conservation of Hearing Study.
      Citation: Statistical Methods in Medical Research
      PubDate: 2022-11-25T08:59:02Z
      DOI: 10.1177/09622802221133556
       
  • Shotgun-2: A Bayesian phase I/II basket trial design to identify
           indication-specific optimal biological doses

      Authors: Xin Chen, Jingyi Zhang, Liyun Jiang, Fangrong Yan
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      For novel molecularly targeted agents and immunotherapies, the objective of dose-finding is often to identify the optimal biological dose, rather than the maximum tolerated dose. However, optimal biological doses may not be the same for different indications, challenging the traditional dose-finding framework. Therefore, we proposed a Bayesian phase I/II basket trial design, named “shotgun-2,” to identify indication-specific optimal biological doses. A dose-escalation part is conducted in stage I to identify the maximum tolerated dose and admissible dose sets. In stage II, dose optimization is performed incorporating both toxicity and efficacy for each indication. Simulation studies under both fixed and random scenarios show that, compared with the traditional “phase I + cohort expansion” design, the shotgun-2 design is robust and can improve the probability of correctly selecting the optimal biological doses. Furthermore, this study provides a useful tool for identifying indication-specific optimal biological doses and accelerating drug development.
      Citation: Statistical Methods in Medical Research
      PubDate: 2022-10-11T07:56:42Z
      DOI: 10.1177/09622802221129049
       
  • A dose–effect network meta-analysis model with application in
           antidepressants using restricted cubic splines

      Authors: Tasnim Hamza, Toshi A Furukawa, Nicola Orsini, Andrea Cipriani, Cynthia P Iglesias, Georgia Salanti
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Network meta-analysis has been used to answer a range of clinical questions about the preferred intervention for a given condition. Although the effectiveness and safety of pharmacological agents depend on the dose administered, network meta-analysis applications typically ignore the role that drug dosage plays in the results, which leads to more heterogeneity in the network. In this paper, we present a suite of network meta-analysis models that incorporate the dose–effect relationship using restricted cubic splines. We extend existing models into a dose–effect network meta-regression to account for study-level covariates, and into a class-effect dose–effect network meta-analysis model to account for groups of agents. We apply our models to a network of aggregate data on the efficacy of 21 antidepressants and placebo for depression. We find that all antidepressants are more efficacious than placebo after a certain dose. We also identify the dose level at which each antidepressant’s effect exceeds that of placebo, and estimate the dose beyond which the effect of antidepressants no longer increases. When covariates were introduced to the model, we found that studies with small sample sizes tend to exaggerate the efficacy of several of the drugs. Our dose–effect network meta-analysis model with restricted cubic splines provides a flexible approach to modelling the dose–effect relationship in multiple interventions, and decision-makers can use it to inform treatment choice. (A generic restricted cubic spline basis is sketched after this entry.)
      Citation: Statistical Methods in Medical Research
      PubDate: 2022-02-24T04:44:34Z
      DOI: 10.1177/09622802211070256
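
      A generic construction of the restricted cubic spline basis (Harrell's truncated-power form, linear beyond the boundary knots) of the kind used here to model the dose-effect curve; the knot placements and dose grid are invented, and this is not the authors' NMA code.

        import numpy as np

        def rcs_basis(x, knots):
            x, t = np.asarray(x, float), np.asarray(knots, float)
            K = len(t)
            pos3 = lambda u: np.maximum(u, 0.0) ** 3    # truncated cubic
            cols = [x]                                  # linear term
            for j in range(K - 2):                      # one nonlinear term per inner knot
                cols.append(pos3(x - t[j])
                            - pos3(x - t[K - 2]) * (t[K - 1] - t[j]) / (t[K - 1] - t[K - 2])
                            + pos3(x - t[K - 1]) * (t[K - 2] - t[j]) / (t[K - 1] - t[K - 2]))
            return np.column_stack(cols)                # K - 1 coefficients to fit

        dose = np.linspace(0, 60, 7)
        print(rcs_basis(dose, knots=[10, 20, 40]).shape)   # (7, 2)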
       
  • A distribution-free smoothed combination method to improve discrimination
           accuracy in multi-category classification

      Authors: Raju Maiti, Jialiang Li, Priyam Das, Xueqing Liu, Lei Feng, Derek J Hausenloy, Bibhas Chakraborty
      First page: 242
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Results from multiple diagnostic tests are often combined to improve the overall diagnostic accuracy. For binary classification, maximization of the empirical estimate of the area under the receiver operating characteristic curve has been widely used to produce an optimal linear combination of multiple biomarkers. In the presence of a large number of biomarkers, however, this method proves computationally expensive and difficult to implement, since it involves maximizing a discontinuous, non-smooth function to which gradient-based methods cannot be applied directly. The problem becomes more complex still when the classification is multi-category. In this article, we develop a linear combination method that maximizes a smooth approximation of the empirical hypervolume under the manifold (HUM) for multi-category outcomes. We approximate the HUM by replacing the indicator function with the sigmoid function and the normal cumulative distribution function. With such smooth approximations, efficient gradient-based algorithms can be employed to obtain better solutions with less computing time. We show that under some regularity conditions, the proposed method yields consistent estimates of the coefficient parameters, and we derive the asymptotic normality of the coefficient estimates. A simulation study compares the effectiveness of our proposed method with existing methods, and the method is illustrated using two real medical data sets. (The smoothing trick, in its binary special case, is sketched after this entry.)
      Citation: Statistical Methods in Medical Research
      PubDate: 2022-11-17T06:51:40Z
      DOI: 10.1177/09622802221137742
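
      The smoothing device of this entry in its binary special case: the indicator inside the empirical AUC is replaced by a sigmoid with bandwidth h, making the objective differentiable in the combination coefficients so gradient-based optimisers apply. The multi-category hypervolume version is the paper's actual target; data and coefficients below are illustrative.

        import numpy as np

        def smoothed_auc(beta, X_pos, X_neg, h=0.1):
            s1 = X_pos @ beta                      # combined scores, diseased
            s0 = X_neg @ beta                      # combined scores, healthy
            diff = s1[:, None] - s0[None, :]       # all case-control pairs
            return np.mean(1.0 / (1.0 + np.exp(-diff / h)))  # sigmoid ~ indicator

        rng = np.random.default_rng(4)
        X_pos = rng.normal(0.8, 1.0, size=(40, 2))
        X_neg = rng.normal(0.0, 1.0, size=(60, 2))
        print(smoothed_auc(np.array([0.7, 0.3]), X_pos, X_neg))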
       
  • A connection between survival multistate models and causal inference for
           external treatment interruptions

      Authors: Alexandra Erdmann, Anja Loos, Jan Beyersmann
      First page: 267
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Recently, treatment interruptions such as a clinical hold in randomized clinical trials have been investigated using a multistate model approach. The phase III clinical trial START (Stimulating Targeted Antigenic Response To non-small-cell lung cancer), with primary endpoint overall survival, was temporarily placed on hold for enrollment and treatment by the US Food and Drug Administration (FDA). Multistate models provide a flexible framework for accounting for treatment interruptions induced by a time-dependent external covariate. Extending previous work, we propose a censoring and a filtering approach, both aimed at estimating the initial treatment effect on overall survival in the hypothetical situation of no clinical hold. A special focus is on creating a link to causal inference. We show that calculating the matrix of transition probabilities in the multistate model after application of censoring (or filtering) yields the desired causal interpretation. Assumptions in support of the identification of a causal effect by censoring (or filtering) are discussed. We thus provide the basis for applying causal censoring (or filtering) in more general settings such as the COVID-19 pandemic. A simulation study demonstrates that both causal censoring and filtering perform favorably compared to a naïve method ignoring the external impact.
      Citation: Statistical Methods in Medical Research
      PubDate: 2022-12-05T07:11:32Z
      DOI: 10.1177/09622802221133551
       
  • Point estimation following a two-stage group sequential trial

      Authors: Michael J Grayling, James MS Wason
      First page: 287
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Repeated testing in a group sequential trial can result in bias in the maximum likelihood estimate of the unknown parameter of interest. Many authors have therefore proposed adjusted point estimation procedures, which attempt to reduce such bias. Here, we describe nine possible point estimators within a common general framework for a two-stage group sequential trial. We then contrast their performance in five example trial settings, examining their conditional and marginal biases and residual mean square error. Focusing on the case of a trial with a single interim analysis, we give additional new results aiding the determination of the estimators. Our findings demonstrate that the uniform minimum variance unbiased estimator, whilst being marginally unbiased, often has large conditional bias and residual mean square error. If one is concerned solely about inference on progression to the second trial stage, the conditional uniform minimum variance unbiased estimator may be preferred. Two estimators, termed mean adjusted estimators, which attempt to reduce the marginal bias, arguably perform best in terms of the marginal residual mean square error. In all, one should choose an estimator accounting for its conditional and marginal biases and residual mean square error; the most suitable estimator will depend on the relative desire to minimise each of these factors. If one cares solely about the conditional and marginal biases, the conditional maximum likelihood estimate may be preferred, provided lower and upper stopping boundaries are included. If the conditional and marginal residual mean square errors are also of concern, the two mean adjusted estimators perform well.
      Citation: Statistical Methods in Medical Research
      PubDate: 2022-11-17T07:26:19Z
      DOI: 10.1177/09622802221137745
       
  • Simulating time-to-event data subject to competing risks and clustering: A
           review and synthesis

      Authors: Can Meng, Denise Esserman, Fan Li, Yize Zhao, Ondrej Blaha, Wenhan Lu, Yuxuan Wang, Peter Peduzzi, Erich J. Greene
      First page: 305
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Simulation studies play an important role in evaluating the performance of statistical models developed for analyzing complex survival data, such as those with competing risks and clustering. This article aims to provide researchers with a basic understanding of competing risks data generation, techniques for inducing cluster-level correlation, and ways to combine them in simulation studies, in the context of randomized clinical trials with a binary exposure or treatment. We review data generation with competing and semi-competing risks and three approaches for inducing cluster-level correlation in time-to-event data: the frailty model framework, the probability transform, and Moran’s algorithm. Using exponentially distributed event times as an example, we discuss how to introduce cluster-level correlation into the generation of complex survival outcomes, and we illustrate multiple ways of combining these methods to simulate clustered, competing and semi-competing risks data with pre-specified correlation values or degrees of clustering. (One standard generation scheme is sketched after this entry.)
      Citation: Statistical Methods in Medical Research
      PubDate: 2022-11-22T09:01:32Z
      DOI: 10.1177/09622802221136067
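
      One standard generation scheme the review covers, sketched before any cluster-level correlation is layered on: draw a latent time per cause from its cause-specific exponential hazard, keep the minimum and record which cause produced it. The rates are illustrative.

        import numpy as np

        def simulate_competing_risks(n, hazards, rng):
            """hazards: cause-specific exponential rates, e.g. (0.05, 0.02)."""
            scales = 1.0 / np.asarray(hazards)
            times = rng.exponential(scales, size=(n, len(scales)))
            event_time = times.min(axis=1)          # observed event time
            cause = times.argmin(axis=1) + 1        # 1-based cause label
            return event_time, cause

        rng = np.random.default_rng(5)
        t, c = simulate_competing_risks(1000, (0.05, 0.02), rng)
        print(t.mean(), np.bincount(c)[1:] / 1000)  # causes in roughly a 5:2 ratio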
       
  • The population-wise error rate for clinical trials with overlapping
           populations

      Authors: Werner Brannath, Charlie Hillner, Kornelius Rohmeyer
      First page: 334
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      We introduce a new multiple type I error criterion for clinical trials with multiple, overlapping populations. Such trials are of interest in precision medicine, where the goal is to develop treatments that are targeted to specific sub-populations defined by genetic and/or clinical biomarkers. The new criterion is based on the observation that not all type I errors are relevant to all patients in the overall population. If disjoint sub-populations are considered, no multiplicity adjustment appears necessary, since a claim in one sub-population does not affect patients in the other ones. For intersecting sub-populations, we suggest controlling the average multiple type I error rate, i.e. the probability that a randomly selected patient will be exposed to an inefficient treatment. We call this the population-wise error rate, exemplify it with a number of examples, and illustrate how to control it with an adjustment of critical boundaries or adjusted p-values. We furthermore define corresponding simultaneous confidence intervals. We finally illustrate the power gain achieved by passing from family-wise to population-wise error rate control with two simple examples and a recently suggested multiple-testing approach for umbrella trials. (A toy calculation of the criterion follows this entry.)
      Citation: Statistical Methods in Medical Research
      PubDate: 2022-12-01T08:07:58Z
      DOI: 10.1177/09622802221135249
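
      A toy calculation of the criterion for two overlapping sub-populations split into three disjoint strata. Independent standard-normal test statistics, one-sided testing at a common critical value and the stated prevalences are all assumptions of the sketch.

        from scipy.optimize import brentq
        from scipy.stats import norm

        def pwer(prev_only1, prev_only2, prev_both, crit):
            """P(randomly drawn patient is exposed to a type I error)."""
            a = norm.sf(crit)                   # per-test type I error
            err_both = 1 - (1 - a) ** 2         # overlap patient: either test errs
            return (prev_only1 + prev_only2) * a + prev_both * err_both

        print(pwer(0.4, 0.4, 0.2, norm.ppf(0.975)))   # ~0.030 > 0.025 unadjusted
        crit = brentq(lambda z: pwer(0.4, 0.4, 0.2, z) - 0.025, 1.9, 3.0)
        print(crit)   # adjusted boundary: between 1.96 and the Bonferroni 2.24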
       
  • Additive hazards model with time-varying coefficients and imaging
           predictors

      Authors: Qi Yang, Chuchu Wang, Haijin He, Xiaoxiao Zhou, Xinyuan Song
      First page: 353
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Conventional hazard regression analyses frequently assume constant regression coefficients and scalar covariates. However, some covariate effects may vary with time. Moreover, medical imaging has become an increasingly important tool in screening, diagnosis, and prognosis of various diseases, given its information visualization and quantitative assessment. This study considers an additive hazards model with time-varying coefficients and imaging predictors to examine the dynamic effects of potential scalar and imaging risk factors for the failure of interest. We develop a two-stage approach that comprises the high-dimensional functional principal component analysis technique in the first stage and the counting process-based estimating equation approach in the second stage. In addition, we construct the pointwise confidence intervals for the proposed estimators and provide a significance test for the effects of scalar and imaging covariates. Simulation studies demonstrate the satisfactory performance of the proposed method. An application to the Alzheimer’s disease neuroimaging initiative study further illustrates the utility of the methodology.
      Citation: Statistical Methods in Medical Research
      PubDate: 2022-12-01T07:20:20Z
      DOI: 10.1177/09622802221137746
       
  • Standard error estimation in meta-analysis of studies reporting medians

      Authors: Sean McGrath, Stephan Katzenschlager, Alexandra J Zimmer, Alexander Seitel, Russell Steele, Andrea Benedetti
      First page: 373
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      We consider the setting of an aggregate data meta-analysis of a continuous outcome of interest. When the distribution of the outcome is skewed, it is often the case that some primary studies report the sample mean and standard deviation of the outcome and other studies report the sample median along with the first and third quartiles and/or minimum and maximum values. To perform meta-analysis in this context, a number of approaches have recently been developed to impute the sample mean and standard deviation from studies reporting medians. Standard meta-analytic approaches with inverse-variance weighting are then applied based on the (imputed) study-specific sample means and standard deviations. In this article, we illustrate how this common practice can severely underestimate the within-study standard errors, which results in poor coverage for the pooled mean in common effect meta-analyses and overestimation of between-study heterogeneity in random effects meta-analyses. We propose a straightforward bootstrap approach to estimate the standard errors of the imputed sample means. Our simulation study illustrates how the proposed approach can improve the estimation of the within-study standard errors, and consequently improve coverage for the pooled mean in common effect meta-analyses and estimation of between-study heterogeneity in random effects meta-analyses. Moreover, we apply the proposed approach in a meta-analysis to identify risk factors for a severe course of COVID-19. (A sketch of the bootstrap idea follows this entry.)
      Citation: Statistical Methods in Medical Research
      PubDate: 2022-11-22T08:56:28Z
      DOI: 10.1177/09622802221139233
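
      A parametric bootstrap in the spirit of this entry's proposal: treat the quantile-based imputed mean as a statistic, re-simulate study-sized samples and use the spread of the re-imputed means as the within-study standard error. Normality and the simple (q1 + median + q3)/3 and IQR/1.35 imputation rules are assumptions of the sketch, not the authors' exact estimators.

        import numpy as np

        def bootstrap_se_of_imputed_mean(q1, med, q3, n, B=2000, seed=0):
            mean_hat = (q1 + med + q3) / 3.0       # imputed sample mean
            sd_hat = (q3 - q1) / 1.35              # imputed SD via the normal IQR
            rng = np.random.default_rng(seed)
            boot = np.empty(B)
            for b in range(B):
                y = rng.normal(mean_hat, sd_hat, size=n)
                bq1, bmed, bq3 = np.percentile(y, [25, 50, 75])
                boot[b] = (bq1 + bmed + bq3) / 3.0 # re-impute on each replicate
            return mean_hat, boot.std(ddof=1)      # estimate and its bootstrap SE

        print(bootstrap_se_of_imputed_mean(q1=2.1, med=3.0, q3=4.2, n=120))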
       
  • Variance estimation for the average treatment effects on the treated and
           on the controls

      Authors: Roland A Matsouaka, Yi Liu, Yunji Zhou
      First page: 389
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Common causal estimands include the average treatment effect, the average treatment effect on the treated, and the average treatment effect on the controls. Using augmented inverse probability weighting methods, parametric models are judiciously leveraged to yield doubly robust estimators, that is, estimators that are consistent when at least one of the parametric models is correctly specified. Three sources of uncertainty arise when we evaluate these estimators and their variances: we estimate the treatment model, the outcome regression model, and the desired treatment effect. In this article, we propose methods to calculate the variance of the normalized, doubly robust average treatment effect on the treated and average treatment effect on the controls estimators, and we investigate their finite sample properties. We consider the asymptotic sandwich variance estimation, the standard bootstrap, and two wild bootstrap methods. For the asymptotic approximations, we incorporate the aforementioned uncertainties via estimating equations. Moreover, unlike the standard bootstrap procedures, the proposed wild bootstrap methods use perturbations of the influence functions of the estimators through independently distributed random variables. We conduct an extensive simulation study in which we vary the heterogeneity of the treatment effect as well as the proportion of participants assigned to the active treatment group. We illustrate the methods using an observational study of critically ill patients on the use of right heart catheterization.
      Citation: Statistical Methods in Medical Research
      PubDate: 2022-12-08T07:26:25Z
      DOI: 10.1177/09622802221142532
       
  • A reference-free R-learner for treatment recommendation

      Authors: Junyi Zhou, Ying Zhang, Wanzhu Tu
      First page: 404
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Assigning optimal treatments to individual patients based on their characteristics is the ultimate goal of precision medicine. Deriving evidence-based recommendations from observational data while considering the causal treatment effects and patient heterogeneity is a challenging task, especially in situations of multiple treatment options. Herein, we propose a reference-free R-learner based on a simplex algorithm for treatment recommendation. We showed through extensive simulation that the proposed method produced accurate recommendations that corresponded to optimal treatment outcomes, regardless of the reference group. We used the method to analyze data from the Systolic Blood Pressure Intervention Trial (SPRINT) and achieved recommendations consistent with the current clinical guidelines.
      Citation: Statistical Methods in Medical Research
      PubDate: 2022-12-21T07:04:45Z
      DOI: 10.1177/09622802221144326
       
  • Regularization approaches in clinical biostatistics: A review of methods
           and their applications

      Authors: Sarah Friedrich, Andreas Groll, Katja Ickstadt, Thomas Kneib, Markus Pauly, Jörg Rahnenführer, Tim Friede
      First page: 425
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      A range of regularization approaches have been proposed in the data sciences to overcome overfitting, to exploit sparsity, or to improve prediction. Using a broad definition of regularization, namely controlling model complexity by adding information in order to solve ill-posed problems or to prevent overfitting, we review a range of approaches within this framework, including penalization, early stopping, ensembling, and model averaging. Aspects of their practical implementation are discussed, including available R packages, and examples are provided. To assess the extent to which these approaches are used in medicine, we conducted a review of three general medical journals. It revealed that regularization approaches are rarely applied in practical clinical applications, with the exception of random effects models. Hence, we suggest a more frequent use of regularization approaches in medical research. In situations where other approaches also work well, the only downside of the regularization approaches is increased complexity in the conduct of the analyses, which can pose challenges in terms of computational resources and expertise on the side of the data analyst. In our view, both can and should be overcome by investments in appropriate computing facilities and educational resources. (A ridge regression toy example follows this entry.)
      Citation: Statistical Methods in Medical Research
      PubDate: 2022-11-17T07:24:48Z
      DOI: 10.1177/09622802221133557
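
      Penalisation is the most familiar of the reviewed families; as a minimal example, ridge regression's closed form shrinks coefficients by adding lambda to the diagonal of the Gram matrix. The data are simulated purely for illustration.

        import numpy as np

        def ridge(X, y, lam):
            p = X.shape[1]
            return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

        rng = np.random.default_rng(6)
        X = rng.normal(size=(100, 5))
        y = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + rng.normal(size=100)
        for lam in (0.0, 10.0, 100.0):
            print(lam, np.round(ridge(X, y, lam), 2))   # coefficients shrink to 0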
       
 