

  Subjects -> STATISTICS (Total: 130 journals)
Showing 1 - 130 of 130 Journals sorted alphabetically
Advances in Complex Systems     Hybrid Journal   (Followers: 11)
Advances in Data Analysis and Classification     Hybrid Journal   (Followers: 61)
Annals of Applied Statistics     Full-text available via subscription   (Followers: 39)
Applied Categorical Structures     Hybrid Journal   (Followers: 4)
Argumentation et analyse du discours     Open Access   (Followers: 11)
Asian Journal of Mathematics & Statistics     Open Access   (Followers: 8)
AStA Advances in Statistical Analysis     Hybrid Journal   (Followers: 4)
Australian & New Zealand Journal of Statistics     Hybrid Journal   (Followers: 13)
Bernoulli     Full-text available via subscription   (Followers: 9)
Biometrical Journal     Hybrid Journal   (Followers: 11)
Biometrics     Hybrid Journal   (Followers: 52)
British Journal of Mathematical and Statistical Psychology     Full-text available via subscription   (Followers: 18)
Building Simulation     Hybrid Journal   (Followers: 2)
Bulletin of Statistics     Full-text available via subscription   (Followers: 4)
CHANCE     Hybrid Journal   (Followers: 5)
Communications in Statistics - Simulation and Computation     Hybrid Journal   (Followers: 9)
Communications in Statistics - Theory and Methods     Hybrid Journal   (Followers: 11)
Computational Statistics     Hybrid Journal   (Followers: 14)
Computational Statistics & Data Analysis     Hybrid Journal   (Followers: 37)
Current Research in Biostatistics     Open Access   (Followers: 8)
Decisions in Economics and Finance     Hybrid Journal   (Followers: 11)
Demographic Research     Open Access   (Followers: 15)
Electronic Journal of Statistics     Open Access   (Followers: 8)
Engineering With Computers     Hybrid Journal   (Followers: 5)
Environmental and Ecological Statistics     Hybrid Journal   (Followers: 7)
ESAIM: Probability and Statistics     Full-text available via subscription   (Followers: 5)
Extremes     Hybrid Journal   (Followers: 2)
Fuzzy Optimization and Decision Making     Hybrid Journal   (Followers: 9)
Geneva Papers on Risk and Insurance - Issues and Practice     Hybrid Journal   (Followers: 13)
Handbook of Numerical Analysis     Full-text available via subscription   (Followers: 5)
Handbook of Statistics     Full-text available via subscription   (Followers: 7)
IEA World Energy Statistics and Balances     Full-text available via subscription   (Followers: 2)
International Journal of Computational Economics and Econometrics     Hybrid Journal   (Followers: 6)
International Journal of Quality, Statistics, and Reliability     Open Access   (Followers: 17)
International Journal of Stochastic Analysis     Open Access   (Followers: 3)
International Statistical Review     Hybrid Journal   (Followers: 13)
International Trade by Commodity Statistics - Statistiques du commerce international par produit     Full-text available via subscription  
Journal of Algebraic Combinatorics     Hybrid Journal   (Followers: 4)
Journal of Applied Statistics     Hybrid Journal   (Followers: 21)
Journal of Biopharmaceutical Statistics     Hybrid Journal   (Followers: 21)
Journal of Business & Economic Statistics     Full-text available via subscription   (Followers: 39, SJR: 3.664, CiteScore: 2)
Journal of Combinatorial Optimization     Hybrid Journal   (Followers: 7)
Journal of Computational & Graphical Statistics     Full-text available via subscription   (Followers: 20)
Journal of Econometrics     Hybrid Journal   (Followers: 84)
Journal of Educational and Behavioral Statistics     Hybrid Journal   (Followers: 6)
Journal of Forecasting     Hybrid Journal   (Followers: 17)
Journal of Global Optimization     Hybrid Journal   (Followers: 7)
Journal of Interactive Marketing     Hybrid Journal   (Followers: 10)
Journal of Mathematics and Statistics     Open Access   (Followers: 8)
Journal of Nonparametric Statistics     Hybrid Journal   (Followers: 6)
Journal of Probability and Statistics     Open Access   (Followers: 10)
Journal of Risk and Uncertainty     Hybrid Journal   (Followers: 33)
Journal of Statistical and Econometric Methods     Open Access   (Followers: 5)
Journal of Statistical Physics     Hybrid Journal   (Followers: 13)
Journal of Statistical Planning and Inference     Hybrid Journal   (Followers: 8)
Journal of Statistical Software     Open Access   (Followers: 21, SJR: 13.802, CiteScore: 16)
Journal of the American Statistical Association     Full-text available via subscription   (Followers: 72, SJR: 3.746, CiteScore: 2)
Journal of the Korean Statistical Society     Hybrid Journal   (Followers: 1)
Journal of the Royal Statistical Society Series C (Applied Statistics)     Hybrid Journal   (Followers: 33)
Journal of the Royal Statistical Society, Series A (Statistics in Society)     Hybrid Journal   (Followers: 27)
Journal of the Royal Statistical Society, Series B (Statistical Methodology)     Hybrid Journal   (Followers: 43)
Journal of Theoretical Probability     Hybrid Journal   (Followers: 3)
Journal of Time Series Analysis     Hybrid Journal   (Followers: 16)
Journal of Urbanism: International Research on Placemaking and Urban Sustainability     Hybrid Journal   (Followers: 30)
Law, Probability and Risk     Hybrid Journal   (Followers: 8)
Lifetime Data Analysis     Hybrid Journal   (Followers: 7)
Mathematical Methods of Statistics     Hybrid Journal   (Followers: 4)
Measurement: Interdisciplinary Research and Perspectives     Hybrid Journal   (Followers: 1)
Metrika     Hybrid Journal   (Followers: 4)
Modelling of Mechanical Systems     Full-text available via subscription   (Followers: 1)
Monte Carlo Methods and Applications     Hybrid Journal   (Followers: 6)
Monthly Statistics of International Trade - Statistiques mensuelles du commerce international     Full-text available via subscription   (Followers: 2)
Multivariate Behavioral Research     Hybrid Journal   (Followers: 5)
Optimization Letters     Hybrid Journal   (Followers: 2)
Optimization Methods and Software     Hybrid Journal   (Followers: 8)
Oxford Bulletin of Economics and Statistics     Hybrid Journal   (Followers: 34)
Pharmaceutical Statistics     Hybrid Journal   (Followers: 17)
Probability Surveys     Open Access   (Followers: 4)
Queueing Systems     Hybrid Journal   (Followers: 7)
Research Synthesis Methods     Hybrid Journal   (Followers: 8)
Review of Economics and Statistics     Hybrid Journal   (Followers: 128)
Review of Socionetwork Strategies     Hybrid Journal  
Risk Management     Hybrid Journal   (Followers: 15)
Sankhya A     Hybrid Journal   (Followers: 2)
Scandinavian Journal of Statistics     Hybrid Journal   (Followers: 9)
Sequential Analysis: Design Methods and Applications     Hybrid Journal  
Significance     Hybrid Journal   (Followers: 7)
Sociological Methods & Research     Hybrid Journal   (Followers: 38)
SourceOCDE Comptes nationaux et Statistiques rétrospectives     Full-text available via subscription  
SourceOCDE Statistiques : Sources et méthodes     Full-text available via subscription  
SourceOECD Bank Profitability Statistics - SourceOCDE Rentabilité des banques     Full-text available via subscription   (Followers: 1)
SourceOECD Insurance Statistics - SourceOCDE Statistiques d'assurance     Full-text available via subscription   (Followers: 2)
SourceOECD Main Economic Indicators - SourceOCDE Principaux indicateurs économiques     Full-text available via subscription   (Followers: 1)
SourceOECD Measuring Globalisation Statistics - SourceOCDE Mesurer la mondialisation - Base de données statistiques     Full-text available via subscription  
SourceOECD Monthly Statistics of International Trade     Full-text available via subscription   (Followers: 1)
SourceOECD National Accounts & Historical Statistics     Full-text available via subscription  
SourceOECD OECD Economic Outlook Database - SourceOCDE Statistiques des Perspectives économiques de l'OCDE     Full-text available via subscription   (Followers: 2)
SourceOECD Science and Technology Statistics - SourceOCDE Base de données des sciences et de la technologie     Full-text available via subscription  
SourceOECD Statistics Sources & Methods     Full-text available via subscription   (Followers: 1)
SourceOECD Taxing Wages Statistics - SourceOCDE Statistiques des impôts sur les salaires     Full-text available via subscription  
Stata Journal     Full-text available via subscription   (Followers: 9)
Statistica Neerlandica     Hybrid Journal   (Followers: 1)
Statistical Applications in Genetics and Molecular Biology     Hybrid Journal   (Followers: 5)
Statistical Communications in Infectious Diseases     Hybrid Journal  
Statistical Inference for Stochastic Processes     Hybrid Journal   (Followers: 3)
Statistical Methodology     Hybrid Journal   (Followers: 7)
Statistical Methods and Applications     Hybrid Journal   (Followers: 6)
Statistical Methods in Medical Research     Hybrid Journal   (Followers: 27)
Statistical Modelling     Hybrid Journal   (Followers: 19)
Statistical Papers     Hybrid Journal   (Followers: 4)
Statistical Science     Full-text available via subscription   (Followers: 13)
Statistics & Probability Letters     Hybrid Journal   (Followers: 13)
Statistics & Risk Modeling     Hybrid Journal   (Followers: 3)
Statistics and Computing     Hybrid Journal   (Followers: 13)
Statistics and Economics     Open Access   (Followers: 1)
Statistics in Medicine     Hybrid Journal   (Followers: 198)
Statistics, Politics and Policy     Hybrid Journal   (Followers: 6)
Statistics: A Journal of Theoretical and Applied Statistics     Hybrid Journal   (Followers: 15)
Stochastic Models     Hybrid Journal   (Followers: 3)
Stochastics: An International Journal of Probability and Stochastic Processes (formerly Stochastics and Stochastics Reports)     Hybrid Journal   (Followers: 2)
Structural and Multidisciplinary Optimization     Hybrid Journal   (Followers: 12)
Teaching Statistics     Hybrid Journal   (Followers: 7)
Technology Innovations in Statistics Education (TISE)     Open Access   (Followers: 2)
TEST     Hybrid Journal   (Followers: 3)
The American Statistician     Full-text available via subscription   (Followers: 23)
The Annals of Applied Probability     Full-text available via subscription   (Followers: 8)
The Annals of Probability     Full-text available via subscription   (Followers: 10)
The Annals of Statistics     Full-text available via subscription   (Followers: 34)
The Canadian Journal of Statistics / La Revue Canadienne de Statistique     Hybrid Journal   (Followers: 11)
Wiley Interdisciplinary Reviews - Computational Statistics     Hybrid Journal   (Followers: 1)


Statistical Methods in Medical Research
Journal Prestige (SJR): 1.402
Citation Impact (CiteScore): 2
Number of Followers: 27  
 
  Hybrid Journal (may contain Open Access articles)
ISSN (Print) 0962-2802 - ISSN (Online) 1477-0334
Published by Sage Publications
  • Robust regression with asymmetric loss functions
    • Authors: Liya Fu, You-Gan Wang
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      In robust regression, it is usually assumed that the distribution of the error term is symmetric or the data are symmetrically contaminated by outliers. However, this assumption is usually not satisfied in practical problems, and thus if the traditional robust methods, such as Tukey’s biweight and Huber’s method, are used to estimate the regression parameters, the efficiency of the parameter estimation can be lost. In this paper, we construct an asymmetric Tukey’s biweight loss function with two tuning parameters and propose a data-driven method to find the most appropriate tuning parameters. Furthermore, we provide an adaptive algorithm to obtain robust and efficient parameter estimates. Our extensive simulation studies suggest that the proposed method performs better than the symmetric methods when error terms follow an asymmetric distribution or are asymmetrically contaminated. Finally, a cardiovascular risk factors dataset is analyzed to illustrate the proposed method.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-05-12T05:12:52Z
      DOI: 10.1177/09622802211012012
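      To make the idea concrete, here is a minimal NumPy sketch of an asymmetric Tukey-type biweight loss with two tuning constants, one per residual sign. The function names and constants are illustrative; the paper's exact loss and its data-driven tuning procedure are not reproduced here.

```python
import numpy as np

def tukey_rho(r, c):
    # Standard Tukey biweight loss with tuning constant c:
    # rho(r) = (c^2/6)*(1 - (1 - (r/c)^2)^3) for |r| <= c, else c^2/6.
    r = np.atleast_1d(np.asarray(r, dtype=float))
    rho = np.full_like(r, c**2 / 6.0)          # constant loss beyond c
    inside = np.abs(r) <= c
    rho[inside] = (c**2 / 6.0) * (1.0 - (1.0 - (r[inside] / c) ** 2) ** 3)
    return rho

def asymmetric_tukey_rho(r, c_neg, c_pos):
    # Asymmetric variant: separate tuning constants for negative and
    # positive residuals (c_neg, c_pos are hypothetical parameter names).
    r = np.atleast_1d(np.asarray(r, dtype=float))
    return np.where(r < 0, tukey_rho(r, c_neg), tukey_rho(r, c_pos))

# A smaller c_pos down-weights positive residuals more aggressively.
print(asymmetric_tukey_rho([-4.0, -1.0, 0.0, 1.0, 4.0], c_neg=4.685, c_pos=2.0))
```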
       
  • Functional joint models for chronic kidney disease in kidney transplant
           recipients
    • Authors: Jianghu (James) Dong, Jiguo Cao, Jagbir Gill, Clifford Miles, Troy Plumb
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      This functional joint model paper is motivated by a chronic kidney disease study post kidney transplantation. The available kidney organ is a scarce resource because millions of end-stage renal patients are on the waiting list for kidney transplantation. The life of the transplanted kidney can be extended if the progression of the chronic kidney disease stage can be slowed, so a major research question is how to extend the transplanted kidney's life to maximize the usage of the scarce organ resource. The glomerular filtration rate is the best test to monitor the progression of kidney function, and it is a continuous longitudinal outcome with repeated measures. The patient's survival status is characterized by time-to-event outcomes including kidney transplant failure, death with kidney function, and death without kidney function. Few studies have simultaneously investigated these multiple clinical outcomes in chronic kidney disease patients based on a joint model. Therefore, this paper proposes a new functional joint model for this clinical chronic kidney disease study. The proposed joint models include a longitudinal sub-model with a flexible basis function for subject-level trajectories and a competing-risks sub-model for multiple time-to-event outcomes. Different association structures can be accommodated through a time-dependent function of shared random effects from the longitudinal process, or through the whole longitudinal history, in the competing-risks sub-model. The proposed joint model, which utilizes basis functions and a competing-risks sub-model, is an extension of standard linear joint models. The application results from the proposed joint model can supply useful clinical references for chronic kidney disease studies post kidney transplantation.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-05-10T02:39:07Z
      DOI: 10.1177/09622802211009265
       
  • Estimation of covariate effects on net survivals in the relative survival
           progressive illness-death model
    • Authors: Leyla Azarang, Roch Giorgi
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Recently, there has been considerable development in the relative survival field. In the absence of data on the cause of death, research has tended to focus on estimating the survival probability of a cancer (as a disease of interest). In many cancers, a nonfatal event that decreases the survival probability can occur. A few methods assess the role of prognostic factors for multiple types of clinical events while dealing with uncertainty about the cause of death. However, these methods require proportional hazards or Markov assumptions. In practice, one or both of these assumptions might be violated. Violation of the proportional hazards assumption can lead to estimates that are biased and difficult to interpret, and violation of the Markov assumption results in inconsistent estimators. In this work, we propose a semi-parametric approach to estimate the possibly time-varying regression coefficients in the likely non-Markov relative survival progressive illness-death model. The performance of the proposed estimator is investigated through simulations. We illustrate our approach using data from a study on rectal cancer resected for cure, conducted in two French population-based digestive cancer registries.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-05-08T07:00:35Z
      DOI: 10.1177/09622802211003608
       
  • Survival models induced by zero-modified power series discrete frailty:
           Application with a melanoma data set
    • Authors: Katy C Molina, Vinicius F Calsavara, Vera D Tomazella, Eder A Milani
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Survival models with a frailty term are presented as an extension of Cox's proportional hazards model, in which a random effect is introduced into the hazard function in a multiplicative form with the aim of modeling the unobserved heterogeneity in the population. Candidates for the frailty distribution are usually assumed to be continuous and non-negative. However, this assumption may not be true in some situations. In this paper, we consider a discretely distributed frailty model that allows units with zero frailty, which can be interpreted as having long-term survivors. We propose a new discrete frailty-induced survival model with a zero-modified power series family, which can be zero-inflated or zero-deflated depending on the parameter value. Parameters were estimated using the maximum likelihood method, and the performance of the proposed models was assessed through Monte Carlo simulation studies. Finally, the applicability of the proposed models is illustrated with a real melanoma cancer data set.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-05-06T10:54:52Z
      DOI: 10.1177/09622802211011187
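      As background to the cure-fraction interpretation: under a multiplicative frailty hazard lambda(t|Z) = Z*lambda0(t), the population survival function is the probability generating function (PGF) of the frailty Z evaluated at exp(-Lambda0(t)), so P(Z = 0) is exactly the long-term survivor fraction. A small sketch under one hypothetical zero-modified Poisson parameterization (a point mass pi0 mixed with a Poisson), with a constant baseline hazard chosen purely for illustration:

```python
import numpy as np

def zm_poisson_pgf(s, mu, pi0):
    # PGF of a zero-modified Poisson frailty: extra point mass pi0 at zero
    # mixed with a Poisson(mu) component (a hypothetical parameterization).
    return pi0 + (1.0 - pi0) * np.exp(mu * (s - 1.0))

def marginal_survival(t, mu, pi0, lambda0=0.1):
    # Population survival under hazard Z*lambda0 with Z ~ zero-modified Poisson:
    # S_pop(t) = E[exp(-Z * Lambda0(t))] = G(exp(-Lambda0(t))),
    # here with a constant baseline hazard lambda0 (illustrative choice).
    cum_haz = lambda0 * np.asarray(t, dtype=float)
    return zm_poisson_pgf(np.exp(-cum_haz), mu, pi0)

t = np.linspace(0, 100, 5)
print(marginal_survival(t, mu=1.5, pi0=0.2))
# As t grows, survival plateaus at P(Z = 0) = pi0 + (1 - pi0)*exp(-mu),
# the long-term survivor (cure) fraction:
print(0.2 + 0.8 * np.exp(-1.5))
```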
       
  • Improving the estimation of the COVID-19 effective reproduction number
           using nowcasting
    • Authors: Joaquin Salas
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      As interactions between people increase, the menace of COVID-19 outbreaks materializes, and there is an inclination to apply lockdowns. In this context, it is essential to have easy-to-use indicators for people to employ as a reference. The effective reproduction number of confirmed positives, Rt, fulfills such a role. This paper proposes a data-driven approach to nowcast Rt based on the statistical behavior of previous observations. As more information arrives, the method naturally becomes more precise about the final count of confirmed positives. Our method's strength is that it is based on the self-reported onset of symptoms, in contrast to other methods that use the daily report count to infer this quantity. We show that our approach may serve as the foundation for useful epidemic-tracking indicators.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-05-06T05:38:02Z
      DOI: 10.1177/09622802211008939
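      For orientation, the sketch below implements the generic renewal-equation estimator Rt = I_t / sum_s I_{t-s} w_s, a common starting point for effective reproduction number estimation. It is background only: the paper's contribution is the nowcasting adjustment of not-yet-complete onset counts, which is not shown here, and the case counts and generation-interval weights below are made up.

```python
import numpy as np

def rt_renewal(incidence, gen_interval):
    # Crude renewal-equation estimate R_t = I_t / sum_s I_{t-s} w_s,
    # where w is the generation-interval distribution.
    incidence = np.asarray(incidence, dtype=float)
    w = np.asarray(gen_interval, dtype=float)
    w = w / w.sum()                              # normalize to a pmf
    rt = np.full(len(incidence), np.nan)
    for t in range(1, len(incidence)):
        smax = min(t, len(w))
        lam = sum(incidence[t - s] * w[s - 1] for s in range(1, smax + 1))
        if lam > 0:
            rt[t] = incidence[t] / lam
    return rt

cases = [5, 8, 12, 18, 25, 34, 41, 50]   # hypothetical daily onset counts
gi = [0.2, 0.4, 0.25, 0.15]              # hypothetical generation-interval pmf
print(rt_renewal(cases, gi).round(2))
```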
       
  • Testing for treatment effect in covariate-adaptive randomized trials with
           generalized linear models and omitted covariates
    • Authors: Yang Li, Wei Ma, Yichen Qin, Feifang Hu
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Concerns have been expressed over the validity of statistical inference under covariate-adaptive randomization despite its extensive use in clinical trials. In the literature, the inferential properties under covariate-adaptive randomization have mainly been studied for continuous responses; in particular, it is well known that the usual two-sample t-test for treatment effect is typically conservative. This phenomenon of invalid tests has also been found for generalized linear models without adjusting for the covariates, and is sometimes more worrisome due to inflated Type I error. The purpose of this study is to examine the unadjusted test for treatment effect under generalized linear models and covariate-adaptive randomization. For a large class of covariate-adaptive randomization methods, we obtain the asymptotic distribution of the test statistic under the null hypothesis and derive the conditions under which the test is conservative, valid, or anti-conservative. Several commonly used generalized linear models, such as logistic regression and Poisson regression, are discussed in detail. An adjustment method is also proposed to achieve a valid size based on the asymptotic results. Numerical studies confirm the theoretical findings and demonstrate the effectiveness of the proposed adjustment method.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-04-26T10:36:05Z
      DOI: 10.1177/09622802211008206
       
  • Using multiple imputation to classify potential outcomes subgroups
    • Authors: Yun Li, Irina Bondarenko, Michael R Elliott, Timothy P Hofer, Jeremy MG Taylor
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      With medical tests becoming increasingly available, concerns about over-testing, over-treatment and health care costs increase dramatically. Hence, it is important to understand the influence of testing on treatment selection in general practice. Most statistical methods focus on average effects of testing on treatment decisions. However, this may be ill-advised, particularly for patient subgroups that tend not to benefit from such tests. Furthermore, missing data are common, representing large and often unaddressed threats to the validity of most statistical methods. Finally, it is often desirable to conduct analyses that can be interpreted causally. Using the Rubin Causal Model framework, we propose to classify patients into four potential outcomes subgroups, defined by whether or not a patient's treatment selection is changed by the test result and by the direction in which the test result changes treatment selection. This subgroup classification naturally captures the differential influence of medical testing on treatment selections for different patients, which can suggest targets to improve the utilization of medical tests. We can then examine patient characteristics associated with patients' potential outcomes subgroup memberships. We use multiple imputation methods to simultaneously impute the missing potential outcomes as well as regular missing values. This approach can also provide estimates of many traditional causal quantities of interest. We find that explicitly incorporating causal inference assumptions into the multiple imputation process can improve the precision of some causal estimates of interest. We also find that bias can occur when the potential outcomes conditional independence assumption is violated; sensitivity analyses are proposed to assess the impact of this violation. We applied the proposed methods to examine the influence of the 21-gene assay, the most commonly used genomic test in the United States, on chemotherapy selection among breast cancer patients.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-04-22T09:43:17Z
      DOI: 10.1177/09622802211002866
       
  • Estimation of required sample size for external validation of risk models
           for binary outcomes
    • Authors: Menelaos Pavlou, Chen Qu, Rumana Z Omar, Shaun R Seaman, Ewout W Steyerberg, Ian R White, Gareth Ambler
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Risk-prediction models for health outcomes are used in practice as part of clinical decision-making, and it is essential that their performance be externally validated. An important aspect in the design of a validation study is choosing an adequate sample size. In this paper, we investigate the sample size requirements for validation studies with binary outcomes to estimate measures of predictive performance (the C-statistic for discrimination, and the calibration slope and calibration-in-the-large). We aim for sufficient precision in the estimated measures. In addition, we investigate the sample size needed to achieve sufficient power to detect a difference from a target value. Under normality assumptions on the distribution of the linear predictor, we obtain simple estimators for sample size calculations based on the measures above. Simulation studies show that the estimators perform well for common values of the C-statistic and outcome prevalence when the linear predictor is marginally Normal. Their performance deteriorates only slightly when the normality assumptions are violated. We also propose estimators which do not require normality assumptions but instead require specification of the marginal distribution of the linear predictor and the use of numerical integration. These estimators were also seen to perform very well under marginal normality. Our sample size equations require a specified standard error (SE) and the anticipated C-statistic and outcome prevalence. The sample size requirement varies according to the prognostic strength of the model, outcome prevalence, choice of the performance measure and study objective. For example, to achieve an SE 
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-04-21T12:55:31Z
      DOI: 10.1177/09622802211007522
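      A quick Monte Carlo check of the precision question the paper treats analytically: with the linear predictor marginally Normal, one can simulate validation samples of size n and observe how the SE of the C-statistic shrinks. The parameters mu and sigma below are illustrative knobs for prevalence and discrimination, not values from the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

def c_statistic_se(n, mu=-1.0, sigma=1.0, reps=500):
    # Monte Carlo SE of the validation-sample C-statistic when the linear
    # predictor is marginally Normal(mu, sigma^2); mu and sigma jointly
    # control outcome prevalence and discrimination.
    cs = []
    for _ in range(reps):
        lp = rng.normal(mu, sigma, n)
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lp)))
        if 0 < y.sum() < n:                # need both classes present
            cs.append(roc_auc_score(y, lp))
    return np.std(cs)

for n in (200, 500, 1000):                 # SE shrinks roughly like 1/sqrt(n)
    print(n, round(c_statistic_se(n), 4))
```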
       
  • Approximate Bayesian inference for joint linear and partially linear
           modeling of longitudinal zero-inflated count and time to event data
    • Authors: T Baghfalaki, M Ganjali
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Joint modeling of zero-inflated count and time-to-event data is usually performed by applying a shared random effect model. This kind of joint model can be considered a latent Gaussian model. In this paper, integrated nested Laplace approximation (INLA) is used to perform approximate Bayesian inference for the joint modeling. We propose a zero-inflated hurdle model under a Poisson or negative binomial distributional assumption as the sub-model for count data. Also, a Weibull model is used as the survival-time sub-model. In addition to the usual joint linear model, a joint partially linear model is considered to take into account the non-linear effect of time on the longitudinal count response. The performance of the method is investigated in simulation studies, and its results are compared with the usual Bayesian approach of Markov chain Monte Carlo (MCMC). We also apply the proposed method to analyze two real data sets: the first from a longitudinal study of pregnancy, and the second from an HIV study.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-04-19T06:55:38Z
      DOI: 10.1177/09622802211002868
       
  • Predictive performance of machine and statistical learning methods: Impact
           of data-generating processes on external validity in the “large N, small
           p” setting
    • Authors: Peter C Austin, Frank E Harrell, Ewout W Steyerberg
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Machine learning approaches are increasingly suggested as tools to improve prediction of clinical outcomes. We aimed to identify when machine learning methods perform better than a classical learning method. To this end, we examined the impact of the data-generating process on the relative predictive accuracy of six machine and statistical learning methods: bagged classification trees, stochastic gradient boosting machines using trees as the base learners, random forests, the lasso, ridge regression, and unpenalized logistic regression. We performed simulations in two large cardiovascular datasets, each of which comprised an independent derivation and validation sample collected from temporally distinct periods: patients hospitalized with acute myocardial infarction (AMI, n = 9484 vs. n = 7000) and patients hospitalized with congestive heart failure (CHF, n = 8240 vs. n = 7608). We used six data-generating processes based on each of the six learning methods to simulate outcomes in the derivation and validation samples, based on 33 and 28 predictors in the AMI and CHF data sets, respectively. We applied the six prediction methods in each of the simulated derivation samples and evaluated performance in the simulated validation samples according to the c-statistic, generalized R2, Brier score, and calibration. While no method had uniformly superior performance across all data-generating processes and performance metrics, (un)penalized logistic regression and boosted trees tended to have superior performance to the other methods across a range of data-generating processes and performance metrics. This study confirms that classical statistical learning methods perform well in low-dimensional settings with large data sets.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-04-13T07:13:07Z
      DOI: 10.1177/09622802211002867
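      A toy version of the kind of comparison the study performs, using one synthetic linear data-generating process in the "large N, small p" regime (the paper instead uses six data-generating processes built from real cardiovascular registry data and additional metrics):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split

# One linear DGP, large N, small p; split into derivation and validation.
X, y = make_classification(n_samples=15000, n_features=30, n_informative=10,
                           random_state=0)
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.5,
                                              random_state=0)

for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("boosted trees", GradientBoostingClassifier(random_state=0))]:
    p = model.fit(X_dev, y_dev).predict_proba(X_val)[:, 1]
    print(f"{name:14s} c-statistic={roc_auc_score(y_val, p):.3f} "
          f"Brier={brier_score_loss(y_val, p):.3f}")
```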
       
  • A method for systematically ranking therapeutic drug candidates using
           multiple uncertain screening criteria
    • Authors: Xubiao Peng, Ebrima Gibbs, Judith M Silverman, Neil R Cashman, Steven S Plotkin
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Multiple different screening tests for candidate leads in drug development may often yield conflicting or ambiguous results, sometimes making the selection of leads a nontrivial maximum-likelihood ranking problem. Here, we apply methods from the field of multiple criteria decision making (MCDM) to the problem of screening candidate antibody therapeutics. We employ the SMAA-TOPSIS method to rank a large cohort of antibodies using up to eight weighted screening criteria, in order to find lead candidate therapeutics for Alzheimer's disease, and determine their robustness both to uncertainty in screening measurements and to uncertainty in the user-defined weights of importance attributed to each screening criterion. To choose lead candidates and measure the confidence in their ranking, we propose two new quantities, the Retention Probability and the Topness, as robust ranking measures. This method may enable more systematic screening of candidate therapeutics when multi-variate screening data that distinguish candidates become difficult to process intuitively, so that additional candidates may be exposed as potential leads, increasing the likelihood of success in downstream clinical trials. The method properly identifies true positives and true negatives from synthetic data, its predictions correlate well with clinically approved antibodies vs. those still in trials, and it allows for ranking analyses using antibody developability profiles in the literature. We provide a webserver where users can apply the method to their own data: http://bjork.phas.ubc.ca.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-04-13T01:04:00Z
      DOI: 10.1177/09622802211002861
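      For readers unfamiliar with SMAA-TOPSIS, the sketch below combines a plain TOPSIS closeness score with SMAA-style Monte Carlo sampling of unknown criterion weights to produce rank-acceptability indices. All criteria are treated as benefit-type and the data are random placeholders; the paper's Retention Probability and Topness measures are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def topsis_scores(X, w):
    # Closeness coefficients for a decision matrix X (alternatives x criteria,
    # all assumed benefit-type here) and a weight vector w.
    V = w * X / np.linalg.norm(X, axis=0)        # weighted, vector-normalized
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

def smaa_rank_acceptability(X, n_draws=2000):
    # Rank-acceptability indices: how often each alternative attains each
    # rank when weights are drawn uniformly from the simplex (SMAA-style).
    m = X.shape[0]
    acc = np.zeros((m, m))
    for _ in range(n_draws):
        w = rng.dirichlet(np.ones(X.shape[1]))
        order = np.argsort(-topsis_scores(X, w))  # best first
        for rank, alt in enumerate(order):
            acc[alt, rank] += 1
    return acc / n_draws

X = rng.uniform(size=(5, 4))   # 5 hypothetical candidates, 4 screening criteria
print(smaa_rank_acceptability(X).round(2))
```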
       
  • Calibration of surgical tools using multilevel modeling with LINEX loss
           function: Theory and experiment
    • Authors: Parisa Azimaee, Mohammad Jafari Jozani, Yaser Maddahi
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Quantifying the tool–tissue interaction forces in surgery can be used in the training process of novice surgeons to help them better handle surgical tools and avoid exerting excessive forces. A significant challenge concerns the development of proper statistical learning techniques to model the relationship between the true force exerted on the tissue and several outputs read from sensors mounted on the surgical tools. We propose a nonparametric bootstrap technique and a Bayesian multilevel modeling methodology to estimate the true forces. We use the linear exponential (LINEX) loss function to asymmetrically penalize over- and underestimation of the forces applied to the tissue. We incorporate the direction of the force as a group factor in our analysis. A weighted approach is used to account for the nonhomogeneity of voltages read from the surgical tool. Our proposed Bayesian multilevel models provide estimates that are more accurate than those under the maximum likelihood and restricted maximum likelihood approaches. Moreover, confidence bounds are much narrower, and the biases and root mean squared errors are significantly smaller, in our multilevel models with the linear exponential loss function.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-04-13T01:03:46Z
      DOI: 10.1177/09622802211003620
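      The LINEX loss itself is simple to state: L(delta) = b*(exp(a*delta) - a*delta - 1), which penalizes one sign of the estimation error exponentially and the other roughly linearly. A minimal sketch (the constants a and b are illustrative):

```python
import numpy as np

def linex_loss(delta, a=1.0, b=1.0):
    # LINEX loss b*(exp(a*delta) - a*delta - 1) for estimation error delta.
    # With a > 0, overestimation (delta > 0) is penalized exponentially while
    # underestimation grows roughly linearly; a < 0 flips the asymmetry.
    return b * (np.exp(a * delta) - a * delta - 1.0)

errors = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print(linex_loss(errors, a=1.0))   # asymmetric: loss(+2) >> loss(-2)
```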
       
  • Mediation analysis for mixture Cox proportional hazards cure models
    • Authors: Xiaoxiao Zhou, Xinyuan Song
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Mediation analysis aims to decompose a total effect into specific pathways and investigate the underlying causal mechanism. Although existing methods have been developed to conduct mediation analysis in the context of survival models, none of these methods accommodates the existence of a substantial proportion of subjects who never experience the event of interest, even if the follow-up is sufficiently long. In this study, we consider mediation analysis for mixture Cox proportional hazards cure models that cope with the cure fraction problem. Path-specific effects on the restricted mean survival time and the survival probability are assessed by introducing a partially latent group indicator and applying the mediation formula approach in a three-stage mediation framework. A Bayesian approach with P-splines for approximating the baseline hazard function is developed to conduct the analysis. The satisfactory performance of the proposed method is verified through simulation studies. An application to the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset investigates the causal effects of the APOE-ε4 allele on AD progression.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-04-09T11:36:55Z
      DOI: 10.1177/09622802211003113
       
  • A conditional approach for the receiver operating characteristic curve
           construction to evaluate diagnostic test performance in a family-matched
           case–control design
    • Authors: Yalda Zarnegarnia, Shari Messinger
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Receiver operating characteristic curves are widely used in medical research to illustrate biomarker performance in binary classification, particularly with respect to disease or health status. Study designs that include related subjects, such as siblings, usually have common environmental or genetic factors giving rise to correlated biomarker data. The design could be used to improve detection of biomarkers informative of increased risk, allowing initiation of treatment to stop or slow disease progression. Available methods for receiver operating characteristic construction do not take advantage of correlation inherent in this design to improve biomarker performance. This paper will briefly review some developed methods for receiver operating characteristic curve estimation in settings with correlated data from case–control designs and will discuss the limitations of current methods for analyzing correlated familial paired data. An alternative approach using conditional receiver operating characteristic curves will be demonstrated. The proposed approach will use information about correlation among biomarker values, producing conditional receiver operating characteristic curves that evaluate the ability of a biomarker to discriminate between affected and unaffected subjects in a familial paired design.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-04-07T04:42:44Z
      DOI: 10.1177/0962280221995956
       
  • Exposure misclassification in propensity score-based time-to-event data
           analysis
    • Authors: Yingrui Yang, Molin Wang
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      In epidemiology, identifying the effect of exposure variables on a time-to-event outcome is a classical research area of practical importance. Incorporating the propensity score in the Cox regression model, as a measure to control for confounding, has certain advantages when the outcome is rare. However, in situations involving exposure measured with moderate to substantial error, identifying the exposure effect using the propensity score in Cox models remains a challenging yet unresolved problem. In this paper, we propose an estimating equation method to correct for the bias in estimated exposure-outcome associations caused by exposure misclassification. We also discuss the asymptotic properties and derive the asymptotic variances of the proposed estimators. We conduct a simulation study to evaluate the performance of the proposed estimators in various settings. As an illustration, we apply our method to correct for the misclassification-caused bias in estimating the association of PM2.5 level with lung cancer mortality using a nationwide prospective cohort, the Nurses' Health Study. The proposed methodology can be applied using our user-friendly R program published online.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-04-07T04:42:44Z
      DOI: 10.1177/0962280221998410
       
  • Sample size estimation for modified Poisson analysis of cluster randomized
           trials with a binary outcome
    • Authors: Fan Li, Guangyu Tong
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      The modified Poisson regression coupled with a robust sandwich variance has become a viable alternative to log-binomial regression for estimating the marginal relative risk in cluster randomized trials. However, a corresponding sample size formula for relative risk regression via the modified Poisson model is currently not available for cluster randomized trials. Through analytical derivations, we show that there is no loss of asymptotic efficiency for estimating the marginal relative risk via the modified Poisson regression relative to the log-binomial regression. This finding holds both under the independence working correlation and under the exchangeable working correlation provided a simple modification is used to obtain the consistent intraclass correlation coefficient estimate. Therefore, the sample size formulas developed for log-binomial regression naturally apply to the modified Poisson regression in cluster randomized trials. We further extend the sample size formulas to accommodate variable cluster sizes. An extensive Monte Carlo simulation study is carried out to validate the proposed formulas. We find that the proposed formulas have satisfactory performance across a range of cluster size variability, as long as suitable finite-sample corrections are applied to the sandwich variance estimator and the number of clusters is at least 10. Our findings also suggest that the sample size estimate under the exchangeable working correlation is more robust to cluster size variability, and recommend the use of an exchangeable working correlation over an independence working correlation for both design and analysis. The proposed sample size formulas are illustrated using the Stop Colorectal Cancer (STOP CRC) trial.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-04-07T04:42:42Z
      DOI: 10.1177/0962280221990415
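      As a rough illustration of the logic, the sketch below computes clusters per arm on the log relative-risk scale by inflating an individual-level variance with a design effect, using the commonly cited approximation 1 + ((cv^2 + 1)m - 1)*icc for variable cluster sizes. This is a generic back-of-the-envelope calculation under stated assumptions, not the paper's validated formula.

```python
import numpy as np
from scipy.stats import norm

def clusters_per_arm_log_rr(p1, p2, m, icc, cv=0.0, alpha=0.05, power=0.8):
    # Rough clusters-per-arm calculation on the log relative-risk scale.
    # var_ind approximates the per-subject-pair variance of log(RR-hat);
    # the design effect handles clustering and cluster-size variation (cv).
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_ind = (1 - p1) / p1 + (1 - p2) / p2
    deff = 1 + ((cv**2 + 1) * m - 1) * icc
    k = z**2 * var_ind * deff / (m * np.log(p1 / p2) ** 2)
    return int(np.ceil(k))

# e.g. mean cluster size 50, ICC 0.02, moderate cluster-size variation
print(clusters_per_arm_log_rr(p1=0.25, p2=0.15, m=50, icc=0.02, cv=0.4))
```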
       
  • A review of multistate modelling approaches in monitoring disease
           progression: Bayesian estimation using the Kolmogorov-Chapman forward
           equations
    • Authors: Zvifadzo Matsena Zingoni, Tobias F Chirwa, Jim Todd, Eustasius Musenge
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      There are numerous fields of science in which multistate models are used, including biomedical research and health economics. In biomedical studies, these stochastic continuous-time models are used to describe the time-to-event life history of an individual through a flexible framework for longitudinal data. The multistate framework can describe more than one possible time-to-event outcome for a single individual. The standard estimation quantities in multistate models are transition probabilities and transition rates, which can be mapped through the Kolmogorov-Chapman forward equations from the Bayesian estimation perspective. Most multistate models assume the Markov property and time homogeneity; however, if these assumptions are violated, an extension to non-Markovian and time-varying transition rates is possible. This manuscript extends previous reviews of the various types of multistate models, their assumptions, methods of estimation, and the data features compatible with fitting multistate models. We highlight the contrast between the frequentist (maximum likelihood estimation) and Bayesian estimation approaches in the multistate modeling framework and point out where the latter is advantageous. A partially observed and aggregated dataset from the Zimbabwe national ART program is used to illustrate the use of the Kolmogorov-Chapman forward equations. Transition rates from a three-stage reversible multistate model based on viral load measurements were estimated in WinBUGS and reported.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-04-07T04:22:33Z
      DOI: 10.1177/0962280221997507
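      For a time-homogeneous chain, the Kolmogorov(-Chapman) forward equations dP/dt = P(t)Q with P(0) = I are solved by the matrix exponential P(t) = exp(tQ). A minimal sketch with a hypothetical three-state reversible generator (the rates are illustrative, not estimates from the Zimbabwe ART data):

```python
import numpy as np
from scipy.linalg import expm

# Generator for a hypothetical 3-state reversible model; each row sums
# to zero, with off-diagonal entries giving transition rates.
Q = np.array([[-0.30,  0.30,  0.00],
              [ 0.10, -0.25,  0.15],
              [ 0.00,  0.20, -0.20]])

# P(t) = expm(t*Q) solves the forward equations dP/dt = P(t) Q, P(0) = I;
# entry (i, j) is the probability of being in state j at time t given
# state i at time 0, so each row is a probability distribution.
for t in (1.0, 5.0):
    P = expm(t * Q)
    print(f"t = {t}:\n{P.round(3)}")
```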
       
  • Quantile regression models for survival data with missing censoring
           indicators
    • Authors: Zhiping Qiu, Huijuan Ma, Jianwei Chen, Gregg E Dinse
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      The quantile regression model has increasingly become a useful approach for analyzing survival data due to its easy interpretation and flexibility in exploring the dynamic relationship between a time-to-event outcome and the covariates. In this paper, we consider the quantile regression model for survival data with missing censoring indicators. Based on the augmented inverse probability weighting technique, two weighted estimating equations are developed, and corresponding easily implemented algorithms are suggested to solve them. Asymptotic properties of the resulting estimators and resampling-based inference procedures are established. Finally, the finite-sample performance of the proposed approaches is investigated in simulation studies and a real data application.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-04-07T04:22:32Z
      DOI: 10.1177/0962280221995986
       
  • Net benefit separation and the determination curve: A probabilistic
           framework for cost-effectiveness estimation
    • Authors: Andrew J Spieker, Nicholas Illenberger, Jason A Roy, Nandita Mitra
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Considerations regarding clinical effectiveness and cost are essential in comparing the overall value of two treatments. There has been growing interest in methodology to integrate cost and effectiveness measures in order to inform policy and promote adequate resource allocation. The net monetary benefit aggregates information on differences in mean cost and clinical outcomes; the cost-effectiveness acceptability curve was developed to characterize the extent to which the strength of evidence regarding net monetary benefit changes with fluctuations in the willingness-to-pay threshold. Methods to derive insights from characteristics of the cost/clinical outcomes besides mean differences remain undeveloped but may also be informative. We propose a novel probabilistic measure of cost-effectiveness based on the stochastic ordering of the individual net benefit distribution under each treatment. Our approach is able to accommodate features frequently encountered in observational data including confounding and censoring, and complements the net monetary benefit in the insights it provides. We conduct a range of simulations to evaluate finite-sample performance and illustrate our proposed approach using simulated data based on a study of endometrial cancer patients.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-04-07T04:22:31Z
      DOI: 10.1177/0962280221995972
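      To fix ideas on individual net benefit, the sketch below computes NB = lambda*effect - cost per patient under two treatments, then contrasts the usual incremental net monetary benefit with a simple probabilistic comparison of the two NB distributions. All numbers are fabricated, and the unadjusted probability shown is only a stand-in for the paper's net benefit separation, which additionally handles confounding and censoring.

```python
import numpy as np

rng = np.random.default_rng(3)

wtp = 50_000                         # willingness-to-pay per unit of effect
# Hypothetical per-patient (effect, cost) draws under two treatments
eff1, cost1 = rng.normal(1.9, 0.5, 5000), rng.gamma(4, 6000, 5000)
eff0, cost0 = rng.normal(1.7, 0.5, 5000), rng.gamma(4, 5000, 5000)

nb1 = wtp * eff1 - cost1             # individual net (monetary) benefit
nb0 = wtp * eff0 - cost0

# Difference in mean net benefit: the usual incremental net monetary benefit
print(nb1.mean() - nb0.mean())

# A probabilistic contrast comparing whole NB distributions: the chance a
# random patient under treatment 1 attains higher net benefit than one
# under treatment 0 (an unadjusted stand-in for net benefit separation).
print(np.mean(nb1[:, None] > nb0[None, :]))
```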
       
  • A unified approach to power and sample size determination for log-rank
           tests under proportional and nonproportional hazards
    • Authors: Yongqiang Tang
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Log-rank tests have been widely used to compare two survival curves in biomedical research. We describe a unified approach to power and sample size calculation for the unweighted and weighted log-rank tests in superiority, noninferiority and equivalence trials. It is suitable for both time-driven and event-driven trials. A numerical algorithm is suggested. It allows flexible specification of the patient accrual distribution, baseline hazards, and proportional or nonproportional hazards patterns, and enables efficient sample size calculation when there are a range of choices for the patient accrual pattern and trial duration. A confidence interval method is proposed for the trial duration of an event-driven trial. We point out potential issues with several popular sample size formulae. Under proportional hazards, the power of a survival trial is commonly believed to be determined by the number of observed events. The belief is roughly valid for noninferiority and equivalence trials with similar survival and censoring distributions between two groups, and for superiority trials with balanced group sizes. In unbalanced superiority trials, the power depends also on other factors such as data maturity. Surprisingly, the log-rank test usually yields slightly higher power than the Wald test from the Cox model under proportional hazards in simulations. We consider various nonproportional hazards patterns induced by delayed effects, cure fractions, and/or treatment switching. Explicit power formulae are derived for the combination test that takes the maximum of two or more weighted log-rank tests to handle uncertain nonproportional hazards patterns. Numerical examples are presented for illustration.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-04-05T07:09:57Z
      DOI: 10.1177/0962280220988570
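      As a reference point for the proportional-hazards case discussed above, the classical Schoenfeld approximation ties power to the number of events: d = (z_{1-alpha/2} + z_{1-beta})^2 / (p(1-p) log^2(HR)), with p the allocation fraction. A sketch (this textbook baseline is among the formulas the paper generalizes; it does not cover the nonproportional-hazards settings):

```python
import numpy as np
from scipy.stats import norm

def required_events_schoenfeld(hr, alloc=0.5, alpha=0.05, power=0.8):
    # Schoenfeld approximation for the unweighted log-rank test under
    # proportional hazards: required events, not required patients.
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    d = z**2 / (alloc * (1 - alloc) * np.log(hr) ** 2)
    return int(np.ceil(d))

print(required_events_schoenfeld(hr=0.75))  # ~380 events for a 25% hazard reduction
```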
       
  • Glucodensities: A new representation of glucose profiles using
           distributional data analysis
    • Authors: Marcos Matabuena, Alexander Petersen, Juan C Vidal, Francisco Gude
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Biosensor data have the potential to improve disease control and detection. However, the analysis of these data under free-living conditions is not feasible with current statistical techniques. To address this challenge, we introduce a new functional representation of biosensor data, termed the glucodensity, together with a data analysis framework based on distances between glucodensities. The new data analysis procedure is illustrated through an application in diabetes with continuous glucose monitoring (CGM) data. In this domain, we show marked improvement with respect to state-of-the-art analysis methods. In particular, our findings demonstrate that (i) the glucodensity possesses an extraordinary clinical sensitivity to capture the typical biomarkers used in standard clinical practice in diabetes; (ii) previous biomarkers cannot accurately predict the glucodensity, so the latter is a richer source of information; and (iii) the glucodensity is a natural generalization of the time-in-range metric, the gold standard in the handling of CGM data. Furthermore, the new method overcomes many of the drawbacks of time-in-range metrics and provides more in-depth insight into assessing glucose metabolism.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-03-24T08:43:10Z
      DOI: 10.1177/0962280221998064
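      A glucodensity is, in essence, the distribution of a subject's own glucose readings, analyzed via distances between distributions. The toy sketch below builds such a representation from simulated CGM-like draws, computes a 1-Wasserstein distance between two subjects, and shows time in range as one functional of the same distribution; the data and thresholds are illustrative only.

```python
import numpy as np
from scipy.stats import gaussian_kde, wasserstein_distance

rng = np.random.default_rng(1)

# Hypothetical CGM readings (mg/dL) for two subjects; real glucodensities
# come from dense free-living monitoring, not toy normal draws.
subj_a = rng.normal(110, 20, size=1000)
subj_b = rng.normal(145, 35, size=1000)

# A subject's "glucodensity": a density estimate of their own readings.
grid = np.linspace(40, 300, 261)
dens_a = gaussian_kde(subj_a)(grid)
print("mode of subject A's glucodensity:", grid[dens_a.argmax()])

# Distances between subjects' glucose distributions (here 1-Wasserstein)
# can feed distance-based regression, clustering, or hypothesis tests.
print("W1 distance A vs B:", wasserstein_distance(subj_a, subj_b))

# Time in range (70-180 mg/dL) is one functional of the same distribution.
print("subject A time in range:", np.mean((subj_a >= 70) & (subj_a <= 180)))
```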
       
  • Variable selection for causal mediation analysis using LASSO-based methods
    • Authors: Zhaoxin Ye, Yeying Zhu, Donna L Coffman
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Causal mediation effect estimates can be obtained from marginal structural models using inverse probability weighting with appropriate weights. In order to compute the weights, treatment and mediator propensity score models need to be fitted first. If the covariates are high-dimensional, parsimonious propensity score models can be developed by regularization methods including LASSO and its variants. Furthermore, in a mediation setup, more efficient direct or indirect effect estimators can be obtained by using outcome-adaptive LASSO to select variables for the propensity score models by incorporating the outcome information. A simulation study is conducted to assess how different regularization methods affect the performance of estimated natural direct and indirect effect odds ratios. Our simulation results show that regularizing propensity score models by outcome-adaptive LASSO can improve the efficiency of the natural effect estimators, and that by optimizing covariate balance, bias can be reduced in most cases. The regularization methods are then applied to the MIMIC-III database, an ICU database developed by MIT.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-03-23T07:58:41Z
      DOI: 10.1177/0962280221997505
       
  • Quantile regression on inactivity time
    • Authors: Lauren C Balmert, Ruosha Li, Limin Peng, Jong-Hyeon Jeong
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      The inactivity time, or lost lifespan specifically for mortality data, concerns the time from the occurrence of an event of interest to the current time point, and has recently emerged as a new summary measure for the cumulative information inherent in time-to-event data. This summary measure provides several benefits over traditional methods, including more straightforward interpretation and less sensitivity to heavy censoring. However, no systematic modeling approach to inference on quantiles of the inactivity time exists in the literature. In this paper, we propose a semi-parametric regression method for the quantiles of the inactivity time distribution under right censoring. The consistency and asymptotic normality of the regression parameters are established. To avoid estimating the probability density function of the inactivity time distribution under censoring, we propose a computationally efficient method for estimating the variance–covariance matrix of the regression coefficient estimates. Simulation results are presented to validate the finite-sample properties of the proposed estimators and test statistics. The proposed method is illustrated with a real dataset from a clinical trial on breast cancer.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-03-20T06:44:38Z
      DOI: 10.1177/0962280221995977
       
  • Mediation effects that emulate a target randomised trial: Simulation-based
           evaluation of ill-defined interventions on multiple mediators
    • Authors: Margarita Moreno-Betancur, Paul Moran, Denise Becker, George C Patton, John B Carlin
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Many epidemiological questions concern potential interventions to alter the pathways presumed to mediate an association. For example, we consider a study that investigates the benefit of interventions in young adulthood for ameliorating the poorer mid-life psychosocial outcomes of adolescent self-harmers relative to their healthy peers. Two methodological challenges arise. First, mediation methods have hitherto mostly focused on the elusive task of discovering pathways, rather than on the evaluation of mediator interventions. Second, the complexity of such questions is invariably such that there are no well-defined mediator interventions (i.e. actual treatments, programs, etc.) for which data exist on the relevant populations, outcomes and time-spans of interest. Instead, researchers must rely on exposure (non-intervention) data, that is, on mediator measures such as depression symptoms for which the actual interventions that one might implement to alter them are not well defined. We propose a novel framework that addresses these challenges by defining mediation effects that map to a target trial of hypothetical interventions targeting multiple mediators for which we simulate the effects. Specifically, we specify a target trial addressing three policy-relevant questions, regarding the impacts of hypothetical interventions that would shift the mediators’ distributions (separately under various interdependence assumptions, jointly or sequentially) to user-specified distributions that can be emulated with the observed data. We then define novel interventional effects that map to this trial, simulating shifts by setting mediators to random draws from those distributions. We show that estimation using a g-computation method is possible under an expanded set of causal assumptions relative to inference with well-defined interventions, which reflects the lower level of evidence that is expected with ill-defined interventions. Application to the self-harm example in the Victorian Adolescent Health Cohort Study illustrates the value of our proposal for informing the design and evaluation of actual interventions in the future.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-03-20T06:44:19Z
      DOI: 10.1177/0962280221998409
       
  • Evaluating Bayesian adaptive randomization procedures with adaptive clip
           methods for multi-arm trials
    • Authors: Kim May Lee, J Jack Lee
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Bayesian adaptive randomization is a heuristic approach that aims to randomize more patients to the putatively superior arms based on the trend of the accrued data in a trial. Many statistical aspects of this approach have been explored and compared with other approaches, yet only a limited number of works have focused on improving its performance and providing guidance on its application to real trials. An undesirable property of this approach is that the procedure would randomize patients to an inferior arm in some circumstances, which has raised concerns about its application. Here, we propose an adaptive clip method to rectify the problem by incorporating a data-driven function to be used in conjunction with the Bayesian adaptive randomization procedure. This function aims to minimize the chance of assigning patients to inferior arms during the early part of the trial. Moreover, we propose a utility approach to facilitate the selection of a randomization procedure. A cost that reflects the penalty of assigning patients to the inferior arm(s) is incorporated into our utility function, along with the benefit to all patients, both within and beyond the trial. We illustrate the selection strategy for a wide range of scenarios.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-03-10T06:36:13Z
      DOI: 10.1177/0962280221995961
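      The sketch below shows plain Bayesian adaptive randomization for a two-arm binary-outcome trial with Beta posteriors, where the allocation probability is tilted by the posterior probability that an arm is best and then clipped to a fixed band. The paper's adaptive clip replaces this fixed band with a data-driven function, which is not reproduced here; gamma and clip are illustrative constants.

```python
import numpy as np

rng = np.random.default_rng(2)

def prob_arm1_better(s, n, draws=4000):
    # Posterior P(p1 > p0) under independent Beta(1, 1) priors,
    # with s = successes and n = totals per arm.
    p0 = rng.beta(1 + s[0], 1 + n[0] - s[0], draws)
    p1 = rng.beta(1 + s[1], 1 + n[1] - s[1], draws)
    return np.mean(p1 > p0)

def bar_allocation(s, n, gamma=0.5, clip=0.1):
    # Allocation probability for arm 1: the posterior probability of being
    # best, power-transformed by gamma, then clipped to [clip, 1 - clip].
    pb = prob_arm1_better(s, n)
    raw = pb**gamma / (pb**gamma + (1 - pb) ** gamma)
    return float(np.clip(raw, clip, 1 - clip))

# e.g. 5/20 vs 12/20 successes: allocation tilts toward arm 1 but stays clipped
print(bar_allocation(s=(5, 12), n=(20, 20)))
```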
       
  • Marginal analysis of bivariate mixed responses with measurement error and
           misclassification
    • Authors: Qihuang Zhang, Grace Y Yi
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Bivariate responses with mixed continuous and binary variables arise commonly in applications such as clinical trials and genetic studies. Statistical methods based on jointly modeling continuous and binary variables are available. However, such methods ignore the effects of response mismeasurement, a ubiquitous feature in applications. It is well documented that, in many settings, ignoring mismeasurement in variables usually results in biased estimation. In this paper, we consider a setting with a bivariate outcome vector containing a continuous component and a binary component, both subject to mismeasurement. We propose estimating equation approaches to handle measurement error in the continuous response and misclassification in the binary response simultaneously. The proposed estimators are consistent and robust to certain model misspecification, provided regularity conditions hold. Extensive simulation studies confirm that the proposed methods successfully correct the biases resulting from errors in variables under various settings. The proposed methods are applied to analyze the outbred Carworth Farms White mice data arising from a genome-wide association study.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-02-26T06:10:57Z
      DOI: 10.1177/0962280220983587
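
      The paper's estimating equations address a bivariate mixed outcome; as a simpler, related illustration of the misclassification-correction idea, the sketch below fits a univariate logistic model whose binary response is misclassified with assumed known sensitivity and specificity. This is not the authors' method, only the core bias-correction mechanism.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit

        rng = np.random.default_rng(2)
        n, se, sp = 5000, 0.9, 0.85      # assumed known sensitivity/specificity
        x = rng.normal(size=n)
        y = rng.binomial(1, expit(-0.5 + 1.0 * x))    # true binary response
        keep = rng.uniform(size=n) < np.where(y == 1, se, sp)
        y_star = np.where(keep, y, 1 - y)             # misclassified response

        def negloglik(beta, correct):
            p = expit(beta[0] + beta[1] * x)          # P(Y = 1 | x)
            # Observed-response probability under misclassification:
            # P(Y* = 1 | x) = se * p + (1 - sp) * (1 - p)
            q = se * p + (1 - sp) * (1 - p) if correct else p
            q = np.clip(q, 1e-12, 1 - 1e-12)
            return -np.sum(y_star * np.log(q) + (1 - y_star) * np.log(1 - q))

        naive = minimize(negloglik, [0.0, 0.0], args=(False,)).x
        adjusted = minimize(negloglik, [0.0, 0.0], args=(True,)).x
        print(f"naive slope {naive[1]:.2f} vs adjusted slope {adjusted[1]:.2f} (truth 1.0)")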
       
  • Calibrating validation samples when accounting for measurement error in
           intervention studies
    • Authors: Benjamin Ackerman, Juned Siddique, Elizabeth A Stuart
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Many lifestyle intervention trials depend on collecting self-reported outcomes, such as dietary intake, to assess the intervention’s effectiveness. Self-reported outcomes are subject to measurement error, which impacts treatment effect estimation. External validation studies measure both self-reported outcomes and accompanying biomarkers, and can be used to account for measurement error. However, in order to account for measurement error using an external validation sample, an assumption must be made that the inferences are transportable from the validation sample to the intervention trial of interest. This assumption does not always hold. In this paper, we propose an approach that adjusts the validation sample to better resemble the trial sample, and we also formally investigate when bias due to poor transportability may arise. Lastly, we examine the performance of the methods using simulation, and illustrate them using PREMIER, a lifestyle intervention trial measuring self-reported sodium intake as an outcome, and OPEN, a validation study measuring both self-reported diet and urinary biomarkers.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-02-23T11:43:39Z
      DOI: 10.1177/0962280220988574
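
      A compact sketch of the calibration idea: fit an (assumed) logistic membership model for trial versus validation on a shared covariate, then use odds weights so the validation sample resembles the trial when the measurement-error relationship is estimated. All models, variable names and parameter values are illustrative, not those of PREMIER or OPEN.

        import numpy as np
        from sklearn.linear_model import LogisticRegression, LinearRegression

        rng = np.random.default_rng(3)

        # Illustrative samples that differ on a shared covariate (e.g. age).
        n_t, n_v = 1000, 600
        x_trial = rng.normal(50, 8, n_t)       # trial covariate
        x_valid = rng.normal(42, 10, n_v)      # validation covariate
        truth   = 2.0 + 0.05 * x_valid + rng.normal(0, 1, n_v)   # biomarker
        selfrep = 0.5 + 0.9 * truth + rng.normal(0, 1, n_v)      # self-report

        # Membership model P(in trial | x), fit on the pooled covariates.
        X = np.concatenate([x_trial, x_valid]).reshape(-1, 1)
        z = np.concatenate([np.ones(n_t), np.zeros(n_v)])
        p = LogisticRegression().fit(X, z).predict_proba(
            x_valid.reshape(-1, 1))[:, 1]
        w = p / (1 - p)                 # odds weights: validation -> trial

        # Weighted measurement-error model (truth given self-report),
        # calibrated toward the trial's covariate distribution.
        cal = LinearRegression().fit(selfrep.reshape(-1, 1), truth, sample_weight=w)
        print("calibrated slope:", round(cal.coef_[0], 3))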
       
  • An adaptive seamless Phase 2-3 design with multiple endpoints
    • Authors: Man Jin, Pingye Zhang
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      The adaptive seamless Phase 2-3 design has been considered as one way to expedite a drug development program by allowing an ongoing Phase 2 trial to be expanded into a Phase 3 trial. Multiple endpoints are often tested when regulatory approval is pursued. Here we propose an adaptive seamless Phase 2-3 design with multiple endpoints, which expands an ongoing Phase 2 trial into a Phase 3 trial based on an intermediate endpoint for the adaptive decision and tests the endpoints with a powerful multiple-testing procedure. We prove that the proposed design preserves the familywise Type I error under a mild assumption that is expected to hold in practice. Simulations confirm the control of the familywise Type I error, and the design is illustrated with an example oncology trial.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-02-16T04:58:54Z
      DOI: 10.1177/0962280220986935
       
  • Harmonizing child mortality data at disparate geographic levels
    • Authors: Neal Marquez, Jon Wakefield
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      There is an increasing focus on reducing inequalities in health outcomes in developing countries. Subnational variation is of particular interest, with geographically indexed data being used to understand the spatial risk of detrimental outcomes and to identify who is at greatest risk. While some health surveys provide observations with associated geographic coordinates (point data), many others mask the locations and report only the strata (polygon information) within which the data reside (masked data). Harmonizing these data sources for spatial analysis has previously been attempted only with ad hoc methods, and a comparison of methods is lacking. In this paper, we present a new method for analyzing masked survey data that is consistent with the data-generating process. In addition, we critique two previously proposed approaches to analyzing masked data and illustrate that they are methodologically flawed. To validate our method, we compare our approach with previously formulated solutions in several realistic simulation environments in which the underlying structure of the risk field is known. We simulate samples from spatiotemporal fields in a way that mimics the sampling frame implemented in the most common health surveys in low- and middle-income countries, the Demographic and Health Surveys and the Multiple Indicator Cluster Surveys. In simulations, the newly proposed approach outperforms previous approaches in terms of minimizing error while increasing the precision of estimates. The approaches are then compared using child mortality data from the Dominican Republic, where our findings are reinforced. The ability to accurately increase the precision of child mortality estimates, and of health outcome estimates in general, by leveraging various types of data improves our ability to implement precision public health initiatives and better understand the landscape of geographic health inequalities.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-02-02T03:15:32Z
      DOI: 10.1177/0962280220988742
       
  • Propensity score analysis methods with balancing constraints: A Monte
           Carlo study
    • Authors: Yan Li, Liang Li
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Inverse probability weighting is an important propensity score weighting method for estimating the average treatment effect. Recent literature shows that it can easily be combined with covariate balancing constraints to reduce the detrimental effects of excessively large weights and to improve balance. Other methods derive weights that balance covariate distributions between the treatment groups without involving propensity scores. We conducted comprehensive Monte Carlo experiments to study whether covariate balancing constraints circumvent the need for correct propensity score model specification, and whether a propensity score model further improves estimation performance among methods that use similar covariate balancing constraints. We compared simple inverse probability weighting, two propensity score weighting methods with balancing constraints (covariate balancing propensity score and covariate balancing scoring rule), and two weighting methods with balancing constraints that do not use propensity scores (entropy balancing and kernel balancing; a sketch of entropy balancing follows below). We observed that correct specification of the propensity score model remains important even when the constraints effectively balance the covariates. We also observed evidence that, with similar covariate balance constraints, using a propensity score model improves estimation performance when the dimension of the covariates is large. These findings suggest that it is important to develop flexible, data-driven propensity score models that satisfy covariate balancing conditions.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-02-02T03:11:12Z
      DOI: 10.1177/0962280220983512
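
      A compact sketch of one of the compared methods, entropy balancing: solve the convex dual for control-unit weights whose weighted covariate means match the treated means, with no propensity model involved. The data-generating model here is an illustrative assumption.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)
        n = 2000
        X = rng.normal(size=(n, 3))
        tr = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))))
        Xt, Xc = X[tr == 1], X[tr == 0]
        target = Xt.mean(axis=0)                  # treated covariate means

        # Dual of entropy balancing: minimize log-sum-exp(lambda'Xc) minus
        # lambda'target; the optimal weights are softmax(lambda'Xc).
        def dual(lam):
            s = Xc @ lam
            return np.log(np.sum(np.exp(s - s.max()))) + s.max() - lam @ target

        lam = minimize(dual, np.zeros(3)).x
        w = np.exp(Xc @ lam)
        w /= w.sum()

        print("weighted control means:", np.round(w @ Xc, 3))
        print("treated means:         ", np.round(target, 3))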
       
  • Combining multiple biomarkers to linearly maximize the diagnostic accuracy
           under ordered multi-class setting
    • Authors: Jia Hua, Lili Tian
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      In both clinical studies and biomedical research, it is common practice to combine multiple biomarkers to improve overall diagnostic performance. Although a large number of statistical methods exist for biomarker combination under binary classification, research on this topic in the multi-class setting is sparse. The overall diagnostic accuracy, i.e. the sum of correct classification rates, directly measures the classification accuracy of the combined biomarkers. Hence, the overall accuracy can serve as an important objective function for biomarker combination, especially when the combined biomarkers are used for making a medical diagnosis. In this paper, we address the problem of combining multiple biomarkers to directly maximize the overall diagnostic accuracy by presenting several grid search methods and derivation-based methods (a toy grid search sketch follows below). A comprehensive simulation study compares the performance of these methods, and an ovarian cancer data set is analyzed at the end.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-02-01T10:38:03Z
      DOI: 10.1177/0962280220987587
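
      A toy version of the grid search idea for two biomarkers and three ordered classes: search over combination directions and cut-point pairs to maximize the sum of correct classification rates. The data, grids and class means are illustrative assumptions.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(5)
        # Ordered three-class data with two biomarkers whose means increase
        # across classes (e.g. healthy / benign / cancer).
        n = 200
        labels = np.repeat([0, 1, 2], n)
        b1 = np.concatenate([rng.normal(m, 1, n) for m in (0.0, 0.8, 1.6)])
        b2 = np.concatenate([rng.normal(m, 1, n) for m in (0.0, 0.5, 1.4)])

        def overall_accuracy(score, c1, c2):
            """Sum of the three correct classification rates at cut-points."""
            pred = np.digitize(score, [c1, c2])
            return sum(np.mean(pred[labels == k] == k) for k in range(3))

        best = (-np.inf, None)
        for theta in np.linspace(0, np.pi / 2, 31):      # direction grid
            score = np.cos(theta) * b1 + np.sin(theta) * b2
            cuts = np.quantile(score, np.linspace(0.05, 0.95, 13))
            for c1, c2 in combinations(cuts, 2):         # ordered cut-point grid
                acc = overall_accuracy(score, c1, c2)
                if acc > best[0]:
                    best = (acc, (theta, c1, c2))

        print(f"max sum of correct rates: {best[0]:.3f} (out of 3)")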
       
  • Clustered longitudinal data subject to irregular observation
    • Authors: Eleanor M Pullenayegum, Catherine Birken, Jonathon Maguire
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Data collected longitudinally as part of usual health care is becoming increasingly available for research, and is often available across several centres. Because the frequency of follow-up is typically determined by the patient’s health, the timing of measurements may be related to the outcome of interest. Failure to account for the informative nature of the observation process can result in biased inferences. While methods for accounting for the association between observation frequency and outcome are available, they do not currently account for clustering within centres. We formulate a semi-parametric joint model to include random effects for centres as well as subjects. We also show how inverse-intensity weighted GEEs can be adapted to account for clustering, comparing stratification, frailty models, and covariate adjustment to account for clustering in the observation process. The finite-sample performance of the proposed methods is evaluated through simulation and the methods illustrated using a study of the relationship between outdoor play and air quality in children aged 2–9 living in the Greater Toronto Area.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-01-29T08:19:36Z
      DOI: 10.1177/0962280220986193
       
  • Inference under covariate-adaptive randomization: A simulation study
    • Authors: Andrea Callegaro, B S Harsha Shree, Naveen Karkada
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      In clinical trials, several covariate-adaptive designs have been proposed to balance treatment arms with respect to key covariates. Although some argue that conventional asymptotic tests remain appropriate when covariate-adaptive randomization is used, others hold that re-randomization tests should be used. In this manuscript, we compare by simulation the performance of asymptotic and re-randomization tests under covariate-adaptive randomization (a minimal re-randomization test appears below). Our simulation study confirms results expected from existing theory (e.g. asymptotic tests do not control the type I error when the model is misspecified). Furthermore, it shows that (i) re-randomization tests are as powerful as asymptotic tests when the model is correct; (ii) re-randomization tests are more powerful when adjusting for covariates; and (iii) minimization and permuted blocks provide similar results.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-01-28T04:26:00Z
      DOI: 10.1177/0962280220985564
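
      A minimal re-randomization test under permuted-block randomization, the simplest of the compared procedures: hold outcomes fixed, regenerate assignments with the same randomization scheme, and compare the observed statistic with its re-randomization distribution. The outcome model and effect size are illustrative.

        import numpy as np

        rng = np.random.default_rng(6)

        def permuted_blocks(n, block=4):
            """Generate a 1:1 assignment sequence in permuted blocks."""
            arms = []
            while len(arms) < n:
                arms.extend(rng.permutation([0, 1] * (block // 2)))
            return np.array(arms[:n])

        n = 100
        assign_obs = permuted_blocks(n)
        y = 0.3 * assign_obs + rng.normal(size=n)   # true effect 0.3

        def stat(assign):
            return y[assign == 1].mean() - y[assign == 0].mean()

        t_obs = stat(assign_obs)
        # Re-randomization reference distribution: regenerate assignments with
        # the *same* randomization procedure, keeping the outcomes fixed.
        ref = np.array([stat(permuted_blocks(n)) for _ in range(5000)])
        p = np.mean(np.abs(ref) >= abs(t_obs))
        print(f"observed diff {t_obs:.3f}, re-randomization p = {p:.4f}")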
       
  • A Bayesian dose–response meta-analysis model: A simulation study
           and application
    • Authors: Tasnim Hamza, Andrea Cipriani, Toshi A Furukawa, Matthias Egger, Nicola Orsini, Georgia Salanti
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Dose–response models express the effect of different dose or exposure levels on a specific outcome. In meta-analysis, where aggregate-level data are available, dose–response evidence is synthesized using either one-stage or two-stage models in a frequentist setting. We propose a hierarchical dose–response model implemented in a Bayesian framework. We develop our model assuming a normal or binomial likelihood and accounting for exposures grouped in clusters. To allow maximum flexibility, the dose–response association is modelled using restricted cubic splines (a basis-construction sketch follows below). We implement these models in R using JAGS, and we compare our approach to the one-stage dose–response meta-analysis model in a simulation study. We found that the Bayesian dose–response model with binomial likelihood has lower bias than the Bayesian model with normal likelihood and the frequentist one-stage model when studies have small sample sizes. When the true underlying shape is log–log or half-sigmoid, the performance of all models depends on choosing an appropriate location for the knots. In all other examined situations, all models perform very well and give practically identical results. We also re-analyze data from 60 randomized controlled trials (15,984 participants) examining the efficacy (response) of various doses of serotonin-specific reuptake inhibitor (SSRI) antidepressant drugs. All models suggest that the dose–response curve increases between zero dose and 30–40 mg of fluoxetine-equivalent dose, and thereafter shows a small decline. We draw the same conclusion when we take into account the fact that five different antidepressants were studied in the included trials. We show that implementing the hierarchical model in a Bayesian framework has performance similar to, but overcomes some of the limitations of, the frequentist approach, and offers maximum flexibility to accommodate features of the data.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-01-28T04:23:41Z
      DOI: 10.1177/0962280220982643
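
      The restricted cubic spline basis mentioned above can be constructed directly; below is a numpy sketch of the standard (unscaled) truncated-power construction, with knot locations chosen purely for illustration.

        import numpy as np

        def rcs_basis(x, knots):
            """Restricted cubic spline basis (linear beyond the boundary
            knots): a linear column plus k-2 nonlinear columns for k knots,
            using the standard truncated-power construction."""
            x = np.asarray(x, dtype=float)
            t = np.asarray(knots, dtype=float)
            k = len(t)
            pos3 = lambda u: np.clip(u, 0, None) ** 3
            cols = [x]
            for j in range(k - 2):
                cols.append(
                    pos3(x - t[j])
                    - pos3(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
                    + pos3(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2])
                )
            return np.column_stack(cols)

        # Example: a dose grid with three illustrative knot locations.
        dose = np.linspace(0, 80, 9)
        B = rcs_basis(dose, knots=[10, 20, 50])
        print(B.shape)   # (9, 2): one linear + one nonlinear column for 3 knots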
       
  • Bridging across patient subgroups in phase I oncology trials that
           incorporate animal data
    • Authors: Haiyan Zheng, Lisa V Hampson, Thomas Jaki
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      In this paper, we develop a general Bayesian hierarchical model for bridging across patient subgroups in phase I oncology trials, for which preliminary information about the dose–toxicity relationship can be drawn from animal studies. Parameters that re-scale the doses to adjust for intrinsic differences in toxicity, either between animals and humans or between human subgroups, are introduced to each dose–toxicity model. Appropriate priors are specified for these scaling parameters, which capture the magnitude of uncertainty surrounding the animal-to-human translation and the bridging assumption. After mapping data onto a common, ‘average’ human dosing scale, human dose–toxicity parameters are assumed to be exchangeable either with the standardised, animal study-specific parameters, or between themselves across human subgroups. Random-effects distributions are distinguished by different covariance matrices that reflect the between-study heterogeneity in animals and humans. The possibility of non-exchangeability is allowed so that inferences for extreme subgroups are not overly influenced by their complementary data. We illustrate the proposed approach with hypothetical examples, and use simulation to compare the operating characteristics of trials analysed using our Bayesian model with several alternatives. Numerical results show that the proposed approach yields robust inferences, even when data from multiple sources are inconsistent and/or the bridging assumptions are incorrect.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-01-27T11:22:07Z
      DOI: 10.1177/0962280220986580
       
  • An effective technique for diabetic retinopathy using a hybrid machine
           learning technique
    • Authors: N Satyanarayana Murthy, B Arunadevi
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Diabetic retinopathy (DR) is an eye condition that develops progressively in individuals with diabetes. Complications of diabetes damage the blood vessels at the back of the retina; in extreme cases, DR can lead to severe visual impairment or blindness. These serious consequences can be averted through timely treatment and early detection. The condition has been spreading rapidly, especially among the working-age population, which makes diagnosis at the earliest possible stage important. In tracking the progression of the disorder, detection of the retinal blood vessels (RBVs) plays a foremost role: the growth of abnormal vessels marks the developing stages of DR and can be recognized by extracting the RBVs. The aim of our research is to develop an automatic approach for recognizing the blood vessels relevant to DR. The proposed method has two major steps: segmentation and classification of the affected retinal blood vessels. The segmentation step uses Fuzzy C-means clustering with centroid initialization based on Kinetic Gas Molecule Optimization. In the classification step, the segmented images are given as input to a hybrid technique, a convolutional neural network with bidirectional long short-term memory (CNN with Bi-LSTM), whose learning is refined using a self-attention mechanism to improve classification accuracy. Experimental results show that the hybrid algorithm achieved higher accuracy, specificity, and sensitivity than existing techniques.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-01-27T04:20:28Z
      DOI: 10.1177/0962280220983541
       
  • An automation-based adaptive seamless design for dose selection and
           confirmation with improved power and efficiency
    • Authors: Lu Cui, Tianyu Zhan, Lanju Zhang, Ziqian Geng, Yihua Gu, Ivan SF Chan
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      In a drug development program, the efficacy and safety of multiple doses can be evaluated in patients through a phase 2b dose-ranging study. With a demonstrated dose response in the trial, promising doses are identified, and their effectiveness is then further investigated and confirmed in phase 3 studies. Although this two-step approach serves the purpose of the program, it is generally inefficient because of its prolonged development duration and its exclusion of the phase 2b data from the final efficacy evaluation and confirmation, which are based only on phase 3 data. To address this issue, we propose a new adaptive design which seamlessly integrates the dose-finding and confirmation steps in one pivotal study. Unlike existing adaptive seamless phase 2b/3 designs, the proposed design combines response-adaptive randomization, sample size modification, and multiple testing techniques to achieve better efficiency. The design can easily be implemented through an automated randomization process. At the end of the trial, a number of targeted doses are selected and their effectiveness is confirmed with guaranteed control of the familywise error rate.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-01-18T02:58:57Z
      DOI: 10.1177/0962280220984822
       
  • Improving convergence in growth mixture models without covariance
           structure constraints
    • Authors: Daniel McNeish, Jeffrey R. Harring
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Growth mixture models are a popular method to uncover heterogeneity in growth trajectories. Harnessing the power of growth mixture models in applications is difficult given the prevalence of nonconvergence when fitting growth mixture models to empirical data. Growth mixture models are rooted in the random effect tradition, and nonconvergence often leads researchers to modify their intended model with constraints in the random effect covariance structure to facilitate estimation. While practical, doing so has been shown to adversely affect parameter estimates, class assignment, and class enumeration. Instead, we advocate specifying the models with a marginal approach to prevent the widespread practice of sacrificing class-specific covariance structures to appease nonconvergence. A simulation is provided to show the importance of modeling class-specific covariance structures and builds off existing literature showing that applying constraints to the covariance leads to poor performance. These results suggest that retaining class-specific covariance structures should be a top priority and that marginal models like covariance pattern growth mixture models that model the covariance structure without random effects are well-suited for such a purpose, particularly with modest sample sizes and attrition commonly found in applications. An application to PTSD data with such characteristics is provided to demonstrate (a) convergence difficulties with random effect models, (b) how covariance structure constraints improve convergence but to the detriment of performance, and (c) how covariance pattern growth mixture models may provide a path forward that improves convergence without forfeiting class-specific covariance structures.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-01-13T03:33:21Z
      DOI: 10.1177/0962280220981747
       
  • Online control of the familywise error rate
    • Authors: Jinjin Tian, Aaditya Ramdas
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Biological research often involves testing a growing number of null hypotheses as new data are accumulated over time. We study the problem of online control of the familywise error rate, that is, testing an a priori unbounded sequence of hypotheses (p-values) one by one over time, without knowing the future, such that with high probability there are no false discoveries in the entire sequence. This paper unifies algorithmic concepts developed for offline (single-batch) familywise error rate control and online false discovery rate control to develop novel online familywise error rate control methods. Though many offline familywise error rate methods (e.g. Bonferroni, fallback procedures and Sidak’s method) can trivially be extended to the online setting (a minimal online Bonferroni baseline is sketched below), our main contribution is the design of new, powerful, adaptive online algorithms that control the familywise error rate when the p-values are independent or locally dependent in time. Our numerical experiments demonstrate substantial gains in power, which are also formally proved in an idealized Gaussian sequence model. A promising application to the International Mouse Phenotyping Consortium is described.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-01-08T07:32:13Z
      DOI: 10.1177/0962280220983381
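
      A minimal baseline from the family the paper builds on: an online Bonferroni rule that spends a summable fraction of alpha on each hypothesis in the stream, so the familywise error rate is controlled at alpha no matter when testing stops. The paper's adaptive algorithms improve on this baseline; the example p-value stream is illustrative.

        import numpy as np

        def online_bonferroni(pvalues, alpha=0.05):
            """Online FWER control: spend alpha * 6 / (pi^2 * j^2) on test j.

            The spending sequence sums to alpha over an unbounded stream, so
            P(any false rejection) <= alpha regardless of the stopping time.
            """
            decisions = []
            for j, p in enumerate(pvalues, start=1):
                alpha_j = alpha * 6 / (np.pi ** 2 * j ** 2)
                decisions.append(p <= alpha_j)
            return decisions

        # Example stream: two strong signals among uniform nulls.
        rng = np.random.default_rng(7)
        stream = list(rng.uniform(size=8)) + [1e-6, 1e-4] + list(rng.uniform(size=5))
        print(online_bonferroni(stream))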
       
  • Continuous(ly) missing outcome data in network meta-analysis: A one-stage
           pattern-mixture model approach
    • Authors: Loukia M Spineli, Chrysostomos Kalyvas, Katerina Papadimitropoulou
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Appropriate handling of aggregate missing outcome data is necessary to minimise bias in the conclusions of systematic reviews. The two-stage pattern-mixture model has been already proposed to address aggregate missing continuous outcome data. While this approach is more proper compared with the exclusion of missing continuous outcome data and simple imputation methods, it does not offer flexible modelling of missing continuous outcome data to investigate their implications on the conclusions thoroughly. Therefore, we propose a one-stage pattern-mixture model approach under the Bayesian framework to address missing continuous outcome data in a network of interventions and gain knowledge about the missingness process in different trials and interventions. We extend the hierarchical network meta-analysis model for one aggregate continuous outcome to incorporate a missingness parameter that measures the departure from the missing at random assumption. We consider various effect size estimates for continuous data, and two informative missingness parameters, the informative missingness difference of means and the informative missingness ratio of means. We incorporate our prior belief about the missingness parameters while allowing for several possibilities of prior structures to account for the fact that the missingness process may differ in the network. The method is exemplified in two networks from published reviews comprising a different amount of missing continuous outcome data.
      Citation: Statistical Methods in Medical Research
      PubDate: 2021-01-07T05:50:47Z
      DOI: 10.1177/0962280220983544
       
  • A two-stage Generalized Method of Moments model for feedback with
           time-dependent covariates
    • Authors: Elsa Vazquez-Arreola
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Correlated observations in longitudinal studies are often due to repeated measures on the subjects. Additionally, correlation may be realized due to the association between responses at a particular time and the predictors at earlier times. There are also feedback effects (the relation between responses in the present and the covariates at a later time), though these are not always relevant and are often ignored. All these cases of correlation must be accounted for, as they can have different effects on the regression coefficients. Several authors have provided models that reflect the direct and delayed impact of covariates on the response, utilizing valid moment conditions to estimate the relevant regression coefficients. However, there are applications in which one cannot ignore the effect of the responses on future covariates. A two-stage model is presented to account for the feedback, modeling the direct as well as the delayed effects of the covariates on future responses and vice versa. The use of the two-stage model is demonstrated by revisiting child morbidity and its impact on future values of body mass index using Philippines health data. Obesity status and its feedback effects on physical activity and depression levels are also analyzed using the Add Health dataset.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-12-28T03:53:56Z
      DOI: 10.1177/0962280220981402
       
  • Monte Carlo approaches to frequentist multiplicity-adjusted benefiting
           subgroup identification
    • Authors: Patrick M Schnell
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      One common goal of subgroup analyses is to determine the subgroup of the population for which a given treatment is effective. Like most problems in subgroup analyses, this benefiting subgroup identification requires careful attention to multiple testing considerations, especially Type I error inflation. To partially address these concerns, the credible subgroups approach provides a pair of bounding subgroups for the benefiting subgroup, constructed so that with high posterior probability one is contained by the benefiting subgroup while the other contains the benefiting subgroup. To date, this approach has been presented within the Bayesian paradigm only, and requires sampling from the posterior of a Bayesian model. Additionally, in many cases, such as regulatory submission, guarantees of frequentist operating characteristics are helpful or necessary. We present Monte Carlo approaches to constructing confidence subgroups, frequentist analogues to credible subgroups that replace the posterior distribution with an estimate of the joint distribution of personalized treatment effect estimates, and yield frequentist interpretations and coverage guarantees. The estimated joint distribution is produced using either draws from asymptotic sampling distributions of estimated model parameters, or bootstrap resampling schemes. The approach is applied to a publicly available dataset from randomized trials of Alzheimer’s disease treatments.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-12-01T08:48:29Z
      DOI: 10.1177/0962280220973705
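
      A stripped-down sketch of the bootstrap variant: approximate the joint distribution of personalized treatment-effect estimates over a covariate grid by resampling, then use a simultaneous (max-statistic) critical value to form the two bounding subgroups. The linear model with a single effect modifier is an illustrative assumption, not the paper's full specification.

        import numpy as np

        rng = np.random.default_rng(8)
        n = 500
        x = rng.uniform(-2, 2, n)
        a = rng.binomial(1, 0.5, n)
        y = 0.5 * x + a * (0.8 * x) + rng.normal(size=n)   # benefit when x > 0

        grid = np.linspace(-2, 2, 41)

        def effect_curve(xs, as_, ys):
            """OLS fit of y ~ x + a + a:x; effect at grid is b_a + b_ax * x."""
            X = np.column_stack([np.ones_like(xs), xs, as_, as_ * xs])
            b, *_ = np.linalg.lstsq(X, ys, rcond=None)
            return b[2] + b[3] * grid

        est = effect_curve(x, a, y)
        boots = np.empty((2000, grid.size))
        for i in range(2000):
            idx = rng.integers(0, n, n)
            boots[i] = effect_curve(x[idx], a[idx], y[idx])

        se = boots.std(axis=0)
        # Simultaneous critical value from the max standardized deviation.
        z = np.abs(boots - est) / se
        crit = np.quantile(z.max(axis=1), 0.95)

        exclusive = grid[est - crit * se > 0]   # contained in benefiting set
        inclusive = grid[est + crit * se > 0]   # contains the benefiting set
        print("exclusive subgroup starts at x =",
              round(exclusive.min(), 2) if exclusive.size else None)
        print("inclusive subgroup starts at x =",
              round(inclusive.min(), 2) if inclusive.size else None)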
       
  • Random changepoint segmented regression with smooth transition
    • Authors: Julio M Singer, Francisco MM Rocha, Antonio Carlos Pedroso-de-Lima, Giovani L Silva, Giuliana C Coatti, Mayana Zatz
      First page: 643
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      We consider random changepoint segmented regression models to analyse data from a study conducted to verify whether treatment with stem cells may delay the onset of a symptom of amyotrophic lateral sclerosis in genetically modified mice. The proposed models capture the biological aspects of the data, accommodating a smooth transition between the periods with and without symptoms. An additional changepoint is considered to avoid negative predicted responses. Given the nonlinear nature of the model, we propose an algorithm to estimate the fixed parameters and to predict the random effects by fitting linear mixed models iteratively via standard software. We compare the variances obtained in the final step with bootstrapped and robust ones.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-11-04T02:36:06Z
      DOI: 10.1177/0962280220964953
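
      A minimal sketch of segmented regression with a smooth transition, fitted for fixed effects only (the paper's model additionally includes random changepoints and a second changepoint): the hinge is smoothed with a softplus so the slope changes gradually around the changepoint. Data and parameter values are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(9)

        def smooth_segmented(t, b0, b1, b2, cp, h):
            """Two-phase line whose slope changes by b2 at cp, smoothed over
            a transition of half-width h (softplus-smoothed hinge)."""
            hinge = h * np.logaddexp(0.0, (t - cp) / h)   # smooth max(0, t - cp)
            return b0 + b1 * t + b2 * hinge

        # Illustrative data: decline begins around t = 12 (symptom onset).
        t = np.linspace(0, 25, 120)
        y = smooth_segmented(t, 10.0, 0.05, -0.6, 12.0, 1.5) \
            + rng.normal(0, 0.3, t.size)

        p0 = [y[0], 0.0, -0.5, t.mean(), 1.0]          # rough starting values
        popt, _ = curve_fit(smooth_segmented, t, y, p0=p0,
                            bounds=([-np.inf, -np.inf, -np.inf, t.min(), 0.01],
                                    [np.inf, np.inf, np.inf, t.max(), 10.0]))
        print("estimated changepoint:", round(popt[3], 2))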
       
  • Functional clustering methods for longitudinal data with application to
           electronic health records
    • Authors: Bret Zeldow, James Flory, Alisa Stephens-Shields, Marsha Raebel, Jason A Roy
      First page: 655
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      We develop a method to estimate subject-level trajectory functions from longitudinal data. The approach can be used for patient phenotyping, feature extraction, or, as in our motivating example, outcome identification, which refers to the process of identifying disease status through patient laboratory tests rather than through diagnosis codes or prescription information. We model the joint distribution of a continuous longitudinal outcome and baseline covariates using an enriched Dirichlet process prior. This joint model decomposes into (local) semiparametric linear mixed models for the outcome given the covariates and simple (local) marginals for the covariates. The nonparametric enriched Dirichlet process prior is placed on the regression and spline coefficients, the error variance, and the parameters governing the predictor space. This leads to clustering of patients based on their outcomes and covariates. We predict the outcome at unobserved time points for subjects with data at other time points as well as for new subjects with only baseline covariates. We find improved prediction over mixed models with Dirichlet process priors when there are a large number of covariates. Our method is demonstrated with electronic health records consisting of initiators of second-generation antipsychotic medications, which are known to increase the risk of diabetes. We use our model to predict laboratory values indicative of diabetes for each individual and assess incidence of suspected diabetes from the predicted dataset.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-11-12T04:23:58Z
      DOI: 10.1177/0962280220965630
       
  • Unifying instrumental variable and inverse probability weighting
           approaches for inference of causal treatment effect and unmeasured
           confounding in observational studies
    • Authors: Tao Liu, Joseph W Hogan
      First page: 671
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Confounding is a major concern when using data from observational studies to infer the causal effect of a treatment. Instrumental variables, when available, have been used to construct bound estimates on population average treatment effects when outcomes are binary and unmeasured confounding exists. With continuous outcomes, meaningful bounds are more challenging to obtain because the domain of the outcome is unrestricted. In this paper, we propose to unify the instrumental variable and inverse probability weighting methods, together with suitable assumptions in the context of an observational study, to construct meaningful bounds on causal treatment effects. The contextual assumptions are imposed in terms of the potential outcomes that are partially identified by data. The inverse probability weighting component incorporates a sensitivity parameter to encode the effect of unmeasured confounding. The instrumental variable and inverse probability weighting methods are unified using principal stratification. By solving the resulting system of estimating equations, we are able to quantify both the causal treatment effect and the sensitivity parameter (i.e. the degree of unmeasured confounding). We demonstrate our method by analyzing data from the HIV Epidemiology Research Study.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-11-20T07:08:09Z
      DOI: 10.1177/0962280220971835
       
  • Small sample sizes: A big data problem in high-dimensional data analysis
    • Authors: Frank Konietschke, Karima Schwab, Markus Pauly
      First page: 687
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      In many experiments, and especially in translational and preclinical research, sample sizes are (very) small. In addition, data designs are often high dimensional, i.e. more dependent variables than independent replications of the trial are observed. The present paper discusses the applicability of max t-test-type statistics (multiple contrast tests) in high-dimensional designs (repeated measures or multivariate) with small sample sizes. A randomization-based approach is developed to approximate the distribution of the maximum statistic (a minimal instance is sketched below). Extensive simulation studies confirm that the new method is particularly suitable for analyzing data sets with small sample sizes. A real data set illustrates the application of the methods.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-11-24T09:53:35Z
      DOI: 10.1177/0962280220970228
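
      A minimal randomization-based max-t test in the spirit of the paper: for a small-sample, high-dimensional one-sample design, approximate the null distribution of the maximum t-statistic by random sign-flipping of subjects, which is valid under symmetry around zero. The dimensions and shift pattern are illustrative.

        import numpy as np

        rng = np.random.default_rng(10)
        n, d = 10, 50                      # small sample, high dimension
        X = rng.normal(size=(n, d))
        X[:, :3] += 1.2                    # a few truly shifted components

        def max_t(data):
            m = data.mean(axis=0)
            s = data.std(axis=0, ddof=1)
            return np.max(np.abs(m) / (s / np.sqrt(len(data))))

        t_obs = max_t(X)
        # Randomization distribution: flip each subject's entire row sign,
        # preserving the dependence among the d components.
        ref = np.array([max_t(X * rng.choice([-1, 1], size=(n, 1)))
                        for _ in range(4000)])
        p = (1 + np.sum(ref >= t_obs)) / (1 + len(ref))
        print(f"max-t = {t_obs:.2f}, randomization p = {p:.4f}")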
       
  • Employing a latent variable framework to improve efficiency in composite
           endpoint analysis
    • Authors: Martina McMenamin, Jessica K Barrett, Anna Berglind, James MS Wason
      First page: 702
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Composite endpoints that combine multiple outcomes on different scales are common in clinical trials, particularly in chronic conditions. In many of these cases, patients will have to cross a predefined responder threshold in each of the outcomes to be classed as a responder overall. One instance of this occurs in systemic lupus erythematosus, where the responder endpoint combines two continuous, one ordinal and one binary measure. The overall binary responder endpoint is typically analysed using logistic regression, resulting in a substantial loss of information. We propose a latent variable model for the systemic lupus erythematosus endpoint, which assumes that the discrete outcomes are manifestations of latent continuous measures and can proceed to jointly model the components of the composite. We perform a simulation study and find that the method offers large efficiency gains over the standard analysis, the magnitude of which is highly dependent on the components driving response. Bias is introduced when joint normality assumptions are not satisfied, which we correct for using a bootstrap procedure. The method is applied to the Phase IIb MUSE trial in patients with moderate to severe systemic lupus erythematosus. We show that it estimates the treatment effect 2.5 times more precisely, offering a 60% reduction in required sample size.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-11-25T01:57:32Z
      DOI: 10.1177/0962280220970986
       
  • Bayesian adaptive decision-theoretic designs for multi-arm multi-stage
           clinical trials
    • Authors: Andrea Bassi, Johannes Berkhof, Daphne de Jong, Peter M van de Ven
      First page: 717
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Multi-arm multi-stage clinical trials in which more than two drugs are simultaneously investigated provide gains over separate single- or two-arm trials. In this paper, we propose a generic Bayesian adaptive decision-theoretic design for multi-arm multi-stage clinical trials with K arms. The basic idea is that after each stage, a decision about continuation of the trial and accrual of patients for an additional stage is made on the basis of the expected reduction in loss. For this purpose, we define a loss function that incorporates the patient accrual costs as well as costs associated with an incorrect decision at the end of the trial. An attractive feature of our loss function is that its estimation is computationally undemanding, also when K > 2. We evaluate the frequentist operating characteristics for settings with a binary outcome and multiple experimental arms. We consider both the situation with and without a control arm. In a simulation study, we show that our design increases the probability of making a correct decision at the end of the trial as compared to nonadaptive designs and adaptive two-stage designs.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-11-27T05:48:27Z
      DOI: 10.1177/0962280220973697
       
  • Bayesian mixture cure rate frailty models with an application to gastric
           cancer data
    • Authors: Ali Karamoozian, Mohammad Reza Baneshi, Abbas Bahrampour
      First page: 731
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Mixture cure rate models are commonly used to analyze lifetime data with long-term survivors, while frailty models yield accurate estimation of coefficients by controlling for heterogeneity in survival data. Gamma frailty models, in which the gamma distribution is used for the frailty random variable, are the most common. However, for survival data from populations with a cure rate, a discrete distribution for the frailty random variable may be preferable to a continuous one. We therefore propose two models in this study: the first uses the continuous gamma distribution for the frailty, and the second a discrete hyper-Poisson distribution. Bayesian inference with the Weibull distribution and the generalized modified Weibull distribution as the baseline distributions was used in the two proposed models, respectively. We use data from patients with gastric cancer to show the application of these models to real data. The parameters and regression coefficients were estimated using the Metropolis-within-Gibbs sampling algorithm, one of the crucial techniques in Markov chain Monte Carlo simulation. A simulation study was also used to evaluate the performance of the Bayesian estimates and confirm the proposed models. Based on the results of the Bayesian inference, the model with the generalized modified Weibull and hyper-Poisson distributions is suitable in practice and fits better than the model with the Weibull and gamma distributions.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-11-27T05:53:48Z
      DOI: 10.1177/0962280220974699
       
  • Unbiasedness and efficiency of non-parametric and UMVUE estimators of the
           probabilistic index and related statistics
    • Authors: Johan Verbeeck, Vaiva Deltuvaite-Thomas, Ben Berckmoes, Tomasz Burzykowski, Marc Aerts, Olivier Thas, Marc Buyse, Geert Molenberghs
      First page: 747
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      In reliability theory, diagnostic accuracy, and clinical trials, the quantity P(X < Y) + 0.5 P(X = Y), known as the Probabilistic Index (PI), is a common treatment effect measure when comparing two groups of observations. The quantity P(X < Y) − P(X > Y), a linear transformation of the PI known as the net benefit, has also been advocated as an intuitively appealing treatment effect measure. Parametric estimation of the PI has received much attention over the past 40 years, with the formulation of the Uniformly Minimum-Variance Unbiased Estimator (UMVUE) for many distributions. However, the non-parametric Mann–Whitney estimator of the PI is also known to be UMVUE in some situations. To understand this seeming contradiction, this paper systematically compares the non-parametric estimator of the PI with parametric UMVUE estimators in various settings (the Mann–Whitney estimator is sketched below). We show that the Mann–Whitney estimator is always an unbiased estimator of the PI with univariate, completely observed data, while the parametric UMVUE is not when the distribution is misspecified. Additionally, the Mann–Whitney estimator is the UMVUE when the observations belong to an unrestricted family. When the observations come from a more restrictive family of distributions, the loss in efficiency for the non-parametric estimator is limited in realistic clinical scenarios. In conclusion, the Mann–Whitney estimator is simple to use and is a reliable estimator of the PI and the net benefit in realistic clinical scenarios.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-12-01T08:52:11Z
      DOI: 10.1177/0962280220966629
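
      The non-parametric Mann–Whitney estimator of the PI discussed above is a pairwise average, shown here with the usual half credit for ties; the net benefit follows as a linear transformation. The simulated two-group data are illustrative.

        import numpy as np

        def probabilistic_index(x, y):
            """Mann-Whitney estimate of PI = P(X < Y) + 0.5 * P(X = Y)."""
            diff = np.subtract.outer(y, x)        # y_j - x_i for all pairs
            return np.mean((diff > 0) + 0.5 * (diff == 0))

        rng = np.random.default_rng(11)
        x = rng.normal(0.0, 1, 80)                # control group
        y = rng.normal(0.5, 1, 80)                # treated group
        pi = probabilistic_index(x, y)
        print(f"PI = {pi:.3f}, net benefit = {2 * pi - 1:.3f}")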
       
  • Joint analysis of multivariate interval-censored survival data and a
           time-dependent covariate
    • Authors: Di Wu, Chenxi Li
      First page: 769
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      We develop a joint modeling method for multivariate interval-censored survival data and a time-dependent covariate that is intermittently measured with error. The joint model is estimated using nonparametric maximum likelihood estimation, which is carried out via an expectation–maximization algorithm, and the inference for finite-dimensional parameters is performed using the bootstrap. We also develop a similar joint modeling method for univariate interval-censored survival data and a time-dependent covariate, which surpasses existing methods in terms of model flexibility and interpretation. Simulation studies show that the model fitting and inference approaches perform very well under realistic sample sizes. We apply the method to a longitudinal study of dental caries in African-American children from low-income families in the city of Detroit, Michigan.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-12-01T08:55:48Z
      DOI: 10.1177/0962280220975064
       
  • Statistical design considerations for trials that study multiple
           indications
    • Authors: Alexander M Kaizer, Joseph S Koopmeiners, Nan Chen, Brian P Hobbs
      First page: 785
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Breakthroughs in cancer biology have defined new research programs emphasizing the development of therapies that target specific pathways in tumor cells. Innovations in clinical trial design have followed with master protocols defined by inclusive eligibility criteria and evaluations of multiple therapies and/or histologies. Consequently, characterization of subpopulation heterogeneity has become central to the formulation and selection of a study design. However, this transition to master protocols has led to challenges in identifying the optimal trial design and proper calibration of hyperparameters. We often evaluate a range of null and alternative scenarios; however, there has been little guidance on how to synthesize the potentially disparate recommendations for what may be optimal. This may lead to the selection of suboptimal designs and statistical methods that do not fully accommodate the subpopulation heterogeneity. This article proposes novel optimization criteria for calibrating and evaluating candidate statistical designs of master protocols in the presence of the potential for treatment effect heterogeneity among enrolled patient subpopulations. The framework is applied to demonstrate the statistical properties of conventional study designs when treatments offer heterogeneous benefit as well as identify optimal designs devised to monitor the potential for heterogeneity among patients with differing clinical indications using Bayesian modeling.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-12-03T06:33:27Z
      DOI: 10.1177/0962280220975187
       
  • Efficient and flexible simulation-based sample size determination for
           clinical trials with multiple design parameters
    • Authors: Duncan T Wilson, Richard Hooper, Julia Brown, Amanda J Farrin, Rebecca EA Walwyn
      First page: 799
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Simulation offers a simple and flexible way to estimate the power of a clinical trial when analytic formulae are not available. The computational burden of using simulation has, however, restricted its application to only the simplest of sample size determination problems, often minimising a single parameter (the overall sample size) subject to power being above a target level. We describe a general framework for solving simulation-based sample size determination problems with several design parameters over which to optimise and several conflicting criteria to be minimised. The method is based on an established global optimisation algorithm widely used in the design and analysis of computer experiments, using a non-parametric regression model as an approximation of the true underlying power function. The method is flexible, can be used for almost any problem for which power can be estimated using simulation, and can be implemented using existing statistical software packages. We illustrate its application to a sample size determination problem involving complex clustering structures, two primary endpoints and small sample considerations.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-12-03T06:37:06Z
      DOI: 10.1177/0962280220975790
       
  • Concordance probability as a meaningful contrast across disparate survival
           times
    • Authors: Sean M Devlin, Glenn Heller
      First page: 816
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      The performance of time-to-event models is frequently assessed in part by estimating the concordance probability, which evaluates the probabilistic pairwise ordering of the model-based risk scores and survival times. The standard definition of this probability conditions on any survival time pair ordering, irrespective of whether the times are meaningfully separated. Inclusion of survival times that would be deemed clinically similar attenuates the concordance and moves the estimate away from the contrast-of-interest: comparing the risk scores between individuals with disparate survival times. In this manuscript, we propose a concordance definition and corresponding method to estimate the probability conditional on survival times being separated by at least a minimum difference. The proposed estimate requires direct input from the analyst to identify a separable survival region and, in doing so, is analogous to the clinically defined subgroups used for binary outcome area under the curve estimates. The method is illustrated in two cancer examples: a prognostic score in clear cell renal cell carcinoma and two biomarkers in metastatic prostate cancer.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-12-10T04:32:55Z
      DOI: 10.1177/0962280220973694
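
      A simplified sketch of the proposed contrast: estimate concordance only over usable pairs whose survival times are separated by at least a user-chosen delta (a pair is usable here when the earlier time is an observed event). The data-generating model is illustrative and the estimator is a bare-bones version of the one in the paper.

        import numpy as np

        def concordance_separated(risk, time, event, delta):
            """Concordance among usable pairs separated by at least delta."""
            num = den = 0
            n = len(time)
            for i in range(n):
                for j in range(n):
                    if i != j and event[i] and time[j] - time[i] >= delta:
                        den += 1
                        num += risk[i] > risk[j]   # higher risk, earlier event
            return num / den if den else np.nan

        rng = np.random.default_rng(12)
        n = 300
        risk = rng.normal(size=n)
        time = rng.exponential(np.exp(-risk))      # higher risk, shorter time
        event = rng.uniform(size=n) < 0.8          # illustrative event flags

        for d in (0.0, 0.5, 1.0):
            c = concordance_separated(risk, time, event, d)
            print(f"delta = {d}: C = {c:.3f}")     # C grows as pairs separate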
       
  • Bayesian variable selection in logistic regression with application to
           whole-brain functional connectivity analysis for Parkinson’s disease
    • Authors: Xuan Cao, Kyoungjae Lee, Qingling Huang
      First page: 826
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Parkinson’s disease is a progressive, chronic, neurodegenerative disorder that is primarily diagnosed by clinical examination and magnetic resonance imaging (MRI). In this paper, we propose a Bayesian model to predict Parkinson’s disease using a functional MRI (fMRI) based radiomics approach. We consider a spike-and-slab prior for variable selection in high-dimensional logistic regression models, and present an approximate Gibbs sampler obtained by replacing the logistic distribution with a t-distribution. Under mild conditions, we establish model selection consistency of the induced posterior, and we show through simulation studies that the proposed method outperforms existing state-of-the-art methods. In the fMRI analysis, 6216 whole-brain functional connectivity features are extracted for 50 healthy controls and 70 Parkinson’s disease patients. We apply our method to the resulting dataset and further demonstrate its benefits with a higher average prediction accuracy of 0.83 compared with other contenders, based on 10 random splits. The model-fitting procedure also reveals the most discriminative brain regions for Parkinson’s disease. These findings demonstrate that the proposed Bayesian variable selection method has the potential to support radiological diagnosis for patients with Parkinson’s disease.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-12-14T03:12:52Z
      DOI: 10.1177/0962280220978990
       
  • Probability intervals of toxicity and efficacy design for dose-finding
           clinical trials in oncology
    • Authors: Xiaolei Lin, Yuan Ji
      First page: 843
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Immunotherapy, gene therapy and adoptive cell therapies, such as chimeric antigen receptor (CAR+) T-cell therapies, have demonstrated promising therapeutic effects in oncology patients. We consider statistical designs for dose-finding adoptive cell therapy trials, in which the monotonic dose–response relationship assumed in traditional oncology trials may not hold. Building upon a previous design called “TEPI”, we propose a new dose-finding method, Probability Intervals of Toxicity and Efficacy (PRINTE), which uses toxicity and efficacy jointly in making dosing decisions, does not require a pre-elicited decision table, and handles Ockham’s razor properly in the statistical inference. We show that optimizing the joint posterior expected utility of toxicity and efficacy under a 0–1 loss is equivalent to maximizing the marginal model posterior probability in the two-dimensional probability space. An extensive simulation study under various scenarios shows that PRINTE outperforms existing designs in the literature, since it assigns more patients to optimal doses and fewer to toxic ones, and selects optimal doses with higher percentages. Its simple and transparent features, together with good operating characteristics, make PRINTE an improved design for dose-finding oncology trials.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-12-17T04:33:02Z
      DOI: 10.1177/0962280220977009
       
  • Two-phase analysis and study design for survival models with error-prone
           exposures
    • Authors: Kyunghee Han, Thomas Lumley, Bryan E Shepherd, Pamela A Shaw
      First page: 857
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Increasingly, medical research is dependent on data collected for non-research purposes, such as electronic health records data. Health records data and other large databases can be prone to measurement error in key exposures, and unadjusted analyses of error-prone data can bias study results. Validating a subset of records is a cost-effective way of gaining information on the error structure, which in turn can be used to adjust analyses for this error and improve inference. We extend the mean score method for the two-phase analysis of discrete-time survival models, which uses the unvalidated covariates as auxiliary variables that act as surrogates for the unobserved true exposures. This method relies on a two-phase sampling design and an estimation approach that preserves the consistency of complete-case regression parameter estimates in the validated subset, with increased precision leveraged from the auxiliary data. Furthermore, we develop optimal sampling strategies which minimize the variance of the mean score estimator for a target exposure under a fixed cost constraint. We consider the setting where an internal pilot is necessary for the optimal design, so that the phase two sample is split into a pilot and an adaptive optimal sample. Through simulations and a data example, we evaluate the efficiency gains of the mean score estimator using the derived optimal validation design compared to balanced and simple random sampling for the phase two sample. We also empirically explore the efficiency gains that the proposed discrete optimal design can provide for the Cox proportional hazards model in the setting of a continuous-time survival outcome.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-12-17T05:18:04Z
      DOI: 10.1177/0962280220978500
       
  • Inferring median survival differences in general factorial designs via
           permutation tests
    • Authors: Marc Ditzhaus, Dennis Dobler, Markus Pauly
      First page: 875
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Factorial survival designs with right-censored observations are commonly inferred by Cox regression and explained by means of hazard ratios. However, in case of non-proportional hazards, their interpretation can become cumbersome; especially for clinicians. We therefore offer an alternative: median survival times are used to estimate treatment and interaction effects and null hypotheses are formulated in contrasts of their population versions. Permutation-based tests and confidence regions are proposed and shown to be asymptotically valid. Their type-1 error control and power behavior are investigated in extensive simulations, showing the new methods’ wide applicability. The latter is complemented by an illustrative data analysis.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-12-22T03:58:03Z
      DOI: 10.1177/0962280220980784
       
  • CWL: A conditional weighted likelihood method to account for the delayed
           joint toxicity–efficacy outcomes for phase I/II clinical trials
    • Authors: Yifei Zhang, Yong Zang
      First page: 892
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      The delayed outcome issue is common in early phase dose-finding clinical trials. This problem becomes more intractable in phase I/II clinical trials because both toxicity and efficacy responses are subject to the delayed outcome issue. The existing methods applying for the phase I trials cannot be used directly for the phase I/II trial due to a lack of capability to model the joint toxicity–efficacy distribution. In this paper, we propose a conditional weighted likelihood (CWL) method to circumvent this issue. The key idea of the CWL method is to decompose the joint probability into the product of marginal and conditional probabilities and then weight each probability based on each patient’s actual follow-up time. The CWL method makes no parametric model assumption on either the dose–response curve or the toxicity–efficacy correlation and therefore can be applied to any existing phase I/II trial design. Numerical trial applications show that the proposed CWL method yields desirable operating characteristics.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-12-22T04:02:22Z
      DOI: 10.1177/0962280220979328
       
  • A group sequential design and sample size estimation for an immunotherapy
           trial with a delayed treatment effect
    • Authors: Bosheng Li, Liwen Su, Jun Gao, Liyun Jiang, Fangrong Yan
      First page: 904
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      A delayed treatment effect is often observed in confirmatory trials of immunotherapies and is reflected by a delayed separation of the survival curves of the immunotherapy groups versus the control groups. This phenomenon makes designs based on the log-rank test inapplicable, because the proportional hazards assumption is violated and power is lost. We therefore propose a group sequential design allowing early termination for efficacy, based on a more powerful piecewise weighted log-rank test, for an immunotherapy trial with a delayed treatment effect. We present an approach to group sequential monitoring in which the information time is defined based on the number of events occurring after the delay time. Furthermore, we develop a one-dimensional search algorithm to determine the required maximum sample size for the proposed design, which uses an analytical estimate obtained via the inflation factor as an initial value and an empirical power function calculated by a simulation-based procedure as the objective function. In simulations, we demonstrate the instability of the analytical estimate, the consistent accuracy of the maximum sample size determined by the search algorithm, and the sample-size savings offered by the proposed design.
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-12-24T02:53:05Z
      DOI: 10.1177/0962280220980780
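The piecewise weight idea can be sketched as a weighted log-rank statistic whose weight is 0 before the delay time and 1 afterwards, so that only events after the delayed separation contribute. This is a generic weighted log-rank computation assuming a known delay parameter `delay`; the paper's information-time monitoring, stopping boundaries, and sample-size search are not reproduced here.

```python
import numpy as np

def piecewise_weighted_logrank(time, event, group, delay):
    """Weighted log-rank statistic with weight 0 before the delay time
    and 1 afterwards. Returns an approximately N(0, 1) statistic under
    the null hypothesis of no treatment effect."""
    num, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        if t < delay:
            continue  # weight 0: pre-delay events do not contribute
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        num += d1 - d * n1 / n                       # observed - expected
        if n > 1:                                    # hypergeometric variance
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return num / np.sqrt(var) if var > 0 else np.nan
```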
       
  • Class imbalance in gradient boosting classification algorithms: Application to experimental stroke data
    • Authors: Olga Lyashevska, Fiona Malone, Eugene MacCarthy, Jens Fiehler, Jan-Hendrik Buhk, Liam Morris
      First page: 916
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
An imbalance between positive and negative outcomes, a so-called class imbalance, is a problem commonly found in medical data. Imbalanced data hinder the performance of conventional classification methods, which aim to improve the overall accuracy of the model without accounting for the uneven distribution of the classes. To rectify this, the data can be resampled by oversampling the positive (minority) class until the classes are approximately equally represented. A prediction model such as a gradient boosting algorithm can then be fitted with greater confidence. This classification method allows for non-linear relationships and deep interaction effects while focusing on difficult areas by iteratively shifting towards problematic observations. In this study, we demonstrate the application of these methods to medical data and develop a practical framework for evaluating which features contribute to the probability of stroke. (An end-to-end sketch of the resampling-then-boosting recipe follows this entry.)
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-12-28T08:12:12Z
      DOI: 10.1177/0962280220980484
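A minimal end-to-end version of the resampling-then-boosting recipe, using random oversampling via `sklearn.utils.resample` and a `GradientBoostingClassifier` on synthetic data. The paper's actual data, resampling variant, and tuning are not reproduced; random oversampling is only one common choice among several (SMOTE and its relatives are alternatives).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.utils import resample

# Synthetic imbalanced data: roughly 10% positive (minority) class.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# Oversample the minority class (with replacement) until balanced.
X_min, y_min = X[y == 1], y[y == 1]
X_maj, y_maj = X[y == 0], y[y == 0]
X_up, y_up = resample(X_min, y_min, replace=True,
                      n_samples=len(y_maj), random_state=0)
X_bal = np.vstack([X_maj, X_up])
y_bal = np.concatenate([y_maj, y_up])

# Fit gradient boosting on the balanced training data.
clf = GradientBoostingClassifier(random_state=0).fit(X_bal, y_bal)
```

One practical caveat worth noting: when cross-validating, the oversampling should be applied only inside each training fold, otherwise duplicated minority cases leak into the test fold and inflate performance estimates.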
       
  • Selecting the number of categories of the lymph node ratio in cancer research: A bootstrap-based hypothesis test
    • Authors: Irantzu Barrio, Javier Roca-Pardiñas, Inmaculada Arostegui
      First page: 926
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
The high impact of the lymph node ratio as a prognostic factor is well established in colorectal cancer, and the ratio is used as a categorized predictor variable in several studies. However, both the cut-off points and the number of categories considered differ considerably across the literature. Motivated by the need to obtain the best categorization of the lymph node ratio as a predictor of mortality in colorectal cancer patients, we propose a method to select the best number of categories for a continuous variable in a logistic regression framework. To this end, we propose a bootstrap-based hypothesis test, together with a new estimation algorithm for the optimal location of the cut-off points called BackAddFor, an updated version of the previously proposed AddFor algorithm. The performance of the hypothesis test was evaluated in a simulation study under different scenarios, yielding type I errors close to the nominal level and good power whenever a meaningful difference in prediction ability existed. Finally, the proposed methodology was applied to the CCR-CARESS study, where the lymph node ratio was included as a predictor of five-year mortality, resulting in the selection of three categories. (A simplified sketch of the bootstrap comparison follows this entry.)
      Citation: Statistical Methods in Medical Research
      PubDate: 2020-11-10T03:19:39Z
      DOI: 10.1177/0962280220965631
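To make the categorization question concrete, the sketch below compares the in-sample AUC of logistic models using k versus k+1 categories of a continuous predictor and bootstraps the gain. The quantile cut-points and all helper names are stand-ins of mine: the paper instead optimizes cut-point locations with its BackAddFor algorithm and wraps the comparison in a formal hypothesis test, neither of which is reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def categorize(x, k):
    """Cut x into k categories at equally spaced sample quantiles
    (stand-in cut-points; the paper optimizes their locations)."""
    cuts = np.quantile(x, np.linspace(0, 1, k + 1)[1:-1])
    return np.searchsorted(cuts, x)

def auc_k_categories(x, y, k):
    """In-sample AUC of a logistic model on the one-hot categorized x
    (optimistic, but enough for a sketch)."""
    X = np.eye(k)[categorize(x, k)]
    p = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    return roc_auc_score(y, p)

def bootstrap_gain(x, y, k, n_boot=500, seed=0):
    """Bootstrap distribution of the AUC gain from k to k+1 categories;
    a distribution concentrated near zero suggests k categories suffice.
    Assumes both outcome classes appear in every resample."""
    rng = np.random.default_rng(seed)
    gains = []
    for _ in range(n_boot):
        i = rng.integers(0, len(y), len(y))
        gains.append(auc_k_categories(x[i], y[i], k + 1)
                     - auc_k_categories(x[i], y[i], k))
    return np.array(gains)
```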
       
 