Subjects -> STATISTICS (Total: 130 journals)
- Modeling unmeasured baseline information in observational time-to-event data subject to delayed study entry
Authors: Regina Stegherr, Jan Beyersmann, Peter Bramlage, Tobias Bluhmki Abstract: Statistical Methods in Medical Research, Ahead of Print. Unmeasured baseline information in left-truncated data situations frequently occurs in observational time-to-event analyses. For instance, a typical timescale in trials of antidiabetic treatment is “time since treatment initiation”, but individuals may have initiated treatment before the start of longitudinal data collection. When the focus is on baseline effects, one widespread approach is to fit a Cox proportional hazards model incorporating the measurements at delayed study entry. This has been criticized because of the potential time dependency of covariates. We tackle this problem by using a Bayesian joint model that combines a mixed-effects model for the longitudinal trajectory with a proportional hazards model for the event of interest incorporating the baseline covariate, possibly unmeasured in the presence of left truncation. The novelty is that our procedure is not used to account for non-continuously monitored longitudinal covariates in right-censored time-to-event studies, but to utilize these trajectories to make inferences about missing baseline measurements in left-truncated data. Simulating times-to-event depending on baseline covariates, we also compared our proposal to a simpler two-stage approach which performed favorably. Our approach is illustrated by investigating the impact of baseline blood glucose levels on antidiabetic treatment failure using data from a German diabetes register. Citation: Statistical Methods in Medical Research PubDate: 2023-03-16T09:20:02Z DOI: 10.1177/09622802231163334
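The delayed-entry problem the abstract describes can be made concrete with a small sketch: under left truncation, a subject contributes to a Cox-type risk set only after its own study entry time. This is a minimal illustration with hypothetical data, not the authors' Bayesian joint model.

```python
# Sketch of how delayed study entry (left truncation) changes the risk set
# used in a Cox partial likelihood. All data below are hypothetical.

def risk_set(subjects, t):
    """Subjects at risk at time t: entered before t and not yet exited."""
    return {sid for sid, (entry, exit_, _) in subjects.items()
            if entry < t <= exit_}

# sid -> (entry time since treatment initiation, event/censoring time, event flag)
subjects = {
    "A": (0.0, 5.0, 1),   # observed from treatment initiation
    "B": (2.0, 6.0, 1),   # initiated treatment before data collection began
    "C": (3.0, 4.0, 0),
}

# At t = 1.0 only subject A is under observation; ignoring the entry times
# would wrongly place B and C at risk as well.
early = risk_set(subjects, 1.0)   # {'A'}
later = risk_set(subjects, 4.0)   # all three subjects
```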
- Linear mixed models for investigating effect modification in subgroup meta-analysis
Authors: Anne Lyngholm Sørensen, Ian C Marschner Abstract: Statistical Methods in Medical Research, Ahead of Print. Subgroup meta-analysis can be used for comparing treatment effects between subgroups using information from multiple trials. If the effect of treatment is differential depending on subgroup, the results could enable personalization of the treatment. We propose using linear mixed models for estimating treatment effect modification in aggregate data meta-analysis. The linear mixed models capture existing subgroup meta-analysis methods while allowing for additional features such as flexibility in modeling heterogeneity, handling studies with missing subgroups and more. Reviews and simulation studies of the best suited models for estimating possible differential effect of treatment depending on subgroups have been studied mostly within individual participant data meta-analysis. While individual participant data meta-analysis in general is recommended over aggregate data meta-analysis, conducting an aggregate data subgroup meta-analysis could be valuable for exploring treatment effect modifiers before committing to an individual participant data subgroup meta-analysis. Additionally, using solely individual participant data for subgroup meta-analysis requires collecting sufficient individual participant data which may not always be possible. In this article, we compared existing methods with linear mixed models for aggregate data subgroup meta-analysis under a broad selection of scenarios using simulation and two case studies. Both the case studies and simulation studies presented here demonstrate the advantages of the linear mixed model approach in aggregate data subgroup meta-analysis. Citation: Statistical Methods in Medical Research PubDate: 2023-03-16T09:19:33Z DOI: 10.1177/09622802231163330
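For orientation, the simplest aggregate-data subgroup analysis pools trial-level differences in subgroup treatment effects by inverse variance; the linear mixed models proposed in the article generalize this. A toy sketch with hypothetical trial summaries:

```python
# Minimal sketch of an aggregate-data subgroup interaction estimate:
# per trial, take the difference of the two subgroup treatment effects,
# then pool the differences with inverse-variance weights.
# All numbers are hypothetical.

def pooled_interaction(trials):
    # trials: list of (effect_subgroup1, var1, effect_subgroup2, var2)
    diffs = [(e1 - e2, v1 + v2) for e1, v1, e2, v2 in trials]
    w = [1.0 / v for _, v in diffs]
    est = sum(wi * d for wi, (d, _) in zip(w, diffs)) / sum(w)
    var = 1.0 / sum(w)
    return est, var

trials = [(0.50, 0.04, 0.20, 0.05),
          (0.45, 0.02, 0.25, 0.03)]
est, var = pooled_interaction(trials)
```

A linear mixed model reproduces this fixed-effect estimate as a special case while also allowing heterogeneity terms and trials with missing subgroups.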
- Estimating individualized treatment rules in longitudinal studies with covariate-driven observation times
Authors: Janie Coulombe, Erica EM Moodie, Susan M Shortreed, Christel Renoux Abstract: Statistical Methods in Medical Research, Ahead of Print. The sequential treatment decisions made by physicians to treat chronic diseases are formalized in the statistical literature as dynamic treatment regimes. To date, methods for dynamic treatment regimes have been developed under the assumption that observation times, that is, treatment and outcome monitoring times, are determined by study investigators. That assumption is often not satisfied in electronic health records data in which the outcome, the observation times, and the treatment mechanism are associated with patients’ characteristics. The treatment and observation processes can lead to spurious associations between the treatment of interest and the outcome to be optimized under the dynamic treatment regime if not adequately considered in the analysis. We address these associations by incorporating two inverse weights that are functions of a patient’s covariates into dynamic weighted ordinary least squares to develop optimal single stage dynamic treatment regimes, known as individualized treatment rules. We show empirically that our methodology yields consistent, multiply robust estimators. In a cohort of new users of antidepressant drugs from the United Kingdom’s Clinical Practice Research Datalink, the proposed method is used to develop an optimal treatment rule that chooses between two antidepressants to optimize a utility function related to the change in body mass index. Citation: Statistical Methods in Medical Research PubDate: 2023-03-16T01:52:05Z DOI: 10.1177/09622802231158733
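The role of the two inverse weights can be sketched with a toy calculation: each observation is weighted by the product of an inverse probability-of-treatment weight and an inverse visit-intensity weight. Here a weighted difference in means stands in for the full dynamic weighted ordinary least squares fit; all numbers are hypothetical.

```python
# Sketch of combining two inverse weights: inverse probability of the
# received treatment times inverse visit intensity. Data are hypothetical.

def weighted_effect(rows):
    # rows: (outcome, treated, p_treat, visit_intensity)
    num = {0: 0.0, 1: 0.0}
    den = {0: 0.0, 1: 0.0}
    for y, a, p_treat, intensity in rows:
        p_a = p_treat if a == 1 else 1.0 - p_treat
        w = (1.0 / p_a) * (1.0 / intensity)   # product of the two weights
        num[a] += w * y
        den[a] += w
    return num[1] / den[1] - num[0] / den[0]

rows = [(1.2, 1, 0.4, 0.8), (0.9, 1, 0.6, 0.5),
        (0.7, 0, 0.4, 0.9), (0.4, 0, 0.7, 0.6)]
effect = weighted_effect(rows)
```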
- Semiparametric generalized estimating equations for repeated measurements in cross-over designs
Authors: Nelson Alirio Cruz Gutierrez, Oscar Orlando Melo, Carlos Alberto Martinez Abstract: Statistical Methods in Medical Research, Ahead of Print. A model for cross-over designs with repeated measures within each period was developed. It was obtained using an extension of generalized estimating equations that includes a parametric component to model treatment effects and a non-parametric component to model time and carry-over effects; the estimation approach for the non-parametric component is based on splines. A simulation study was carried out to explore the model properties. Thus, when there is a carry-over effect or a functional temporal effect, the proposed model presents better results than the standard models. Among the theoretical properties, the solution is found to be analogous to weighted least squares. Therefore, model diagnostics can be made by adapting the results from a multiple regression. The proposed methodology was implemented in the data sets of the cross-over experiments that motivated the approach of this work: systolic blood pressure and insulin in rabbits. Citation: Statistical Methods in Medical Research PubDate: 2023-03-15T08:16:47Z DOI: 10.1177/09622802231158736
- A general averaging method for count data with overdispersion and/or excess zeros in biomedicine
Authors: Yin Liu, Jianghong Zhou, Zhanshou Chen, Xinyu Zhang Abstract: Statistical Methods in Medical Research, Ahead of Print. With the aim of providing better estimation for count data with overdispersion and/or excess zeros, we develop a novel estimation method—optimal weighting based on cross-validation—for the zero-inflated negative binomial model, where the Poisson, negative binomial, and zero-inflated Poisson models are all included as its special cases. To facilitate the selection of the optimal weight vector, a [math]-fold cross-validation technique is adopted. Unlike the jackknife model averaging discussed in Hansen and Racine (2012), the proposed method deletes one group of observations rather than only one observation to enhance the computational efficiency. Furthermore, we also theoretically prove the asymptotic optimality of the newly developed optimal weighting based on cross-validation method. Simulation studies and three empirical applications indicate the superiority of the presented optimal weighting based on cross-validation method when compared with the three commonly used information-based model selection methods and their model averaging counterparts. Citation: Statistical Methods in Medical Research PubDate: 2023-03-15T08:16:18Z DOI: 10.1177/09622802231159213
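The weight-selection step can be sketched as a grid search: given held-out predictions from two candidate count models, choose the averaging weight that minimises cross-validated squared error. This toy version uses hypothetical predictions; the article's method averages over more candidates and proves asymptotic optimality.

```python
# Sketch of cross-validation-based weight choice for model averaging.
# pred_a / pred_b play the role of held-out predictions from two candidate
# count models; all values are hypothetical.

def cv_weight(y, pred_a, pred_b, grid_size=101):
    best_w, best_err = 0.0, float("inf")
    for k in range(grid_size):
        w = k / (grid_size - 1)
        err = sum((yi - (w * pa + (1 - w) * pb)) ** 2
                  for yi, pa, pb in zip(y, pred_a, pred_b))
        if err < best_err:
            best_w, best_err = w, err
    return best_w

y      = [0, 0, 1, 3, 5]
pred_a = [0.1, 0.2, 1.5, 2.5, 4.0]   # e.g. a zero-inflated model
pred_b = [1.0, 1.0, 1.2, 2.8, 5.2]   # e.g. a plain negative binomial
w = cv_weight(y, pred_a, pred_b)
```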
- Confidence intervals for the length of the receiver-operating characteristic curve based on a smooth estimator
Authors: Pablo Martínez-Camblor Abstract: Statistical Methods in Medical Research, Ahead of Print. A good diagnostic test should behave differently on the positive and the negative populations. However, this is not enough for having a good classification system. The binary classification problem is a complex task, which requires defining decision criteria. Knowing the level of dissimilarity between the two involved distributions is not enough; we also have to know how to define those decision criteria. The length of the receiver-operating characteristic curve has been proposed as an index of the optimal discriminatory capacity of a biomarker. It is related not to the actual but to the optimal classification capacity of the considered diagnostic test. One particularity of this index is that its estimation should be based on parametric or smoothed models. We explore here the behavior of a kernel density estimator-based approximation for estimating the length of the receiver-operating characteristic curve. We derive the asymptotic distribution of the resulting statistic, propose a parametric bootstrap algorithm for confidence interval construction, discuss the role that the bandwidth parameter plays in the quality of the provided estimations and, via Monte Carlo simulations, study its finite-sample behavior considering four different criteria for bandwidth selection. The practical use of the length of the receiver-operating characteristic curve is illustrated through two real-world examples. Citation: Statistical Methods in Medical Research PubDate: 2023-03-15T07:44:38Z DOI: 10.1177/09622802231160053
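To fix ideas, the length of an ROC curve is the arc length of the curve from (0, 0) to (1, 1). The sketch below computes it for the empirical (unsmoothed) polyline from hypothetical scores, whereas the article studies a kernel-smoothed estimator: a perfectly discriminating marker gives length 2 (two sides of the unit square), a useless one gives the diagonal's length, sqrt(2).

```python
# Toy ROC-length computation from the empirical ROC polyline.
# Scores below are hypothetical; the paper uses a smoothed estimator instead.
import math

def roc_points(neg, pos):
    cuts = sorted(set(neg) | set(pos))
    pts = [(1.0, 1.0)]                 # threshold below every score
    for c in cuts:
        fpr = sum(x > c for x in neg) / len(neg)
        tpr = sum(x > c for x in pos) / len(pos)
        pts.append((fpr, tpr))
    pts.append((0.0, 0.0))             # threshold above every score
    return pts

def roc_length(neg, pos):
    pts = roc_points(neg, pos)
    return sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

# Perfectly separated populations: the ROC hugs the axes, so its length is 2.
perfect = roc_length([1, 2, 3], [4, 5, 6])
# Identical populations: the ROC is the diagonal, length sqrt(2).
useless = roc_length([1, 2, 3], [1, 2, 3])
```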
- Using dichotomized survival data to construct a prior distribution for a Bayesian seamless Phase II/III clinical trial
Authors: Benjamin Duputel, Nigel Stallard, François Montestruc, Sarah Zohar, Moreno Ursino Abstract: Statistical Methods in Medical Research, Ahead of Print. Master protocol designs allow for simultaneous comparison of multiple treatments or disease subgroups. Master protocols can also be designed as seamless studies, in which two or more clinical phases are considered within the same trial. They can be divided into two categories: operationally seamless, in which the two phases are separated into two independent studies, and inferentially seamless, in which the interim analysis is considered an adaptation of the study. Bayesian designs in this setting have scarcely been studied. Our aim is to propose and compare Bayesian operationally seamless Phase II/III designs using a binary endpoint for the first stage and a time-to-event endpoint for the second stage. At the end of Phase II, arm selection is based on posterior (futility) and predictive (selection) probabilities. The results of the first phase are then incorporated into prior distributions of a time-to-event model. Simulation studies showed that Bayesian operationally seamless designs can approach their inferentially seamless counterpart, yielding increased simulated power relative to the frequentist operationally seamless design. Citation: Statistical Methods in Medical Research PubDate: 2023-03-15T07:43:14Z DOI: 10.1177/09622802231160554
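The posterior futility check at the end of Phase II can be sketched for a single arm with a Beta prior on the binary response rate; the prior, data, and threshold below are hypothetical, and the predictive selection step is omitted.

```python
# Sketch of a posterior futility check: with a Beta prior on a binary
# response rate, estimate P(rate > threshold | data) by Monte Carlo.
# Prior, data, and threshold are hypothetical.
import random

def prob_rate_exceeds(successes, n, threshold, a=1.0, b=1.0, draws=100_000):
    rng = random.Random(0)                 # seeded for reproducibility
    post_a, post_b = a + successes, b + n - successes
    hits = sum(rng.betavariate(post_a, post_b) > threshold
               for _ in range(draws))
    return hits / draws

# 14 responders out of 20: most posterior mass lies above a 0.5 response
# rate, so this arm would not be dropped for futility.
p = prob_rate_exceeds(14, 20, 0.5)
```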
- Bayesian order constrained adaptive design for phase II clinical trials evaluating subgroup-specific treatment effect
Authors: Mu Shan, Beibei Guo, Hao Liu, Qian Li, Yong Zang Abstract: Statistical Methods in Medical Research, Ahead of Print. The “one-size-fits-all” paradigm is inappropriate for phase II clinical trials evaluating biotherapies, which are often expected to have substantially heterogeneous treatment effects among different subgroups defined by biomarkers. For these biotherapies, the objective of phase II clinical trials is often to evaluate subgroup-specific treatment effects. In this article, we propose a simple yet efficient Bayesian adaptive phase II biomarker-guided design, referred to as the Bayesian order constrained adaptive design, to detect the subgroup-specific treatment effects of biotherapies. The Bayesian order constrained adaptive design combines features of the enrichment design and the sequential design. It starts with an “all-comers” stage and subsequently switches to an enrichment stage for either the marker-positive or the marker-negative subgroup, depending on the interim analysis results. The go/no-go enrichment criteria are determined by two posterior probabilities utilizing the inherent ordering constraint between the two subgroups. We also extend the Bayesian order constrained adaptive design to handle missing biomarker measurements. We conducted comprehensive computer simulation studies to investigate the operating characteristics of the Bayesian order constrained adaptive design and compared it with other existing and conventional designs. The results show that the Bayesian order constrained adaptive design yielded the best overall performance in detecting subgroup-specific treatment effects, jointly considering the efficiency and cost-effectiveness of the trials. The software for simulation and trial implementation is available for free download. Citation: Statistical Methods in Medical Research PubDate: 2023-03-15T07:39:37Z DOI: 10.1177/09622802231158738
- Copula graphic estimation of the survival function with dependent censoring and its application to analysis of pancreatic cancer clinical trial
Authors: Jung Hyun Jo, Zhan Gao, Inkyung Jung, Si Young Song, Geert Ridder, Hyungsik Roger Moon Abstract: Statistical Methods in Medical Research, Ahead of Print. In this article, we consider a survival function estimation method that may be suitable for analyses of clinical trials of treatments for cancers with a known poor prognosis, such as pancreatic cancer. Typically, these kinds of trials are not double-blind, and patients in the control group may drop out in greater numbers than in the treatment group if their disease progresses. If disease progression is associated with a higher risk of death, then censoring becomes dependent. To estimate the survival function with dependent censoring, we use copula-graphic estimation, where a parametric copula function is used to model the dependence in the joint survival function of the event and censoring times. We propose a novel method for choosing the copula parameter. As an application example, we estimate the survival function of the overall survival time in the KG4/2015 study, the phase 3 clinical trial of the efficacy of GV1001 as a treatment for pancreatic cancer. We provide both statistical and clinical evidence supporting the violation of independent censoring. Applying the estimation method with dependent censoring, we find that the estimated median survival times are 339 days in the treatment group and 225.5 days in the control group. The estimated difference between the medians is 113.5 days, which is statistically significant at the one-sided 2.5% level. Citation: Statistical Methods in Medical Research PubDate: 2023-03-15T07:21:22Z DOI: 10.1177/09622802231158812
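For intuition, the Rivest-Wells closed form of the copula-graphic estimator for Archimedean copulas can be sketched in a few lines. A Clayton copula stands in for whatever family the trial analysis assumes, the data are hypothetical, and the paper's method for choosing the copula parameter is not reproduced; as theta approaches 0 the estimate recovers the Kaplan-Meier-type product.

```python
# Sketch of a copula-graphic survival estimator under an assumed Clayton
# copula between event and censoring times. Data and theta are hypothetical.

def clayton_phi(u, theta):           # Clayton generator
    return (u ** (-theta) - 1.0) / theta

def clayton_phi_inv(s, theta):       # its inverse
    return (1.0 + theta * s) ** (-1.0 / theta)

def copula_graphic(times, events, t, theta=1.0):
    data = sorted(zip(times, events))
    n = len(data)
    acc = 0.0
    for i, (z, d) in enumerate(data, start=1):
        if z > t:
            break
        # the largest observation is skipped to avoid phi(0) = infinity
        if d == 1 and n > i:
            acc += (clayton_phi((n - i) / n, theta)
                    - clayton_phi((n - i + 1) / n, theta))
    return clayton_phi_inv(acc, theta)

times  = [3, 5, 7, 8, 11, 13]
events = [1, 0, 1, 1, 0, 1]
s_hat = copula_graphic(times, events, 9.0, theta=1.0)   # positive dependence
s_indep = copula_graphic(times, events, 9.0, theta=1e-6)  # near-independence
```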
- Bayesian semiparametric joint modeling of a count outcome and inconveniently timed longitudinal predictors
Authors: Woobeen Lim, Michael L Pennell, Michelle J Naughton, Electra D Paskett Abstract: Statistical Methods in Medical Research, Ahead of Print. The Women's Health Initiative (WHI) Life and Longevity After Cancer (LILAC) study is an excellent resource for studying the quality of life following breast cancer treatment. At study entry, women were asked about new symptoms that appeared following their initial cancer treatment. In this article, we were interested in using regression modeling to estimate associations of clinical and lifestyle factors at cancer diagnosis (independent variables) with the number of new symptoms (dependent variable). Although clinical and lifestyle data were collected longitudinally, few measurements were obtained at diagnosis or at a consistent timepoint prior to diagnosis, which complicates the analysis. Furthermore, parametric count models, such as the Poisson and negative binomial, do not fit the symptom data well. Thus, motivated by the issues encountered in LILAC, we propose two Bayesian joint models for longitudinal data and a count outcome. Our two models differ according to the assumption on the outcome distribution: one uses a negative binomial (NB) distribution and the other a nonparametric rounded mixture of Gaussians (RMG). The mean of each count distribution is dependent on imputed values of continuous, binary, and ordinal variables at a time point of interest (e.g. diagnosis). To facilitate imputation, longitudinal variables are modeled jointly using a linear mixed model for a latent underlying normal random variable, and a Dirichlet process prior is assigned to the random subject-specific effects to relax distribution assumptions. In simulation studies, the RMG joint model exhibited superior power and predictive accuracy over the NB model when the data were not NB. 
The RMG joint model also outperformed an RMG model containing predictors imputed using the last value carried forward, which generated estimates that were biased toward the null. We used our models to examine the relationship between sleep health at diagnosis and the number of new symptoms following breast cancer treatment in LILAC. Citation: Statistical Methods in Medical Research PubDate: 2023-03-01T07:31:15Z DOI: 10.1177/09622802231154325
- A unified approach based on multidimensional scaling for calibration estimation in survey sampling with qualitative auxiliary information
Authors: J Fernando Vera, Carmen Cecilia Sánchez Zuleta, Maria del Mar Rueda Abstract: Statistical Methods in Medical Research, Ahead of Print. Survey calibration is a widely used method to estimate the population mean or total score of a target variable, particularly in medical research. In this procedure, auxiliary information related to the variable of interest is used to recalibrate the estimation weights. However, when the auxiliary information includes qualitative variables, traditional calibration techniques may be not feasible or the optimisation procedure may fail. In this article, we propose the use of linear calibration in conjunction with a multidimensional scaling-based set of continuous, uncorrelated auxiliary variables along with a suitable metric in a distance-based regression framework. The calibration weights are estimated using a projection of the auxiliary information on a low-dimensional Euclidean space. The approach becomes one of the linear calibration with quantitative variables avoiding the usual computational problems in the presence of qualitative auxiliary information. The new variables preserve the underlying assumption in linear calibration of a linear relationship between the auxiliary and target variables, and therefore the optimal properties of the linear calibration method remain true. The behaviour of this approach is examined using a Monte Carlo procedure and its value is illustrated by analysing real data sets and by comparing its performance with that of traditional calibration procedures. Citation: Statistical Methods in Medical Research PubDate: 2023-02-15T09:53:25Z DOI: 10.1177/09622802231151211
- Logistic regression with correlated measurement error and misclassification in covariates
Authors: Zhiqiang Cao, Man Yu Wong, Garvin HL Cheng Abstract: Statistical Methods in Medical Research, Ahead of Print. Many areas of research, such as nutritional epidemiology, may encounter measurement errors of continuous covariates and misclassification of categorical variables when modeling. It is well known that ignoring measurement error or misclassification can lead to biased results, but most research has focused on solving these two problems separately. Addressing both measurement error and misclassification simultaneously in a single analysis is less actively studied. In this article, we propose a new correction method for logistic regression that simultaneously handles correlated error variables in multivariate continuous covariates and misclassification in a categorical variable. It is not computationally intensive, since a closed form of the approximate likelihood function conditional on the observed covariates is derived. The asymptotic normality of the proposed estimator is established under regularity conditions, and its finite-sample performance is empirically examined by simulation studies. We apply this new estimation method to handle measurement error in some nutrients of interest and misclassification of a categorical variable, physical activity, in the European Prospective Investigation into Cancer and Nutrition-InterAct Study data. Analyses show that fruit intake is negatively associated with type 2 diabetes among women with active physical activity, protein intake is positively associated with type 2 diabetes in the less active group, and actual physical activity has a greater effect on reducing the risk of type 2 diabetes than observed physical activity. Citation: Statistical Methods in Medical Research PubDate: 2023-02-15T05:05:21Z DOI: 10.1177/09622802231154324
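The attenuation being corrected can be sketched with classical measurement-error algebra: additive error shrinks a naive regression slope toward zero by the reliability ratio. This is simple regression calibration, not the paper's closed-form likelihood correction; the numbers are hypothetical.

```python
# Sketch of classical attenuation: with additive error U on a covariate X,
# a naive slope is biased toward zero by var(X) / (var(X) + var(U)).
# All variances below are hypothetical.

def reliability_ratio(var_x, var_u):
    return var_x / (var_x + var_u)

def corrected_slope(naive_slope, var_x, var_u):
    return naive_slope / reliability_ratio(var_x, var_u)

# Hypothetical nutrient intake: true variance 4, error variance 1.
lam = reliability_ratio(4.0, 1.0)        # 0.8: naive slope is 80% of truth
beta = corrected_slope(0.40, 4.0, 1.0)   # recovers 0.5
```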
- Multivariate semiparametric control charts for mixed-type data
Authors: Elisavet M Sofikitou, Marianthi Markatou, Markos V Koutras Abstract: Statistical Methods in Medical Research, Ahead of Print. A useful tool that has gained popularity in the Quality Control area is the control chart, which monitors a process over time, identifies potential changes, understands variations, and eventually improves the quality and performance of the process. This article introduces a new class of multivariate semiparametric control charts for monitoring multivariate mixed-type data, which comprise both continuous and discrete random variables (rvs). Our methodology leverages ideas from clustering and Statistical Process Control to develop control charts for mixed-type data. We propose four control chart schemes based on modified versions of the KAy-means for MIxed LArge data (KAMILA) clustering algorithm, where we assume that the two existing clusters represent the reference and the test sample. The charts are semiparametric: the continuous rvs follow a distribution that belongs to the class of elliptical distributions, while categorical-scale rvs follow a multinomial distribution. We present the algorithmic procedures and study the characteristics of the new control charts. The performance of the proposed schemes is evaluated on the basis of the False Alarm Rate and the in-control Average Run Length. Finally, we demonstrate the effectiveness and applicability of our proposed methods utilizing real-world data. Citation: Statistical Methods in Medical Research PubDate: 2023-02-15T02:21:24Z DOI: 10.1177/09622802221142528
- Updating the probability of study success for combination therapies using related combination study data
Authors: Emily Graham, Chris Harbron, Thomas Jaki Abstract: Statistical Methods in Medical Research, Ahead of Print. Combination therapies are becoming increasingly used in a range of therapeutic areas such as oncology and infectious diseases, providing potential benefits such as minimising drug resistance and toxicity. Sets of combination studies may be related, for example, if they have at least one treatment in common and are used in the same indication. In this setting, value can be gained by sharing information between related combination studies. We present a framework that allows the study success probabilities of a set of related combination therapies to be updated based on the outcome of a single combination study. This allows us to incorporate both direct and indirect data on a combination therapy in the decision-making process for future studies. We also provide a robustification that accounts for the fact that the prior assumptions on the correlation structure of the set of combination therapies may be incorrect. We show how this framework can be used in practice and highlight the use of the study success probabilities in the planning of clinical studies. Citation: Statistical Methods in Medical Research PubDate: 2023-02-13T07:37:47Z DOI: 10.1177/09622802231151218
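The information-sharing step can be illustrated with the simplest possible prior: treat the effects of two related combinations as bivariate normal and condition one on the observed result of the other. This is only a sketch of the general idea, not the paper's full framework or its robustification; all numbers are hypothetical.

```python
# Sketch of borrowing information across related combination studies:
# condition the second effect on the observed result of the first under a
# bivariate normal prior. Prior parameters and data are hypothetical.
import math

def condition(mu1, mu2, s1, s2, rho, observed1):
    """Conditional law of effect 2 given effect 1 = observed1."""
    mean = mu2 + rho * (s2 / s1) * (observed1 - mu1)
    sd = s2 * math.sqrt(1.0 - rho ** 2)
    return mean, sd

# A strong positive result in the related study pulls the prior for the
# second combination upward and tightens it.
mean, sd = condition(0.0, 0.0, 1.0, 1.0, 0.6, 1.5)
```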
- Modeling comorbidity of chronic diseases using coupled hidden Markov model with bivariate discrete copula
Authors: Zarina Oflaz, Ceylan Yozgatligil, A Sevtap Selcuk-Kestel Abstract: Statistical Methods in Medical Research, Ahead of Print. A range of chronic diseases have a significant influence on each other and share common risk factors. Comorbidity, which shows the existence of two or more diseases interacting or triggering each other, is an important measure for actuarial valuations. The main proposal of the study is to model parallel interacting processes describing two or more chronic diseases by a combination of hidden Markov theory and copula function. This study introduces a coupled hidden Markov model with the bivariate discrete copula function in the hidden process. To estimate the parameters of the model and deal with the numerical intractability of the log-likelihood, we use a variational expectation maximization algorithm. To perform the variational expectation maximization algorithm, a lower bound of the model’s log-likelihood is defined, and estimators of the parameters are computed in the M-part. A possible numerical underflow occurring in the computation of forward–backward probabilities is solved. The simulation study is conducted for two different levels of association to assess the performance of the proposed model, resulting in satisfactory findings. The proposed model was applied to hospital appointment data from a private hospital. The model defines the dependency structure of unobserved disease data and its dynamics. The application results demonstrate that the model is useful for investigating disease comorbidity when only population dynamics over time and no clinical data are available. Citation: Statistical Methods in Medical Research PubDate: 2023-02-13T07:09:36Z DOI: 10.1177/09622802231155100
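The underflow issue mentioned for the forward-backward probabilities is conventionally handled by working in log space with log-sum-exp. A minimal two-state sketch (hypothetical chain, not the coupled copula model of the paper):

```python
# Sketch of the underflow fix: run the HMM forward recursion in log space
# using log-sum-exp. The two-state chain below is hypothetical.
import math

def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def log_forward(init, trans, emit, obs):
    # init[i], trans[i][j]: probabilities; emit[i][o]: emission probabilities
    n = len(init)
    alpha = [math.log(init[i] * emit[i][obs[0]]) for i in range(n)]
    for o in obs[1:]:
        alpha = [logsumexp([alpha[i] + math.log(trans[i][j])
                            for i in range(n)]) + math.log(emit[j][o])
                 for j in range(n)]
    return logsumexp(alpha)   # log-likelihood of the observation sequence

init = [0.6, 0.4]
trans = [[0.7, 0.3], [0.2, 0.8]]
emit = [[0.9, 0.1], [0.3, 0.7]]
loglik = log_forward(init, trans, emit, [0, 1, 0])
```

Subtracting the running maximum keeps every exponentiation near 1, so arbitrarily long sequences never underflow to zero probability.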
- Data-dependent early completion of dose-finding trials for drug-combination
Authors: Masahiro Kojima Abstract: Statistical Methods in Medical Research, Ahead of Print. I propose a data-dependent early completion of dose-finding trials for drug combinations. Early completion is determined when the dose retainment probability, computed from both the trial data and the number of remaining patients, is high. An early completion method in which the dose retainment probability is adjusted by a bivariate isotonic regression is also proposed. Early completion is demonstrated for a virtual trial, and the performance of the early completion methods is evaluated by simulation studies with 12 scenarios. I show that, compared with non-early completion designs, the proposed early completion methods reduce the number of patients treated while maintaining similar performance. The number of patients for determining early completion is fixed before the trial starts, and program code for calculating the dose retainment probability is provided.
Purpose: Model-assisted designs for drug combination trials have been proposed as novel designs with simple and superior performance. However, model-assisted designs have the disadvantage that the sample size must be set in advance, and trials cannot be completed until the number of patients treated reaches the pre-set sample size. Although model-assisted designs have a stopping rule that can terminate the trial once the number of patients treated exceeds a predetermined number, there is no statistical basis for that number. Here, I propose two methods for data-dependent early completion of dose-finding trials for drug combinations: (1) an early completion method based on the dose retainment probability, and (2) an early completion method in which the dose retainment probability is adjusted by a bivariate isotonic regression.
Methods: Early completion is determined when the dose retainment probability, computed from both the trial data and the number of remaining patients, is high. Early completion of a virtual trial was demonstrated, and the performance of the early completion methods was evaluated by simulation studies with 12 scenarios.
Results: The simulation studies showed that the percentage of early completion averaged approximately 70%, and the number of patients treated was 25% less than the planned sample size. The percentage of correct maximum tolerated dose combination selection for the early completion methods was similar to that of the non-early completion methods, with an average difference of approximately 3%.
Conclusion: The performance of the proposed early completion methods was similar to that of the non-early completion methods. Furthermore, the number of patients for determining early completion is fixed before the trial starts, and program code for calculating the dose retainment probability is provided. Citation: Statistical Methods in Medical Research PubDate: 2023-02-13T07:08:05Z DOI: 10.1177/09622802231155094
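The bivariate isotonic regression used above generalises the classic pool-adjacent-violators algorithm (PAVA). As a rough one-dimensional illustration of that building block (not the paper's bivariate version), a minimal sketch:

```python
def pava(y, w=None):
    """Pool-adjacent-violators: weighted least-squares isotonic
    (non-decreasing) fit to the sequence y."""
    if w is None:
        w = [1.0] * len(y)
    # Each block holds [weighted mean, total weight, points pooled].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge backwards while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, n1 + n2])
    # Expand pooled blocks back to one fitted value per observation.
    fit = []
    for m, _, n in blocks:
        fit.extend([m] * n)
    return fit
```

For example, `pava([3.0, 1.0, 2.0])` pools all three observations into a single block with mean 2.0.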
- Estimation in discrete time coarsened multivariate longitudinal models
-
Authors: Marcus Westerberg Abstract: Statistical Methods in Medical Research, Ahead of Print. We consider the analysis of longitudinal data of multiple types of events where some of the events are observed on a coarser level (e.g. grouped) at some time points during the follow-up, for example, when certain events, such as disease progression, are only observable during parts of follow-up for some subjects, causing gaps in the data, or when the time of death is observed but the cause of death is unknown. In this case, there is missing data in key characteristics of the event history such as onset, time in state, and number of events. We derive the likelihood function, score and observed information under independent and non-informative coarsening, and conduct a simulation study where we compare bias, empirical standard errors, and confidence interval coverage of estimators based on direct maximum likelihood, Monte Carlo Expectation Maximisation, ignoring the coarsening thus acting as if no event occurred, and artificial right censoring at the first time of coarsening. Longitudinal data on drug prescriptions and survival in men receiving palliative treatment for prostate cancer are used to estimate the parameters of one of the data-generating models. We demonstrate that the performance depends on several factors, including sample size and type of coarsening. Citation: Statistical Methods in Medical Research PubDate: 2023-02-13T07:06:48Z DOI: 10.1177/09622802231155010
- Change plane model averaging for subgroup identification
-
Authors: Pan Liu, Jialiang Li, Michael R Kosorok Abstract: Statistical Methods in Medical Research, Ahead of Print. Central to personalized medicine and tailored therapies is discovering the subpopulations that account for treatment effect heterogeneity and are likely to benefit more from given interventions. In this article, we introduce a change plane model averaging method to identify subgroups characterized by linear combinations of predictive variables and multiple cut-offs. We first fit a sequence of statistical models, each incorporating the thresholding effect of one particular covariate. The estimation of submodels is accomplished through an iterative integration of a change point detection method and numerical optimization algorithms. A frequentist model averaging approach is then employed to linearly combine the submodels with optimal weights. Our approach can deal with high-dimensional settings involving enormous potential grouping variables by adopting the sparsity-inducing penalties. Simulation studies are conducted to investigate the prediction and subgrouping performance of the proposed method, with a comparison to various competing subgroup detection methods. Our method is applied to a dataset from a warfarin pharmacogenetics study, producing some new findings. Citation: Statistical Methods in Medical Research PubDate: 2023-02-13T07:05:36Z DOI: 10.1177/09622802231154327
- Semiparametric copula method for semi-competing risks data subject to
interval censoring and left truncation: Application to disability in elderly-
Authors: Tao Sun, Yunlong Li, Zhengyan Xiao, Ying Ding, Xiaojun Wang Abstract: Statistical Methods in Medical Research, Ahead of Print. We aim to evaluate the marginal effects of covariates on time-to-disability in the elderly under the semi-competing risks framework, as death dependently censors disability, not vice versa. It becomes particularly challenging when time-to-disability is subject to interval censoring due to intermittent assessments. A left truncation issue arises when the age time scale is applied. We develop a flexible two-parameter copula-based semiparametric transformation model for semi-competing risks data subject to interval censoring and left truncation. The two-parameter copula quantifies both upper and lower tail dependence between two margins. The semiparametric transformation models incorporate proportional hazards and proportional odds models in both margins. We propose a two-step sieve maximum likelihood estimation procedure and study the sieve estimators’ asymptotic properties. Simulations show that the proposed method corrects biases in the marginal method. We demonstrate the proposed method in a large-scale Chinese Longitudinal Healthy Longevity Study and provide new insights into preventing disability in the elderly. The proposed method could be applied to the general semi-competing risks data with intermittently assessed disease status. Citation: Statistical Methods in Medical Research PubDate: 2023-02-03T04:06:58Z DOI: 10.1177/09622802221133552
- Revisiting sample size planning for receiver operating characteristic
studies: A confidence interval approach with precision and assurance-
Authors: Di Shu, Guangyong Zou Abstract: Statistical Methods in Medical Research, Ahead of Print. Estimation of areas under receiver operating characteristic curves and their differences is a key task in diagnostic studies. Here we develop closed-form sample size formulas for such studies with a focus on estimation rather than hypothesis testing, by explicitly incorporating pre-specified precision and assurance, with precision denoted by the lower limit of confidence interval and assurance denoted by the probability of achieving that lower limit. For sample size estimation purposes, we introduce a normality-based variance function for valid estimation allowing for unequal variances of observations in the disease and non-disease groups. Simulation results demonstrate that the proposed formulas produce empirical assurance probability close to the pre-specified assurance probability and empirical coverage probability close to the nominal level. Compared with a frequently used existing variance function, the proposed function provides more accurate and efficient sample size estimates. For an illustration of the proposed formulas, we present real-world worked examples. To facilitate implementation, we have developed an online calculator openly available at https://dishu.page/calculator/. Citation: Statistical Methods in Medical Research PubDate: 2023-02-02T07:10:31Z DOI: 10.1177/09622802231151210
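As a rough illustration of this kind of precision-and-assurance calculation (not the authors' variance function; this sketch uses the older Hanley-McNeil variance, and the group-size search loop is a hypothetical helper): find the smallest diseased-group size so that the confidence interval's lower limit exceeds a target with the stated assurance probability, under a normal approximation.

```python
from statistics import NormalDist
import math

def hanley_mcneil_var(auc, n1, n2):
    """Hanley-McNeil (1982) variance of the empirical AUC estimate."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc ** 2 / (1 + auc)
    return (auc * (1 - auc) + (n1 - 1) * (q1 - auc ** 2)
            + (n2 - 1) * (q2 - auc ** 2)) / (n1 * n2)

def n_per_group(auc, lower, alpha=0.05, assurance=0.9, ratio=1.0):
    """Smallest diseased-group size n1 (with n2 = ratio*n1 non-diseased)
    such that the two-sided (1-alpha) CI lower limit for the AUC exceeds
    `lower` with probability >= `assurance`, under normality."""
    z_ci = NormalDist().inv_cdf(1 - alpha / 2)
    z_as = NormalDist().inv_cdf(assurance)
    for n1 in range(10, 100000):
        n2 = math.ceil(ratio * n1)
        se = math.sqrt(hanley_mcneil_var(auc, n1, n2))
        # Lower limit exceeds `lower` with prob >= assurance when
        # (auc - lower)/se >= z_ci + z_as.
        if (auc - lower) / se >= z_ci + z_as:
            return n1
    return None
```

Raising the assurance probability tightens the requirement and increases the returned sample size, as expected.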
- Refined moderation analysis with binary outcomes in precision medicine
research-
Authors: Eric Anto, Xiaogang Su Abstract: Statistical Methods in Medical Research, Ahead of Print. Moderation analysis for evaluating differential treatment effects serves as the bedrock of precision medicine, which is of growing interest in many fields. In the analysis of data with binary outcomes, we observe an interesting symmetry property concerning the ratio of odds ratios, which suggests that heterogeneous treatment effects could be equivalently estimated via a role exchange between the outcome and treatment variable in logistic regression models. We then obtain refined inference on moderating effects by rearranging data and combining two models into one via a generalized estimating equation approach. The improved efficiency is helpful in addressing the lack-of-power problem that is common in the search for important moderators. We investigate the proposed method by simulation and provide an illustration with data from a randomized trial on wart treatment. Citation: Statistical Methods in Medical Research PubDate: 2023-02-01T06:57:39Z DOI: 10.1177/09622802231151206
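The symmetry property described above rests on the fact that the odds ratio of a 2x2 table is unchanged when the outcome and treatment swap roles, so the ratio of odds ratios across moderator subgroups is unchanged too. A quick numerical check with purely illustrative counts:

```python
def odds_ratio(table):
    """Odds ratio of a 2x2 table; table[t][y] = count with
    treatment t (0/1) and outcome y (0/1)."""
    return (table[1][1] * table[0][0]) / (table[1][0] * table[0][1])

def swap_roles(table):
    """Exchange the roles of treatment and outcome (transpose)."""
    return [[table[0][0], table[1][0]], [table[0][1], table[1][1]]]

# Hypothetical counts in two moderator subgroups.
subgroup_a = [[40, 10], [25, 25]]
subgroup_b = [[30, 20], [15, 35]]

ror = odds_ratio(subgroup_a) / odds_ratio(subgroup_b)
ror_swapped = (odds_ratio(swap_roles(subgroup_a))
               / odds_ratio(swap_roles(subgroup_b)))
assert abs(ror - ror_swapped) < 1e-12  # identical under role exchange
```

Because `odds_ratio` only multiplies diagonal and off-diagonal cells, transposing the table leaves it invariant, which is exactly the symmetry the article exploits.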
- Estimation of the average treatment effect with variable selection and
measurement error simultaneously addressed for potential confounders-
Authors: Grace Y. Yi, Li-Pang Chen Abstract: Statistical Methods in Medical Research, Ahead of Print. In the framework of causal inference, the inverse probability weighting estimation method and its variants have been commonly employed to estimate the average treatment effect. Such methods, however, are challenged by the presence of irrelevant pre-treatment variables and measurement error. Ignoring these features and naively applying the usual inverse probability weighting estimation procedures may typically yield biased inference results. In this article, we develop an inference method for estimating the average treatment effect with those features taken into account. We establish theoretical properties for the resulting estimator and carry out numerical studies to assess the finite sample performance of the proposed estimator. Citation: Statistical Methods in Medical Research PubDate: 2023-01-25T06:57:19Z DOI: 10.1177/09622802221146308
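For reference, the basic inverse probability weighting estimator that the article builds on weights each subject by the inverse of the probability of the treatment actually received. A minimal sketch with known propensity scores (the paper's corrections for irrelevant covariates and measurement error are not reproduced here):

```python
def ipw_ate(treat, outcome, ps):
    """Horvitz-Thompson style IPW estimate of the average treatment
    effect. treat: 0/1 indicators; ps: propensity scores P(T=1 | X)."""
    n = len(treat)
    mu1 = sum(t * y / e for t, y, e in zip(treat, outcome, ps)) / n
    mu0 = sum((1 - t) * y / (1 - e) for t, y, e in zip(treat, outcome, ps)) / n
    return mu1 - mu0
```

With a constant propensity score of 0.5 and balanced groups, the estimate reduces to the simple difference of group means.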
- Saddlepoint approximation p-values of weighted log-rank tests based
on censored clustered data under block Efron’s biased-coin design-
Authors: Haidy A. Newer First page: 465 Abstract: Statistical Methods in Medical Research, Ahead of Print. Clustered survival data frequently occur in biomedical research fields and clinical trials. Log-rank tests are used to compare two independent samples of clustered data. We use the block Efron's biased-coin randomization (design) to assign patients to treatment groups in a clinical trial by forcing a sequential experiment to be balanced. In this article, the p-values of the null permutation distribution of log-rank tests for clustered data are approximated via the double saddlepoint approximation method. Comprehensive numerical studies are carried out to assess the accuracy of the saddlepoint approximation, which demonstrates far greater accuracy than the asymptotic normal approximation. Citation: Statistical Methods in Medical Research PubDate: 2023-01-10T05:31:42Z DOI: 10.1177/09622802221143498
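The accuracy gain of saddlepoint over normal approximations is visible even in a toy setting far simpler than the paper's permutation distribution: the Lugannani-Rice tail formula for a sum of Exp(1) variables, where the exact tail is known. This is an illustrative sketch only; the paper's double saddlepoint for weighted log-rank statistics is more involved.

```python
import math

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def norm_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def saddlepoint_tail_exp_sum(n, s):
    """Lugannani-Rice approximation to P(S_n >= s) for S_n a sum of n
    i.i.d. Exp(1) variables, with CGF K(t) = -log(1 - t)."""
    x = s / n                      # threshold on the mean scale
    t = 1 - 1 / x                  # saddlepoint: K'(t) = 1/(1-t) = x
    K = -math.log(1 - t)
    K2 = 1 / (1 - t) ** 2          # K''(t)
    w = math.copysign(math.sqrt(2 * n * (t * x - K)), t)
    u = t * math.sqrt(n * K2)
    return 1 - norm_cdf(w) + norm_pdf(w) * (1 / u - 1 / w)

def exact_tail_exp_sum(n, s):
    """Exact: S_n ~ Gamma(n, 1), so P(S_n >= s) = e^{-s} sum_{k<n} s^k/k!."""
    return math.exp(-s) * sum(s ** k / math.factorial(k) for k in range(n))

n, s = 10, 15.0
exact = exact_tail_exp_sum(n, s)
sp = saddlepoint_tail_exp_sum(n, s)
normal = 1 - norm_cdf((s - n) / math.sqrt(n))   # CLT approximation
```

With n = 10 and s = 15, the saddlepoint value agrees with the exact tail (about 0.0699) to roughly four decimals, while the normal approximation is off in the second decimal.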
- A novel power prior approach for borrowing historical control data in
clinical trials-
Authors: Yaru Shi, Wen Li, Guanghan (Frank) Liu First page: 493 Abstract: Statistical Methods in Medical Research, Ahead of Print. There has been an increased interest in borrowing information from historical control data to improve the statistical power for hypothesis testing, therefore reducing the required sample sizes in clinical trials. To account for the heterogeneity between the historical and current trials, power priors are often considered to discount the information borrowed from the historical data. However, it can be challenging to choose a fixed power prior parameter in the application. The modified power prior approach, which defines a random power parameter with an initial prior to control the amount of historical information borrowed, may not directly account for heterogeneity between the trials. In this paper, we propose a novel approach to pick a power prior based on some direct measures of distributional differences between historical control data and current control data under normal assumptions. Simulations are conducted to investigate the performance of the proposed approach compared with current approaches (e.g. commensurate prior, meta-analytic-predictive, and modified power prior). The results show that the proposed power prior improves the study power while controlling the type I error within a tolerable limit when the distribution of the historical control data is similar to that of the current control data. The method is developed for both superiority and non-inferiority trials and is illustrated with an example from vaccine clinical trials. Citation: Statistical Methods in Medical Research PubDate: 2023-01-05T07:17:25Z DOI: 10.1177/09622802221146309
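The mechanics of a fixed power prior are easy to see in the simplest conjugate case: the historical likelihood is raised to a power a0 in [0, 1], so a0 = 0 ignores the historical controls and a0 = 1 pools them fully. A minimal sketch for a normal mean with known variance and a flat initial prior (illustrative only, not the authors' selection rule for a0):

```python
def power_prior_posterior(ybar, n, ybar0, n0, a0, sigma2=1.0):
    """Posterior mean and variance of a normal mean with known variance
    sigma2, flat initial prior, current data (ybar, n), and historical
    data (ybar0, n0) down-weighted by power parameter a0 in [0, 1]."""
    w = n + a0 * n0                       # effective total sample size
    mean = (n * ybar + a0 * n0 * ybar0) / w
    var = sigma2 / w
    return mean, var
```

Setting a0 = 0 recovers the current-data-only posterior; a0 = 1 gives the fully pooled analysis.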
- Intervention treatment distributions that depend on the observed treatment
process and model double robustness in causal survival analysis-
Authors: Lan Wen, Julia L. Marcus, Jessica G. Young First page: 509 Abstract: Statistical Methods in Medical Research, Ahead of Print. The generalized g-formula can be used to estimate the probability of survival under a sustained treatment strategy. When treatment strategies are deterministic, estimators derived from the so-called efficient influence function (EIF) for the g-formula will be doubly robust to model misspecification. In recent years, several practical applications have motivated estimation of the g-formula under non-deterministic treatment strategies where treatment assignment at each time point depends on the observed treatment process. In this case, EIF-based estimators may or may not be doubly robust. In this paper, we provide sufficient conditions to ensure the existence of doubly robust estimators for intervention treatment distributions that depend on the observed treatment process for point treatment interventions and give a class of intervention treatment distributions dependent on the observed treatment process that guarantee model doubly and multiply robust estimators in longitudinal settings. Motivated by an application to pre-exposure prophylaxis (PrEP) initiation studies, we propose a new treatment intervention dependent on the observed treatment process. We show there exist (1) estimators that are doubly and multiply robust to model misspecification and (2) estimators that when used with machine learning algorithms can attain fast convergence rates for our proposed intervention. Finally, we explore the finite sample performance of our estimators via simulation studies. Citation: Statistical Methods in Medical Research PubDate: 2023-01-04T08:03:07Z DOI: 10.1177/09622802221146311
- Robust weights that optimally balance confounders for estimating marginal
hazard ratios-
Authors: Michele Santacatterina First page: 524 Abstract: Statistical Methods in Medical Research, Ahead of Print. Covariate balance is crucial in obtaining unbiased estimates of treatment effects in observational studies. Methods that target covariate balance have been successfully proposed and largely applied to estimate treatment effects on continuous outcomes. However, in many medical and epidemiological applications, the interest lies in estimating treatment effects on time-to-event outcomes. With this type of data, one of the most common estimands of interest is the marginal hazard ratio of the Cox proportional hazards model. In this article, we start by presenting robust orthogonality weights, a set of weights obtained by solving a quadratic constrained optimization problem that maximizes precision while constraining covariate balance defined as the correlation between confounders and treatment. By doing so, robust orthogonality weights optimally deal with both binary and continuous treatments. We then evaluate the performance of the proposed weights in estimating marginal hazard ratios of binary and continuous treatments with time-to-event outcomes in a simulation study. We finally apply robust orthogonality weights in the evaluation of the effect of hormone therapy on time to coronary heart disease and of the effect of red meat consumption on time to colon cancer among 24,069 postmenopausal women enrolled in the Women's Health Initiative observational study. Citation: Statistical Methods in Medical Research PubDate: 2023-01-12T08:21:00Z DOI: 10.1177/09622802221146310
- Minimum sample size for developing a multivariable prediction model using
multinomial logistic regression-
Authors: Alexander Pate, Richard D Riley, Gary S Collins, Maarten van Smeden, Ben Van Calster, Joie Ensor, Glen P Martin First page: 555 Abstract: Statistical Methods in Medical Research, Ahead of Print. Aims: Multinomial logistic regression models allow one to predict the risk of a categorical outcome with more than two categories. When developing such a model, researchers should ensure the number of participants (n) is appropriate relative to the number of events and the number of predictor parameters for each category k. We propose three criteria to determine the minimum n required in light of existing criteria developed for binary outcomes.
Proposed criteria: The first criterion aims to minimise the model overfitting. The second aims to minimise the difference between the observed and adjusted Nagelkerke R². The third criterion aims to ensure the overall risk is estimated precisely. For criterion (i), we show the sample size must be based on the anticipated Cox–Snell R² of distinct ‘one-to-one’ logistic regression models corresponding to the sub-models of the multinomial logistic regression, rather than on the overall Cox–Snell R² of the multinomial logistic regression.
Evaluation of criteria: We tested the performance of the proposed criterion (i) through a simulation study and found that it resulted in the desired level of overfitting. Criteria (ii) and (iii) were natural extensions of previously proposed criteria for binary outcomes and did not require evaluation through simulation.
Summary: We illustrate how to implement the sample size criteria through a worked example considering the development of a multinomial risk prediction model for tumour type when presented with an ovarian mass. Code is provided for the simulation and worked example. We will embed our proposed criteria within the pmsampsize R library and Stata modules. Citation: Statistical Methods in Medical Research PubDate: 2023-01-20T07:18:39Z DOI: 10.1177/09622802231151220
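The overfitting criterion extends the binary-outcome sample size formula of Riley et al. (2019), which targets a global shrinkage factor S (typically 0.9) given p predictor parameters and an anticipated Cox–Snell R². A sketch of that binary building block, which the multinomial extension applies to each one-to-one submodel (illustrative; the full multinomial criteria are not reproduced here):

```python
import math

def min_n_shrinkage(p, r2_cs, shrinkage=0.9):
    """Minimum sample size so that the expected uniform shrinkage
    factor is at least `shrinkage` (Riley et al. 2019, criterion 1):
    n >= p / ((S - 1) * ln(1 - R2_CS / S))."""
    return math.ceil(p / ((shrinkage - 1) * math.log(1 - r2_cs / shrinkage)))
```

For example, with p = 10 parameters and an anticipated Cox–Snell R² of 0.2, targeting S = 0.9 requires roughly 400 participants; the requirement scales linearly in p.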
- Taking a chance: How likely am I to receive my preferred treatment in a
clinical trial?-
Authors: Stephen D Walter, Ondrej Blaha, Denise Esserman First page: 572 Abstract: Statistical Methods in Medical Research, Ahead of Print. Researchers should ideally conduct clinical trials under a presumption of clinical equipoise, but in fact trial patients will often prefer one or other of the treatments being compared. Receiving an unblinded preferred treatment may affect the study outcome, possibly beneficially, but receiving a non-preferred treatment may induce ‘reluctant acquiescence’, and poorer outcomes. Even in blinded trials, patients’ primary motivation to enrol may be the chance of potentially receiving a desirable experimental treatment, which is otherwise unavailable. Study designs with a higher probability of receiving a preferred treatment (denoted as ‘concordance’) will be attractive to potential participants, and investigators, because they may improve recruitment and hence enhance study efficiency. Therefore, it is useful to consider the concordance rates associated with various study designs. We consider this question with a focus on comparing the standard, randomised, two-arm, parallel group design with the two-stage randomised patient preference design and Zelen designs; we also mention the fully randomised and partially randomised patient preference designs. For each of these designs, we evaluate the concordance rate as a function of the proportions randomised to the alternative treatments, the distribution of preferences over treatments, and (for the Zelen designs) the proportion of patients who consent to receive their assigned treatment. We also examine the equity of each design, which we define as the similarity between the concordance rates for participants with different treatment preferences. Finally, we contrast each of the alternative designs with the standard design in terms of gain in concordance and change in equity. Citation: Statistical Methods in Medical Research PubDate: 2023-01-11T07:58:35Z DOI: 10.1177/09622802221146305
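A simplified reading of "concordance" in the two easiest cases (hypothetical helpers, not the authors' full framework): under simple randomisation a patient with a preference receives it with probability equal to the allocation fraction, while in a two-stage patient preference design a fraction pi of patients is first randomised to a "choice" arm and is then always concordant.

```python
def concordance_standard(p_random=0.5):
    """P(receiving the preferred treatment) under simple randomisation:
    the allocation fraction of the preferred arm."""
    return p_random

def concordance_two_stage(pi, p_random=0.5):
    """Two-stage patient preference design: with probability pi the
    patient chooses (always concordant); otherwise the patient is
    randomised as in the standard design."""
    return pi + (1 - pi) * p_random
```

So randomising half the patients to the choice arm raises concordance from 0.5 to 0.75, which illustrates why such designs can aid recruitment.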
- Flexible modeling of multiple nonlinear longitudinal trajectories with
censored and non-ignorable missing outcomes-
Authors: Tsung-I Lin, Wan-Lun Wang First page: 593 Abstract: Statistical Methods in Medical Research, Ahead of Print. Multivariate nonlinear mixed-effects models (MNLMMs) have become a promising tool for analyzing multi-outcome longitudinal data following nonlinear trajectory patterns. However, such a classical analysis can be challenging due to censorship induced by detection limits of the quantification assay or non-response occurring when participants missed scheduled visits intermittently or discontinued participation. This article proposes an extension of the MNLMM approach, called the MNLMM-CM, by taking the censored and non-ignorable missing responses into account simultaneously. The non-ignorable missingness is described by the selection-modeling factorization to tackle the missing not at random mechanism. A Monte Carlo expectation conditional maximization algorithm coupled with the first-order Taylor approximation is developed for parameter estimation. The techniques for the calculation of standard errors of fixed effects, estimation of unobservable random effects, imputation of censored and missing responses and prediction of future values are also provided. The proposed methodology is motivated and illustrated by the analysis of a clinical HIV/AIDS dataset with censored RNA viral loads and the presence of missing CD4 and CD8 cell counts. The superiority of our method on the provision of more adequate estimation is validated by a simulation study. Citation: Statistical Methods in Medical Research PubDate: 2023-01-10T05:33:12Z DOI: 10.1177/09622802221146312
- Divided-and-combined omnibus test for genetic association analysis with
high-dimensional data-
Authors: Jinjuan Wang, Zhenzhen Jiang, Hongping Guo, Zhengbang Li First page: 626 Abstract: Statistical Methods in Medical Research, Ahead of Print. Advances in biologic technology enable researchers to obtain a huge amount of genetic and genomic data, whose dimensions are often quite high on both phenotypes and variants. Testing their association with multiple phenotypes has been a hot topic in recent years. Traditional single-phenotype, multiple-variant analysis has to be adjusted for multiple testing and thus suffers from substantial power loss due to ignoring correlation across phenotypes. The similarity-based method, which uses the trace of the product of two similarity matrices as a test statistic, has emerged as a useful tool to handle this problem. However, it loses power when the correlation among phenotypes is moderate or strong, because some signals represented by the eigenvalues of the phenotypic similarity matrix are masked by others. We propose a divided-and-combined omnibus test to handle this drawback of the similarity-based method. Based on the divided-and-combined strategy, we first divide signals into two groups at a series of cut points according to eigenvalues of the phenotypic similarity matrix and then combine the analysis results via the Cauchy combination method to reach a final statistic. Extensive simulations and application to a pig dataset demonstrate that the proposed statistic is much more powerful and robust than the original test under most of the considered scenarios, and sometimes the power increase can be more than 0.6. The divided-and-combined omnibus test facilitates genetic association analysis with high-dimensional data and achieves much higher power than the existing similarity-based method. In fact, the divided-and-combined omnibus test can be used whenever an association analysis between two multivariate variables needs to be conducted.
Citation: Statistical Methods in Medical Research PubDate: 2023-01-18T07:29:51Z DOI: 10.1177/09622802231151204
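The Cauchy combination step referenced above is the standard rule of Liu and Xie: transform each p-value through the tangent function, average, and map back through the Cauchy distribution. A minimal sketch (the paper's eigenvalue-based splitting is not reproduced):

```python
import math

def cauchy_combination(pvals, weights=None):
    """Cauchy combination test: combine p-values into a single p-value.
    Heavy Cauchy tails make the result insensitive to dependence."""
    if weights is None:
        weights = [1.0 / len(pvals)] * len(pvals)
    t = sum(w * math.tan((0.5 - p) * math.pi) for w, p in zip(weights, pvals))
    return 0.5 - math.atan(t) / math.pi
```

A single very small p-value dominates the sum, so the combined p-value stays small even when the other components are unremarkable.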
- An overview of propensity score matching methods for clustered data
-
Authors: Benjamin Langworthy, Yujie Wu, Molin Wang Abstract: Statistical Methods in Medical Research, Ahead of Print. Propensity score matching is commonly used in observational studies to control for confounding and estimate the causal effects of a treatment or exposure. Frequently, in observational studies data are clustered, which adds to the complexity of using propensity score techniques. In this article, we give an overview of propensity score matching methods for clustered data, and highlight how propensity score matching can be used to account for not just measured confounders, but also unmeasured cluster-level confounders. We also consider using machine learning methods such as generalized boosted models to estimate the propensity score and show that accounting for clustering when using these methods can greatly reduce the performance, particularly when there are a large number of clusters and a small number of subjects per cluster. In order to get around this we highlight scenarios where it may be possible to control for measured covariates using propensity score matching, while using fixed effects regression in the outcome model to control for cluster-level covariates. Using simulation studies we compare the performance of different propensity score matching methods for clustered data across a number of different settings. Finally, as an illustrative example we apply propensity score matching methods for clustered data to study the causal effect of aspirin on hearing deterioration using data from the Conservation of Hearing Study. Citation: Statistical Methods in Medical Research PubDate: 2022-11-25T08:59:02Z DOI: 10.1177/09622802221133556
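Within-cluster matching is one of the strategies such overviews cover. A minimal sketch of 1:1 greedy nearest-neighbour matching on the propensity score, performed separately within each cluster (illustrative; propensity scores are assumed already estimated, and production code would add a caliper):

```python
from collections import defaultdict

def match_within_clusters(units):
    """units: list of (id, cluster, treated, ps) tuples. Returns a list
    of (treated_id, control_id) pairs matched greedily on the absolute
    propensity score difference, within each cluster only."""
    by_cluster = defaultdict(lambda: ([], []))
    for uid, cl, treated, ps in units:
        by_cluster[cl][0 if treated else 1].append((uid, ps))
    pairs = []
    for treat, ctrl in by_cluster.values():
        ctrl = list(ctrl)
        # Greedy: match each treated unit to the closest unused control.
        for uid, ps in sorted(treat, key=lambda u: u[1]):
            if not ctrl:
                break
            j = min(range(len(ctrl)), key=lambda k: abs(ctrl[k][1] - ps))
            pairs.append((uid, ctrl.pop(j)[0]))
    return pairs
```

Restricting candidate controls to the treated unit's own cluster is what lets matching absorb unmeasured cluster-level confounding, at the cost of unmatched units in clusters lacking controls.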
- A dose–effect network meta-analysis model with application in
antidepressants using restricted cubic splines-
Authors: Tasnim Hamza, Toshi A Furukawa, Nicola Orsini, Andrea Cipriani, Cynthia P Iglesias, Georgia Salanti Abstract: Statistical Methods in Medical Research, Ahead of Print. Network meta-analysis has been used to answer a range of clinical questions about the preferred intervention for a given condition. Although the effectiveness and safety of pharmacological agents depend on the dose administered, network meta-analysis applications typically ignore the role that drug dosage plays in the results. This leads to more heterogeneity in the network. In this paper, we present a suite of network meta-analysis models that incorporate the dose–effect relationship using restricted cubic splines. We extend existing models into a dose–effect network meta-regression to account for study-level covariates and for groups of agents in a class-effect dose–effect network meta-analysis model. We apply our models to a network of aggregate data about the efficacy of 21 antidepressants and placebo for depression. We find that all antidepressants are more efficacious than placebo after a certain dose. Also, we identify the dose level at which each antidepressant's effect exceeds that of placebo and estimate the dose beyond which the effect of antidepressants no longer increases. When covariates are introduced to the model, we find that studies with small sample size tend to exaggerate antidepressants' efficacy for several of the drugs. Our dose–effect network meta-analysis model with restricted cubic splines provides a flexible approach to modelling the dose–effect relationship in multiple interventions. Decision-makers can use our model to inform treatment choice. Citation: Statistical Methods in Medical Research PubDate: 2022-02-24T04:44:34Z DOI: 10.1177/09622802211070256
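Restricted cubic splines constrain the fitted dose–effect curve to be linear beyond the boundary knots. A sketch of one common (Harrell-style, unnormalised) basis for k knots, giving a linear term plus k-2 nonlinear terms (illustrative of the spline machinery only, not the meta-analysis model):

```python
def rcs_basis(x, knots):
    """Restricted cubic spline basis at x for the given sorted knots:
    returns [x, s_1(x), ..., s_{k-2}(x)], with each s_j linear beyond
    the boundary knots (unnormalised Harrell form)."""
    k = len(knots)
    t = knots

    def pos3(u):
        # truncated cubic: (u)_+^3
        return max(u, 0.0) ** 3

    basis = [x]
    for j in range(k - 2):
        term = (pos3(x - t[j])
                - pos3(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
                + pos3(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2]))
        basis.append(term)
    return basis
```

Below the first knot every nonlinear term is exactly zero, and above the last knot the cubic and quadratic coefficients cancel by construction, so each term continues as a straight line.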
- Shotgun-2: A Bayesian phase I/II basket trial design to identify
indication-specific optimal biological doses-
Authors: Xin Chen, Jingyi Zhang, Liyun Jianga, Fangrong Yan First page: 443 Abstract: Statistical Methods in Medical Research, Ahead of Print. For novel molecularly targeted agents and immunotherapies, the objective of dose-finding is often to identify the optimal biological dose, rather than the maximum tolerated dose. However, optimal biological doses may not be the same for different indications, challenging the traditional dose-finding framework. Therefore, we proposed a Bayesian phase I/II basket trial design, named “shotgun-2,” to identify indication-specific optimal biological doses. A dose-escalation part is conducted in stage I to identify the maximum tolerated dose and admissible dose sets. In stage II, dose optimization is performed incorporating both toxicity and efficacy for each indication. Simulation studies under both fixed and random scenarios show that, compared with the traditional “phase I + cohort expansion” design, the shotgun-2 design is robust and can improve the probability of correctly selecting the optimal biological doses. Furthermore, this study provides a useful tool for identifying indication-specific optimal biological doses and accelerating drug development. Citation: Statistical Methods in Medical Research PubDate: 2022-10-11T07:56:42Z DOI: 10.1177/09622802221129049
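The admissible-set-then-optimise logic of stage II can be illustrated with a toy sketch. This is not the shotgun-2 design itself — the published method uses Bayesian posterior estimates of toxicity and efficacy per indication — but it shows the selection rule's shape; the toxicity limit and the utility function here are illustrative assumptions.

```python
def select_obd(p_tox, p_eff, tox_limit=0.3, w_tox=1.0):
    """Toy optimal-biological-dose selection for one indication.

    Restrict to an admissible set of doses whose estimated toxicity
    probability is within the limit, then pick the admissible dose
    maximising a simple efficacy-minus-weighted-toxicity utility.
    Returns the selected dose index, or None if no dose is acceptable.
    """
    admissible = [i for i, t in enumerate(p_tox) if t <= tox_limit]
    if not admissible:
        return None  # trial would stop: no acceptable dose
    return max(admissible, key=lambda i: p_eff[i] - w_tox * p_tox[i])
```

Running this per indication mirrors how a basket design can arrive at different optimal biological doses for different indications.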
- Bivariate joint models for survival and change of cognitive function
-
Authors: Shengning Pan, Ardo van den Hout First page: 474 Abstract: Statistical Methods in Medical Research, Ahead of Print. Changes in cognitive function over time are of interest in ageing research. A joint model is constructed to investigate such changes. Generally, cognitive function is measured through more than one test, and the test scores are integers. The aim is to investigate two test scores and use an extension of a bivariate binomial distribution to define a new joint model. This bivariate distribution models the correlation between the two test scores. To deal with attrition due to death, the Weibull hazard model and the Gompertz hazard model are used. A shared random-effects model is constructed, and the random effects are assumed to follow a bivariate normal distribution. It is shown how to incorporate random effects that link the bivariate longitudinal model and the survival model. The joint model is applied to the English Longitudinal Study of Ageing data. Citation: Statistical Methods in Medical Research PubDate: 2022-12-27T05:45:44Z DOI: 10.1177/09622802221146307
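The survival side of such a joint model is tractable in closed form. As a minimal sketch (not the paper's implementation), the Gompertz hazard h(t) = λ·exp(γt) has survival function S(t) = exp(−λ(e^{γt} − 1)/γ), which can be inverted to simulate death times; in a shared random-effects joint model, λ would additionally be scaled by exp(αb) for a subject-level random effect b. The function name and parameterisation below are illustrative.

```python
import math

def gompertz_time(u, lam, gamma):
    """Draw an event time from a Gompertz hazard h(t) = lam * exp(gamma * t)
    by inverse-transform sampling: solve S(t) = u for t, where
    S(t) = exp(-lam * (exp(gamma * t) - 1) / gamma) and u ~ Uniform(0, 1).
    """
    return math.log(1.0 - gamma * math.log(u) / lam) / gamma
```

Simulating attrition this way, conditional on the shared random effects, is a standard building block when checking a joint model for longitudinal scores subject to death.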
- A generalization of moderated statistics to data adaptive semiparametric
estimation in high-dimensional biology-
Authors: Nima S Hejazi, Philippe Boileau, Mark J van der Laan, Alan E Hubbard First page: 539 Abstract: Statistical Methods in Medical Research, Ahead of Print. The widespread availability of high-dimensional biological data has made the simultaneous screening of many biological characteristics a central problem in computational and high-dimensional biology. As the dimensionality of datasets continues to grow, so too does the complexity of identifying biomarkers linked to exposure patterns. The statistical analysis of such data often relies upon parametric modeling assumptions motivated by convenience, inviting opportunities for model misspecification. While estimation frameworks incorporating flexible, data adaptive regression strategies can mitigate this, their standard variance estimators are often unstable in high-dimensional settings, resulting in inflated Type-I error even after standard multiple testing corrections. We adapt a shrinkage approach compatible with parametric modeling strategies to semiparametric variance estimators of a family of efficient, asymptotically linear estimators of causal effects, defined by counterfactual exposure contrasts. Augmenting the inferential stability of these estimators in high-dimensional settings yields a data adaptive approach for robustly uncovering stable causal associations, even when sample sizes are limited. Our generalized variance estimator is evaluated against appropriate alternatives in numerical experiments, and an open source R/Bioconductor package, biotmle, is introduced. The proposal is demonstrated in an analysis of high-dimensional DNA methylation data from an observational study on the epigenetic effects of tobacco smoking. Citation: Statistical Methods in Medical Research PubDate: 2022-12-27T06:27:43Z DOI: 10.1177/09622802221146313
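The moderated-statistics idea being generalised here is, at its core, an empirical-Bayes compromise between each feature's own variance estimate and a prior shared across features. A minimal limma-style sketch is below; in limma the prior variance and prior degrees of freedom are estimated from the data, whereas here they are taken as given, and the paper's contribution is applying this shrinkage to semiparametric variance estimators of causal-effect estimators rather than to parametric model variances.

```python
def moderated_variance(s2, df, prior_s2, prior_df):
    """Empirical-Bayes shrinkage of per-feature sample variances.

    Each variance in s2 (estimated with df residual degrees of freedom)
    is pulled toward the prior variance, weighted by the respective
    degrees of freedom:
        s2_tilde = (prior_df * prior_s2 + df * s2) / (prior_df + df)
    Features with unstable variance estimates are shrunk the most.
    """
    return [(prior_df * prior_s2 + df * v) / (prior_df + df) for v in s2]
```

With equal prior and residual degrees of freedom, each moderated variance is the midpoint of the raw estimate and the prior, which stabilises test statistics for features whose raw variances are near zero.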
- Generalised pairwise comparisons for trend: An extension to the win ratio
and win odds for dose-response and prognostic variable analysis with arbitrary statements of outcome preference-
Authors: Hannah Johns, Bruce Campbell, Julie Bernhardt, Leonid Churilov First page: 609 Abstract: Statistical Methods in Medical Research, Ahead of Print. The win ratio is a novel approach for handling complex patient outcomes that has seen considerable interest in the medical statistics literature, and operates by considering all-to-all pairwise statements of preference on outcomes. Recent extensions to the method have focused on the two-group case, with few developments made for considering the impact of a well-ordered explanatory variable, which would allow for dose-response analysis or the analysis of links between complex patient outcomes and prognostic variables. Where such methods have been developed, they are semiparametric methods that can only be applied to survival outcomes. In this article, we introduce the generalised pairwise comparison for trend, a modified form of Agresti’s generalised odds ratio. This approach is capable of considering arbitrary statements of preference, thus enabling its use across all types of outcome data. We provide a simulation study validating the approach and illustrate it with three clinical applications in stroke research. Citation: Statistical Methods in Medical Research PubDate: 2022-12-27T06:26:23Z DOI: 10.1177/09622802221146306
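The two-group quantities that the trend method generalises can be sketched directly from their definitions. The following is a minimal illustration (not the authors' implementation) of all-to-all pairwise comparisons under an arbitrary preference rule, yielding the win ratio and the win odds; the default higher-is-better rule is an assumption, and any statement of preference on complex outcomes can be plugged in.

```python
def win_stats(treatment, control, prefer=lambda a, b: (a > b) - (a < b)):
    """All-to-all pairwise comparisons between two outcome samples.

    prefer(a, b) returns 1 if outcome a is preferred to b, -1 if b is
    preferred, and 0 for a tie; the default treats larger scalar values
    as better. Returns (wins, losses, ties, win_ratio, win_odds), where
        win_ratio = wins / losses              (assumes >= 1 loss)
        win_odds  = (wins + ties/2) / (losses + ties/2)
    """
    wins = losses = ties = 0
    for a in treatment:
        for b in control:
            p = prefer(a, b)
            if p > 0:
                wins += 1
            elif p < 0:
                losses += 1
            else:
                ties += 1
    win_ratio = wins / losses
    win_odds = (wins + 0.5 * ties) / (losses + 0.5 * ties)
    return wins, losses, ties, win_ratio, win_odds
```

For a trend analysis across an ordered explanatory variable, the same preference rule is applied to pairs that differ on that variable, in the spirit of Agresti's generalised odds ratio.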