Lifetime Data Analysis
Journal Prestige (SJR): 0.985
Citation Impact (citeScore): 1
Number of Followers: 7  
 
  Hybrid journal (it can contain Open Access articles)
ISSN (Print) 1380-7870 - ISSN (Online) 1572-9249
Published by Springer-Verlag
  • Bias correction via outcome reassignment for cross-sectional data with
           binary disease outcome

      Abstract: Cross-sectionally sampled data with a binary disease outcome are commonly analyzed in observational studies to identify the relationship between covariates and disease outcome. A cross-sectional population is defined as a population of living individuals at the sampling or observation time. It is generally understood that a binary disease outcome from cross-sectional data contains less information than longitudinally collected time-to-event data, but there is insufficient understanding of whether bias can exist in cross-sectional data and how such bias relates to the population risk of interest. Wang and Yang (2021) described the complexity and bias in cross-sectional data with binary disease outcome, with detailed analytical exploration of the data structure. Because the distribution of the cross-sectional binary outcome is quite different from the population risk distribution, bias can arise when cross-sectional data analysis is used to draw inference about population risk. In this paper we argue that the commonly adopted age-specific risk probability is biased for estimating population risk and propose an outcome reassignment (OR) approach which reassigns a portion of the observed binary outcomes, 0 or 1, to the other disease category. A sign test and a semiparametric pseudo-likelihood method are developed for analyzing cross-sectional data with the OR approach. Simulations and an analysis based on Alzheimer’s disease data are presented to illustrate the proposed methods.
      PubDate: 2022-10-01
       
  • Marker-dependent observation and carry-forward of internal covariates in
           Cox regression

      Abstract: Studies of chronic disease often involve modeling the relationship between marker processes and disease onset or progression. The Cox regression model is perhaps the most common and convenient approach to analysis in this setting. In most cohort studies, however, biospecimens and biomarker values are only measured intermittently (e.g. at clinic visits), so Cox models often treat biomarker values as fixed at their most recently observed values until they are updated at the next visit. We consider the implications of this convention for the limiting values of regression coefficient estimators when the marker values themselves affect the intensity of clinic visits. A joint multistate model is described for the marker-failure-visit process which can be fitted to mitigate this bias, and an expectation-maximization algorithm is developed. An application to data from a registry of patients with psoriatic arthritis is given for illustration. (A schematic sketch of the carry-forward convention follows this entry.)
      PubDate: 2022-10-01
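
      The following minimal sketch (not the authors' joint multistate model; the data and column names are hypothetical) illustrates the carry-forward convention discussed above: intermittently observed biomarker values are expanded into (start, stop] intervals with the most recent value carried forward, the usual input format for a time-varying Cox analysis.

        import pandas as pd

        visits = pd.DataFrame({            # one row per clinic visit
            "id":     [1, 1, 1],
            "time":   [0.0, 1.2, 2.5],     # visit times
            "marker": [3.1, 4.0, 5.2],     # biomarker measured at the visit
        })
        followup = pd.DataFrame({"id": [1], "event_time": [3.0], "status": [1]})

        rows = []
        for pid, grp in visits.sort_values("time").groupby("id"):
            end = followup.loc[followup.id == pid, "event_time"].item()
            status = followup.loc[followup.id == pid, "status"].item()
            times = list(grp["time"]) + [end]
            for (t0, t1), m in zip(zip(times[:-1], times[1:]), grp["marker"]):
                rows.append({"id": pid, "start": t0, "stop": t1,
                             "marker": m,                        # carried forward
                             "event": int(bool(status) and t1 == end)})
        long_format = pd.DataFrame(rows)   # counting-process format
        print(long_format)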
       
  • On the targets of inference with multivariate failure time data

      Abstract: There are several different topics that can be addressed with multivariate failure time regression data, and data analysis methods are needed that are suited to each such topic. Specifically, marginal hazard rate models are well suited to the analysis of exposures or treatments in relation to individual failure time outcomes, when failure time dependencies are themselves of little or no interest. On the other hand, semiparametric copula models are well suited to analyses where interest focuses primarily on the magnitude of dependencies between failure times. These models overlap with frailty models, which seem best suited to exploring the details of failure time clustering. Recently proposed multivariate marginal hazard methods, in turn, are well suited to the exploration of exposures or treatments in relation to single, pairwise, and higher dimensional hazard rates. Here these methods will be briefly described, and the final method will be illustrated using the Women’s Health Initiative hormone therapy trial data.
      PubDate: 2022-10-01
       
  • Flexible two-piece distributions for right censored survival data

      Abstract: An important complexity in censored data is that only partial information on the variables of interest is observed. In recent years, a large family of asymmetric distributions and maximum likelihood estimation of the parameters in that family have been studied in the complete-data case. In this paper, we exploit the appealing family of quantile-based asymmetric distributions to obtain flexible distributions for modelling right censored survival data. The flexible distributions can be generated using a variety of symmetric distributions and monotonic link functions. The interesting feature of this family is that the location parameter coincides with an index-parameter quantile of the distribution. This family is also suitable for characterizing different shapes of the hazard function (constant, increasing, decreasing, bathtub, and upside-down bathtub or unimodal shapes). Statistical inference is carried out for the whole family of distributions. The parameter estimation is performed by optimizing a non-differentiable likelihood function. The asymptotic properties of the estimators are established. The finite-sample performance of the proposed method and the impact of censoring are investigated via simulations. Finally, the methodology is illustrated on two real data examples (times to weaning in breast-fed data and German Breast Cancer data). (A generic two-piece construction and the censored likelihood are sketched after this entry.)
      PubDate: 2022-09-20
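
      For orientation only, a generic two-piece construction with the quantile property mentioned in the abstract (a sketch of the general idea built from a symmetric reference density g, not necessarily the paper's exact parametrization), together with the standard right-censored likelihood, can be written as

        f_\alpha(x;\eta,\phi) = \frac{2\alpha(1-\alpha)}{\phi}
          \begin{cases}
            g\big((1-\alpha)(\eta-x)/\phi\big), & x \le \eta,\\
            g\big(\alpha(x-\eta)/\phi\big),     & x > \eta,
          \end{cases}
        \qquad \Pr(X \le \eta) = \alpha,

        L(\theta) = \prod_{i=1}^{n} f_\alpha(t_i;\theta)^{\delta_i}\,
                    S_\alpha(t_i;\theta)^{1-\delta_i},

      where \delta_i is the censoring indicator and S_\alpha the survival function corresponding to f_\alpha; by construction the location parameter \eta is the \alpha-quantile.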
       
  • A general class of promotion time cure rate models with a new biological
           interpretation

      Abstract: Over the last decades, the challenges in survival models have been changing considerably, and full probabilistic modeling is crucial in many medical applications. Motivated by a new biological interpretation of cancer metastasis, we introduce a general method for obtaining more flexible cure rate models. The proposed model extends the promotion time cure rate model. Furthermore, it includes several well-known models as special cases and defines many new special models. We derive several properties of the hazard function for the proposed model and establish mathematical relationships with the promotion time cure rate model. We take a frequentist approach to inference, and the maximum likelihood method is employed to estimate the model parameters. Simulation studies are conducted to evaluate its performance, with a discussion of the obtained results. A real dataset from a population-based study of incident cases of melanoma diagnosed in the state of São Paulo, Brazil, is discussed in detail. (The classical promotion time cure rate model is recalled after this entry.)
      PubDate: 2022-09-16
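
      For reference, the classical promotion time cure rate model that the proposed class generalizes specifies the population survival function as

        S_{\mathrm{pop}}(t) = \exp\{-\theta F(t)\},
        \qquad \lim_{t\to\infty} S_{\mathrm{pop}}(t) = e^{-\theta},

      where \theta is the mean number of latent competing causes (e.g., initiated metastatic cells) and F is a proper distribution function for their promotion times; the limit e^{-\theta} is the cured fraction.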
       
  • Special issue dedicated to David Oakes

      PubDate: 2022-09-06
       
  • Joint modeling of generalized scale-change models for recurrent event and
           failure time data

      Abstract: Recurrent event and failure time data arise frequently in many clinical and observational studies. In this article, we propose joint modeling of generalized scale-change models for the recurrent event process and the failure time, allowing the two processes to be correlated through a shared frailty. The proposed joint model is flexible in that it requires neither the Poisson assumption for the recurrent event process nor a parametric assumption on the frailty distribution. Estimating equation approaches are developed for parameter estimation, and the asymptotic properties of the resulting estimators are established. Simulation studies are conducted to evaluate the finite sample performance of the proposed method. An application to a medical cost study of chronic heart failure patients is provided.
      PubDate: 2022-09-06
       
  • Choice of time scale for analysis of recurrent events data

      Abstract: Recurrent events refer to events that can occur several times over time for each individual. Full use of such data in a clinical trial requires a method that addresses the dependence between events. For modelling this dependence, there are two time scales to consider, namely time since the start of the study (running time) or time since the most recent event (gap time). In the multi-state setup, it is possible to estimate parameters also in the case where the hazard model allows for an effect of both time scales, making this an extremely flexible approach. However, for summarizing the effect of a treatment in a transparent and informative way, the choice of time scale and model requires much more care. This paper discusses these choices from both a theoretical and a practical point of view. This is supported by a simulation study showing that in a frailty model with assumptions covered by both time scales, the gap time approach may give misleading results. A literature dataset is used to illustrate the issues. (The two time scales are illustrated in a short sketch after this entry.)
      PubDate: 2022-08-15
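
      A minimal illustration of the two time scales, using hypothetical event times for a single subject:

        import numpy as np

        event_times = np.array([2.0, 5.5, 6.0, 9.0])         # times since study start
        running_time = event_times                           # calendar/running scale
        gap_time = np.diff(np.insert(event_times, 0, 0.0))   # clock resets at each event
        print(running_time)   # [2.  5.5 6.  9. ]
        print(gap_time)       # [2.  3.5 0.5 3. ]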
       
  • Assessing dynamic covariate effects with survival data

      Abstract: Dynamic (or varying) covariate effects often manifest meaningful physiological mechanisms underlying chronic diseases. However, a static view of covariate effects is typically adopted by standard approaches to evaluating disease prognostic factors, which can result in depreciation of some important disease markers. To address this issue, in this work, we take the perspective of globally concerned quantile regression and propose a flexible testing framework suited to assessing either constant or dynamic covariate effects. We study the powerful Kolmogorov–Smirnov (K–S) and Cramér–von Mises (C–V) type test statistics and develop a simple resampling procedure to tackle their complicated limit distributions. We provide rigorous theoretical results, including the limit null distributions and consistency under a general class of alternative hypotheses for the proposed tests, as well as justification of the presented resampling procedure. Extensive simulation studies and a real data example demonstrate the utility of the new testing procedures and their advantages over existing approaches in assessing dynamic covariate effects. (The two types of functionals are sketched after this entry.)
      PubDate: 2022-08-13
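
      The following schematic (not the authors' exact statistics, which are built from a quantile regression coefficient process and a resampling approximation) shows the two kinds of functionals: a sup-type (K-S) and an integrated-square (C-V) measure of departure of an effect curve from constancy.

        import numpy as np

        taus = np.linspace(0.1, 0.9, 81)          # grid of quantile levels
        beta_hat = 0.5 + 0.3 * taus               # hypothetical varying effect
        deviation = beta_hat - beta_hat.mean()    # departure from a constant effect

        ks_stat = np.max(np.abs(deviation))       # sup-type (K-S) functional
        dtau = taus[1] - taus[0]
        cv_stat = np.sum(deviation ** 2) * dtau   # integrated-square (C-V) functional
        print(ks_stat, cv_stat)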
       
  • Semiparametric single-index models for optimal treatment regimens with
           censored outcomes

      Abstract: There is growing interest in precision medicine, where a potentially censored survival time is often the most important outcome of interest. To discover optimal treatment regimens for such an outcome, we propose a semiparametric proportional hazards model that incorporates the interaction between treatment and a single index of covariates through an unknown monotone link function. This model is flexible enough to allow non-linear treatment-covariate interactions and yet provides a clinically interpretable linear rule for treatment decisions. We propose a sieve maximum likelihood estimation approach, under which the baseline hazard function is estimated nonparametrically and the unknown link function is estimated via monotone quadratic B-splines. We show that the resulting estimators are consistent and asymptotically normal with a covariance matrix that attains the semiparametric efficiency bound. The optimal treatment rule follows naturally as a linear combination of the maximum likelihood estimators of the model parameters. Through extensive simulation studies and an application to an AIDS clinical trial, we demonstrate that the treatment rule derived from the single-index model outperforms the treatment rule under the standard Cox proportional hazards model. (A toy version of such a linear rule appears after this entry.)
      PubDate: 2022-08-08
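
      A toy version of the linear decision rule described above (the coefficients and covariates are hypothetical; in the paper the index is estimated by sieve maximum likelihood):

        import numpy as np

        beta_hat = np.array([0.8, -0.5, 0.3])      # estimated index coefficients
        X_new = np.array([[1.2, 0.7, -0.4],
                          [0.1, 1.5,  0.9]])       # covariates of two new patients
        index = X_new @ beta_hat                   # single index per patient
        treat = index > 0                          # rule: treat when the index is positive
        print(index, treat)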
       
  • Median regression models for clustered, interval-censored survival data -
           An application to prostate surgery study

      Abstract: Genitourinary surgeons and oncologists are particularly interested in whether robotic surgery improves times to Prostate Specific Antigen (PSA) recurrence compared to non-robotic surgery for removing the cancerous prostate. Time to PSA recurrence is an example of a survival time that is typically interval-censored between two consecutive clinical inspections with opposite test results. In addition, the success of medical devices and technologies often depends on factors such as the experience and skill level of the medical service providers, thus leading to clustering of these survival times. For analyzing the effects of surgery type and other covariates on the median of clustered interval-censored times to post-surgery PSA recurrence, we present three competing novel models and associated frequentist and Bayesian analyses. The first model is based on a transform-both-sides model of survival time, with Gaussian random effects to account for the within-cluster association. Our second model assumes an approximate marginal Laplace distribution for the transformed log-survival times, with a Gaussian copula to accommodate clustering. Our third model is a special case of the second model, with a Laplace distribution for the marginal log-survival times and a Gaussian copula for the within-cluster association. Simulation studies establish the second model to be highly robust against extreme observations when estimating median regression coefficients. We provide a comprehensive comparison among these three competing models based on their model properties and the computational ease of their frequentist and Bayesian analyses. We also illustrate the practical implementation and use of these methods via analysis of a simulated clustered interval-censored dataset similar in design to a post-surgery PSA recurrence study.
      PubDate: 2022-08-07
       
  • Double bias correction for high-dimensional sparse additive hazards
           regression with covariate measurement errors

      Abstract: We propose an inferential procedure for additive hazards regression with high-dimensional survival data, where the covariates are prone to measurement errors. We develop a double bias correction method by first correcting the bias arising from measurement errors in covariates through an estimating function for the regression parameter. By adopting a convex relaxation technique, a regularized estimator for the regression parameter is obtained by carefully designing a feasible loss based on the estimating function, which is solved via linear programming. Using Neyman orthogonality, we propose an asymptotically unbiased estimator which further corrects the bias caused by the convex relaxation and regularization. We derive the convergence rate of the proposed estimator and establish asymptotic normality for the low-dimensional parameter estimator and linear combinations thereof, accompanied by a consistent estimator of the variance. Numerical experiments are carried out on both simulated and real datasets to demonstrate the promising performance of the proposed double bias correction method.
      PubDate: 2022-07-22
      DOI: 10.1007/s10985-022-09568-2
       
  • Semiparametric regression analysis of doubly-censored data with
           applications to incubation period estimation

      Abstract: The incubation period is a key characteristic of an infectious disease. In the outbreak of a novel infectious disease, accurate evaluation of the incubation period distribution is critical for designing effective prevention and control measures. Estimation of the incubation period distribution based on limited information from retrospective inspection of infected cases is highly challenging due to censoring and truncation. In this paper, we consider a semiparametric regression model for the incubation period and propose a sieve maximum likelihood approach for estimation based on the symptom onset time, travel history, and basic demographics of reported cases. The approach properly accounts for the pandemic growth and selection bias in data collection. We also develop an efficient computation method and establish the asymptotic properties of the proposed estimators. We demonstrate the feasibility and advantages of the proposed methods through extensive simulation studies and provide an application to a dataset on the outbreak of COVID-19.
      PubDate: 2022-07-13
      DOI: 10.1007/s10985-022-09567-3
       
  • On logistic regression with right censored data, with or without competing
           risks, and its use for estimating treatment effects

      Abstract: Simple logistic regression can be adapted to deal with right censoring by inverse probability of censoring weighting (IPCW). We here compare two such IPCW approaches, one based on weighting the outcome, the other based on weighting the estimating equations. We study the large-sample properties of the two approaches and show that which of the two weighting methods is the more efficient depends on the censoring distribution. We show by theoretical computations that the methods can be surprisingly different in realistic settings. We further show how to use the two weighting approaches for logistic regression to estimate causal treatment effects, for both observational studies and randomized clinical trials (RCTs). Several estimators for observational studies are compared and we present an application to registry data. We also revisit interesting robustness properties of logistic regression in the context of RCTs, with a particular focus on the IPCW weighting. We find that these robustness properties still hold when the censoring weights are correctly specified, but not necessarily otherwise. (A schematic IPCW-weighted logistic regression is sketched after this entry.)
      PubDate: 2022-07-07
      DOI: 10.1007/s10985-022-09564-6
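
      The sketch below illustrates the general IPCW idea on simulated data: the binary outcome Y = 1{T <= t0} is fitted by logistic regression with inverse-probability-of-censoring weights built from a Kaplan-Meier estimate of the censoring distribution. It is a schematic only, not the paper's exact estimators or their efficiency comparison; all data and tuning choices are hypothetical.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def km_censoring_survival(time, delta):
            # Kaplan-Meier estimate of the censoring survival G(t) = P(C > t),
            # ignoring ties for simplicity.
            order = np.argsort(time)
            t, cens = time[order], 1 - delta[order]          # censoring "events"
            at_risk = len(t) - np.arange(len(t))
            factors = np.where(cens == 1, 1.0 - 1.0 / at_risk, 1.0)
            surv = np.cumprod(factors)
            return lambda s: np.array([surv[t <= u][-1] if np.any(t <= u) else 1.0
                                       for u in np.atleast_1d(s)])

        rng = np.random.default_rng(0)
        n = 500
        X = rng.normal(size=(n, 2))
        T = rng.exponential(np.exp(-0.5 * X[:, 0]))          # hypothetical event times
        C = rng.exponential(2.0, size=n)                     # censoring times
        time, delta = np.minimum(T, C), (T <= C).astype(int)

        t0 = 1.0                                             # time horizon of interest
        G = km_censoring_survival(time, delta)
        known = (time > t0) | (delta == 1)                   # status at t0 is known
        w = 1.0 / G(np.minimum(time[known], t0))             # IPCW weights
        y = ((time <= t0) & (delta == 1)).astype(int)        # Y = 1{T <= t0}

        fit = LogisticRegression().fit(X[known], y[known], sample_weight=w)
        print(fit.coef_, fit.intercept_)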
       
  • Accounting for delayed entry into observational studies and clinical
           trials: length-biased sampling and restricted mean survival time

      Abstract: Individuals in many observational studies and clinical trials for chronic diseases are enrolled well after onset or diagnosis of their disease. Times to events of interest after enrollment are therefore residual or left-truncated event times. Individuals entering these studies have disease that has advanced to varying extents. Moreover, enrollment usually entails probability sampling of the study population. Finally, event times over a short to moderate time horizon are often of interest in these investigations, rather than more speculative and remote happenings that lie beyond the study period. This research report examines the issue of delayed entry into such studies and trials. Time to event for an individual is modelled as the first hitting time of an event threshold by a latent disease process, which is taken to be a Wiener process. It is emphasized that recruitment into these studies often involves length-biased sampling. The requisite mathematics for this kind of sampling and delayed entry are presented, including explicit formulas needed for estimation and inference. Restricted mean survival time (RMST) is taken as the clinically relevant outcome measure, and exact parametric formulas for this measure are derived and presented. The results are extended to settings that involve study covariates using threshold regression methods, and methods adapted for clinical trials are presented. An extensive case illustration for a clinical trial setting then demonstrates the methods, the interpretation of results, and the harvesting of useful insights. The closing discussion covers a number of important issues and concepts. (The RMST and the Wiener first-hitting-time law are recalled after this entry.)
      PubDate: 2022-07-01
      DOI: 10.1007/s10985-022-09562-8
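
      Two standard ingredients referred to above, stated here for orientation (the paper's length-biased and covariate-adjusted formulas are more elaborate): the restricted mean survival time, and the first-hitting-time law of a Wiener process,

        \mathrm{RMST}(\tau) = E[\min(T,\tau)] = \int_0^{\tau} S(t)\,dt,

        T = \inf\{t \ge 0 : W(t) \le 0\}
          \sim \mathrm{InverseGaussian}\!\left(\frac{x_0}{\mu}, \frac{x_0^{2}}{\sigma^{2}}\right),

      for a Wiener process W starting at x_0 > 0 with drift -\mu < 0 and variance parameter \sigma^2 (mean and shape parametrization of the inverse Gaussian).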
       
  • Inference for transition probabilities in non-Markov multi-state models

      Abstract: Multi-state models are frequently used when data come from subjects observed over time and the focus is on the occurrence of events that the subjects may experience. A convenient modeling assumption is that the multi-state stochastic process is Markovian, in which case a number of methods are available for inference on both transition intensities and transition probabilities. The Markov assumption, however, is quite strict and may not fit actual data in a satisfactory way. Therefore, inference methods for non-Markov models are needed. In this paper, we review methods for estimating transition probabilities in such models and suggest ways of doing regression analysis based on pseudo observations. In particular, we compare methods using landmarking with methods using plug-in. The methods are illustrated using simulations and practical examples from medical research. (The generic pseudo-observation construction is recalled after this entry.)
      PubDate: 2022-06-28
      DOI: 10.1007/s10985-022-09560-w
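
      For readers unfamiliar with the pseudo-observation device mentioned above: for an estimator \hat\theta of the quantity of interest (e.g., a transition probability at a fixed time) based on all n subjects, the i-th pseudo observation is the jackknife quantity

        \hat\theta_i = n\,\hat\theta - (n-1)\,\hat\theta^{(-i)}, \qquad i = 1,\dots,n,

      where \hat\theta^{(-i)} is the estimate with subject i left out; the \hat\theta_i are then used as outcomes in a generalized estimating equation regression.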
       
  • Screening for chronic diseases: optimizing lead time through balancing
           prescribed frequency and individual adherence

      Abstract: Screening for chronic diseases, such as cancer, is an important public health priority, but traditionally only the frequency or rate of screening has received attention. In this work, we study the importance of adhering to recommended screening policies and develop new methodology to better optimize screening policies when adherence is imperfect. We consider a progressive disease model with four states (healthy, undetectable preclinical, detectable preclinical, clinical), and overlay this with a stochastic screening–behavior model using the theory of renewal processes that allows us to capture imperfect adherence to screening programs in a transparent way. We show that decreased adherence leads to reduced efficacy of screening programs, quantified here using elements of the lead time distribution (i.e., the time between screening diagnosis and when diagnosis would have occurred clinically in the absence of screening). Under the assumption of an inverse relationship between prescribed screening frequency and individual adherence, we show that the optimal screening frequency generally decreases with increasing levels of non-adherence. We apply this model to an example in breast cancer screening, demonstrating how accounting for imperfect adherence affects the recommended screening frequency.
      PubDate: 2022-06-24
      DOI: 10.1007/s10985-022-09563-7
       
  • Optimum test planning for heterogeneous inverse Gaussian processes

      Abstract: The heterogeneous inverse Gaussian (IG) process is one of the most popular and most considered degradation models for highly reliable products. One difficulty with heterogeneous IG processes is the lack of an analytic expression for the Fisher information matrix (FIM). Thus, it is a challenge to find an optimum test plan using information-based criteria with decision variables such as the termination time, the number of measurements, and the sample size. In this article, the FIM of an IG process with random slopes is derived explicitly in algebraic form to reduce the uncertainty caused by numerical approximation. The D- and V-optimum test plans with or without a cost constraint can be obtained by using a profile optimum plan. A sensitivity analysis is conducted to elucidate how optimum planning is influenced by the experimental costs and the planning values of the model parameters. The theoretical results are illustrated by numerical simulation and case studies. Simulations, technical derivations and auxiliary formulae are available online as supplementary materials.
      PubDate: 2022-06-13
      DOI: 10.1007/s10985-022-09556-6
       
  • Privacy-preserving estimation of an optimal individualized treatment rule:
           a case study in maximizing time to severe depression-related outcomes

      Abstract: Estimating individualized treatment rules, particularly in the context of right-censored outcomes, is challenging because the treatment effect heterogeneity of interest is often small, thus difficult to detect. While this motivates the use of very large datasets such as those from multiple health systems or centres, data privacy may be of concern with participating data centres reluctant to share individual-level data. In this case study on the treatment of depression, we demonstrate an application of distributed regression for privacy protection used in combination with dynamic weighted survival modelling (DWSurv) to estimate an optimal individualized treatment rule whilst obscuring individual-level data. In simulations, we demonstrate the flexibility of this approach to address local treatment practices that may affect confounding, and show that DWSurv retains its double robustness even when performed through a (weighted) distributed regression approach. The work is motivated by, and illustrated with, an analysis of treatment for unipolar depression using the United Kingdom’s Clinical Practice Research Datalink.
      PubDate: 2022-05-02
      DOI: 10.1007/s10985-022-09554-8
       
  • Mixture survival trees for cancer risk classification

      Abstract: In oncology studies, it is important to understand and characterize disease heterogeneity among patients so that patients can be classified into different risk groups and high-risk patients can be identified at the right time. This information can then be used to define a more homogeneous patient population for developing precision medicine. In this paper, we propose a mixture survival tree approach for direct risk classification. We assume that the patients can be classified into a pre-specified number of risk groups, where each group has a distinct survival profile. Our proposed tree-based methods are devised to estimate latent group membership using an EM algorithm. The observed-data log-likelihood function is used as the splitting criterion in recursive partitioning. The finite sample performance is evaluated by extensive simulation studies and the proposed method is illustrated by a case study in breast cancer. (A schematic E-step for latent group membership is sketched after this entry.)
      PubDate: 2022-04-29
      DOI: 10.1007/s10985-022-09552-w
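
      The following schematic E-step (toy exponential components stand in for the tree-based within-group fits; all names and data are hypothetical) shows how latent risk-group responsibilities are computed under right censoring, using the density for observed events and the survival function for censored times.

        import numpy as np

        def e_step(time, delta, pi, rate):
            # pi: (K,) mixing proportions; rate: (K,) exponential hazards (toy model)
            surv = np.exp(-np.outer(time, rate))             # S_k(t_i)
            dens = rate * surv                               # f_k(t_i) = rate_k * S_k(t_i)
            lik = np.where(delta[:, None] == 1, dens, surv)  # censoring-aware contribution
            num = pi * lik
            return num / num.sum(axis=1, keepdims=True)      # group responsibilities

        time = np.array([0.5, 2.0, 3.5, 1.0])
        delta = np.array([1, 0, 1, 1])                       # 1 = event, 0 = censored
        resp = e_step(time, delta, pi=np.array([0.6, 0.4]), rate=np.array([1.0, 0.2]))
        print(resp.round(3))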
       
 