Lifetime Data Analysis
Journal Prestige (SJR): 0.985
Citation Impact (CiteScore): 1
Number of Followers: 5  
 
  Hybrid journal (it can contain Open Access articles)
ISSN (Print) 1572-9249 - ISSN (Online) 1380-7870
Published by Springer-Verlag
  • A pairwise pseudo-likelihood approach for regression analysis of doubly
           truncated data

      Abstract: Double truncation commonly occurs in astronomy, epidemiology and economics. Compared to one-sided truncation, double truncation, which combines left and right truncation, is more challenging to handle, and methods for analyzing doubly truncated data are limited. In this situation, a common approach is to perform analysis conditional on the truncation times, which is simple but may not be efficient. To address this, we propose a pairwise pseudo-likelihood approach that aims to recover some of the information missed by the conditional methods and can yield more efficient estimation. The resulting estimator is shown to be consistent and asymptotically normal. An extensive simulation study indicates that the proposed procedure works well in practice and is indeed more efficient than the conditional approach. The proposed methodology is applied to an AIDS study.
      PubDate: 2025-03-31
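As an illustration of the double-truncation mechanism described above (not the paper's pairwise pseudo-likelihood estimator), the following minimal Python sketch generates doubly truncated observations under assumed exponential event times and uniform truncation windows; the function name and parameter values are hypothetical:

```python
import random

def simulate_doubly_truncated(n, seed=0):
    # Toy generator: the event time X is observed only if it falls inside
    # its subject-specific truncation window [U, V], i.e. the sample is
    # simultaneously left- and right-truncated.
    rng = random.Random(seed)
    sample = []
    while len(sample) < n:
        x = rng.expovariate(1.0)      # latent event time X
        u = rng.uniform(0.0, 2.0)     # left-truncation time U
        v = u + 1.0                   # right-truncation time V = U + 1
        if u <= x <= v:               # retained only when U <= X <= V
            sample.append((x, u, v))
    return sample
```

Naive analysis of such a sample ignores that small and large values of X are systematically under-represented, which is the bias the conditional and pseudo-likelihood methods correct.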
       
  • Quantile regression under dependent censoring with unknown association

      Abstract: The study of survival data often requires taking proper care of the censoring mechanism that prohibits complete observation of the data. Under right censoring, only the first occurring event is observed: either the event of interest, or a competing event such as withdrawal of a subject from the study. The corresponding identifiability difficulties have led many authors to impose (conditional) independence or a fully known dependence between survival and censoring times, neither of which is always realistic. However, recent results in the survival literature show that parametric copula models allow identification of all model parameters, including the association parameter, under appropriately chosen marginal distributions. The present paper is the first to apply such models in a quantile regression context, hence benefiting from its well-known advantages in terms of, e.g., robustness and richer inference results. The parametric copula is supplemented with a likewise parametric, yet flexible, enriched asymmetric Laplace distribution for the survival times conditional on the covariates. The asymmetric Laplace basis provides a close connection to quantiles, while the extension with Laguerre orthogonal polynomials ensures sufficient flexibility for increasing polynomial degrees. The distributional flavour of the presented quantile regression comes with advantages of both a theoretical and computational nature. All model parameters are proven to be identifiable, and the estimators consistent and asymptotically normal. Finally, the performance of the model and of the proposed estimation procedure is assessed through extensive simulation studies as well as an application to liver transplant data.
      PubDate: 2025-03-16
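The link between quantile regression and the asymmetric Laplace distribution rests on the check loss: maximizing an asymmetric Laplace likelihood is equivalent to minimizing summed check losses. A minimal sketch (a plain sample quantile via grid search, not the paper's enriched-ALD estimator; names are ours):

```python
def check_loss(u, tau):
    # Koenker-Bassett check ("pinball") loss at residual u for level tau.
    return u * (tau - (1.0 if u < 0 else 0.0))

def sample_quantile(ys, tau):
    # The tau-th sample quantile minimizes the summed check loss;
    # a grid search over the observed values suffices here.
    return min(sorted(ys), key=lambda q: sum(check_loss(y - q, tau) for y in ys))
```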
       
  • Goodness-of-fit testing in the presence of cured data: IPCW approach

      Abstract: We revisit a goodness-of-fit testing problem for randomly right-censored data in the presence of cured subjects, i.e. the population consists of two groups: the cured (non-susceptible) subjects, who will never experience the event of interest, and the susceptible subjects, who will undergo the event when followed up sufficiently long. We consider modifications of previously proposed characterization-based goodness-of-fit tests for the exponential distribution, constructed via the inverse probability of censoring weighted (IPCW) U- or V-statistic approach. We present their asymptotic properties and extend the discussion to suitable generalizations applicable to a variety of tests formulated using the same methodology. A comparative power study against a recent CvM-based competitor, as well as against modifications of the most prominent competitors identified in prior studies that did not account for cured subjects, demonstrates good finite-sample performance. The novel tests are illustrated on a real dataset related to leukemia relapse.
      PubDate: 2025-03-04
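The IPCW weighting underlying such tests can be sketched as follows: estimate the censoring survival function by Kaplan-Meier with the event indicator flipped, then weight each uncensored observation by its inverse. This is a bare-bones illustration (distinct event times, no ties), not the authors' test statistics:

```python
def kaplan_meier(times, events):
    # Kaplan-Meier survival estimate (distinct times, no ties assumed).
    # events[i] = 1 for an observed event, 0 for a censored observation.
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk, surv, steps = len(times), 1.0, []
    for i in order:
        if events[i]:
            surv *= 1.0 - 1.0 / n_at_risk
            steps.append((times[i], surv))
        n_at_risk -= 1
    return steps

def ipcw_weights(times, events):
    # IPCW: weight each uncensored subject by 1 / G(T_i-), where G is the
    # Kaplan-Meier estimate of the censoring survival function (obtained by
    # flipping the event indicator); censored subjects get weight 0.
    cens_steps = kaplan_meier(times, [1 - e for e in events])
    def G(t):
        s = 1.0
        for u, val in cens_steps:
            if u < t:
                s = val
        return s
    return [e / G(t) if e else 0.0 for t, e in zip(times, events)]
```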
       
  • A global kernel estimator for partially linear varying coefficient
           additive hazards models

      Abstract: We study kernel-based estimation methods for partially linear varying coefficient additive hazards models, where the effects of one type of covariate can be modified by another. Existing kernel estimation methods for varying coefficient models often use a “local” approach, in which only a small local neighborhood of subjects is used for estimating the varying coefficient functions. Such a local approach, however, is generally inefficient, as information about the non-varying nuisance parameter from subjects outside the neighborhood is discarded. In this paper, we develop a “global” kernel estimator that simultaneously estimates the varying coefficients over the entire domains of the functions, leveraging the non-varying nature of the nuisance parameter. We establish the consistency and asymptotic normality of the proposed estimators. The theoretical developments are substantially more challenging than those for local methods, as the dimension of the global estimator increases with the sample size. We conduct extensive simulation studies to demonstrate the feasibility and superior performance of the proposed methods compared with existing local methods, and provide an application to a motivating cancer genomic study.
      PubDate: 2025-01-09
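The "local" weighting the abstract contrasts with can be sketched with ordinary kernel weights: subjects outside the bandwidth receive weight zero, which is exactly the information a global estimator avoids discarding. An illustrative sketch with an Epanechnikov kernel (function names are ours, not the paper's):

```python
def epanechnikov(u):
    # Epanechnikov kernel, supported on (-1, 1).
    return 0.75 * (1.0 - u * u) if abs(u) < 1.0 else 0.0

def local_weights(x0, xs, h):
    # Normalized kernel weights around x0 with bandwidth h: only subjects
    # in the local neighborhood |x - x0| < h contribute.
    w = [epanechnikov((x - x0) / h) for x in xs]
    s = sum(w)
    return [wi / s for wi in w] if s > 0 else w
```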
       
  • Right-censored models by the expectile method

      Abstract: Based on the expectile loss function and the adaptive LASSO penalty, this paper proposes and studies estimation methods for the accelerated failure time (AFT) model. In this approach, the survival function of the censoring variable is estimated by the Kaplan–Meier estimator. The AFT model parameters are first estimated by the expectile method and afterwards, when the number of explanatory variables may be large, by the adaptive LASSO expectile method, which directly carries out automatic variable selection. We also obtain the convergence rate and asymptotic normality of the two estimators, and show the sparsity property of the censored adaptive LASSO expectile estimator. A numerical study using Monte Carlo simulations confirms the theoretical results and demonstrates the competitive performance of the two proposed estimators. The usefulness of these estimators is illustrated by applying them to three survival data sets.
      PubDate: 2025-01-03
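For intuition, the tau-expectile minimizes an asymmetrically weighted squared loss and can be computed as a self-consistent weighted mean. A minimal uncensored sketch (the paper's estimator additionally handles censoring via Kaplan–Meier weights and an adaptive LASSO penalty):

```python
def expectile(ys, tau, iters=200):
    # tau-expectile via fixed-point iteration: the expectile m solves
    # sum(w_i * (y_i - m)) = 0 with w_i = tau if y_i > m else 1 - tau,
    # i.e. it is a self-consistent weighted mean. tau = 0.5 gives the mean.
    m = sum(ys) / len(ys)
    for _ in range(iters):
        w = [tau if y > m else 1.0 - tau for y in ys]
        m = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    return m
```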
       
  • Proportional rates model for recurrent event data with intermittent gaps
           and a terminal event

      Abstract: Recurrent events are common in medical practice and epidemiologic studies, where a subject may experience a particular event repeatedly over time. In long-term observation of recurrent events, a terminal event such as death may also be present. Meanwhile, some subjects may withdraw from a study for a period for various reasons and then resume, possibly more than once. The period between a subject leaving and returning to the study is called an intermittent gap. A naive method simply ignores the gaps and treats the events as usual recurrent events, which can lead to misleading estimation results. In this article, we consider a semiparametric proportional rates model for recurrent event data with intermittent gaps and a terminal event. An estimation procedure is developed for the model parameters, and the asymptotic properties of the resulting estimators are established. Simulation studies demonstrate that the proposed estimators perform satisfactorily compared to the naive method that ignores gaps. A diabetes study further shows the utility of the proposed method.
      PubDate: 2024-12-16
       
  • A class of semiparametric models for bivariate survival data

      Abstract: We propose a new class of bivariate survival models based on the family of Archimedean copulas with margins modeled by the Yang and Prentice (YP) model. The Ali-Mikhail-Haq (AMH), Clayton, Frank, Gumbel-Hougaard (GH), and Joe copulas are employed to accommodate the dependency among marginal distributions. Baseline distributions are modeled semiparametrically by the Piecewise Exponential (PE) distribution and the Bernstein polynomials (BP). Inference procedures for the proposed class of models are based on the maximum likelihood (ML) approach. The new class of models possesses some attractive features: i) the ability to take into account survival data with crossing survival curves; ii) the inclusion of the well-known proportional hazards (PH) and proportional odds (PO) models as particular cases; iii) greater flexibility provided by the semiparametric modeling of the marginal baseline distributions; iv) the availability of closed-form expressions for the likelihood functions, leading to more straightforward inferential procedures. The properties of the proposed class are numerically investigated through an extensive simulation study. Finally, we demonstrate the versatility of our new class of models through the analysis of survival data involving patients diagnosed with ovarian cancer.
      PubDate: 2024-12-14
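For a concrete member of the Archimedean family employed here, the Clayton copula has a closed-form CDF and admits simple conditional-inversion sampling. An illustrative sketch only (not the paper's YP-margin model; function names are ours):

```python
import random

def clayton_cdf(u, v, theta):
    # Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0.
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def clayton_sample(theta, rng):
    # Conditional-inversion sampling: draw U uniform, then V from V | U = u.
    # 1 - random() keeps the draws strictly positive.
    u, w = 1.0 - rng.random(), 1.0 - rng.random()
    v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** -theta + 1.0) ** (-1.0 / theta)
    return u, v
```

The uniform margins of the sampled pairs would be replaced by the Yang-Prentice margins via inverse-CDF transforms in the paper's setting.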
       
  • Nonparametric estimation of the cumulative incidence function for
           doubly-truncated and interval-censored competing risks data

      Abstract: Interval sampling is widely used for the collection of disease registry data, which typically report incident cases during a certain time period. Such a sampling scheme induces doubly truncated data if the failure time is observed exactly, and doubly truncated and interval-censored (DTIC) data if the failure time is known only to lie within an interval. In this article, we consider nonparametric estimation of the cumulative incidence function (CIF) using doubly truncated and interval-censored competing risks (DTIC-C) data obtained under the interval sampling scheme. Using the approach of Shen (Stat Methods Med Res 31:1157–1170, 2022b), we first obtain the nonparametric maximum likelihood estimator (NPMLE) of the distribution function of the failure time, ignoring failure types. Using the NPMLE, we propose nonparametric estimators of the CIF with DTIC-C data and establish their consistency. Simulation studies show that the proposed estimators perform well for finite sample sizes.
      PubDate: 2024-11-17
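As a baseline for what the CIF estimates: with fully observed (untruncated, uncensored) competing-risks data, the CIF reduces to an empirical proportion; the paper's contribution is extending this to DTIC-C data. A minimal sketch of the baseline:

```python
def empirical_cif(times, causes, k, t):
    # Empirical cumulative incidence of cause k at time t for fully observed
    # competing-risks data: the proportion with T <= t and cause == k.
    # This is only the no-missingness baseline the paper generalizes.
    n = len(times)
    return sum(1 for ti, ci in zip(times, causes) if ti <= t and ci == k) / n
```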
       
  • Two-stage pseudo maximum likelihood estimation of semiparametric
           copula-based regression models for semi-competing risks data

      Abstract: We propose a two-stage estimation procedure for a copula-based model with semi-competing risks data, where the non-terminal event is subject to dependent censoring by the terminal event, and both events are subject to independent censoring. With a copula-based model, the marginal survival functions of the individual event times are specified by semiparametric transformation models, and the dependence between the bivariate event times is specified by a parametric copula function. In the first stage, the parameters associated with the marginal distribution of the terminal event are estimated using only the corresponding observed outcomes; in the second stage, the marginal parameters for the non-terminal event time and the copula parameter are estimated together by maximizing a pseudo-likelihood function based on the joint distribution of the bivariate event times. We derive the asymptotic properties of the proposed estimator and provide an analytic variance estimator for inference. Through simulation studies, we show that our approach leads to consistent estimates with less computational cost and more robustness than the one-stage procedure developed in Chen (Lifetime Data Anal 18:36–57, 2012), where all parameters are estimated simultaneously. In addition, our approach demonstrates more desirable finite-sample performance than another existing two-stage estimation method proposed in Zhu et al. (Commun Stat Theory Methods 51(22):7830–7845, 2021). An R package, PMLE4SCR, has been developed to implement the proposed method.
      PubDate: 2024-10-23
       
  • Call for papers for a special issue on survival analysis in artificial
           intelligence

      PubDate: 2024-10-16
       
  • Evaluating time-to-event surrogates for time-to-event true endpoints: an
           information-theoretic approach based on causal inference

      Abstract: Putative surrogate endpoints must undergo a rigorous statistical evaluation before they can be used in clinical trials. Numerous frameworks have been introduced for this purpose. In this study, we extend the scope of the information-theoretic causal-inference approach to encompass scenarios where both outcomes are time-to-event endpoints, using the flexibility provided by D-vine copulas. We evaluate the quality of the putative surrogate using the individual causal association (ICA)—a measure based on the mutual information between the individual causal treatment effects. However, in spite of its appealing mathematical properties, the ICA may be ill defined for composite endpoints. Therefore, we also propose an alternative rank-based metric for assessing the ICA. Due to the fundamental problem of causal inference, the joint distribution of all potential outcomes is only partially identifiable and, consequently, the ICA cannot be estimated without strong unverifiable assumptions. This is addressed by a formal sensitivity analysis that is summarized by the so-called intervals of ignorance and uncertainty. The frequentist properties of these intervals are discussed in detail. Finally, the proposed methods are illustrated with an analysis of pooled data from two advanced colorectal cancer trials. The newly developed techniques have been implemented in the R package Surrogate.
      PubDate: 2024-10-13
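A familiar example of a rank-based association measure, in the same spirit as (though not identical to) the paper's rank-based ICA metric, is Spearman's rank correlation. A minimal sketch assuming no ties:

```python
def spearman_rho(xs, ys):
    # Spearman rank correlation, assuming no ties: the Pearson correlation
    # of the ranks, so it depends on the data only through orderings.
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```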
       
  • Conditional modeling of recurrent event data with terminal event

      Abstract: Recurrent event data with a terminal event arise in follow-up studies. The literature has primarily focused on the effect of covariates on the recurrent event process, using marginal estimating equation approaches or joint modeling approaches via frailties. In this article, we propose a conditional model for recurrent event data with a terminal event, which provides an intuitive interpretation of the effect of the terminal event: at early times, the rate of recurrent events is nearly independent of the terminal event, but the dependence grows stronger as time approaches the terminal event time. A two-stage likelihood-based approach is proposed to estimate the parameters of interest. Asymptotic properties of the estimators are established. The finite-sample behavior of the proposed method is examined through simulation studies. A real dataset on colorectal cancer is analyzed with the proposed method for illustration.
      PubDate: 2024-10-12
       
  • Optimal survival analyses with prevalent and incident patients

      Abstract: Period-prevalent cohorts are often used for their cost-saving potential in epidemiological studies of survival outcomes. Under this design, prevalent patients allow for evaluations of long-term survival outcomes without the need for long follow-up, whereas incident patients allow for evaluations of short-term survival outcomes without the issue of left-truncation. In most period-prevalent survival analyses from the existing literature, patients have been recruited to achieve an overall sample size, with little attention given to the relative frequencies of prevalent and incident patients and their statistical implications. Furthermore, there are no existing methods available to rigorously quantify the impact of these relative frequencies on estimation and inference and incorporate this information into study design strategies. To address these gaps, we develop an approach to identify the optimal mix of prevalent and incident patients that maximizes precision over the entire estimated survival curve, subject to a flexible weighting scheme. In addition, we prove that inference based on the weighted log-rank test or Cox proportional hazards model is most powerful with an entirely prevalent or incident cohort, and we derive theoretical formulas to determine the optimal choice. Simulations confirm the validity of the proposed optimization criteria and show that substantial efficiency gains can be achieved by recruiting the optimal mix of prevalent and incident patients. The proposed methods are applied to assess waitlist outcomes among kidney transplant candidates.
      PubDate: 2024-10-12
       
  • Spatiotemporal multilevel joint modeling of longitudinal and survival
           outcomes in end-stage kidney disease

      Abstract: Individuals with end-stage kidney disease (ESKD) on dialysis experience high mortality and excessive burden of hospitalizations over time relative to comparable Medicare patient cohorts without kidney failure. A key interest in this population is to understand the time-dynamic effects of multilevel risk factors that contribute to the correlated outcomes of longitudinal hospitalization and mortality. For this we utilize multilevel data from the United States Renal Data System (USRDS), a national database that includes nearly all patients with ESKD, where repeated measurements/hospitalizations over time are nested in patients and patients are nested within (health service) regions across the contiguous U.S. We develop a novel spatiotemporal multilevel joint model (STM-JM) that accounts for the aforementioned hierarchical structure of the data while considering the spatiotemporal variations in both outcomes across regions. The proposed STM-JM includes time-varying effects of multilevel (patient- and region-level) risk factors on hospitalization trajectories and mortality and incorporates spatial correlations across the spatial regions via a multivariate conditional autoregressive correlation structure. Efficient estimation and inference are performed via a Bayesian framework, where multilevel varying coefficient functions are targeted via thin-plate splines. The finite sample performance of the proposed method is assessed through simulation studies. An application of the proposed method to the USRDS data highlights significant time-varying effects of patient- and region-level risk factors on hospitalization and mortality and identifies specific time periods on dialysis and spatial locations across the U.S. with elevated hospitalization and mortality risks.
      PubDate: 2024-10-04
       
  • Unifying mortality forecasting model: an investigation of the
           COM–Poisson distribution in the GAS model for improved projections

      Abstract: Forecasting mortality rates is crucial for evaluating life insurance company solvency, especially amid disruptions caused by phenomena like COVID-19. The Lee–Carter model is commonly employed in mortality modelling; however, extensions that can encompass count data with diverse distributions, such as the Generalized Autoregressive Score (GAS) model utilizing the COM–Poisson distribution, exhibit potential for enhancing time-to-event forecasting accuracy. Using mortality data from 29 countries, this research evaluates various distributions and determines that the COM–Poisson model surpasses the Poisson, binomial, and negative binomial distributions in forecasting mortality rates. The one-step forecasting capability of the GAS model offers distinct advantages, while the COM–Poisson distribution demonstrates enhanced flexibility and versatility by accommodating various distributions, including Poisson and negative binomial. Ultimately, the study determines that the COM–Poisson GAS model is an effective instrument for examining time series data on mortality rates, particularly when facing time-varying parameters and non-conventional data distributions.
      PubDate: 2024-09-13
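The COM–Poisson distribution mentioned above has pmf proportional to λ^y / (y!)^ν, with ν = 1 recovering the Poisson. A minimal sketch that truncates the normalizing series and works in log space for numerical stability (the truncation length is an arbitrary choice of ours):

```python
import math

def com_poisson_pmf(y, lam, nu, truncate=100):
    # COM-Poisson pmf: P(Y = y) proportional to lam^y / (y!)^nu.
    # nu = 1 recovers the Poisson; nu > 1 gives under-dispersion and
    # nu < 1 over-dispersion. The infinite normalizing series is truncated,
    # and terms are combined with log-sum-exp to avoid overflow.
    log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1.0)
                 for j in range(truncate)]
    m = max(log_terms)
    z = sum(math.exp(t - m) for t in log_terms)
    return math.exp(log_terms[y] - m) / z
```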
       
  • Nested case–control sampling without replacement

      Abstract: The nested case–control (NCC) design is a cost-effective outcome-dependent design in epidemiology that collects all cases and a fixed number of controls at the time of each case's diagnosis from a large cohort. Because of its inefficiency relative to full cohort studies, previous research has developed various estimation methodologies, but modifying the formulation of risk sets has been considered only with regard to potential bias in the partial likelihood estimation. In this paper, we study a modified design that excludes previously selected controls from risk sets, with a view to improving efficiency as well as avoiding bias. To this end, we extend the inverse probability weighting method of Samuelsen, which was shown to outperform the partial likelihood estimator in the standard setting. We develop its asymptotic theory and a variance estimator for both the regression coefficients and the cumulative baseline hazard function that takes account of the complex features of the modified sampling design. In addition to the good finite-sample performance of the variance estimation, simulation studies show that the modified design with the proposed estimator is more efficient than the standard design. Examples are provided using data from the NIH-AARP Diet and Health Cohort Study.
      PubDate: 2024-09-05
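The modified risk-set formulation can be sketched directly: at each case's failure time, sample controls without replacement from the risk set, optionally excluding controls already used at earlier failure times. This toy sketch (hypothetical function and argument names) illustrates the sampling design only, not the proposed inverse probability weighted estimator:

```python
import random

def ncc_sample(times, statuses, m, exclude_prior=True, seed=0):
    # Nested case-control sampling sketch. At each case's failure time, draw
    # m controls at random (without replacement) from the risk set. With
    # exclude_prior=True, controls selected at earlier failure times are
    # removed from later risk sets (the modified design studied here).
    rng = random.Random(seed)
    used, out = set(), []
    cases = sorted((t, i) for i, (t, d) in enumerate(zip(times, statuses)) if d)
    for t, i in cases:
        risk_set = [j for j, tj in enumerate(times)
                    if tj >= t and j != i and not (exclude_prior and j in used)]
        controls = rng.sample(risk_set, min(m, len(risk_set)))
        used.update(controls)
        out.append((i, controls))
    return out
```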
       
  • Copula-based analysis of dependent current status data with semiparametric
           linear transformation model

      Abstract: This paper discusses regression analysis of current status data with dependent censoring, a problem that often occurs in many areas such as cross-sectional studies, epidemiological investigations and tumorigenicity experiments. Copula model-based methods are commonly employed to tackle this issue. However, these methods often face challenges in terms of model and parameter identification. The primary aim of this paper is to propose a copula-based analysis for dependent current status data, where the association parameter is left unspecified. Our method is based on a general class of semiparametric linear transformation models and parametric copulas. We demonstrate that the proposed semiparametric model is identifiable under certain regularity conditions from the distribution of the observed data. For inference, we develop a sieve maximum likelihood estimation method, using Bernstein polynomials to approximate the nonparametric functions involved. The asymptotic consistency and normality of the proposed estimators are established. Finally, to demonstrate the effectiveness and practical applicability of our method, we conduct an extensive simulation study and apply the proposed method to a real data example.
      PubDate: 2024-08-24
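The Bernstein polynomial approximation used in the sieve step can be illustrated on a known function: the degree-n Bernstein polynomial evaluates f on a grid and blends the values with binomial weights. A minimal sketch (in the paper the coefficients are estimated, not computed from a known f):

```python
import math

def bernstein_approx(f, n):
    # Degree-n Bernstein polynomial of f on [0, 1]:
    #   B_n(f; x) = sum_k f(k/n) * C(n, k) * x^k * (1 - x)^(n - k).
    # Sieve ML estimation uses such a basis to approximate the unknown
    # nonparametric functions; here f is known purely for illustration.
    coef = [f(k / n) for k in range(n + 1)]
    def B(x):
        return sum(c * math.comb(n, k) * x ** k * (1.0 - x) ** (n - k)
                   for k, c in enumerate(coef))
    return B
```

Linear functions are reproduced exactly, and smooth functions are approximated with error of order 1/n, which is why increasing the degree with the sample size yields consistency.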
       
  • Special issue dedicated to Mitchell H. Gail, M.D. Ph.D.

      PubDate: 2024-06-24
       
  • A flexible time-varying coefficient rate model for panel count data

      Abstract: Panel count regression is often required in recurrent event studies, where the interest is in modeling the event rate. Existing rate models are unable to handle time-varying covariate effects due to theoretical and computational difficulties. Mean models provide a viable alternative but are subject to the constraints of the monotonicity assumption, which tends to be violated when covariates fluctuate over time. In this paper, we present a new semiparametric rate model for panel count data along with related theoretical results. For model fitting, we present an efficient EM algorithm with three different methods for variance estimation. The algorithm allows us to sidestep the challenges of numerical integration and the difficulties of the iterative convex minorant algorithm. We show that the estimators are consistent and asymptotically normally distributed. Simulation studies confirm excellent finite-sample performance. To illustrate, we analyze data from a real clinical study of behavioral risk factors for sexually transmitted infections.
      PubDate: 2024-05-28
       
  • Measurement error models with zero inflation and multiple sources of
           zeros, with applications to hard zeros

      Abstract: We consider measurement error models for two variables observed repeatedly and subject to measurement error. One variable is continuous, while the other variable is a mixture of continuous and zero measurements. This second variable has two sources of zeros. The first source is episodic zeros, wherein some of the measurements for an individual may be zero and others positive. The second source is hard zeros, i.e., some individuals will always report zero. An example is the consumption of alcohol from alcoholic beverages: some individuals consume alcoholic beverages episodically, while others never consume alcoholic beverages. However, with a small number of repeat measurements from individuals, it is not possible to determine those who are episodic zeros and those who are hard zeros. We develop a new measurement error model for this problem, and use Bayesian methods to fit it. Simulations and data analyses are used to illustrate our methods. Extensions to parametric models and survival analysis are discussed briefly.
      PubDate: 2024-05-28
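The two sources of zeros can be made concrete with a toy generator (all parameter values hypothetical): some subjects never report a positive amount, while the rest do so only episodically. With few repeats per subject, zero rows from the two sources are indistinguishable, which is the identification challenge the model addresses:

```python
import random

def simulate_intakes(n_subjects, n_repeats, p_hard=0.3, p_consume=0.6, seed=1):
    # Toy generator for the two sources of zeros: a fraction p_hard of
    # subjects are "hard zeros" and always report 0; the remaining subjects
    # are episodic consumers who report a positive amount with probability
    # p_consume on each occasion and 0 otherwise.
    rng = random.Random(seed)
    data = []
    for _ in range(n_subjects):
        if rng.random() < p_hard:
            data.append([0.0] * n_repeats)
        else:
            data.append([rng.expovariate(1.0) if rng.random() < p_consume else 0.0
                         for _ in range(n_repeats)])
    return data
```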
       
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
 


JournalTOCs © 2009-