- Analysis and asymptotic theory for nested case–control designs under highly stratified proportional hazards models
Abstract: Nested case–control sampled event time data under a highly stratified proportional hazards model, in which the number of strata increases in proportion to the sample size, are described and analyzed. The data can be characterized as stratified sampling from the event time risk sets, and the analysis approach of Borgan et al. (Ann Stat 23:1749–1778, 1995) is adapted to accommodate both the stratification and case–control sampling from the stratified risk sets. Conditions for the consistency and asymptotic normality of the maximum partial likelihood estimator are provided, and the results are used to compare the efficiency of the stratified analysis to an unstratified analysis when the baseline hazards can be semi-parametrically modeled in two special cases. Using the stratified sampling representation of the stratified analysis, the absolute risk estimation methods described by Borgan et al. (1995) for nested case–control data are extended to the stratified model. The methods are illustrated by a year-of-birth stratified analysis of radon exposure and lung cancer mortality in a cohort of uranium miners from the Colorado Plateau.
PubDate: 2023-04-01

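As background to the design analyzed here, the following minimal sketch illustrates stratified risk-set sampling of the kind the abstract describes: for each case, controls are drawn at random from the subjects still at risk in the same stratum at the case's event time. The function name and interface are illustrative, not taken from the paper.

    import numpy as np

    def stratified_ncc_sample(time, status, stratum, m=1, seed=None):
        """For each case, draw m controls at random from the subjects still
        at risk in the same stratum at the case's event time (stratified
        risk-set sampling). A sketch of the design only."""
        rng = np.random.default_rng(seed)
        sampled = []
        for i in np.flatnonzero(status == 1):              # indices of cases
            in_risk_set = (stratum == stratum[i]) & (time >= time[i])
            in_risk_set[i] = False                         # exclude the case itself
            candidates = np.flatnonzero(in_risk_set)
            controls = rng.choice(candidates, size=min(m, candidates.size),
                                  replace=False)
            sampled.append((i, controls))
        return sampled
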
- Cox regression can be collapsible and Aalen regression can be non-collapsible
Abstract: It is well known that the additive hazards model is collapsible, in the sense that when omitting one covariate from a model with two independent covariates, the marginal model is still an additive hazards model with the same regression coefficient or function for the remaining covariate. In contrast, for the proportional hazards model under the same covariate assumption, the marginal model is no longer a proportional hazards model and is not collapsible. These results, however, relate to the model specification and not to the regression parameter estimators. We point out that if covariates in risk sets at all event times are independent, then both the Cox and Aalen regression estimators are collapsible, in the sense that the parameter estimators in the full and marginal models are consistent for the same value. Vice versa, if this assumption fails, then the estimates will change systematically for both Cox and Aalen regression. In particular, if the data are generated by an Aalen model with censoring independent of covariates, both Cox and Aalen regression are collapsible, but if generated by a proportional hazards model, neither estimator is. We also discuss settings where survival times are generated by proportional hazards models with censoring patterns providing uncorrelated covariates and hence collapsible Cox and Aalen regression estimates. Furthermore, possible consequences for instrumental variable analyses are discussed.
PubDate: 2023-04-01

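The collapsibility claim for Cox estimators under an additive data-generating model is easy to probe numerically. A sketch, assuming the lifelines package: event times are drawn with a constant additive hazard in two independent binary covariates and covariate-independent censoring, and the full and marginal Cox estimates for z1 are compared; under the conditions in the abstract, the two should be consistent for the same value.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 20_000
    z1 = rng.binomial(1, 0.5, n)
    z2 = rng.binomial(1, 0.5, n)                           # independent of z1
    t = rng.exponential(1 / (0.5 + 0.5 * z1 + 0.5 * z2))   # additive (Aalen-type) hazard
    c = rng.exponential(2.0, n)                            # censoring independent of covariates
    df = pd.DataFrame({"T": np.minimum(t, c), "E": (t <= c).astype(int),
                       "z1": z1, "z2": z2})

    full = CoxPHFitter().fit(df, "T", "E").params_["z1"]
    marginal = CoxPHFitter().fit(df.drop(columns="z2"), "T", "E").params_["z1"]
    print(full, marginal)   # roughly equal in large samples under this design
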
- Phase-type models for competing risks, with emphasis on identifiability issues
Abstract: We first review some main results for phase-type distributions, including a discussion of Coxian distributions and their canonical representations. We then consider the extension of phase-type modeling to cover competing risks. This extension involves the consideration of finite state Markov chains with more than one absorbing state, letting each absorbing state correspond to a particular risk. The non-uniqueness of Markov chain representations of phase-type distributions is well known. In the paper we study corresponding issues for the competing risks case with the aim of obtaining identifiable parameterizations. Statistical inference for the Coxian competing risks model is briefly discussed and some real data are analyzed for illustration.
PubDate: 2023-04-01

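A phase-type competing risks model is, concretely, a finite-state Markov chain with several absorbing states: the absorption time gives the event time and the absorbing state gives the cause. A minimal simulation sketch, with purely illustrative intensities rather than the paper's Coxian parameterization:

    import numpy as np

    # Transient states {0, 1}, absorbing states {2, 3} (two competing risks).
    # Rows hold the transition intensities out of each transient state.
    Q = np.array([[-1.0, 0.4, 0.3, 0.3],
                  [ 0.2, -0.9, 0.5, 0.2]])

    def simulate_phase_type(rng):
        """Simulate one (time, cause) pair from the absorbing Markov chain."""
        state, t = 0, 0.0
        while state < 2:                      # while in a transient state
            rates = Q[state].copy()
            rates[state] = 0.0                # off-diagonal intensities only
            total = rates.sum()
            t += rng.exponential(1 / total)   # exponential holding time
            state = rng.choice(4, p=rates / total)
        return t, state - 1                   # cause 1 or 2

    rng = np.random.default_rng(0)
    draws = [simulate_phase_type(rng) for _ in range(5)]
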
- The partly parametric and partly nonparametric additive risk model
Abstract: Aalen’s linear hazard rate regression model is a useful and increasingly popular alternative to Cox’s multiplicative hazard rate model. It postulates that an individual has hazard rate function \(h(s)=z_1\alpha_1(s)+\cdots+z_r\alpha_r(s)\) in terms of the covariate values \(z_1,\ldots,z_r\). These are typically levels of various hazard factors and may also be time-dependent. The hazard factor functions \(\alpha_j(s)\) are the parameters of the model and are estimated from data, traditionally in a fully nonparametric way. This paper develops methodology for estimating the hazard factor functions when some of them are modelled parametrically while the others are left unspecified. Large-sample results are reached inside this partly parametric, partly nonparametric framework, which also enables us to assess the goodness of fit of the model’s parametric components. In addition, these results are used to pinpoint how much precision is gained, using the parametric-nonparametric model, over the standard nonparametric method. A real-data application is included, along with a brief simulation study.
PubDate: 2023-04-01

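The fully nonparametric baseline that this paper generalizes can be fitted with standard software. A sketch assuming the lifelines package (the partly parametric variant developed in the paper is not part of this API):

    import numpy as np
    import pandas as pd
    from lifelines import AalenAdditiveFitter

    rng = np.random.default_rng(2)
    n = 2_000
    z = rng.binomial(1, 0.5, n)
    t = rng.exponential(1 / (0.3 + 0.4 * z))     # hazard h(s) = 0.3 + 0.4 z
    c = rng.exponential(3.0, n)
    df = pd.DataFrame({"T": np.minimum(t, c), "E": (t <= c).astype(int), "z": z})

    aaf = AalenAdditiveFitter(coef_penalizer=0.5)
    aaf.fit(df, duration_col="T", event_col="E")
    # cumulative_hazards_ holds the estimated cumulative regressor functions
    # B_j(t) = integral of alpha_j(s) ds; their slopes estimate alpha_j(s).
    print(aaf.cumulative_hazards_.tail())
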
- On logistic regression with right censored data, with or without competing risks, and its use for estimating treatment effects
Abstract: Simple logistic regression can be adapted to deal with right censoring by inverse probability of censoring weighting (IPCW). Here we compare two such IPCW approaches, one based on weighting the outcome, the other based on weighting the estimating equations. We study the large sample properties of the two approaches and show that which of the two weighting methods is the more efficient depends on the censoring distribution. We show by theoretical computations that the methods can be surprisingly different in realistic settings. We further show how to use the two weighting approaches for logistic regression to estimate causal treatment effects, for both observational studies and randomized clinical trials (RCTs). Several estimators for observational studies are compared, and we present an application to registry data. We also revisit interesting robustness properties of logistic regression in the context of RCTs, with a particular focus on the IPCW weighting. We find that these robustness properties still hold when the censoring weights are correctly specified, but not necessarily otherwise.
PubDate: 2023-04-01

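To fix ideas, here is a minimal sketch of one IPCW variant for the binary outcome \(1\{T \le t_0\}\): subjects whose status at \(t_0\) is known are weighted by the inverse Kaplan–Meier estimate of the censoring survival function. It assumes censoring independent of covariates and the lifelines and scikit-learn packages; it is not the paper's exact estimator, and the outcome-weighting variant discussed there differs.

    import numpy as np
    from lifelines import KaplanMeierFitter
    from sklearn.linear_model import LogisticRegression

    def ipcw_logistic(time, event, X, t0):
        """IPCW logistic regression for P(T <= t0): complete cases keep
        weight 1/G(min(T, t0)), with G the Kaplan-Meier estimate of the
        censoring survival function. A sketch only."""
        km = KaplanMeierFitter().fit(time, event_observed=1 - event)
        known = ((event == 1) & (time <= t0)) | (time > t0)   # status at t0 observed
        y = ((event == 1) & (time <= t0)).astype(int)
        g = km.survival_function_at_times(np.minimum(time, t0)).to_numpy()
        w = 1.0 / np.clip(g, 1e-8, None)
        return LogisticRegression().fit(X[known], y[known], sample_weight=w[known])
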
- Bivariate pseudo-observations for recurrent event analysis with terminal events
Abstract: The analysis of recurrent events in the presence of terminal events requires special attention. Several approaches have been suggested for such analyses, using either intensity models or marginal models. When analysing treatment effects on recurrent events in controlled trials, special attention should be paid to competing deaths and their impact on interpretation. This paper proposes a method that formulates a marginal model for recurrent events and terminal events simultaneously. Estimation is based on pseudo-observations for both the expected number of events and survival probabilities. Various relevant hypothesis tests in the framework are explored. Theoretical derivations and simulation studies are conducted to investigate the behaviour of the method. The method is applied to two real data examples. The bivariate marginal pseudo-observation model carries the strength of a two-dimensional modelling procedure and performs well in comparison with available models. Finally, an extension to a three-dimensional model, which decomposes the terminal event per death cause, is proposed and exemplified.
PubDate: 2023-04-01

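The pseudo-observation idea is a leave-one-out jackknife: \( \hat\theta_i = n\hat\theta - (n-1)\hat\theta_{(-i)} \). The sketch below (assuming lifelines) computes the univariate survival component only; the paper pairs such pseudo-observations with ones for the expected number of recurrent events to form the bivariate outcome.

    import numpy as np
    from lifelines import KaplanMeierFitter

    def pseudo_survival(time, event, t0):
        """Jackknife pseudo-observations for S(t0):
        theta_i = n * theta_hat - (n - 1) * theta_hat_without_i."""
        n = len(time)
        theta = KaplanMeierFitter().fit(time, event).predict(t0)
        loo = np.empty(n)
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            keep[i] = False
            loo[i] = KaplanMeierFitter().fit(time[keep], event[keep]).predict(t0)
            keep[i] = True
        return n * theta - (n - 1) * loo
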
- A boosting first-hitting-time model for survival analysis in high-dimensional settings
Abstract: In this paper we propose a boosting algorithm to extend the applicability of a first hitting time model to high-dimensional frameworks. Based on an underlying stochastic process, first hitting time models do not require the proportional hazards assumption, which is hardly verifiable in the high-dimensional context, and represent a valid parametric alternative to the Cox model for modelling time-to-event responses. First hitting time models also offer a natural way to integrate low-dimensional clinical and high-dimensional molecular information in a prediction model, which avoids the complicated weighting schemes typical of current methods. The performance of our novel boosting algorithm is illustrated in three real data examples.
PubDate: 2023-04-01

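For orientation, in the most common parametric first-hitting-time setup the health process is a Wiener process with drift \( \nu \) started at \( y_0 > 0 \), and failure occurs when the process first reaches zero; the hitting time then follows an inverse Gaussian law with density

\[ f(t \mid y_0, \nu, \sigma^2) = \frac{y_0}{\sqrt{2\pi\sigma^2 t^3}} \exp\left\{ -\frac{(y_0 + \nu t)^2}{2\sigma^2 t} \right\}, \qquad t > 0, \]

with covariates typically entering through \( y_0 \) and \( \nu \). (Standard background; the paper's boosting-specific notation is not reproduced here.)
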
- Latency function estimation under the mixture cure model when the cure status is available
Abstract: This paper addresses the problem of estimating the conditional survival function of the lifetime of subjects experiencing the event (latency) in the mixture cure model when cure status information is partially available. Past work relies on the assumption that long-term survivors are unidentifiable because of right censoring. However, in some cases this assumption is invalid, since some subjects are known to be cured, e.g., when a medical test ascertains that a disease has entirely disappeared after treatment. We propose a latency estimator that extends the nonparametric estimator studied in López-Cheda et al. (TEST 26(2):353–376, 2017b) to the case when the cure status is partially available. We establish the asymptotic normality of the estimator and illustrate its performance in a simulation study. Finally, the estimator is applied to a medical dataset to study the length of hospital stay of COVID-19 patients requiring intensive care.
PubDate: 2023-03-08

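As background, the mixture cure model factors the population survival function into an incidence part and a latency part (notation illustrative):

\[ S(t \mid \mathbf{x}, \mathbf{z}) = 1 - p(\mathbf{z}) + p(\mathbf{z})\, S_0(t \mid \mathbf{x}), \]

where \( p(\mathbf{z}) \) is the probability of being susceptible and \( S_0 \) is the latency function targeted here. Knowing some cure statuses amounts to observing, for those subjects, which mixture component they belong to.
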
- A semi-parametric weighted likelihood approach for regression analysis of bivariate interval-censored outcomes from case-cohort studies
Abstract: The case-cohort design was developed to reduce costs when disease incidence is low and covariates are difficult to obtain. Interval-censored failure time data frequently occur in many areas, and a large literature on their analysis has been established. However, most existing case-cohort methods are for right-censored data, and there is only limited research on interval-censored data, especially on regression analysis of bivariate interval-censored data. In this paper, we discuss the situation of bivariate interval-censored data arising from case-cohort studies. For the problem, a class of semiparametric transformation frailty models is presented, and for inference, a sieve weighted likelihood approach is developed. The large sample properties, including the consistency of the proposed estimators and the asymptotic normality of the regression parameter estimators, are established. Moreover, a simulation is conducted to assess the finite sample performance of the proposed method and suggests that it performs well in practice.
PubDate: 2023-03-02

- RKHS-based covariate balancing for survival causal effect estimation
Abstract: Survival causal effect estimation based on right-censored data is of key interest in both survival analysis and causal inference. Propensity score weighting is one of the most popular methods in the literature. However, since it involves the inverse of propensity score estimates, its practical performance may be very unstable, especially when covariate overlap between the treatment and control groups is limited. To address this problem, a covariate balancing method is developed in this paper to estimate the counterfactual survival function. The proposed method is nonparametric and balances covariates in a reproducing kernel Hilbert space (RKHS) via weights that are counterparts of inverse propensity scores. The uniform rate of convergence for the proposed estimator is shown to be the same as that for the classical Kaplan–Meier estimator. The appealing practical performance of the proposed method is demonstrated by a simulation study as well as two real data applications, studying the causal effect of smoking on the survival time of stroke patients and that of endotoxin on the survival time of female patients with lung cancer, respectively.
PubDate: 2023-02-23

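The flavour of balancing covariates in an RKHS can be conveyed by a generic kernel-mean-matching sketch: choose nonnegative weights on the treated units so that their weighted kernel mean matches the full-sample kernel mean. The code below (numpy/scipy; all names illustrative) ignores censoring and is not the paper's estimator.

    import numpy as np
    from scipy.optimize import minimize

    def rbf_kernel(A, B, gamma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)

    def balancing_weights(X_treated, X_all, gamma=1.0):
        """Minimize || sum_j w_j phi(x_j) - mean_i phi(x_i) ||^2 in the RKHS
        over w >= 0 with sum(w) = 1 (kernel mean matching)."""
        K = rbf_kernel(X_treated, X_treated, gamma)        # treated x treated
        q = rbf_kernel(X_treated, X_all, gamma).mean(axis=1)
        n = len(X_treated)
        res = minimize(lambda w: w @ K @ w - 2 * q @ w,
                       np.full(n, 1.0 / n),
                       bounds=[(0.0, None)] * n,
                       constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},),
                       method="SLSQP")
        return res.x
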
- Special issue dedicated to Ørnulf Borgan
PubDate: 2023-02-18
- Estimating distribution of length of stay in a multi-state model conditional on the pathway, with an application to patients hospitalised with Covid-19
Abstract: Multi-state models are used to describe how individuals transition through different states over time. The distribution of the time spent in different states, referred to as ‘length of stay’, is often of interest. Methods for estimating expected length of stay in a given state are well established. The focus of this paper is on the distribution of the time spent in different states conditional on the complete pathway taken through the states, which we call ‘conditional length of stay’. This work is motivated by questions about length of stay in hospital wards and intensive care units among patients hospitalised due to Covid-19. Conditional length of stay estimates are useful as a way of summarising individuals’ transitions through the multi-state model, and also as inputs to mathematical models used in planning hospital capacity requirements. We describe non-parametric methods for estimating conditional length of stay distributions in a multi-state model in the presence of censoring, including conditional expected length of stay (CELOS). Methods are described for an illness-death model and then for the more complex motivating example. The methods are assessed using a simulation study and shown to give unbiased estimates of CELOS, whereas naive estimates of CELOS based on empirical averages are biased in the presence of censoring. The methods are applied to estimate conditional length of stay distributions for individuals hospitalised due to Covid-19 in the UK, using data on 42,980 individuals hospitalised from March to July 2020 from the COVID19 Clinical Information Network.
PubDate: 2023-02-08

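The bias of naive conditional averages under censoring is easy to see in a toy illness-death-style model: pathways completed before censoring over-represent short stays. A simulation sketch with purely illustrative rates (not estimates from the Covid-19 data):

    import numpy as np

    rng = np.random.default_rng(3)

    def one_stay(rng, censor_mean=20.0):
        """Ward (0) -> ICU (1) or discharge (2); ICU -> discharge.
        Exponential sojourns with illustrative means; returns the pathway
        and per-state stays, or None if censored first."""
        t01, t02 = rng.exponential(5.0), rng.exponential(7.0)
        c = rng.exponential(censor_mean)
        if min(t01, t02) > c:
            return None                                # censored on the ward
        if t02 < t01:
            return "ward->discharge", [t02]
        t12 = rng.exponential(4.0)
        if t01 + t12 > c:
            return None                                # censored in the ICU
        return "ward->icu->discharge", [t01, t12]

    paths = [one_stay(rng) for _ in range(100_000)]
    icu_stays = [p[1][1] for p in paths if p and p[0] == "ward->icu->discharge"]
    print(np.mean(icu_stays))   # naive average < 4.0, the true conditional mean
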
- Investigating non-inferiority or equivalence in time-to-event data under non-proportional hazards
Abstract: The classical approach to analyzing time-to-event data, e.g. in clinical trials, is to fit Kaplan–Meier curves, yielding the treatment effect as the hazard ratio between treatment groups. Afterwards, a log-rank test is commonly performed to investigate whether there is a difference in survival or, depending on additional covariates, a Cox proportional hazards model is used. However, in numerous trials these approaches fail due to the presence of non-proportional hazards, resulting in difficulties in interpreting the hazard ratio and a loss of power. When considering equivalence or non-inferiority trials, the commonly performed log-rank based tests are similarly affected by a violation of this assumption. Here we propose a parametric framework to assess equivalence or non-inferiority for survival data. We derive pointwise confidence bands for both the hazard ratio and the difference of the survival curves. Further, we propose a test procedure addressing non-inferiority and equivalence by directly comparing the survival functions at certain time points or over an entire range of time. Once the model’s suitability is proven, the method provides a noticeable power benefit, irrespective of the shape of the hazard ratio. On the other hand, model selection should be carried out carefully, as misspecification may cause type I error inflation in some situations. We investigate the robustness and demonstrate the advantages and disadvantages of the proposed methods by means of a simulation study. Finally, we demonstrate the validity of the methods with a clinical trial example.
PubDate: 2023-01-28

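One way to formalize equivalence of two survival curves over a time window \([t_1, t_2]\) with margin \( \Delta \) (the paper's exact formulation may differ) is

\[ H_0 : \max_{t \in [t_1, t_2]} \left| S_1(t) - S_2(t) \right| \ge \Delta \quad \text{versus} \quad H_1 : \max_{t \in [t_1, t_2]} \left| S_1(t) - S_2(t) \right| < \Delta, \]

with the one-sided analogue \( S_1(t) \ge S_2(t) - \Delta \) for non-inferiority; the parametric fits supply the estimated curves and the confidence bands used to decide the test.
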
- Estimation and testing for clustered interval-censored bivariate survival data with application using the semi-parametric version of the Clayton–Oakes model
Abstract: The Kaplan–Meier estimator is ubiquitously used to estimate survival probabilities for time-to-event data. It is nonparametric, and thus does not require specification of a survival distribution, but it does assume that the risk set at any time t consists of independent observations. This assumption does not hold for data from paired organ systems such as occur in ophthalmology (eyes) or otolaryngology (ears), or for other types of clustered data. In this article, we estimate marginal survival probabilities in the setting of clustered data, and provide confidence limits for these estimates with intra-cluster correlation accounted for by an interval-censored version of the Clayton–Oakes model. We develop a goodness-of-fit test for general bivariate interval-censored data and apply it to the proposed interval-censored version of the Clayton–Oakes model. We also propose a likelihood ratio test for the comparison of survival distributions between two groups in the setting of clustered data under the assumption of a constant between-group hazard ratio. This methodology can be used both for balanced and unbalanced cluster sizes, and also when the cluster size is informative. We compare our test to the ordinary log-rank test and the Lin–Wei (LW) test based on the marginal Cox proportional hazards model with robust standard errors obtained from the sandwich estimator. Simulation results indicate that the ordinary log-rank test inflates the type I error, while the proposed unconditional likelihood ratio test has appropriate type I error and higher power than the LW test. The method is demonstrated in real examples from the Sorbinil Retinopathy Trial and the Age-Related Macular Degeneration Study. Raw data from these two trials are provided.
PubDate: 2023-01-20

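In one common parameterization, the Clayton–Oakes model links the marginal survival functions of the two cluster members via

\[ S(t_1, t_2) = \left\{ S_1(t_1)^{-\theta} + S_2(t_2)^{-\theta} - 1 \right\}^{-1/\theta}, \qquad \theta > 0, \]

with \( \theta \to 0 \) giving independence and larger \( \theta \) stronger intra-cluster dependence; here the margins are additionally interval-censored.
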
- Incorporating delayed entry into the joint frailty model for recurrent events and a terminal event
Abstract: In studies of recurrent events, joint modeling approaches are often needed to allow for potential dependent censoring by a terminal event such as death. Joint frailty models for recurrent events and death with an additional dependence parameter have been studied for cases in which individuals are observed from the start of the event processes. However, samples are often selected at a later time, which results in delayed entry, so that only individuals who have not yet experienced the terminal event will be included. In joint frailty models, such left truncation has effects on the frailty distribution that need to be accounted for in both the recurrence process and the terminal event process, if the two are associated. We demonstrate, in a comprehensive simulation study, the effects that not adjusting for late entry can have, and derive the correctly adjusted marginal likelihood, which can be expressed as a ratio of two integrals over the frailty distribution. We extend the estimation method of Liu and Huang (Stat Med 27:2665–2683, 2008. https://doi.org/10.1002/sim.3077) to include potential left truncation. Numerical integration is performed by Gaussian quadrature, the baseline intensities are specified as piecewise constant functions, and potential covariates are assumed to have multiplicative effects on the intensities. We apply the method to estimate age-specific intensities of recurrent urinary tract infections and mortality in an older population.
PubDate: 2023-01-18

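The building block of the adjusted likelihood, an expectation over the frailty, can be approximated by Gauss–Hermite quadrature, the kind of numerical integration the abstract mentions. A minimal sketch for a log-normal frailty (interface illustrative; the paper's left-truncation correction is a ratio of two such integrals):

    import numpy as np
    from numpy.polynomial.hermite import hermgauss

    def frailty_marginal(conditional_lik, sigma=1.0, n_nodes=20):
        """E[L(U)] for a log-normal frailty U = exp(sigma * Z), Z ~ N(0, 1),
        by Gauss-Hermite quadrature; conditional_lik is the likelihood
        given the frailty."""
        x, w = hermgauss(n_nodes)                 # nodes for int e^{-x^2} h(x) dx
        u = np.exp(sigma * np.sqrt(2.0) * x)      # frailty values at the nodes
        vals = np.array([conditional_lik(ui) for ui in u])
        return (w @ vals) / np.sqrt(np.pi)

    # e.g. a survival term E[exp(-U * Lambda)] with cumulative hazard Lambda = 0.1:
    approx = frailty_marginal(lambda u: np.exp(-0.1 * u))
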
- Semiparametric predictive inference for failure data using first-hitting-time threshold regression
Abstract: The progression of disease for an individual can be described mathematically as a stochastic process. The individual experiences a failure event when the disease path first reaches or crosses a critical disease level, defining a failure event and a first hitting time or time-to-event, both of which are important in medical contexts. When the context involves explanatory variables, there is usually an interest in incorporating regression structures into the analysis, and the methodology known as threshold regression comes into play. To date, most applications of threshold regression have been based on parametric families of stochastic processes. This paper presents a semiparametric form of threshold regression that requires the stochastic process to have only one key property, namely, stationary independent increments. As this property is frequently encountered in real applications, this model has potential for use in many fields. The mathematical underpinnings of this semiparametric approach for estimation and prediction are described. The basic data element required by the model is a pair of readings representing the observed change in time and the observed change in disease level, arising either from a failure event or from survival of the individual to the end of the data record. An extension is presented for applications where the underlying disease process is unobservable but component covariate processes are available to construct a surrogate disease process. Threshold regression, used in combination with a data technique called Markov decomposition, allows the methods to handle longitudinal time-to-event data by uncoupling a longitudinal record into a sequence of single records. Computational aspects of the methods are straightforward. An array of simulation experiments that verify computational feasibility and statistical inference is reported in an online supplement. Case applications based on longitudinal observational data from The Osteoarthritis Initiative (OAI) study are presented to demonstrate the methodology and its practical use.
PubDate: 2023-01-10
DOI: 10.1007/s10985-022-09583-3

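The 'pair of readings' data element is natural because, for a process with stationary independent increments, increments are the identically distributed building blocks. In the Wiener special case, a surviving record pair \( (\Delta t, \Delta y) \) contributes a Gaussian term, and a longitudinal record factorizes by Markov decomposition into

\[ L = \prod_{j=1}^{k} \frac{1}{\sigma\sqrt{\Delta t_j}}\, \phi\!\left( \frac{\Delta y_j - \nu\, \Delta t_j}{\sigma\sqrt{\Delta t_j}} \right), \]

where \( \phi \) is the standard normal density. (An illustrative special case only; the paper's treatment is semiparametric, and failure increments contribute hitting-time terms instead.)
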
- On a simple estimation of the proportional odds model under right truncation
Abstract: Retrospective sampling can be useful in epidemiological research for its convenience in exploring an etiological association. In one such retrospective sampling scheme, disease outcomes of the time-to-event type are collected subject to right truncation, along with other covariates of interest. For regression analysis of right-truncated time-to-event data, the so-called proportional reverse-time hazards model has been proposed, but the interpretation of its regression parameters tends to be cumbersome, which has greatly hampered its application in practice. In this paper, we instead consider the proportional odds model, an appealing alternative to the popular proportional hazards model. Under the proportional odds model, there is an embedded relationship between the reverse-time hazard function and the usual hazard function. Building on this relationship, we provide a simple procedure to estimate the regression parameters in the proportional odds model for right-truncated data. Weighted estimations are also studied.
PubDate: 2023-01-05
DOI: 10.1007/s10985-022-09584-2

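For reference, the proportional odds model specifies that covariates act multiplicatively on the odds of failure by time t:

\[ \frac{1 - S(t \mid x)}{S(t \mid x)} = e^{\beta^\top x}\, \frac{1 - S_0(t)}{S_0(t)}, \]

so \( \beta \) has a direct odds-ratio interpretation at every t, which is what makes the model a more interpretable alternative to the proportional reverse-time hazards model here.
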
- Double bias correction for high-dimensional sparse additive hazards regression with covariate measurement errors
Abstract: We propose an inferential procedure for additive hazards regression with high-dimensional survival data, where the covariates are prone to measurement errors. We develop a double bias correction method by first correcting the bias arising from measurement errors in covariates through an estimating function for the regression parameter. By adopting the convex relaxation technique, a regularized estimator for the regression parameter is obtained by elaborately designing a feasible loss based on the estimating function, which is solved via linear programming. Using Neyman orthogonality, we propose an asymptotically unbiased estimator which further corrects the bias caused by the convex relaxation and regularization. We derive the convergence rate of the proposed estimator and establish asymptotic normality for the low-dimensional parameter estimator and linear combinations thereof, accompanied by a consistent estimator of the variance. Numerical experiments are carried out on both simulated and real datasets to demonstrate the promising performance of the proposed double bias correction method.
PubDate: 2023-01-01

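The working model is the additive hazards (Lin–Ying type) specification

\[ \lambda(t \mid Z) = \lambda_0(t) + \beta^\top Z, \]

whose estimating function is linear in \( \beta \); it is this linearity that makes an explicit bias correction for covariate measurement error tractable. (Standard background, not the paper's notation.)
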
- A general class of promotion time cure rate models with a new biological interpretation
Abstract: Over the last decades, the challenges in survival models have been changing considerably, and full probabilistic modeling is crucial in many medical applications. Motivated by a new biological interpretation of cancer metastasis, we introduce a general method for obtaining more flexible cure rate models. The proposed model extends the promotion time cure rate model. Furthermore, it includes several well-known models as special cases and defines many new special models. We derive several properties of the hazard function for the proposed model and establish mathematical relationships with the promotion time cure rate model. We consider a frequentist approach to perform inference, and the maximum likelihood method is employed to estimate the model parameters. Simulation studies are conducted to evaluate its performance, with a discussion of the obtained results. A real dataset from a population-based study of incident cases of melanoma diagnosed in the state of São Paulo, Brazil, is discussed in detail.
PubDate: 2023-01-01

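As background, the promotion time cure rate model being generalized assumes \( N \sim \mathrm{Poisson}(\theta) \) latent competing causes (e.g. metastasis-competent cells) with i.i.d. promotion times following a distribution F, giving the population survival function

\[ S_{\mathrm{pop}}(t) = \exp\{-\theta F(t)\}, \qquad S_{\mathrm{pop}}(\infty) = e^{-\theta} > 0, \]

so the cure fraction \( e^{-\theta} \) arises from the probability of zero initiated causes; the paper generalizes pieces of this construction to obtain a broader class.
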
- Joint modeling of generalized scale-change models for recurrent event and failure time data
Abstract: Recurrent event and failure time data arise frequently in many clinical and observational studies. In this article, we propose a joint model of generalized scale-change models for the recurrent event process and the failure time, and allow the two processes to be correlated through a shared frailty. The proposed joint model is flexible in that it requires neither the Poisson assumption for the recurrent event process nor a parametric assumption on the frailty distribution. Estimating equation approaches are developed for parameter estimation, and the asymptotic properties of the resulting estimators are established. Simulation studies are conducted to evaluate the finite sample performance of the proposed method. An application to a medical cost study of chronic heart failure patients is provided.
PubDate: 2023-01-01

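In one common form (notation illustrative, not necessarily the paper's), a generalized scale-change model lets covariates alter both the time scale and the magnitude of the baseline rate,

\[ \lambda(t \mid X) = \lambda_0\!\left( t\, e^{X^\top \alpha} \right) e^{X^\top \beta}, \]

which contains a Cox-type model (\( \alpha = 0 \)) and an accelerated rate model (\( \beta = \alpha \)) as special cases; the paper couples such a model for the recurrences with one for the terminal event through a shared frailty.
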