

  Subjects -> STATISTICS (Total: 130 journals)
Showing 1 - 151 of 151 Journals sorted by number of followers
Review of Economics and Statistics     Hybrid Journal   (Followers: 313)
Statistics in Medicine     Hybrid Journal   (Followers: 166)
Journal of Econometrics     Hybrid Journal   (Followers: 85)
Journal of the American Statistical Association     Full-text available via subscription   (Followers: 79, SJR: 3.746, CiteScore: 2)
Advances in Data Analysis and Classification     Hybrid Journal   (Followers: 53)
Biometrics     Hybrid Journal   (Followers: 52)
Sociological Methods & Research     Hybrid Journal   (Followers: 49)
Journal of the Royal Statistical Society, Series B (Statistical Methodology)     Hybrid Journal   (Followers: 43)
Journal of Business & Economic Statistics     Full-text available via subscription   (Followers: 42, SJR: 3.664, CiteScore: 2)
Computational Statistics & Data Analysis     Hybrid Journal   (Followers: 39)
Journal of the Royal Statistical Society Series C (Applied Statistics)     Hybrid Journal   (Followers: 36)
Journal of Risk and Uncertainty     Hybrid Journal   (Followers: 35)
Oxford Bulletin of Economics and Statistics     Hybrid Journal   (Followers: 35)
Journal of the Royal Statistical Society, Series A (Statistics in Society)     Hybrid Journal   (Followers: 31)
Journal of Urbanism: International Research on Placemaking and Urban Sustainability     Hybrid Journal   (Followers: 28)
The American Statistician     Full-text available via subscription   (Followers: 27)
Statistical Methods in Medical Research     Hybrid Journal   (Followers: 25)
Journal of Applied Statistics     Hybrid Journal   (Followers: 22)
Journal of Computational & Graphical Statistics     Full-text available via subscription   (Followers: 21)
Journal of Forecasting     Hybrid Journal   (Followers: 21)
Statistical Modelling     Hybrid Journal   (Followers: 19)
Journal of Statistical Software     Open Access   (Followers: 19, SJR: 13.802, CiteScore: 16)
Journal of Time Series Analysis     Hybrid Journal   (Followers: 18)
Computational Statistics     Hybrid Journal   (Followers: 17)
Journal of Biopharmaceutical Statistics     Hybrid Journal   (Followers: 17)
Risk Management     Hybrid Journal   (Followers: 16)
Decisions in Economics and Finance     Hybrid Journal   (Followers: 15)
Demographic Research     Open Access   (Followers: 15)
Statistics and Computing     Hybrid Journal   (Followers: 14)
Statistics & Probability Letters     Hybrid Journal   (Followers: 13)
Geneva Papers on Risk and Insurance - Issues and Practice     Hybrid Journal   (Followers: 13)
Australian & New Zealand Journal of Statistics     Hybrid Journal   (Followers: 12)
International Statistical Review     Hybrid Journal   (Followers: 12)
Journal of Statistical Physics     Hybrid Journal   (Followers: 12)
Structural and Multidisciplinary Optimization     Hybrid Journal   (Followers: 12)
Statistics: A Journal of Theoretical and Applied Statistics     Hybrid Journal   (Followers: 12)
Pharmaceutical Statistics     Hybrid Journal   (Followers: 10)
The Canadian Journal of Statistics / La Revue Canadienne de Statistique     Hybrid Journal   (Followers: 10)
Communications in Statistics - Theory and Methods     Hybrid Journal   (Followers: 10)
Advances in Complex Systems     Hybrid Journal   (Followers: 10)
Stata Journal     Full-text available via subscription   (Followers: 10)
Multivariate Behavioral Research     Hybrid Journal   (Followers: 9)
Scandinavian Journal of Statistics     Hybrid Journal   (Followers: 9)
Communications in Statistics - Simulation and Computation     Hybrid Journal   (Followers: 9)
Handbook of Statistics     Full-text available via subscription   (Followers: 9)
Fuzzy Optimization and Decision Making     Hybrid Journal   (Followers: 9)
Current Research in Biostatistics     Open Access   (Followers: 9)
Journal of Educational and Behavioral Statistics     Hybrid Journal   (Followers: 8)
Journal of Statistical Planning and Inference     Hybrid Journal   (Followers: 8)
Teaching Statistics     Hybrid Journal   (Followers: 8)
Law, Probability and Risk     Hybrid Journal   (Followers: 8)
Argumentation et analyse du discours     Open Access   (Followers: 8)
Research Synthesis Methods     Hybrid Journal   (Followers: 8)
Environmental and Ecological Statistics     Hybrid Journal   (Followers: 7)
Journal of Combinatorial Optimization     Hybrid Journal   (Followers: 7)
Journal of Global Optimization     Hybrid Journal   (Followers: 7)
Journal of Nonparametric Statistics     Hybrid Journal   (Followers: 7)
Queueing Systems     Hybrid Journal   (Followers: 7)
Asian Journal of Mathematics & Statistics     Open Access   (Followers: 7)
Biometrical Journal     Hybrid Journal   (Followers: 6)
Significance     Hybrid Journal   (Followers: 6)
International Journal of Computational Economics and Econometrics     Hybrid Journal   (Followers: 6)
Journal of Mathematics and Statistics     Open Access   (Followers: 6)
Applied Categorical Structures     Hybrid Journal   (Followers: 5)
Engineering With Computers     Hybrid Journal   (Followers: 5)
Lifetime Data Analysis     Hybrid Journal   (Followers: 5)
Optimization Methods and Software     Hybrid Journal   (Followers: 5)
Statistical Methods and Applications     Hybrid Journal   (Followers: 5)
CHANCE     Hybrid Journal   (Followers: 5)
ESAIM: Probability and Statistics     Open Access   (Followers: 4)
Mathematical Methods of Statistics     Hybrid Journal   (Followers: 4)
Metrika     Hybrid Journal   (Followers: 4)
Statistical Papers     Hybrid Journal   (Followers: 4)
Monthly Statistics of International Trade - Statistiques mensuelles du commerce international     Full-text available via subscription   (Followers: 4)
TEST     Hybrid Journal   (Followers: 3)
Journal of Algebraic Combinatorics     Hybrid Journal   (Followers: 3)
Journal of Theoretical Probability     Hybrid Journal   (Followers: 3)
Statistical Inference for Stochastic Processes     Hybrid Journal   (Followers: 3)
Handbook of Numerical Analysis     Full-text available via subscription   (Followers: 3)
Sankhya A     Hybrid Journal   (Followers: 3)
AStA Advances in Statistical Analysis     Hybrid Journal   (Followers: 2)
Extremes     Hybrid Journal   (Followers: 2)
Optimization Letters     Hybrid Journal   (Followers: 2)
Stochastic Models     Hybrid Journal   (Followers: 2)
Stochastics: An International Journal of Probability and Stochastic Processes (formerly Stochastics and Stochastics Reports)     Hybrid Journal   (Followers: 2)
IEA World Energy Statistics and Balances     Full-text available via subscription   (Followers: 2)
Building Simulation     Hybrid Journal   (Followers: 2)
Technology Innovations in Statistics Education (TISE)     Open Access   (Followers: 2)
Measurement: Interdisciplinary Research and Perspectives     Hybrid Journal   (Followers: 1)
Statistica Neerlandica     Hybrid Journal   (Followers: 1)
Sequential Analysis: Design Methods and Applications     Hybrid Journal   (Followers: 1)
Journal of the Korean Statistical Society     Hybrid Journal   (Followers: 1)
Wiley Interdisciplinary Reviews - Computational Statistics     Hybrid Journal   (Followers: 1)
Statistics and Economics     Open Access  
Review of Socionetwork Strategies     Hybrid Journal  
SourceOECD Measuring Globalisation Statistics - SourceOCDE Mesurer la mondialisation - Base de donnees statistiques     Full-text available via subscription  


Statistical Methods in Medical Research
Journal Prestige (SJR): 1.402
Citation Impact (citeScore): 2
Number of Followers: 25  
 
  Hybrid Journal (may contain Open Access articles)
ISSN (Print) 0962-2802 - ISSN (Online) 1477-0334
Published by Sage Publications  [1176 journals]
  • Retraction notice

      Pages: NP1 - NP1
      Abstract: Statistical Methods in Medical Research, Volume 33, Issue 6, Page NP1-NP1, June 2024.

      Citation: Statistical Methods in Medical Research
      PubDate: 2024-06-07T11:56:15Z
      DOI: 10.1177/0962280215586011
      Issue No: Vol. 33, No. 6 (2024)
       
  • Accounting for regression to the mean under the bivariate
           t-distribution

      Authors: Muhammad Umair, Manzoor Khan, Jake Olivier
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Regression to the mean occurs when an unusual observation is followed by a more typical outcome closer to the population mean. In pre- and post-intervention studies, treatment is administered to subjects with initial measurements located in the tail of a distribution, and a paired sample t-test can be utilized to assess the effectiveness of the intervention. The observed change in the pre-post means is the sum of regression to the mean and treatment effects, and ignoring regression to the mean could lead to erroneous conclusions about the effectiveness of the treatment. In this study, formulae for regression to the mean are derived, and maximum likelihood estimation is employed to numerically estimate the regression to the mean effect when the test statistic follows the bivariate t-distribution based on a baseline criterion or a cut-off point. The pre-post degrees of freedom could be equal but also unequal, such as when there is missing data. Additionally, we illustrate how regression to the mean is influenced by cut-off points, mixing angles which are related to correlation, and degrees of freedom. A simulation study is conducted to assess the statistical properties of unbiasedness, consistency, and asymptotic normality of the regression to the mean estimator. Moreover, the proposed methods are compared with an existing one assuming bivariate normality. The p-values are compared when regression to the mean is either ignored or accounted for to gauge the statistical significance of the paired t-test. The proposed method is applied to real data concerning schizophrenia patients, and the observed conditional mean difference, called the total effect, is decomposed into the regression to the mean and treatment effects.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-08-09T05:49:09Z
      DOI: 10.1177/09622802241267808
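      A minimal sketch of the phenomenon the abstract describes: subjects selected for an extreme baseline value tend to have a more typical follow-up value even with no treatment at all. For simplicity this toy simulation uses bivariate *normal* data (the article works with the bivariate t-distribution), and all numbers (correlation, cut-off) are invented for illustration.

```python
import math
import random

random.seed(1)
rho, cutoff, n = 0.5, 1.5, 200_000

base_sel, follow_sel = [], []
for _ in range(n):
    b = random.gauss(0.0, 1.0)                                    # baseline
    f = rho * b + math.sqrt(1 - rho**2) * random.gauss(0.0, 1.0)  # follow-up
    if b > cutoff:                       # intervene only on the extreme tail
        base_sel.append(b)
        follow_sel.append(f)

mb = sum(base_sel) / len(base_sel)
mf = sum(follow_sel) / len(follow_sel)
rtm = mb - mf    # with no treatment, the whole pre-post drop is regression to the mean
print(f"selected baseline mean {mb:.2f}, follow-up mean {mf:.2f}, RTM {rtm:.2f}")
```

      Even though no treatment was applied, the selected group's follow-up mean drops toward the population mean of zero; a naive pre-post comparison would misread that drop as a treatment effect.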
       
  • Estimation and inference on the partial volume under the receiver
           operating characteristic surface

      Authors: Kate J Young, Leonidas E Bantis
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Summary measures of biomarker accuracy that employ the receiver operating characteristic surface have been proposed for biomarkers that classify patients into one of three groups: healthy, benign, or aggressive disease. The volume under the receiver operating characteristic surface summarizes the overall discriminatory ability of a biomarker in such configurations, but includes cutoffs associated with clinically irrelevant true classification rates. Due to the lethal nature of pancreatic cancer, cutoffs associated with a low true classification rate for identifying patients with pancreatic cancer may be undesirable and not appropriate for use in a clinical setting. In this project, we study the properties of a more focused criterion, the partial volume under the receiver operating characteristic surface, that summarizes the diagnostic accuracy of a marker in the three-class setting for regions restricted to only those of clinical interest. We propose methods for estimation and inference on the partial volume under the receiver operating characteristic surface under parametric and non-parametric frameworks and apply these methods to the evaluation of potential biomarkers for the diagnosis of pancreatic cancer.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-08-09T05:37:41Z
      DOI: 10.1177/09622802241267356
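      As background to the abstract above: the volume under the ROC surface (VUS) for a three-class marker equals P(X < Y < Z) for independent draws from the healthy, benign, and aggressive groups, with 1/6 as the no-discrimination value. The toy Monte Carlo estimator below uses invented normal distributions and does not implement the article's *partial* VUS, which restricts attention to clinically relevant classification rates.

```python
import random

random.seed(2)
n = 5000
healthy = [random.gauss(0.0, 1.0) for _ in range(n)]
benign = [random.gauss(1.0, 1.0) for _ in range(n)]
aggressive = [random.gauss(2.0, 1.0) for _ in range(n)]

# Monte Carlo over random triples rather than all n^3 ordered triples
m = 100_000
hits = 0
for _ in range(m):
    x = random.choice(healthy)
    y = random.choice(benign)
    z = random.choice(aggressive)
    if x < y < z:                # triple ranked in the correct disease order
        hits += 1
vus = hits / m
print(f"estimated VUS: {vus:.3f}")
```

      A VUS well above 1/6, as here, indicates the marker orders the three groups far better than chance.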
       
  • Inference for restricted mean survival time as a function of restriction
           time under length-biased sampling

      Authors: Fangfang Bai, Xiaoran Yang, Xuerong Chen, Xiaofei Wang
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      The restricted mean survival time (RMST) is often of direct interest in clinical studies involving censored survival outcomes. It describes the area under the survival curve from time zero to a specified time point. When data are subject to length-biased sampling, as is frequently encountered in observational cohort studies, existing methods cannot estimate the RMST for various restriction times through a single model. In this article, we model the RMST as a continuous function of the restriction time under the setting of length-biased sampling. Two approaches based on estimating equations are proposed to estimate the time-varying effects of covariates. Finally, we establish the asymptotic properties for the proposed estimators. Simulation studies are performed to demonstrate the finite sample performance. Two real-data examples are analyzed by our procedures.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-08-07T03:14:47Z
      DOI: 10.1177/09622802241267812
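      For readers unfamiliar with the endpoint: the RMST at restriction time tau is the area under the survival curve from 0 to tau, i.e. E[min(T, tau)]. For an exponential survival time with rate lam it has the closed form (1 - exp(-lam*tau)) / lam, which a plain Monte Carlo average reproduces. This sketch ignores censoring and length-biased sampling, which are the hard parts the article addresses.

```python
import math
import random

random.seed(3)
lam, tau, n = 0.5, 2.0, 200_000

closed_form = (1 - math.exp(-lam * tau)) / lam   # RMST for Exp(lam) at tau
mc = sum(min(random.expovariate(lam), tau) for _ in range(n)) / n
print(f"closed form {closed_form:.4f}, Monte Carlo {mc:.4f}")
```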
       
  • Improving estimation efficiency of case-cohort studies with
           interval-censored failure time data

      Authors: Qingning Zhou, Kin Yau Wong
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      The case-cohort design is a commonly used cost-effective sampling strategy for large cohort studies, where some covariates are expensive to measure or obtain. In this paper, we consider regression analysis under a case-cohort study with interval-censored failure time data, where the failure time is only known to fall within an interval instead of being exactly observed. A common approach to analyzing data from a case-cohort study is the inverse probability weighting approach, where only subjects in the case-cohort sample are used in estimation, and the subjects are weighted based on the probability of inclusion into the case-cohort sample. This approach, though consistent, is generally inefficient as it does not incorporate information outside the case-cohort sample. To improve efficiency, we first develop a sieve maximum weighted likelihood estimator under the Cox model based on the case-cohort sample and then propose a procedure to update this estimator by using information in the full cohort. We show that the update estimator is consistent, asymptotically normal, and at least as efficient as the original estimator. The proposed method can flexibly incorporate auxiliary variables to improve estimation efficiency. A weighted bootstrap procedure is employed for variance estimation. Simulation results indicate that the proposed method works well in practical situations. An application to a Phase 3 HIV vaccine efficacy trial is provided for illustration.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-08-06T10:31:09Z
      DOI: 10.1177/09622802241268601
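      The inverse probability weighting idea the abstract starts from can be shown in miniature: units sampled with known, unequal probabilities are weighted by the inverse of those probabilities, giving an approximately unbiased estimate of a full-cohort quantity. The population, the "case" rule, and the subcohort sampling fraction below are invented for illustration and estimate a simple mean rather than Cox model parameters.

```python
import random

random.seed(4)
N = 50_000
y = [random.gauss(1.0, 1.0) for _ in range(N)]
true_mean = sum(y) / N

# "cases" (large y) are always kept; others enter a subcohort with prob 0.1
est_total = 0.0
for yi in y:
    p = 1.0 if yi > 2.0 else 0.1     # known inclusion probability
    if random.random() < p:
        est_total += yi / p          # Horvitz-Thompson weight 1/p
ipw_mean = est_total / N
print(f"true mean {true_mean:.3f}, IPW estimate {ipw_mean:.3f}")
```

      Only the sampled units contribute, yet the weighted estimate tracks the full-cohort mean; the article's contribution is to go beyond this and recover efficiency from the unsampled cohort as well.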
       
  • Measuring the individualization potential of treatment individualization
           rules: Application to rules built with a new parametric interaction model
           for parallel-group clinical trials

      Authors: Francisco J Diaz
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      For personalized medicine, we propose a general method of evaluating the potential performance of an individualized treatment rule in future clinical applications with new patients. We focus on rules that choose the most beneficial treatment for the patient out of two active (nonplacebo) treatments, which the clinician will prescribe regularly to the patient after the decision. We develop a measure of the individualization potential (IP) of a rule. The IP compares the expected effectiveness of the rule in a future clinical individualization setting versus the effectiveness of not trying individualization. We illustrate our evaluation method by explaining how to measure the IP of a useful type of individualized rules calculated through a new parametric interaction model of data from parallel-group clinical trials with continuous responses. Our interaction model implies a structural equation model we use to estimate the rule and its IP. We examine the IP both theoretically and with simulations when the estimated individualized rule is put into practice in new patients. Our individualization approach was superior to outcome-weighted machine learning according to simulations. We also show connections with crossover and N-of-1 trials. As a real data application, we estimate a rule for the individualization of treatments for diabetic macular edema and evaluate its IP.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-08-06T10:30:29Z
      DOI: 10.1177/09622802241259172
       
  • A dependent circular-linear model for multivariate biomechanical data:
           Ilizarov ring fixator study

      Authors: Priyanka Nagar, Andriette Bekker, Mohammad Arashi, Cor-Jacques Kat, Annette-Christi Barnard
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Biomechanical and orthopaedic studies frequently encounter complex datasets that encompass both circular and linear variables. In most cases (i) the circular and linear variables are considered in isolation with dependency between variables neglected and (ii) the cyclicity of the circular variables is disregarded resulting in erroneous decision making. Given the inherent characteristics of circular variables, it is imperative to adopt methods that integrate directional statistics to achieve precise modelling. This paper is motivated by the modelling of biomechanical data, that is, the fracture displacements, that is used as a measure in external fixator comparisons. We focus on a dataset, based on an Ilizarov ring fixator, comprising of six variables. A modelling framework applicable to the six-dimensional joint distribution of circular-linear data based on vine copulas is proposed. The pair-copula decomposition concept of vine copulas represents the dependence structure as a combination of circular-linear, circular-circular and linear-linear pairs modelled by their respective copulas. This framework allows us to assess the dependencies in the joint distribution as well as account for the cyclicity of the circular variables. Thus, a new approach for accurate modelling of mechanical behaviour for Ilizarov ring fixators and other data of this nature is imparted.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-08-06T08:36:01Z
      DOI: 10.1177/09622802241268654
       
  • Sample size calculation for mixture cure model with restricted mean
           survival time as a primary endpoint

      Authors: Zhaojin Li, Xiang Geng, Yawen Hou, Zheng Chen
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      It is not uncommon for a substantial proportion of patients to be cured (or survive long-term) in clinical trials with time-to-event endpoints, such as the endometrial cancer trial. When designing a clinical trial, a mixture cure model should be used to fully consider the cure fraction. Previously, mixture cure model sample size calculations were based on the proportional hazards assumption of latency distribution between groups, and the log-rank test was used for deriving sample size formulas. In real studies, the latency distributions of the two groups often do not satisfy the proportional hazards assumptions. This article has derived a sample size calculation formula for a mixture cure model with restricted mean survival time as the primary endpoint, and did simulation and example studies. The restricted mean survival time test is not subject to proportional hazards assumptions, and the difference in treatment effect obtained can be quantified as the number of years (or months) increased or decreased in survival time, making it very convenient for clinical patient-physician communication. The simulation results showed that the sample sizes estimated by the restricted mean survival time test for the mixture cure model were accurate regardless of whether the proportional hazards assumptions were satisfied and were smaller than the sample sizes estimated by the log-rank test in most cases for the scenarios in which the proportional hazards assumptions were violated.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-08-06T07:09:10Z
      DOI: 10.1177/09622802241265501
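      For orientation only: the generic normal-approximation sample size for detecting a difference delta in a mean-type endpoint (such as an RMST difference) with standard deviation sigma is n per arm = 2*(z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2. This is the textbook formula, *not* the mixture-cure-model formula derived in the article; in the RMST setting, sigma would come from the variance of the RMST estimator.

```python
import math

def n_per_arm(delta, sigma, z_alpha=1.959964, z_beta=0.841621):
    """Sample size per arm, two-sided 5% test, 80% power (default z-values)."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# e.g. detecting a 0.5-year RMST difference with estimator SD 1.0
print(n_per_arm(delta=0.5, sigma=1.0))
```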
       
  • Minimizing confounding in comparative observational studies with
           time-to-event outcomes: An extensive comparison of covariate balancing
           methods using Monte Carlo simulation

      Authors: Guy Cafri, Stephen Fortin, Peter C Austin
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Observational studies are frequently used in clinical research to estimate the effects of treatments or exposures on outcomes. To reduce the effects of confounding when estimating treatment effects, covariate balancing methods are frequently implemented. This study evaluated, using extensive Monte Carlo simulation, several methods of covariate balancing, and two methods for propensity score estimation, for estimating the average treatment effect on the treated using a hazard ratio from a Cox proportional hazards model. With respect to minimizing bias and maximizing accuracy (as measured by the mean square error) of the treatment effect, the average treatment effect on the treated weighting, fine stratification, and optimal full matching with a conventional logistic regression model for the propensity score performed best across all simulated conditions. Other methods performed well in specific circumstances, such as pair matching when sample sizes were large (n = 5000) and the proportion treated was < 0.25. Statistical power was generally higher for weighting methods than matching methods, and Type I error rates were at or below the nominal level for balancing methods with unbiased treatment effect estimates. There was also a decreasing effective sample size with an increasing number of strata, therefore for stratification-based weighting methods, it may be important to consider fewer strata. Generally, we recommend methods that performed well in our simulations, although the identification of methods that performed well is necessarily limited by the specific features of our simulation. The methods are illustrated using a real-world example comparing beta blockers and angiotensin-converting enzyme inhibitors among hypertensive patients at risk for incident stroke.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-07-25T11:10:12Z
      DOI: 10.1177/09622802241262527
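      A toy version of the best-performing approach named above, average treatment effect on the treated (ATT) weighting: treated units get weight 1 and controls get weight e/(1-e), where e is the propensity score. To keep the sketch self-contained the true propensity score is used directly (the study estimates it, e.g. by logistic regression), the outcome is continuous rather than time-to-event, and the effect size and confounding strength are invented.

```python
import math
import random

random.seed(6)
n, tau = 100_000, 1.0
treated_y, ctrl_y, ctrl_w = [], [], []
for _ in range(n):
    x = random.gauss(0.0, 1.0)             # confounder
    e = 1.0 / (1.0 + math.exp(-x))         # true propensity score
    t = random.random() < e                # treatment depends on x
    y = 2.0 * x + tau * t + random.gauss(0.0, 1.0)
    if t:
        treated_y.append(y)
    else:
        ctrl_y.append(y)
        ctrl_w.append(e / (1.0 - e))       # ATT weight for controls

naive = sum(treated_y) / len(treated_y) - sum(ctrl_y) / len(ctrl_y)
att = sum(treated_y) / len(treated_y) - (
    sum(w * y for w, y in zip(ctrl_w, ctrl_y)) / sum(ctrl_w))
print(f"naive difference {naive:.2f}, ATT-weighted estimate {att:.2f}")
```

      The unweighted comparison is badly confounded, while the weighted controls mimic the treated group's covariate distribution and recover the true effect of 1.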
       
  • Cause-specific hazard Cox models with partly interval censoring –
           Penalized likelihood estimation using Gaussian quadrature

      Authors: Joseph Descallar, Jun Ma, Houying Zhu, Stephane Heritier, Rory Wolfe
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      The cause-specific hazard Cox model is widely used in analyzing competing risks survival data, and the partial likelihood method is a standard approach when survival times contain only right censoring. In practice, however, interval-censored survival times often arise, and this means the partial likelihood method is not directly applicable. Two common remedies in practice are (i) to replace each censoring interval with a single value, such as the middle point; or (ii) to redefine the event of interest, such as the time to diagnosis instead of the time to recurrence of a disease. However, the mid-point approach can cause biased parameter estimates. In this article, we develop a penalized likelihood approach to fit semi-parametric cause-specific hazard Cox models, and this method is general enough to allow left, right, and interval censoring times. Penalty functions are used to regularize the baseline hazard estimates and also to make these estimates less affected by the number and location of knots used for the estimates. We will provide asymptotic properties for the estimated parameters. A simulation study is designed to compare our method with the mid-point partial likelihood approach. We apply our method to the Aspirin in Reducing Events in the Elderly (ASPREE) study, illustrating an application of our proposed method.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-07-25T11:09:52Z
      DOI: 10.1177/09622802241262526
       
  • Estimating individualized treatment rules by optimizing the adjusted
           probability of a longer survival

      Authors: Qijia He, Shixiao Zhang, Michael L LeBlanc, Ying-Qi Zhao
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Individualized treatment rules inform tailored treatment decisions based on the patient’s information, where the goal is to optimize clinical benefit for the population. When the clinical outcome of interest is survival time, most current approaches typically aim to maximize the expected time of survival. We propose a new criterion for constructing individualized treatment rules that optimize the clinical benefit with survival outcomes, termed the adjusted probability of a longer survival. This objective captures the likelihood of living longer on treatment compared to the alternative, offering an interpretation that is often easier to communicate to clinicians and patients. We view it as an alternative to the survival analysis standard of the hazard ratio and the increasingly used restricted mean survival time. We develop a new method to construct the optimal individualized treatment rule by maximizing a nonparametric estimator of the adjusted probability of a longer survival for a decision rule. Simulation studies demonstrate the reliability of the proposed method across a range of different scenarios. We further perform data analysis using data collected from a randomized Phase III clinical trial (SWOG S0819).
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-07-25T11:09:32Z
      DOI: 10.1177/09622802241262525
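      A back-of-envelope version of the underlying quantity: for independent exponential survival times, P(T_treat > T_ctrl) = lam_ctrl / (lam_ctrl + lam_treat), and a Monte Carlo estimate should agree. The article's *adjusted* probability additionally handles censoring and covariates, which this sketch (with invented rates) ignores.

```python
import random

random.seed(7)
lam_treat, lam_ctrl, n = 0.5, 1.0, 200_000
closed_form = lam_ctrl / (lam_ctrl + lam_treat)   # = 2/3 here

wins = sum(
    random.expovariate(lam_treat) > random.expovariate(lam_ctrl)
    for _ in range(n))
mc = wins / n
print(f"closed form {closed_form:.4f}, Monte Carlo {mc:.4f}")
```

      The resulting "probability of living longer on treatment" (2/3 here) is arguably easier to explain to a patient than a hazard ratio.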
       
  • Group lasso priors for Bayesian accelerated failure time models with
           left-truncated and interval-censored data

      Authors: Harrison T Reeder, Sebastien Haneuse, Kyu Ha Lee
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      An important task in health research is to characterize time-to-event outcomes such as disease onset or mortality in terms of a potentially high-dimensional set of risk factors. For example, prospective cohort studies of Alzheimer’s disease (AD) typically enroll older adults for observation over several decades to assess the long-term impact of genetic and other factors on cognitive decline and mortality. The accelerated failure time model is particularly well-suited to such studies, structuring covariate effects as “horizontal” changes to the survival quantiles that conceptually reflect shifts in the outcome distribution due to lifelong exposures. However, this modeling task is complicated by the enrollment of adults at differing ages, and intermittent follow-up visits leading to interval-censored outcome information. Moreover, genetic and clinical risk factors are not only high-dimensional, but characterized by underlying grouping structures, such as by function or gene location. Such grouped high-dimensional covariates require shrinkage methods that directly acknowledge this structure to facilitate variable selection and estimation. In this paper, we address these considerations directly by proposing a Bayesian accelerated failure time model with a group-structured lasso penalty, designed for left-truncated and interval-censored time-to-event data. We develop an R package with a Markov chain Monte Carlo sampler for estimation. We present a simulation study examining the performance of this method relative to an ordinary lasso penalty and apply the proposed method to identify groups of predictive genetic and clinical risk factors for AD in the Religious Orders Study and Memory and Aging Project prospective cohort studies of AD and dementia.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-07-25T11:07:22Z
      DOI: 10.1177/09622802241262523
       
  • Analyzing heterogeneity in biomarker discriminative performance through
           partial time-dependent receiver operating characteristic curve modeling

      Authors: Xinyang Jiang, Wen Li, Kang Wang, Ruosha Li, Jing Ning
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      This study investigates the heterogeneity of a biomarker’s discriminative performance for predicting subsequent time-to-event outcomes across different patient subgroups. While the area under the curve (AUC) for the time-dependent receiver operating characteristic curve is commonly used to assess biomarker performance, the partial time-dependent AUC (PAUC) provides insights that are often more pertinent for population screening and diagnostic testing. To achieve this objective, we propose a regression model tailored for PAUC and develop two distinct estimation procedures for discrete and continuous covariates, employing a pseudo-partial likelihood method. Simulation studies are conducted to assess the performance of these procedures across various scenarios. We apply our model and inference procedure to the Alzheimer’s Disease Neuroimaging Initiative data set to evaluate potential heterogeneities in the discriminative performance of biomarkers for early Alzheimer’s disease diagnosis based on patients’ characteristics.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-07-25T11:06:52Z
      DOI: 10.1177/09622802241262521
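      A rough empirical sketch of the building blocks named in the abstract: the full AUC is the Mann-Whitney fraction of diseased/healthy pairs ranked correctly, and a partial AUC integrates the empirical ROC curve only over a restricted false-positive-rate range (here FPR <= 0.2). The marker distributions are invented; the article's *time-dependent* partial AUC with covariate modeling is well beyond this sketch.

```python
import random

random.seed(8)
n0, n1 = 2000, 2000
healthy = [random.gauss(0.0, 1.0) for _ in range(n0)]
diseased = [random.gauss(1.0, 1.0) for _ in range(n1)]

# full AUC: Mann-Whitney probability over random diseased/healthy pairs
pairs = 100_000
auc = sum(
    random.choice(diseased) > random.choice(healthy) for _ in range(pairs)
) / pairs

# partial AUC: rectangle-rule sum of TPR over thresholds giving FPR <= 0.2
thresholds = sorted(healthy, reverse=True)
k_max = int(0.2 * n0)
pauc = 0.0
for k in range(k_max):
    thr = thresholds[k]              # FPR = (k+1)/n0 at this threshold
    tpr = sum(d > thr for d in diseased) / n1
    pauc += tpr / n0                 # rectangle of width 1/n0
print(f"AUC {auc:.3f}, partial AUC over FPR<=0.2: {pauc:.3f}")
```

      The partial AUC is bounded above by 0.2 (the width of the FPR range), which is why screening applications often report it normalized by that maximum.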
       
  • Proportion of treatment effect explained: An overview of interpretations

      Authors: Florian Stijven, Ariel Alonso, Geert Molenberghs
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      The selection of the primary endpoint in a clinical trial plays a critical role in determining the trial’s success. Ideally, the primary endpoint is the clinically most relevant outcome, also termed the true endpoint. However, practical considerations, like extended follow-up, may complicate this choice, prompting the proposal to replace the true endpoint with so-called surrogate endpoints. Evaluating the validity of these surrogate endpoints is crucial, and a popular evaluation framework is based on the proportion of treatment effect explained (PTE). While methodological advancements in this area have focused primarily on estimation methods, interpretation remains a challenge hindering the practical use of the PTE. We review various ways to interpret the PTE. These interpretations—two causal and one non-causal—reveal connections between the PTE principal surrogacy, causal mediation analysis, and the prediction of trial-level treatment effects. A common limitation across these interpretations is the reliance on unverifiable assumptions. As such, we argue that the PTE is only meaningful when researchers are willing to make very strong assumptions. These challenges are also illustrated in an analysis of three hypothetical vaccine trials.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-07-25T11:05:53Z
      DOI: 10.1177/09622802241259177
       
  • Erratum to “A dose–effect network meta-analysis model with application
           in antidepressants using restricted cubic splines”

      Abstract: Statistical Methods in Medical Research, Ahead of Print.

      Citation: Statistical Methods in Medical Research
      PubDate: 2024-07-23T09:34:40Z
      DOI: 10.1177/09622802241254569
       
  • Point estimation of the 100p percent lethal dose using a novel penalised
           likelihood approach

      Authors: Yilei Ma, Youpeng Su, Peng Wang, Ping Yin
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Estimation of the 100p percent lethal dose ([math]) is of great interest to pharmacologists for assessing the toxicity of certain compounds. However, most existing literature focuses on the interval estimation of [math] and little attention has been paid to its point estimation. Currently, the most commonly used method for estimating the [math] is the maximum likelihood estimator (MLE), which can be represented as a ratio estimator, with the denominator being the slope estimated from the logistic regression model. However, the MLE can be seriously biased when the sample size is small, which is common in such studies, or when the dose–response curve is relatively flat (i.e. the slope approaches zero). In this study, we address these issues by developing a novel penalised maximum likelihood estimator (PMLE) that can prevent the denominator of the ratio from being close to zero. Similar to the MLE, the PMLE is computationally simple and thus can be conveniently used in practice. Moreover, with a suitable penalty parameter, we show that the PMLE can (a) reduce the bias to the second order with respect to the sample size and (b) avoid extreme estimates. Through simulation studies and real data applications, we show that the PMLE generally outperforms the existing methods in terms of bias and root mean square error.
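      For orientation, the MLE discussed above is a simple ratio of quantities from a fitted logistic dose–response model. A minimal sketch, assuming hypothetical fitted coefficients (the proposed PMLE additionally penalises the likelihood so that the slope in the denominator cannot approach zero):

```python
import math

def ld_estimate(p, alpha, beta):
    """Ratio-type point estimate of the 100p% lethal dose from a fitted
    logistic model logit(P(death)) = alpha + beta * dose."""
    return (math.log(p / (1 - p)) - alpha) / beta

# Hypothetical fitted coefficients, for illustration only
alpha, beta = -3.0, 1.5
print(ld_estimate(0.5, alpha, beta))  # 2.0: logit(0.5) = 0, so LD50 = -alpha/beta
```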
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-06-12T03:28:33Z
      DOI: 10.1177/09622802241259174
       
  • A robust regression model for bounded count health data

      Authors: Cristian L Bayes, Jorge Luis Bazán, Luis Valdivieso
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Bounded count response data arise naturally in health applications. In general, the well-known beta-binomial regression model forms the basis for analyzing such data, especially when the data are overdispersed. Little attention, however, has been given in the literature to the possibility of extreme observations occurring together with overdispersion. We propose in this work an extension of the beta-binomial regression model, named the beta-2-binomial regression model, which provides a rather flexible approach for fitting a regression model to a wide spectrum of bounded count response data sets in the presence of overdispersion, outliers, or an excess of extreme observations. This distribution possesses more skewness and kurtosis than the beta-binomial model but preserves the same mean and variance form of the beta-binomial model. Additional properties of the beta-2-binomial distribution are derived, including its behavior at the limits of its parameter space. A penalized maximum likelihood approach is considered to estimate the parameters of this model, and a residual analysis is included to assess departures from model assumptions as well as to detect outlier observations. Simulation studies, considering robustness to outliers, are presented, confirming that the beta-2-binomial regression model is a more robust alternative than the binomial and beta-binomial regression models. We also found that the beta-2-binomial regression model outperformed the binomial and beta-binomial regression models in our applications of predicting liver cancer development in mice and the number of inappropriate days a patient spent in a hospital.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-06-07T11:52:52Z
      DOI: 10.1177/09622802241259178
       
  • Evaluating prognostic biomarkers for survival outcomes subject to
           informative censoring

      Authors: Wei Liu, Danping Liu, Zhiwei Zhang
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Prognostic biomarkers for survival outcomes are widely used in clinical research and practice. Such biomarkers are often evaluated using a C-index as well as quantities based on time-dependent receiver operating characteristic curves. Existing methods for their evaluation generally assume that censoring is uninformative in the sense that the censoring time is independent of the failure time with or without conditioning on the biomarker under evaluation. Focusing on the C-index and the area under a particular receiver operating characteristic curve, we describe and compare three estimation methods that account for informative censoring based on observed baseline covariates. Two of them are straightforward extensions of existing plug-in and inverse probability weighting methods for uninformative censoring. By appealing to semiparametric theory, we also develop a doubly robust, locally efficient method that is more robust than the plug-in and inverse probability weighting methods and typically more efficient than the inverse probability weighting method. The methods are evaluated and compared in a simulation study, and applied to real data from studies of breast cancer and heart failure.
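      For readers unfamiliar with the C-index, a minimal sketch of the basic Harrell-type estimator is shown below. This naive version implicitly assumes uninformative censoring, which is precisely the assumption the article’s plug-in, inverse probability weighting, and doubly robust estimators relax; the function name and the O(n²) pairwise loop are illustrative only.

```python
def harrell_c(time, event, risk):
    """Basic Harrell C-index: among comparable pairs (the shorter observed
    time is an event), count how often the earlier failure has the higher
    risk score; ties in risk count as 1/2."""
    num = den = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] == 1 and time[i] < time[j]:
                den += 1
                if risk[i] > risk[j]:
                    num += 1
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

# A perfectly ranking biomarker attains C = 1
print(harrell_c([1, 2, 3, 4], [1, 1, 1, 1], [4, 3, 2, 1]))  # 1.0
```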
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-06-06T08:11:23Z
      DOI: 10.1177/09622802241259170
       
  • Group sequential methods based on supremum logrank statistics
           under proportional and nonproportional hazards

      Authors: Jean Marie Boher, Thomas Filleron, Patrick Sfumato, Pierre Bunouf, Richard J Cook
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Cox regression is widely used for modeling treatment effects in clinical trials, but in immunotherapy oncology trials and other settings therapeutic benefits are not immediately realized, violating the proportional hazards assumption. Weighted logrank tests and the so-called Maxcombo test, which combines multiple logrank test statistics, have been advocated to increase power for detecting effects in these and other settings where hazards are nonproportional. We describe a testing framework based on supremum logrank statistics created by successively analyzing and excluding early events, or obtained using a moving time window. We then describe how such tests can be conducted in a group sequential trial with interim analyses conducted for potential early stopping for benefit. The crossing boundaries for the interim test statistics are determined using an easy-to-implement Monte Carlo algorithm. Numerical studies illustrate the good frequency properties of the proposed group sequential methods.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-06-06T05:39:32Z
      DOI: 10.1177/09622802241254211
       
  • Optimal designs using generalized estimating equations in cluster
           randomized crossover and stepped wedge trials

      Authors: Jingxia Liu, Fan Li
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Cluster randomized crossover and stepped wedge cluster randomized trials are two types of longitudinal cluster randomized trials that leverage both within- and between-cluster comparisons to estimate the treatment effect and are increasingly used in healthcare delivery and implementation science research. While variance expressions for the estimated treatment effect have previously been developed from the method of generalized estimating equations for analyzing cluster randomized crossover trials and stepped wedge cluster randomized trials, little guidance has been provided on optimal designs to ensure maximum efficiency. Here, an optimal design refers to the combination of optimal cluster-period size and optimal number of clusters that provides the smallest variance of the treatment effect estimator, or maximum efficiency, under a fixed total budget. In this work, we develop optimal designs for multiple-period cluster randomized crossover trials and stepped wedge cluster randomized trials with continuous outcomes, including both closed-cohort and repeated cross-sectional sampling schemes. Local optimal design algorithms are proposed when the correlation parameters in the working correlation structure are known. MaxiMin optimal design algorithms are proposed when the exact values are unavailable but investigators may specify a range of correlation values. Closed-form formulae for the local optimal design and the MaxiMin optimal design are derived for multiple-period cluster randomized crossover trials, where the cluster-period size and number of clusters are allowed to be non-integer. These non-integer estimates from the closed-form formulae can then be used to investigate the performance of integer estimates from the local optimal design and MaxiMin optimal design algorithms. One unique contribution of this work, compared to previous optimal design research, is that we adopt constrained optimization techniques to obtain integer estimates under the MaxiMin optimal design. To assist practical implementation, we also develop four SAS macros to find local optimal designs and MaxiMin optimal designs.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-05-30T09:27:37Z
      DOI: 10.1177/09622802241247717
       
  • Maintaining the validity of inference from linear mixed models in
           stepped-wedge cluster randomized trials under misspecified random-effects
           structures

      Authors: Yongdong Ouyang, Monica Taljaard, Andrew B Forbes, Fan Li
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Linear mixed models are commonly used in analyzing stepped-wedge cluster randomized trials. A key consideration in analyzing a stepped-wedge cluster randomized trial is accounting for the potentially complex correlation structure, which can be achieved by specifying random effects. The simplest random-effects structure is the random intercept, but more complex structures such as random cluster-by-period, discrete-time decay and, more recently, the random intervention structure have been proposed. Specifying appropriate random effects in practice can be challenging: more complex correlation structures may be reasonable to assume, but they are vulnerable to computational challenges. To circumvent these challenges, robust variance estimators may be applied to linear mixed models to provide consistent estimators of the standard errors of fixed-effect parameters in the presence of random-effects misspecification. However, there has been no empirical investigation of robust variance estimators for stepped-wedge cluster randomized trials. In this article, we review six robust variance estimators (both standard and small-sample bias-corrected) that are available for linear mixed models in R, and then describe a comprehensive simulation study examining their performance in stepped-wedge cluster randomized trials with a continuous outcome under different data generators. For each data generator, we investigate whether the use of a robust variance estimator with either the random intercept model or the random cluster-by-period model is sufficient to provide valid statistical inference for fixed-effect parameters when these working models are subject to random-effects misspecification. Our results indicate that the random intercept and random cluster-by-period models with robust variance estimators performed adequately. The CR3 (approximate jackknife) robust variance estimator, coupled with a degrees-of-freedom correction equal to the number of clusters minus two, consistently gave the best coverage results, but could be slightly conservative when the number of clusters was below 16. We summarize the implications of our results for the linear mixed model analysis of stepped-wedge cluster randomized trials and offer some practical recommendations on the choice of the analytic model.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-05-29T07:19:57Z
      DOI: 10.1177/09622802241248382
       
  • Demystifying estimands in cluster-randomised trials

      Authors: Brennan C Kahan, Bryan S Blette, Michael O Harhay, Scott D Halpern, Vipul Jairath, Andrew Copas, Fan Li
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Estimands can help clarify the interpretation of treatment effects and ensure that estimators are aligned with the study's objectives. Cluster-randomised trials require additional attributes to be defined within the estimand compared to individually randomised trials, including whether treatment effects are marginal or cluster-specific, and whether they are participant- or cluster-average. In this paper, we provide formal definitions of estimands encompassing both these attributes using potential outcomes notation and describe differences between them. We then provide an overview of estimators for each estimand, describe their assumptions, and show consistency (i.e. asymptotically unbiased estimation) for a series of analyses based on cluster-level summaries. Then, through a re-analysis of a published cluster-randomised trial, we demonstrate that the choice of both estimand and estimator can affect interpretation. For instance, the estimated odds ratio ranged from 1.38 (p = 0.17) to 1.83 (p = 0.03) depending on the target estimand, and for some estimands, the choice of estimator affected the conclusions by leading to smaller treatment effect estimates. We conclude that careful specification of the estimand, along with an appropriate choice of estimator, is essential to ensuring that cluster-randomised trials address the right question.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-05-23T02:17:37Z
      DOI: 10.1177/09622802241254197
       
  • Goodness-of-fit tests for modified Poisson regression possibly producing
           fitted values exceeding one in binary outcome analysis

      Authors: Yasuhiro Hagiwara, Yutaka Matsuyama
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Modified Poisson regression, which estimates the regression parameters in the log-binomial regression model using the Poisson quasi-likelihood estimating equation and robust variance, is a useful tool for estimating the adjusted risk and prevalence ratio in binary outcome analysis. Although several goodness-of-fit tests have been developed for other binary regressions, few goodness-of-fit tests are available for modified Poisson regression. In this study, we proposed several goodness-of-fit tests for modified Poisson regression, including the modified Hosmer-Lemeshow test with empirical variance, Tsiatis test, normalized Pearson chi-square tests with binomial variance and Poisson variance, and normalized residual sum of squares test. The original Hosmer-Lemeshow test and normalized Pearson chi-square test with binomial variance are inappropriate for the modified Poisson regression, which can produce a fitted value exceeding 1 owing to the unconstrained parameter space. A simulation study revealed that the normalized residual sum of squares test performed well regarding the type I error probability and the power for a wrong link function. We applied the proposed goodness-of-fit tests to the analysis of cross-sectional data of patients with cancer. We recommend the normalized residual sum of squares test as a goodness-of-fit test in the modified Poisson regression.
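      The core of modified Poisson regression, fitting a log-link model to a binary outcome via the Poisson quasi-likelihood estimating equation and pairing it with a robust sandwich variance, can be sketched in a few lines. This is a minimal illustration on simulated data, not the goodness-of-fit tests proposed in the article; the simple Newton-Raphson solver and all names are assumptions. Note that the fitted risks exp(x'beta) are unconstrained and can exceed 1, which is exactly the issue motivating the article’s test corrections.

```python
import numpy as np

def modified_poisson(X, y, n_iter=25):
    """Fit log E[y|x] = x'beta by the Poisson quasi-likelihood estimating
    equation (Newton-Raphson) and return beta together with the robust
    sandwich covariance estimate."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        beta += np.linalg.solve((X.T * mu) @ X, X.T @ (y - mu))
    mu = np.exp(X @ beta)
    bread = np.linalg.inv((X.T * mu) @ X)   # inverse "model" information
    meat = (X.T * (y - mu) ** 2) @ X        # empirical score variance
    return beta, bread @ meat @ bread

# Hypothetical data: binary outcome, binary exposure, true risk ratio
# 0.4 / 0.2 = 2.
rng = np.random.default_rng(0)
x = rng.binomial(1, 0.5, 500)
y = rng.binomial(1, np.where(x == 1, 0.4, 0.2))
X = np.column_stack([np.ones(500), x])
beta, cov = modified_poisson(X, y.astype(float))
rr = np.exp(beta[1])   # adjusted risk ratio; robust SEs from sqrt(diag(cov))
```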
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-05-23T02:17:17Z
      DOI: 10.1177/09622802241254220
       
  • A structured iterative division approach for non-sparse regression models
           and applications in biological data analysis

      Authors: Shun Yu, Yuehan Yang
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      In this paper, we focus on the problem of estimating models for data with non-sparse structures, specifically biological data in which a large proportion of the features are relevant. Various fields, such as biology and finance, face the challenge of non-sparse estimation. We address this problem with the proposed method, called structured iterative division, which divides data into non-sparse and sparse structures and eliminates numerous irrelevant variables, significantly reducing the error while maintaining computational efficiency. Numerical and theoretical results demonstrate the competitive advantage of the proposed method on a wide range of problems, and it exhibits excellent statistical performance in numerical comparisons with several existing methods. We apply the proposed algorithm to two biological problems, gene microarray datasets and chimeric protein datasets, addressing the prognostic risk of distant metastasis in breast cancer and Alzheimer’s disease, respectively. Structured iterative division provides insights into gene identification and selection, and we also obtain meaningful results in anticipating cancer risk and identifying key factors.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-05-23T02:16:41Z
      DOI: 10.1177/09622802241254251
       
  • A capture-recapture modeling framework emphasizing expert opinion in
           disease surveillance

      Authors: Yuzi Zhang, Lin Ge, Lance A Waller, Sarita Shah, Robert H Lyles
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      In disease surveillance, capture-recapture methods are commonly used to estimate the number of diseased cases in a defined target population. Since the number of cases never identified by any surveillance system cannot be observed, estimation of the case count typically requires at least one crucial assumption about the dependency between surveillance systems. However, such assumptions are generally unverifiable based on the observed data alone. In this paper, we advocate a modeling framework hinging on the choice of a key population-level parameter that reflects dependencies among surveillance streams. With the key dependency parameter as the focus, the proposed method offers the benefits of (a) incorporating expert opinion in the spirit of prior information to guide estimation; (b) providing accessible bias corrections; and (c) leveraging an adapted credible interval approach to facilitate inference. We apply the proposed framework to two real human immunodeficiency virus surveillance datasets exhibiting three-stream and four-stream capture-recapture-based case count estimation. Our approach enables estimation of the number of human immunodeficiency virus positive cases for both examples, under realistic assumptions that are under the investigator's control and can be readily interpreted. The proposed framework also permits principled uncertainty analyses through which a user can acknowledge their level of confidence in assumptions made about the key non-identifiable dependency parameter.
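      To see the role of a dependency parameter, consider the two-stream case: the classical Lincoln-Petersen estimator assumes the streams capture cases independently, and a single multiplicative dependence factor corrects it. A minimal sketch; `phi` below is a stand-in for the article’s population-level parameter, whose exact definition may differ, and the counts are hypothetical.

```python
def adjusted_petersen(n1, n2, n11, phi=1.0):
    """Two-stream case-count estimate. n1 and n2 are the stream totals and
    n11 the overlap. phi encodes assumed dependence via
    P(caught by both) = phi * P(stream 1) * P(stream 2);
    phi = 1 recovers the classical Lincoln-Petersen estimator."""
    return phi * n1 * n2 / n11

# Positive dependence (phi > 1) means the observed overlap overstates
# coverage, so the corrected case-count estimate is larger.
print(adjusted_petersen(120, 150, 30))           # 600.0 (independence)
print(adjusted_petersen(120, 150, 30, phi=1.2))  # 720.0
```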
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-05-20T11:16:44Z
      DOI: 10.1177/09622802241254217
       
  • Testing for marginal covariate effect when the subgroup size induced by
           the covariate is informative

      Authors: Samuel Anyaso-Samuel, Somnath Datta
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      In many cluster-correlated data analyses, informative cluster size poses a challenge that can potentially introduce bias in statistical analyses. Different methodologies have been introduced in statistical literature to address this bias. In this study, we consider a complex form of informativeness where the number of observations corresponding to latent levels of a unit-level continuous covariate within a cluster is associated with the response variable. This type of informativeness has not been explored in prior research. We present a novel test statistic designed to evaluate the effect of the continuous covariate while accounting for the presence of informativeness. The covariate induces a continuum of latent subgroups within the clusters, and our test statistic is formulated by aggregating values from an established statistic that accounts for informative subgroup sizes when comparing group-specific marginal distributions. Through carefully designed simulations, we compare our test with four traditional methods commonly employed in the analysis of cluster-correlated data. Only our test maintains the size across all data-generating scenarios with informativeness. We illustrate the proposed method to test for marginal associations in periodontal data with this distinctive form of informativeness.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-05-20T11:14:45Z
      DOI: 10.1177/09622802241254196
       
  • Robust integration of secondary outcomes information into primary outcome
           analysis in the presence of missing data

      Authors: Daxuan Deng, Vernon M Chinchilli, Hao Feng, Chixiang Chen, Ming Wang
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      In clinical and observational studies, secondary outcomes are frequently collected alongside the primary outcome for each subject, yet their potential to improve the analysis efficiency remains underutilized. Moreover, missing data, commonly encountered in practice, can introduce bias to estimates if not appropriately addressed. This article presents an innovative approach that enhances the empirical likelihood-based information borrowing method by integrating missing-data techniques, ensuring robust data integration. We introduce a plug-in inverse probability weighting estimator to handle missingness in the primary analysis, demonstrating its equivalence to the standard joint estimator under mild conditions. To address potential bias from missing secondary outcomes, we propose a uniform mapping strategy, imputing incomplete secondary outcomes into a unified space. Extensive simulations highlight the effectiveness of our method, showing consistent, efficient, and robust estimators under various scenarios involving missing data and/or misspecified secondary models. Finally, we apply our proposal to the Uniform Data Set from the National Alzheimer’s Coordinating Center, exemplifying its practical application.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-05-20T11:13:45Z
      DOI: 10.1177/09622802241254195
       
  • Quantifying proportion of treatment effect by surrogate endpoint
           under heterogeneity

      Authors: Xinzhou Guo, Florence T Bourgeois, Tianxi Cai
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      When the primary endpoints in randomized clinical trials require long-term follow-up or are costly to measure, it is often desirable to assess treatment effects on surrogate instead of clinical endpoints. Prior to adopting a surrogate endpoint for such purposes, the extent of its surrogacy on the primary endpoint must be assessed. There is a rich statistical literature on assessing surrogacy in the overall population, much of which is based on quantifying the proportion of treatment effect on the primary endpoint that is explained by the treatment effect on the surrogate endpoint. However, the surrogacy of an endpoint may vary across patient subgroups defined by baseline demographic characteristics, and limited methods are currently available to assess overall surrogacy in the presence of potential surrogacy heterogeneity. In this paper, we propose methods that incorporate baseline covariates, such as age, to improve overall surrogacy assessment. We use flexible semi-non-parametric modeling strategies to adjust for covariate effects and derive a robust estimate of the proportion of treatment effect for the covariate-adjusted surrogate endpoint. Simulation results suggest that the adjusted surrogate endpoint yields a greater proportion of treatment effect explained than the unadjusted surrogate endpoint. We apply the proposed method to data from a clinical trial of infliximab and assess the adequacy of the surrogate endpoint in the presence of age heterogeneity.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-05-08T02:21:21Z
      DOI: 10.1177/09622802241247719
       
  • Sample size and power calculation for testing treatment effect
           heterogeneity in cluster randomized crossover designs

      Authors: Xueqi Wang, Xinyuan Chen, Keith S Goldfeld, Monica Taljaard, Fan Li
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      The cluster randomized crossover design has been proposed to improve efficiency over the traditional parallel-arm cluster randomized design. While statistical methods have been developed for designing cluster randomized crossover trials, they have exclusively focused on testing the overall average treatment effect, with little attention to differential treatment effects across subpopulations. Recently, interest has grown in understanding whether treatment effects may vary across pre-specified patient subpopulations, such as those defined by demographic or clinical characteristics. In this article, we consider the two-treatment two-period cluster randomized crossover design under either a cross-sectional or closed-cohort sampling scheme, where it is of interest to detect the heterogeneity of treatment effect via an interaction test. Assuming a patterned correlation structure for both the covariate and the outcome, we derive new sample size formulas for testing the heterogeneity of treatment effect with continuous outcomes based on linear mixed models. Our formulas also address unequal cluster sizes and therefore allow us to analytically assess the impact of unequal cluster sizes on the power of the interaction test in cluster randomized crossover designs. We conduct simulations to confirm the accuracy of the proposed methods, and illustrate their application in two real cluster randomized crossover trials.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-05-01T07:15:30Z
      DOI: 10.1177/09622802241247736
       
  • Causal rule ensemble method for estimating heterogeneous treatment effect
           with consideration of prognostic effects

      Authors: Mayu Hiraishi, Ke Wan, Kensuke Tanioka, Hiroshi Yadohisa, Toshio Shimokawa
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      We propose a novel framework based on the RuleFit method to estimate heterogeneous treatment effects in randomized clinical trials. The proposed method estimates a rule ensemble comprising a set of prognostic rules and a set of prescriptive rules, as well as the linear effects of the original predictor variables. The prescriptive rules provide an interpretable description of the heterogeneous treatment effect. Including a prognostic term in the proposed model allows the selected rules to represent a heterogeneous treatment effect that excludes other effects. We confirmed that the performance of the proposed method was equivalent to that of other ensemble learning methods through numerical simulations and demonstrated the interpretation of the proposed method using a real data application.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-04-27T07:12:50Z
      DOI: 10.1177/09622802241247728
       
  • Bayesian analysis of joint quantile regression for multi-response
           longitudinal data with application to primary biliary cirrhosis sequential
           cohort study

      Authors: Yu-Zhu Tian, Man-Lai Tang, Catherine Wong, Mao-Zai Tian
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      This article proposes a Bayesian approach for jointly estimating marginal conditional quantiles of multi-response longitudinal data with a multivariate mixed-effects model. The multivariate asymmetric Laplace distribution is employed to construct the working likelihood of the considered model. Penalization priors on the regression parameters are incorporated into the working likelihood to conduct Bayesian high-dimensional inference. A Markov chain Monte Carlo algorithm is used to obtain the full conditional posterior distributions of all parameters and latent variables. Monte Carlo simulations are conducted to evaluate the finite-sample performance of the proposed joint quantile regression approach. Finally, we analyze a longitudinal medical dataset from the primary biliary cirrhosis sequential cohort study to illustrate a real application of the proposed modeling method.
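      The link between the asymmetric Laplace working likelihood and quantile regression rests on the check loss: maximizing an asymmetric Laplace likelihood in its location parameter is equivalent to minimizing the check loss of the residuals. A minimal sketch (illustrative only; the article’s model is multivariate, has mixed effects, and is fitted by MCMC):

```python
import numpy as np

def check_loss(u, tau):
    """Quantile check loss rho_tau(u) = u * (tau - 1{u < 0})."""
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0))

# The tau-quantile minimizes the expected check loss; e.g. for tau = 0.5
# the minimizer over the observed points is the median.
x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
losses = [check_loss(x - q, 0.5).mean() for q in x]
print(x[int(np.argmin(losses))])  # 3.0
```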
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-04-27T07:10:33Z
      DOI: 10.1177/09622802241247725
       
  • The performance of marginal structural models for estimating risk
           differences and relative risks using weighted univariate generalized
           linear models

      Authors: Peter C Austin
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      We used Monte Carlo simulations to compare the performance of marginal structural models (MSMs) based on weighted univariate generalized linear models (GLMs) for estimating risk differences and relative risks for binary outcomes in observational studies. We considered four different sets of weights based on the propensity score: inverse probability of treatment weights with the average treatment effect as the target estimand, weights for estimating the average treatment effect in the treated, matching weights, and overlap weights. We considered sample sizes ranging from 500 to 10,000 and allowed the prevalence of treatment to range from 0.1 to 0.9. We examined both the robust variance estimator, when using generalized estimating equations with an independent working correlation matrix, and a bootstrap variance estimator for estimating the standard error of the risk difference and the log-relative risk. The performance of these methods was compared with that of direct weighting. The direct weighting approach and MSMs based on weighted univariate GLMs produced identical estimates of risk differences and relative risks. When sample sizes were small to moderate, the use of an MSM with a bootstrap variance estimator tended to result in the most accurate estimates of standard errors. When sample sizes were large, the direct weighting approach and an MSM with a bootstrap variance estimator tended to produce estimates of standard error with similar accuracy. When using an MSM to estimate risk differences and relative risks, it is in general preferable to use a bootstrap variance estimator rather than the robust variance estimator. We illustrate the application of the different methods for estimating risk differences and relative risks using an observational study on the effect on mortality of discharge prescribing of a beta-blocker in patients hospitalized with acute myocardial infarction.
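      A minimal sketch of the direct weighting approach with ATE-type inverse probability of treatment weights and a simple bootstrap standard error. All names and the simulated data are hypothetical, and, unlike a full analysis, the propensity score is held fixed across bootstrap replicates rather than re-estimated in each one.

```python
import numpy as np

def iptw_risk_difference(y, a, ps):
    """Risk difference E[Y(1)] - E[Y(0)] via inverse probability of
    treatment weighting with ATE weights; ps = P(A=1|X)."""
    w = a / ps + (1 - a) / (1 - ps)
    risk1 = np.sum(w * a * y) / np.sum(w * a)
    risk0 = np.sum(w * (1 - a) * y) / np.sum(w * (1 - a))
    return risk1 - risk0

def bootstrap_se(y, a, ps, n_boot=200, seed=1):
    """Nonparametric bootstrap SE of the weighted risk difference.
    Simplification: ps is held fixed, whereas a full analysis would
    re-estimate the propensity model in every replicate."""
    rng = np.random.default_rng(seed)
    n = len(y)
    reps = []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)
        reps.append(iptw_risk_difference(y[i], a[i], ps[i]))
    return float(np.std(reps, ddof=1))

# Hypothetical confounded data with a known propensity score and a true
# marginal risk difference of about 0.1.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
ps = 1 / (1 + np.exp(-x))
a = rng.binomial(1, ps)
y = rng.binomial(1, np.clip(0.2 + 0.1 * a + 0.1 * x, 0.01, 0.99))
rd = iptw_risk_difference(y, a, ps)   # close to 0.1 under this design
```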
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-04-24T10:08:08Z
      DOI: 10.1177/09622802241247742
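The four propensity-score weight sets compared in this abstract follow standard formulas. The sketch below (my own illustration, not the authors' code; function names are invented) computes each set and the direct-weighting risk difference with NumPy.

```python
import numpy as np

def propensity_weights(z, ps, kind="ate"):
    """Standard propensity-score weight sets for a binary treatment.

    z    : 0/1 treatment indicator
    ps   : estimated propensity score P(Z=1 | X)
    kind : 'ate' (inverse probability of treatment), 'att',
           'matching', or 'overlap'.
    """
    z = np.asarray(z, float)
    ps = np.asarray(ps, float)
    if kind == "ate":       # IPTW targeting the average treatment effect
        return z / ps + (1 - z) / (1 - ps)
    if kind == "att":       # weights targeting the effect in the treated
        return z + (1 - z) * ps / (1 - ps)
    if kind == "matching":  # matching weights
        return np.minimum(ps, 1 - ps) / np.where(z == 1, ps, 1 - ps)
    if kind == "overlap":   # overlap weights
        return np.where(z == 1, 1 - ps, ps)
    raise ValueError(f"unknown kind: {kind}")

def direct_weighted_risk_difference(y, z, w):
    """Direct weighting: difference of weighted outcome means."""
    y, z, w = (np.asarray(a, float) for a in (y, z, w))
    p1 = np.sum(w * z * y) / np.sum(w * z)
    p0 = np.sum(w * (1 - z) * y) / np.sum(w * (1 - z))
    return p1 - p0
```

A weighted GLM with only an intercept and a treatment term, fit with these weights, reproduces the same contrast, which is why the abstract finds the two approaches give identical point estimates.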
       
  • Bayesian compositional models for ordinal response


      Authors: Li Zhang, Xinyan Zhang, Justin M Leach, AKM F Rahman, Nengjun Yi
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Ordinal responses are common in medicine, biology, and other fields. In many situations, the predictors for an ordinal response are compositional, meaning that the sum of the predictors for each sample is fixed. Examples of compositional data include the relative abundances of species in microbiome data and the relative frequencies of nutrient concentrations. Moreover, predictors that are strongly correlated tend to have similar influence on the response. Conventional cumulative logistic regression models for ordinal responses ignore the fixed-sum constraint on the predictors and their associated interrelationships, and thus are not appropriate for analyzing compositional predictors. To solve this problem, we propose Bayesian Compositional Models for Ordinal Response to analyze the relationship between compositional data and an ordinal response, with a structured regularized horseshoe prior for the compositional coefficients and a soft sum-to-zero restriction on the coefficients imposed through the prior distribution. The method is implemented with the R package rstan using an efficient Hamiltonian Monte Carlo algorithm. We performed simulations to compare the proposed approach with existing methods for ordinal responses. Results revealed that the proposed method outperformed the existing methods in terms of parameter estimation and prediction. We also applied the proposed method to a microbiome study, HMP2Data, to find microorganisms linked to ordinal inflammatory bowel disease levels. To make this work reproducible, the code and data used in this paper are available at https://github.com/Li-Zhang28/BCO.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-04-24T03:35:09Z
      DOI: 10.1177/09622802241247730
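Two building blocks mentioned in the abstract can be sketched generically: the cumulative (proportional-odds) logit link for an ordinal response, and a centered log-ratio view of compositional predictors, under which coefficients naturally carry a sum-to-zero interpretation. This is a minimal NumPy illustration of the standard constructions, not the authors' Stan model.

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform of compositional rows (each row sums to 1).
    The transformed rows sum to zero, mirroring the sum-to-zero restriction
    placed on compositional coefficients."""
    lx = np.log(x)
    return lx - lx.mean(axis=1, keepdims=True)

def cumulative_logit_probs(eta, cutpoints):
    """Category probabilities P(Y = k) under a cumulative logit model
    with linear predictor eta and ordered cutpoints."""
    c = np.concatenate(([-np.inf], np.asarray(cutpoints, float), [np.inf]))
    cdf = 1.0 / (1.0 + np.exp(-(c - eta)))   # P(Y <= k) at each cutpoint
    return np.diff(cdf)                       # successive differences give P(Y = k)
```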
       
  • Estimating dynamic treatment regimes for ordinal outcomes with household interference: Application in household smoking cessation


      Authors: Cong Jiang, Mary Thompson, Michael Wallace
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      The focus of precision medicine is on decision support, often in the form of dynamic treatment regimes, which are sequences of decision rules. At each decision point, the decision rules determine the next treatment according to the patient’s baseline characteristics, the information on treatments and responses accrued by that point, and the patient’s current health status, including symptom severity and other measures. However, dynamic treatment regime estimation with ordinal outcomes is rarely studied, and rarer still in the context of interference, where one patient’s treatment may affect another’s outcome. In this paper, we introduce the weighted proportional odds model: a regression-based, approximately doubly robust approach to single-stage dynamic treatment regime estimation for ordinal outcomes. The method also accounts for possible interference between individuals sharing a household through covariate balancing weights derived from joint propensity scores. Examining different types of balancing weights, we verify the approximate double robustness of the weighted proportional odds model with our adjusted weights via simulation studies. We further extend the weighted proportional odds model to multi-stage dynamic treatment regime estimation with household interference, yielding the dynamic weighted proportional odds model. Lastly, we demonstrate the proposed methodology in an analysis of longitudinal survey data from the Population Assessment of Tobacco and Health study, which motivates this work. Accounting for interference, we provide optimal treatment strategies for households to achieve smoking cessation of the pair in the household.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-04-16T07:53:42Z
      DOI: 10.1177/09622802241242313
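The balancing weights described here come from a joint propensity for both household members' treatments. A toy version, assuming (purely for illustration; the paper's joint propensity model may differ) that the two members' treatments are conditionally independent given covariates:

```python
import numpy as np

def joint_iptw(a1, a2, p1, p2):
    """Inverse joint-propensity weight for a two-member household.

    a1, a2 : 0/1 treatment indicators for the two members
    p1, p2 : each member's propensity P(A=1 | X)

    Assumes conditional independence of the members' treatments given X,
    so the joint propensity factorizes -- a simplifying assumption made
    only for this sketch.
    """
    a1, a2, p1, p2 = (np.asarray(v, float) for v in (a1, a2, p1, p2))
    pr1 = np.where(a1 == 1, p1, 1 - p1)
    pr2 = np.where(a2 == 1, p2, 1 - p2)
    return 1.0 / (pr1 * pr2)
```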
       
  • The “Why” behind including “Y” in your imputation model


      Authors: Lucy D’Agostino McGowan, Sarah C Lotspeich, Staci A Hepler
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Missing data are a common challenge when analyzing epidemiological data, and imputation is often used to address this issue. Here, we investigate the scenario where a covariate used in an analysis has missingness and will be imputed. There are recommendations to include the outcome from the analysis model in the imputation model for missing covariates, but it is not necessarily clear whether this recommendation always holds and why it is sometimes true. We examine deterministic imputation (i.e. single imputation with fixed values) and stochastic imputation (i.e. single or multiple imputation with random values) methods and their implications for estimating the relationship between the imputed covariate and the outcome. We mathematically demonstrate that including the outcome variable in imputation models is not just a recommendation but a requirement for achieving unbiased results when using stochastic imputation methods. Moreover, we dispel common misconceptions about deterministic imputation models and demonstrate why the outcome should not be included in these models. This article aims to bridge the gap between imputation in theory and in practice, providing mathematical derivations to explain common statistical recommendations. We offer a better understanding of the considerations involved in imputing missing covariates and emphasize when it is necessary to include the outcome variable in the imputation model.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-04-16T04:34:01Z
      DOI: 10.1177/09622802241244608
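The core claim, that stochastic imputation of a covariate must condition on the outcome to avoid bias, can be seen in a tiny simulation. This is my own illustrative example under a simple bivariate-normal data-generating model, not the paper's derivation: imputing missing X from its marginal attenuates the slope of Y on X, while imputing from X | Y does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)
y = x + rng.normal(size=n)        # true slope of Y on X is 1
miss = rng.random(n) < 0.5        # X missing completely at random

def slope(xi, yi):
    """OLS slope of yi on xi."""
    return np.cov(xi, yi)[0, 1] / np.var(xi, ddof=1)

# Stochastic imputation WITHOUT the outcome: draw X from its marginal.
x_no_y = x.copy()
x_no_y[miss] = rng.normal(size=miss.sum())

# Stochastic imputation WITH the outcome: draw X from X | Y, which is
# N(y/2, 1/2) under this particular data-generating model.
x_with_y = x.copy()
x_with_y[miss] = y[miss] / 2 + rng.normal(scale=np.sqrt(0.5), size=miss.sum())

b_no_y = slope(x_no_y, y)      # attenuated: imputed X carries no signal about Y
b_with_y = slope(x_with_y, y)  # approximately unbiased for the true slope of 1
```

With half the covariate values replaced by outcome-free draws, the covariance with Y is halved while the variance of X is unchanged, so the slope shrinks toward 0.5.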
       
  • Non-stationary Bayesian spatial model for disease mapping based on sub-regions


      Authors: Esmail Abdul-Fattah, Elias Krainski, Janet Van Niekerk, Håvard Rue
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      This paper extends the Besag model, a widely used Bayesian spatial model in disease mapping, to a non-stationary spatial model for irregular lattice-type data. The goal is to improve the model’s ability to capture complex spatial dependence patterns and to increase interpretability. The proposed model uses multiple precision parameters, accounting for different intensities of spatial dependence in different sub-regions. We derive a joint penalized complexity prior for the flexible local precision parameters to prevent overfitting and to ensure contraction to the stationary model at a user-defined rate. The proposed methodology can serve as a basis for developing various other non-stationary effects over other domains such as time. An accompanying R package, fbesag, equips the reader with the necessary tools for immediate use and application. We illustrate the novelty of the proposal by modeling the risk of dengue in Brazil, where the stationary spatial assumption fails and interesting risk profiles are estimated when accounting for spatial non-stationarity. Additionally, we model different causes of death in Brazil, using the new model to investigate the spatial stationarity of these causes.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-04-10T05:48:32Z
      DOI: 10.1177/09622802241244613
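The stationary Besag (intrinsic CAR) model has a precision matrix built from the neighbourhood graph; a non-stationary variant scales the entries by local precisions. The construction below is a toy version with one precision per region (the paper instead uses sub-region precisions with a joint penalized complexity prior), shown only to make the structure concrete.

```python
import numpy as np

def besag_precision(adj, tau):
    """Precision matrix of a Besag-type intrinsic CAR model.

    adj : adjacency list; adj[i] lists the neighbours of region i
    tau : per-region precision multipliers (all ones recovers the
          standard stationary Besag structure graph Laplacian)
    """
    n = len(adj)
    Q = np.zeros((n, n))
    for i, nbrs in enumerate(adj):
        for j in nbrs:
            t = np.sqrt(tau[i] * tau[j])  # symmetrized local precision
            Q[i, j] -= t                  # off-diagonal: negative coupling
            Q[i, i] += t                  # diagonal accumulates neighbour terms
    return Q
```

With all tau equal to 1 this reduces to the graph Laplacian of the lattice, the usual (improper) Besag precision with zero row sums.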
       
  • Methods for non-proportional hazards in clinical trials: A systematic review


      Authors: Maximilian Bardo, Cynthia Huber, Norbert Benda, Jonas Brugger, Tobias Fellinger, Vaidotas Galaune, Judith Heinz, Harald Heinzl, Andrew C Hooker, Florian Klinglmüller, Franz König, Tim Mathes, Martina Mittlböck, Martin Posch, Robin Ristl, Tim Friede
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      For the analysis of time-to-event data, frequently used methods such as the log-rank test or the Cox proportional hazards model rely on the proportional hazards assumption, which is often debatable. Although a wide range of parametric and non-parametric methods for non-proportional hazards has been proposed, there is no consensus on the best approaches. To close this gap, we conducted a systematic literature search to identify statistical methods and software appropriate under non-proportional hazards. Our literature search identified 907 abstracts, of which we included 211 articles, mostly methodological ones; review articles and applications were identified less frequently. The articles discuss effect measures, effect estimation and regression approaches, hypothesis tests, and sample size calculation approaches, which are often tailored to specific non-proportional hazards situations. Using a unified notation, we provide an overview of the available methods. Furthermore, we derive some guidance from the identified articles.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-04-09T02:58:48Z
      DOI: 10.1177/09622802241242325
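One effect measure frequently recommended under non-proportional hazards is the restricted mean survival time (RMST), the area under the Kaplan-Meier curve up to a horizon tau. A generic sketch (standard construction, not this review's notation; assumes no ties between event and censoring handling beyond the usual per-observation KM step):

```python
import numpy as np

def rmst(time, event, tau):
    """Restricted mean survival time: area under the Kaplan-Meier
    estimate of S(t) from 0 to tau.

    time  : follow-up times
    event : True if the event was observed, False if censored
    tau   : truncation horizon
    """
    t = np.asarray(time, float)
    d = np.asarray(event, bool)
    order = np.argsort(t, kind="stable")
    t, d = t[order], d[order]
    n = len(t)
    s, prev, area = 1.0, 0.0, 0.0
    for i in range(n):
        if t[i] > tau:
            break
        area += s * (t[i] - prev)      # rectangle under the current step
        prev = t[i]
        if d[i]:
            s *= 1.0 - 1.0 / (n - i)   # KM multiplier at an event time
    area += s * (tau - prev)           # final rectangle out to tau
    return area
```

Because RMST compares areas rather than hazards, its interpretation does not depend on the proportional hazards assumption.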
       
  • Variable selection for latent class analysis in the presence of missing data with application to record linkage


      Authors: Huiping Xu, Xiaochun Li, Zuoyi Zhang, Shaun Grannis
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      The Fellegi-Sunter model is a latent class model widely used in probabilistic record linkage to identify records that belong to the same entity. Record linkage practitioners typically employ all available matching fields in the model, on the premise that more fields convey greater information about the true match status and hence yield improved match performance. In the context of model-based clustering, it is well known that this premise is incorrect and that the inclusion of noisy variables can compromise the clustering. Variable selection procedures have therefore been developed to remove noisy variables. Although these procedures have the potential to improve record matching, they cannot be applied directly because of the ubiquity of missing data in record linkage applications. In this paper, we modify the stepwise variable selection procedure proposed by Fop, Smart, and Murphy and extend it to account for the missing data common in record linkage. Through simulation studies, our proposed method is shown to select the correct set of matching fields across various settings, leading to better-performing algorithms. The improved match performance is also seen in a real-world application. We therefore recommend the use of our proposed selection procedure to identify informative matching fields for probabilistic record linkage algorithms.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-04-09T02:57:49Z
      DOI: 10.1177/09622802241242317
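For readers unfamiliar with the Fellegi-Sunter model, its composite match score is a sum of per-field log-likelihood ratios, where each field contributes log2(m/u) on agreement and log2((1-m)/(1-u)) on disagreement (m and u being the agreement probabilities among true matches and true non-matches). A minimal sketch of that scoring rule, not the paper's selection procedure:

```python
import numpy as np

def fellegi_sunter_score(agree, m, u):
    """Composite Fellegi-Sunter match weight for one record pair.

    agree : per-field booleans, True where the two records agree
    m, u  : per-field agreement probabilities among matches (m) and
            non-matches (u)
    """
    agree = np.asarray(agree, bool)
    m = np.asarray(m, float)
    u = np.asarray(u, float)
    # Agreement adds evidence for a match; disagreement subtracts it.
    return float(np.sum(np.where(agree,
                                 np.log2(m / u),
                                 np.log2((1 - m) / (1 - u)))))
```

A noisy field with m close to u contributes a weight near zero on agreement but still injects variance, which is the intuition behind removing such fields via variable selection.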
       
  • A Bayesian hierarchical model for the analysis of visual analogue scaling tasks


      Authors: Eldon Sorensen, Jacob Oleson, Ethan Kutlu, Bob McMurray
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      In psychophysics and psychometrics, a central method is charting how a person’s response pattern changes along a continuum of stimuli. For instance, in hearing science, Visual Analog Scaling tasks are experiments in which listeners hear sounds across a speech continuum and give a numeric rating between 0 and 100 conveying whether the sound they heard was more like word “a” or more like word “b” (i.e. each participant gives a continuous categorization response). By taking all the continuous categorization responses across the speech continuum, a parametric curve model can be fit to the data and used to analyze any individual’s response pattern for each speech continuum. Standard statistical modeling techniques cannot accommodate all of the specific requirements needed to analyze these data. Thus, Bayesian hierarchical modeling techniques are employed to accommodate group-level non-linear curves, individual-specific non-linear curves, continuum-level random effects, and a subject-specific variance that is predicted by other model parameters. In this paper, a Bayesian hierarchical model is constructed to model data from a Visual Analog Scaling task study of monolingual and bilingual participants. Any non-linear curve function could be used; we demonstrate the technique using the 4-parameter logistic function. Overall, the model fit the study data particularly well, and results suggested that the magnitude of the slope was what most distinguished response patterns between continua.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-04-04T04:14:07Z
      DOI: 10.1177/09622802241242319
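The 4-parameter logistic curve named in the abstract has a standard closed form; the parameter names below are generic, not necessarily the paper's notation.

```python
import numpy as np

def logistic4(x, lower, upper, slope, midpoint):
    """Four-parameter logistic curve.

    Ratings move from `lower` to `upper` across the continuum x, with
    `slope` controlling how sharply responses switch near `midpoint`
    (where the curve passes through the halfway rating).
    """
    return lower + (upper - lower) / (1.0 + np.exp(-slope * (x - midpoint)))
```

For a 0-100 Visual Analog Scaling rating, `lower` and `upper` are the asymptotic ratings at the continuum ends, and a larger `slope` magnitude corresponds to more categorical (step-like) responding, the quantity the study found most distinguished response patterns.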
       
  • A Bayesian quasi-likelihood design for identifying the minimum effective dose and maximum utility dose in dose-ranging studies


      Authors: Feng Tian, Ruitao Lin, Li Wang, Ying Yuan
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      Most existing dose-ranging study designs focus on assessing the dose–efficacy relationship and identifying the minimum effective dose. There is increasing interest in optimizing the dose based on the benefit–risk tradeoff. We propose a Bayesian quasi-likelihood dose-ranging design that jointly considers safety and efficacy to simultaneously identify the minimum effective dose and the maximum utility dose, the latter optimizing the benefit–risk tradeoff. The binary toxicity endpoint is modeled using a beta-binomial model. The efficacy endpoint is modeled using the quasi-likelihood approach to accommodate various types of data (e.g. binary, ordinal or continuous) without imposing any parametric assumptions on the dose–response curve. Our design uses a utility function as a measure of the benefit–risk tradeoff and adaptively assigns patients to doses based on each dose’s likelihood of being the minimum effective dose or the maximum utility dose. The design takes a group-sequential approach: at each interim, doses deemed overly toxic or futile are dropped. At the end of the trial, we use posterior probability criteria to assess the strength of the dose–response relationship for establishing proof-of-concept. If proof-of-concept is established, we identify the minimum effective dose and the maximum utility dose. Our simulation study shows that, compared with some existing designs, the Bayesian quasi-likelihood dose-ranging design is robust and yields competitive performance in establishing proof-of-concept and selecting the minimum effective dose, while including an additional feature for maximum utility dose selection.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-04-04T04:13:47Z
      DOI: 10.1177/09622802241239268
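Two ingredients of such designs can be sketched generically: a benefit-risk utility combining efficacy and toxicity probabilities, and the posterior probability that a dose is overly toxic under a beta-binomial model. The utility form and the 0.6 toxicity weight below are illustrative choices of mine, not the paper's specification.

```python
import numpy as np

def dose_utility(p_eff, p_tox, w_tox=0.6):
    """Toy linear benefit-risk utility: efficacy probability minus a
    penalized toxicity probability (weight 0.6 chosen for illustration)."""
    return p_eff - w_tox * p_tox

def prob_overly_toxic(n_tox, n, limit, a=1.0, b=1.0, ndraws=50_000, seed=1):
    """Posterior Pr(toxicity rate > limit) under a Beta(a, b) prior and a
    binomial likelihood (beta-binomial model), estimated by Monte Carlo.
    Doses with a high value would be dropped at an interim analysis."""
    rng = np.random.default_rng(seed)
    draws = rng.beta(a + n_tox, b + n - n_tox, size=ndraws)
    return float(np.mean(draws > limit))
```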
       
  • Isotonic design for single-arm biomarker stratified trials


      Authors: Lang Li, Anastasia Ivanova
      Abstract: Statistical Methods in Medical Research, Ahead of Print.
      In single-arm trials with a predefined subgroup based on baseline biomarkers, it is often assumed that the biomarker-defined subgroup, the biomarker-positive subgroup, has the same or a higher response to treatment than its complement, the biomarker-negative subgroup. The goal is to determine whether the treatment is effective in both subgroups, effective in the biomarker-positive subgroup only, or not effective at all. We propose the isotonic stratified design for this problem. The design has a joint set of decision rules for biomarker-positive and biomarker-negative subjects and jointly estimates the response probabilities using the assumed monotonicity of response between the biomarker-negative and biomarker-positive subgroups. The new design reduces the sample size requirement compared with running two Simon's designs, one in each subgroup. For example, the new design requires 23%–35% fewer patients than two Simon's designs in the scenarios we considered. Alternatively, the new design allows evaluating the response probability in both the biomarker-negative and biomarker-positive subgroups using only 40% more patients than are needed to run Simon's design in the biomarker-positive subgroup alone.
      Citation: Statistical Methods in Medical Research
      PubDate: 2024-04-04T04:13:28Z
      DOI: 10.1177/09622802241238978
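The monotonicity-respecting joint estimation can be illustrated with the two-group special case of the pool-adjacent-violators algorithm: if the raw response rates violate the assumed ordering (positive at least as high as negative), the two subgroups are pooled. A minimal sketch of that idea, not the design's full decision rules:

```python
def isotonic_pair(x_neg, n_neg, x_pos, n_pos):
    """Monotone estimate of the two response probabilities, assuming the
    biomarker-positive rate is no lower than the biomarker-negative rate.

    x_neg/n_neg : responders / total in the biomarker-negative subgroup
    x_pos/n_pos : responders / total in the biomarker-positive subgroup
    """
    p_neg = x_neg / n_neg
    p_pos = x_pos / n_pos
    if p_neg <= p_pos:
        # Raw rates already satisfy the ordering: keep them.
        return p_neg, p_pos
    # Ordering violated: pool-adjacent-violators collapses both groups
    # to the common pooled rate.
    pooled = (x_neg + x_pos) / (n_neg + n_pos)
    return pooled, pooled
```

Borrowing strength across the subgroups in this way is what lets the design meet its error criteria with fewer patients than two separate Simon's designs.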
       
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
 



JournalTOCs © 2009-