PROBABILITIES AND MATH STATISTICS (113 journals)                     

Showing 1 - 85 of 85 Journals sorted alphabetically
Advances in Statistics     Open Access   (Followers: 10)
Afrika Statistika     Open Access   (Followers: 1)
American Journal of Applied Mathematics and Statistics     Open Access   (Followers: 13)
American Journal of Mathematics and Statistics     Open Access   (Followers: 9)
Annals of Data Science     Hybrid Journal   (Followers: 15)
Applied Medical Informatics     Open Access   (Followers: 12)
Asian Journal of Mathematics & Statistics     Open Access   (Followers: 7)
Asian Journal of Probability and Statistics     Open Access  
Austrian Journal of Statistics     Open Access   (Followers: 4)
Biostatistics & Epidemiology     Hybrid Journal   (Followers: 6)
Calcutta Statistical Association Bulletin     Hybrid Journal  
Communications in Mathematics and Statistics     Hybrid Journal   (Followers: 3)
Communications in Statistics - Simulation and Computation     Hybrid Journal   (Followers: 9)
Communications in Statistics: Case Studies, Data Analysis and Applications     Hybrid Journal  
Comunicaciones en Estadística     Open Access  
Econometrics and Statistics     Hybrid Journal   (Followers: 2)
Forecasting     Open Access   (Followers: 1)
Foundations and Trends® in Optimization     Full-text available via subscription   (Followers: 2)
Geoinformatics & Geostatistics     Hybrid Journal   (Followers: 10)
Geomatics, Natural Hazards and Risk     Open Access   (Followers: 14)
Indonesian Journal of Applied Statistics     Open Access  
International Game Theory Review     Hybrid Journal  
International Journal of Advanced Statistics and IT&C for Economics and Life Sciences     Open Access  
International Journal of Advanced Statistics and Probability     Open Access   (Followers: 7)
International Journal of Applied Mathematics and Statistics     Full-text available via subscription   (Followers: 4)
International Journal of Ecological Economics and Statistics     Full-text available via subscription   (Followers: 4)
International Journal of Game Theory     Hybrid Journal   (Followers: 3)
International Journal of Mathematics and Statistics     Full-text available via subscription   (Followers: 2)
International Journal of Multivariate Data Analysis     Hybrid Journal  
International Journal of Probability and Statistics     Open Access   (Followers: 3)
International Journal of Statistics & Economics     Full-text available via subscription   (Followers: 6)
International Journal of Statistics and Applications     Open Access   (Followers: 2)
International Journal of Statistics and Probability     Open Access   (Followers: 3)
International Journal of Statistics in Medical Research     Hybrid Journal   (Followers: 2)
International Journal of Testing     Hybrid Journal   (Followers: 1)
Iraqi Journal of Statistical Sciences     Open Access  
Japanese Journal of Statistics and Data Science     Hybrid Journal  
Journal of Biometrics & Biostatistics     Open Access   (Followers: 4)
Journal of Cost Analysis and Parametrics     Hybrid Journal   (Followers: 5)
Journal of Environmental Statistics     Open Access   (Followers: 4)
Journal of Game Theory     Open Access   (Followers: 1)
Journal of Mathematical Economics and Finance     Full-text available via subscription  
Journal of Mathematics and Statistics Studies     Open Access  
Journal of Modern Applied Statistical Methods     Open Access   (Followers: 1)
Journal of Official Statistics     Open Access   (Followers: 2)
Journal of Quantitative Economics     Hybrid Journal  
Journal of Social and Economic Statistics     Open Access   (Followers: 4)
Journal of Statistical Theory and Practice     Hybrid Journal   (Followers: 2)
Journal of Statistics and Data Science Education     Open Access   (Followers: 2)
Journal of Survey Statistics and Methodology     Hybrid Journal   (Followers: 6)
Journal of the Indian Society for Probability and Statistics     Full-text available via subscription  
Jurnal Biometrika dan Kependudukan     Open Access   (Followers: 1)
Lietuvos Statistikos Darbai     Open Access   (Followers: 1)
Mathematics and Statistics     Open Access   (Followers: 2)
Methods, Data, Analyses     Open Access   (Followers: 1)
METRON     Hybrid Journal   (Followers: 2)
Nepalese Journal of Statistics     Open Access   (Followers: 1)
North American Actuarial Journal     Hybrid Journal   (Followers: 2)
Open Journal of Statistics     Open Access   (Followers: 3)
Open Mathematics, Statistics and Probability Journal     Open Access  
Pakistan Journal of Statistics and Operation Research     Open Access   (Followers: 1)
Physica A: Statistical Mechanics and its Applications     Hybrid Journal   (Followers: 7)
Probability, Uncertainty and Quantitative Risk     Open Access   (Followers: 2)
Research & Reviews : Journal of Statistics     Open Access   (Followers: 4)
Revista Brasileira de Biometria     Open Access  
Revista Colombiana de Estadística     Open Access  
RMS : Research in Mathematics & Statistics     Open Access   (Followers: 1)
Sankhya B - Applied and Interdisciplinary Statistics     Hybrid Journal  
SIAM Journal on Mathematics of Data Science     Hybrid Journal   (Followers: 6)
SIAM/ASA Journal on Uncertainty Quantification     Hybrid Journal   (Followers: 3)
Spatial Statistics     Hybrid Journal   (Followers: 2)
Stat     Hybrid Journal   (Followers: 1)
Stata Journal     Full-text available via subscription   (Followers: 10)
Statistica     Open Access   (Followers: 6)
Statistical Analysis and Data Mining     Hybrid Journal   (Followers: 23)
Statistical Theory and Related Fields     Hybrid Journal  
Statistics and Public Policy     Open Access   (Followers: 3)
Statistics in Transition New Series : An International Journal of the Polish Statistical Association     Open Access  
Statistics Research Letters     Open Access   (Followers: 1)
Statistics, Optimization & Information Computing     Open Access   (Followers: 5)
Stats     Open Access  
Theory of Probability and its Applications     Hybrid Journal   (Followers: 2)
Theory of Probability and Mathematical Statistics     Full-text available via subscription   (Followers: 2)
Turkish Journal of Forecasting     Open Access   (Followers: 1)
Zeitschrift für die gesamte Versicherungswissenschaft     Hybrid Journal  

           

Stats
Number of Followers: 0
This is an Open Access journal
ISSN (Online): 2571-905X
Published by MDPI  [258 journals]
  • Stats, Vol. 7, Pages 576-591: A Comparison of Limited Information
           Estimation Methods for the Two-Parameter Normal-Ogive Model with Locally
           Dependent Items

    • Authors: Alexander Robitzsch
      First page: 576
      Abstract: The two-parameter normal-ogive (2PNO) model is one of the most popular item response theory (IRT) models for analyzing dichotomous items. Consistent parameter estimation of the 2PNO model using marginal maximum likelihood estimation relies on the local independence assumption. However, the assumption of local independence might be violated in practice. Likelihood-based estimation of the local dependence structure is often computationally demanding. Moreover, many IRT models that model local dependence do not have a marginal interpretation of item parameters. In this article, limited information estimation methods are reviewed that allow the convenient and straightforward handling of local dependence in estimating the 2PNO model. In detail, pairwise likelihood, weighted least squares, and normal-ogive harmonic analysis robust method (NOHARM) estimation are compared with marginal maximum likelihood estimation that ignores local dependence. A simulation study revealed that item parameters can be consistently estimated with limited information methods. At the same time, marginal maximum likelihood estimation resulted in biased item parameter estimates in the presence of local dependence. From a practical perspective, there were only minor differences regarding the statistical quality of item parameter estimates of the different estimation methods. Differences between the estimation methods are also compared for two empirical datasets.
      Citation: Stats
      PubDate: 2024-06-21
      DOI: 10.3390/stats7030035
      Issue No: Vol. 7, No. 3 (2024)
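      Code sketch: a minimal illustration of the 2PNO item response function under one common parameterization, P(X = 1 | theta) = Phi(a*theta - b); the values of a and b are hypothetical and the code is not taken from the article.
        import numpy as np
        from scipy.stats import norm

        def p_correct(theta, a, b):
            # 2PNO item response probability: Phi(a * theta - b)
            return norm.cdf(a * theta - b)

        theta = np.linspace(-3, 3, 7)          # grid of latent ability values
        print(p_correct(theta, a=1.2, b=0.5))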
       
  • Stats, Vol. 7, Pages 592-612: Estimation of Standard Error, Linking Error,
           and Total Error for Robust and Nonrobust Linking Methods in the
           Two-Parameter Logistic Model

    • Authors: Alexander Robitzsch
      First page: 592
      Abstract: The two-parameter logistic (2PL) item response theory model is a statistical model for analyzing multivariate binary data. In this article, two groups are brought onto a common metric under the 2PL model using linking methods. The linking methods of mean–mean linking, mean–geometric–mean linking, and Haebara linking are investigated in nonrobust and robust specifications in the presence of differential item functioning (DIF). M-estimation theory is applied to derive linking errors for the studied linking methods. However, estimated linking errors are prone to sampling error in estimated item parameters, thus resulting in artificially increased linking error estimates in finite samples. For this reason, a bias-corrected linking error estimate is proposed. The usefulness of the modified linking error estimate is demonstrated in a simulation study. It is shown that a simultaneous assessment of the standard error and linking error in a total error must be conducted to obtain valid statistical inference. In the computation of the total error, using the bias-corrected linking error estimate instead of the usually employed linking error provides more accurate coverage rates.
      Citation: Stats
      PubDate: 2024-06-21
      DOI: 10.3390/stats7030036
      Issue No: Vol. 7, No. 3 (2024)
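      Code sketch: one conventional way to combine a standard error and a linking error into a total error (root sum of squares); this is an illustrative assumption, and the article's bias-corrected linking error estimate is not reproduced here.
        import math

        def total_error(standard_error, linking_error):
            # combine sampling and linking uncertainty into a single total error
            return math.sqrt(standard_error ** 2 + linking_error ** 2)

        print(total_error(0.05, 0.03))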
       
  • Stats, Vol. 7, Pages 613-626: Investigating Risk Factors for Racial
           Disparity in E-Cigarette Use with PATH Study

    • Authors: Amy Liu, Kennedy Dorsey, Almetra Granger, Ty-Runet Bryant, Tung-Sung Tseng, Michael Celestin, Qingzhao Yu
      First page: 613
      Abstract: Background: Previous research has identified differences in e-cigarette use and socioeconomic factors between different racial groups. However, there is little research examining specific risk factors contributing to the racial differences. Objective: This study sought to identify racial disparities in e-cigarette use and to determine risk factors that help explain these differences. Methods: We used Wave 5 (2018–2019) of the Adult Population Assessment of Tobacco and Health (PATH) Study. First, we conducted descriptive statistics of e-smoking across our risk factor variables. Next, we used multiple logistic regression to check the risk effects by adjusting for all covariates. Finally, we conducted a mediation analysis to determine whether identified factors showed evidence of influencing the association between race and e-cigarette use. All analyses were performed in R or SAS. The R package mma was used for the mediation analysis. Results: Between Hispanic and non-Hispanic White populations, our potential risk factors collectively explain 17.5% of the racial difference; former cigarette smoking explains 7.6%, receiving e-cigarette advertising 2.6%, and perception of e-cigarette harm 27.8% of the racial difference. Between non-Hispanic Black and non-Hispanic White populations, former cigarette smoking, receiving e-cigarette advertising, and perception of e-cigarette harm explain 5.2%, 1.8%, and 6.8% of the racial difference, respectively. E-cigarette use is most prevalent in the non-Hispanic White population compared to non-Hispanic Black and Hispanic populations, which may be explained by former cigarette smoking, exposure to e-cigarette advertising, and e-cigarette harm perception. Conclusions: These findings suggest that racial differences in e-cigarette use may be reduced by increasing knowledge of the dangers associated with e-cigarette use and reducing exposure to e-cigarette advertisements. This comprehensive analysis of risk factors can be used to significantly guide smoking cessation efforts and address potential health burden disparities arising from differences in e-cigarette usage.
      Citation: Stats
      PubDate: 2024-06-21
      DOI: 10.3390/stats7030037
      Issue No: Vol. 7, No. 3 (2024)
       
  • Stats, Vol. 7, Pages 627-646: Impact of Brexit on STOXX Europe 600
           Constituents: A Complex Network Analysis

    • Authors: Anna Maria D’Arcangelis, Arianna Pierdomenico, Giulia Rotundo
      First page: 627
      Abstract: Political events play a significant role in exerting their influence on financial markets globally. This paper aims to investigate the long-term effect of Brexit on European stock markets using Complex Network methods as a starting point. The media has heavily emphasized the connection between this major political event and its economic and financial impact. To analyse this, we created two samples of companies based on the geographical allocation of their revenues to the UK. The first sample consists of companies that are either British or financially linked to the United Kingdom. The second sample serves as a control group and includes other European companies that are conveniently matched in terms of economic sector and firm size to those in the first sample. Each analysis is repeated over three non-overlapping periods: before the 2016 Referendum, between the Referendum and the 2019 General Elections, and after the 2019 General Elections. After an event study aimed at verifying the short-term response of idiosyncratic daily returns to the referendum result, we analysed the topological evolution of the networks through the Minimum Spanning Trees (MSTs) of the various samples. Finally, after computing the centrality measures pertaining to each network, we examined the persistence of the levels of degree and eigenvector centrality over time. Our aim was to investigate whether the events that drove the evolution of the MST had also brought about structural modifications to the centrality of the most connected companies within the network. The findings demonstrate the unexpected impact of the referendum outcome, which is more noticeable on European equities compared to those of the UK, and the lack of influence from the elections that marked the beginning of the hard Brexit phase in 2019. The modifications in the MST indicate a restructuring of the network of British companies, particularly evident in the third period with a repositioning of the UK nodes. The dynamics of the MSTs around the referendum date are associated with persistence in the relative rank of the centrality measures (relative to the median). Conversely, the arrival of hard Brexit does alter the relative ranking of the nodes according to degree centrality, whereas the ranking according to eigenvector centrality remains persistent. However, such movements are not statistically significant. An analysis of this kind offers relevant insights for investors, as it equips them with a comprehensive view of political events, while also assisting policymakers in their endeavour to uphold stability by closely monitoring the ever-changing influence and interconnectedness of global stock markets during similar political events.
      Citation: Stats
      PubDate: 2024-06-27
      DOI: 10.3390/stats7030038
      Issue No: Vol. 7, No. 3 (2024)
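      Code sketch: the common correlation-to-MST pipeline (Mantegna-style distance d = sqrt(2(1 - rho)) followed by a minimum spanning tree and degree centrality), shown on made-up return data as a rough stand-in for the analysis described above.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        returns = rng.normal(size=(250, 6))        # hypothetical daily returns for 6 stocks
        corr = np.corrcoef(returns, rowvar=False)
        dist = np.sqrt(2.0 * (1.0 - corr))         # correlation distance

        G = nx.Graph()
        n = dist.shape[0]
        for i in range(n):
            for j in range(i + 1, n):
                G.add_edge(i, j, weight=dist[i, j])

        mst = nx.minimum_spanning_tree(G)          # uses the "weight" attribute by default
        print(nx.degree_centrality(mst))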
       
  • Stats, Vol. 7, Pages 647-670: Hierarchical Time Series Forecasting of Fire
           Spots in Brazil: A Comprehensive Approach

    • Authors: Ana Caroline Pinheiro, Paulo Canas Rodrigues
      First page: 647
      Abstract: This study compares reconciliation techniques and base forecast methods to forecast a hierarchical time series of the number of fire spots in Brazil between 2011 and 2022. A three-level hierarchical time series was considered, comprising fire spots in Brazil, disaggregated by biome, and further disaggregated by the municipality. The autoregressive integrated moving average (ARIMA), the exponential smoothing (ETS), and the Prophet models were tested for baseline forecasts, and nine reconciliation approaches, including top-down, bottom-up, middle-out, and optimal combination methods, were considered to ensure coherence in the forecasts. Due to the need for transformation to ensure positive forecasts, two data transformations were considered: the logarithm of the number of fire spots plus one and the square root of the number of fire spots plus 0.5. To assess forecast accuracy, the data were split into training data for estimating model parameters and test data for evaluating forecast accuracy. The results show that the ARIMA model with the logarithmic transformation provides overall better forecast accuracy. The BU, MinT(s), and WLS(v) yielded the best results among the reconciliation techniques.
      Citation: Stats
      PubDate: 2024-06-27
      DOI: 10.3390/stats7030039
      Issue No: Vol. 7, No. 3 (2024)
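      Code sketch: bottom-up (BU) reconciliation in its simplest form, with made-up bottom-level forecasts on the log(y + 1) scale back-transformed and summed so that the top level is coherent by construction; the MinT and WLS variants mentioned above are not shown.
        import numpy as np

        # hypothetical bottom-level forecasts of fire spots per biome, on the log(y + 1) scale
        log_forecasts = {"Amazon": 9.2, "Cerrado": 8.7, "Pantanal": 6.1}
        bottom = {k: np.expm1(v) for k, v in log_forecasts.items()}  # back-transform
        top = sum(bottom.values())                  # coherent top-level forecast
        print(round(top))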
       
  • Stats, Vol. 7, Pages 671-684: Estimator Comparison for the Prediction of
           Election Results

    • Authors: Miltiadis S. Chalikias, Georgios X. Papageorgiou, Dimitrios P. Zarogiannis
      First page: 671
      Abstract: Cluster randomized experiments and estimator comparisons are well-documented topics. In this paper, using the datasets of the popular vote in the presidential elections of the United States of America (2012, 2016, 2020), we evaluate the properties (SE, MSE) of three cluster sampling estimators: Ratio estimator, Horvitz–Thompson estimator and the linear regression estimator. While both the Ratio and Horvitz–Thompson estimators are widely used in cluster analysis, we propose a linear regression estimator defined for unequal cluster sizes, which, in many scenarios, performs better than the other two. The main objective of this paper is twofold. Firstly, to indicate which estimator is most suited for predicting the outcome of the popular vote in the United States of America. We do so by applying the single-stage cluster sampling technique to our data. In the first partition, we use the 50 states plus the District of Columbia as primary sampling units, whereas in the second one, we use 3112 counties instead. Secondly, based on the results of the aforementioned procedure, we estimate the number of clusters in a sample for a set standard error while also considering the diminishing returns from increasing the number of clusters in the sample. The linear regression estimator is best in the majority of the examined cases. This type of comparison can also be used for the estimation of any other country’s elections if prior voting results are available.
      Citation: Stats
      PubDate: 2024-07-01
      DOI: 10.3390/stats7030040
      Issue No: Vol. 7, No. 3 (2024)
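      Code sketch: the ratio estimator for a proportion under single-stage cluster sampling, computed as total votes for a candidate over total votes cast in the sampled clusters; the counts are invented for illustration.
        import numpy as np

        votes_for = np.array([1200, 530, 980, 310])      # y_i: votes for the candidate in sampled clusters
        votes_total = np.array([2500, 1100, 2100, 700])  # m_i: total votes cast in those clusters
        ratio_estimate = votes_for.sum() / votes_total.sum()
        print(ratio_estimate)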
       
  • Stats, Vol. 7, Pages 350-360: Combined Permutation Tests for Pairwise
           Comparison of Scale Parameters Using Deviances

    • Authors: Scott J. Richter, Melinda H. McCann
      First page: 350
      Abstract: Nonparametric combinations of permutation tests for pairwise comparison of scale parameters, based on deviances, are examined. Permutation tests for comparing two or more groups based on the ratio of deviances have been investigated, and a procedure based on Higgins’ RMD statistic was found to perform well, but two other tests were sometimes more powerful. Thus, combinations of these tests are investigated. A simulation study shows a combined test can be more powerful than any single test.
      Citation: Stats
      PubDate: 2024-03-28
      DOI: 10.3390/stats7020021
      Issue No: Vol. 7, No. 2 (2024)
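      Code sketch: a two-sample permutation test for scale based on deviances from the group medians, in the spirit of an RMD-type statistic; the exact published procedures and their nonparametric combination are not reproduced here.
        import numpy as np

        rng = np.random.default_rng(1)

        def deviances(v):
            return np.abs(v - np.median(v))

        def rmd_stat(dx, dy):
            # ratio of mean deviances, oriented so the statistic is always >= 1
            mx, my = dx.mean(), dy.mean()
            return max(mx, my) / min(mx, my)

        def perm_test(x, y, n_perm=2000):
            dx, dy = deviances(x), deviances(y)
            obs = rmd_stat(dx, dy)
            pooled = np.concatenate([dx, dy])
            hits = 0
            for _ in range(n_perm):
                rng.shuffle(pooled)
                if rmd_stat(pooled[:len(x)], pooled[len(x):]) >= obs:
                    hits += 1
            return hits / n_perm                     # permutation p-value

        x = rng.normal(scale=1.0, size=30)
        y = rng.normal(scale=2.0, size=30)
        print(perm_test(x, y))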
       
  • Stats, Vol. 7, Pages 361-372: Bayesian Mediation Analysis with an
           Application to Explore Racial Disparities in the Diagnostic Age of Breast
           Cancer

    • Authors: Wentao Cao, Joseph Hagan, Qingzhao Yu
      First page: 361
      Abstract: A mediation effect refers to the effect transmitted by a mediator intervening in the relationship between an exposure variable and a response variable. Mediation analysis is widely used to identify significant mediators and to make inferences on their effects. The Bayesian method allows researchers to incorporate prior information from previous knowledge into the analysis, deal with the hierarchical structure of variables, and estimate the quantities of interest from the posterior distributions. This paper proposes three Bayesian mediation analysis methods to make inferences on mediation effects. Our proposed methods are the following: (1) the function of coefficients method; (2) the product of partial difference method; and (3) the re-sampling method. We apply these three methods to explore racial disparities in the diagnostic age of breast cancer patients in Louisiana. We found that African American (AA) patients are diagnosed at an average of 4.37 years younger compared with Caucasian (CA) patients (57.40 versus 61.77, p< 0.0001). We also found that the racial disparity can be explained by patients’ insurance (12.90%), marital status (17.17%), cancer stage (3.27%), and residential environmental factors, including the percent of the population under age 18 (3.07%) and the environmental factor of intersection density (9.02%).
      Citation: Stats
      PubDate: 2024-04-19
      DOI: 10.3390/stats7020022
      Issue No: Vol. 7, No. 2 (2024)
       
  • Stats, Vol. 7, Pages 373-388: New Goodness-of-Fit Tests for the
           Kumaraswamy Distribution

    • Authors: David E. Giles
      First page: 373
      Abstract: The two-parameter distribution known as the Kumaraswamy distribution is a very flexible alternative to the beta distribution with the same (0,1) support. Originally proposed in the field of hydrology, it has subsequently received a good deal of positive attention in both the theoretical and applied statistics literatures. Interestingly, the problem of testing formally for the appropriateness of the Kumaraswamy distribution appears to have received little or no attention to date. To fill this gap, in this paper, we apply a “biased transformation” methodology to several standard goodness-of-fit tests based on the empirical distribution function. A simulation study reveals that these (modified) tests perform well in the context of the Kumaraswamy distribution, in terms of both their low size distortion and respectable power. In particular, the “biased transformation” Anderson–Darling test dominates the other tests that are considered.
      Citation: Stats
      PubDate: 2024-04-22
      DOI: 10.3390/stats7020023
      Issue No: Vol. 7, No. 2 (2024)
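      Code sketch: the Kumaraswamy CDF F(x) = 1 - (1 - x^a)^b used for a probability integral transform followed by a plain KS test against Uniform(0, 1); this only illustrates the general EDF-based goodness-of-fit idea, not the article's biased-transformation tests, and the parameters are hypothetical.
        import numpy as np
        from scipy import stats

        a, b = 2.0, 3.0                                  # hypothetical fitted shape parameters
        rng = np.random.default_rng(2)
        u = rng.uniform(size=200)
        x = (1 - (1 - u) ** (1 / b)) ** (1 / a)          # Kumaraswamy(a, b) draws via the inverse CDF

        pit = 1 - (1 - x ** a) ** b                      # probability integral transform
        print(stats.kstest(pit, "uniform"))              # ~Uniform(0, 1) if the model is correct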
       
  • Stats, Vol. 7, Pages 389-401: On Non-Occurrence of the Inspection Paradox

    • Authors: Diana Rauwolf, Udo Kamps
      First page: 389
      Abstract: The well-known inspection paradox or waiting time paradox states that, in a renewal process, the inspection interval is stochastically larger than a common interarrival time having a distribution function F, where the inspection interval is given by the particular interarrival time containing the specified time point of process inspection. The inspection paradox may also be expressed in terms of expectations, where the order is strict, in general. A renewal process can be utilized to describe the arrivals of vehicles, customers, or claims, for example. As the inspection time may also be considered a random variable T with a left-continuous distribution function G independent of the renewal process, the question arises as to whether the inspection paradox inevitably occurs in this general situation, apart from in some marginal cases with respect to F and G. For a random inspection time T, it is seen that non-trivial choices lead to non-occurrence of the paradox. In this paper, a complete characterization of the non-occurrence of the inspection paradox is given with respect to G. Several examples and related assertions are shown, including the deterministic time situation.
      Citation: Stats
      PubDate: 2024-04-24
      DOI: 10.3390/stats7020024
      Issue No: Vol. 7, No. 2 (2024)
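      Code sketch: a quick simulation of the classical inspection paradox for a Poisson renewal process, where the interarrival interval covering a fixed inspection time has mean close to 2/lambda while an ordinary interarrival time has mean 1/lambda.
        import numpy as np

        rng = np.random.default_rng(3)
        lam, t, reps = 1.0, 50.0, 20000
        covering = []
        for _ in range(reps):
            arrivals = np.cumsum(rng.exponential(1 / lam, size=200))
            k = np.searchsorted(arrivals, t)             # first renewal after the inspection time t
            prev = arrivals[k - 1] if k > 0 else 0.0
            covering.append(arrivals[k] - prev)
        print(np.mean(covering))                         # about 2.0, versus E[interarrival] = 1.0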
       
  • Stats, Vol. 7, Pages 402-433: Contrastive Learning Framework for Bitcoin
           Crash Prediction

    • Authors: Zhaoyan Liu, Min Shu, Wei Zhu
      First page: 402
      Abstract: Due to spectacular gains during periods of rapid price increase and unpredictably large drops, Bitcoin has become a popular emergent asset class over the past few years. In this paper, we are interested in predicting crashes of the Bitcoin market. To tackle this task, we propose a framework for deep learning time series classification based on contrastive learning. The proposed framework is evaluated against six machine learning (ML) and deep learning (DL) baseline models, and outperforms them by 15.8% in balanced accuracy. Thus, we conclude that the contrastive learning strategy significantly enhances the model’s ability to extract informative representations, and our proposed framework performs well in predicting Bitcoin crashes.
      Citation: Stats
      PubDate: 2024-05-08
      DOI: 10.3390/stats7020025
      Issue No: Vol. 7, No. 2 (2024)
       
  • Stats, Vol. 7, Pages 434-444: Bayesian Inference for Multiple Datasets

    • Authors: Renata Retkute, William Thurston, Christopher A. Gilligan
      First page: 434
      Abstract: Estimating parameters for multiple datasets can be time consuming, especially when the number of datasets is large. One solution is to sample from multiple datasets simultaneously using Bayesian methods such as adaptive multiple importance sampling (AMIS). Here, we use the AMIS approach to fit a von Mises distribution to multiple datasets for wind trajectories derived from a Lagrangian Particle Dispersion Model driven by 3D meteorological data. A posterior distribution of parameters can help to characterise the uncertainties in wind trajectories in a form that can be used as inputs for predictive models of wind-dispersed insect pests and the pathogens of agricultural crops for use in evaluating risk and in planning mitigation actions. The novelty of our study is in testing the performance of the method on a very large number of datasets (>11,000). Our results show that AMIS can significantly improve the efficiency of parameter inference for multiple datasets.
      Citation: Stats
      PubDate: 2024-05-10
      DOI: 10.3390/stats7020026
      Issue No: Vol. 7, No. 2 (2024)
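      Code sketch: a plain maximum likelihood fit of a von Mises distribution to a single dataset (circular mean plus an approximate kappa from the mean resultant length); this stand-in only illustrates the fitting target, not the AMIS machinery used in the article for thousands of datasets.
        import numpy as np

        def vonmises_mle(angles):
            C, S = np.cos(angles).mean(), np.sin(angles).mean()
            mu = np.arctan2(S, C)                       # circular mean direction
            R = np.hypot(C, S)                          # mean resultant length
            kappa = R * (2 - R**2) / (1 - R**2)         # common approximation for kappa
            return mu, kappa

        rng = np.random.default_rng(4)
        sample = rng.vonmises(mu=0.5, kappa=4.0, size=500)
        print(vonmises_mle(sample))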
       
  • Stats, Vol. 7, Pages 445-461: Multivariate and Matrix-Variate Logistic
           Models in the Real and Complex Domains

    • Authors: A. M. Mathai
      First page: 445
      Abstract: Several extensions of the basic scalar variable logistic density to the multivariate and matrix-variate cases, in the real and complex domains, are given where the extended forms end up in extended zeta functions. Several cases of multivariate and matrix-variate Bayesian procedures, in the real and complex domains, are also given. It is pointed out that there are a range of applications of Gaussian and Wishart-based matrix-variate distributions in the complex domain in multi-look data from radar and sonar. It is hoped that the distributions derived in this paper will be highly useful in such applications in physics, engineering, statistics and communication problems, because, in the real scalar case, a logistic model is seen to be more appropriate compared to a Gaussian model in many industrial applications. Hence, logistic-based multivariate and matrix-variate distributions, especially in the complex domain, are expected to perform better where Gaussian and Wishart-based distributions are currently used.
      Citation: Stats
      PubDate: 2024-05-11
      DOI: 10.3390/stats7020027
      Issue No: Vol. 7, No. 2 (2024)
       
  • Stats, Vol. 7, Pages 462-480: Multivariate Time Series Change-Point
           Detection with a Novel Pearson-like Scaled Bregman Divergence

    • Authors: Tong Si, Yunge Wang, Lingling Zhang, Evan Richmond, Tae-Hyuk Ahn, Haijun Gong
      First page: 462
      Abstract: Change-point detection (CPD) is a challenging problem that has a number of applications across various real-world domains. The primary objective of CPD is to identify specific time points where the underlying system undergoes transitions between different states, each characterized by its distinct data distribution. Precise identification of change points in time series omics data can provide insights into the dynamic and temporal characteristics inherent to complex biological systems. Many change-point detection methods have traditionally focused on the direct estimation of data distributions. However, these approaches become unrealistic in high-dimensional data analysis. Density ratio methods have emerged as promising approaches for change-point detection since estimating density ratios is easier than directly estimating individual densities. Nevertheless, the divergence measures used in these methods may suffer from numerical instability during computation. Additionally, the most popular α-relative Pearson divergence does not measure the dissimilarity between the two data distributions themselves, but rather a dissimilarity involving a mixture of the distributions. To overcome the limitations of existing density ratio-based methods, we propose a novel approach called the Pearson-like scaled-Bregman divergence-based (PLsBD) density ratio estimation method for change-point detection. Our theoretical studies derive an analytical expression for the Pearson-like scaled Bregman divergence using a mixture measure. We integrate the PLsBD with a kernel regression model and apply a random sampling strategy to identify change points in both synthetic data and real-world high-dimensional genomics data of Drosophila. Our PLsBD method demonstrates superior performance compared to many other change-point detection methods.
      Citation: Stats
      PubDate: 2024-05-13
      DOI: 10.3390/stats7020028
      Issue No: Vol. 7, No. 2 (2024)
       
  • Stats, Vol. 7, Pages 481-491: Testing for Level–Degree Interaction
           Effects in Two-Factor Fixed-Effects ANOVA When the Levels of Only One
           Factor Are Ordered

    • Authors: J. C. W. Rayner, G. C. Livingston
      First page: 481
      Abstract: In testing for main effects, the use of orthogonal contrasts for balanced designs with the factor levels not ordered is well known. Here, we consider two-factor fixed-effects ANOVA with the levels of one factor ordered and one not ordered. The objective is to extend the idea of decomposing the main effect to decomposing the interaction. This is achieved by defining level–degree coefficients and testing if they are zero using permutation testing. These tests give clear insights into what may be causing a significant interaction, even for the unbalanced model.
      Citation: Stats
      PubDate: 2024-05-15
      DOI: 10.3390/stats7020029
      Issue No: Vol. 7, No. 2 (2024)
       
  • Stats, Vol. 7, Pages 492-508: Residual Analysis for Poisson-Exponentiated
           Weibull Regression Models with Cure Fraction

    • Authors: Cleanderson R. Fidelis, Edwin M. M. Ortega, Gauss M. Cordeiro
      First page: 492
      Abstract: The use of cure-rate survival models has grown in recent years. Even so, proposals for assessing the goodness of fit of these models have been less frequent. However, residual analysis can be used to check the adequacy of a fitted regression model. In this context, we provide Cox–Snell residuals for Poisson-exponentiated Weibull regression with cure fraction. We developed several simulations under different scenarios for studying the distributions of these residuals. The residuals were applied to a melanoma dataset for illustrative purposes.
      Citation: Stats
      PubDate: 2024-05-20
      DOI: 10.3390/stats7020030
      Issue No: Vol. 7, No. 2 (2024)
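      Code sketch: Cox–Snell residuals r_i = -log S_hat(t_i), which behave approximately like unit-exponential values under a well-specified model; the fitted survival function below is a toy stand-in, not the Poisson-exponentiated Weibull cure-rate model.
        import numpy as np

        def cox_snell(times, survival_fn):
            # r_i = -log S_hat(t_i); approximately Exp(1) under a good fit
            return -np.log(survival_fn(times))

        S_hat = lambda t: np.exp(-(t / 2.0) ** 1.3)      # toy fitted survival curve (Weibull-like)
        t_obs = np.array([0.5, 1.2, 2.0, 3.7])
        print(cox_snell(t_obs, S_hat))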
       
  • Stats, Vol. 7, Pages 508-520: A Spatial Gaussian-Process Boosting Analysis
           of Socioeconomic Disparities in Wait-Listing of End-Stage Kidney Disease
           Patients across the United States

    • Authors: Sounak Chakraborty, Tanujit Dey, Lingwei Xiang, Joel T. Adler
      First page: 508
      Abstract: In this study, we employed a novel approach of combining Gaussian processes (GPs) with boosting techniques to model the spatial variability inherent in End-Stage Kidney Disease (ESKD) data. Our use of the Gaussian processes boosting, or GPBoost, methodology underscores the efficacy of this hybrid method in capturing intricate spatial dynamics and enhancing predictive accuracy. Specifically, our analysis demonstrates a notable improvement in out-of-sample prediction accuracy regarding the percentage of the population remaining on the wait list within geographic regions. Furthermore, our investigation unveils race and gender-based factors that significantly influence patient wait-listing. By leveraging the GPBoost approach, we identify these pertinent factors, shedding light on the complex interplay between demographic variables and access to kidney transplantation services. Our findings underscore the imperative for a multifaceted strategy aimed at reducing spatial disparities in kidney transplant wait-listing. Key components of such an approach include mitigating gender disparities, bolstering access to healthcare services, fostering greater awareness of transplantation options, and dismantling structural barriers to care. By addressing these multifactorial challenges, we can strive towards a more equitable and inclusive landscape in kidney transplantation.
      Citation: Stats
      PubDate: 2024-06-07
      DOI: 10.3390/stats7020031
      Issue No: Vol. 7, No. 2 (2024)
       
  • Stats, Vol. 7, Pages 521-536: An Optimal Design through a Compound
           Criterion for Integrating Extra Preference Information in a Choice
           Experiment: A Case Study on Moka Ground Coffee

    • Authors: Rossella Berni, Nedka Dechkova Nikiforova, Patrizia Pinelli
      First page: 521
      Abstract: In this manuscript, we propose an innovative approach to studying consumers’ preferences for coffee, which integrates a choice experiment with consumer sensory tests and chemical analyses (caffeine contents obtained through a High-Performance Liquid Chromatography (HPLC) method). The same choice experiment is administered on two consecutive occasions, i.e., before and after the guided tasting session, to analyze the role of tasting and awareness about coffee composition in the consumers’ preferences. To this end, a Bayesian optimal design, based on a compound design criterion, is applied in order to build the choice experiment; the compound criterion allows for addressing two main issues related to the efficient estimation of the attributes and the evaluation of the sensorial part, e.g., the HPLC effects and the scores obtained through the consumer sensory test. All these elements, e.g., the attributes involved in the choice experiment, the scores obtained for each coffee through the sensory tests, and the HPLC quantitative evaluation of caffeine, are analyzed through suitable Random Utility Models. The initial results are promising, confirming the validity of the proposed approach.
      Citation: Stats
      PubDate: 2024-06-08
      DOI: 10.3390/stats7020032
      Issue No: Vol. 7, No. 2 (2024)
       
  • Stats, Vol. 7, Pages 537-548: Redefining Significance: Robustness and
           Percent Fragility Indices in Biomedical Research

    • Authors: Thomas F. Heston
      First page: 537
      Abstract: The p-value has long been the standard for statistical significance in scientific research, but this binary approach often fails to consider the nuances of statistical power and the potential for large sample sizes to show statistical significance despite trivial treatment effects. Including a statistical fragility assessment can help overcome these limitations. One common fragility metric is the fragility index, which assesses statistical fragility by incrementally altering the outcome data in the intervention group until the statistical significance flips. The robustness index takes a different approach by maintaining the integrity of the underlying data distribution while examining changes in the p-value as the sample size changes. The percent fragility index is another useful alternative that is more precise than the fragility index and is more uniformly applied to both the intervention and control groups. Incorporating these fragility metrics into routine statistical procedures could address the reproducibility crisis and increase research efficacy. Using these fragility indices can be seen as a step toward a more mature phase of statistical reasoning, where significance is a multi-faceted and contextually informed judgment.
      Citation: Stats
      PubDate: 2024-06-17
      DOI: 10.3390/stats7020033
      Issue No: Vol. 7, No. 2 (2024)
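      Code sketch: a simple fragility index calculation for a hypothetical 2x2 trial table, flipping outcomes in the intervention arm until Fisher's exact test loses significance; the counts and the use of Fisher's test are illustrative assumptions.
        from scipy.stats import fisher_exact

        def fragility_index(events_tx, n_tx, events_ctrl, n_ctrl, alpha=0.05):
            # count how many outcome flips in the intervention arm remove significance
            flips, e = 0, events_tx
            while e <= n_tx:
                table = [[e, n_tx - e], [events_ctrl, n_ctrl - events_ctrl]]
                _, p = fisher_exact(table)
                if p >= alpha:
                    break
                e += 1
                flips += 1
            return flips

        print(fragility_index(events_tx=5, n_tx=100, events_ctrl=20, n_ctrl=100))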
       
  • Stats, Vol. 7, Pages 549-575: Assessing Spillover Effects of Medications
           for Opioid Use Disorder on HIV Risk Behaviors among a Network of People
           Who Inject Drugs

    • Authors: Joseph Puleo, Ashley Buchanan, Natallia Katenka, M. Elizabeth Halloran, Samuel R. Friedman, Georgios Nikolopoulos
      First page: 549
      Abstract: People who inject drugs (PWID) have an increased risk of HIV infection partly due to injection behaviors often related to opioid use. Medications for opioid use disorder (MOUD) have been shown to reduce HIV infection risk, possibly by reducing injection risk behaviors. MOUD may benefit individuals who do not receive it themselves but are connected through social, sexual, or drug use networks with individuals who are treated. This is known as spillover. Valid estimation of spillover in network studies requires considering the network’s community structure. Communities are groups of densely connected individuals with sparse connections to other groups. We analyzed a network of 277 PWID and their contacts from the Transmission Reduction Intervention Project. We assessed the effect of MOUD on reductions in injection risk behaviors and the possible benefit for network contacts of participants treated with MOUD. We identified communities using modularity-based methods and employed inverse probability weighting with community-level propensity scores to adjust for measured confounding. We found that MOUD may have beneficial spillover effects on reducing injection risk behaviors. The magnitudes of estimated effects were sensitive to the community detection method. Careful consideration should be paid to the significance of community structure in network studies evaluating spillover.
      Citation: Stats
      PubDate: 2024-06-19
      DOI: 10.3390/stats7020034
      Issue No: Vol. 7, No. 2 (2024)
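      Code sketch: modularity-based community detection with networkx on a toy graph, as a stand-in for the community step described above; the TRIP network itself is not used here.
        import networkx as nx
        from networkx.algorithms import community

        G = nx.karate_club_graph()                       # placeholder network, not the PWID data
        comms = community.greedy_modularity_communities(G)
        print([sorted(c) for c in comms])                # node sets of the detected communities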
       
  • Stats, Vol. 7, Pages 23-33: Predicting Random Walks and a Data-Splitting
           Prediction Region

    • Authors: Mulubrhan G. Haile, Lingling Zhang, David J. Olive
      First page: 23
      Abstract: Perhaps the first nonparametric, asymptotically optimal prediction intervals are provided for univariate random walks, with applications to renewal processes. Perhaps the first nonparametric prediction regions are introduced for vector-valued random walks. This paper further derives nonparametric data-splitting prediction regions, which are underpinned by very simple theory. Some of the prediction regions can be used when the data distribution does not have first moments, and some can be used for high-dimensional data, where the number of predictors is larger than the sample size. The prediction regions can make use of many estimators of multivariate location and dispersion.
      Citation: Stats
      PubDate: 2024-01-08
      DOI: 10.3390/stats7010002
      Issue No: Vol. 7, No. 1 (2024)
       
  • Stats, Vol. 7, Pages 34-53: Precise Tensor Product Smoothing via Spectral
           Splines

    • Authors: Nathaniel E. Helwig
      First page: 34
      Abstract: Tensor product smoothers are frequently used to include interaction effects in multiple nonparametric regression models. Current implementations of tensor product smoothers either require using approximate penalties, such as those typically used in generalized additive models, or costly parameterizations, such as those used in smoothing spline analysis of variance models. In this paper, I propose a computationally efficient and theoretically precise approach for tensor product smoothing. Specifically, I propose a spectral representation of a univariate smoothing spline basis, and I develop an efficient approach for building tensor product smooths from marginal spectral spline representations. The developed theory suggests that current tensor product smoothing methods could be improved by incorporating the proposed tensor product spectral smoothers. Simulation results demonstrate that the proposed approach can outperform popular tensor product smoothing implementations, which supports the theoretical results developed in the paper.
      Citation: Stats
      PubDate: 2024-01-10
      DOI: 10.3390/stats7010003
      Issue No: Vol. 7, No. 1 (2024)
       
  • Stats, Vol. 7, Pages 54-64: On the (Apparently) Paradoxical Role of Noise
           in the Recognition of Signal Character of Minor Principal Components

    • Authors: Alessandro Giuliani, Alessandro Vici
      First page: 54
      Abstract: The usual method of separating signal and noise principal components on the sole basis of their eigenvalues has evident drawbacks when semantically relevant information ‘hides’ in minor components, explaining a very small part of the total variance. This situation is common in biomedical experimentation when PCA is used for hypothesis generation: the multi-scale character of biological regulation typically generates a main mode explaining the major part of variance (size component), squashing potentially interesting (shape) components into the noise floor. These minor components would be erroneously discarded as noise by the usual selection methods. Here, we propose a computational method, modelled on the chemical concept of ‘titration’, allowing for the unsupervised recognition of the potential signal character of minor components by analyzing the presence of a negative linear relation between added noise and component invariance.
      Citation: Stats
      PubDate: 2024-01-11
      DOI: 10.3390/stats7010004
      Issue No: Vol. 7, No. 1 (2024)
       
  • Stats, Vol. 7, Pages 65-78: Directional Differences in Thematic Maps of
           Soil Chemical Attributes with Geometric Anisotropy

    • Authors: Dyogo Lesniewski Ribeiro, Tamara Cantú Maltauro, Luciana Pagliosa Carvalho Guedes, Miguel Angel Uribe-Opazo, Gustavo Henrique Dalposso
      First page: 65
      Abstract: In the study of the spatial variability of soil chemical attributes, the process is considered anisotropic when the spatial dependence structure differs in relation to the direction. Anisotropy is a characteristic that influences the accuracy of the thematic maps that represent the spatial variability of the phenomenon. Therefore, the linear anisotropic Gaussian spatial model is important for spatial data that present anisotropy, and incorporating this as an intrinsic characteristic of the process that describes the spatial dependence structure improves the accuracy of the spatial estimation of the values of a georeferenced variable in unsampled locations. This work aimed at quantifying the directional differences existing in the thematic map of georeferenced variables when incorporating or not incorporating anisotropy into the spatial dependence structure through directional spatial autocorrelation. For simulated data and soil chemical properties (carbon, calcium and potassium), the Moran directional index was calculated, considering the predicted values at unsampled locations, and taking into account estimated isotropic and anisotropic geostatistical models. The directional spatial autocorrelation was effective in evidencing the directional difference between thematic maps elaborated with estimated isotropic and anisotropic geostatistical models. This measure evidenced the existence of an elliptical format of the subregions presented by thematic maps in the direction of anisotropy that indicated a greater spatial continuity for greater distances between pairs of points.
      Citation: Stats
      PubDate: 2024-01-16
      DOI: 10.3390/stats7010005
      Issue No: Vol. 7, No. 1 (2024)
       
  • Stats, Vol. 7, Pages 79-94: Ecosystem Degradation in Romania: Exploring
           the Core Drivers

    • Authors: Alexandra-Nicoleta Ciucu-Durnoi, Camelia Delcea
      First page: 79
      Abstract: The concept of sustainable development appeared as a response to the attempt to improve the quality of human life, simultaneously with the preservation of the environment. For this reason, two of the 17 Sustainable Development Goals are dedicated to life below water (SDG14) and on land (SDG15). In the course of this research, comprehensive information on the extent of degradation in Romania’s primary ecosystems was furnished, along with an exploration of the key factors precipitating this phenomenon. This investigation delves into the perspectives of 42 counties, scrutinizing the level of degradation in forest ecosystems, grasslands, lakes and rivers. The analysis commences with a presentation of descriptive statistics pertaining to each scrutinized system, followed by an elucidation of the primary causes contributing to its degradation. Subsequently, a cluster analysis is conducted on the counties of the country. One of these causes is the presence of intense industrial activity in certain areas, so it is even more important to accelerate the transition to a green economy in order to help the environment regenerate.
      Citation: Stats
      PubDate: 2024-01-18
      DOI: 10.3390/stats7010006
      Issue No: Vol. 7, No. 1 (2024)
       
  • Stats, Vol. 7, Pages 95-109: Statistical Framework: Estimating the
           Cumulative Shares of Nobel Prizes from 1901 to 2022

    • Authors: Xu Zhang, Bruce Golden, Edward Wasil
      First page: 95
      Abstract: Studying trends in the geographical distribution of the Nobel Prize is an interesting topic that has been examined in the academic literature. To track the trends, we develop a stochastic estimate for the cumulative shares of Nobel Prizes awarded to recipients in four geographical groups: North America, Europe, Asia, Other. Specifically, we propose two models to estimate how cumulative shares change over time in the four groups. We estimate parameters, develop a prediction interval for each model, and validate our models. Finally, we apply our approach to estimate the distribution of the cumulative shares of Nobel Prizes for the four groups from 1901 to 2022.
      Citation: Stats
      PubDate: 2024-01-19
      DOI: 10.3390/stats7010007
      Issue No: Vol. 7, No. 1 (2024)
       
  • Stats, Vol. 7, Pages 110-137: Active Learning for Stacking and
           AdaBoost-Related Models

    • Authors: Qun Sui, Sujit K. Ghosh
      First page: 110
      Abstract: Ensemble learning (EL) has become an essential technique in machine learning that can significantly enhance the predictive performance of basic models, but it also comes with an increased cost of computation. The primary goal of the proposed approach is to present a general integrative framework that allows for applying active learning (AL), which makes use of only a limited budget by selecting optimal instances to achieve comparable predictive performance within the context of ensemble learning. The proposed framework is based on two distinct approaches: (i) AL is implemented following a full-scale EL, which we call ensemble learning on top of active learning (ELTAL), and (ii) AL is applied while using EL, which we call active learning during ensemble learning (ALDEL). Various algorithms for ELTAL and ALDEL are presented using Stacking and Boosting with various algorithm-specific query strategies. The proposed active learning algorithms are numerically illustrated with the Support Vector Machine (SVM) model using simulated data and two real-world applications, evaluating their accuracy when only a small number of instances is selected as compared to using the full data. Our findings demonstrate that: (i) the accuracy of a boosting or stacking model, using the same uncertainty sampling, is higher than that of the SVM model, highlighting the strength of EL; and (ii) AL can enable the stacking model to achieve comparable accuracy to the SVM model using the full dataset, with only a small fraction of carefully selected instances, illustrating the strength of active learning.
      Citation: Stats
      PubDate: 2024-01-24
      DOI: 10.3390/stats7010008
      Issue No: Vol. 7, No. 1 (2024)
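      Code sketch: uncertainty sampling with an SVM, selecting the unlabeled pool points closest to the decision boundary as the next instances to label; the data and query size are made up.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(5)
        X_lab = rng.normal(size=(40, 2))
        y_lab = (X_lab[:, 0] > 0).astype(int)            # toy binary labels
        X_pool = rng.normal(size=(500, 2))               # unlabeled pool

        clf = SVC(kernel="rbf").fit(X_lab, y_lab)
        margins = np.abs(clf.decision_function(X_pool))  # distance to the decision boundary
        query_idx = np.argsort(margins)[:10]             # 10 most uncertain instances to label next
        print(query_idx)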
       
  • Stats, Vol. 7, Pages 138-159: On Estimation of Shannon’s Entropy of
           Maxwell Distribution Based on Progressively First-Failure Censored Data

    • Authors: Kapil Kumar, Indrajeet Kumar, Hon Keung Tony Ng
      First page: 138
      Abstract: Shannon’s entropy is a fundamental concept in information theory that quantifies the uncertainty or information in a random variable or data set. This article addresses the estimation of Shannon’s entropy for the Maxwell lifetime model based on progressively first-failure-censored data from both classical and Bayesian points of view. In the classical perspective, the entropy is estimated using maximum likelihood estimation and bootstrap methods. For Bayesian estimation, two approximation techniques, including the Tierney-Kadane (T-K) approximation and the Markov Chain Monte Carlo (MCMC) method, are used to compute the Bayes estimate of Shannon’s entropy under the linear exponential (LINEX) loss function. We also obtained the highest posterior density (HPD) credible interval of Shannon’s entropy using the MCMC technique. A Monte Carlo simulation study is performed to investigate the performance of the estimation procedures and methodologies studied in this manuscript. A numerical example is used to illustrate the methodologies. This paper aims to provide practical values in applied statistics, especially in the areas of reliability and lifetime data analysis.
      Citation: Stats
      PubDate: 2024-02-08
      DOI: 10.3390/stats7010009
      Issue No: Vol. 7, No. 1 (2024)
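      Code sketch: a plug-in estimate of Shannon's entropy for a fitted Maxwell model using scipy's maxwell distribution on a complete (uncensored) toy sample; the progressive first-failure censoring and the Bayesian machinery of the article are not handled here.
        from scipy.stats import maxwell

        x = maxwell.rvs(scale=2.0, size=300, random_state=0)   # toy complete sample
        loc_hat, scale_hat = maxwell.fit(x, floc=0)            # scale MLE with loc fixed at 0
        print(maxwell(scale=scale_hat).entropy())              # plug-in Shannon entropy estimate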
       
  • Stats, Vol. 7, Pages 160-171: Sensitivity Analysis of Start Point of
           Extreme Daily Rainfall Using CRHUDA and Stochastic Models

    • Authors: Martin Muñoz-Mandujano, Alfonso Gutierrez-Lopez, Jose Alfredo Acuña-Garcia, Mauricio Arturo Ibarra-Corona, Isaac Carpintero Aguilar, José Alejandro Vargas-Diaz
      First page: 160
      Abstract: Forecasting extreme precipitation is one of the basic actions of warning systems in Latin America and the Caribbean (LAC). With thousands of economic losses and severe damage caused by floods in urban areas, hydrometeorological monitoring is a priority in most countries in the LAC region. The monitoring of convective precipitation, cold fronts, and hurricane tracks are the most demanded technological developments for early warning systems in the region. However, predicting and forecasting the onset time of extreme precipitation is a subject of life-saving scientific research. Developed in 2019, the CRHUDA (Crossing HUmidity, Dew point, and Atmospheric pressure) model provides insight into the onset of precipitation from the Clausius–Clapeyron relationship. With access to a historical database of more than 600 storms, the CRHUDA model provides a prediction with a precision of six to eight hours in advance of storm onset. However, the calibration is complex given the addition of ARMA(p,q)-type models for real-time forecasting. This paper presents the calibration of the joint CRHUDA+ARMA(p,q) model. It is concluded that CRHUDA is significantly more suitable and relevant for the forecast of precipitation and a possible future development for an early warning system (EWS).
      Citation: Stats
      PubDate: 2024-02-08
      DOI: 10.3390/stats7010010
      Issue No: Vol. 7, No. 1 (2024)
       
  • Stats, Vol. 7, Pages 172-184: Importance and Uncertainty of
           λ-Estimation for Box–Cox Transformations to Compute and
           Verify Reference Intervals in Laboratory Medicine

    • Authors: Frank Klawonn, Neele Riekeberg, Georg Hoffmann
      First page: 172
      Abstract: Reference intervals play an important role in medicine, for instance, for the interpretation of blood test results. They are defined as the central 95% values of a healthy population and are often stratified by sex and age. In recent years, so-called indirect methods for the computation and validation of reference intervals have gained importance. Indirect methods use all values from a laboratory, including the pathological cases, and try to identify the healthy sub-population in the mixture of values. This is only possible under certain model assumptions, i.e., that the majority of the values represent non-pathological values and that the non-pathological values follow a normal distribution after a suitable transformation, commonly a Box–Cox transformation, rendering the parameter λ of the Box–Cox transformation a nuisance parameter for the estimation of the reference interval. Although indirect methods put great effort into the estimation of λ, they arrive at very different estimates for λ, even though the estimated reference intervals are quite coherent. Our theoretical considerations and Monte-Carlo simulations show that overestimating λ can lead to intolerable deviations of the reference interval estimates, whereas λ = 0 usually produces acceptable estimates. For λ close to 1, its estimate has limited influence on the estimate for the reference interval, and with reasonable sample sizes, the uncertainty for the λ-estimate remains quite high.
      Citation: Stats
      PubDate: 2024-02-09
      DOI: 10.3390/stats7010011
      Issue No: Vol. 7, No. 1 (2024)
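      Code sketch: the basic indirect-method idea of Box–Cox transforming laboratory values, taking the central 95% on the transformed scale, and back-transforming; the toy data are assumed to be purely non-pathological, which real indirect methods cannot take for granted.
        import numpy as np
        from scipy import stats
        from scipy.special import inv_boxcox

        rng = np.random.default_rng(7)
        values = rng.lognormal(mean=1.0, sigma=0.3, size=5000)  # toy "healthy" lab values
        z, lam = stats.boxcox(values)                           # lambda estimated by maximum likelihood
        lo, hi = np.quantile(z, [0.025, 0.975])                 # central 95% on the transformed scale
        print(lam, inv_boxcox(lo, lam), inv_boxcox(hi, lam))    # back-transformed reference limits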
       
  • Stats, Vol. 7, Pages 185-202: Utility in Time Description in Priority
           Best–Worst Discrete Choice Models: An Empirical Evaluation Using
           Flynn’s Data

    • Authors: Sasanka Adikari, Norou Diawara
      First page: 185
      Abstract: Discrete choice models (DCMs) are applied in many fields and in the statistical modelling of consumer behavior. This paper focuses on a form of choice experiment, best–worst scaling in discrete choice experiments (DCEs), and the transition probability of a choice of a consumer over time. The analysis was conducted by using simulated data (choice pairs) based on data from Flynn’s (2007) ‘Quality of Life Experiment’. Most of the traditional approaches assume the choice alternatives are mutually exclusive over time, which is a questionable assumption. We introduced a new copula-based model (CO-CUB) for the transition probability, which can handle the dependent structure of best–worst choices while applying a very practical constraint. We used a conditional logit model to calculate the utility at consecutive time points and spread it to future time points under dynamic programming. We suggest that the CO-CUB transition probability algorithm is a novel way to analyze and predict choices in future time points by expressing human choice behavior. The numerical results inform decision making, help formulate strategy and learning algorithms under dynamic utility in time for best–worst DCEs.
      Citation: Stats
      PubDate: 2024-02-19
      DOI: 10.3390/stats7010012
      Issue No: Vol. 7, No. 1 (2024)
       
  • Stats, Vol. 7, Pages 203-219: New Vessel Extraction Method by Using Skew
           Normal Distribution for MRA Images

    • Authors: Tohid Bahrami, Hossein Jabbari Khamnei, Mehrdad Lakestani, B. M. Golam Kibria
      First page: 203
      Abstract: Vascular-related diseases pose significant public health challenges and are a leading cause of mortality and disability. Understanding the complex structure of the vascular system and its processes is crucial for addressing these issues. Recent advancements in medical imaging technology have enabled the generation of high-resolution 3D images of vascular structures, leading to a diverse array of methods for vascular extraction. While previous research has often assumed a normal distribution of image data, this paper introduces a novel vessel extraction method that utilizes the skew normal distribution for more accurate probability distribution modeling. The proposed method begins with a preprocessing step to enhance vessel structures and reduce noise in Magnetic Resonance Angiography (MRA) images. The skew normal distribution, known for its ability to model skewed data, is then employed to characterize the intensity distribution of vessels. By estimating the parameters of the skew normal distribution using the Expectation-Maximization (EM) algorithm, the method effectively separates vessel pixels from the background and non-vessel regions. To extract vessels, a thresholding technique is applied based on the estimated skew normal distribution parameters. This segmentation process enables accurate vessel extraction, particularly in detecting thin vessels and enhancing the delineation of vascular edges with low contrast. Experimental evaluations on a diverse set of MRA images demonstrate the superior performance of the proposed method compared to previous approaches in terms of accuracy and computational efficiency. The presented vessel extraction method holds promise for improving the diagnosis and treatment of vascular-related diseases. By leveraging the skew normal distribution, it provides accurate and efficient vessel segmentation, contributing to the advancement of vascular imaging in the field of medical image analysis.
      Citation: Stats
      PubDate: 2024-02-23
      DOI: 10.3390/stats7010013
      Issue No: Vol. 7, No. 1 (2024)
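      As a rough illustration of the distributional idea in the entry above (not the authors' EM-based mixture pipeline), the sketch below fits a skew normal to the bright tail of synthetic intensities with SciPy's maximum-likelihood fit and thresholds it; the synthetic data, the 80th-percentile tail cut and the 0.5-quantile threshold are all hypothetical.

          import numpy as np
          from scipy.stats import skewnorm

          # Synthetic stand-in for MRA voxel intensities: a symmetric background
          # plus a smaller, right-skewed vessel-like component.
          rng = np.random.default_rng(0)
          background = rng.normal(100.0, 15.0, size=9000)
          vessels = skewnorm.rvs(5.0, loc=180.0, scale=25.0, size=1000, random_state=1)
          intensities = np.concatenate([background, vessels])

          # Fit a skew normal to the bright tail by maximum likelihood (the article
          # instead estimates mixture parameters with the EM algorithm) and threshold.
          tail = intensities[intensities > np.percentile(intensities, 80)]
          a, loc, scale = skewnorm.fit(tail)
          threshold = skewnorm.ppf(0.5, a, loc=loc, scale=scale)   # hypothetical cut-off
          vessel_mask = intensities > threshold
          print(f"skewness={a:.2f}, threshold={threshold:.1f}, flagged voxels={vessel_mask.sum()}")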
       
  • Stats, Vol. 7, Pages 220-234: Generation of Scale-Free Assortative
           Networks via Newman Rewiring for Simulation of Diffusion Phenomena

    • Authors: Laura Di Lucchio, Giovanni Modanese
      First page: 220
      Abstract: By collecting and expanding several numerical recipes developed in previous work, we implement an object-oriented Python code, based on the networkX library, for the realization of the configuration model and Newman rewiring. The software can be applied to any kind of network and “target” correlations, but it is tested with focus on scale-free networks and assortative correlations. In order to generate the degree sequence we use the method of “random hubs”, which gives networks with minimal fluctuations. For the assortative rewiring we use the simple Vazquez-Weigt matrix as a test in the case of random networks; since it does not appear to be effective in the case of scale-free networks, we subsequently turn to another recipe which generates matrices with decreasing off-diagonal elements. The rewiring procedure is also important at the theoretical level, in order to test which types of statistically acceptable correlations can actually be realized in concrete networks. From the point of view of applications, its main use is in the construction of correlated networks for the solution of dynamical or diffusion processes through an analysis of the evolution of single nodes, i.e., beyond the Heterogeneous Mean Field approximation. As an example, we report on an application to the Bass diffusion model, with calculations of the time tmax of the diffusion peak. The same networks can additionally be exported in environments for agent-based simulations like NetLogo.
      Citation: Stats
      PubDate: 2024-02-24
      DOI: 10.3390/stats7010014
      Issue No: Vol. 7, No. 1 (2024)
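      The configuration-model step described above can be reproduced directly with networkX; the rewiring below is only a crude greedy stand-in for Newman rewiring (a random double-edge swap is kept when it does not lower the degree assortativity), and the degree sequence, seed and swap count are hypothetical.

          import random
          import networkx as nx

          # Hypothetical scale-free-like degree sequence (not the paper's "random hubs"
          # construction): heavy-tailed degrees drawn from a Pareto law.
          random.seed(1)
          n = 500
          degrees = [max(1, int(round(random.paretovariate(1.5)))) for _ in range(n)]
          if sum(degrees) % 2:                      # the configuration model needs an even sum
              degrees[0] += 1

          G = nx.configuration_model(degrees, seed=1)
          G = nx.Graph(G)                           # collapse parallel edges
          G.remove_edges_from(list(nx.selfloop_edges(G)))  # drop self-loops

          def rewire_towards_assortativity(G, n_swaps=2000):
              """Greedy stand-in for Newman rewiring: try random double-edge swaps and
              keep a swap only if it does not lower the degree assortativity."""
              r = nx.degree_assortativity_coefficient(G)
              for _ in range(n_swaps):
                  (a, b), (c, d) = random.sample(list(G.edges()), 2)
                  if len({a, b, c, d}) < 4 or G.has_edge(a, c) or G.has_edge(b, d):
                      continue                      # swap would create a loop or multi-edge
                  G.remove_edges_from([(a, b), (c, d)])
                  G.add_edges_from([(a, c), (b, d)])
                  r_new = nx.degree_assortativity_coefficient(G)
                  if r_new >= r:
                      r = r_new                     # keep the swap
                  else:                             # undo the swap
                      G.remove_edges_from([(a, c), (b, d)])
                      G.add_edges_from([(a, b), (c, d)])
              return r

          print("degree assortativity after rewiring:", rewire_towards_assortativity(G))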
       
  • Stats, Vol. 7, Pages 235-268: Two-Stage Limited-Information Estimation for
           Structural Equation Models of Round-Robin Variables

    • Authors: Terrence D. Jorgensen, Aditi M. Bhangale, Yves Rosseel
      First page: 235
      Abstract: We propose and demonstrate a new two-stage maximum likelihood estimator for parameters of a social relations structural equation model (SR-SEM) using estimated summary statistics (Σ^) as data, as well as uncertainty about Σ^ to obtain robust inferential statistics. The SR-SEM is a generalization of a traditional SEM for round-robin data, which have a dyadic network structure (i.e., each group member responds to or interacts with each other member). Our two-stage estimator follows logic similar to previous two-stage estimators for SEM that were developed for multilevel data and for multiple imputations of missing data. We demonstrate our estimator on a publicly available data set from a 2018 publication about social mimicry. We employ Markov chain Monte Carlo estimation of Σ^ in Stage 1, implemented using the R package rstan. In Stage 2, the posterior mean estimates of Σ^ are used as input data to estimate SEM parameters with the R package lavaan. The posterior covariance matrix of the estimated Σ^ is also calculated so that lavaan can use it to compute robust standard errors and test statistics. Results are compared to full-information maximum likelihood (FIML) estimation of SR-SEM parameters using the R package srm. We discuss how differences between estimators highlight the need for future research to establish best practices under realistic conditions (e.g., how to specify empirical Bayes priors in Stage 1), as well as extensions that would make two-stage estimation particularly advantageous over single-stage FIML.
      Citation: Stats
      PubDate: 2024-02-28
      DOI: 10.3390/stats7010015
      Issue No: Vol. 7, No. 1 (2024)
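      For readers unfamiliar with two-stage ("limited-information") SEM estimation, a generic form is sketched below in LaTeX. This is the standard normal-theory discrepancy with a sandwich covariance, not necessarily the exact estimator implemented by the authors; W, Δ and Ω̂ denote an assumed weight matrix, the model Jacobian and the Stage-1 posterior covariance of the summary statistics.

          \[
          \hat\theta \;=\; \arg\min_{\theta}\;
          \log\lvert\Sigma(\theta)\rvert
          + \operatorname{tr}\!\big[\hat\Sigma\,\Sigma(\theta)^{-1}\big]
          - \log\lvert\hat\Sigma\rvert - p ,
          \]
          \[
          \widehat{\operatorname{Cov}}(\hat\theta) \;\approx\;
          \big(\Delta^{\top} W \Delta\big)^{-1}
          \Delta^{\top} W \,\hat\Omega\, W \Delta\,
          \big(\Delta^{\top} W \Delta\big)^{-1},
          \qquad
          \Delta = \left.\frac{\partial\,\mathrm{vech}\,\Sigma(\theta)}{\partial \theta^{\top}}\right|_{\hat\theta},
          \]

      where p is the number of observed variables and Ω̂ is the Stage-1 posterior covariance of vech Σ̂; the first display is the Stage-2 fit function and the second is the robust ("sandwich") covariance through which Stage-1 uncertainty propagates into the standard errors.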
       
  • Stats, Vol. 7, Pages 269-283: Comments on the Bernoulli Distribution and
           Hilbe’s Implicit Extra-Dispersion

    • Authors: Daniel A. Griffith
      First page: 269
      Abstract: For decades, conventional wisdom maintained that binary 0–1 Bernoulli random variables cannot contain extra-binomial variation. Taking an unorthodox stance, Hilbe actively disagreed, especially for correlated observation instances, arguing that the universally adopted diagnostic Pearson or deviance dispersion statistics are insensitive to a variance anomaly in a binary context, and hence simply fail to detect it. However, having the intuition and insight to sense the existence of this departure from standard mathematical statistical theory, but being unable to effectively isolate it, he classified this particular over-/under-dispersion phenomenon as implicit. This paper explicitly exposes his hidden quantity by demonstrating that the variance in/deflation it represents occurs in an underlying predicted beta random variable whose real number values are rounded to their nearest integers to convert to a Bernoulli random variable, with this discretization masking any materialized extra-Bernoulli variation. In doing so, asymptotics linking the beta-binomial and Bernoulli distributions show another conventional wisdom misconception, namely a mislabeling substitution involving the quasi-Bernoulli random variable; this undeniably is not a quasi-likelihood situation. A public bell pepper disease dataset exhibiting conspicuous spatial autocorrelation furnishes empirical examples illustrating various features of this advocated proposition.
      Citation: Stats
      PubDate: 2024-03-05
      DOI: 10.3390/stats7010016
      Issue No: Vol. 7, No. 1 (2024)
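      As background for the entry above (this is a textbook decomposition, not the author's derivation), the following shows why a single binary trial cannot reveal beta-binomial extra-dispersion:

          \[
          X \mid \pi \sim \mathrm{Bernoulli}(\pi), \qquad
          \pi \sim \mathrm{Beta}(\alpha,\beta), \qquad
          \mu = \frac{\alpha}{\alpha+\beta},
          \]
          \[
          \operatorname{Var}(X)
          = \operatorname{E}\!\big[\pi(1-\pi)\big] + \operatorname{Var}(\pi)
          = \mu - \operatorname{E}(\pi^{2}) + \operatorname{E}(\pi^{2}) - \mu^{2}
          = \mu(1-\mu),
          \]

      so for n = 1 the beta-binomial variance collapses to the ordinary Bernoulli variance: the dispersion of the latent beta variable cancels and, as the abstract argues, any extra-Bernoulli variation is masked by the discretization.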
       
  • Stats, Vol. 7, Pages 284-300: Cumulative Histograms under Uncertainty: An
           Application to Dose–Volume Histograms in Radiotherapy Treatment
           Planning

    • Authors: Flavia Gesualdi, Niklas Wahl
      First page: 284
      Abstract: In radiotherapy treatment planning, the absorbed doses are subject to executional and preparational errors, which propagate to plan quality metrics. Accurately quantifying these uncertainties is imperative for improved treatment outcomes. One approach, analytical probabilistic modeling (APM), presents a highly computationally efficient method. This study evaluates the empirical distribution of dose–volume histogram points (a typical plan metric) derived from Monte Carlo sampling to quantify the accuracy of modeling uncertainties under different distribution assumptions, including Gaussian, log-normal, four-parameter beta, gamma, and Gumbel distributions. Since APM necessitates the bivariate cumulative distribution functions, this investigation also delves into approximations using a Gaussian or an Ali–Mikhail–Haq Copula. The evaluations are performed in a one-dimensional simulated geometry and on patient data for a lung case. Our findings suggest that employing a beta distribution offers improved modeling accuracy compared to a normal distribution. Moreover, the multivariate Gaussian model outperforms the Copula models in patient data. This investigation highlights the significance of appropriate statistical distribution selection in advancing the accuracy of uncertainty modeling in radiotherapy treatment planning, extending an understanding of the analytical probabilistic modeling capacities in this crucial medical domain.
      Citation: Stats
      PubDate: 2024-03-06
      DOI: 10.3390/stats7010017
      Issue No: Vol. 7, No. 1 (2024)
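      A toy version of the distribution comparison described above (not the study's analytical probabilistic modeling pipeline) can be run with SciPy; the synthetic samples stand in for Monte Carlo draws of a single dose–volume histogram point, and the Kolmogorov–Smirnov statistic is used only as a rough goodness-of-fit measure.

          import numpy as np
          from scipy import stats

          # Synthetic stand-in for Monte Carlo samples of one dose-volume histogram point
          # (e.g., the volume fraction receiving at least a given dose level).
          rng = np.random.default_rng(42)
          samples = 0.55 + 0.4 * rng.beta(8.0, 3.0, size=5000)    # bounded, skewed values

          # Candidate models: a normal fit versus a four-parameter beta fit (a, b, loc, scale).
          mu, sigma = stats.norm.fit(samples)
          a, b, loc, scale = stats.beta.fit(samples)

          # Kolmogorov-Smirnov statistics as a rough measure of fit (smaller is better).
          ks_norm = stats.kstest(samples, 'norm', args=(mu, sigma)).statistic
          ks_beta = stats.kstest(samples, 'beta', args=(a, b, loc, scale)).statistic
          print(f"KS normal: {ks_norm:.4f}   KS four-parameter beta: {ks_beta:.4f}")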
       
  • Stats, Vol. 7, Pages 301-316: Wilcoxon-Type Control Charts Based on
           Multiple Scans

    • Authors: Ioannis S. Triantafyllou
      First page: 301
      Abstract: In this article, we establish new distribution-free Shewhart-type control charts based on rank-sum statistics with signaling multiple-scans-type rules. More precisely, two Wilcoxon-type chart statistics are considered in order to formulate the decision rule of the proposed monitoring scheme. To enhance the performance of the new nonparametric control charts, multiple-scans-type rules are activated, which make the proposed chart more sensitive in detecting possible shifts of the underlying distribution. The appraisal of the proposed monitoring scheme is accomplished with the aid of the corresponding run length distribution under both in-control and out-of-control cases. Exact formulae for the variance of the run length distribution and the average run length (ARL) of the proposed monitoring schemes are derived. A numerical investigation is carried out and shows that the proposed schemes outperform their competitors.
      Citation: Stats
      PubDate: 2024-03-07
      DOI: 10.3390/stats7010018
      Issue No: Vol. 7, No. 1 (2024)
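      For orientation (this is textbook background, not the article's exact formulae for scan-type rules): under a basic one-point signaling rule the run length of a Shewhart chart is geometric, so

          \[
          P(RL = k) = (1-p)^{k-1} p, \qquad
          \mathrm{ARL} = \operatorname{E}(RL) = \frac{1}{p}, \qquad
          \operatorname{Var}(RL) = \frac{1-p}{p^{2}},
          \]

      where p is the probability that a single plotted Wilcoxon-type statistic signals. Multiple-scans rules break this memoryless structure, which is why exact run-length formulae are needed (typically obtained via a Markov-chain embedding).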
       
  • Stats, Vol. 7, Pages 317-332: The Flexible Gumbel Distribution: A New
           Model for Inference about the Mode

    • Authors: Qingyang Liu, Xianzheng Huang, Haiming Zhou
      First page: 317
      Abstract: A new unimodal distribution family indexed via the mode and three other parameters is derived from a mixture of a Gumbel distribution for the maximum and a Gumbel distribution for the minimum. Properties of the proposed distribution are explored, including model identifiability and flexibility in capturing heavy-tailed data that exhibit different directions of skewness over a wide range. Both frequentist and Bayesian methods are developed to infer parameters in the new distribution. Simulation studies are conducted to demonstrate satisfactory performance of both methods. By fitting the proposed model to simulated data and data from an application in hydrology, it is shown that the proposed flexible distribution is especially suitable for data containing extreme values in either direction, with the mode being a location parameter of interest. Using the proposed unimodal distribution, one can easily formulate a regression model concerning the mode of a response given covariates. We apply this model to data from an application in criminology to reveal interesting data features that are obscured by outliers.
      Citation: Stats
      PubDate: 2024-03-13
      DOI: 10.3390/stats7010019
      Issue No: Vol. 7, No. 1 (2024)
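      A plain two-component version of such a mixture (a Gumbel for maxima plus a Gumbel for minima) can be written with SciPy as below; the article's family is indexed by its mode, whereas the location/scale parameterization and the weights here are hypothetical stand-ins.

          import numpy as np
          from scipy.stats import gumbel_r, gumbel_l

          def flexible_gumbel_pdf(x, w=0.6, loc_r=0.0, scale_r=1.0, loc_l=0.0, scale_l=1.5):
              """Density of a two-component mixture: a Gumbel for maxima (gumbel_r)
              plus a Gumbel for minima (gumbel_l), with mixing weight w."""
              return (w * gumbel_r.pdf(x, loc=loc_r, scale=scale_r)
                      + (1 - w) * gumbel_l.pdf(x, loc=loc_l, scale=scale_l))

          def flexible_gumbel_rvs(size, w=0.6, seed=0):
              """Sample from the mixture by first picking a component for each draw."""
              rng = np.random.default_rng(seed)
              pick_max = rng.random(size) < w
              draws_max = gumbel_r.rvs(size=size, random_state=1)
              draws_min = gumbel_l.rvs(scale=1.5, size=size, random_state=2)
              return np.where(pick_max, draws_max, draws_min)

          print(flexible_gumbel_pdf(np.linspace(-6.0, 6.0, 7)))   # heavy tails on both sides
          print(flexible_gumbel_rvs(5))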
       
  • Stats, Vol. 7, Pages 333-349: A Note on Simultaneous Confidence Intervals
           for Direct, Indirect and Synthetic Estimators

    • Authors: Christophe Quentin Valvason, Stefan Sperlich
      First page: 333
      Abstract: Direct, indirect and synthetic estimators have a long history in official statistics. While model-based or model-assisted approaches have become very popular, direct and indirect estimators remain the predominant standard and are therefore important tools in practice. This is mainly due to their simplicity, including low data requirements, few assumptions and straightforward inference. With the increasing use of domain estimates in policy, the demands on these tools have also increased. Today, they are frequently used for comparative statistics, which requires appropriate tools for simultaneous inference. We study devices for constructing simultaneous confidence intervals and show that simple tools like the Bonferroni correction can easily fail. In contrast, uniform inference based on max-type statistics in combination with bootstrap methods appropriate for finite populations works reasonably well. We illustrate our methods with frequently applied estimators of totals and means.
      Citation: Stats
      PubDate: 2024-03-20
      DOI: 10.3390/stats7010020
      Issue No: Vol. 7, No. 1 (2024)
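      The max-type/bootstrap idea mentioned above can be illustrated with a generic max-|t| bootstrap in Python; this sketch ignores the finite-population adjustments the authors work with, and the domain data are synthetic.

          import numpy as np

          rng = np.random.default_rng(7)

          # Synthetic stand-ins for direct domain estimates of a mean in four domains.
          domains = [rng.normal(mu, 2.0, size=n)
                     for mu, n in [(10, 60), (12, 35), (9, 80), (11, 25)]]
          est = np.array([d.mean() for d in domains])
          se  = np.array([d.std(ddof=1) / np.sqrt(len(d)) for d in domains])

          # Bootstrap the maximum absolute studentized deviation over all domains.
          B = 2000
          max_t = np.empty(B)
          for b in range(B):
              t_stats = []
              for d, m in zip(domains, est):
                  res = rng.choice(d, size=len(d), replace=True)
                  t_stats.append(abs(res.mean() - m) / (res.std(ddof=1) / np.sqrt(len(d))))
              max_t[b] = max(t_stats)

          c = np.quantile(max_t, 0.95)        # one common critical value for all domains
          for k, (lo, hi) in enumerate(zip(est - c * se, est + c * se)):
              print(f"domain {k}: simultaneous 95% CI [{lo:.2f}, {hi:.2f}]")
          # A Bonferroni approach would instead widen each interval with the 1 - 0.05/4
          # quantile of that domain's own statistic; the max-type construction calibrates
          # all four intervals jointly.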
       
 
           
