Mathematics and Statistics
Open Access journal ISSN (Print) 2332-2071 - ISSN (Online) 2332-2144 Published by Horizon Research Publishing
- Integral Graph Spectrum and Energy of Interconnected Balanced Multi-star
Graphs
Abstract: Publication date: Mar 2024
Source: Mathematics and Statistics Volume 12 Number 2. B. I. Andrew and A. Anuradha. A balanced multi-star graph is a specialized type of graph formed by connecting the apex vertices of star graphs to create a cohesive structure known as a clique. These graphs comprise r star graphs, where each star graph has an apex vertex connected to n pendant vertices. Balanced multi-star graphs offer benefits in scenarios requiring equal distances between peripheral nodes, such as sensor networks, distributed computing, traffic engineering, telecommunications, supply chain management, and power distribution. The integral graph spectrum derived from the adjacency matrix of balanced multi-star graphs holds significance across various domains. It aids in network analysis to understand connectivity patterns, facilitates efficient computation of structural properties through graph algorithms, and enables graph partitioning and community detection. Spectral graph theory assists in identifying connectivity patterns in network visualization, supports modeling biological networks in biomedical research, aids in generating personalized recommendations in recommendation systems, and contributes to graph-based segmentation and scene analysis tasks in image processing. This paper aims to characterize the integral graph spectrum of balanced multi-star graphs by focusing on the spectral parameters of double-star graphs (r=2), triple-star graphs (r=3), and quadruple-star graphs (r=4). This spectrum serves as an important tool across disciplines, providing insights into graph structure and facilitating tasks ranging from network analysis to computational biology and image processing.
PubDate: Mar 2024
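As a rough illustration of the spectral computation this abstract describes, the sketch below builds the adjacency matrix of a balanced double-star graph (the r=2 case: two apex vertices joined by an edge, each carrying n pendant vertices) and computes its eigenvalues and graph energy with NumPy. The construction follows the abstract's description; the specific helper name and the choice n=4 are illustrative.

```python
import numpy as np

def double_star_adjacency(n):
    """Adjacency matrix of a balanced double-star graph: two apex
    vertices joined by an edge (a 2-clique), each apex attached to
    n pendant vertices."""
    N = 2 * (n + 1)
    A = np.zeros((N, N), dtype=int)
    A[0, 1] = A[1, 0] = 1                      # apex-apex edge
    for k in range(n):
        A[0, 2 + k] = A[2 + k, 0] = 1          # pendants of apex 0
        A[1, 2 + n + k] = A[2 + n + k, 1] = 1  # pendants of apex 1
    return A

A = double_star_adjacency(4)
eigs = np.sort(np.linalg.eigvalsh(A))   # real spectrum (A is symmetric)
energy = np.abs(eigs).sum()             # graph energy = sum of |eigenvalues|
print(np.round(eigs, 6), round(energy, 6))
```

Since the adjacency matrix has zero trace, the eigenvalues always sum to zero; checking which choices of n make every eigenvalue an integer is the kind of question the paper characterizes.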
- Variations of Rigidity for Abelian Groups
Abstract: Publication date: Mar 2024
Source: Mathematics and Statistics Volume 12 Number 2. Inessa I. Pavlyuk and Sergey V. Sudoplatov. A series of basic characteristics of structures and of elementary theories reflects their complexity and richness. Among these characteristics, four kinds of degrees of rigidity and the index of rigidity are considered as measures of how far a given structure is situated from a rigid one, both with respect to the automorphism group and to the definable closure, for some or any subset of the universe of a given finite cardinality. Thus, a natural question arises on a classification of model-theoretic objects with respect to rigidity characteristics. We apply a general approach of studying rigidity values and the related classification to abelian groups and their theories. We describe the possible degrees and indices of rigidity for finite abelian groups and for standard infinite abelian groups. This description is based on general considerations of rigidity, on its application to finite structures, and on their specifics for abelian groups, including Szmielew invariants, combinatorial formulas for cardinalities of orbits, and links with dimensions, as well as on combinations of these. It shows how the rigidity characteristics of infinite abelian groups relate to those of finite ones. Some applications to non-standard abelian groups are discussed.
PubDate: Mar 2024
- Emerging Frameworks: 2-Multiplicative Metric and Normed Linear Spaces
Abstract: Publication date: Mar 2024
Source:Mathematics and Statistics Volume 12 Number 2 B. Surender Reddy S. Vijayabalaji N. Thillaigovindan and K. Punniyamoorthy This new study helps us understand 2-multiplicative or product metric spaces and normed linear spaces (NDLS) better than before, going beyond what we already know. Seeing a gap in existing research, our main aim is to thoroughly explore the natural properties of 2-multiplicative NDLS. Using a careful approach that looks at continuity, compactness, and convergence properties, our research finds results that point out the special features of these spaces and show the connections between their algebraic and topological sides. The importance of our findings goes beyond just theory, affecting practical uses and encouraging collaboration across different fields. Our research builds a strong base in mathematical analysis, giving useful insights for making nuanced decisions. Acknowledging some limitations in our study opens the door for future improvements, creating promising paths for further exploration. In real-world terms, what we learn from this thorough study not only informs but also changes how we make decisions in mathematical analysis. In research community, our work makes people appreciate the connection between algebraic and topological spaces more deeply, sparking curiosity and inspiring future research. In essence, this research acts as a guiding light, showcasing the unique features of 2-multiplicative NDLS and paving the way for a deeper understanding of mathematical structures and their flexible uses in both theory and practice. Furthermore, our exploration motivates future researchers to dive into the details of 2-multiplicative NDLS, expanding their knowledge and looking into broader implications in the field of mathematical analysis.
PubDate: Mar 2024
- A Class of Efficient Shrinkage Estimators for Modelling the Reliability of
Burr XII Distribution
Abstract: Publication date: Mar 2024
Source: Mathematics and Statistics Volume 12 Number 2. Zuhair A. Al-Hemyari, Alaa Khlaif Jiheel and Iman Jalil Atewi. For the purpose of modelling the reliability of the Burr XII distribution, a family of shrinkage estimators is proposed for any parameter of any distribution when a prior guess value of the parameter is available from the past. In addition, two sub-models of shrinkage-type estimators for estimating the reliability and parameters of the Burr XII distribution, using two types of shrinkage weight functions with a preliminary test of the hypothesis against the alternative, have been proposed and studied. The criteria for studying the properties of the two sub-models of reliability estimators, namely the bias, bias ratio, mean squared error and relative efficiency, were derived and computed numerically for each sub-model, because for the Burr XII distribution they are complicated and contain many complex functions. The numerical results showed the usefulness of the proposed two sub-models of reliability estimators of the Burr XII distribution relative to the classical estimators, for both shrinkage functions, when the a priori guess value is close to the true value of the parameter. In addition, the comparison between the proposed two sub-models of the shrinkage estimators is presented.
PubDate: Mar 2024
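The core idea of a shrinkage estimator, pulling the classical estimator toward an a priori guess, can be sketched in a few lines. This toy Monte Carlo uses the exponential mean as a stand-in parameter (not the authors' Burr XII reliability estimators, and without their preliminary test); the weight k, sample size and seed are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0        # true mean (stand-in parameter)
theta0 = 2.0       # a priori guess, here taken equal to the true value
k = 0.5            # shrinkage weight toward the guess
n, reps = 20, 10000

classical, shrunk = [], []
for _ in range(reps):
    x = rng.exponential(theta, n)
    mle = x.mean()                               # classical estimator
    classical.append(mle)
    shrunk.append(k * theta0 + (1 - k) * mle)    # shrinkage estimator

mse = lambda est: np.mean((np.array(est) - theta) ** 2)
print(mse(classical), mse(shrunk))
```

When the guess is exact, the shrinkage error is (1-k) times the classical error sample by sample, so the MSE ratio is exactly (1-k)^2 = 0.25; the guess being far from the truth is what degrades shrinkage, which is why the abstract's preliminary test matters.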
- Product Signed Domination in Probabilistic Neural Networks
Abstract: Publication date: Mar 2024
Source: Mathematics and Statistics Volume 12 Number 2. T. M. Velammal, A. Nagarajan and K. Palani. Domination plays a very important role in graph theory. It has many applications in various fields like communication, social science, engineering, etc. Let G be a simple graph. A function assigning 1 or -1 to the vertices of G is said to be a product signed dominating function if each vertex of G satisfies the defining condition on the product of function values over its closed neighborhood. The weight of such a function is the sum of its values over all vertices, and the product signed domination number of a graph is the minimum positive weight of a product signed dominating function. This variation of dominating function has applications in social networks of people or organizations. The Probabilistic Neural Network (PNN) was first proposed by Specht. It is a classifier that maps input patterns into a number of class levels and estimates the probability of a sample belonging to a learned category. This paper studies the existence of product signed dominating functions in probabilistic neural networks and calculates exact values of the product signed domination numbers of three-layered and four-layered probabilistic neural networks.
PubDate: Mar 2024
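A small checker makes the definition concrete. The abstract's exact condition is not reproduced here, so this sketch assumes the common convention that the product of the ±1 labels over each closed neighborhood N[v] must equal 1; the graph (a path on four vertices) and function are illustrative only.

```python
def is_product_signed_dominating(adj, f):
    """Check, for every vertex v, that the product of f over the closed
    neighborhood N[v] equals 1 (assumed defining condition)."""
    n = len(f)
    for v in range(n):
        prod = f[v]                    # v itself belongs to N[v]
        for u in range(n):
            if adj[v][u]:
                prod *= f[u]
        if prod != 1:
            return False
    return True

# Path P4: 0-1-2-3
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]

f_all_ones = [1, 1, 1, 1]
print(is_product_signed_dominating(adj, f_all_ones))   # weight = 4
print(is_product_signed_dominating(adj, [1, -1, 1, 1]))
```

The all-ones assignment trivially satisfies the product condition; flipping any single vertex to -1 makes some neighborhood product negative, which is why minimizing the positive weight is the interesting problem.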
- Forecasts with SPR Model Using Bootstrap-Reversible Jump MCMC
Abstract: Publication date: Mar 2024
Source:Mathematics and Statistics Volume 12 Number 2 Suparman Eviana Hikamudin Hery Suharna Aryanti In Hi Abdullah and Rina Heryani Polynomial regression (PR) is a stochastic model that has been widely used in forecasting in various fields. Stationary stochastic models play a very important role in forecasting. Generally, PR model parameter estimation methods have been developed for non-stationary PR models. This article aims to develop an algorithm to estimate the parameters of a stationary polynomial regression (SPR) model. The SPR model parameters are estimated using the Bayesian method. The Bayes estimator cannot be determined analytically because the posterior distribution for the SPR model parameters has a complex structure. The complexity of the posterior distribution is caused by the SPR model parameters which have a variable dimensional space. Therefore, this article uses the reversible jump MCMC algorithm which is suitable for estimating the parameters of variable-dimensional models. Applying the reversible jump MCMC algorithm to big data requires many iterations. To reduce the number of iterations, the reversible jump MCMC algorithm is combined with the Bootstrap algorithm via the resampling method. The performance of the Bootstrap-reversible jump MCMC algorithm is validated using 2 simulated data sets. These findings show that the Bootstrap-reversible jump MCMC algorithm can estimate the SPR model parameters well. These findings contribute to the development of SPR models and SPR model parameter estimation methods. In addition, these findings contribute to big data modeling. Further research can be done by replacing Gaussian noise in SPR with non-Gaussian noise.
PubDate: Mar 2024
- Decision Making with Parametric Reduction and Graphical Representation of
Neutrosophic Soft Set
Abstract: Publication date: Mar 2024
Source:Mathematics and Statistics Volume 12 Number 2 Sonali Priyadarsini Ajay Vikram Singh and Said Broumi The neutrosophic soft set is one of the most significant mathematical approaches for uncertainty description, and it has a multitude of practical applications in the realm of decision making. On the other hand, the decision-making process is often made more difficult and complex since these situations contain criteria that are less significant and more redundant. In neutrosophic soft set-based decision-making problems, parameter reduction is an efficient method for cutting down on redundant and superfluous factors, and it does so without damaging the decision-makers' ability to make decisions. In this work, a parametric reduction strategy has been proposed. This approach lessens the difficulties associated with decision making while maintaining the existing order of available options. Because the decision sequence is maintained while the process of reduction is streamlined, utilizing this tactic results in an experience that is both less difficult and more convenient. This article demonstrates the applicability of this method by outlining a decision-making dilemma that was taken from the actual world and providing a solution for it. This article discusses a novel method for dealing with neutrosophic soft graphs by merging graph theory with neutrosophic soft set theory. An illustration of a graphical depiction of a neutrosophic soft set is provided alongside an explanation of neutrosophic graphs and neutrosophic soft set graphs in this article.
PubDate: Mar 2024
- Recursive Estimation of the Multidimensional Distribution Function Using
Bernstein Polynomial
Abstract: Publication date: Mar 2024
Source: Mathematics and Statistics Volume 12 Number 2. D. A. N. Njamen, B. Baldagaï, G. T. Nguefack and A. Y. Nana. The recursive method known as the stochastic approximation method can be used, among other things, for constructing recursive nonparametric estimators. Its aim is to ease the updating of the estimator when moving from a sample of size n to n + 1. Some authors have used it to estimate density and distribution functions, as well as univariate regression, using Bernstein polynomials. In this paper, we propose a nonparametric approach to multidimensional recursive estimation of the distribution function using Bernstein polynomials and the stochastic approximation method. We determine an asymptotic expression for the first two moments of our estimator of the distribution function, and then give some of its properties, such as first- and second-order moments, the bias, the mean squared error (MSE), and the integrated mean squared error (IMSE). We also determine the optimal choice of parameters for which the MSE is minimal. Numerical simulations are carried out and show that, under certain conditions, the estimator converges to the usual laws and is faster than other methods in the case of the distribution function. However, much work remains on this topic, including studies of the convergence properties of the proposed estimator; estimation of the recursive regression function; development of a new estimator of a regression function based on Bernstein polynomials using the semi-recursive estimation method; and new recursive estimators of the distribution, density and regression functions when the variables are dependent.
PubDate: Mar 2024
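To fix ideas, here is the plain (non-recursive, univariate) Bernstein-polynomial smoothing of the empirical CDF that the recursive multidimensional estimator in the abstract builds on: F_hat(x) = sum_k F_n(k/m) C(m,k) x^k (1-x)^(m-k) for data rescaled to [0, 1]. The degree m=20 and the uniform sample are illustrative choices, not the paper's settings.

```python
import numpy as np
from math import comb

def bernstein_cdf(sample, m):
    """Bernstein-polynomial smoothing of the empirical CDF on [0, 1]."""
    sample = np.asarray(sample)
    # Empirical CDF evaluated at the Bernstein knots k/m
    Fn = np.array([(sample <= k / m).mean() for k in range(m + 1)])

    def F_hat(x):
        x = np.asarray(x, dtype=float)
        terms = [Fn[k] * comb(m, k) * x**k * (1 - x)**(m - k)
                 for k in range(m + 1)]
        return sum(terms)

    return F_hat

rng = np.random.default_rng(1)
data = rng.uniform(size=500)
F = bernstein_cdf(data, m=20)
print(float(F(0.5)))   # close to 0.5 for Uniform(0, 1) data
```

The smoothed estimate interpolates the empirical CDF's boundary values exactly (F_hat(0) = F_n(0), F_hat(1) = F_n(1)) while smoothing the staircase in between; the recursive version updates F_n at the knots as new observations arrive.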
- Applications of Onto Functions in Cryptography
Abstract: Publication date: Mar 2024
Source: Mathematics and Statistics Volume 12 Number 2. K Krishna Sowmya and V Srinivas. The concept of onto functions plays a very important role in the theory of analysis and has rich applications in many engineering and scientific techniques. In this paper, we propose a new application in the field of cryptography by using onto functions on algebraic structures like rings and fields to obtain a strong encryption technique. A new symmetric cryptographic system based on Hill ciphers is developed using onto functions with two keys, primary and secondary, to enhance security. This is the first algorithm in cryptography developed using onto functions, and it ensures strong security for the system while maintaining the simplicity of the existing Hill cipher. The concept of using two keys is also novel in symmetric key cryptography. The usage of onto functions in the encryption technique gives the algorithm a high level of security, which is discussed through different examples. The original Hill cipher is obsolete in present-day technology and serves a pedagogical purpose, whereas this newly proposed algorithm can be safely used in present-day technology. Vulnerability of the algorithm to different types of attacks and the cardinality of the key spaces are also discussed.
PubDate: Mar 2024
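For readers unfamiliar with the baseline, here is the classical textbook Hill cipher the paper builds on, not the authors' two-key onto-function scheme: encryption multiplies digraph blocks by a key matrix modulo 26, and decryption uses the key's inverse modulo 26. The 2x2 key below is a standard worked example; plaintext length is assumed even.

```python
import numpy as np

M = 26  # alphabet size

def modinv(a, m=M):
    """Multiplicative inverse of a modulo m (brute force; m is tiny)."""
    a %= m
    for x in range(1, m):
        if (a * x) % m == 1:
            return x
    raise ValueError("matrix not invertible mod 26")

def hill_encrypt(text, K):
    nums = [ord(c) - ord('A') for c in text]
    out = []
    for i in range(0, len(nums), 2):            # 2x2 key: digraph blocks
        block = np.array(nums[i:i + 2])
        out.extend((K @ block) % M)
    return ''.join(chr(int(v) + ord('A')) for v in out)

def hill_decrypt(text, K):
    det = int(round(np.linalg.det(K))) % M
    adj = np.array([[K[1, 1], -K[0, 1]],        # adjugate of a 2x2 matrix
                    [-K[1, 0], K[0, 0]]])
    Kinv = (modinv(det) * adj) % M              # inverse of K modulo 26
    return hill_encrypt(text, Kinv)

K = np.array([[3, 3], [2, 5]])   # det = 9, coprime to 26, so invertible
c = hill_encrypt("HELP", K)
print(c, hill_decrypt(c, K))     # HIAT HELP
```

A known-plaintext attack on this classical form recovers K from a few plaintext/ciphertext block pairs by linear algebra mod 26, which is the weakness the abstract's two-key construction is meant to address.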
- A Pivotal Operation on Triangular Fuzzy Number for Solving Fuzzy Nonlinear
Programming Problems
Abstract: Publication date: Mar 2024
Source: Mathematics and Statistics Volume 12 Number 2. D. Bharathi and A. Saraswathi. Fuzzy nonlinear programming plays a vital role in decision-making where uncertainties and nonlinearity significantly impact outcomes. Real-world situations often involve imprecise or vague information. Fuzzy nonlinear programming allows for the representation of uncertainty through fuzzy sets, enabling more accurate modeling of real-world complexities. Many optimization problems exhibit nonlinear relationships among variables. Fuzzy nonlinear programming addresses these complex relationships, providing solutions that linear programming methods cannot accommodate. This research article proposes an approach to Fuzzy Non-Linear Programming Problems (FNLPP) in an environment of triangular fuzzy numbers, based on a pivotal operation with the aid of Wolfe's technique. Fuzzy nonlinear programming is an area of study that deals with optimization problems in which the objective function and constraints involve fuzzy numbers, which represent uncertainty or vagueness in real-world data. We claim that the proposed method is easier to understand and apply than existing methods for solving similar problems that arise in real-life situations. To demonstrate the effectiveness of the method, the authors have solved a numerical example and provided illustrations in the paper. The proposed method aims to address such complexities and solve these problems more efficiently.
PubDate: Mar 2024
- Convergence of Spectral-Grid Method for Burgers Equation with
Initial-Boundary Conditions
Abstract: Publication date: Mar 2024
Source: Mathematics and Statistics Volume 12 Number 2. Chori Normurodov, Akbar Toyirov, Shakhnoza Ziyakulova and K. K. Viswanathan. In this study, an initial-boundary value problem for the Burgers equation is solved using a theoretical substantiation of the spectral-grid method. Using the theory of Green's functions, an operator equation of the second kind is obtained with the corresponding initial-boundary conditions for the continuous problem. To solve the differential problem approximately, the spectral-grid method is used: a grid is introduced on the integration interval, and approximate solutions of the differential problem on each of the grid elements are represented as finite series in Chebyshev polynomials of the first kind. At the internal nodes of the grid, continuity of the approximate solution and its first derivative is required. The corresponding boundary conditions are satisfied at the boundary nodes. A discrete analogue of the operator equation of the second kind is obtained using the spectral-grid method. Convergence theorems for the spectral-grid method are proven and estimates of the method's convergence rate are obtained. To discretize the Burgers equation in time on the interval [0,T], a grid with a uniform step is introduced. Numerical calculations have been carried out at sufficiently low values of viscosity, which cannot be handled by other numerical methods. The high accuracy and efficiency of the spectral-grid method in solving the initial-boundary value problem for the Burgers equation is shown.
PubDate: Mar 2024
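The "spectral" half of the method rests on a standard fact this sketch demonstrates: expanding a smooth function in Chebyshev polynomials of the first kind converges geometrically, so a modest degree already reaches near machine precision. This is a generic one-interval illustration with NumPy's Chebyshev utilities, not the paper's grid-coupled Burgers solver; the test function is arbitrary.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# A smooth test function on [-1, 1]
f = lambda x: np.exp(-x**2) * np.cos(3 * x)
x = np.linspace(-1, 1, 1000)

# Interpolate at Chebyshev points and measure the max error as the
# degree of the expansion grows: spectral (geometric) convergence.
for deg in (4, 8, 16, 32):
    coef = C.chebinterpolate(f, deg)
    err = np.max(np.abs(C.chebval(x, coef) - f(x)))
    print(deg, err)
```

The spectral-grid method applies this kind of expansion on each grid element and glues the pieces together by enforcing continuity of the solution and its first derivative at the internal nodes.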
- Mixture of Ailamujia and Size Biased Ailamujia Distributions: Estimation
and Application
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 Bader Alruwaili In this article, we introduce a new model entitled a mixture of the Ailamujia and size biased Ailamujia distributions. We present and discuss some statistical properties of this mixture of the Ailamujia and size biased Ailamujia distributions, such as moments, skewness, and kurtosis. We also provide some graphical results on the mixture of the Ailamujia and size biased Ailamujia distributions and provide some numerical results to understand the behavior of the proposed mixture and its properties. Also, we provide some reliability analysis results on the proposed mixture. The parameters of the Ailamujia and size biased Ailamujia distributions are estimated by using the maximum likelihood method. The usefulness of the proposed combination is illustrated by using a real-life dataset. We use the Ailamujia distribution and the size biased Ailamujia distribution, in addition to the mixture of the Ailamujia and size biased Ailamujia distributions to fit the real-life dataset. We use different criteria in this comparison; the results show that the proposed mixture fits the dataset better than the use of the Ailamujia distribution and the size biased Ailamujia distribution alone.
PubDate: Jan 2024
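A two-component mixture like the one in the abstract is just a weighted sum of densities. The sketch below assumes the commonly quoted Ailamujia form f(x) = 4θ²x·e^(−2θx) (x ≥ 0, mean 1/θ), from which the size-biased density x·f(x)/mean = 4θ³x²·e^(−2θx) follows; both forms, and the parameter values, are assumptions for illustration, not taken from the paper.

```python
import numpy as np
from scipy.integrate import quad

theta, p = 1.5, 0.4   # assumed parameter and mixing weight

def ailamujia_pdf(x, t=theta):
    # Assumed form: f(x) = 4 t^2 x exp(-2 t x), x >= 0
    return 4 * t**2 * x * np.exp(-2 * t * x)

def size_biased_pdf(x, t=theta):
    # x f(x) / mean, with mean 1/t, giving 4 t^3 x^2 exp(-2 t x)
    return 4 * t**3 * x**2 * np.exp(-2 * t * x)

def mixture_pdf(x):
    return p * ailamujia_pdf(x) + (1 - p) * size_biased_pdf(x)

total, _ = quad(mixture_pdf, 0, np.inf)                 # should be 1
mean, _ = quad(lambda x: x * mixture_pdf(x), 0, np.inf)
print(total, mean)
```

Moments of the mixture are the same weighted combinations of component moments, e.g. the mean is p/θ + (1−p)·3/(2θ), which is the kind of closed form the paper derives for skewness and kurtosis as well.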
- The ARCH Model for Analyzing and Forecasting Temperature Data
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 Ali Sadig Mohommed Bager The chaotic nature of the earth's atmosphere and the significant impact of weather on various fields necessitate accurate weather forecasting. Time series analysis plays a crucial role in predicting future values based on past data. The Autoregressive Conditional Heteroscedasticity (ARCH) model is widely used for forecasting, especially in the field of temperature analysis. This study focuses on the ARCH model for analyzing and forecasting temperature changes. The ARCH model is selected based on its ability to capture the regular variations in the predictability of meteorological variables. The methodology section explains the ARCH model and various statistical tests used, such as the heteroscedasticity test (ARCH test), Jarque-Bera test, and Augmented Dickey-Fuller test (ADF). A sample study is conducted on monthly average temperature data from Athenry, Ireland, over a period of four years. The study utilizes the ARCH model to calculate temperature series volatility and assesses the model's performance using goodness-of-fit measures and predictive accuracy. The results show that the ARCH model successfully predicts temperature changes for three years, as indicated by the forecasted temperature series. The statistical performance of the ARCH model is evaluated using in-sample and out-of-sample analyses, demonstrating its effectiveness in capturing temperature variations. The study highlights the importance of time series forecasting and the significant impact of the ARCH model in temperature analysis.
PubDate: Jan 2024
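The defining feature of an ARCH(1) process is that the conditional variance follows σ²_t = ω + α·ε²_{t−1}. The sketch below simulates such a series and recovers (ω, α) by the simple observation that E[ε²_t | past] = ω + α·ε²_{t−1}, so a regression of squared residuals on their lag is consistent; this is a didactic estimator, not the (quasi-)maximum-likelihood fit a production study or the paper's analysis would use, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
omega, alpha, n = 0.2, 0.5, 50000

# Simulate ARCH(1): sigma2_t = omega + alpha * eps_{t-1}^2
eps = np.zeros(n)
sigma2 = np.zeros(n)
sigma2[0] = omega / (1 - alpha)          # unconditional variance
eps[0] = rng.normal(0, np.sqrt(sigma2[0]))
for t in range(1, n):
    sigma2[t] = omega + alpha * eps[t - 1] ** 2
    eps[t] = rng.normal(0, np.sqrt(sigma2[t]))

# Regress eps_t^2 on eps_{t-1}^2 to recover (alpha, omega)
y, x = eps[1:] ** 2, eps[:-1] ** 2
alpha_hat, omega_hat = np.polyfit(x, y, 1)
print(omega_hat, alpha_hat)
```

The simulated series has constant unconditional variance ω/(1−α) but clustered volatility, which is exactly the "regular variations in predictability" the abstract says makes ARCH suitable for temperature series.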
- Moments of Gaussian Distributions for Small and Large Sample Sizes
Revisited
Abstract: Publication date: Jan 2024
Source: Mathematics and Statistics Volume 12 Number 1. Florian Heiser and E W Knapp. Central moments of statistical samples provide coarse-grained information on the width, symmetry and shape of the underlying probability distribution. They need appropriate corrections to fulfill two conditions: (1) yielding correct limiting values for large samples; (2) yielding these values also when averaged over many samples of the same size. We provide correct expressions for unbiased central moments up to the fourth order and provide an unbiased expression for the kurtosis, which is generally available only in a biased form. We have verified the derived general expressions by applying them to the Gaussian probability distribution (GPD), and we show how unbiased central moments and kurtosis behave for finite samples. For this purpose, we evaluated precise distributions of all four moments for finite samples of the GPD. These distributions are based on up to 3.2×10^8 randomly generated samples of specific sizes. For large samples, these moment distributions become Gaussians whose second moments decay with the inverse sample size. We parameterized the corresponding decay laws. Based on these moment distributions, we demonstrate how p-values can be computed to compare the mean and variance evaluated from a sample with the corresponding expected values. We also show how one can use p-values for the third moment to investigate the symmetry, and for the fourth moment to investigate the shape, of the underlying probability distribution, certifying or ruling out a Gaussian distribution. All this provides new power for the usage of statistical moments. Finally, we apply the evaluation of p-values to a dataset of the percentage of people aged 65 and above in the 50 states of the USA.
PubDate: Jan 2024
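The bias the paper corrects is easy to exhibit numerically. For many small samples, the naive variance (dividing by n) systematically underestimates the true variance by the factor (n−1)/n, while Bessel's correction removes the bias; SciPy's bias=False kurtosis applies the analogous finite-sample correction. The sample size n=5 and the Gaussian population are illustrative choices.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(3)
n, reps = 5, 200000
samples = rng.normal(0.0, 1.0, size=(reps, n))   # true variance = 1

biased = samples.var(axis=1, ddof=0)     # divides by n
unbiased = samples.var(axis=1, ddof=1)   # Bessel correction: n - 1
g2 = kurtosis(samples, axis=1, bias=False)  # bias-corrected excess kurtosis

# Averaged over many samples: biased -> (n-1)/n = 0.8, unbiased -> 1,
# corrected excess kurtosis -> 0 for Gaussian data.
print(biased.mean(), unbiased.mean(), g2.mean())
```

Averaging estimates over many same-size samples is exactly the paper's condition (2); the demo shows the uncorrected moments failing it and the corrected ones passing.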
- Functional Continuum Regression Approach to Wavelet Transformation Data in
a Non-Invasive Glucose Measurement Calibration Model
Abstract: Publication date: Jan 2024
Source: Mathematics and Statistics Volume 12 Number 1. Ismah Ismah Erfiani Aji Hamim Wigena and Bagus Sartono. Functional data has a high-dimensional structure and is a broad source of information, but problems can arise in analyzing it. Functional continuum regression is an alternative method that can be used for calibration modeling with functional data. This study aimed to determine the robustness of functional continuum regression in overcoming multicollinearity, or settings where the number of independent variables is greater than the number of observations, with functional data. The research method is the analysis of functional continuum regression on the wavelet transform of non-invasive blood glucose measurements in a calibration model, with comparisons against non-functional methods, namely principal component regression, partial least squares regression and least squares regression, and the functional method of functional regression. The analysis with the five methods yielded the root mean square error of prediction (RMSEP), the correlation between the observed data and the estimated observations, and the mean absolute error (MAE). The results indicate that reduction methods such as functional continuum regression, principal component regression and partial least squares regression are superior when multicollinearity occurs or the number of independent variables exceeds the number of observations. In the case of functional data analysis, functional continuum regression is preferable because it does not eliminate data patterns.
Thus, functional continuum regression is an effective approach for analyzing calibration models, which generally involve functional data together with problems such as multicollinearity or more independent variables than observations.
PubDate: Jan 2024
- Derivation and Evaluation of Monte Carlo Estimators of the Scattering
Equation Using the Ward BRDF and Different Sample Allocation Strategies
Abstract: Publication date: Jan 2024
Source: Mathematics and Statistics Volume 12 Number 1. Carlos Lopez Garces and Nayeong Kong. This paper investigates three distinct Monte Carlo estimators derived from the research of Sbert et al. These estimators are specifically tailored to the scattering equation using the Ward Bidirectional Reflectance Distribution Function (BRDF) integrated with a designed cosine-weighted environment map. In this paper, we have two goals. The first is to bridge the gap between theoretical foundations and practical applicability, understanding how these estimators can be seamlessly integrated as extensions to the acclaimed PBRT renderer. The second is to measure their real-world performance. We aim to validate our methodology by comparing rendered images, with varying convergence rates and deviations, to the results of Sbert et al. This validation ensures the robustness and reliability of our approaches. We analyze the analytical structure of these estimators to derive their precise form. We then implement the three estimators as extensions to the PBRT renderer, subjecting them to numerical evaluation. We further evaluate the estimator set and sampling strategy by utilizing another pair of incident radiance functions and BRDFs. The final step is to generate rendered images from the implementation to verify the results observed by Sbert et al. and extend them with this new pair of functions.
PubDate: Jan 2024
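The sample-allocation question the paper studies reduces to how one distributes Monte Carlo samples against the cosine factor in the scattering integral. A stripped-down 1D analogue: estimate I = ∫₀^{π/2} g(x) cos x dx both with uniform samples and with cosine-weighted samples drawn by inverse-CDF (p(x) = cos x on [0, π/2] integrates to 1, with inverse CDF arcsin). The integrand g(x) = x is an arbitrary stand-in for incident radiance, not the Ward BRDF setup.

```python
import numpy as np

rng = np.random.default_rng(11)
N = 200000
g = lambda x: x                 # stand-in for the radiance factor
exact = np.pi / 2 - 1           # ∫ x cos x dx on [0, π/2] = π/2 − 1

# Uniform sampling: estimator (π/2) * mean(g(X) cos X), X ~ U(0, π/2)
xu = rng.uniform(0, np.pi / 2, N)
est_uniform = (np.pi / 2) * np.mean(g(xu) * np.cos(xu))

# Cosine-weighted sampling: X = arcsin(U) has density cos x, and the
# cos x factor cancels against the density in the estimator.
xc = np.arcsin(rng.uniform(0, 1, N))
est_cosine = np.mean(g(xc))

print(est_uniform, est_cosine, exact)
```

Both estimators are unbiased; importance-sampling the cosine factor removes it from the summand, which is the same mechanism cosine-weighted environment-map sampling exploits in the full rendering setting.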
- On Intuitionistic Hesitancy Fuzzy Graphs
Abstract: Publication date: Jan 2024
Source: Mathematics and Statistics Volume 12 Number 1. Sunil M.P. and J. Suresh Kumar. A graph is a basic representation of relationships between vertices and edges. This suffices when the relationships are normal and straightforward, but most real-life situations are rather complex, which calls for advanced developments in graph theory. The concept of a fuzzy graph addresses uncertainty to a certain extent, but situations arise in which we have to address complex hesitant situations, such as taking major decisions regarding the merging of companies. The Intuitionistic fuzzy graph (IFG) and Hesitancy fuzzy graph (HFG) were developed to resolve this uncertainty, but they also fall short in resolving problems related to hesitant situations. In this paper, we present the concepts of IFG and HFG, which serve as the foundation for introducing, defining and analysing the Intuitionistic hesitancy fuzzy graph (IHFG). We explore concepts such as λ-strong, δ-strong and ρ-strong IHFGs. We also make a detailed comparative study of the cartesian product and composition of HFGs and IHFGs, establishing essential theorems on the properties of such products. We prove that the cartesian product and composition of two strong HFGs need not be a strong HFG, but the cartesian product and composition of two strong IHFGs is a strong IHFG. We also prove that if the cartesian product of two IHFGs is strong, then at least one of the IHFGs must be strong, and that the same holds for composition. IHFG models provide exact and accurate results for taking apt decisions in problems involving hesitant situations.
PubDate: Jan 2024
- Complex Neutrosophic Fuzzy Set
Abstract: Publication date: Jan 2024
Source: Mathematics and Statistics Volume 12 Number 1. V. Kalaiyarasan and K. Muthunagai. The complex number system is an extension of the real number system that came into existence during attempts to solve cubic equations. A set characterized by a membership (characteristic) function, which assigns to each object a grade of membership ranging between zero and one, is called a fuzzy set. A newer development of fuzzy systems is the complex fuzzy system, in which the membership function is complex-valued and its range is represented by the unit disk. The fuzzy similarity measure helps us to find the closeness among fuzzy sets. Due to its wide range of applications in various fields, Fuzzy Multi Criteria Decision Making (FMCDM) has gained importance in fuzzy set theory. A combination of complex fuzzy sets, fuzzy similarity measures and fuzzy multi criteria decision making has resulted in this research contribution. In this article, we have introduced and investigated the complex neutrosophic fuzzy set, which involves complex-valued neutrosophic components. We discuss two real-life examples: one on selecting the best variety of a seed that gives the maximum yield and profit in a short period of time, and another on choosing the best company to invest in. A similarity measure between complex neutrosophic fuzzy sets has been used to make a decision.
PubDate: Jan 2024
- A New Robust Interval Estimation for the Median of An Exponential
Population When Some of the Observations are Extreme Values
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 Faris Muslim Al-Athari The issue of obtaining accurate interval estimates for the median of an exponential population when some of the observations are extreme values is an important issue for researchers in the fields of reliability applications and survival analysis. In this research paper, a new method is proposed for obtaining a robust confidence interval which is a substitute for the known ordinary (classical) confidence interval when there are extreme values in the sample. The proposed method is simply a result of changing the sample mean by a constant multiple of a sample median and adjusting the upper percentile point of the chi-square of the ordinary confidence interval formula. Further, the performance of the proposed method is evaluated and compared with the ordinary one by using Monte Carlo simulation based on 100,000 trials for each sample size with 5% and 10% extreme values showing that the proposed method, under the contaminated exponential distribution, is always performing better than the ordinary method in the sense of having simulated confidence probability quite close to the aimed confidence level with shorter width and smaller standard error. The use and the application of the proposed method to real-life data are presented and compared with the simulation results.
PubDate: Jan 2024
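The classical interval the abstract builds on comes from the exact pivot 2nX̄/θ ~ χ²(2n) for an exponential sample, with the median equal to θ·ln 2. The abstract does not give the paper's median multiplier or adjusted percentile, so this sketch covers only the classical interval and the Monte Carlo coverage-checking methodology; the chi-square quantiles are themselves estimated by simulation to keep the example dependency-free, and the sample sizes and trial counts are illustrative assumptions.

```python
import math
import random

def chi2_quantile(df, p, n_sim=40_000, rng=random.Random(0)):
    # Empirical chi-square quantile: chi2(2k) is the distribution of
    # 2 * (sum of k unit-exponential variables).
    k = df // 2
    draws = sorted(2.0 * sum(rng.expovariate(1.0) for _ in range(k))
                   for _ in range(n_sim))
    return draws[min(int(p * n_sim), n_sim - 1)]

def classical_median_ci(sample, q_lo, q_hi):
    # Pivot: 2*n*mean/theta ~ chi2(2n); exponential median = theta*ln(2).
    s = sum(sample)
    return (2.0 * s * math.log(2) / q_hi, 2.0 * s * math.log(2) / q_lo)

def coverage(n=20, level=0.95, trials=1500, seed=1):
    # Fraction of simulated clean exponential samples (theta = 1, so the
    # true median is ln 2) whose interval covers the true median.
    rng = random.Random(seed)
    a = (1.0 - level) / 2.0
    q_lo = chi2_quantile(2 * n, a)
    q_hi = chi2_quantile(2 * n, 1.0 - a)
    true_median = math.log(2)
    hits = 0
    for _ in range(trials):
        x = [rng.expovariate(1.0) for _ in range(n)]
        lo, hi = classical_median_ci(x, q_lo, q_hi)
        hits += lo <= true_median <= hi
    return hits / trials
```

Under contamination this classical interval loses coverage because the sample mean is dragged by extreme values; the paper's remedy of swapping in a scaled sample median fits the same pivot-based template.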
- Communications to the Pseudo-Additive Probability Measure and the Induced
Probability Measure Realized by
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 Dhurata Valera Bederiana Shyti and Silvana Paralloj The Theory of Pseudo-Additive Measures has been studied by analyzing and evaluating significant results. The system of pseudo-arithmetic operations (SPAO), as a system generated by the generator, is shown directly by taking the results of Rybárik and Pap, but is a further development of . Using the meaning of entropy as a logarithmic measure in information theory, we present through examples the relation between the and the entropy, realized by the , i.e. a . The paper studies the construction of relationships between entropy and supported by and the connection with Shannon Entropy. For the pseudo-additive probabilistic measure , using as well as in the system generated by , the problem of modification of this measure by is addressed. The modifications of the Pseudo-Additive Probability Measure and the Induced Probability Measure supported by are presented, showing the relationships between the two modifications of the Pseudo-Additive Probability Measure (PAPM) and the Induced Probability Measure (IPM). Further, the Bi-Pseudo-Integral for and the Lebesgue Integral are placed in relation to each other.
PubDate: Jan 2024
- Other New Versions of Generalized Neutrosophic Connectedness and
Compactness and Their Applications
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 Alaa. M. F. AL. Jumaili The concepts of neutrosophic connectedness and compactness between neutrosophic sets find extensive applications in various fields, including sensor networks, physics, mechanical engineering, robotics and data analysis involving numerous variables. Neutrosophic set theory also plays a pivotal role in addressing complex problems in engineering, environmental science, economics, and advanced mathematical disciplines. Hence, this paper aims to extend the classical definitions of neutrosophic connectedness and compactness within neutrosophic topological spaces. We introduce new classes of neutrosophic connectedness and compactness, specifically neutrosophic δ-β-connectedness and neutrosophic δ-β-compactness, defined using a generalized neutrosophic open set known as "neutrosophic δ-β-open sets". We explore several essential properties and characterizations of these spaces and introduce new notions of neutrosophic covers, which lead to the concept of neutrosophic compact spaces. Additionally, we present characterizations related to neutrosophic δ-β-separated sets. A noteworthy feature of these concepts is their ability to model intricate connectedness networks and facilitate optimal solutions for problems involving a multitude of variables, each with degrees of acceptance, rejection, and indeterminacy. We provide relevant examples to illustrate our main findings.
PubDate: Jan 2024
- Some New Kind of Contra Continuous Functions in Nano Ideal Topological
Spaces
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 S. Manicka Vinayagam L. Meenakshi Sundaram and C. Devamanoharan The main objective of this paper is to introduce a new type of contra continuous function, namely , based on the concept of set and function in Nano Ideal Topological Spaces. Contra continuity is an alteration of continuity that requires inverse images of open sets to be closed rather than open. We compare the function with the function and establish the independent relation between and functions by providing suitable counterexamples. Fundamental properties of with and are investigated. We study the behaviour of with . We define the space and describe its relation to the space and the space. Characterizations of based on the space, the space and the graph function, namely , are explored. Like continuity, preserves the property that it maps and sets to the same type of sets in the co-domain. We define the space and describe its nature over . We have also introduced functions with an example, discussed their relation with , and analysed their basic properties. Compositions of functions under , and are examined.
PubDate: Jan 2024
- Homomorphism of Neutrosophic Fuzzy
Subgroup over a Finite Group
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 V Dhanya M Selvarathi and M Ambika Neutrosophic fuzzy sets are an extension of fuzzy sets. Fuzzy sets can only handle vague information and cannot deal with incomplete and inconsistent information, but neutrosophic fuzzy sets and their combinations are one technique for handling incomplete and inconsistent information. Neutrosophic fuzzy set theory provides the groundwork for a whole group of new mathematical theories and generalizes both the traditional and fuzzy counterparts. Following this, the area of neutrosophic fuzzy sets is being developed intensively, with the goal of strengthening the foundations of the theory, creating new applications, and enhancing its practicality in a range of real-life scenarios. Further, neutrosophic fuzzy sets are characterized by three components: truth (), indeterminacy (), and falsity (). In this paper, we have examined the idea of homomorphism of implication-based () neutrosophic fuzzy subgroups over a finite group. Then, neutrosophic fuzzy subgroups over a finite group and neutrosophic fuzzy normal subgroups over a finite group were defined. Finally, we have demonstrated some basic properties of homomorphism of neutrosophic fuzzy subgroups over a finite group.
PubDate: Jan 2024
- Properties and Applications of Klongdee Distribution in Actuarial Science
Abstract: Publication date: Sep 2023
Source:Mathematics and Statistics Volume 11 Number 5 Adisak Moumeesri and Weenakorn Ieosanurak We have introduced a novel continuous distribution known as the Klongdee distribution, which is a combination of the exponential distribution with parameter and the gamma distribution with parameters . We thoroughly examined various statistical properties that provide insights into probability distributions. These properties encompass measures such as the cumulative distribution function, moments about the origin, and the moment-generating function. Additionally, we explored other important measures including skewness, kurtosis, the coefficient of variation, and reliability measures. Furthermore, we explored parameter estimation using nonlinear least squares methods. The numerical results presented compare the unweighted and weighted least squares (UWLS and WLS) methods, maximum likelihood estimation (MLE), and the method of moments (MOM). Based on our findings, MLE demonstrates superior performance compared with the other parameter estimation methods. Moreover, we demonstrate the application of this distribution within an actuarial context, specifically in the analysis of collective risk models using a mixed Poisson framework. By incorporating the proposed distribution into the mixed Poisson model and analyzing a real-life dataset, it has been determined that the Poisson-Klongdee model outperforms the alternative models. In particular, the Poisson-Klongdee model has proven to be a valuable tool for mitigating the problem of overcharges.
PubDate: Sep 2023
- Convergence of the Jordan Neutrosophic Ideal in Neutrosophic Normed Spaces
Abstract: Publication date: Sep 2023
Source:Mathematics and Statistics Volume 11 Number 5 R. Muthuraj K. Nachammal M. Jeyaraman and D. Poovaragavan In the context of the Neutrosophic Norm, the paper explores the challenge of constructing precise sequence spaces whose elements' convergence is a generalised form of the Cauchy convergence. This has proven to be a crucial tool, opening the door to applications in the theory of functions and the law of large numbers. Numerous authors, including those who investigated the Euler totient matrix operator, have studied the strategy of building new sequence spaces that are specified as the domain of matrix operators. Recently, the Jordan totient function generalised the Euler totient function . In the context of neutrosophic Norm spaces, we establish some sequence spaces, specifically , and as a domain of the triangular Jordan totient matrix operator, and investigate the ideal convergence of these sequences. These concepts build on a sort of convergence that Fast and Steinhaus presented as more general than ordinary convergence, namely statistical convergence; according to Kostyrko et al., its further generalisation is known as ideal convergence. In order to arrive at a finite limit, the Jordan totient operator, an infinite matrix operator, is used. We also construct a number of inclusion relations between the spaces as we explain various topological and algebraic properties.
PubDate: Sep 2023
- Resolution of Linear Systems Using Interval Arithmetic and Cholesky
Decomposition
Abstract: Publication date: Sep 2023
Source:Mathematics and Statistics Volume 11 Number 5 Benhari Mohamed amine and Kaicer Mohammed This article presents an innovative approach to solving linear systems with interval coefficients efficiently. The use of intervals allows the uncertainty and measurement errors inherent in many practical applications to be taken into account. We focus on a solution algorithm based on the Cholesky decomposition applied to symmetric positive definite matrices and illustrate its efficiency by applying it to the Leontief economic model. First, we use Sylvester's criterion to check whether a symmetric matrix is positive definite, which is an essential condition for the Cholesky decomposition to be applicable. This guarantees the validity of our solution algorithm and avoids undesirable errors. Using theoretical analyses and numerical simulations, we show that our algorithm based on the Cholesky decomposition performs remarkably well in terms of accuracy. To evaluate our method in concrete terms, we apply it to the Leontief economic model. This model is widely used to analyze the economic interdependencies between different sectors of an economy. By taking the uncertainty in the coefficients into account, our approach offers a more realistic and reliable solution to the Leontief model. The results obtained demonstrate the relevance and effectiveness of our algorithm for solving linear systems with interval coefficients, as well as its successful application to the Leontief model. These advances are crucial for fields such as economics, engineering, and the social sciences, where data uncertainty can greatly affect the results of analyses. In summary, this article highlights the importance of interval arithmetic and Cholesky's method in solving linear systems with interval coefficients. Applying these tools to the Leontief model can help practitioners better understand the impact of uncertainty and make informed decisions in a variety of fields, including economics and engineering.
PubDate: Sep 2023
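As a rough illustration of the approach described above, here is a Cholesky solve of a 2×2 symmetric positive definite system whose coefficients are intervals, using naive interval arithmetic. This is an assumption-laden sketch, not the authors' implementation: verified interval methods round outward at every step, and the example matrix is ours.

```python
import math

# Intervals as (lo, hi) pairs; naive arithmetic without outward rounding.
def i_add(a, b): return (a[0] + b[0], a[1] + b[1])
def i_sub(a, b): return (a[0] - b[1], a[1] - b[0])
def i_mul(a, b):
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))
def i_div(a, b):
    assert b[0] > 0 or b[1] < 0, "divisor interval contains zero"
    return i_mul(a, (1.0 / b[1], 1.0 / b[0]))
def i_sqrt(a):
    assert a[0] >= 0
    return (math.sqrt(a[0]), math.sqrt(a[1]))

def cholesky_2x2(A):
    # A = [[a11, a12], [a12, a22]] with interval entries; L lower triangular.
    l11 = i_sqrt(A[0][0])
    l21 = i_div(A[0][1], l11)
    l22 = i_sqrt(i_sub(A[1][1], i_mul(l21, l21)))
    return l11, l21, l22

def solve_2x2(A, b):
    # Forward substitution L y = b, then back substitution L^T x = y.
    l11, l21, l22 = cholesky_2x2(A)
    y1 = i_div(b[0], l11)
    y2 = i_div(i_sub(b[1], i_mul(l21, y1)), l22)
    x2 = i_div(y2, l22)
    x1 = i_div(i_sub(y1, i_mul(l21, x2)), l11)
    return x1, x2
```

By inclusion isotonicity of interval arithmetic, the returned boxes enclose the point solution of every system whose coefficients lie inside the input intervals (up to floating-point rounding).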
- On the Problem of Solution of Non-Linear (Exponential) Diophantine
Equation
Abstract: Publication date: Sep 2023
Source:Mathematics and Statistics Volume 11 Number 5 Sudhanshu Aggarwal Shahida A. T. Ekta Pandey and Aakansha Vyas Diophantine equations are of great importance in research. Diophantine equations are algebraic equations with integer coefficients for which integer solutions are sought. There is no universal method available for tackling Diophantine equations, so researchers are keenly interested in developing new methods for solving these equations. While handling any such equation, three issues arise: whether the problem is solvable or not; if solvable, the possible number of solutions; and lastly, finding the complete set of solutions. Fermat's equation and Pell's equation are the best-known Diophantine equations. Diophantine equations are most often used in the fields of algebra, coordinate geometry, group theory, linear algebra, trigonometry and cryptography, and beyond these, one can even determine the number of rational points on a circle. In the present manuscript, the authors investigated the existence of a solution of a non-linear (exponential) Diophantine equation , where are non-negative integers and are primes such that has the form of a natural number n. The authors also discussed some corollaries as special cases of the equation in detail. The results of the present manuscript show that the equation of the study is not satisfied by non-negative integer values of the unknowns and . The methodology of this paper suggests a new way of solving Diophantine equations, especially for academicians, researchers and others interested in the field.
PubDate: Sep 2023
- Neutrosophic Generalized Pareto Distribution
Abstract: Publication date: Sep 2023
Source:Mathematics and Statistics Volume 11 Number 5 Nahed I. Eassa Hegazy M. Zaher and Noura A. T. Abu El-Magd The purpose of this paper is to present a neutrosophic form of the generalized Pareto distribution (NGPD), which is more flexible than the existing classical distribution and deals with indeterminate, incomplete and imprecise data in a flexible manner. In addition, NGPD is obtained as a generalization of the neutrosophic Pareto distribution. The paper also introduces special cases of it, such as the neutrosophic Lomax distribution. The mathematical properties of the proposed distributions, such as the mean, variance and moment generating function, are derived. Additionally, an analysis of reliability properties, including the survival and hazard rate functions, is presented. Furthermore, a neutrosophic random variable for the Pareto distribution is presented and recommended for use when data in interval form follow a Pareto distribution and carry some sort of indeterminacy. This research deals with statistical problems that involve inaccurate and vague data. The generalized Pareto distribution is widely used in finance to model low-probability events, so the proposed NGPD is applied to a real-world data set modelling the public debt in Egypt, handling neutrosophic scale and shape parameters; finally, the conclusions are discussed.
PubDate: Sep 2023
- Aspects of Algebraic Structure of Rough Sets
Abstract: Publication date: Sep 2023
Source:Mathematics and Statistics Volume 11 Number 5 S. Sangeetha and Shakeela Sathish Rough sets are extensions of classical sets characterized by vagueness and imprecision. The main idea of rough set theory is to use incomplete information to approximate the concept of imprecision or uncertainty, or to treat ambiguous phenomena and problems based on observation and measurement. In Pawlak rough set model, equivalence relations are a key concept, and equivalence classes are the foundations for lower and upper approximations. Developing an algebraic structure for rough sets will allow us to study set theoretic properties in detail. Several researchers studied rough sets from an algebraic perspective and a number of structures have been developed in recent years, including rough semigroups, rough groups, rough rings, rough modules, and rough vector spaces. The purpose of this study is to demonstrate the usefulness of rough set theory in group theory. There have been several papers investigating the roughness in algebraic structures by substituting an algebraic structure for the universe set. In this paper, rough groups are defined using upper and lower approximations of rough sets from a finite universe instead of considering the whole universe. Here we have considered a finite universe along with a relation which classifies the universe into equivalence classes. We have identified all rough sets with respect to this relation. The upper and lower approximated sets have been taken separately and these form a rough group equivalence relation () and it partitions the group () into equivalence classes. In this paper, the rough group approximation space () has been defined along with upper and lower approximations and properties of subsets of with respect to rough group equivalence relations have been illustrated.
PubDate: Sep 2023
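The lower and upper approximations underlying the rough groups above can be computed directly from the partition induced by the equivalence relation. A minimal sketch of Pawlak's approximations follows; the universe and partition used in the example (residues mod 3 on Z6) are illustrative choices, not the paper's.

```python
def approximations(partition, X):
    # Pawlak rough-set approximations of X relative to a partition of the
    # universe into equivalence classes: the lower approximation unions the
    # classes contained in X; the upper approximation unions the classes
    # that meet X. X is rough exactly when the two differ.
    X = set(X)
    lower, upper = set(), set()
    for block in map(set, partition):
        if block <= X:
            lower |= block
        if block & X:
            upper |= block
    return lower, upper
```

For the partition {0,3}, {1,4}, {2,5} of Z6 and X = {0,1,3}, the lower approximation is {0,3} and the upper is {0,1,3,4}, so X is rough; the pair of approximated sets is exactly the kind of object the paper builds rough groups from.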
- MID-units in Right Duo-seminearrings
Abstract: Publication date: Sep 2023
Source:Mathematics and Statistics Volume 11 Number 5 S. Senthil and R. Perumal In this paper, we focus on a subclass of duo-seminearrings called right duo-seminearrings. We also focus on the algebraic properties and peculiarities of mid-units within this class. As a logical extension of the concept of mid-identities in semirings, the concept of mid-units in right duo-seminearrings is introduced. Mid-units are elements with both left and right invertibility, making them essential for understanding the structure and behaviour of right duo-seminearrings. In particular, we examine the interaction between idempotents and seminearring mid-units. We have also investigated the regular right duo-seminearring which is a semilattice of subseminearrings with mid-units. We have established the necessary and sufficient conditions for a duo-seminearring to have a mid-unit. The aim of this work is to carry out an extensive study of the algebraic structure of right duo-seminearrings, and the major objective is to further enhance the theory of right duo-seminearrings in order to find special structures of right duo-seminearrings. Throughout the research, rigorous proofs are provided to support the theoretical developments and ensure the validity of the findings. Concrete examples are also presented to illustrate the concepts and facilitate a better understanding of the algebraic structures associated with duo-seminearrings and mid-units. These examples serve as valuable tools for researchers and practitioners interested in the application of right duo-seminearrings and mid-units in their respective fields. Due to their applicability in domains such as computer science, cryptography, and coding theory, the topic of duo-seminearrings, which generalise both semirings and duo-rings, has received substantial attention in algebraic research.
PubDate: Sep 2023
- Generalization of Riemann-Liouville Fractional Operators in Bicomplex
Space and Applications
Abstract: Publication date: Sep 2023
Source:Mathematics and Statistics Volume 11 Number 5 Mahesh Puri Goswami and Raj Kumar In this article, we generalize the Riemann-Liouville fractional differential and integral operators so that they can be applied to functions of a bicomplex variable. For this purpose, we consider the bicomplex Cauchy integral formula and some contours in bicomplex space. We elaborate these operators through some examples. Also, we contemplate some significant properties of these operators, which include a discussion of the bicomplex analytical behavior of generalized bicomplex functions through Pochhammer contours, the law of exponents, the generalized Leibniz rule along with a depiction of the region of convergence, and the generalized chain rule for Riemann-Liouville fractional operators of bicomplex order. We give an application of our work in the construction of fractional Maxwell-type equations in vacuum and source-free domains equipped with the Riemann-Liouville derivative operator. For this, we define bicomplex grad, div, and curl operators with the help of these newly defined operators. The advantage of this fractional construction of Maxwell's equations is that it may be used to build fractional non-local electronics in bicomplex space. By considering bicomplex vector fields for the respective domains, we reduce the number of these fractional Maxwell-type equations by half, which makes it easier to extract the electric and magnetic fields from the bicomplex vector fields.
PubDate: Sep 2023
- Jacobson Graph of Matrix Rings
Abstract: Publication date: Sep 2023
Source:Mathematics and Statistics Volume 11 Number 5 Siti Humaira Pudji Astuti Intan Muchtadi Alamsyah and Edy Tri Baskoro Several researchers have studied properties of the Jacobson graph of commutative rings. In this study, we expand these results by examining the Jacobson graph of a non-commutative ring with identity, focusing on the case of matrix rings. Initially, we update the definition of the Jacobson graph of non-commutative rings as a directed graph. We then find that the Jacobson graph in the matrix ring case is undirected. We can classify matrices based on rank by viewing each matrix as a linear transformation. The main result is that the ordering of the matrix rank values is proportional to the ordering of the degrees of the matrices as vertices of the graph, so that one can identify the maximum and minimum degrees in this graph. Sequentially, we describe the graph properties starting from the Jacobson graph of matrices over fields, then expanding to the Jacobson graph of matrices over local commutative rings and the Jacobson graph of matrices over non-local rings. In the end, we also give different results on the Jacobson graph of triangular matrices. The main contribution of this paper is to review the relationship between aspects of linear algebra, in the form of matrix rings, and combinatorics, in the form of the diameter and vertex degrees of this graph.
PubDate: Sep 2023
- The Generalized Inverse of Picture Fuzzy Matrices
Abstract: Publication date: Sep 2023
Source:Mathematics and Statistics Volume 11 Number 5 V. Kamalakannan P. Murugadas and M. Kavitha The generalized inverse is crucial in matrix theory. In many applications, such as control systems, robotics, and signal processing, the generalized inverse of matrices is critical. The generalized inverse of a picture fuzzy matrix is likewise essential for solving a variety of real-world problems. Because of their ability to handle uncertain and imprecise medical data, applications of the generalized inverse of picture fuzzy matrices have gained significant attention in the medical field. Numerous researchers have investigated generalized inverses of fuzzy matrices and intuitionistic fuzzy matrices. The picture fuzzy set is an effective mathematical model for dealing with uncertain real-world issues. The picture fuzzy matrix is a generalization of the classical fuzzy matrix and the intuitionistic fuzzy matrix. In this research, a method for determining the generalized inverse (g-inverse) of a picture fuzzy matrix is implemented. In addition, the concept of a standard basis for picture fuzzy vectors is established. A few results related to the g-inverse of a picture fuzzy matrix are presented with relevant examples. An algorithm for evaluating the generalized inverse of a picture fuzzy matrix is provided. This study concludes with an application of the g-inverse of a picture fuzzy matrix.
PubDate: Sep 2023
- A Joint Chance Constrained Programming with Bivariate Dagum Distribution
Abstract: Publication date: Sep 2023
Source:Mathematics and Statistics Volume 11 Number 5 Khalid M. El-khabeary Afaf El-Dash Nada M. Hafez and Samah M. Abo-El-hadid The joint chance-constrained programming (JCCP) technique is regarded as one of the most useful stochastic programming techniques. It is well suited to solving uncertain real-world problems, especially economic and social problems, where some of the model parameters are positively dependent random variables that follow well-known probability distributions. In this paper, we consider a linear JCCP problem in which some right-hand side random parameters are dependent and follow Dagum distributions. So, firstly, we derive a bivariate Dagum distribution with seven parameters whose marginals follow the Dagum distribution with three parameters. This proposed bivariate Dagum distribution is based on the Farlie-Gumbel-Morgenstern copula (as presented in theorem (2.1)). Secondly, the proposed bivariate distribution is used in the context of the JCCP technique to transform a linear JCCP model into an exact equivalent deterministic nonlinear programming model through theorem (3.1). Thirdly, through theorem (3.2), we prove that the obtained exact equivalent deterministic nonlinear programming model is a convex model; hence any nonlinear programming method can be used to solve it and find the global optimal solution. Finally, in order to demonstrate how to convert a linear JCCP model into an equivalent deterministic nonlinear programming model and solve it using the cutting plane method, a numerical example is included.
PubDate: Sep 2023
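A minimal sketch of the copula construction named above: draw (u, v) from the Farlie-Gumbel-Morgenstern copula C(u,v) = uv[1 + θ(1-u)(1-v)] by conditional inversion, then push each coordinate through the three-parameter Dagum quantile function F⁻¹(u) = b(u^(-1/p) - 1)^(-1/a). The parameter triples are illustrative assumptions; the paper's seven-parameter bivariate density and its JCCP transformation are not reproduced here.

```python
import math
import random

def dagum_ppf(u, a, b, p):
    # Inverse CDF of Dagum(a, b, p), whose CDF is F(x) = (1 + (x/b)**-a)**-p.
    return b * (u ** (-1.0 / p) - 1.0) ** (-1.0 / a)

def fgm_pair(theta, rng):
    # Sample (u, v) from the FGM copula via conditional inversion:
    # solving dC/du = t for v gives the quadratic below (|theta| <= 1).
    u, t = rng.random(), rng.random()
    a = 1.0 + theta * (1.0 - 2.0 * u)
    b = math.sqrt(a * a - 4.0 * (a - 1.0) * t)
    v = 2.0 * t / (a + b)       # rationalized root, stable as theta -> 0
    return u, v

def bivariate_dagum(theta, m1, m2, n, seed=0):
    # m1, m2: (a, b, p) parameter triples for the two Dagum marginals.
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u, v = fgm_pair(theta, rng)
        out.append((dagum_ppf(u, *m1), dagum_ppf(v, *m2)))
    return out
```

θ must lie in [-1, 1]; a positive θ induces the positive dependence between the right-hand side parameters that the JCCP model assumes.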
- A Study on Tripled Fixed Point Results in G_{JS}-Metric Space
Abstract: Publication date: Sep 2023
Source:Mathematics and Statistics Volume 11 Number 5 D. Srilatha and V. Kiran The generalization of metric spaces is an evergreen topic of interest to many researchers. In order to generalize a metric space, researchers have proposed various methods, such as weakening one condition in the definition of a metric or combining the notion of one metric space with the notion of one or more other metric spaces. Recently, SJS-metric spaces and Sb-metric spaces have been introduced by combining the notion of an S-metric space with those of JS-metric and b-metric spaces respectively. Similarly, Gb-metric spaces have been introduced as a generalization of G-metric spaces using the b-metric. This notion motivated the present study. The purpose of this article is to introduce GJS-metric spaces and to present some fixed point theorems in GJS-metric spaces. By introducing the idea of a GJS-metric space, we combine the notions of two metric spaces, namely the G-metric space and the JS-metric space. First, we begin with some basic definitions which are useful for the introduction of the GJS-metric, and then proceed with the necessary standard topological concepts of GJS-metric spaces. Then, using these topological concepts, we achieve some specific and principal results on GJS-metric spaces. Further, by providing suitable examples wherever required, we demonstrate the mutual independence of G-metric, GJS-metric and JS-metric spaces. Finally, we validate the conditions for the existence of a tripled fixed point and verify its uniqueness by considering various cases on GJS-metric spaces.
PubDate: Sep 2023
- Study of Intuitionistic Fuzzy Super Matrices and Its Application in
Decision Making
Abstract: Publication date: Sep 2023
Source:Mathematics and Statistics Volume 11 Number 5 Siddharth Shah Rudraharsh Tewary Manoj Sahni Ritu Sahni Ernesto Leon Castro and Jose Merigo Lindahl Recent developments in fuzzy theory have been of great use in providing a framework for understanding situations involving decision-making. However, these tools have limitations, such as the fact that multi-attribute decision-making problems cannot be described in a single matrix. Fuzzy and intuitionistic fuzzy matrices are important tools for these types of problems. We present a new super matrix theory in the intuitionistic fuzzy environment in order to overcome these restrictions. This theory can readily cope with problems that include numerous attributes while addressing belongingness and non-belongingness criteria. Hence, it introduces a fresh perspective into our thinking, which in turn enables us to generalize our findings and arrive at more sound conclusions. For the purpose of theoretical development, we define a variety of different kinds of intuitionistic fuzzy super matrices and present a number of essential algebraic operations in order to make the theory more applicable to real-world situations. One multi-criteria decision-making problem based on super matrix theory is discussed for the sake of validating and illustrating the applicability of the established findings. In addition, we suggest a general multi-criteria decision-making algorithm that makes use of intuitionistic fuzzy super matrix theory. This algorithm is more flexible than both intuitionistic fuzzy matrix and fuzzy super matrix theories and can be applied to the resolution of a wide range of issues. The proposed theory is validated with a real-world example that shows its importance.
PubDate: Sep 2023
- On Nash Equilibrium Solutions for Rough Differential Games
Abstract: Publication date: Nov 2023
Source:Mathematics and Statistics Volume 11 Number 6 Abd El-Monem A. Megahed Mohamed R. Zeen El Deen and Asmaa A. Ahmed The purpose of this paper is to investigate the Nash equilibrium concept for differential games when there is uncertainty in the information available to the players. Our study examines the problem of uncertainty in player information during the game using the "rough sets" concept, which is widely used for such problems. Furthermore, we also explore the possible alliance between continuous differential games and the rough programming approach. Our primary aim is to ascertain the Nash equilibrium for a differential game in situations where the players have uncertain information, so that the players exert rough control and the trajectory of the system state is rough as well. We derive the necessary and sufficient conditions for the open-loop Nash equilibrium of the rough differential game. Additionally, we make use of the expected value operator and trust measure of a rough interval to convert the rough problem into a crisp problem, allowing us to calculate the expected Nash equilibrium strategies and α-trust Nash equilibrium strategies for the game. Finally, a numerical example is given that outlines the steps involved in producing the rough interval of the Nash equilibrium and the system state trajectory for the rough differential game. Moreover, this example demonstrates how to obtain each crisp problem from a rough one and then determine its Nash equilibrium and the corresponding state trajectory.
PubDate: Nov 2023
- Development and Isometry of Surfaces in Galilean Space G3
Abstract: Publication date: Nov 2023
Source:Mathematics and Statistics Volume 11 Number 6 B.M. Sultanov A. Kurudirek and Sh.Sh. Ismoilov The study of the geometry of semi-Euclidean spaces is currently an urgent task in geometry. In the singular parts of pseudo-Euclidean spaces, a geometry associated with a degenerate metric appears. A special case of this geometry is the geometry of Galileo. The basic concepts of the geometry of Galilean space are given in the monograph by A. Artykbaev. There, differential geometry "in the small" is studied, the first and second fundamental forms of surfaces and geometric characteristics of surfaces are determined, and the derivational equations of surfaces and analogs of the Peterson-Codazzi and Gauss formulas are calculated. This paper studies the development and isometry of surfaces in Galilean space. The isometry of surfaces in Galilean space is divided into three types: semi-isometry, isometry, and complete isometry. This separation is due to the degeneracy of the Galilean space metric. The existence of a development of a surface that projects uniquely onto a plane in general position is proved, as well as the conditions for isometric and completely isometric surfaces of Galilean space. We present the conditions, associated with the analog of the Christoffel symbols, providing isometries of the surfaces of Galilean space. An example of surfaces in G3 that are isometric but not completely isometric is given. The concept of surface development is generalized to Galilean space, and a development of the surface is obtained which projects uniquely onto a plane in general position. In addition, the Gaussian curvature of the surface is shown to be completely determined by the Christoffel symbols.
PubDate: Nov 2023
- Limit Theorems for Functionals of Random Convex Hulls in a Unit Disk
Abstract: Publication date: Nov 2023
Source:Mathematics and Statistics Volume 11 Number 6 Isakjan Khamdamov and Azam Imomov In this article, we study functionals of the convex hull generated by independent two-dimensional random points. The random points are given in polar coordinates, their components are independent of each other, the angular coordinate is uniformly distributed, and the tail of the distribution of the radial coordinate is a regularly varying function near the unit circle bounding the disk that forms the support. Here, by approximating the binomial point process with an inhomogeneous Poisson one, it is possible to study the asymptotic properties of the main functionals of the convex hull. Using the independence property of the increments of Poisson processes, we find asymptotic expressions for the means and variances of the main functionals of the convex hull. Uniform boundedness of exponential moments is proved for the same functionals in the case when the convex hull is generated from an inhomogeneous Poisson point process inside the disk. The indicated independence property of the increments of the Poisson process allows us to express the area of the convex hull as a sum of independent identically distributed random variables, with which we prove the central limit theorem for the number of vertices and the area of the convex hull. From the results obtained, we can conclude that if the tail of the distribution near the boundary is heavier, then there are many sample points near the support boundary, and hence many vertices of the convex hull, while the area enclosed between the convex hull and the circle, as well as the difference between the perimeter of the convex hull and that of the circle, becomes negligible.
PubDate: Nov 2023
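The hull functionals above can be explored numerically. The sketch below is an illustrative simulation only — it samples points uniformly in the unit disk rather than from the paper's radially concentrated inhomogeneous Poisson model — and estimates the hull area with Andrew's monotone chain and the shoelace formula:

```python
import math
import random

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(hull):
    """Shoelace formula for a simple polygon given in order."""
    n = len(hull)
    s = sum(hull[i][0]*hull[(i+1) % n][1] - hull[(i+1) % n][0]*hull[i][1]
            for i in range(n))
    return abs(s) / 2.0

random.seed(0)
n = 5000
# Rejection-sample n points uniformly in the unit disk.
points = []
while len(points) < n:
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x*x + y*y <= 1:
        points.append((x, y))

area = polygon_area(convex_hull(points))
print(round(area, 3))  # close to, but strictly below, the disk area pi
```

As the abstract predicts, concentrating the radial distribution near the boundary would push the hull area even closer to the area of the disk.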
- On The Metric Dimension for The Line Graphs of Hammer and Triangular
Benzene Structures
Abstract: Publication date: Nov 2023
Source:Mathematics and Statistics Volume 11 Number 6 R. Nithya Raj, R. Sundara Rajan, Haewon Byeon, CT. Nagaraj and G. Kokila The metric dimension of a chemical graph is a fundamental parameter in the study of molecular structures and their properties. It is a numerical measure of the smallest set of atoms required to uniquely determine the location of all other atoms within the molecule. We explore the concept of metric dimension in chemical graphs, discussing its theoretical foundations and its applications in fields such as navigation, network theory, drug design, optimization, pattern recognition, computational chemistry, and materials science. Understanding the metric dimension of chemical graphs enables the identification of crucial atoms or bonds that significantly impact the properties and behavior of molecules, aiding the design of more effective drugs, catalysts, and materials. Finding the metric dimension of an arbitrary graph is an NP-complete problem. A set of nodes W in a graph G is regarded as a locating set if, for every pair of nodes u and v in G, there is at least one node w in W such that the separation between u and w is not the same as the separation between v and w. The metric dimension of G is the minimum size of a locating set for G. The primary objective of this work is to prove that the metric dimensions of the line graphs of the hammer and triangular benzene structures are 2 and 3, respectively. We also establish that this class of line graphs, which includes the hammer and triangular benzene structures, has constant metric dimension.
PubDate: Nov 2023
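The locating-set definition can be checked by brute force on small graphs. The sketch below is not the paper's method (which derives bounds analytically for specific line graphs); it assumes an adjacency-dict representation and exhaustively searches candidate sets:

```python
import collections
from itertools import combinations

def dist_matrix(adj):
    """All-pairs shortest paths by BFS on an adjacency dict."""
    d = {}
    for s in adj:
        seen = {s: 0}
        q = collections.deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen[v] = seen[u] + 1
                    q.append(v)
        d[s] = seen
    return d

def metric_dimension(adj):
    """Size of a smallest locating (resolving) set, exhaustive search:
    a set W resolves G if the distance vectors to W are all distinct."""
    d = dist_matrix(adj)
    nodes = list(adj)
    for k in range(1, len(nodes) + 1):
        for cand in combinations(nodes, k):
            sigs = {tuple(d[w][v] for w in cand) for v in nodes}
            if len(sigs) == len(nodes):   # every vertex resolved
                return k
    return len(nodes)

# 6-cycle C6: the metric dimension of any cycle C_n (n >= 3) is 2.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(metric_dimension(c6))  # 2
```

Exhaustive search is exponential, which is consistent with the NP-completeness noted in the abstract; it is usable only for small instances.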
- The Number of Games to Win by Two Points
Abstract: Publication date: Nov 2023
Source:Mathematics and Statistics Volume 11 Number 6 Nahathai Rerkruthairat and Noppadon Wichitsongkram Draws or ties sometimes occur in sports. Tiebreakers are forms of competition that break ties and decide the winner when a draw occurs; depending on the type of tiebreaker, some end the competition sooner and some later. In this article, we calculate the expectation and variance of the number of games played after a draw for tiebreakers that require players to win by two points. We focus on three win-by-two formats used in popular sports such as tennis, volleyball, and racquetball. By calculating the expected number of games, we can compare approximately how many games each type of tiebreaker takes to end the match. In these sports, the rules for gaining each point are usually the same, so there is the same finite set of states that the players or teams can reach at each point, and each possible state depends only on the previous state. Since a Markov chain is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event, we can apply Markov chains to solve these problems.
PubDate: Nov 2023
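The simplest win-by-two setting (a tennis-style deuce) illustrates the Markov-chain calculation. The sketch below is a generic worked example, not one of the paper's three specific formats; p is the probability that player A wins any given point:

```python
def expected_points_after_deuce(p):
    """Expected number of points played after a tie when a player must
    lead by two to win. Points come in pairs: a pair is decisive with
    probability p^2 + q^2 (one player takes both points), so the number
    of pairs is geometric and E = 2 / (p^2 + q^2)."""
    q = 1.0 - p
    return 2.0 / (p * p + q * q)

def expected_points_markov(p):
    """Same quantity from first-step analysis of the Markov chain with
    states deuce (D), advantage-A, advantage-B:
        E_D = 1 + p*E_A + q*E_B,  E_A = 1 + q*E_D,  E_B = 1 + p*E_D.
    Substituting gives E_D * (1 - 2pq) = 2; note 1 - 2pq = p^2 + q^2."""
    q = 1.0 - p
    return 2.0 / (1.0 - 2.0 * p * q)

print(expected_points_after_deuce(0.5))  # 4.0 for evenly matched players
```

Both routes agree, which is a useful sanity check on the first-step equations.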
- Generalized Half-Logistic Distribution Using Linear Regression Model
Abstract: Publication date: Nov 2023
Source:Mathematics and Statistics Volume 11 Number 6 Ahmed Al-Adilee and Wasan Al-Shemmari In this study, the generalized half-logistic distribution (GHLD) is extended by replacing its shape parameter with a linear model involving a vector of explanatory variables and a vector of coefficients, one for each explanatory variable; the linear model thus represents the effects of several explanatory variables. The proposed distribution is denoted LM-GHLD. After deriving its pdf and cdf, many mathematical and statistical characteristics are investigated, such as the survival function, the hazard function, the moments, the moment generating function, quantiles, the Rényi entropy, and the order statistics. The unknown parameters of the new distribution are estimated by the non-Bayesian method known as maximum likelihood estimation (MLE). An important part of the study is a simulation, carried out for different sample sizes. A goodness-of-fit measure is applied to real data sets to compare the classical distribution (GHLD) with the proposed distribution (LM-GHLD), enabling us to determine which fits better. Finally, we provide conclusions and summarize our findings.
PubDate: Nov 2023
- Adomian Decomposition Method for Solving Fuzzy Hilfer Fractional
Differential Equations
Abstract: Publication date: Nov 2023
Source:Mathematics and Statistics Volume 11 Number 6 V. Padmapriya and M. Kaliyappan The field of fractional calculus is mainly concerned with differentiation and integration of arbitrary orders, a concept present in various domains of science and engineering. The Caputo and Riemann-Liouville fractional definitions are the most familiar. Recently, Hilfer related the Caputo and Riemann-Liouville derivatives by a general formula; this connection is referred to as the Hilfer, or generalized Riemann-Liouville, derivative. The Hilfer fractional derivative serves as an intermediary between the Riemann-Liouville and Caputo fractional derivatives, providing a means of interpolation, and its parameters provide more degrees of freedom. The Adomian decomposition method (ADM) is widely regarded as a highly effective mathematical technique for solving both linear and nonlinear differential equations; it provides an analytical solution in the form of a series. Motivated by the growing number of real-life applications of fractional calculus, the objective of this work is to explore solutions of Hilfer fractional differential equations in a fuzzy sense using the ADM. The efficiency and accuracy of the proposed method are demonstrated by solving numerical examples, with graphical representations provided to visualize the behavior of the solutions, which approach the exact solutions as the number of series terms increases.
PubDate: Nov 2023
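The ADM's series-solution idea can be seen on the classical linear test problem u'(t) = u(t), u(0) = 1 (not a fuzzy or fractional equation — just the simplest illustration of the decomposition): each component solves u_{n+1}(t) = ∫₀ᵗ u_n(s) ds with u_0 = 1, giving u_n(t) = tⁿ/n!, so the partial sums converge to eᵗ:

```python
import math

def adm_exponential(t, terms=10):
    """Partial sum of the Adomian decomposition series for u' = u,
    u(0) = 1: the recursion u_{n+1}(t) = integral of u_n from 0 to t
    yields u_n(t) = t**n / n!, i.e. the Taylor series of exp(t)."""
    return sum(t ** n / math.factorial(n) for n in range(terms))

print(abs(adm_exponential(1.0, 10) - math.e))  # error shrinks with terms
```

This mirrors the abstract's observation that the numerical results approach the exact solution as the number of series terms increases.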
- Σ-uniserial Modules and Their Properties
Abstract: Publication date: Nov 2023
Source:Mathematics and Statistics Volume 11 Number 6 Ayazul Hasan and Jules Clement Mba The close association between abelian group theory and the theory of modules has been extensively studied in the literature. In fact, the theory of abelian groups is one of the principal motivations for new research in module theory. As is well known, module theory proceeds by generalizing the theory of abelian groups, providing new viewpoints on various structures for torsion abelian groups. The theory of torsion abelian groups is significant because it generates natural problems in QTAG-module theory. The notion of a QTAG (torsion abelian group like) module is one of the most important tools in module theory; its importance lies in the fact that it accurately generalizes torsion abelian groups. Significant work on QTAG-modules has been produced by many authors, concentrating on establishing when torsion abelian groups are actually QTAG-modules. Two rather natural problems arise in connection with Σ-uniserial modules, namely: the QTAG-module M is Σ-uniserial if and only if all N-high submodules of M are Σ-uniserial, for some basic submodule N of M; and M is not a Σ-uniserial module if and only if it contains a proper (ω + 1)-projective submodule. The current work explores these two problems for QTAG-modules. Some related concepts and problems are also considered. Our overall aim is to review the relationship between aspects of group theory, in the form of torsion abelian groups, and the theory of modules, in the form of QTAG-modules.
PubDate: Nov 2023
- A New Wavelet-based Galerkin Method of Weighted Residual Function for The
Numerical Solution of One-dimensional Differential Equations
Abstract: Publication date: Nov 2023
Source:Mathematics and Statistics Volume 11 Number 6 Iweobodo D. C., Njoseh I. N. and Apanapudor J. S. In this paper, we develop a new wavelet-based Galerkin method of weighted residual function. To achieve this, we consider the wavelet transform as it relates to orthogonal polynomials, develop new wavelets using the Mamadu-Njoseh polynomials, and formulate a basis function with the newly developed wavelets. We describe how to implement solutions with the new method and apply it to obtain approximate solutions of some one-dimensional differential equations with Dirichlet boundary conditions. The results obtained were compared with the exact solutions and with results from the classical finite difference method (FDM) in the literature. The newly developed method demonstrated high efficiency in providing approximate solutions to differential equations; the study revealed that it converges at a good pace to the exact solution, confirming the accuracy and effectiveness of its solutions. All computations in this work were carried out with the MAPLE 18 software.
PubDate: Nov 2023
- An Optimal Approach to Identify the Importance of Variables in Machine
Learning Using Cuckoo Search Algorithm
Abstract: Publication date: Nov 2023
Source:Mathematics and Statistics Volume 11 Number 6 Asep Rusyana, Aji Hamim Wigena, I Made Sumertajaya and Bagus Sartono Different machine learning algorithms may produce different orderings of variable importance even when they use an identical dataset. These differing measures make it difficult to conclude which predictor variables are the most important, so there is a need to unify the scores into a single ordering from which the analyst can draw a conclusive decision more easily. This research applies the Cuckoo Search algorithm to unify those orderings into a single one. A simulation study was conducted to confirm that the approach works well under several data scenarios: we applied the algorithm to identify variable importance when the correlations among predictors are low, moderate, and high. The results show that the proposed variable importance measure performs best when applied to predictors that are independent of each other, and is generally more accurate than the variable importance measures of the individual machine learning methods. The algorithm was also applied to identify important variables for recognizing food insecurity in households in Indonesia. The proposed variable importance measure has good accuracy, and the accuracy is higher when the number of variables is greater than ten.
PubDate: Nov 2023
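The underlying optimisation is rank aggregation: find one consensus ordering closest to the orderings produced by the different models. The sketch below solves a tiny instance exhaustively under the Kendall-distance criterion — a stand-in for the paper's Cuckoo Search, which is needed only when the variable set is too large for exhaustive search; the model orderings are invented for illustration:

```python
import itertools

def kendall_distance(r1, r2):
    """Number of discordant pairs between two rankings (lists of items)."""
    pos1 = {v: i for i, v in enumerate(r1)}
    pos2 = {v: i for i, v in enumerate(r2)}
    return sum(1 for a, b in itertools.combinations(list(pos1), 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

def aggregate(rankings):
    """Consensus ranking minimising the total Kendall distance to all
    input rankings (exhaustive search; fine for a handful of variables)."""
    items = rankings[0]
    best = min(itertools.permutations(items),
               key=lambda perm: sum(kendall_distance(list(perm), r)
                                    for r in rankings))
    return list(best)

# Three hypothetical ML models rank four predictors differently:
orders = [["x1", "x2", "x3", "x4"],
          ["x1", "x3", "x2", "x4"],
          ["x2", "x1", "x3", "x4"]]
print(aggregate(orders))  # ['x1', 'x2', 'x3', 'x4']
```

A metaheuristic such as Cuckoo Search explores the permutation space stochastically instead of enumerating it, trading exactness for scalability.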
- Identifying and Estimating Seasonal Moving Average Models by Mathematical
Programming
Abstract: Publication date: Nov 2023
Source:Mathematics and Statistics Volume 11 Number 6 Rasha A. Farghali, Hemat M. Abd-Elgaber and Essam A. Ahmed In this paper, a novel method is presented for simultaneously identifying and estimating Seasonal Moving Average (SMA) models, a special case of the Seasonal Autoregressive Integrated Moving Average (SARIMA) models introduced by Box and Jenkins. To accomplish this, we utilize a mixed-integer nonlinear programming (MINLP) model, which falls within the class of optimization problems involving integer and continuous decision variables together with nonlinear objective functions and/or constraints. The advantage of employing MINLP lies in its ability to provide a more flexible representation of real-world problems. The aim is to identify and estimate the appropriate SMA model, specifically determining whether it is multiplicative or non-multiplicative. To evaluate the effectiveness of the proposed MINLP approach, we conducted both a simulation study and real-world applications. In the simulation study, we generated 1000 time series datasets from each of twelve SMA models, comprising six multiplicative and six non-multiplicative SMA models with different orders. Additionally, we examined the effectiveness of MINLP through two real-world applications: carbon dioxide levels data and college enrollment data. The results obtained from both the simulation study and the real-world applications consistently demonstrate the effectiveness of MINLP in accurately identifying the appropriate SMA model. These findings support the applicability and reliability of the proposed method in practical scenarios. Overall, our research contributes to the field of time series analysis by providing a new approach for identifying and estimating SMA models using MINLP, paving the way for improved forecasting and decision-making in various domains.
PubDate: Nov 2023
- Some Properties of Cyclic and Dihedral Homology for Schemes
Abstract: Publication date: Nov 2023
Source:Mathematics and Statistics Volume 11 Number 6 Samar A. A. Quota, Faten R. Kara, O. H. Fathy and W. M. Mahmoud A scheme is a mathematical construction that extends the concept of algebraic variety in a number of ways, including accounting for multiplicities and being defined over any commutative ring. In this article, we study some properties of cyclic and dihedral homology theory for schemes. We study the long exact sequence of the cyclic homology of a scheme and prove some results; in particular, we introduce and study Morita equivalence in the cyclic homology of schemes and prove the main relation between the trace map and the inclusion map. Our goal is to explain product structures on cyclic homology groups. We give the relations between the dihedral homology and the cyclic homology of schemes, describe the trace and inclusion maps of cyclic homology for scheme algebras, and, using the shuffle map, obtain long exact sequences of cyclic and dihedral homology for schemes, which we also organize into commutative diagrams.
PubDate: Nov 2023
- Some Convergence Properties of a Random Closed Set Sequence
Abstract: Publication date: Nov 2023
Source:Mathematics and Statistics Volume 11 Number 6 Bourakadi Ahssaine, Baraka Achraf Chakir and Khalifi Hamid In this article, we discuss the properties of the probability law T, called the capacity functional, and the closely related functionals Q and C pertaining to random closed sets. We are interested in T, the most widely used functional in random set theory. We establish that T takes values in the interval [0,1], and prove, by probabilistic techniques, that it is increasing with respect to inclusion and sub-additive. Moreover, we explore the various types of convergence of a sequence of random closed sets, such as weak convergence, strong convergence (almost sure in the sense of Hausdorff), convergence in the sense of Painlevé-Kuratowski and Wijsman-Mosco, and convergence in probability. In the second part of our work, we prove a new corollary stating that strong convergence in the sense of Hausdorff implies convergence in probability of a sequence of random closed sets at infinity. The proof involves the mathematical expectation of a discrete variable and the indicator variable, a random variable taking the two possible values 0 and 1.
PubDate: Nov 2023
- Product Properties for Generalized Pairwise Lindelöf Spaces
Abstract: Publication date: May 2023
Source:Mathematics and Statistics Volume 11 Number 3 Zabidin Salleh, Muzafar Nurillaev and Che Mohd Imran Che Taib In topological spaces, compactness is preserved under products, but Lindelöfness is not, unless one or more factors satisfy additional conditions. Similar results hold for bitopological spaces: the property of being a pairwise Lindelöf bitopological space is not preserved under products unless one or more factors satisfy additional conditions, for instance, -spaces. The Cartesian product of arbitrarily many bitopological spaces was defined by Datta in 1972, and since then many researchers have studied product bitopological spaces in their own directions. In this paper, we study finite products of pairwise nearly Lindelöf, pairwise almost Lindelöf, and pairwise weakly Lindelöf spaces. We show, by counterexamples, that none of these generalized pairwise Lindelöf properties is preserved under products. Furthermore, we give some sufficient conditions for these three bitopological properties to be preserved under finite products: one or more of the spaces has to be a -space, or the product has to be a pairwise weak -space. Another interesting result is that the projection of the product of these generalized pairwise Lindelöf spaces with a -space is a closed map.
PubDate: May 2023
- Isomorphism Criteria for A Subclass of Filiform Leibniz Algebras
Abstract: Publication date: May 2023
Source:Mathematics and Statistics Volume 11 Number 3 I.S. Rakhimov In this paper, we propose three isomorphism criteria for a subclass of finite-dimensional Leibniz algebras. Isomorphism Criterion 1 was given earlier (see [5]). We introduce notation for new structure constants and, using this notation, state Isomorphism Criterion 2. To formulate Isomorphism Criterion 3, we introduce the required "semi-invariant functions". We prove that the three isomorphism criteria are equivalent. Isomorphism Criterion 3 is convenient for finding the invariant functions that represent isomorphism classes. The proof of the isomorphism criteria in the general case is computational and is based on the hypothetical convolution identities given in [11]; therefore, we give details in the ten-dimensional case.
PubDate: May 2023
- Monte Carlo Algorithms for the Solution of Quasi-Linear Dirichlet Boundary
Value Problems of Elliptical Type
Abstract: Publication date: May 2023
Source:Mathematics and Statistics Volume 11 Number 3 Abdujabar Rasulov The application of Monte Carlo methods in various fields is constantly growing due to increases in computer capabilities. Increasing speed and memory and the wide availability of multiprocessor computers allow us to solve many problems using the "method of statistical sampling", better known as the Monte Carlo method. Monte Carlo methods have particular strengths: algorithmic simplicity with a strong analogy to the underlying physical processes; the ability to solve complex realistic problems that include sophisticated geometry and many physical processes; the ability to solve problems in high dimensions; the ability to obtain point solutions or to evaluate linear functionals of the solution; error estimates that can be obtained empirically for all types of problems in a parallel way; and ease of efficient parallel implementation. A shortcoming of the method is the slow convergence rate of the error, namely O(N^(-1/2)), where N is the number of numerical experiments, i.e., realizations of the random variable. In this paper, we propose Monte Carlo algorithms for the solution of the interior Dirichlet boundary value problem (BVP) for the Helmholtz operator with a polynomial nonlinearity on the right-hand side. The statistical algorithm is justified, the complexity of the proposed algorithms is investigated, and ways of decreasing the computational work are considered.
PubDate: May 2023
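A standard Monte Carlo approach to Dirichlet problems is the walk-on-spheres algorithm. The sketch below treats only the simplest linear case — the Laplace equation on the unit disk, not the paper's nonlinear Helmholtz problem — with harmonic boundary data g(x, y) = x² − y², for which the exact interior solution is x² − y²:

```python
import math
import random

def walk_on_spheres(x, y, g, eps=1e-3, rng=random):
    """One walk-on-spheres sample for the Laplace-Dirichlet problem on
    the unit disk: jump uniformly on the largest circle centred at the
    current point that stays inside the domain, until within eps of the
    boundary, then score the boundary value g at the nearest boundary
    point."""
    while True:
        d = math.hypot(x, y)
        r = 1.0 - d                      # distance to the unit circle
        if r < eps:
            return g(x / d, y / d)       # project to boundary and score
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)

random.seed(1)
g = lambda x, y: x * x - y * y           # boundary data, harmonic inside
n = 20000
est = sum(walk_on_spheres(0.3, 0.2, g) for _ in range(n)) / n
print(round(est, 3))  # exact value u(0.3, 0.2) = 0.09 - 0.04 = 0.05
```

The empirical error here behaves like the O(N^(-1/2)) rate quoted in the abstract; handling the Helmholtz term and the polynomial nonlinearity requires the branching-walk machinery developed in the paper.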
- Statistical Convergence on Intuitionistic Fuzzy Normed Spaces over
Non-Archimedean Fields
Abstract: Publication date: May 2023
Source:Mathematics and Statistics Volume 11 Number 3 N. Saranya and K. Suja This paper explores the fundamental properties of statistically convergent sequences over non-Archimedean fields. Statistical convergence plays a fundamental role in pure mathematics; the idea is an extension of the concept of convergence. Statistical convergence has been discussed in various fields of mathematics, namely ergodic theory, fuzzy set theory, approximation theory, measure theory, probability theory, trigonometric series, number theory, and Banach spaces, where problems have been resolved using the concept. Summability theory and functional analysis are two disciplines that rely heavily on statistical convergence. The study of analysis over non-Archimedean fields is called non-Archimedean analysis. The objective of this paper is to extend the concepts of statistical convergence and statistically Cauchy sequences to non-Archimedean intuitionistic fuzzy normed spaces and to obtain some relevant results about them. This article proves that some properties of statistically convergent sequences that fail classically hold over a non-Archimedean field. Furthermore, in these spaces, we define statistical completeness and statistical continuity and establish some fundamental facts. Throughout this paper, the underlying field is a complete, non-trivially valued, non-Archimedean field.
PubDate: May 2023
- Upper Bound for Partition Dimension of Comb Product of a Wheel Graph and
Tree
Abstract: Publication date: May 2023
Source:Mathematics and Statistics Volume 11 Number 3 Faisal and Andreas Martin The concept of partition dimension in graph theory was first introduced by Chartrand et al. [1] as a variation of metric dimension. Since then, numerous studies have attempted to determine the partition dimensions of various types of graphs; however, for many types of graphs the partition dimension remains unknown, as determining the partition dimension of a general graph is an NP-complete problem. In this study, we determine the partition dimension of a specific graph, namely the comb product of a wheel and a tree. One approach to finding the partition dimension of a graph is to determine its upper and lower bounds. In this article, we propose an upper bound for the partition dimension of the comb product using number representations in certain bases. We divide the problem into two cases based on the path graph. For the first case, the comb product with a path of a single vertex, Tomescu et al. [2] have already provided an upper bound. In the other case, we utilize the bijection property of a number system on the numbered copies of the tree to find an upper bound. Our results show that the partition dimension in the second case has a smaller upper bound than the general upper bound proposed by Chartrand et al. [1].
PubDate: May 2023
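The partition-dimension definition can be made concrete by brute force on small graphs: a partition of the vertices is resolving if the vectors of distances from each vertex to the partition classes are pairwise distinct. The sketch below is illustrative only (the paper's bounds are analytic; exhaustive search is exponential):

```python
import collections
import itertools

def bfs_dist(adj, s):
    """Distances from s to every vertex by BFS on an adjacency dict."""
    d = {s: 0}
    q = collections.deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def partition_dimension(adj):
    """Smallest k such that some partition of the vertex set into k
    non-empty classes is resolving (brute force, small graphs only)."""
    nodes = list(adj)
    dist = {v: bfs_dist(adj, v) for v in nodes}
    n = len(nodes)
    for k in range(2, n + 1):
        for assign in itertools.product(range(k), repeat=n):
            if len(set(assign)) != k:        # every class non-empty
                continue
            blocks = [[nodes[i] for i in range(n) if assign[i] == b]
                      for b in range(k)]
            vecs = {tuple(min(dist[v][u] for u in blk) for blk in blocks)
                    for v in nodes}
            if len(vecs) == n:               # all distance vectors distinct
                return k
    return n

p4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(partition_dimension(p4))  # 2: paths are exactly the graphs with pd 2
```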
- Numerical Approximation of Volterra Integro-Differential Equations of the
Second Kind Using Boole's Quadrature Rule Method
Abstract: Publication date: May 2023
Source:Mathematics and Statistics Volume 11 Number 3 Muhammad Ashraf Darus Nurul Huda Abdul Aziz Deraman F. Asi Salina M. S. Anuar and Zakaria H. L. This article presents the numerical approximation of Volterra integro-differential equations (VIDEs) of the second kind using quadrature rules in a modified block method. A new implementation of the block method, which uses the closest points to approximate two solutions concurrently, is taken into account. This method has the advantage of reducing the total number of steps and function evaluations compared to the classical multistep method. Quadrature techniques consisting of the trapezoidal rule, Simpson's 1/3 rule, Simpson's 3/8 rule, and Boole's quadrature rule are used to approximate the integral part of the kernel function. The analysis of the order, error constant, consistency, and convergence of the proposed method for VIDEs is presented. The stability analysis is derived by applying the specified linear test equation to both approximate solutions until the stability polynomial is obtained. To validate the efficiency of the developed method, some numerical results are presented and compared with an existing method. The modified block method gives better accuracy and efficiency in terms of maximum error and the number of steps and function calls.
PubDate: May 2023
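Of the quadrature rules listed, Boole's rule is the highest-order one. The sketch below shows the standalone composite rule (not its embedding in the paper's block method for VIDEs): on each block of four panels of width h, the integral is approximated by (2h/45)(7f₀ + 32f₁ + 12f₂ + 32f₃ + 7f₄):

```python
import math

def booles_rule(f, a, b, n):
    """Composite Boole's rule over [a, b]; n (number of subintervals)
    must be a multiple of 4. Degree of precision 5, error O(h^6)."""
    assert n % 4 == 0
    h = (b - a) / n
    total = 0.0
    for i in range(0, n, 4):
        x = [a + (i + j) * h for j in range(5)]
        f0, f1, f2, f3, f4 = (f(t) for t in x)
        total += 2.0 * h / 45.0 * (7*f0 + 32*f1 + 12*f2 + 32*f3 + 7*f4)
    return total

# Smooth test integrand: integral of sin over [0, pi] equals 2.
approx = booles_rule(math.sin, 0.0, math.pi, 8)
print(abs(approx - 2.0))  # already tiny with only 8 subintervals
```

The rule is exact for polynomials of degree up to 5, which is why the paper reserves it for the highest-accuracy variant of the block method.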
- Bounded Autocatalytic Set and Its Basic Properties
Abstract: Publication date: May 2023
Source:Mathematics and Statistics Volume 11 Number 3 Sumarni Abu Bakar, Noor Syamsiah Mohd Noor, Tahir Ahmad and Siti Salwana Mamat An Autocatalytic Set (ACS) is one of the structures that can be modelled using graph theory: an ACS is defined as a graph in which every node has at least one incoming link. Past research on ACS has addressed many applications, including modelling complex systems through the integration of ACS with fuzzy theory. Recently, a restricted form of ACS known as a Weak Autocatalytic Set (WACS) was established and used to solve multi-criteria decision-making (MCDM) problems in which the related graph is transitive and involves non-cyclic triads. However, real-world MCDM problems exist in which the related graph is intransitive and involves cyclic triads, which limits the use of WACS for such decision-making problems. This paper introduces another class of ACS known as a Bounded Autocatalytic Set (BACS). The concept of BACS makes it possible to represent a relation between each pair of criteria in a graph that involves cyclic triads. The definition of BACS is formed and introduced here for the first time, and its basic properties related to edges, paths, and cycles are established and presented in the form of a theorem and propositions.
PubDate: May 2023
- A Time Truncated New Group Chain Sampling Plan Based on Log-Logistic
Distribution
Abstract: Publication date: May 2023
Source:Mathematics and Statistics Volume 11 Number 3 Nazrina Aziz, Seu Wen Fei, Waqar Hafeez, Shazlyn Milleana Shaharudin and Javid Shabbir Acceptance sampling is a technique for ensuring that both producers and consumers are satisfied with a product's quality. This paper proposes a new group chain sampling plan (NGChSP) using the log-logistic distribution when the life test is truncated at a predetermined time. The minimum number of groups and the probability of lot acceptance are determined by satisfying the consumer's risk under the specified design parameters. The paper shows that the minimum number of groups decreases as the design parameters increase and, for the same design parameters, increases as the shape parameter increases. An illustrative example for the NGChSP is provided. The findings suggest that as the test-time termination constant decreases, the minimum number of groups increases, and that the probability of lot acceptance increases as the mean ratio increases. In comparison to the GChSP, the NGChSP requires a smaller number of groups, indicating that using the NGChSP for inspection will lower inspection time and costs, and it provides a higher probability of lot acceptance. The paper concludes that the NGChSP performs better than the GChSP and is therefore better equipped for lot inspection in the manufacturing industry.
PubDate: May 2023
- Strong Form of Nano Ideal Set in Nano Ideal Topological Spaces
Abstract: Publication date: May 2023
Source:Mathematics and Statistics Volume 11 Number 3 S. Manicka Vinayagam, L. Meenakshi Sundaram and C. Devamanoharan The purpose of this article is to define and analyse a new type of strongly open set in nano ideal topological spaces and compare it with other existing sets in nano ideal topology. The lower approximation, upper approximation, and boundary region are used to define the nano topology. To emphasize the inclusive relationship of this particular nano ideal set with other existing familiar nano ideal sets, some counterexamples are provided. We also establish the independence of this set from two related sets in nano ideal topological spaces. In addition, further associated notions are introduced and investigated with their basic results and fundamental properties. The exterior operator plays a vital role in topological spaces: unlike the interior operator, it varies in some cases, for example reversing inclusions for the subset property. We define and analyse some of its basic properties and discuss its correlations with the related operators. The paper concludes with the definition of a further set and a description of its relationships with the sets introduced earlier.
PubDate: May 2023
- 3-Equitable and Prime Labeling of Some Classes of Graphs
Abstract: Publication date: May 2023
Source:Mathematics and Statistics Volume 11 Number 3 Sangeeta A. Parthiban and P. Selvaraju Researchers have constructed a model to transform word motion problems into an algorithmic form so that they can be processed by an intelligent tutoring system (ITS). The process has the following steps: step 1, categorizing the characteristics of motion problems; step 2, suggesting a model for the categories. In order to solve all categories of problems, graph theory, including the backward and forward chaining techniques of artificial intelligence, can be utilized, and there is evidence that adopting graph theory solves almost all motion problems. Graph labeling is a subfield of graph theory that has become an area of interest due to its diversified applications. Formally, if the nodes are labeled under some constraint, the resulting labeling is known as a vertex labeling; it is an edge labeling if the labels are assigned to edges under some conditions. Graph labeling is nowadays one of the most rapidly growing areas in applied mathematics and has shown its presence in almost every field; known applications are in computer science, physics, chemistry, radar, coding theory, connectomics, sociology, X-ray crystallography, astronomy, etc. For a graph G(V,E) and k > 0, assign node labels from {0, 1, ..., k − 1} such that, when each edge label is induced by the absolute value of the difference of the node labels of its endpoints, the counts of nodes labeled with i and with j differ by at most one, and the counts of edges labeled with i and with j differ by at most one. A graph G with such an allocation of labels is k-equitable; with k = 3 this is a 3-equitable labeling. In this paper, the existence and non-existence of 3-equitable labelings of certain graphs are established.
PubDate: May 2023
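The 3-equitable condition is easy to verify mechanically. The sketch below checks an arbitrary vertex labelling against the definition; the path example and its labelling are illustrative, not taken from the paper:

```python
def is_3_equitable(edges, labels):
    """Check a vertex labelling with labels in {0, 1, 2}: each edge gets
    label |label(u) - label(v)|, and both the vertex-label counts and the
    edge-label counts must differ pairwise by at most one."""
    vcount = [0, 0, 0]
    for lab in labels.values():
        vcount[lab] += 1
    ecount = [0, 0, 0]
    for u, v in edges:
        ecount[abs(labels[u] - labels[v])] += 1
    balanced = lambda c: max(c) - min(c) <= 1
    return balanced(vcount) and balanced(ecount)

# Path P4 (vertices 0-1-2-3) with a hand-checked 3-equitable labelling:
# vertex labels 0,0,2,1 give edge labels 0,2,1 -> both counts balanced.
edges = [(0, 1), (1, 2), (2, 3)]
labels = {0: 0, 1: 0, 2: 2, 3: 1}
print(is_3_equitable(edges, labels))  # True
```

Existence proofs in the paper amount to exhibiting such a labelling for every graph in a family; non-existence proofs show the counting constraints cannot all be met.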
- Some Convergence Results for the Strong Versions of Order-integrals in
Lattice Spaces
Abstract: Publication date: May 2023
Source:Mathematics and Statistics Volume 11 Number 3 Mimoza Shkembi, Stela Ceno and John Shkembi Integration in Riesz spaces has received significant attention in recent papers. The existing literature provides comprehensive analyses of the concepts related to order-type integrals for functions defined in ordered vector spaces and Banach lattices, as indicated by the studies covered in [3], [4], [5], [7], [8], [9], and [10]. In our work on strongly order-McShane (Henstock-Kurzweil) equiintegration, we have drawn upon the earlier works of Candeloro and Sambucini [6], as well as Boccuto et al. [1-2], who have conducted investigations in the field of order-type integrals, and we have expanded upon their research to develop our own findings. This paper focuses on the (o)-McShane integral in ordered spaces; we emphasize that investigating the (o)-McShane integral is essential in addition to the (o)-Henstock integral, and we highlight that (o)-McShane integration in Banach lattices has richer properties and is more convenient than the (o)-Henstock integral. The (o)-convergence properties of ordered McShane integrals feature prominently in our study, and using (o)-convergence we obtain valuable results for the (o)-McShane integral. We arrive at the same results in Banach lattices as for McShane (Henstock-Kurzweil) norm-integrals, and we demonstrate that the (o)-McShane integral opens up a wide field of study in which results similar to Henstock integration can be obtained. The outcomes demonstrate the benefits of this integration technique in ordered spaces, with potentially significant implications for diverse areas of mathematics and related fields.
PubDate: May 2023
- Maximum Likelihood Estimation of the Weighted Mixture Generalized Gamma
Distribution
Abstract: Publication date: May 2023
Source:Mathematics and Statistics Volume 11 Number 3 Wikanda Phaphan, Teerawat Simmachan and Ibrahim Abdullahi The three-parameter weighted mixture generalized gamma (WMGG) distribution was developed from the four-parameter mixture generalized gamma (MGG) distribution because the parameter estimation of the MGG distribution faced a problem: the estimate of the weight parameter p fell outside the interval [0, 1]. A previous study proposed maximum likelihood estimators (MLEs) of the WMGG distribution. However, these MLEs were given by nonlinear equations, and iterative methods were needed to solve them numerically. The three parameters λ, β, and α were estimated by the quasi-Newton method; nevertheless, this method performed well only for the parameter λ. This motivated the main objective of this work: to further improve the parameter estimation of the WMGG distribution. This article develops two maximum likelihood estimation methods for the three parameters of the WMGG distribution: the expectation-maximization (EM) algorithm and the simulated annealing algorithm. These two methods were compared to the previous study's quasi-Newton method. The Monte Carlo simulation technique was employed to assess the algorithms' performance. Sample sizes ranged from small to large: 10, 30, 50, and 100. The simulation was repeated 10,000 rounds in each scenario. The assessment criteria were the mean square error (MSE) and bias. The results revealed that the EM algorithm outperformed the other methods, while the quasi-Newton method had the lowest efficiency.
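The Monte Carlo assessment loop (repeat, estimate, then report bias and MSE) can be sketched on a much simpler estimator; the exponential-rate MLE below is only a stand-in for the WMGG parameters, and the round count is reduced for illustration:

```python
import random
import statistics

def mc_assess(true_rate=2.0, n=50, rounds=2000, seed=1):
    """Monte Carlo assessment of an estimator: draw `rounds` samples of
    size n, apply the MLE (here 1 / sample mean for an exponential rate),
    and report bias and mean square error against the true parameter."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(rounds):
        sample = [rng.expovariate(true_rate) for _ in range(n)]
        estimates.append(1.0 / statistics.mean(sample))
    bias = statistics.mean(estimates) - true_rate
    mse = statistics.mean((e - true_rate) ** 2 for e in estimates)
    return bias, mse

bias, mse = mc_assess()
```

The same harness works for any estimator: swap in the EM or simulated-annealing fit and compare the resulting bias/MSE tables across sample sizes.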
PubDate: May 2023
- Existence and Uniqueness of Polyhedra with Given Values of the Conditional
Curvature at the Vertices
Abstract: Publication date: May 2023
Source:Mathematics and Statistics Volume 11 Number 3 Anvarjon Sharipov and Mukhamedali Keunimjaev The theory of polyhedra and the geometric methods associated with it are not only interesting in their own right but also have a wide outlet in the general theory of surfaces. Certainly, it is not always possible to obtain the corresponding theorem on surfaces from a theorem on polyhedra by passing to the limit; still, theorems on polyhedra give directions for searching for the related theorems on surfaces. In the case of polyhedra, the elementary-geometric basis of more general results is revealed. In the present paper, we study polyhedra of a particular class, i.e., without edges and reference planes perpendicular to a given direction. This work is a logical continuation of the authors' work in which an invariant of convex polyhedra isometric on sections was found. The concept of isometry of surfaces and the concept of isometry on sections of surfaces differ from each other: there are examples of isometric surfaces that are not isometric on sections and examples of non-isometric surfaces that are isometric on sections. However, the two classes have a non-empty intersection, i.e., some surfaces are both isometric and isometric on sections. In this paper, we prove the positive definiteness of the invariant found. Further, conditional external curvature is introduced for "basic" sets, open faces, edges, and vertices. It is proved that the conditional curvature of the polyhedral angle considered is monotone and positive definite. At the end of the article, the problem of the existence and uniqueness of convex polyhedra with given values of conditional curvatures at the vertices is solved.
PubDate: May 2023
- Solution Analysis of Riccati's Fractional Differential Equations Using the
ADM-Laplace Transformation and the ADM-Kashuri-Fundo Transformation
Abstract: Publication date: May 2023
Source:Mathematics and Statistics Volume 11 Number 3 Muhamad Deni Johansyah, Asep Kuswandi Supriatna, Endang Rusyaman, Salma Az-Zahra, Eddy Djauhari and Aceng Sambas Fractional differential equations (FDEs) are differential equations that involve fractional derivatives. Unlike ordinary derivatives, fractional derivatives are defined by fractional powers of the differentiation operator. FDEs arise in a variety of contexts, including physics, engineering, biology, and finance. They are typically more complex than ordinary differential equations, and their solutions may exhibit unusual properties such as long-range memory, non-locality, and power-law behavior. Solving the Riccati Fractional Differential Equation (RFDE) is generally challenging due to its nonlinearity and the presence of the fractional power term. The fractional derivative operators in the RFDE are non-local and involve an integral over a certain range of the independent variable; this non-local nature can make the RFDE harder to handle than ordinary differential equations. In this paper, we examine the RFDE using the combined theorem of the Adomian Decomposition Method and Laplace Transform (ADM-LT), and we compare it with the combination of the Adomian Decomposition Method and the Kashuri-Fundo Transformation (ADM-KFT). It is shown that the ADM-LT is equivalent to the ADM-KFT algorithm for solving the Riccati equation. In addition, we add a new theorem on the relationship between the Kashuri-Fundo inverse and the Laplace transform inverse. The main finding of our study is that the ADM-LT shows good agreement between the numerical simulation and the exact solution.
PubDate: May 2023
- Approximation Method Using DP Ball Curves for Solving Ordinary
Differential Equations
Abstract: Publication date: May 2023
Source:Mathematics and Statistics Volume 11 Number 3 Abdul Hadi Bhatti and Sharmila Binti Karim Many researchers have developed numerical methods to solve ordinary differential equations (ODEs) approximately, evolving approximation methods and algorithms to improve the accuracy, in terms of error, of the approximate solution. Polynomials and piecewise polynomials in the form of Bézier curves, Bernstein polynomials, etc., are frequently used to represent the approximate solution of ODEs. To minimize the error between the exact and approximate solutions of ODEs, the DP Ball curve (DPBC) with the least squares method (LSM) is proposed to improve the accuracy of approximate solutions of initial value problems (IVPs). This paper explores the use of the control points of the DPBC with error reduction by minimizing the residual function: an objective function is constructed as the sum of squares of the residual function, and solving the resulting constrained optimization problem yields the best control points of the DPBC. Two strategies are employed: investigating the DPBC's control points through error reduction with the LSM, and computing the optimal control points through degree raising of the DPBC for the best approximate solution of ODEs. Substituting the values of the control points back into the DPBC gives the best approximate solution. Moreover, the convergence of the proposed method for IVPs is analyzed in this study, and its error accuracy is compared with existing studies. Numerous numerical examples of first, second, and third order are presented to illustrate the efficiency of the proposed method in terms of error; the results show that the error accuracy is considerably improved.
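The residual-minimization idea can be illustrated on a toy problem. Below, a plain cubic polynomial stands in for the DP Ball basis (an assumption for brevity, not the authors' method): for the IVP y' = -y, y(0) = 1, minimizing the sum of squared residuals R(t) = y'(t) + y(t) over a grid reduces to a small system of normal equations in the free coefficients:

```python
import math

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def lsq_ode():
    """Least-squares cubic y(t) = 1 + c1 t + c2 t^2 + c3 t^3 for the IVP
    y' = -y, y(0) = 1: minimize the sum of squared residuals
    R(t) = y'(t) + y(t) over a grid on [0, 1], which reduces to 3x3
    normal equations in (c1, c2, c3)."""
    ts = [i / 20 for i in range(21)]
    # gradient of R with respect to (c1, c2, c3); the constant part of R is 1
    g = lambda t: [1 + t, 2 * t + t * t, 3 * t * t + t ** 3]
    A = [[sum(g(t)[i] * g(t)[j] for t in ts) for j in range(3)] for i in range(3)]
    b = [-sum(g(t)[i] for t in ts) for i in range(3)]
    c1, c2, c3 = solve3(A, b)
    return lambda t: 1 + c1 * t + c2 * t * t + c3 * t ** 3

y = lsq_ode()
```

The resulting cubic reproduces e^(-t) on [0, 1] to within a few percent; raising the degree, as in the paper's degree-raising strategy, shrinks the error further.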
PubDate: May 2023
- Historical Review of Existing Sequences and the Representation of the Wing
Sequence
Abstract: Publication date: May 2023
Source:Mathematics and Statistics Volume 11 Number 3 Maizon Mohd Darus, Haslinda Ibrahim and Sharmila Karim A sequence is simply an ordered list of numbers, and sequences arise in mathematics very often. The Fibonacci, Lucas, Perrin, Catalan, and Motzkin sequences are a few that have drawn academics' attention over the years. These sequences have arisen from different perspectives; by investigating the construction of each sequence, they can be classified into three groups: those that arise from nature, those constructed from other existing sequences, and those generated from a geometric representation. This outcome may assist researchers in adding new number sequences to the family of sequences. Our observation of the geometric representation of the Motzkin sequence shows that a new sequence can be constructed, namely the Wing sequence. Therefore, we demonstrate the iterations of the Wing sequence for 3 ≤ n ≤ 5. The wings are constructed by classifying them into (n−1) classes and determining the first and second points, which then provides (n−2) wings in each class. This technique constructs (n−1)(n−2) wings for each n. The iterations may provide a basic technique for researchers to construct a sequence using geometric representation. The observation of geometric representations can develop people's thinking skills and increase their visual abilities; hence, the study of geometric representation may lead to new lines of research that go beyond sequences alone.
PubDate: May 2023
- Steiner Antipodal Number of Graphs Obtained from Some Graph Operations
Abstract: Publication date: May 2023
Source:Mathematics and Statistics Volume 11 Number 3 R. Gurusamy, A. Meena Kumari and R. Rathajeyalakshmi The Steiner p-antipodal graph of a connected graph G has the same vertex set as G, and p vertices are mutually adjacent in it whenever they are p-antipodal in G. If G has more than one component, then p vertices are mutually adjacent in the Steiner p-antipodal graph if at least one of them comes from a different component. A Kp is drawn on the p-antipodal vertices. The Steiner antipodal number of a graph G is the smallest natural number p such that the Steiner p-antipodal graph of G is complete. In this article, the Steiner antipodal number is determined for the generalized corona of graphs, and for each natural number p ≥ 2 we construct many non-isomorphic graphs of order p having Steiner antipodal number p. Also, for any pair of natural numbers l, m ≥ 3 with l ≤ m, there is a graph whose Steiner antipodal number is l and whose line graph has Steiner antipodal number m. For every natural number p ≥ 1, there is a graph G whose complement has Steiner antipodal number p.
PubDate: May 2023
- Cartesian Product of Quadratic Residue Graphs
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 Shakila Banu P. and Suganthi T. Rezaei [7] introduced quadratic residue graphs: a simple graph G is a quadratic residue graph modulo n if its vertex set is the reduced residue system modulo n and two distinct vertices a and b are adjacent when they satisfy the defining congruence (mod n). This motivated the present article, in which we introduce the Cartesian product F of quadratic residue graphs Gm and Hn, where m and n are either prime or composite. This work proposes and evaluates the regular graphs produced from the graph F and its adjacency matrix. In addition, we define and examine their generating matrices with the help of the adjacency matrix of F. We also define three linear codes taken from the graph F, with parameters denoted [N, k, d], where N is the length, k is the dimension (taken from the number of vertices) and d is the distance (taken from the minimum degree). Moreover, we introduce encoding and decoding algorithms for the graph using binary bits, illustrated with a suitable example. Finally, we test the error-correction capability of the code by using the sphere-packing bound.
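The Cartesian product itself is mechanical once the adjacency rule is stated: (a, b) is adjacent to (c, d) iff the first coordinates agree and b ~ d in H, or the second coordinates agree and a ~ c in G. A small sketch on adjacency dictionaries (the graphs here are toy examples, not quadratic residue graphs):

```python
from itertools import product

def cartesian_product(g, h):
    """Cartesian product of two graphs given as adjacency dicts:
    (a, b) ~ (c, d) iff (a == c and b ~ d) or (b == d and a ~ c)."""
    verts = list(product(g, h))
    adj = {v: set() for v in verts}
    for (a, b), (c, d) in product(verts, verts):
        if (a == c and d in h[b]) or (b == d and c in g[a]):
            adj[(a, b)].add((c, d))
    return adj

# K2 square K2 is the 4-cycle C4: four vertices, every vertex of degree 2.
k2 = {0: {1}, 1: {0}}
c4 = cartesian_product(k2, k2)
print(sorted(len(nbrs) for nbrs in c4.values()))  # [2, 2, 2, 2]
```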
PubDate: Mar 2023
- Sensitivity Equation for Competitive Model: Derivation, Numerical
Realization and Parameter Estimation
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 Julan HERNADI, Ceriawan H. SANTOSO and Iwan T. R. YANTO Ecological systems can be quite complex, consisting of an interconnected system of plants and animals, predators and prey, flowering plants, seed dispersers, insects, parasites, pollinators, and so on. When the existence of one species affects the survival of other species and vice versa, a competitive model can be derived in the form of a system of differential equations. A competitive model involves a number of parameters which grows in proportion to the number of interacting species. The resistance of a state variable to tiny disturbances of some parameter is referred to as sensitivity. The competitive model of size N consists of N parameters for intrinsic growth, N parameters for carrying capacity, N² − N parameters for species interaction, and N parameters for initial conditions. As a result, there are N²(N + 2) distinct sensitivity values. The purpose of this paper is to derive a general formulation of the sensitivity equations of a dynamical system and then apply it to the competitive model. This study also encompasses the formulation of some algorithms and their implementation for solving the sensitivity equations numerically. Finally, the sensitivity functions are employed as qualitative instruments in the optimal design of measurements for parameter estimation through a series of numerical experiments. The results of this study are the ordinary and the generalized sensitivity functions for interacting species. Based on the numerical experiments, each group of data provides different information about the existing parameters.
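A one-species sketch shows how a sensitivity equation is derived and solved alongside the state (an illustration only; the paper treats the N-species competitive system). For logistic growth dx/dt = r x (1 − x/K), differentiating the right-hand side gives the variational equation for the sensitivity s = ∂x/∂r, namely ds/dt = r (1 − 2x/K) s + x (1 − x/K) with s(0) = 0:

```python
import math

def logistic_with_sensitivity(r=0.5, K=10.0, x0=1.0, T=10.0, dt=1e-3):
    """Euler-integrate the logistic state x together with its sensitivity
    s = dx/dr, whose equation is the linearization (df/dx) * s plus df/dr."""
    x, s, t = x0, 0.0, 0.0
    while t < T:
        fx = r * x * (1 - x / K)                      # f(x, r)
        ds = r * (1 - 2 * x / K) * s + x * (1 - x / K)  # (df/dx) s + df/dr
        x += dt * fx
        s += dt * ds
        t += dt
    return x, s

x, s = logistic_with_sensitivity()
```

A finite-difference check, (x(r+h) − x(r−h)) / 2h, should match the integrated s; this is the usual way such implementations are validated.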
PubDate: Mar 2023
- Actuarial Measures, Estimation and Applications of Sine Burr III Loss
Distribution
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 John Abonongo, Ivivi J. Mwaniki and Jane A. Aduda The usefulness of heavy-tailed distributions for modeling insurance loss data is arguably an important subject for actuaries. Appropriate use of trigonometric functions allows a good understanding of the mathematical properties, limits over-parameterization, and gives better applicability in modeling different datasets. Thus, the proposed method ensures that no additional parameter(s) is/are introduced in the bid to make a distribution from the F-Loss family of distributions flexible. The purpose of this paper is to improve the flexibility of the F-Loss family of distributions without introducing any additional parameter(s) and to develop heavy-tailed distributions with fewer parameters that give a better parametric fit to a given dataset than other existing distributions. In this paper, a new heavy-tailed distribution, the sine Burr III Loss distribution, is proposed using the sine F-Loss generator. This distribution is flexible and able to model varying shapes of the hazard rate compared with the traditional Burr III distribution. The densities exhibit different kinds of decreasing and right-skewed shapes. The hazard rate functions show different kinds of decreasing, increasing, constant-decreasing, and upside-down bathtub shapes. The statistical properties and actuarial measures are studied. The skewness is always positive, and the kurtosis is increasing. The numerical values of the actuarial measures show that increasing confidence levels are associated with increasing VaR, TVaR, and TV. The maximum likelihood estimators are studied, and simulations are carried out to ascertain the behavior of the estimators; the estimators are observed to be consistent. The usefulness of the proposed distribution is demonstrated with two insurance loss datasets and compared with other known classical heavy-tailed distributions. The results show that the proposed distribution provides the best parametric fit for the two insurance loss datasets, and insurance practitioners can employ the proposed model in modeling insurance losses since it is flexible.
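The risk measures named above can be estimated empirically from any loss sample. A hedged sketch (generic empirical estimators, not the paper's closed-form expressions for the sine Burr III Loss distribution); by construction TVaR ≥ VaR, and both rise with the confidence level, mirroring the behaviour described in the abstract:

```python
import random

def var_tvar(losses, level):
    """Empirical Value-at-Risk (the level-quantile of the losses) and
    Tail Value-at-Risk (the mean loss beyond the VaR)."""
    xs = sorted(losses)
    idx = min(int(level * len(xs)), len(xs) - 1)
    var = xs[idx]
    tail = xs[idx:]
    return var, sum(tail) / len(tail)

# A heavy-tailed (Pareto) sample as a stand-in for insurance losses.
rng = random.Random(7)
losses = [rng.paretovariate(2.5) for _ in range(5000)]
v95, t95 = var_tvar(losses, 0.95)
v99, t99 = var_tvar(losses, 0.99)
```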
PubDate: Mar 2023
- Inequalities for Forgotten Index of Duplication and Double Duplication of
Graphs
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 Kalpana R and Shobana L Molecular descriptors play an important part in mathematical chemistry, in investigating quantitative structure-property and quantitative structure-activity relationships. A topological descriptor, also called a molecular descriptor, is a mathematical formula applied to a graph that encodes the molecular structure. In a medical mathematical model, a chemical compound is represented as an undirected graph, where each vertex represents an atom and each edge indicates a chemical bond between atoms. The Wiener index, the first topological index used in chemistry, was introduced by Harold Wiener [1947] to compare the boiling points of some alkane isomers. Among the various topological indices applied in chemistry, our interest is in the Forgotten index, a degree-based topological index introduced by Furtula and Gutman in 2015 [2], defined as F(G) = Σ_{u∈V(G)} du³, where du is the degree of vertex u in G. Mathematicians and chemists have studied several general properties of the Forgotten index, which may help the chemical and pharmaceutical industry obtain significant details by quantitative methods rather than by experiments. Vaidya et al. (2009) proposed the concepts of duplication of a vertex by an edge and duplication of an edge by a vertex of graphs, and Shobana et al. (2017) [6] proposed the double duplication of graphs. Only connected, simple, undirected and finite graphs are considered throughout this article. Some inequalities are obtained by comparing the duplication and double duplication of graphs using the Forgotten index, which chemists may also use to design new drugs in the future.
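The Forgotten index is a one-liner once a graph's adjacency list is available. A small sketch using the definition F(G) = Σ du³ (the star example is ours, for illustration):

```python
def forgotten_index(adj):
    """Forgotten index F(G): the sum of deg(u)^3 over all vertices u,
    with the graph given as a dict mapping each vertex to its neighbour set."""
    return sum(len(nbrs) ** 3 for nbrs in adj.values())

# Star K_{1,3}: the centre has degree 3 and each pendant degree 1,
# so F = 3^3 + 3 * 1^3 = 30.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(forgotten_index(star))  # 30
```

Comparing this value before and after a duplication operation is exactly the kind of computation behind the paper's inequalities.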
PubDate: Mar 2023
- A Glimpse of Nonparametric Single and Double Residual Bootstrap Method
with Outliers
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 Nor Iza Anuar Razak and Zamira Hasanah Zamzuri The significance of a model is affected by outliers, which can reduce the effectiveness of structural equation modeling (SEM). Here we describe and investigate the behavior of the nonparametric single and double residual bootstrap (DRB) methods in the presence of outliers when applied to SEM. Our study also intends to shorten the computational time of the standard double bootstrap by using an alternative double bootstrap approach. We demonstrate the proposed method through a series of Monte Carlo experiments on clean Gaussian data and contaminated data. The simulation studies were run with different sample sizes, effect sizes, and 10% contamination in the Y direction. The performance of the proposed method is evaluated using standard measurements and the construction of confidence intervals. The reasonably close parameter and bootstrap estimates suggest that the nonparametric single and double residual bootstrap is an excellent method. The DRB method showed a robust declining pattern for standard measurement estimates and shorter confidence intervals than the single residual bootstrap method on both normal and contaminated data. The double bootstrap takes twice as long as the single bootstrap to compute: the DRB method is straightforward and gives better prediction approximation, but demands slightly more computational time. This study offers additional perspectives to fellow researchers considering the nonparametric single and alternative DRB methods with contaminated data.
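The single residual bootstrap is easiest to see on ordinary least squares (an illustration of the resampling idea only, not the authors' SEM pipeline; the data below are synthetic): fit once, resample the residuals with replacement, rebuild responses around the fitted values, and refit:

```python
import random

def ols(xs, ys):
    """Simple least-squares line: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope

def residual_bootstrap(xs, ys, B=500, seed=0):
    """Single residual bootstrap of the slope: resample residuals with
    replacement, rebuild y* around the fitted line, refit, collect slopes."""
    rng = random.Random(seed)
    a, b = ols(xs, ys)
    fitted = [a + b * x for x in xs]
    resid = [y - f for y, f in zip(ys, fitted)]
    slopes = []
    for _ in range(B):
        ystar = [f + rng.choice(resid) for f in fitted]
        slopes.append(ols(xs, ystar)[1])
    return b, slopes

# Synthetic data: y = 1 + 2x + noise.
rng = random.Random(1)
xs = [i / 10 for i in range(30)]
ys = [1 + 2 * x + rng.gauss(0, 0.3) for x in xs]
b_hat, slopes = residual_bootstrap(xs, ys)
```

The double (DRB) variant nests a second round of the same resampling inside each replicate, which is where the extra computation time comes from.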
PubDate: Mar 2023
- Binary Response on Logistics Regression Model and Its Simulation
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 Budi Pratikno, Fifthany Marchelina Napitupulu, Jajang, Agustini Tripena Br. Sb and Mashuri The research determines the binary response model in logistic regression (LR) and its application. Firstly, we select eligible factors (predictors Xi, i = 1, 2, 3, 4) to be involved in the model, namely age (X1), sex (X2), treatment (X3), and nutrition (X4), with the response (Y) being a case of tuberculosis (TB). Using stepwise model selection and odds ratio (OR) interpretation, we find three suspected significant predictors (X1, X3, and X4), but we choose only two of them, X3 and X4. The logistic regression model is therefore written with the linear predictor logit(π) = β0 + β3X3 + β4X4. To test the goodness of fit of the model, we used the deviance test (p-value 0.08); due to this p-value, we used a significance level of 0.08 (close to 0.05) for obtaining the significant model. In more detail, the OR of age (X1), one of the three suspected significant predictors, is close to one (OR ≈ 1), so age is an independent (non-significant) predictor. We therefore concluded that the significant predictors are only treatment (X3) and nutrition (X4), and hence that TB depends only on clinical treatment and the provision of nutrition.
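A binary-response logistic model with two predictors can be fitted with a few lines of gradient ascent on the log-likelihood (synthetic data and plain Python here, as an illustration; the paper's TB analysis used stepwise selection on real data). exp(βj) is the odds ratio read off for predictor j:

```python
import math
import random

def fit_logistic(X, y, lr=2.0, iters=2000):
    """Fit P(Y=1|x) = 1 / (1 + exp(-(b0 + b1*x1 + b2*x2))) by gradient
    ascent on the averaged log-likelihood; returns [b0, b1, b2]."""
    b = [0.0] * (len(X[0]) + 1)
    for _ in range(iters):
        grad = [0.0] * len(b)
        for xi, yi in zip(X, y):
            z = b[0] + sum(bj * xj for bj, xj in zip(b[1:], xi))
            p = 1 / (1 + math.exp(-z))
            err = yi - p       # gradient of the log-likelihood
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        b = [bj + lr * g / len(y) for bj, g in zip(b, grad)]
    return b

# Synthetic data: true linear predictor -0.5 + 2*x1 - 2*x2.
rng = random.Random(3)
X, y = [], []
for _ in range(200):
    x1, x2 = rng.random(), rng.random()
    p = 1 / (1 + math.exp(-(-0.5 + 2 * x1 - 2 * x2)))
    X.append((x1, x2))
    y.append(1 if rng.random() < p else 0)
beta = fit_logistic(X, y)
```

Here exp(beta[1]) > 1 marks x1 as risk-increasing; a predictor with OR close to 1, like age in the abstract, would show a coefficient near 0.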
PubDate: Mar 2023
- Investigation on Isotropic Bezier Sweeping Surface "IBSS" with Bishop
Frame
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 W. M. Mahmoud, M. A. Soliman and Esraa M. Mohamed This research studies the sweeping surface generated by the motion of a straight line (the profile curve) whose plane moves through space in the direction of the normal to a cubic Bezier curve (the spine curve). In geometric modeling, sweeping is an essential and useful tool with applications especially in geometric design. The idea depends on choosing a geometrical object, here the straight line, called the generator, and sweeping it along a cubic Bezier curve (the spine curve), called the trajectory; performing this sweep in an isotropic space produces an Isotropic Bezier Sweeping Surface (IBSS). This study discusses IBSS with the Bishop frame. We studied a special case of the sweeping surface, the cylindrical surface, which results when the path curve is a straight line, and we calculated the first and second fundamental forms of this surface. The parametric description of the Weingarten IBSS is also calculated in terms of the Gaussian and mean curvatures, and Mathematica 3D visualizations were used to display these curvatures. Finally, we characterized new associated surfaces with respect to the Bishop frame on IBSS, such as minimal and developable isotropic Bezier sweeping surfaces.
PubDate: Mar 2023
- Numerical Solution of Linear and Nonlinear Second Order Initial Value
Problems Using Three-Step Generalized Off-Step Hybrid Block Method
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 Kamarun Hizam Mansor, Oluwaseun Adeyeye and Zurni Omar The numerical solution of second order initial value problems (IVPs) has garnered a lot of attention in the literature, with recent studies developing new methods with better accuracy than previously existing approaches. This led to the introduction of hybrid block methods, a class of block methods capable of directly solving second order IVPs without reduction to a system of first order IVPs. Their hybrid characteristic is the addition of off-step points in the derivation of the block method, which has shown remarkable improvement in accuracy. This article proposes a new three-step hybrid block method with three generalized off-step points for the direct solution of second order IVPs. To derive the method, a power series is adopted as an approximate solution and is interpolated at the initial point and one off-step point, while its second derivative is collocated at all points in the interval to obtain the main continuous scheme. The analysis shows that the developed method is of order 7, zero-stable, consistent, and hence convergent. The numerical results affirm that the new method performs better, in terms of error accuracy, than the existing methods it is compared with when solving the same second order ordinary differential equation IVPs.
PubDate: Mar 2023
- Toeplitz Determinant For Error Starlike & Error Convex Function
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 D Kavitha, K Dhanalakshmi and K Anitha The normalised error function was coined and analyzed in 2018 [13]. The concept of the normalised error function discussed in [13] motivated us to find new results on the Toeplitz determinant for subclasses of analytic univalent functions associated with the error function. Given the history of the error function in geometric function theory, Ramachandran et al. [13] derived the coefficient estimates, followed by the Fekete-Szegő problem, for the normalised subclasses of starlike and convex functions associated with the error function. Finding coefficient estimates is one of the most provoking topics in geometric function theory, and current research concentrates on special functions connected with univalent functions. On this basis, the present paper deals with the supremum and infimum of the Toeplitz determinant for starlike and convex functions in terms of the error function with the convolution product, using the concept of subordination. We also derive sharp bounds for the probability distribution associated with error starlike and error convex functions.
PubDate: Mar 2023
- m-Continuity and Fixed Points in -Complete
G-Metric Spaces
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 Banoth Madanlal Naik and V. Naga Raju The fixed point technique can be considered one of the most powerful tools for solving problems that occur in several fields such as physics, chemistry, computer science, economics and other branches of mathematics. Banach [3] gave the first result in metric fixed point theory, which guarantees the existence and uniqueness of a fixed point in a complete metric space. Thereafter, many mathematicians replaced the notion of metric space and the Banach contractive condition with various generalized metric spaces and different contractions to prove fixed point theorems. One such generalized metric space, called a G-metric space, was proposed in [6]. Abhijit Pant and R. P. Pant [1] introduced a new type of contraction and obtained some results in metric spaces in 2017. The purpose of this paper is to define -complete G-metric spaces and study three metric fixed point results for such spaces. In the first two fixed point results, we use a weaker form of continuity, called m-continuity, and new contractive conditions, while in the third result a simulation function is used. The results we obtain improve, extend and generalize some results of [1] and [2] in the existing literature. In addition, we give examples to validate our results.
PubDate: Mar 2023
- On T-coloring and ST-coloring of Windmill Graph
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 Rubul Moran, Niranjan Bora and Surashmi Bhattacharyya The windmill graph is the graph formed by joining a common vertex to every vertex of m copies of the complete graph Kr. A T-coloring of a graph is a map h defined on the set of vertices in such a way that for any edge uv, the difference |h(u) − h(v)| does not belong to a finite set T of non-negative integers. Strong T-coloring (ST-coloring) is a particular case of T-coloring in which, in addition, the values |h(u) − h(v)| are distinct for any two distinct edges. Applications of T- and ST-coloring of graphs arise naturally in the modeling of different scientific problems; the frequency assignment problem (FAP), one of the well known problems in telecommunication, can be modeled using the concepts of T- and ST-coloring of graphs. In this paper, we consider two special types of T-sets. The first is the -initial set, introduced by Cozzens and Roberts, which is of the form , where S is any arbitrary set that does not contain any multiple of . The second is the λ-multiple of q set, introduced by Raychaudhuri, which is of the form , where S is a subset of the set . We discuss some parameters related to these two types of colorings, viz. the T-chromatic number, T-span, and T-edge span, on the basis of the two T-sets. We also deduce some generalized results on ST-coloring of an arbitrary graph based on any T-set, and with the help of these results we obtain the ST-chromatic number and bounds for the ST-span and ST-edge span of windmill graphs.
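Both coloring conditions are directly checkable. A minimal sketch (the triangle and T-set are our own toy example, not from the paper):

```python
def is_t_coloring(edges, h, T):
    """T-coloring: |h(u) - h(v)| lies outside T for every edge uv."""
    return all(abs(h[u] - h[v]) not in T for u, v in edges)

def is_st_coloring(edges, h, T):
    """ST-coloring: a T-coloring whose induced edge labels |h(u) - h(v)|
    are pairwise distinct over distinct edges."""
    diffs = [abs(h[u] - h[v]) for u, v in edges]
    return is_t_coloring(edges, h, T) and len(diffs) == len(set(diffs))

# Triangle with T = {0, 1}: labels 0, 2, 5 give edge labels 2, 3, 5,
# all outside T and pairwise distinct.
triangle = [(0, 1), (1, 2), (0, 2)]
print(is_st_coloring(triangle, {0: 0, 1: 2, 2: 5}, {0, 1}))  # True
```

In the FAP reading, vertices are transmitters, labels are frequencies, and T collects the forbidden channel separations.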
PubDate: Mar 2023
- P-dist Based Regularized Twin Support Vector Machine on Imbalanced Binary
Dataset
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 Sai Lakshmi B. and G. Gajendran Data classification is a significant task in the field of machine learning, and the support vector machine is one of the prominent classification algorithms. The twin support vector machine is an algorithm evolved from the support vector machine that has gained popularity owing to its better generalization ability. The twin support vector machine attains quick training speed by explicitly exploring a pair of non-parallel hyperplanes for imbalanced data. In a twin support vector machine, choosing numerical values for the hyperparameters is challenging: hyperparameter tuning is a prime factor in enhancing the performance of a model, yet randomly chosen hyperparameters are unreliable. This paper proposes a novel p-dist-based regularized twin support vector machine for imbalanced binary classification problems. Pairwise distances such as the Jaccard and correlation distances are considered for tuning the hyperparameters. The proposed approach has been analyzed on many publicly available real-world benchmark datasets for both the linear and non-linear cases. The performance of the p-dist-based regularized twin support vector machine is computationally tested and compared with existing models, and the outcome is validated using quality metrics such as accuracy, F-mean, G-mean, and elapsed time. Ultimately, the results exhibit better performance with less computational time in comparison to several existing methods.
PubDate: Mar 2023
- Performance Analysis of A Single Server Queue Operating in A Random
Environment - A Novel Approach
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 Akshaya Ramesh and S. Udayabaskaran In this paper, we consider a single server queueing system operating in a random environment subject to disaster, repair and customer impatience. The random environment resides in any one of N + 1 phases 0, 1, 2, · · · , N. The queueing system resides in phase k, k = 1, 2, · · · , N for a random interval of time, and the sojourn period ends at the occurrence of a disaster. The sojourn period is exponentially distributed with mean . At the end of the sojourn period, all customers in the system are washed out, the server goes for repair/set-up and the system moves to phase 0. During the repair time, customers join the system, become impatient and leave the system. The impatience time is exponentially distributed with mean . Immediately after the repair, the server is ready to offer service in phase k with probability , k = 1, 2, · · · , N. In the k-th level of the environment, customers arrive according to a Poisson process with rate and the service time is exponential with mean . Explicit expressions for the time-dependent state probabilities are found and the corresponding steady-state probabilities are deduced. Some new performance measures are also obtained. Choosing arbitrary values of the parameters subject to the stability condition, the behaviour of the system is examined. For the chosen values of the parameters, the performance measures indicate that the system does not exhibit much deviation in the presence of several phases of the environment.
PubDate: Mar 2023
- Exponential-Inverse Exponential[Weibull]: A New Distribution
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 Mahmoud Riad Mahmoud Azza E. Ismail and Moshera A. M. Ahmad Statistical distributions play a major role in analyzing experimental data, and finding an appropriate one for the data at hand is not an easy task. Extending a known family of distributions to construct a new one is a long-honored technique in this regard. The T-X[Y] methodology is utilized to construct a new distribution as described in this study. The T-inverse exponential family of distributions, which was previously introduced by the same authors, is used to examine the exponential-inverse exponential[Weibull] distribution (Exp-IE[Weibull]). Several fundamental properties are explored, including the survival function, hazard function, quantile function, median, skewness, kurtosis, moments, Shannon's entropy, and order statistics. Our distribution exhibits a wide range of shapes with varying skewness and assumes most of the possible forms of the hazard rate function. The unknown parameters of the Exp-IE[Weibull] distribution are estimated via the maximum likelihood method for complete and type II censored samples. We performed two applications on real data: the first on vinyl chloride data, described in [1], and the second on cancer patient data, described in [2]. The significance of the Exp-IE[Weibull] model in relation to alternative distributions (Fréchet, Weibull-exponential, logistic-exponential, logistic modified Weibull, Weibull-Lomax [log-logistic] and inverse power logistic exponential) is demonstrated. On the applied real data, the new distribution (Exp-IE[Weibull]) achieved better results under the AIC and BIC criteria compared to the other listed distributions.
PubDate: Mar 2023
- Flows Local Control in Resource Networks with A Low Resource
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 Vladimir A. Skorokhodov and Iakov M. Erusalimskiy The flow control problem in resource networks consists in finding a set of vertices, and capacities for the arcs going out of these vertices, such that the limit state of the resource network is the closest to a given target state. This problem naturally divides into two subproblems. The first is the "local" subproblem, which consists in determining the capacities of the arcs going out of the vertices of a given subset (hereinafter called the set of controlled vertices). The second is the "global" subproblem, which consists in finding an optimal set of controlled vertices consisting of at most s elements. The paper is devoted to the study of the possibility of local control of flows in resource networks. Methods for solving the local subproblem for regular resource networks with a low resource allocation are proposed. Conditions are obtained for the unreachability of a limit state that coincides with the given state. Three cases are considered for the distribution of controlled vertices in a resource network. In each of the considered cases, it is shown that if the condition of unreachability of the limit state is not satisfied, then there is a set of capacity values for the arcs going out of the controlled vertices for which the limit state coincides with the given state.
PubDate: Mar 2023
- Multivariate Hotelling-T² Control Chart for
Neutrosophic Data
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 Saritha M.B and R. Varadharajan Industries are consistently confronted with a myriad of challenges, the most significant of which is the requirement to increase product quality while simultaneously minimising manufacturing costs. Statistical Process Control (SPC) provides quality control charts as one of its primary methods for achieving this goal. The control chart is the most popular and widely used statistical tool for monitoring the quality characteristics of a process. Multivariate control charts are necessary when the quality of a process is associated with more than one characteristic. The Hotelling-T² chart is one of the most familiar multivariate control charts. It is used for simultaneously monitoring the process mean and determining whether or not the process mean vector for two or more variables is under control. However, this is applicable only when the data are accurate, determinate, and exact. As a result, when the data are vague or ambiguous, the utility of the conventional Hotelling-T² control chart is limited. Within the scope of this research, we propose a neutrosophic Hotelling-T² control chart as a solution to the issue described above. The performance of the proposed chart is evaluated using simulation at various degrees of shift in the process average, with the neutrosophic alarm rate serving as the performance measure. To further investigate the real-world applicability of the suggested chart, we made use of an example taken from the chemical sector.
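As a rough illustration of the statistic underlying such charts (the classical crisp one-sample case only, not the neutrosophic extension the paper proposes; the function name is ours), a minimal Hotelling T² computation might look like this:

```python
import numpy as np

def hotelling_t2(sample, mu0):
    """One-sample Hotelling T^2: n * (xbar - mu0)' S^{-1} (xbar - mu0)."""
    x = np.asarray(sample, dtype=float)
    n = x.shape[0]
    xbar = x.mean(axis=0)
    S = np.cov(x, rowvar=False)            # sample covariance matrix
    d = xbar - np.asarray(mu0, dtype=float)
    return float(n * d @ np.linalg.solve(S, d))
```

In charting practice, subgroup T² values exceeding a chi-square or F-based control limit would be flagged as out of control.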
PubDate: Mar 2023
- A Rotated Similarity Reduction Approach with Half-Sweep Successive
Over-Relaxation Iteration for Solving Two-Dimensional Unsteady
Convection-Diffusion Problems
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 Nur Afza Mat Ali Jumat Sulaiman Azali Saudi and Nor Syahida Mohamad In this paper, we transform a two-dimensional unsteady convection-diffusion equation into a two-dimensional steady convection-diffusion equation using the similarity transformation technique. This technique can be easily applied to linear or nonlinear problems and is capable of reducing the computational work, since its main idea is to eliminate at least one independent variable. The corresponding similarity equation is then solved numerically using an effective numerical technique, namely a new five-point rotated similarity finite difference scheme via half-sweep successive over-relaxation iteration. This work compares the performance of the proposed method with Gauss-Seidel and successive over-relaxation iterations under the full-sweep concept. Numerical tests were carried out to assess the performance of the proposed method using a C implementation. The results reveal that the five-point rotated similarity finite difference scheme via half-sweep successive over-relaxation iteration is superior to all these methods in terms of iteration count and computational time. Additionally, in terms of accuracy, all three iterative methods are comparable.
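The half-sweep rotated scheme itself is specific to the paper, but the successive over-relaxation (SOR) iteration at its core is standard. A minimal dense-matrix sketch (illustrative only; `omega` is the relaxation factor, and convergence assumes a suitable system, e.g. diagonally dominant or symmetric positive definite):

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for Ax = b, starting from x = 0."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # use already-updated entries for j < i, old entries for j > i
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x, it + 1
    return x, max_iter
```

Setting `omega=1.0` recovers the Gauss-Seidel iteration used as a baseline in the paper's comparison.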
PubDate: Mar 2023
- Estimation of the Location Parameter of Cauchy Distribution Using Some
Variations of the Ranked Set Sampling Technique
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 Arwa Salem Maabreh and Mohammad Fraiwan Al-Saleh It is well known that the ranked set sampling (RSS) technique and its variations, when applicable, are more efficient for estimating the population mean than the usual random sampling techniques. Despite its fascinating applications, the Cauchy distribution has many unusual properties; for example, its moments either do not exist or are infinite, and its minimal sufficient statistics are just the order statistics. Given that the shape of the Cauchy distribution is similar to the normal one, it is advantageous to carry out statistical studies focused on estimating its parameters, in particular the location parameter, which is the median. In this paper, the estimation of the location parameter of the Cauchy distribution using RSS and some of its variations, namely Double RSS, Median RSS, Multistage RSS, and Steady-State RSS, is considered. The estimators are compared with each other and with their counterparts using simple random sampling (SRS). The findings show that RSS, and each of its variations evaluated in this study, is more efficient in estimating the location parameter than SRS. The comparison among the RSS variations reveals that Steady-State RSS is more efficient than the other variations. Moreover, to overcome some of the challenges of the Cauchy distribution, such as the non-existence of moments, a truncated Cauchy distribution is used; for this distribution, all moments are finite, as are the moments of the order statistics. Results show that RSS and Median RSS outperform SRS in estimating the location parameter, even for the truncated version of the Cauchy. Overall, this work identifies further advantages of RSS techniques.
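For readers unfamiliar with the mechanics of RSS, one balanced cycle can be sketched as follows (our own illustrative code, not the authors'; `population_draw` stands for any function returning a single observation, and ranking is done on the observed values rather than by judgment):

```python
import random

def ranked_set_sample(population_draw, m, rng):
    """One cycle of balanced ranked set sampling: draw m sets of m units,
    rank each set, and keep the i-th order statistic from the i-th set."""
    sample = []
    for i in range(m):
        ranked_set = sorted(population_draw(rng) for _ in range(m))
        sample.append(ranked_set[i])   # i-th smallest from set i
    return sample
```

The resulting m measurements are then averaged (or, for Cauchy-type problems as in the paper, combined into a median-based location estimate); the paper's Double, Median, Multistage and Steady-State variants modify how the sets are formed and ranked.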
PubDate: Mar 2023
- Reliability Evaluation of Linear or Circular Consecutive k-out-of-n: F
System Using Dynamic Bayesian Network
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 R Sakthivel and G Vijayalakshmi In the field of reliability theory, one of the most significant topics is determining the reliability of a complex system from the reliabilities of its individual components. The consecutive k-out-of-n:F system is used in telephone networks, photographing in nuclear accelerators, spacecraft relay stations, telecommunication systems consisting of relay stations connecting transmitter and receiver, microwave relay stations, the design of integrated circuits, vacuum systems in accelerators, oil pipeline systems and computing networks. The reliability estimation of the consecutive k-out-of-n:F system is studied because it plays an important role in many physical systems. Dynamic Bayesian networks are graphical models for time-varying probabilistic inference and causal analysis under system uncertainty. A dynamic Bayesian network is built for the proposed system since time is measured continuously. The consecutive k-out-of-n:F system fails if and only if k consecutive components fail; otherwise the system works. The contributions are the dynamic Bayesian network construction of the proposed system and the reliability analysis of the linear and circular consecutive k-out-of-n:F systems. Furthermore, the dynamic Bayesian network-based reliability is shown to be significantly higher than the reliability achieved by Malinowski; Preuss and Gao; Liu, Wang, Peng; and Amirian, Khodadadi, Chatrabgoun. The dynamic Bayesian network-based reliabilities of the linear and circular consecutive k-out-of-n:F systems are also compared.
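The failure rule stated in the abstract admits a well-known closed recursion in the i.i.d. linear case, which is a handy sanity check for any network-based computation. A sketch of that standard textbook recursion (not the paper's dynamic Bayesian network method):

```python
def consec_kn_f_reliability(n, k, p):
    """Reliability of a linear consecutive k-out-of-n:F system with i.i.d.
    components, each working with probability p: the system fails iff at
    least k consecutive components fail.  Uses the standard recursion
    R(j) = R(j-1) - p * q**k * R(j-k-1), with R(j) = 1 for j < k."""
    q = 1.0 - p
    R = [1.0] * (n + 1)            # R[j] = 1 for j < k: no run of k possible
    if n >= k:
        R[k] = 1.0 - q ** k
    for j in range(k + 1, n + 1):
        R[j] = R[j - 1] - p * (q ** k) * R[j - k - 1]
    return R[n]
```

The recursion conditions on the newest failure run: a system of j components fails either because the first j-1 already fail, or because components j-k+1..j all fail while component j-k works and the first j-k-1 form a working system.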
PubDate: Mar 2023
- Convergence Analysis of Space Discretization of Time Fractional Telegraph
Equation
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 Ebimene James Mamadu Henrietta Ify Ojarikre and Ignatius Nkonyeasua Njoseh The role of fractional differential equations in the advancement of science and technology cannot be overemphasized. The time fractional telegraph equation (TFTE) is a hyperbolic partial differential equation (HPDE) with applications in frequency transmission lines such as the telegraph wire, radio frequency, wire radio antennas and telephone lines, among others. Consequently, numerical procedures (such as the finite element method, the H1-Galerkin mixed finite element method and the finite difference method, among others) have become essential tools for obtaining approximate solutions for these HPDEs. It is also essential for these numerical techniques to converge to a given analytic solution at a certain rate. The Ritz projection is often used in the analysis of stability, error estimation, convergence and superconvergence of many mathematical procedures. Hence, this paper offers a rigorous and comprehensive analysis of convergence of the space-discretized time-fractional telegraph equation. To this effect, we define a temporal mesh on [0,T] with a finite element space in the Mamadu-Njoseh polynomial space, φm-1, of degree ≤ m-1. An interpolation operator (also of a polynomial space) is introduced along with the fractional Ritz projection to prove the convergence theorem. Essentially, we employ both the fractional Ritz projection and the interpolation technique, with a superclose estimate in the L2-norm between them, to avoid a difficult Ritz operator construction in achieving convergence of the method.
PubDate: Mar 2023
- MTSClust with Handling Missing Data Using VAR-Moving Average Imputation
Abstract: Publication date: Mar 2023
Source:Mathematics and Statistics Volume 11 Number 2 Embay Rohaeti I Made Sumertajaya Aji Hamim Wigena and Kusman Sadik Modeling and forecasting multivariate time series (MTS) data with multiple objects may be challenging, especially if the data exhibit volatility and missing values. Several studies on inflation data have been proposed, but they either did not use MTS data or did not consider missing data. This study aims to develop an approach that can obtain general models and forecasts for MTS data with volatility and missing data. We propose the Vector Autoregressive Moving Average Imputation Method - Multivariate Time Series Clustering (VAR-IMMA - MTSClust) to group the objects into clusters, which can then be used to obtain general models and forecasts. The study consists of three stages. The first is an imputation simulation stage, in which 10%, 20%, and 30% of the MTS data were randomly removed and imputed using the original VAR-IM and the proposed VAR-IMMA. The second is a clustering stage, in which six clustering methods, i.e., K-means Euclidean, K-means Manhattan, K-means DTW, PAM Euclidean, PAM Manhattan, and PAM DTW, were applied to both the complete data and the imputed data from the first stage. The third is a modeling and forecasting stage, in which the clusters from the second stage are used to obtain general models and forecasts for each cluster. The simulations were performed 1000 times and evaluated using RMSE, RMSSTD, R-squared, ARI, and balanced accuracy. The results showed that VAR-IMMA could increase the imputation accuracy by 10% in 50% of cases and by even more in another 25% of cases. This increase in imputation accuracy proved beneficial in the second stage, where clustering on the imputed data formed clusters similar to those of the complete data despite the missing values; K-means Euclidean and PAM Euclidean were two of the best methods.
Finally, the use of VAR-IMMA and PAM Euclidean on inflation rate data with missing values is illustrated. The imputed clusters have an ARI score of 0.57 and a balanced accuracy of 92%, leading to models and forecasts similar to those of the complete data.
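As a toy illustration of the moving-average side of the idea (a deliberate simplification of our own, with invented names; the actual VAR-IMMA couples a vector autoregression with the moving-average step), a neighbour-averaging imputer for a univariate series might look like:

```python
def moving_average_impute(series, window=1):
    """Fill each missing value (None) with the mean of up to `window`
    observed neighbours on each side.  Imputations are computed from the
    original series only, so earlier fills do not cascade into later ones."""
    out = list(series)
    for i, v in enumerate(out):
        if v is None:
            lo, hi = max(0, i - window), min(len(out), i + window + 1)
            neighbours = [series[j] for j in range(lo, hi)
                          if series[j] is not None]
            if neighbours:
                out[i] = sum(neighbours) / len(neighbours)
    return out
```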
PubDate: Mar 2023
- NE-Nil Clean Rings and Their Generalization
Abstract: Publication date: Jul 2023
Source:Mathematics and Statistics Volume 11 Number 4 Renas T. M. Salim and Nazar H. Shuker This article presents the concept of an NE-nil clean ring, which is a generalization of the strongly nil clean ring. A ring R is NE-nil clean if, for every a in R, there exists a1 in R such that aa1 = e with a − a1 = q and a1q = qa1, where q is nilpotent and e is idempotent. The aim of this article is to introduce this new type of ring and provide its fundamental properties. We also establish the relationship between NE-nil clean rings and 2-Boolean rings. Additionally, we demonstrate that the Jacobson radical and the right singular ideal over an NE-nil clean ring are nil ideals. Among other results, we prove that every strongly nil clean ring and every weak * nil clean ring is NE-nil clean. We establish that strongly 2-nil clean rings and NE-nil clean rings are equivalent. Furthermore, we introduce and investigate NT-nil clean rings, that is, rings in which for every a in R there exists a1 in R such that aa1 = t with a − a1 = q and a1q = qa1, where t is tripotent and q is nilpotent, and show that these rings are a generalization of NE-nil clean rings. We provide the basic properties of these rings and explore their relationship with NE-nil clean and Zhou rings.
PubDate: Jul 2023
- A New Procedure for Multiple Outliers Detection in Linear Regression
Abstract: Publication date: Jul 2023
Source:Mathematics and Statistics Volume 11 Number 4 Ugah Tobias Ejiofor Arum Kingsley Chinedu Charity Uchenna Onwuamaeze Everestus Okafor Ossai Henrrietta Ebele Oranye Nnaemeka Martin Eze Mba Emmanuel Ikechukwu Ifeoma Christy Mba Comfort Njideka Ekene-Okafor Asogwa Oluchukwu Chukwuemeka and Nkechi Grace Okoacha In this paper, a simple asymptotic test statistic for identifying multiple outliers in linear regression is proposed. Sequential methods of multiple-outlier detection test for the presence of a single outlier each time the procedure is applied. That is, the most extreme outlying observation (the one with the largest absolute internally studentized residual from the original fit of the model to the entire set of observations) is tested first. If the test flags this observation as an outlier, it is deleted and the model is refitted to the remaining (reduced) observations. Then the observation with the next largest absolute internally studentized residual from the reduced sample is tested, and so on. This procedure of deleting observations and recomputing studentized residuals continues until the null hypothesis of no outliers fails to be rejected. In contrast, the procedure proposed in this work computes and uses only one set of internally studentized residuals, obtained from fitting the model to the original data, throughout the test; deleting an observation, refitting the model to the reduced sample and recomputing the absolute internally studentized residuals at each stage are thereby avoided. The test statistic is incorporated into a procedure that entails a sequential application of a function of the internally studentized residuals. The procedure is a straightforward multistage method based on a result giving large-sample properties of the internally studentized residuals.
Approximate critical values of this test statistic are obtained from approximations based on the Bonferroni inequality, since exact values are not available. The new test statistic is very simple to compute, and it is efficient and effective in large data sets, where more complex methods are difficult to apply because of their enormous computational demands. The results of the simulation study and numerical examples clearly show that the proposed test statistic is very successful in identifying outlying observations.
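The quantity driving the procedure, the internally studentized residual from a single fit, can be sketched as follows (illustrative code of ours, not the authors' implementation; an intercept term is assumed):

```python
import numpy as np

def internally_studentized_residuals(x, y):
    """r_i = e_i / (s * sqrt(1 - h_ii)) from one OLS fit of y on x
    (with intercept), where h_ii are the hat-matrix diagonals and
    s^2 = e'e / (n - p) is the usual residual variance estimate."""
    X = np.column_stack([np.ones(len(y)), np.asarray(x, dtype=float)])
    y = np.asarray(y, dtype=float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat (projection) matrix
    n, p = X.shape
    s2 = e @ e / (n - p)
    return e / np.sqrt(s2 * (1.0 - np.diag(H)))
```

A useful check is the classical bound |r_i| ≤ sqrt(n − p), which is one reason critical values (here Bonferroni-based) are needed to calibrate the test.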
PubDate: Jul 2023
- Overtrees and Their Chromatic Polynomials
Abstract: Publication date: Jul 2023
Source:Mathematics and Statistics Volume 11 Number 4 Iakov M. Erusalimskiy and Vladimir A. Skorokhodov In this paper, graphs called overtrees are introduced and studied. These are connected graphs that contain a single simple cycle; in terms of the number of edges, they are the connected graphs immediately following the trees. An overtree can be obtained from a tree by adding an edge connecting two non-adjacent vertices. The same class of graphs can also be defined as the class of graphs obtained from trees by replacing one vertex of the tree with a simple cycle. The main characteristics of an overtree are the number of its vertices and the number of vertices on its simple cycle. A formula for the chromatic polynomial of an overtree is obtained, which is determined by these two characteristics alone. As a consequence, a formula is obtained for the chromatic function of a graph built from a tree by replacing some of its vertices (possibly all) with simple cycles of arbitrary length. It follows from these formulas that any overtree with an even-length cycle is two-colorable, and any overtree with an odd-length cycle is three-colorable. The same is true for graphs obtained from trees by replacing some vertices with simple cycles.
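Since attaching a pendant vertex multiplies a chromatic polynomial by (k − 1), and an overtree is a simple cycle with trees attached, standard graph theory gives the cycle polynomial times a pendant factor. A sketch under that reading (our reconstruction from the abstract's description, not the paper's stated formula):

```python
def overtree_chromatic(n, m, k):
    """Number of proper k-colorings of an overtree with n vertices whose
    unique cycle has m vertices (3 <= m <= n):
    P(k) = [(k-1)**m + (-1)**m * (k-1)] * (k-1)**(n-m),
    i.e. the chromatic polynomial of the cycle C_m times one factor (k-1)
    per tree vertex hanging off the cycle."""
    return ((k - 1) ** m + (-1) ** m * (k - 1)) * (k - 1) ** (n - m)
```

The stated coloring facts drop out directly: for even m the polynomial is positive at k = 2, while for odd m it vanishes at k = 2 and is positive at k = 3.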
PubDate: Jul 2023
- Fractional Differential Equations and Matrix Bicomplex Two-parameter
Mittag-Leffler Functions
Abstract: Publication date: Jul 2023
Source:Mathematics and Statistics Volume 11 Number 4 A. Thirumalai K. Muthunagai and M. Kaliyappan The skew field of quaternions is the best known extension of the field of complex numbers. The quaternions form a division ring, but the handicap is the loss of commutativity. Thus the four-dimensional commutative algebra of bicomplex numbers, containing the complex numbers as a subalgebra, came into existence by considering two imaginary units. Conventional calculus is generalized by fractional calculus, which extends derivatives of integer order to fractional order. Owing to their vast applications across the disciplines of Science and Engineering, Mittag-Leffler functions have become prominent. Our contribution here combines all three streams mentioned above. In our research findings, bicomplex two-parameter Mittag-Leffler functions are obtained as the solutions of sets of linear fractional differential equations in bicomplex space. A block diagonal matrix is a matrix whose principal diagonal blocks are square matrices lying along its diagonal. A Jordan block is an upper triangular matrix with a single eigenvalue repeated along the principal diagonal, 1s just above the principal diagonal, and all other entries 0. A Jordan canonical form is a block diagonal matrix in which each block is a Jordan block. The minimal polynomial of a matrix is the monic polynomial of least degree that annihilates the matrix. By using the methods of the minimal polynomial and the Jordan canonical matrix, we compute matrix Mittag-Leffler functions. The solutions obtained for the numerical examples are visualized and interpreted using MATLAB.
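The scalar two-parameter Mittag-Leffler function underlying such solutions can be evaluated from its defining power series (a naive truncation of ours, adequate only for moderate arguments; the paper's bicomplex and matrix versions build on this scalar function):

```python
import math

def mittag_leffler(z, alpha, beta, terms=80):
    """Two-parameter Mittag-Leffler function by truncated power series:
    E_{alpha,beta}(z) = sum_{j>=0} z**j / Gamma(alpha*j + beta)."""
    return sum(z ** j / math.gamma(alpha * j + beta) for j in range(terms))
```

Classical special cases serve as checks: E_{1,1}(z) = exp(z) and E_{2,1}(z) = cosh(sqrt(z)).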
PubDate: Jul 2023
- To Enhance New Interval Arithmetic Operations in Solving Linear
Programming Problem Using Interval-valued Trapezoidal Neutrosophic Numbers
Abstract: Publication date: Jul 2023
Source:Mathematics and Statistics Volume 11 Number 4 S Sinika and G Ramesh In real-life scenarios, indeterminacy arises everywhere, in fields including physics, mathematics, economics, philosophy and the social sciences. It occurs whenever prediction is difficult: when no predetermined outcome is obtained, or when outcomes are not fixed or several outcomes are possible. Overcoming indeterminacy is one of the most prominent duties for everyone seeking a confusion-free society, and so the neutrosophic concept came into force to analyze indeterminacy explicitly. In contrast, a fuzzy set assigns only a membership grade, and an intuitionistic set allocates membership and non-membership grades to elements. Decision-makers can use neutrosophic settings to model uncertainty and ambiguity in complex systems for flexible analysis, and the neutrosophic environment with interval numbers lets one handle such situations efficiently. Hence we utilize interval-valued trapezoidal neutrosophic numbers for more flexibility; a trapezoidal number together with interval-valued truth, indeterminacy, and falsity grades constitutes the parameters of these neutrosophic numbers. A de-neutrosophication technique that produces crisp numbers again leads to vagueness in real-life circumstances, so our primary goal is to develop a new de-neutrosophication strategy whose output is an interval number instead of a crisp number. This paper provides an overview of de-neutrosophication, a new ranking technique based on interval numbers, and some extended neutrosophic linear programming theorems. Further, an interval version of the simplex method and the Robust Two-Step Method (RTSM) are used to solve an interval-valued trapezoidal neutrosophic linear programming problem. Finally, the paper highlights the limitations and advantages of the proposed technique for improving problem-solving in a wide range of fields.
PubDate: Jul 2023
- Self-Adjoint Operators in Bilinear Spaces
Abstract: Publication date: Jul 2023
Source:Mathematics and Statistics Volume 11 Number 4 Sabarinsyah Hanni Garminia Pudji Astuti and Zelvin Mutiara Leastari In this research, a bilinear form is regarded as an extension of the inner product, since a symmetric bilinear form is equivalent to an inner product over the field of real numbers. Concepts in bilinear spaces, such as the orthogonality of two vectors, the orthogonal complement of a subspace, the adjoint of a linear operator and the notion of a closed subspace, are defined as extensions of the corresponding concepts in inner product spaces. In the context of closed subspaces, we identify necessary and sufficient conditions for a linear operator on a Hilbert space to be continuous. These results open up the opportunity to introduce the concept of pseudo-continuous linear mappings in bilinear spaces. We obtain the result that the space of pseudo-continuous linear mappings in a bilinear space is related to the space of linear mappings that admit an adjoint. We also obtain the result that the structure of bounded linear operators on Hilbert spaces can be extended to pseudo-continuous operator structures in bilinear spaces. In this study, we generalize the properties of self-adjoint operators on infinite-dimensional inner product spaces to bilinear spaces, including closure under addition and scalar multiplication, commutativity properties, properties of inverse operators, properties of the zero operator, properties of polynomial operators over the real field, and orthogonality of eigenspaces corresponding to different eigenvalues.
PubDate: Jul 2023
- Hybrid Correlation Coefficient of Spearman with MM-Estimator
Abstract: Publication date: Jul 2023
Source:Mathematics and Statistics Volume 11 Number 4 Siti Hajar binti Abu Bakar Muhamad Safiih Bin Lola Anton Abdulbasah Kamil Nurul Hila Zainuddin and Mohd Tajuddin Abdullah The Spearman rho nonparametric correlation coefficient is widely used to measure the strength and degree of association between two variables. However, outliers in the data can skew the results, leading to inaccurate conclusions, as the Spearman correlation coefficient is sensitive to outliers. Thus, a robust approach is used to construct a model that is highly resistant to data contamination. The robustness of an estimator is measured by its breakdown point, the smallest fraction of outliers in a sample that can affect the estimator entirely. The aim of this study is two-fold. Firstly, we propose a robust Spearman correlation coefficient model based on the MM-estimator, called the MM-Spearman correlation coefficient. Secondly, the performance of the proposed model is tested by Monte Carlo simulation and on contaminated air pollution data from Kuala Terengganu, Terengganu, Malaysia, with the data contaminated by 10% to 50% outliers. The properties of the MM-Spearman correlation coefficient were evaluated by statistical measurements such as the standard error, mean squared error, root mean squared error and bias. The MM-Spearman correlation coefficient model outperformed the classical model, producing significantly smaller standard error, mean squared error, and root mean squared error values, and the hybrid model demonstrated high robustness, efficiently handling data contamination of up to 50%.
However, the study has the limitation that it can only overcome data contamination up to a maximum of 50%. Despite this limitation, the proposed model provides accurate and efficient results, enabling management authorities to make sound decisions unaffected by contaminated data. The MM-Spearman correlation coefficient model is thus a valuable tool for researchers and decision-makers, allowing them to analyze data with a high degree of accuracy and robustness, even in the presence of outliers.
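For reference, the classical (non-robust) Spearman rho that the paper robustifies, in the tie-free case (illustrative code; the MM-estimation step is not shown):

```python
def spearman_rho(x, y):
    """Classical Spearman rank correlation for samples without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = rank difference."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

A single wild observation can shift the ranks of every point, which is the sensitivity the MM-estimator-based hybrid is designed to suppress.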
PubDate: Jul 2023
- Alternative Algebra for Multiplication and Inverse of Interval Number
Abstract: Publication date: Jul 2023
Source:Mathematics and Statistics Volume 11 Number 4 Mashadi Abdul Hadi and Sukono Recently, many forms of interval arithmetic have been proposed. Some define operations only for nonnegative interval numbers, whereas others cover all forms of intervals. However, there are not many differences among the arithmetics on offer, particularly for addition and subtraction. For multiplication, division and inversion, many types of operations have been offered, but the problem remains how to determine the inverse of an interval number. Many alternatives have been proposed, yet they work only for certain cases, and in many cases the product of an interval number and its proposed inverse is not equal to the unit interval. Based on these conditions, this article analyzes the issues with several existing interval algebras, and based on this analysis an alternative is proposed for the multiplication and the inverse of an interval number. It begins by defining the positivity of an interval number via its mid-point, after which the algebraic operations, especially multiplication, are constructed. From the multiplication operation, the inverse of an interval number can be constructed. Furthermore, it is proven that for interval numbers satisfying this positivity condition, there exists an interval number acting as the multiplicative inverse.
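The difficulty the authors address is easy to demonstrate with classical (Moore) interval arithmetic, where an interval times its endpoint-wise reciprocal is generally not the degenerate unit interval [1, 1] (illustrative code; the paper's alternative algebra is not reproduced here):

```python
def interval_mul(a, b):
    """Classical (Moore) interval product: min and max of the cross products."""
    prods = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(prods), max(prods))

def interval_reciprocal(a):
    """Classical reciprocal of an interval not containing zero."""
    return (1.0 / a[1], 1.0 / a[0])
```

For a non-degenerate interval such as [2, 4], the product with its reciprocal widens to [0.5, 2], so the classical "inverse" fails to invert; only degenerate (point) intervals recover the identity.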
PubDate: Jul 2023
- Optimal Stochastic Allocation in Multivariate Stratified Sampling
Abstract: Publication date: Jul 2023
Source:Mathematics and Statistics Volume 11 Number 4 Mahfouz Maha I. Rashwan Mahmoud M. and Khadr Zeinab A. Optimal allocation of a stratified sample is obtained either by minimizing the variance of the sample estimate for a fixed total cost of the survey or by minimizing the total cost of the survey for a fixed precision of the estimate. The survey cost and the variance of the estimate move in opposite directions; that is, minimizing either results in increasing the other. Moreover, in practice, due to uncertainty in the population data, the variances as well as the costs should be treated as random variables. In this paper, a multivariate optimal stochastic compromise allocation is proposed using a multi-objective mathematical programming model that simultaneously minimizes both the total cost of the survey and the individual variances of the overall stratified mean of each characteristic of interest. The proposed stochastic programming model is solved using the Chance-Constrained Programming technique. The proportional increase in the variance of the estimator under the optimum variance and under the optimum cost is set as a constraint and is upper-bounded by a pre-determined quantity. A simulation-based comparative study is conducted to assess the performance of the proposed allocation against other optimal allocation techniques. Based on the comparison criteria, the findings show that the suggested model produced the most efficient estimators with the highest precision, and an efficient allocation of the sample size to the strata that accounts for the differences in strata sizes and the variation within strata.
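As a deterministic baseline against which stochastic compromise allocations such as the paper's can be compared, the textbook Neyman allocation (equal per-unit costs, single characteristic) assigns stratum sample sizes proportional to N_h * S_h (a standard formula, not the paper's model):

```python
def neyman_allocation(n, sizes, sds):
    """Neyman allocation of total sample size n across strata:
    n_h = n * (N_h * S_h) / sum_j (N_j * S_j), where N_h is the stratum
    size and S_h the stratum standard deviation.  Returns real-valued
    allocations; in practice these would be rounded to integers."""
    weights = [N * S for N, S in zip(sizes, sds)]
    total = sum(weights)
    return [n * w / total for w in weights]
```

Large, highly variable strata receive proportionally more of the sample, which is the behaviour the abstract's final sentence describes for the proposed stochastic model.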
PubDate: Jul 2023
- Existence, Uniqueness, and Stability Results for Fractional Differential
Equations with Lacunary Interpolation by the Spline Method
Abstract: Publication date: Jul 2023
Source:Mathematics and Statistics Volume 11 Number 4 Ridha G. Karem Karwan H. F. Jwamer and Faraidun K. Hamasalh Although there are theoretical conclusions about the existence, uniqueness, and properties of solutions to ordinary and partial differential equations, only the simplest particular problems can usually be solved explicitly, especially when nonlinear terms are involved, so approximations are typically developed. The main goal of this paper is to investigate and improve approximate solutions, and to propose for the first time new approximate solution techniques, for the fractional-order initial value problem (1), using lacunary interpolation with a fractional-degree spline function. From a practical standpoint, the numerical solution of these differential equations is crucial because only a small portion of them can be resolved analytically. For fractional differential equations that are sensitive to the initial conditions, we provide a fractional spline approach. The polynomial-coefficient spline interpolation is constructed using the Caputo fractional integral and derivative. For the given spline function, a stability analysis is completed after error bounds are investigated. The suggested technique is numerically illustrated using three cases. The outcomes demonstrate how effective the fractional spline technique is in interpolating the coefficients with fractional polynomials. Finally, to demonstrate the effectiveness and correctness of the suggested strategy, general-purpose programs are created in MATLAB and applied to a number of instructive cases.
PubDate: Jul 2023
- Some Results on Sequences in Banach Spaces
Abstract: Publication date: Jul 2023
Source:Mathematics and Statistics Volume 11 Number 4 B. M. Cerna Maguina and Miguel A. Tarazona Giraldo In this work, we prove in a very particular way the Dvoretzky-Rogers theorem, Schur's theorem, Orlicz's theorem, and Theorem 14.2, in the versions presented in the text [3]. Our proofs of these theorems consist in establishing an appropriate link between the object of study and the relation asserting that, for any real numbers , there exists a unique real number such that . Once the nexus is established, we use the definition of weak or strong convergence together with the Hahn-Banach theorem to obtain the desired results. The relation is obtained by decomposing the Hilbert space as the direct sum of a closed subspace and its orthogonal complement. Since the dimension of the space is finite, any linear functional defined on the space is continuous, which guarantees that the kernel of that linear functional is closed in the space . Therefore the space decomposes as the direct sum of the kernel of the continuous linear functional and its orthogonal complement, that is: , where the dimension of ker and the dimension of .
PubDate: Jul 2023
- A New Type of Single Server Queue Operating in A Multi-level Environment
with Customer Impatience
Abstract: Publication date: Jul 2023
Source:Mathematics and Statistics Volume 11 Number 4 Akshaya Ramesh and S. Udayabaskaran A new type of single server queue is considered, in which the server requests an assignment in a multi-level environment and customers develop impatience during the assignment process. The environment has N levels, and the server is assigned to operate in one of these levels with level-dependent arrival and service rates. Customers arrive at the system at all times, and the system has an infinite buffer. The assignment is made by a random switch, which can initiate an assignment process only if at least one customer is in the system. The server working in any level of the environment reports back to the random switch after serving the last customer in that level. Customers are never flushed out. The random switch initiates an assignment process immediately upon the arrival of a customer to the system. The assignment time is random, and during the assignment period customers are permitted to join the system. Once the assignment process starts, each customer waiting in the buffer starts a random impatience timer and leaves the system if the timer expires before the server assignment is made. For this model, steady-state probabilities are found and a performance analysis is carried out.
PubDate: Jul 2023
- A Piecewise Linear Collocation with Closed Newton Cotes Scheme for Solving
Abstract: Publication date: Jul 2023
Source:Mathematics and Statistics Volume 11 Number 4 Nor Syahida Mohamad Jumat Sulaiman Azali Saudi and Nur Farah Azira Zainal In this paper, an efficient and reliable algorithm is established to solve Fredholm integral equations (FIEs) of the second kind based on a lower-order piecewise polynomial and a lower-order quadrature method, namely the Half-Sweep Composite Trapezoidal (HSCT) rule, which is used to discretize any integral term. Furthermore, building on the complexity-reduction benefit of the half-sweep iteration concept presented in previous studies based on the cell-centered approach, this paper derives an HSCT piecewise linear collocation approximation equation from the discretization of the proposed problem, considering a distribution of node points of vertex-centered type. Using half-sweep collocation node points over the linear collocation approximation equation, we construct a system of HSCT linear collocation approximation equations whose coefficient matrix is large-scale and dense. To obtain the piecewise linear collocation solution of this linear system, we consider the efficient Half-Sweep Successive Over-Relaxation (HSSOR) iterative method. Several numerical experiments with the proposed iterative methods are carried out on three test examples, and the results, based on three parameters, namely number of iterations, execution time, and maximum absolute error, are recorded and compared against two other iterative methods, Full-Sweep Gauss-Seidel (FSGS) and Half-Sweep Gauss-Seidel (HSGS).
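For orientation, the full-sweep composite trapezoidal rule that the half-sweep scheme thins out can be sketched as follows. This is a minimal illustration of the underlying quadrature only, not the half-sweep variant or the collocation system itself.

```python
def composite_trapezoidal(f, a, b, n):
    """Composite trapezoidal rule over [a, b] with n equal subintervals:
    h * (f(a)/2 + f(x_1) + ... + f(x_{n-1}) + f(b)/2)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

# Integral of x^2 over [0, 1] is 1/3; the rule's error is O(h^2)
approx = composite_trapezoidal(lambda x: x * x, 0.0, 1.0, 1000)
```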
PubDate: Jul 2023
- A New Conjugate Gradient Algorithm for Minimization Problems Based on the
Modified Conjugacy Condition
Abstract: Publication date: Jul 2023
Source:Mathematics and Statistics Volume 11 Number 4 Dlovan Haji Omar Salah Gazi Shareef and Bayda Ghanim Fathi Optimization refers to the process of finding the best possible solution to a problem within a given set of constraints. It involves maximizing or minimizing a specific objective function while adhering to specific constraints, and it is used in various fields, including mathematics, engineering, economics, computer science, and data science. The objective function can be a simple equation, a complex algorithm, or a mathematical model that describes a system or process. Various optimization techniques are available, including linear programming, nonlinear programming, genetic algorithms, simulated annealing, and particle swarm optimization, each using different algorithms to search for the optimal solution. In this paper, the goal of unconstrained optimization is to minimize an objective function of real variables with no restrictions on their values. Based on a modified conjugacy condition, we offer a new conjugate gradient (CG) approach for nonlinear unconstrained optimization problems. The new method satisfies the descent condition and the sufficient descent condition. We compare the numerical results of the new method with the Hestenes-Stiefel (HS) method. Our new method is quite effective in terms of the number of iterations (NOI) and the number of function evaluations (NOF), as demonstrated by numerical results on certain well-known nonlinear test functions.
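A minimal sketch of nonlinear CG with the classical Hestenes-Stiefel beta, the comparison baseline named above, using a simple Armijo backtracking line search. The paper's modified conjugacy condition is not reproduced here, and the quadratic test problem below is hypothetical.

```python
def hs_cg(f, grad, x, tol=1e-8, max_iter=1000):
    """Nonlinear conjugate gradient with Hestenes-Stiefel beta."""
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(max_iter):
        if max(abs(gi) for gi in g) < tol:
            break
        # Armijo backtracking line search
        alpha, fx = 1.0, f(x)
        slope = sum(gi * di for gi, di in zip(g, d))
        while alpha > 1e-12 and \
              f([xi + alpha * di for xi, di in zip(x, d)]) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(x)
        y = [gn - gi for gn, gi in zip(g_new, g)]
        denom = sum(di * yi for di, yi in zip(d, y))
        # Hestenes-Stiefel: beta = g_{k+1}^T y_k / (d_k^T y_k)
        beta = sum(gn * yi for gn, yi in zip(g_new, y)) / denom if denom else 0.0
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# Hypothetical test problem with minimum at (1, -2)
xmin = hs_cg(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
             lambda v: [2 * (v[0] - 1), 2 * (v[1] + 2)], [0.0, 0.0])
```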
PubDate: Jul 2023
- Properties of Classes of Analytic Functions of Fractional Order
Abstract: Publication date: Jul 2023
Source:Mathematics and Statistics Volume 11 Number 4 K R Karthikeyan and Senguttuvan Alagiriswamy The study of univalent function theory is vast and complicated, so simplifying assumptions are necessary. In view of the Riemann mapping theorem, the most apt step is to replace an analytic function defined on an arbitrary domain with an analytic function defined in the unit disc having a Taylor series expansion of the form . The powers of such series are usually integers, and the prerequisite results likewise support the study of analytic functions whose series expansions have integer powers. The main deviation presented here is that we define a subclass of analytic functions using a Taylor series whose powers are non-integers. To make this study more comprehensive, the Janowski function, which maps the unit disc onto a right half plane, is used in conjunction with two primary tools, namely subordination and the Hadamard product. Motivated by the well-known class of λ-convex functions, we define a fractional differential operator that is a convex combination of two analytic functions. Using this operator, we introduce and study a new class of analytic functions involving a conic region impacted by the Janowski function. Necessary and sufficient conditions, coefficient estimates, and growth and distortion bounds are obtained for the defined function class. Since studies of subclasses of analytic functions with fractional powers are rare, we point out several closely related studies by various authors. The superordinate function, however, is a familiar function with many applications.
PubDate: Jul 2023
- Kumaraswamy Generalized Half-Logistic Distribution
Abstract: Publication date: Jul 2023
Source:Mathematics and Statistics Volume 11 Number 4 Wasan AL Shemmari and Ahmed AL Adilee Statistical distributions play an essential part in interpreting experimental data; nevertheless, choosing a distribution appropriate for the available data is not an easy task. Extending a known family of distributions to construct a new one is a time-honored technique. We propose a new distribution named the Kumaraswamy generalized half-logistic distribution (KW-GHLD), obtained by adding two parameters to the existing model to increase its ability to fit complex data sets. Many mathematical and statistical properties are investigated, such as the survival function, the hazard function, the moments, the moment generating function, the incomplete moments, the Renyi entropy, the stochastic ordering, the probability-weighted moments, the order statistics, and the quantile function. The maximum likelihood method is used to estimate the unknown parameters of the KW-GHLD. We study the efficacy of the proposed distribution by applying it to a real data set, assessed with goodness-of-fit measures (AIC, BIC, CAIC, and HQIC) and compared with the original distribution (GHLD); the proposed distribution produced the best outcomes, which allowed us to determine that it is effective. Finally, we present several conclusions related to our findings.
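The Kumaraswamy-G construction adds the two shape parameters via F(x) = 1 - (1 - G(x)^a)^b for a baseline CDF G. As a hedged sketch, the code below uses the standard (not generalized) half-logistic baseline; the paper's generalized half-logistic baseline may carry an extra parameter not shown here.

```python
import math

def half_logistic_cdf(x):
    """Standard half-logistic baseline CDF, defined for x >= 0 (assumption)."""
    return (1.0 - math.exp(-x)) / (1.0 + math.exp(-x))

def kw_g_cdf(x, a, b, base_cdf=half_logistic_cdf):
    """Kumaraswamy-G CDF: F(x) = 1 - (1 - G(x)^a)^b, shape parameters a, b > 0."""
    g = base_cdf(x)
    return 1.0 - (1.0 - g ** a) ** b

p = kw_g_cdf(1.0, 1.0, 1.0)  # reduces to the baseline when a = b = 1
```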
PubDate: Jul 2023
- Performance Analysis of a Markovian Model for Two Heterogeneous Servers
Accompanied by Retrial, Impatience, Vacation and Additional Server
Abstract: Publication date: Jul 2023
Source:Mathematics and Statistics Volume 11 Number 4 G. Vinitha P. Godhandaraman and V. Poongothai This paper presents a Markovian retrial queue with two heterogeneous servers, an additional server, impatience behavior, and vacations. An arriving customer who finds an accessible server receives immediate service. Otherwise, if both servers are engaged, an entering customer joins the orbit to retry for service after some random time. Customers in the orbit who find the waiting time longer than expected may leave without receiving service. We consider two servers with different service rates providing service on a "First Come, First Served" basis. When the number of customers in orbit grows, an additional server is activated immediately to reduce the queue size. After the orbit becomes empty, the server goes for maintenance activity. A practical application is given to justify the model. The proposed model is formulated as a birth-death process and the governing equations are obtained from the Chapman-Kolmogorov equations. Finally, we solve the equations using a recursive approach and derive performance indices to improve quality and efficiency.
PubDate: Jul 2023
- A Facet Defining of the Dicycle Polytope
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Mamane Souleye Ibrahim and Oumarou Abdou Arbi In this paper, we consider the polytope of all elementary dicycles of a digraph . The dicycle problem, solved by polyhedral approaches in graph theory and combinatorial optimization, has been extensively studied in the literature; cutting-plane and branch-and-cut algorithms are therefore unavoidable for solving such a combinatorial optimization problem exactly. For this purpose, we introduce a new family of valid inequalities, called alternating 3-arc path inequalities, for the polytope of elementary dicycles . These inequalities can be used in cutting-plane and branch-and-cut algorithms to construct strengthened relaxations of a linear formulation of the dicycle problem. To prove that the alternating 3-arc path inequalities are facet-defining, in contrast to the usual approach, which essentially determines the affine subspace of a linear description of the considered polytope, we resort to constructive algorithms. Given the set of arcs of the digraph , the devised algorithms are based on the fact that, starting from a first elementary dicycle, all other dicycles are iteratively generated by replacing some arcs of previously generated dicycles with others, such that the current elementary dicycle contains an arc that does not belong to any previously generated dicycle. These algorithms generate dicycles with affinely independent incidence vectors that satisfy the alternating 3-arc path inequalities with equality. It can easily be verified that all the devised algorithms are polynomial in time complexity.
PubDate: Jan 2023
- Brachistochrone Curve Representation via Transition Curve
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Rabiatul Adawiah Fadzar and Md Yushalify Misro The brachistochrone curve is the optimal curve giving the fastest frictionless descent path of an object sliding under a uniform gravitational field. In this paper, the brachistochrone curve is reconstructed using two different basis functions, namely the Bézier curve and the trigonometric Bézier curve with shape parameters. The brachistochrone curve between two points is approximated via a C-shape transition curve. The travel time and curvature are evaluated and compared for each curve. This research reveals that the trigonometric Bézier curve provides the closest approximation of the brachistochrone curve in terms of travel time estimation, and that its shape parameters provide better shape adjustability than the Bézier curve.
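A classical Bézier curve, the first basis above, can be evaluated with de Casteljau's algorithm. This is a generic sketch: the control points below are hypothetical, and the trigonometric Bézier basis with shape parameters is not reproduced here.

```python
def de_casteljau(points, t):
    """Evaluate a Bézier curve at parameter t in [0, 1] by repeated
    linear interpolation between consecutive control points."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical control points for a C-shaped cubic segment
pt = de_casteljau([(0, 0), (0, 2), (2, 2), (2, 0)], 0.5)
```

Subdividing the parameter range and summing chord lengths over the resulting points is one simple way to approximate arc length, and hence travel time, numerically.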
PubDate: Jan 2023
- A Note on External Direct Products of BP-algebras
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Chatsuda Chanmanee Rukchart Prasertpong Pongpun Julatha U. V. Kalyani T. Eswarlal and Aiyared Iampan The notion of BP-algebras was introduced by Ahn and Han [2] in 2013 and is related to several classes of algebras; it has since been examined by several researchers. The concept of the direct product (DP) [21] was initially developed for groups, where some of its features were established, and was then extended to other algebraic structures. Lingcong and Endam [16] examined the DP of (0-commutative) B-algebras and B-homomorphisms in 2016 and discovered several related features, one being that the DP of two B-algebras is a B-algebra. Later, the concept of the DP of B-algebras was expanded to a finite family of B-algebras, and some of the connected issues were researched. In this work, the external direct product (EDP), a generalization of the DP, is established, and results on the EDP for certain subsets of BP-algebras are determined. In addition, we define the weak direct product (WDP) of BP-algebras. In light of EDP BP-algebras, we conclude by presenting several essential theorems of (anti-)BP-homomorphisms.
PubDate: Jan 2023
- New Results on Face Magic Mean Labeling of Graphs
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 S. Vani Shree and S. Dhanalakshmi In the mid-1960s, a conjecture of Kotzig and Ringel and a study by Rosa sparked interest in graph labeling. Our primary objective is to examine some types of graphs that admit Face Magic Mean Labeling (FMML). A bijection is called a (1,0,0) F-face magic mean labeling of if the induced face labeling ; a bijection is called a (1,1,0) F-face magic mean labeling of if the induced face labeling . In this paper we investigate the (1,0,0) F-face magic mean labeling of ladder graphs, the tortoise graph, and the middle graph of a path graph. The (1,0,0) and (1,1,0) F-face magic mean labelings are also verified for the ortho chain square cactus graph, the para chain square cactus graph, and some snake-related graphs such as triangular snake graphs and quadrilateral snake graphs. Labeled graphs serve as valuable mathematical models for a wide range of applications, including the creation of good codes, synch-set codes, missile guidance codes, and convolutional codes with optimal autocorrelation characteristics. They help in developing the most efficient non-standard integer encodings; labeled graphs have also been used to identify ambiguities in the access protocols of communication networks, in database management, to identify the best circuit layouts, etc.
PubDate: Jan 2023
- A New Quasi-Newton Method with PCG Method for Nonlinear Optimization
Problems
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Bayda Ghanim Fathi and Alaa Luqman Ibrahim The major stationary iterative method used to solve nonlinear optimization problems is the quasi-Newton (QN) method. Symmetric Rank-One (SR1) is a method in the quasi-Newton family. This algorithm converges towards the true Hessian quickly and has computational advantages for sparse or partially separable problems [1]; investigating the efficiency of the SR1 algorithm is therefore significant. However, the matrix generated by the SR1 update is not guaranteed to remain positive definite, and the denominator may vanish or become zero. To overcome these drawbacks of the SR1 method and achieve better performance than the standard SR1 method, in this work we derive a new vector depending on the Barzilai-Borwein step size to obtain a new SR1 method; this updating formula is then combined with the preconditioned conjugate gradient (PCG) method. Using an inexact line search with the strong Wolfe conditions, the new SR1 method is proposed and its performance evaluated against the conventional SR1 method. It is proven that the updated matrix of the new SR1 method, , is symmetric and positive definite, given that is initialized to the identity matrix. In this study, the proposed method solved 13 problems effectively in terms of the number of iterations (NI) and the number of function evaluations (NF); regarding NF, the new SR1 method outperformed the classic SR1 method. The proposed method is also more efficient than the original method in solving relatively large-scale problems (5,000 variables). The numerical results show that the proposed method is significantly faster, effective, and suitable for solving large-dimension nonlinear equations.
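The classical SR1 update and the standard skipping safeguard against the vanishing denominator mentioned above can be sketched as follows. This is the textbook rule, not the paper's Barzilai-Borwein-based modification; the test matrices are hypothetical.

```python
def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-one update of a Hessian approximation B:
    B_new = B + r r^T / (r^T s), where r = y - B s.
    The update is skipped when the denominator is near zero (standard safeguard)."""
    n = len(B)
    r = [yi - sum(B[i][j] * s[j] for j in range(n)) for i, yi in enumerate(y)]
    denom = sum(ri * si for ri, si in zip(r, s))
    norm = (sum(ri * ri for ri in r) ** 0.5) * (sum(si * si for si in s) ** 0.5)
    if abs(denom) < tol * max(norm, 1.0):
        return B  # skip: denominator too small for a stable update
    return [[B[i][j] + r[i] * r[j] / denom for j in range(n)] for i in range(n)]

# Starting from the identity, one update enforces the secant condition B_new s = y
B_new = sr1_update([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0], [2.0, 0.0])
```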
PubDate: Jan 2023
- Adaptive Step Size Stochastic Runge-Kutta Method of Order 1.5(1.0) for
Stochastic Differential Equations (SDEs)
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Noor Julailah Abd Mutalib Norhayati Rosli and Noor Amalina Nisa Ariffin Stiff stochastic differential equations (SDEs) have solutions with sharp turning points that force the use of very small step sizes to capture their behavior. Since the step size must be set as small as possible, a fixed-step-size method incurs high computational cost; a variable-step-size method, in which the step size is more flexible, is therefore needed. This paper is devoted to the development of an embedded stochastic Runge-Kutta (SRK) pair method for SDEs. The proposed method is an adaptive-step-size SRK method, constructed by embedding an SRK method of convergence order 1.0 into an SRK method of convergence order 1.5. The embedding technique is suitable for adaptive step-size implementation, since an error estimate is obtained at each step. Numerical experiments are performed to demonstrate the efficiency of the method. The results show that the adaptive-step-size SRK method of order 1.5(1.0) gives the smallest global error compared with the fixed-step-size SRK4, Euler, and Milstein methods. Hence, this method is reliable in approximating the solution of SDEs.
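The fixed-step baselines named above can be sketched with Euler-Maruyama for geometric Brownian motion. This is only the simplest comparison baseline, not the embedded SRK 1.5(1.0) pair itself; the drift, diffusion, and step counts below are hypothetical.

```python
import math
import random

def euler_maruyama(x0, mu, sigma, T, n, rng=random):
    """Fixed-step Euler-Maruyama for dX = mu*X dt + sigma*X dW
    (geometric Brownian motion), with n steps over [0, T]."""
    h = T / n
    x = x0
    for _ in range(n):
        dW = rng.gauss(0.0, math.sqrt(h))  # Wiener increment ~ N(0, h)
        x += mu * x * h + sigma * x * dW
    return x

# With sigma = 0 the scheme reduces to explicit Euler for dX = mu*X dt,
# whose exact solution at T = 1 is x0 * exp(mu)
x_det = euler_maruyama(1.0, 0.5, 0.0, 1.0, 1000)
```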
PubDate: Jan 2023
- Construction of the Graph of Mathieu Group
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Suzila Mohd Kasim Shaharuddin Cik Soh and Siti Nor Aini Mohd Aslam Suppose that is a group and is a subset of . Then the graph of the group , denoted by , is the simple undirected graph in which two distinct vertices are joined by an edge if and only if both vertices satisfy . The main contribution of this paper is to construct this graph using the elements of the Mathieu group . Additionally, the graph is proven to be connected. Finally, an open problem is highlighted for future research.
PubDate: Jan 2023
- Half-sweep Modified SOR Approximation of A Two-dimensional Nonlinear
Parabolic Partial Differential Equation
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Jackel Vui Lung Chew Jumat Sulaiman Andang Sunarto and Zurina Patrick This numerical analysis concerns the half-sweep modified successive over-relaxation (HSMSOR) approach, which takes the form of an iterative formula. The study numerically solves a class of two-dimensional nonlinear parabolic partial differential equations subject to Dirichlet boundary conditions using an implicit finite difference scheme. Computational cost is optimized by converting the traditional implicit finite difference approximation into a half-sweep finite difference approximation. The implementation requires inner-outer iteration cycles, the second-order Newton method, and a linearization technique. The HSMSOR method is used to approximate the linearized system of equations in the inner iteration cycle, while the problem's numerical solutions are obtained in the outer iteration cycle. The study examines the local truncation error as well as the stability and convergence of the method. Results from three initial-boundary value problems show that the proposed method has competitive computational cost compared to the existing method.
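The underlying full-sweep SOR iteration used for the linearized systems can be sketched as follows. The half-sweep modification, which updates only alternate grid points, is not reproduced in this minimal illustration, and the small test system is hypothetical.

```python
def sor_solve(A, b, omega=1.25, tol=1e-10, max_iter=10000):
    """Successive over-relaxation for Ax = b: each component is relaxed
    between its old value and the Gauss-Seidel update, weighted by omega."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i][i]
            diff = max(diff, abs(new - x[i]))
            x[i] = new
        if diff < tol:  # converged: successive iterates agree
            break
    return x

# Hypothetical symmetric positive definite system with solution (1/11, 7/11)
x = sor_solve([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```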
PubDate: Jan 2023
- On the Performance of Bayesian Generalized Dissimilarity Model Estimator
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Evellin Dewi Lusiana Suci Astutik Nurjannah and Abu Bakar Sambah The Generalized Dissimilarity Model (GDM) is an extension of the Generalized Linear Model (GLM) used to describe and estimate biological pairwise dissimilarities following a binomial process in response to environmental gradients. Improvements have been made to quantify uncertainty in the GDM by applying resampling schemes such as the Bayesian Bootstrap (BBGDM). Because the GDM carries an ecological assumption, it is reasonable to use a proper Bayesian approach rather than a resampling method to obtain better modelling and inference results. Like other GLM techniques, the GDM employs a link function, such as the logit link commonly used for binomial regression models. Using this link, a Bayesian approach to the GDM framework, called the Bayesian GDM (BGDM), can be constructed. In this paper, we evaluate the performance of the BGDM estimators relative to the BBGDM. Our study reveals that the BGDM estimator outperforms the BBGDM, especially in terms of unbiasedness and efficiency; however, the BGDM estimators fail to meet the consistency property. Moreover, an application of the BGDM to a real case study indicates that its inferential abilities are superior to those of the preceding model.
PubDate: Jan 2023
- An Effective Spectral Approach to Solving Fractal Differential Equations
of Variable Order Based on the Non-singular Kernel Derivative
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 M. Basim N. Senu A. Ahmadian Z. B. Ibrahim and S. Salahshour A new class of differential operators has been discovered using fractional and variable-order fractal Atangana-Baleanu derivatives, inspiring the development of a new class of differential equations. Physical phenomena with variable memory and fractal variable dimension can be described using these operators. The primary goal of this study is to use an operational matrix based on shifted Legendre polynomials to obtain numerical solutions for this new class of differential equations, transforming the problem into an algebraic equation system. The method is employed to solve two forms of fractal fractional differential equations: linear and nonlinear. The suggested strategy is contrasted with the mixture of two-step Lagrange polynomials, the predictor-corrector algorithm, and methods based on the fundamental theorem of fractional calculus, using numerical examples to demonstrate its accuracy and simplicity. An error estimate is proposed to compare the results of the suggested methods with the exact solutions. The proposed approach could apply to a wider class of systems, such as mathematical modelling of infectious disease dynamics, and to other important areas of study such as economics, finance, and engineering. We are confident that this paper will open many new avenues of investigation for modelling real-world systems.
PubDate: Jan 2023
- A Formal Solution of Quadruple Series Equations
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 A. K. Awasthi Rachna and Rohit The significance of series equations to pure and applied mathematics cannot be overstated: series play an important role in virtually every subject of mathematics, and series solutions play a major role in the solution of mixed boundary value problems. Dual, triple, and quadruple series equations are useful in solving four-part boundary value problems of electrostatics, elasticity, and other fields of mathematical physics. Cooke devised a method for solving quadruple series equations involving Fourier-Bessel series and obtained the solution using operator theory. Several workers have devoted considerable attention to the solutions of various equations involving, for instance, trigonometric series, Fourier-Bessel series, Fourier-Legendre series, Dini series, series of Jacobi and Laguerre polynomials, and series equations involving Bateman K-functions. Many of these problems arise in the investigation of certain classes of mixed boundary value problems in potential theory, yet there has been less work on quadruple series equations involving various polynomials and functions. In light of the significance of quadruple series solutions, the proposed work examines quadruple series equations that include the product of r generalized Bateman K-functions. The solution is formal, and no attempt has been made to rigorously justify the various limiting processes encountered.
PubDate: Jan 2023
- On the Performance of Full Information Maximum Likelihood in SEM Missing
Data
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Amal HMIMOU M'barek IAOUSSE Soumaia HMIMOU Hanaa HACHIMI and Youssfi EL KETTANI Missing data is a real problem in all fields of statistical modeling, particularly in structural equation modeling, a set of statistical techniques used to estimate models with latent concepts. This research paper investigates the techniques used to handle missing data in structural equation models. To clarify this, the mechanisms of missing data are presented in terms of probability distributions, recognizing three mechanisms: missing completely at random, missing at random, and missing not at random. Ignoring missing data in statistical analysis can mislead estimation and generate biased estimates. Many techniques are used to remedy this problem; here we present three of them, namely listwise deletion, pairwise deletion, and full information maximum likelihood. To investigate the power of each of these methods in structural equation models, a simulation study is conducted, and the correlation between the exogenous latent variables is examined to extend previous studies. We simulated a structural model with three latent variables, each with three observed variables. Three sample sizes (700, 1000, 1500) are examined with three missing rates (2%, 10%, 15%) for two specified mechanisms. In addition, for each sample a hundred further samples were generated and investigated using the same case design. The criterion of examination is the parameter bias calculated for each case design.
The results illustrate, as theoretically expected, the following: (1) the non-convergence of pairwise deletion, (2) a huge loss of information when using listwise deletion, and (3) the relative performance of full information maximum likelihood compared to listwise deletion when using parameter bias as a criterion, particularly for the correlation between the exogenous latent variables. This performance is revealed chiefly for larger sample sizes, where the multivariate normal distribution holds.
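The two deletion strategies contrasted above can be illustrated on a toy dataset; FIML itself requires a full likelihood model and is not sketched here. The data values are hypothetical, with missing entries marked None.

```python
def listwise_delete(rows):
    """Listwise deletion: keep only rows with no missing (None) entries."""
    return [r for r in rows if all(v is not None for v in r)]

def pairwise_means(rows):
    """Pairwise-style per-variable means, using every observed value
    of each variable regardless of missingness elsewhere in the row."""
    cols = list(zip(*rows))
    return [sum(v for v in c if v is not None) / sum(1 for v in c if v is not None)
            for c in cols]

data = [(1.0, 2.0), (3.0, None), (5.0, 6.0)]
complete = listwise_delete(data)   # drops the second row entirely
means = pairwise_means(data)       # retains the observed 3.0 in column one
```

The contrast shows why listwise deletion loses information: the observed value 3.0 is discarded along with its row, while the pairwise computation keeps it.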
PubDate: Jan 2023
- Some Results of Generalized Weighted Norlund-Euler- Statistical
Convergence in Non-Archimedean Fields
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Muthu Meena Lakshmanan E and Suja K Non-Archimedean analysis is the study of fields that satisfy the stronger triangle inequality, also known as the ultrametric property. The theory of summability has many uses throughout analysis and applied mathematics. Summability methods originated with the study of convergent and divergent series by Euler, Gauss, Cauchy, and Abel. There is a good number of special summability methods in classical analysis, such as those of Abel, Borel, Euler, Taylor, Norlund, and Hausdorff. The Norlund, Euler, Taylor, and weighted mean methods in non-Archimedean analysis have been investigated in detail by Natarajan and Srinivasan. Schoenberg developed some basic properties of statistical convergence, studied the concept as a summability method, and introduced the relationship between summability theory and statistical convergence. The concept of weighted statistical convergence and its relation to statistical summability was developed by Karakaya and Chishti. Srinivasan introduced some summability methods, namely the y-method, the Norlund method, and the weighted mean method, in p-adic fields. The main objective of this work is to explore some important results on statistical convergence and related concepts in non-Archimedean fields using summability methods. In this article, Norlund-Euler- statistical convergence and generalized weighted summability using the Norlund-Euler- method in an ultrametric field are defined. The relation between Norlund-Euler- statistical convergence and statistical Norlund-Euler- summability is extended to non-Archimedean fields. The notion of Norlund-Euler- statistical convergence and inclusion results for Norlund-Euler- statistically convergent sequences are characterized. Further, the relation between Norlund-Euler- statistical convergence of orders α and β is established.
PubDate: Jan 2023
- Two New Preconditioned Conjugate Gradient Methods for Minimization
Problems
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Hussein Ageel Khatab and Salah Gazi Shareef Applied to general functions, the conjugate gradient and quasi-Newton methods each have particular advantages and disadvantages. Conjugate gradient (CG) techniques are a class of unconstrained optimization algorithms with strong local and global convergence properties and minimal memory requirements. Quasi-Newton methods are reliable and efficient on a wide range of problems; they converge faster than the conjugate gradient method and require fewer function evaluations, but they have the disadvantage of requiring substantially more storage, and on ill-conditioned problems they may take many iterations. A class combining the two, termed the preconditioned conjugate gradient (PCG) method, has been developed. In this work, two new preconditioned conjugate gradient algorithms, New PCG1 and New PCG2, are proposed to solve nonlinear unconstrained optimization problems. New PCG1 combines the Hestenes-Stiefel (HS) conjugate gradient method with a new self-scaling symmetric rank-one (SR1) update, and New PCG2 combines the HS method with a new self-scaling Davidon-Fletcher-Powell (DFP) update. The algorithms use the strong Wolfe line search condition. Numerical comparisons show that the computational scheme of these new algorithms outperforms standard preconditioned conjugate gradient algorithms.
PubDate: Jan 2023
- A Simple Approach for Explicit Solution of The Neutron Diffusion Kinetic
System
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Hind K. Al-Jeaid This paper introduces a new approach to directly solve a system of two coupled partial differential equations (PDEs), subject to physical conditions, describing the diffusion kinetic problem with one delayed neutron precursor concentration in Cartesian geometry. In the literature, many difficulties arise when dealing with the current model using various numerical/analytical approaches. Normally, mathematicians search for simple but effective methods to solve their physical models. This work introduces a new approach to directly solve the model under investigation: the given PDEs are transformed into a system of linear ordinary differential equations (ODEs), whose solution is obtained by a simple analytical procedure. In addition, the solution of the original system of PDEs is determined in explicit form. The main advantage of the current approach is that it avoids the use of integral transforms, such as the Laplace transform, employed in the literature. It also gives the solution in a direct manner; hence, the massive computational work of other numerical/analytical approaches is avoided, and the proposed method is effective and simpler than those previously published. Moreover, the proposed approach can be further extended and applied to other kinds of diffusion kinetic problems.
PubDate: Jan 2023
- The Locating Chromatic Number for Certain Operation of Origami Graphs
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Asmiati, Agus Irawan, Aang Nuryaman and Kurnia Muludi The locating chromatic number, introduced by Chartrand et al. in 2002, is the marriage of the partition dimension and graph coloring. It depends on the minimum number of colors used in a locating coloring and on the distinct color codes of the vertices of the graph. There is no general algorithm or theorem for determining the locating chromatic number of an arbitrary graph; it must be established separately for each graph class or graph operation. This research develops the theory by studying the extent to which the locating chromatic number of a graph increases under graph operations. The locating chromatic number of the origami graph has already been obtained, so the next interesting question, which this paper addresses, is the locating chromatic number for certain operations of origami graphs. The method used in this study is to determine upper and lower bounds of the locating chromatic number for certain operations of origami graphs. The result obtained is an increase of one color in the locating chromatic number of origami graphs.
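For readers unfamiliar with the underlying definition, a locating coloring can be checked directly from Chartrand et al.'s color codes: the code of a vertex is its tuple of distances to the color classes, and a coloring is locating exactly when all codes are distinct. The sketch below (hypothetical helper names, plain adjacency lists; the origami graphs themselves are not constructed here) illustrates this on a small path graph.

```python
from collections import deque

def color_codes(adj, coloring, k):
    """Color code of v: the k-tuple of distances from v to each color class."""
    def bfs(s):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return dist
    dist = {v: bfs(v) for v in adj}
    return {v: tuple(min(dist[v][u] for u in adj if coloring[u] == c)
                     for c in range(k))
            for v in adj}

def is_locating(adj, coloring, k):
    """A k-coloring is locating iff all vertices receive distinct color codes."""
    codes = color_codes(adj, coloring, k)
    return len(set(codes.values())) == len(codes)

# Usage: the path P4 has locating chromatic number 3; the 3-coloring below works.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_locating(path4, {0: 0, 1: 1, 2: 2, 3: 0}, 3))  # True
```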
PubDate: Jan 2023
- ANOVA Assisted Variable Selection in High-dimensional Multicategory
Response Data
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Demudu Naganaidu and Zarina Mohd Khalid Multinomial logistic regression is preferred in the classification of multicategory response data for its ease of interpretation and its ability to identify the input variables associated with each category. However, identifying important input variables in high-dimensional data poses several challenges, as the majority of variables are unnecessary for discriminating between the categories. Frequently used techniques for identifying important input variables in high-dimensional data include regularisation techniques such as the Least Absolute Shrinkage and Selection Operator (LASSO) and sure independence screening (SIS), or combinations of both. In this paper, we propose using ANOVA to assist SIS in variable screening for high-dimensional data when the response variable is multicategorical. The new approach is straightforward and computationally efficient. Simulated data, with and without correlation, are generated for numerical studies to illustrate the methodology, and the results of applying the methods to real data are presented. In conclusion, ANOVA performance is comparable with SIS in variable selection for uncorrelated input variables, and the combination of ANOVA and SIS performs better for correlated input variables.
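The screening idea can be illustrated with a minimal sketch: score each predictor by its one-way ANOVA F statistic across the response categories and keep the top-ranked columns. This shows only the ANOVA step on assumed simulated data, not the paper's full pipeline combining ANOVA with SIS; the function names are illustrative.

```python
import numpy as np

def anova_f(x, y, classes):
    """One-way ANOVA F statistic of one predictor across the response
    categories: between-class variability over within-class variability."""
    n, k = len(x), len(classes)
    gm = x.mean()
    groups = [x[y == c] for c in classes]
    ssb = sum(len(g) * (g.mean() - gm) ** 2 for g in groups)   # between-class SS
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)     # within-class SS
    return (ssb / (k - 1)) / (ssw / (n - k))

def anova_screen(X, y, keep):
    """Keep the `keep` predictors with the largest F statistics."""
    classes = np.unique(y)
    F = np.array([anova_f(X[:, j], y, classes) for j in range(X.shape[1])])
    return np.argsort(F)[::-1][:keep]

# Usage: 3-class response, 50 predictors, only column 0 is informative
rng = np.random.default_rng(1)
y = np.repeat([0, 1, 2], 60)
X = rng.normal(size=(180, 50))
X[:, 0] += 2.0 * y                     # class-dependent shift in column 0
selected = anova_screen(X, y, keep=5)  # column 0 ranks first
```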
PubDate: Jan 2023
- A New Bivariate Odd Generalized Exponential Gompertz Distribution
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Mervat Mahdy, Eman Fathy and Dina S. Eltelbany The objective of this study is to present a novel bivariate distribution, which we denote the bivariate odd generalized exponential Gompertz (BOGE-G) distribution. It contains several well-known models as special cases, including the Gompertz, generalized exponential, odd generalized exponential, and odd generalized exponential Gompertz distributions. The model introduced here is of Marshall-Olkin type [16]. The marginals of the new bivariate distribution have the odd generalized exponential Gompertz distribution proposed by [7]. Closed forms exist for both the joint probability density function and the joint cumulative distribution function. Properties of this distribution that are discussed include the bivariate moment generating function, marginal moment generating functions, conditional distribution, joint reliability function, marginal hazard rate functions, joint mean waiting time, and joint reversed hazard rate function. The maximum likelihood approach is used to estimate the model parameters. To demonstrate empirically the significance and adaptability of the new model in fitting and evaluating real lifespan data, two sets of real data are studied using the new bivariate distribution. Using the software Mathcad, a simulation study was conducted to evaluate the bias and mean square error (MSE) of the MLEs. We found that the bias and MSE decrease as the sample size increases.
PubDate: Jan 2023
- Even Vertex -Graceful Labeling on Rough
Graph
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 R. Nithya and K. Anitha The study of sets of objects with imprecise knowledge and vague information is known as rough set theory. The diagrammatic representation of this type of information may be handled through graphs for better decision making. Tong He and K. Shi introduced the construction of rough graphs in 2006, followed by the notion of the edge rough graph; they constructed rough graphs through the set approximations called upper and lower approximations. He et al. later developed the concept of the weighted rough graph with weighted attributes. Labelling makes a graph more informative: integers are assigned to the vertices of a graph so that the resulting edge weights are distinct, and the weight of an edge expresses the degree of relationship between its vertices. In this paper we consider rough graphs constructed through rough membership values and envisage a novel type of labeling, called even vertex -graceful labeling, as the weight values for edges. In the case of a rough graph, the weight of an edge identifies the consistent attribute even though the information system is imprecise. We investigate this labeling for some special graphs such as the rough path graph, rough cycle graph, rough comb graph, rough ladder graph and rough star graph. This even vertex -graceful labeling will be useful in the feature extraction process, and it leads naturally to graph mining.
PubDate: Jan 2023
- A New Methodology on Rough Lattice Using Granular Concepts
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 B. Srirekha, Shakeela Sathish and P. Devaki Rough set theory plays a vital role in the mathematical treatment of knowledge representation problems; hence, a rough algebraic structure was defined by Pawlak. Lattice theory has many applications in mathematics and computer science; for instance, the principle of ordered sets has been analyzed in logic programming for crypto-protocols. Iwinski extended the lattice approach to rough set theory, whereas an algebraic structure based on a rough lattice depending on an indiscernibility relation was established by Chakraborty. Granular means piecewise knowledge: a grouping of similar elements. The universe set is partitioned by an indiscernibility relation to form granules. This structure was framed to describe rough set theory and to study the corresponding rough approximation space. The analysis of the reduction of granules from the information table is object-oriented. An ordered pair of distributive lattices emphasizes the congruence class to define its projection; this projection of the distributive lattice is analyzed by a lemma stating that the largest and the smallest elements are trivial ordered sets of an index. A rough approximation space is examined in connection with the upper approximation, and various possibilities are analyzed. The Cartesian product of distributive lattices is investigated, and a lattice homomorphism is examined together with an equivalence relation and its conditions. Hence the approximation space is closed under union and intersection in the upper approximation. The lower approximation on different subsets of the distributive lattice is studied, and generalized lower and upper approximations are established to verify some results and their properties.
PubDate: Jan 2023
- Raise Estimation: An Alternative Approach in The Presence of Problematic
Multicollinearity
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Jinse Jacob and R. Varadharajan When the Ordinary Least Squares (OLS) method is adopted to compute regression coefficients, the results become unreliable when two or more predictor variables are linearly related to one another. The confidence intervals of the estimates become wider as a result of the increased variance of the OLS estimator, which also causes test procedures to potentially generate deceptive results. Additionally, it is difficult to determine the marginal contribution of the associated predictors, since the estimates depend on the other predictor variables included in the model. Ridge Regression (RR) is a popular alternative in this scenario; however, adopting it impairs the standard approach to statistical testing. The Raise Method (RM) is a technique developed to combat multicollinearity while maintaining statistical inference. In this work, we offer a novel approach for determining the raise parameter, because the traditional one is a function of the actual coefficients, which limits the use of the Raise Method in real-world circumstances. Using simulations, the suggested method was compared with Ordinary Least Squares and Ridge Regression in terms of predictive capacity, coefficient stability, and the probability of obtaining unacceptable coefficients at different levels of sample size, linear dependence, and residual variance. According to the findings, the proposed technique turns out to be quite effective. Finally, a practical application is discussed.
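For context, the raise transformation itself replaces a collinear column x_j by x_j + λe_j, where e_j is the residual from regressing x_j on the remaining columns; inflating the orthogonal component reduces collinearity without discarding the column. The sketch below illustrates this standard transformation with a user-supplied λ; the paper's contribution, a new rule for choosing the raise parameter, is not reproduced.

```python
import numpy as np

def raise_column(X, j, lam):
    """'Raise' column j: x_j -> x_j + lam * e_j, where e_j is the residual
    of regressing x_j on the remaining columns. The fitted part of x_j is
    kept, so inference is preserved, while the orthogonal component is
    inflated, reducing collinearity. lam is user-supplied here."""
    X_rest = np.delete(X, j, axis=1)
    beta, *_ = np.linalg.lstsq(X_rest, X[:, j], rcond=None)
    e = X[:, j] - X_rest @ beta        # component orthogonal to the other columns
    X_raised = X.copy()
    X_raised[:, j] = X[:, j] + lam * e
    return X_raised

# Usage: two nearly collinear predictors; raising lowers the condition number
rng = np.random.default_rng(2)
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1 + 0.05 * rng.normal(size=200)])
X_raised = raise_column(X, j=1, lam=10.0)
```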
PubDate: Jan 2023
- Developing Average Run Length for Monitoring Changes in the Mean on the
Presence of Long Memory under Seasonal Fractionally Integrated MAX Model
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Wilasinee Peerajit The cumulative sum (CUSUM) control chart can sensitively detect small-to-moderate shifts in the process mean. The average run length (ARL) is a popular technique used to determine the performance of a control chart. Recently, several researchers investigated the performance of processes on a CUSUM control chart by evaluating the ARL using either Monte Carlo simulation or Markov chain. As these methods only yield approximate results, we developed solutions for the exact ARL by using explicit formulas based on an integral equation (IE) for studying the performance of a CUSUM control chart running a long-memory process with exponential white noise. The long-memory process observations are derived from a seasonal fractionally integrated MAX model while focusing on X. The existence and uniqueness of the solution for calculating the ARL via explicit formulas were proved by using Banach's fixed-point theorem. The accuracy percentage of the explicit formulas against the approximate ARL obtained via the numerical IE method was greater than 99%, which indicates excellent agreement between the two methods. An important conclusion of this study is that the proposed solution for the ARL using explicit formulas could sensitively detect changes in the process mean on a CUSUM control chart in this situation. Finally, an illustrative case study is provided to show the efficacy of the proposed explicit formulas with processes involving real data.
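The quantity being approximated can be illustrated with a generic Monte Carlo sketch of the ARL for an upper one-sided CUSUM chart with i.i.d. exponential observations. This is the baseline simulation approach the paper improves upon, not the paper's method: the seasonal fractionally integrated MAX observations and the exact integral-equation formulas are not reproduced, and the reference value k and decision limit h below are arbitrary illustrative choices.

```python
import numpy as np

def cusum_arl_mc(mean, k, h, reps=1000, cap=100000, seed=0):
    """Monte Carlo estimate of the average run length (ARL) of the upper
    one-sided CUSUM statistic C_t = max(0, C_{t-1} + X_t - k), which
    signals when C_t > h, for i.i.d. exponential observations with the
    given mean. A generic sketch with illustrative parameters."""
    rng = np.random.default_rng(seed)
    run_lengths = []
    for _ in range(reps):
        c, t = 0.0, 0
        while c <= h and t < cap:
            t += 1
            c = max(0.0, c + rng.exponential(mean) - k)
        run_lengths.append(t)
    return float(np.mean(run_lengths))

# In-control ARL should be much longer than the ARL after an upward mean shift
arl_in = cusum_arl_mc(mean=1.0, k=1.5, h=4.0)
arl_out = cusum_arl_mc(mean=2.0, k=1.5, h=4.0)
```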
PubDate: Jan 2023
- Multiplication and Inverse Operations in Parametric Form of Triangular
Fuzzy Number
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Mashadi, Yuliana Safitri and Sukono Many authors have given the arithmetic of triangular fuzzy numbers, especially for addition and subtraction, with little difference between them; the differences occur for the multiplication, division, and inverse operations. Several authors define the inverse of a triangular fuzzy number in parametric form. However, this does not always yield , because the inverse that produces the unique identity cannot be determined uniquely. We are therefore unable to directly determine the inverse of an arbitrary matrix of triangular fuzzy numbers, so problems involving such matrices cannot be solved directly by determining . In addition, various authors have tried, with various methods, to determine but still do not produce . Consequently, the solution of a fully fuzzy linear system can be incompatible, which is why different authors obtain different solutions for the same fully fuzzy linear system. This paper promotes an alternative method for determining the inverse of a triangular fuzzy number in parametric form. It begins with the construction of a midpoint for any triangular fuzzy number , or in parametric form . Then the multiplication is constructed so as to obtain a unique inverse which produces . The multiplication, division, and inverse operations are proven to satisfy various algebraic properties. Therefore, if a triangular fuzzy number, or a matrix of triangular fuzzy numbers, is used, the method can easily be applied directly to produce a unique inverse. At the end of this paper, we give examples of calculating the inverse of a parametric triangular fuzzy number for various cases. It is expected that the reader can easily develop it for the case of a fuzzy matrix of triangular fuzzy numbers.
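The difficulty the abstract describes is easy to see in code: under the standard α-cut (interval) arithmetic, a triangular fuzzy number multiplied by its conventional inverse does not give the crisp identity 1. The sketch below uses the usual parametric-form definitions only; the paper's midpoint-based construction that repairs this is not reproduced.

```python
def tfn_cut(a, b, c, r):
    """Parametric (alpha-cut) form of the triangular fuzzy number (a, b, c):
    the interval [a + (b-a)r, c - (c-b)r] for r in [0, 1]."""
    return a + (b - a) * r, c - (c - b) * r

def interval_mul(p, q):
    """Standard interval multiplication of two alpha-cuts."""
    prods = [p[0] * q[0], p[0] * q[1], p[1] * q[0], p[1] * q[1]]
    return min(prods), max(prods)

def interval_inv(p):
    """Standard interval inverse of a positive alpha-cut."""
    return 1.0 / p[1], 1.0 / p[0]

# For A = (2, 3, 4): at r = 0 the cut is [2, 4], its 'inverse' is [1/4, 1/2],
# and their product is [0.5, 2.0] rather than the crisp identity [1, 1].
A0 = tfn_cut(2, 3, 4, 0.0)
prod = interval_mul(A0, interval_inv(A0))  # (0.5, 2.0)
```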
PubDate: Jan 2023
- Inclusion Results of a Generalized Mittag-Leffler-Type Poisson
Distribution in the k-Uniformly Janowski Starlike and the k-Janowski
Convex Functions
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Jamal Salah, Hameed Ur Rehman and Iman Al Buwaiqi Due to the Mittag-Leffler function's crucial contribution to solving fractional integral and differential equations, academics have begun to pay more attention to this function. The Mittag-Leffler function naturally appears in the solutions of fractional-order differential and integral equations, particularly in studies of the fractional generalization of kinetic equations, random walks, Levy flights, super-diffusive transport, and complex systems. For example, certain properties of the Mittag-Leffler functions and generalized Mittag-Leffler functions can be found in [4,5]. We consider an additional generalization in this study, , given by Prabhakar [6,7]. We normalize the latter to deduce , in order to explore the inclusion results in well-known classes of analytic functions, namely and , the k-uniformly Janowski starlike and k-Janowski convex functions, respectively. Recently, research on the theory of univalent functions has emphasized the crucial role of implementing distributions of random variables such as the negative binomial distribution, the geometric distribution, and the hypergeometric distribution; in this study, the focus is on the Poisson distribution associated with the convolution (Hadamard product), which is applied to define and explore the inclusion results of the following: and the integral operator . Furthermore, some results for special cases will also be investigated.
PubDate: Jan 2023
- Linear Stability of Double-sided Symmetric Thin Liquid Film by
Integral-theory
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Ibrahim S. Hamad The integral theory approach is used to explore the stability and dynamics of a free double-sided symmetric thin liquid film. For a Newtonian liquid with constant density and viscosity, the flow in a thinning liquid layer is analyzed in two dimensions. To construct the equations that govern such a flow, the Navier-Stokes equations are utilized, in dimensionless variables, with the appropriate boundary conditions of zero shear stress and zero normal stress on the bounding free surfaces. The resulting equations form a non-linear evolution system for the layer thickness, the local flow rate, and the unknown functions; they are solved by linear stability analysis, and the normal-mode method is applied to these equations to reveal the critical condition. The characteristic equation for the growth rate and wave number is analyzed using MATLAB to show the regions of stable and unstable films. As a result of our research, we demonstrate that the free double-sided thin liquid layer is unstable.
PubDate: Jan 2023
- Development of Nonparametric Structural Equation Modeling on Simulation
Data Using Exponential Functions
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Tamara Rezti Syafriana, Solimun, Ni Wayan Surya Wardhani, Atiek Iriany and Adji Achmad Rinaldo Fernandes Objective: This study aims to develop nonparametric SEM analysis on simulation data using the exponential function. Methodology: This study uses simulation data, that is, data generated as an experimental approach that imitates the behavior of a system using a computer with appropriate software, and applies nonparametric structural equation modeling (SEM) analysis with the exponential function. Results: The results show that, with the simulation data, all relationships, which have both formative and reflective indicators, have a significant effect on each other. Testing the direct effect of Y2 on Y3 produces a structural coefficient value of 0.255 with a p-value
PubDate: Jan 2023