Mathematics and Statistics
Open Access journal. ISSN (Print) 2332-2071; ISSN (Online) 2332-2144. Published by Horizon Research Publishing.
- A Facet Defining of the Dicycle Polytope
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Mamane Souleye Ibrahim and Oumarou Abdou Arbi In this paper, we consider the polytope of all elementary dicycles of a digraph. The dicycle problem, in graph theory and combinatorial optimization, has been extensively studied in the literature through polyhedral approaches; cutting plane and branch-and-cut algorithms are therefore unavoidable for solving such a combinatorial optimization problem exactly. For this purpose, we introduce a new family of valid inequalities, called alternating 3-arc path inequalities, for the polytope of elementary dicycles. These inequalities can be used in cutting plane and branch-and-cut algorithms to construct strengthened relaxations of a linear formulation of the dicycle problem. To prove that alternating 3-arc path inequalities are facet defining, we resort to constructive algorithms, in contrast to the usual approach, which essentially consists of determining the affine subspace of a linear description of the considered polytope. Given the set of arcs of the digraph, the algorithms devised here build on the fact that, starting from a first elementary dicycle, all other dicycles can be generated iteratively by replacing some arcs of previously generated dicycles with others, so that the current elementary dicycle contains an arc that does not belong to any previously generated dicycle. These algorithms generate dicycles whose incidence vectors are affinely independent and satisfy alternating 3-arc path inequalities with equality. It can easily be verified that all the devised algorithms are polynomial in time complexity.
PubDate: Jan 2023
- Brachistochrone Curve Representation via Transition Curve
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Rabiatul Adawiah Fadzar and Md Yushalify Misro The Brachistochrone curve is the optimal curve giving the fastest descent path of an object sliding frictionlessly under a uniform gravitational field. In this paper, the Brachistochrone curve is reconstructed using two different basis functions, namely the Bézier curve and the trigonometric Bézier curve with shape parameters. The Brachistochrone curve between two points is approximated via a C-shape transition curve. The travel time and curvature are evaluated and compared for each curve. This research reveals that the trigonometric Bézier curve provides the closest approximation of the Brachistochrone curve in terms of travel time estimation, and that the shape parameters in the trigonometric Bézier curve provide better shape adjustability than the Bézier curve.
PubDate: Jan 2023
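The travel-time comparison described in the abstract can be illustrated with a short sketch (this is the classical cycloid result, not the authors' Bézier construction): for a frictionless bead, the descent time along the cycloid x = R(θ − sin θ), y = R(1 − cos θ) is T = θ_f·√(R/g), while a straight chute admits a closed-form time from uniform acceleration. The endpoint and parameter values below are illustrative.

```python
import math

g = 9.81  # gravitational acceleration (m/s^2)

def cycloid_time(R, theta_f):
    """Descent time along the cycloid x=R(t-sin t), y=R(1-cos t): T = theta_f*sqrt(R/g)."""
    return theta_f * math.sqrt(R / g)

def straight_line_time(x1, y1):
    """Descent time along a straight chute from (0,0) to (x1,y1), y measured downward."""
    L = math.hypot(x1, y1)
    return L * math.sqrt(2.0 / (g * y1))

# Endpoint reached by one half-arch of a unit cycloid (theta_f = pi)
R, theta_f = 1.0, math.pi
x1, y1 = R * (theta_f - math.sin(theta_f)), R * (1 - math.cos(theta_f))

T_cyc = cycloid_time(R, theta_f)
T_lin = straight_line_time(x1, y1)
# The cycloid is strictly faster than the straight chute between the same endpoints
```

Any approximating curve (Bézier or trigonometric Bézier) can be scored the same way by numerically integrating ds/√(2gy) along it and comparing against T_cyc.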
- A Note on External Direct Products of BP-algebras
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Chatsuda Chanmanee Rukchart Prasertpong Pongpun Julatha U. V. Kalyani T. Eswarlal and Aiyared Iampan The notion of BP-algebras, which is related to several classes of algebras, was introduced by Ahn and Han [2] in 2013 and has since been examined by several researchers. The concept of the direct product (DP) [21] was initially developed for groups, where some of its properties were established, and was subsequently extended to other algebraic structures. Lingcong and Endam [16] examined the DP of (0-commutative) B-algebras and B-homomorphisms in 2016 and established several related properties, one of which is that the DP of two B-algebras is a B-algebra. Later on, the concept of the DP of B-algebras was expanded to a finite family of B-algebras, and some of the connected issues were researched. In this work, the external direct product (EDP), a generalization of the DP, is established, and the results of the EDP for certain subsets of BP-algebras are determined. In addition, we define the weak direct product (WDP) of BP-algebras. In light of the EDP of BP-algebras, we conclude by presenting several essential theorems on (anti-)BP-homomorphisms.
PubDate: Jan 2023
- New Results on Face Magic Mean Labeling of Graphs
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 S. Vani Shree and S. Dhanalakshmi In the mid-1960s, a theory by Kotzig and Ringel and a study by Rosa sparked curiosity in graph labeling. Our primary objective is to examine some types of graphs which admit Face Magic Mean Labeling (FMML). A bijection is called a (1,0,0) F-face magic mean labeling of a graph if the induced face labeling satisfies the corresponding magic mean condition; a (1,1,0) F-FMML is defined analogously. In this paper, the (1,0,0) F-FMML of Ladder graphs, the Tortoise graph, and the Middle graph of a path graph is investigated. The (1,0,0) and (1,1,0) F-FMML are also verified for the Ortho Chain Square Cactus graph, the Para Chain Square Cactus graph, and some snake-related graphs such as Triangular snake graphs and Quadrilateral snake graphs. Labeled graphs serve as valuable mathematical models for a wide range of applications, including the creation of good codes, synch-set codes, missile guidance codes, and convolutional codes with optimal autocorrelation characteristics. They aid in developing the most efficient non-standard integer encodings; labeled graphs have also been used to identify ambiguities in the access protocols of communication networks, in database management, to identify the best circuit layouts, etc.
PubDate: Jan 2023
- A New Quasi-Newton Method with PCG Method for Nonlinear Optimization
Problems
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Bayda Ghanim Fathi and Alaa Luqman Ibrahim The major stationary iterative method used to solve nonlinear optimization problems is the quasi-Newton (QN) method. Symmetric Rank-One (SR1) is a method in the quasi-Newton family. This algorithm converges fast towards the true Hessian and has computational advantages for sparse or partially separable problems [1]. Thus, investigating the efficiency of the SR1 algorithm is significant. However, the matrix generated by the SR1 update is not guaranteed to remain positive definite, and the denominator of the update may vanish or become zero. To overcome these drawbacks of the SR1 method, and thereby obtain better performance than the standard SR1 method, we derive in this work a new vector depending on the Barzilai-Borwein step size to obtain a new SR1 method. This updating formula is then combined with the preconditioned conjugate gradient (PCG) method. With the aid of an inexact line search procedure based on the strong Wolfe conditions, the new SR1 method is proposed and its performance is evaluated in comparison to the conventional SR1 method. It is proven that the updated matrix of the new SR1 method is symmetric and positive definite, provided the initial matrix is the identity. In this study, the proposed method solved 13 problems effectively in terms of the number of iterations (NI) and the number of function evaluations (NF). Regarding NF, the new SR1 method also outperformed the classic SR1 method. The proposed method is shown to be more efficient than the original method in solving relatively large-scale problems (5,000 variables). The numerical results show that the proposed method is significantly faster, effective, and suitable for solving large-dimensional nonlinear problems.
PubDate: Jan 2023
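The classical SR1 update that this abstract builds on, together with the standard safeguard for a vanishing denominator, can be sketched as follows; the authors' Barzilai-Borwein modification is not reproduced here. On a quadratic objective with exact curvature pairs, SR1 recovers the true Hessian after n independent steps, which the example checks.

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """Classical SR1 update: B+ = B + (y - B s)(y - B s)^T / ((y - B s)^T s).
    The update is skipped when the denominator is too small (standard safeguard),
    since it may vanish and the updated matrix need not stay positive definite."""
    v = y - B @ s
    denom = v @ s
    if abs(denom) <= r * np.linalg.norm(s) * np.linalg.norm(v):
        return B  # skip: denominator may vanish
    return B + np.outer(v, v) / denom

# On a quadratic f(x) = 0.5 x^T A x, the curvature pair satisfies y = A s,
# so after two independent steps in R^2 the approximation B equals A exactly.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.eye(2)
for s in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    y = A @ s
    B = sr1_update(B, s, y)
```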
- Adaptive Step Size Stochastic Runge-Kutta Method of Order 1.5(1.0) for
Stochastic Differential Equations (SDEs)
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Noor Julailah Abd Mutalib Norhayati Rosli and Noor Amalina Nisa Ariffin Stiff stochastic differential equations (SDEs) involve solutions with sharp turning points that require a very small step size to capture their behavior. Since the step size must be as small as possible, implementing a fixed step size method results in high computational cost. Therefore, a variable step size method is needed, in which the step size used is more flexible. This paper is devoted to the development of an embedded stochastic Runge-Kutta (SRK) pair method for SDEs. The proposed method is an adaptive step size SRK method, constructed by embedding an SRK method of order 1.0 into an SRK method of order 1.5. The embedding technique is suitable for adaptive step size implementation, since an error estimate can be obtained at each step. Numerical experiments are performed to demonstrate the efficiency of the method. The results show that the adaptive step size SRK method of order 1.5(1.0) gives the smallest global error compared with the fixed step size SRK4, Euler, and Milstein methods. Hence, this method is reliable for approximating the solution of SDEs.
PubDate: Jan 2023
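The embedding idea (a lower-order solution reused to estimate the local error and adapt the step) can be illustrated with a deterministic Heun(2)/Euler(1) pair; the paper's stochastic 1.5(1.0) pair follows the same accept/reject logic with stochastic increments. The tolerance and safety factors below are illustrative choices.

```python
import math

def heun_euler_step(f, t, y, h):
    """One embedded step: Heun (order 2) solution plus Euler (order 1) for the error estimate."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_high = y + 0.5 * h * (k1 + k2)   # order-2 solution, used to advance
    y_low = y + h * k1                 # order-1 embedded solution
    return y_high, abs(y_high - y_low) # difference estimates the local error

def integrate(f, t0, y0, t_end, tol=1e-6, h=0.1):
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        y_new, err = heun_euler_step(f, t, y, h)
        if err <= tol:                 # accept the step
            t, y = t + h, y_new
        # adapt: err ~ C*h^2, so rescale by (tol/err)^(1/2) with a safety factor,
        # clamped so the step never changes too abruptly
        h *= min(5.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y

# dy/dt = -y with y(0) = 1, so y(1) = e^{-1}
y1 = integrate(lambda t, y: -y, 0.0, 1.0, 1.0)
```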
- Construction of the Graph of Mathieu Group
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Suzila Mohd Kasim Shaharuddin Cik Soh and Siti Nor Aini Mohd Aslam Suppose that is a group and is a subset of . Then the graph of a group , denoted by , is the simple undirected graph in which two distinct vertices are connected by an edge if and only if both vertices satisfy . The main contribution of this paper is to construct this graph using the elements of the Mathieu group . Additionally, the graph is proven to be connected. Finally, an open problem is highlighted for future research.
PubDate: Jan 2023
- Half-sweep Modified SOR Approximation of A Two-dimensional Nonlinear
Parabolic Partial Differential Equation
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Jackel Vui Lung Chew Jumat Sulaiman Andang Sunarto and Zurina Patrick The sole subject of this numerical analysis is the half-sweep modified successive over-relaxation (HSMSOR) method, which takes the form of an iterative formula. This study numerically solves a class of two-dimensional nonlinear parabolic partial differential equations subject to Dirichlet boundary conditions using an implicit finite difference scheme. Computational cost is optimized by converting the standard implicit finite difference approximation into a half-sweep finite difference approximation. The implementation requires inner-outer iteration cycles, the second-order Newton method, and a linearization technique. The developed HSMSOR method is used to approximate the linearized system of equations through the inner iteration cycle, while the problem's numerical solutions are obtained through the outer iteration cycle. The study examines the local truncation error along with the stability and convergence of the method. Results from three initial-boundary value problems show that the proposed method has competitive computational costs compared with the existing method.
PubDate: Jan 2023
- On the Performance of Bayesian Generalized Dissimilarity Model Estimator
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Evellin Dewi Lusiana Suci Astutik Nurjannah and Abu Bakar Sambah The Generalized Dissimilarity Model (GDM) is an extension of the Generalized Linear Model (GLM) used to describe and estimate biological pairwise dissimilarities, which follow a binomial process, in response to environmental gradients. Improvements have been made to quantify the uncertainty of the GDM by applying resampling schemes such as the Bayesian Bootstrap (BBGDM). Because the GDM embeds an ecological assumption, it is reasonable to use a proper Bayesian approach rather than a resampling method to obtain better modelling and inference results. Like other GLM techniques, the GDM employs a link function, such as the logit link commonly used in binomial regression. Using this link, a Bayesian approach to the GDM framework, called Bayesian GDM (BGDM), can be constructed. In this paper, we evaluate the performance of the BGDM estimators relative to BBGDM. Our study reveals that the BGDM estimator outperforms that of BBGDM, especially in terms of unbiasedness and efficiency. However, the BGDM estimators fail to meet the consistency property. Moreover, the application of the BGDM to a real case study indicates that its inferential abilities are superior to those of the preceding model.
PubDate: Jan 2023
- An Effective Spectral Approach to Solving Fractal Differential Equations
of Variable Order Based on the Non-singular Kernel Derivative
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 M. Basim N. Senu A. Ahmadian Z. B. Ibrahim and S. Salahshour A new class of differential operators has been discovered utilising fractional and variable-order fractal Atangana-Baleanu derivatives, which has inspired the development of a new class of differential equations. Physical phenomena with variable memory and fractal variable dimension can be described using these operators. The primary goal of this study is to use an operational matrix based on shifted Legendre polynomials to obtain numerical solutions for this new class of differential equations; the approach transforms the problem into a system of algebraic equations. The method is employed to solve two forms of fractal fractional differential equations: linear and nonlinear. The suggested strategy is compared with the mixture of two-step Lagrange polynomials, the predictor-corrector algorithm, and the fundamental theorem of fractional calculus, using numerical examples to demonstrate its accuracy and simplicity. The estimation error is used to compare the results of the suggested methods with the exact solutions of the problems. The proposed approach could apply to a wider class of biological systems, such as mathematical modelling of infectious disease dynamics, and to other important areas of study, such as economics, finance, and engineering. We are confident that this paper will open many new avenues of investigation for modelling real-world systems.
PubDate: Jan 2023
- A Formal Solution of Quadruple Series Equations
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 A. K. Awasthi Rachna and Rohit The significance of series equations for pure and applied mathematics cannot be overstated: series play an important role in virtually every branch of mathematics, and series solutions play a major role in solving mixed boundary value problems. Dual, triple, and quadruple series equations are useful for solving four-part boundary value problems in electrostatics, elasticity, and other fields of mathematical physics. Cooke devised a method for solving quadruple series equations involving Fourier-Bessel series and obtained the solution using operator theory. Several authors have devoted considerable attention to the solutions of various equations involving, for instance, trigonometric series, Fourier-Bessel series, Fourier-Legendre series, Dini series, series of Jacobi and Laguerre polynomials, and series equations involving Bateman K-functions. Many of these problems arise in the investigation of certain classes of mixed boundary value problems in potential theory. There has been less work on quadruple series equations involving other polynomials and functions. In light of the significance of quadruple series solutions, the proposed work examines quadruple series equations that involve the product of r generalised Bateman K-functions. The solution is formal, and no attempt has been made to justify the many limiting processes encountered.
PubDate: Jan 2023
- On the Performance of Full Information Maximum Likelihood in SEM Missing
Data
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Amal HMIMOU M'barek IAOUSSE Soumaia HMIMOU Hanaa HACHIMI and Youssfi EL KETTANI Missing data is a real problem in all fields of statistical modeling, particularly in structural equation modeling, a set of statistical techniques used to estimate models with latent concepts. This paper investigates the techniques used to handle missing data in structural equation models. The mechanisms of missing data are first presented based on their probability distributions; three mechanisms are recognized: missing completely at random, missing at random, and missing not at random. Ignoring missing data in a statistical analysis may mislead the estimation and generate biased estimates. Many techniques are used to remedy this problem; we present three of them, namely listwise deletion, pairwise deletion, and full information maximum likelihood. To investigate the power of each of these methods in structural equation models, a simulation study is conducted. Furthermore, the correlation between the exogenous latent variables is examined to extend previous studies. We simulated a structural model with three latent variables, each with three observed variables. Three sample sizes (700, 1000, 1500) are examined against three missing rates (2%, 10%, 15%) for two specified mechanisms. In addition, for each case design, a hundred samples were generated and investigated. The criterion of examination is the parameter bias calculated for each case design.
The results illustrate, as theoretically expected, the following: (1) the non-convergence of pairwise deletion; (2) a huge loss of information when using listwise deletion; and (3) the relative superiority of full information maximum likelihood over listwise deletion when using parameter bias as a criterion, particularly for the correlation between the exogenous latent variables. This performance is revealed chiefly for larger sample sizes, where multivariate normality holds.
PubDate: Jan 2023
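The difference between the missing-data mechanisms mentioned above can be illustrated with a minimal sketch using simple means rather than a structural equation model: listwise deletion remains unbiased under MCAR but biases estimates under MAR. The missingness models below are assumptions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000
x = rng.normal(0.0, 1.0, n)
y = 0.8 * x + rng.normal(0.0, 0.6, n)   # true mean of y is 0

# MCAR: deletion probability is independent of the data,
# so the mean over the retained cases stays unbiased
mcar_keep = rng.random(n) > 0.3

# MAR: y is missing more often when x is large (P(missing) = sigmoid(x)),
# so the mean of y over the retained cases is pulled downward
mar_keep = rng.random(n) > 1.0 / (1.0 + np.exp(-x))

mean_mcar = y[mcar_keep].mean()
mean_mar = y[mar_keep].mean()
```

FIML-style estimators avoid this bias under MAR by using all observed values instead of deleting incomplete cases.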
- Some Results of Generalized Weighted Norlund-Euler- Statistical
Convergence in Non-Archimedean Fields
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Muthu Meena Lakshmanan E and Suja K Non-Archimedean analysis is the study of fields that satisfy the stronger triangular inequality, also known as the ultrametric property. The theory of summability has many uses throughout analysis and applied mathematics. Summability methods originated in the study of convergent and divergent series by Euler, Gauss, Cauchy and Abel. There is a good number of special methods of summability in classical analysis, such as those of Abel, Borel, Euler, Taylor, Norlund and Hausdorff. The Norlund, Euler, Taylor and weighted mean methods in Non-Archimedean analysis have been investigated in detail by Natarajan and Srinivasan. Schoenberg developed some basic properties of statistical convergence, studied the concept as a summability method, and introduced the relationship between summability theory and statistical convergence. The concept of weighted statistical convergence and its relations to statistical summability were developed by Karakaya and Chishti. Srinivasan introduced some summability methods, namely the y-method, the Norlund method and the weighted mean method, in p-adic fields. The main objective of this work is to explore some important results on statistical convergence and related concepts in Non-Archimedean fields using summability methods. In this article, Norlund-Euler- statistical convergence and generalized weighted summability using the Norlund-Euler- method in an ultrametric field are defined. The relation between Norlund-Euler- statistical convergence and statistical Norlund-Euler- summability is extended to non-Archimedean fields. The notion of Norlund-Euler- statistical convergence and inclusion results for Norlund-Euler statistical convergent sequences are characterized. Further, the relation between Norlund-Euler- statistical convergence of orders α and β is established.
PubDate: Jan 2023
- Two New Preconditioned Conjugate Gradient Methods for Minimization
Problems
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Hussein Ageel Khatab and Salah Gazi Shareef When applied to general functions, the conjugate gradient and quasi-Newton methods each have particular advantages and disadvantages. Conjugate gradient (CG) techniques are a class of unconstrained optimization algorithms with strong local and global convergence properties and minimal memory needs. Quasi-Newton methods are reliable and efficient on a wide range of problems; they converge faster than the conjugate gradient method and require fewer function evaluations, but they have the disadvantage of requiring substantially more storage, and on ill-conditioned problems they may take many iterations. A class has been developed that combines the two, termed the preconditioned conjugate gradient (PCG) method. In this work, two new preconditioned conjugate gradient algorithms, New PCG1 and New PCG2, are proposed to solve nonlinear unconstrained optimization problems. New PCG1 combines the Hestenes-Stiefel (HS) conjugate gradient method with a new self-scaling Symmetric Rank-One (SR1) update, and New PCG2 combines the Hestenes-Stiefel (HS) method with a new self-scaling Davidon-Fletcher-Powell (DFP) update. Both algorithms use the strong Wolfe line search condition. Numerical comparisons show that the computational scheme of these new algorithms outperforms the standard preconditioned conjugate gradient algorithms.
PubDate: Jan 2023
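The preconditioned conjugate gradient framework that both new algorithms build on can be sketched in its textbook form for a symmetric positive definite linear system; the paper's HS/SR1 and HS/DFP combinations for nonlinear problems are not reproduced here, and the Jacobi preconditioner below is an illustrative choice.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Textbook preconditioned conjugate gradient for an SPD matrix A.
    M_inv is a callable applying the preconditioner inverse to a vector."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    z = M_inv(r)           # preconditioned residual
    p = z.copy()           # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test system with a Jacobi (diagonal) preconditioner
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
d = np.diag(A)
x = pcg(A, b, lambda v: v / d)
```

In a quasi-Newton/CG hybrid, the role of M_inv is played by the quasi-Newton approximation of the inverse Hessian, updated at every iteration.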
- A Simple Approach for Explicit Solution of The Neutron Diffusion Kinetic
System
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Hind K. Al-Jeaid This paper introduces a new approach to directly solve a system of two coupled partial differential equations (PDEs) subject to physical conditions describing the diffusion kinetic problem with one delayed neutron precursor concentration in Cartesian geometry. In the literature, many difficulties arise when dealing with the current model using various numerical/analytical approaches. Normally, mathematicians search for simple but effective methods to solve their physical models; this work introduces a new approach to directly solve the model under investigation. The present approach transforms the given PDEs into a system of linear ordinary differential equations (ODEs). The solution of this system of ODEs is obtained by a simple analytical procedure, and the solution of the original system of PDEs is determined in explicit form. The main advantage of the current approach is that it avoids the use of transformations such as the Laplace transform employed in the literature. It also gives the solution in a direct manner; hence, the massive computational work of other numerical/analytical approaches is avoided, and the proposed method is effective and simpler than those previously published. Moreover, the proposed approach can be further extended and applied to other kinds of diffusion kinetic problems.
PubDate: Jan 2023
- The Locating Chromatic Number for Certain Operation of Origami Graphs
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Asmiati Agus Irawan Aang Nuryaman and Kurnia Muludi The locating chromatic number, introduced by Chartrand et al. in 2002, is the marriage of the partition dimension and graph coloring. It depends on the minimum number of colors used in a locating coloring and on the distinct color codes of the vertices of the graph. There is no general algorithm or theorem to determine the locating chromatic number of an arbitrary graph; it must be determined separately for each graph class or resulting graph operation. This research develops new ideas to determine the extent to which the locating chromatic number of a graph increases when certain operations are applied. The locating chromatic number of the origami graph has previously been obtained; the next interesting question, addressed in this paper, is the locating chromatic number for certain operations of origami graphs. The method used in this study is to determine upper and lower bounds of the locating chromatic number for these operations. The result obtained is an increase of one color in the locating chromatic number of origami graphs.
PubDate: Jan 2023
- ANOVA Assisted Variable Selection in High-dimensional Multicategory
Response Data
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Demudu Naganaidu and Zarina Mohd Khalid Multinomial logistic regression is preferred in the classification of multicategory response data for its ease of interpretation and its ability to identify the input variables associated with each category. However, identifying important input variables in high-dimensional data poses several challenges, as the majority of variables are unnecessary for discriminating the categories. Frequently used techniques for identifying important input variables in high-dimensional data include regularisation techniques such as the Least Absolute Shrinkage and Selection Operator (LASSO) and sure independence screening (SIS), or combinations of both. In this paper, we propose using ANOVA to assist SIS in variable screening for high-dimensional data when the response variable is multicategorical. The new approach is straightforward and computationally effective. Simulated data with and without correlation are generated for numerical studies to illustrate the methodology, and the results of applying the methods to real data are presented. In conclusion, ANOVA's performance is comparable with SIS in variable selection for uncorrelated input variables, and the combination of ANOVA and SIS performs better for correlated input variables.
PubDate: Jan 2023
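The ANOVA screening step can be sketched as follows: for each input variable, a one-way ANOVA F-statistic across the response categories measures how well that variable separates the categories, and variables with the largest F are retained. The simulated variables below are illustrative, not the paper's simulation design.

```python
import numpy as np

def anova_f(x, labels):
    """One-way ANOVA F-statistic of a single variable x across response categories."""
    groups = [x[labels == g] for g in np.unique(labels)]
    k, n = len(groups), len(x)
    grand = x.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(0)
labels = np.repeat([0, 1, 2], 100)            # three response categories
informative = rng.normal(labels, 1.0)         # its mean shifts with the category
noise = rng.normal(0.0, 1.0, size=300)        # unrelated to the category

# Screening keeps the variables with the largest F-statistics
f_inf, f_noise = anova_f(informative, labels), anova_f(noise, labels)
```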
- Even Vertex -Graceful Labeling on Rough
Graph
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 R. Nithya and K. Anitha The study of sets of objects with imprecise knowledge and vague information is known as rough set theory. The diagrammatic representation of this type of information may be handled through graphs for better decision making. Tong He and K. Shi introduced the construction of rough graphs in 2006, followed by the notion of the edge rough graph; they constructed rough graphs through set approximations called upper and lower approximations. He et al. developed the concept of the weighted rough graph with weighted attributes. Labelling is the process of making a graph more meaningful: integers are assigned to the vertices of a graph so that distinct weights are obtained for the edges, and the weight of an edge reflects the degree of relationship between its vertices. In this paper we consider rough graphs constructed through rough membership values and envisage a novel type of labeling, called even vertex -graceful labeling, as the weight value for edges. In a rough graph, the weight of an edge identifies the consistent attribute even when the information system is imprecise. We investigate this labeling for some special graphs such as the rough path graph, rough cycle graph, rough comb graph, rough ladder graph and rough star graph. This even vertex -graceful labeling will be useful in the feature extraction process and leads naturally to graph mining.
PubDate: Jan 2023
- A New Methodology on Rough Lattice Using Granular Concepts
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 B. Srirekha Shakeela Sathish and P. Devaki Rough set theory has a vital role in the mathematical treatment of knowledge representation problems; hence, a rough algebraic structure was defined by Pawlak. Lattice theory has many applications in mathematics and computer science; for instance, the principle of ordered sets has been analyzed in logic programming for crypto-protocols. Iwinski extended the lattice approach to rough set theory, whereas an algebraic structure based on a rough lattice, depending on an indiscernibility relation, was established by Chakraborty. Granular means piecewise knowledge, grouping similar elements; the universe set is partitioned by an indiscernibility relation to form granules. This structure was framed to describe rough set theory and to study its corresponding rough approximation space. The analysis of the reduction of granules from the information table is object-oriented. An ordered pair of distributive lattices emphasizes the congruence class to define its projection; this projection of a distributive lattice is analyzed by a lemma stating that the largest and smallest elements are trivial ordered sets of an index. A rough approximation space is examined to incorporate the upper approximation and is analyzed under various possibilities. The Cartesian product of distributive lattices is investigated, and a lattice homomorphism is examined together with an equivalence relation and its conditions. Hence the approximation space is closed under union and intersection in the upper approximation. The lower approximation of different subsets of the distributive lattice is studied, and generalized lower and upper approximations are established to verify some results and their properties.
PubDate: Jan 2023
- Raise Estimation: An Alternative Approach in The Presence of Problematic
Multicollinearity
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Jinse Jacob and R. Varadharajan When the Ordinary Least Squares (OLS) method is used to compute regression coefficients, the results become unreliable when two or more predictor variables are linearly related to one another. The confidence intervals of the estimates become wider as a result of the increased variance of the OLS estimator, which also causes test procedures to potentially generate deceptive results. Additionally, it is difficult to determine the marginal contribution of the associated predictors, since the estimates depend on the other predictor variables included in the model. Ridge Regression (RR) is a popular alternative in this scenario; however, it impairs the standard approach to statistical testing. The Raise Method (RM) is a technique developed to combat multicollinearity while maintaining statistical inference. In this work, we offer a novel approach for determining the raise parameter, because the traditional one is a function of the actual coefficients, which limits the use of the Raise Method in real-world circumstances. Using simulations, the suggested method is compared with Ordinary Least Squares and Ridge Regression in terms of predictive capacity, stability of the coefficients, and probability of obtaining unacceptable coefficients at different levels of sample size, linear dependence, and residual variance. According to the findings, the proposed technique turns out to be quite effective. Finally, a practical application is discussed.
PubDate: Jan 2023
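As a rough illustration of the problem this paper addresses, and of why ridge-type shrinkage is the usual alternative, the following stdlib-only Python sketch compares the sampling variance of an OLS slope with a ridge slope under strong collinearity. It does not reproduce the Raise Method itself; the sample size, collinearity level, and ridge constant are illustrative choices.

```python
import random

def inv2(m):
    # inverse of a 2x2 matrix
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def fit(X, y, k=0.0):
    # beta = (X'X + k I)^{-1} X' y  (k = 0 gives OLS, k > 0 gives ridge)
    n = len(X)
    xtx = [[sum(X[i][r] * X[i][c] for i in range(n)) + (k if r == c else 0.0)
            for c in range(2)] for r in range(2)]
    xty = [sum(X[i][r] * y[i] for i in range(n)) for r in range(2)]
    m = inv2(xtx)
    return [m[r][0] * xty[0] + m[r][1] * xty[1] for r in range(2)]

random.seed(1)
ols_b1, ridge_b1 = [], []
for _ in range(500):
    x1 = [random.gauss(0, 1) for _ in range(50)]
    x2 = [v + random.gauss(0, 0.1) for v in x1]   # x2 nearly collinear with x1
    y = [a + b + random.gauss(0, 1) for a, b in zip(x1, x2)]  # true betas = 1, 1
    X = list(zip(x1, x2))
    ols_b1.append(fit(X, y, 0.0)[0])
    ridge_b1.append(fit(X, y, 2.0)[0])

var = lambda v: sum((t - sum(v) / len(v)) ** 2 for t in v) / len(v)
print(var(ols_b1), var(ridge_b1))  # ridge shrinks the estimator's variance
```

The price of the reduced variance is bias and the loss of standard inference, which is exactly the gap the Raise Method aims to fill.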
- Developing Average Run Length for Monitoring Changes in the Mean in the
Presence of Long Memory under a Seasonal Fractionally Integrated MAX Model
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Wilasinee Peerajit The cumulative sum (CUSUM) control chart can sensitively detect small-to-moderate shifts in the process mean. The average run length (ARL) is a popular technique used to determine the performance of a control chart. Recently, several researchers investigated the performance of processes on a CUSUM control chart by evaluating the ARL using either Monte Carlo simulation or Markov chain. As these methods only yield approximate results, we developed solutions for the exact ARL by using explicit formulas based on an integral equation (IE) for studying the performance of a CUSUM control chart running a long-memory process with exponential white noise. The long-memory process observations are derived from a seasonal fractionally integrated MAX model while focusing on X. The existence and uniqueness of the solution for calculating the ARL via explicit formulas were proved by using Banach's fixed-point theorem. The accuracy percentage of the explicit formulas against the approximate ARL obtained via the numerical IE method was greater than 99%, which indicates excellent agreement between the two methods. An important conclusion of this study is that the proposed solution for the ARL using explicit formulas could sensitively detect changes in the process mean on a CUSUM control chart in this situation. Finally, an illustrative case study is provided to show the efficacy of the proposed explicit formulas with processes involving real data.
PubDate: Jan 2023
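The Monte Carlo approach against which the paper's explicit formulas are benchmarked can be sketched as follows. This toy version assumes i.i.d. standard normal observations rather than the paper's long-memory seasonal model with exponential white noise, and the reference value k and decision limit h are illustrative.

```python
import random

def cusum_run_length(mu, k=0.5, h=4.0):
    # one-sided upper CUSUM on N(mu, 1) observations:
    # S_t = max(0, S_{t-1} + X_t - k); signal when S_t >= h
    s, t = 0.0, 0
    while s < h:
        t += 1
        s = max(0.0, s + random.gauss(mu, 1) - k)
    return t

random.seed(0)
reps = 2000
arl0 = sum(cusum_run_length(0.0) for _ in range(reps)) / reps  # in-control ARL
arl1 = sum(cusum_run_length(0.5) for _ in range(reps)) / reps  # after a mean shift
print(arl0, arl1)  # ARL drops sharply once the mean shifts
```

Such simulation estimates carry sampling error, which is precisely the motivation the abstract gives for exact ARL solutions via integral-equation-based explicit formulas.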
- Multiplication and Inverse Operations in Parametric Form of Triangular
Fuzzy Number
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Mashadi, Yuliana Safitri and Sukono Many authors have given arithmetic forms for triangular fuzzy numbers, especially for addition and subtraction, where there is not much difference between approaches. The differences occur for the multiplication, division, and inverse operations. Several authors define the inverse of a triangular fuzzy number in parametric form; however, multiplying a triangular fuzzy number by such an inverse does not always yield the identity, because an inverse producing a unique identity cannot be determined uniquely. Consequently, we cannot directly determine the inverse of a matrix of triangular fuzzy numbers, so problems involving such matrices cannot be solved directly by computing an inverse. Various authors have tried, with various methods, to construct an inverse, but still do not obtain the identity. As a result, the solution of a fully fuzzy linear system can be incompatible, with different authors obtaining different solutions for the same fully fuzzy linear system. This paper promotes an alternative method to determine the inverse of a triangular fuzzy number in parametric form. It begins with the construction of a midpoint for any triangular fuzzy number in parametric form. Then a multiplication form is constructed that yields a unique inverse producing the identity. The multiplication, division, and inverse forms are proven to satisfy various algebraic properties. Therefore, whether a single triangular fuzzy number or a matrix of triangular fuzzy numbers is used, the construction can be applied directly to produce a unique inverse. At the end of this paper, we give examples of calculating the inverse of a parametric triangular fuzzy number for various cases. It is expected that the reader can easily extend this to fuzzy matrices whose entries are triangular fuzzy numbers.
PubDate: Jan 2023
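The failure the paper addresses can be seen numerically: under the standard parametric (alpha-cut) interval arithmetic, a positive triangular fuzzy number times its conventional inverse is not the crisp identity except at the peak. This sketch assumes the common alpha-cut parametrisation and does not reproduce the paper's midpoint-based construction.

```python
def alpha_cut(a, b, c, r):
    # parametric (alpha-cut) form of triangular fuzzy number (a, b, c), 0 <= r <= 1
    return (a + (b - a) * r, c - (c - b) * r)

def mul(x, y):
    # interval product, assuming strictly positive intervals
    return (x[0] * y[0], x[1] * y[1])

def inv(x):
    # conventional interval inverse, assuming a strictly positive interval
    return (1 / x[1], 1 / x[0])

A = (2, 3, 4)
for r in (0.0, 0.5, 1.0):
    cut = alpha_cut(*A, r)
    print(r, mul(cut, inv(cut)))  # equals (1, 1) only at r = 1, the peak
```

At r = 0 the product is the interval (0.5, 2.0) rather than the identity, which is why a different multiplication/inverse construction is needed for fully fuzzy linear systems.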
- Inclusion Results of a Generalized Mittag-Leffler-Type Poisson
Distribution in the k-Uniformly Janowski Starlike and the k-Janowski
Convex Functions
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Jamal Salah, Hameed Ur Rehman and Iman Al Buwaiqi Due to the Mittag-Leffler function's crucial contribution to solving fractional integral and differential equations, academics have begun to pay more attention to this function. The Mittag-Leffler function naturally appears in the solutions of fractional-order differential and integral equations, particularly in studies of the fractional generalization of kinetic equations, random walks, Levy flights, super-diffusive transport, and complex systems. For example, certain properties of the Mittag-Leffler functions and generalized Mittag-Leffler functions can be found in [4,5]. We consider an additional generalization in this study, given by Prabhakar [6,7]. We normalize the latter in order to explore the inclusion results in well-known classes of analytic functions, namely the k-uniformly Janowski starlike and k-Janowski convex functions, respectively. Recently, research on the theory of univalent functions has emphasized the crucial role of implementing distributions of random variables such as the negative binomial distribution, the geometric distribution, and the hypergeometric distribution; in this study, the focus is on the Poisson distribution associated with the convolution (Hadamard product), which is applied to define and explore the inclusion results of the corresponding operator and integral operator. Furthermore, some results for special cases are also investigated.
PubDate: Jan 2023
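For readers unfamiliar with the function itself, the two-parameter Mittag-Leffler function can be evaluated directly from its defining series; the truncation length below is an illustrative choice, and the Prabhakar three-parameter generalization studied in the paper is not implemented here.

```python
import math

def mittag_leffler(z, alpha, beta=1.0, terms=60):
    # truncated series E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(alpha*k + beta)
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(terms))

# sanity checks against classical special cases:
# E_{1,1}(z) = exp(z) and E_{2,1}(z) = cosh(sqrt(z)) for z >= 0
print(mittag_leffler(1.0, 1.0), math.e)
print(mittag_leffler(1.0, 2.0, beta=1.0), math.cosh(1.0))
```

These reductions to the exponential and hyperbolic cosine are the standard way to sanity-check a Mittag-Leffler implementation before using it in fractional-calculus work.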
- Linear Stability of Double-sided Symmetric Thin Liquid Film by
Integral-theory
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Ibrahim S. Hamad The integral theory approach is used to explore the stability and dynamics of a free double-sided symmetric thin liquid film. For a Newtonian liquid with constant density and viscosity, the flow in a thinning liquid layer is analyzed in two dimensions. To construct an equation governing such flow, the Navier-Stokes equations are utilized with the proper boundary conditions of zero shear stress as well as zero normal stress on the bounding free surfaces, in dimensionless variables. The resulting equations, a non-linear evolution system for the layer thickness, local flow rate, and the unknown functions, are then solved using linear stability analysis, and the normal mode method is applied to these equations to reveal the critical condition. The characteristic equation for the growth rate and wave number is analyzed using MATLAB programming to show the regions of stable and unstable films. As a result of our research, we demonstrate that the free, double-sided thin liquid layer is unstable.
PubDate: Jan 2023
- Development of Nonparametric Structural Equation Modeling on Simulation
Data Using Exponential Functions
Abstract: Publication date: Jan 2023
Source:Mathematics and Statistics Volume 11 Number 1 Tamara Rezti Syafriana Solimun Ni Wayan Surya Wardhani Atiek Iriany and Adji Achmad Rinaldo Fernandes Objective: This study aims to determine the development of nonparametric SEM analysis on simulation data using the exponential function. Methodology: This study uses simulation data which is defined as an experimental approach to imitate the behavior of the system using a computer with the appropriate software. This study uses nonparametric structural equation modeling (SEM) analysis. The function used in this study is the exponential function. Results: The results showed that with simulation data all relationships have a significant effect on each other which have formative and reflective indicators. Testing the direct effect of Y2 on Y3 produces a structural coefficient value of 0.255 with a p-value
PubDate: Jan 2023
- Geographically Weighted Negative Binomial Regression Modeling using
Adaptive Kernel on the Number of Maternal Deaths during Childbirth
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Fahimah Fauwziyah, Suci Astutik and Henny Pramoedyo The standard model used for count data is Poisson regression. In fact, most count data are overdispersed, meaning that the response variable has greater variance than its mean, so Poisson regression cannot be used, because overdispersion leads to inaccurate parameter estimators. One of the most widely used methods to overcome overdispersion is Negative Binomial regression. If spatial effects such as spatial heterogeneity are taken into the Negative Binomial model, the appropriate method of analysis is Geographically Weighted Negative Binomial Regression (GWNBR). A spatial weighting matrix is required in the GWNBR model. In this study, three weighting functions were used: the Adaptive Gaussian, Adaptive Bisquare, and Adaptive Tricube kernels. A model is formed from each of the three weighting functions, and the best model is selected based on the smallest AIC. The count data used in this study are maternal deaths during childbirth in West Java Province, the province with the highest number of such cases in Indonesia. The results of the analysis show that, based on the smallest AIC, the best model of maternal deaths during childbirth in West Java is the GWNBR model with Adaptive Gaussian Kernel weights. From the best model, three groups were obtained based on the predictor variables that had a significant effect.
PubDate: Sep 2022
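One common way to build adaptive Gaussian kernel spatial weights, a plausible reading of the adaptive weighting used in GWR-type models (the exact GWNBR formulation may differ), is to set each location's bandwidth to the distance of its q-th nearest neighbour, so dense regions get narrow kernels and sparse regions get wide ones. The coordinates and q below are illustrative.

```python
import math

def adaptive_gaussian_weights(points, i, q):
    # bandwidth for location i = distance to its q-th nearest point
    # (index q in the sorted distance list, which includes the focal point itself)
    xi, yi = points[i]
    d = [math.hypot(x - xi, y - yi) for x, y in points]
    bw = sorted(d)[q]                      # adaptive bandwidth
    return [math.exp(-0.5 * (dij / bw) ** 2) for dij in d]

points = [(0, 0), (1, 0), (0, 2), (5, 5), (6, 5)]
w = adaptive_gaussian_weights(points, 0, q=2)
print(w)  # weight 1.0 at the focal point, decaying with distance
```

A bisquare or tricube variant would replace the exponential with a compactly supported polynomial kernel that is exactly zero beyond the bandwidth.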
- Characterization of a Class of Generalised Core-satellite Graphs Using
Average Degree
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Malathy V and Kalyani Desikan Network equilibrium models are significantly distinct in supply chain networks, traffic networks, and e-waste flow networks. The idea of network equilibrium is strongly perceived when determining the tuner sets of a graph (network). Tuner sets are subsets of vertices of the graph G whose degrees are lower than the average degree of G, d(G), and that can compensate for, or balance, the presence of vertices whose degrees are greater than d(G). A generalised core-satellite graph comprises copies of cliques (the satellites) meeting in a core clique Kc, and it belongs to the family of graphs of diameter two. It has a central core of vertices connected to a few satellites, where the satellite cliques need not be identical and can be of different sizes. Properties like the hierarchical structure of large real-world networks are competently modeled using core-satellite graphs [1, 2, 5]. This family of graphs exhibits properties similar to scale-free networks, as they possess anomalous vertex connectivity, where a small fraction of vertices (the core) are densely connected. Since these graphs possess such a structural property, interesting results are obtained when their tuner sets are determined. In this paper, we have considered, with p > q, a subclass of the generalised core-satellite graph which is the join of η copies of the clique Kq and γ copies of the clique Kp with the core K1. We obtain the tuner set for this subclass and establish the relation between the Top T(G) and the cardinality of the tuner set through necessary and sufficient conditions. We analyze and characterize these graphs and obtain some interesting results while simultaneously examining the existence of tuner sets.
PubDate: Sep 2022
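Taking the abstract's description literally, the candidate tuner-set vertices are those whose degree falls below the average degree d(G). The sketch below applies that criterion to a tiny core-satellite graph (core K1 joined to two satellite cliques of different sizes); the paper's full definition may impose further compensation conditions beyond this degree test.

```python
def average_degree(adj):
    # d(G) = (sum of degrees) / (number of vertices)
    return sum(len(nbrs) for nbrs in adj.values()) / len(adj)

def below_average(adj):
    # vertices whose degree is strictly below d(G): tuner-set candidates
    d = average_degree(adj)
    return {v for v, nbrs in adj.items() if len(nbrs) < d}

# core K1 (vertex 0) joined to satellite cliques {1, 2, 3} (a K3) and {4, 5} (a K2)
adj = {
    0: {1, 2, 3, 4, 5},
    1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2},
    4: {0, 5}, 5: {0, 4},
}
print(average_degree(adj), below_average(adj))
```

Here d(G) = 3, so only the smaller satellite's vertices fall below average, mirroring how unequal satellite sizes drive the tuner-set structure.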
- Anti-hesitant Fuzzy Subalgebras, Ideals and Deductive Systems of Hilbert
Algebras
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Aiyared Iampan S. Yamunadevi P. Maragatha Meenakshi and N. Rajesh The Hilbert algebra, one of several algebraic structures, was first described by Diego in 1966 [7] and has since been extensively studied by other mathematicians. Torra [18] was the first to suggest the idea of hesitant fuzzy sets (HFSs) in 2010, which is a generalization of the fuzzy sets defined by Zadeh [20] in 1965 as a function from a reference set to a power set of the unit interval. The significance of the ideas of hesitant fuzzy subalgebras, ideals, and filters in the study of the different logical algebras aroused our interest in applying these concepts to Hilbert algebras. In this paper, the concepts of HFSs to subalgebras (SAs), ideals (IDs), and deductive systems (DSs) of Hilbert algebras are introduced in terms of anti-types. We call them anti-hesitant fuzzy subalgebras (AHFSAs), anti-hesitant fuzzy ideals (AHFIDs), and anti-hesitant fuzzy deductive systems (AHFDSs). The relationships between AHFSAs, AHFIDs, and AHFDSs and their lower and strong level subsets are provided. As a result of the study, we found their generalization as follows: every AHFID of a Hilbert algebra Ω is an AHFSA and an AHFDS of Ω. We also study and find the conditions for the complement of an HFS to be an AHFSA, an AHFID, and an AHFDS. In addition, the relationships between the complements of AHFSAs, AHFIDs, and AHFDSs and their upper and strong level subsets are also provided.
PubDate: Sep 2022
- On a Weak Solution of a Fractional-order Temporal Equation
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Iqbal M. Batiha, Zainouba Chebana, Taki-Eddine Oussaeif, Adel Ouannas and Iqbal H. Jebril Several real-world phenomena emerging in engineering and science can be described successfully by developing models based on fractional-order partial differential equations. The exact, analytical, semi-analytical or numerical solutions for these models should be examined and investigated by distinguishing between their solvabilities and non-solvabilities. In this paper, we aim to establish sufficient conditions for the existence and uniqueness of a solution for a class of initial-boundary value problems with a Dirichlet condition. The results of this paper are established for the class of fractional-order partial differential equations by a method based on the Lax-Milgram theorem, which relies in its construction on properties of the symmetric part of the bilinear form. The Lax-Milgram theorem is a mathematical tool that can be used to examine the existence and uniqueness of weak solutions for fractional-order partial differential equations. These equations are formulated here in terms of the Caputo fractional-order derivative operator, whose inverse operator is the Riemann-Liouville fractional-order integral. The results of this paper will be supportive for mathematical analysts and researchers when a fractional-order partial differential equation is handled in terms of finding its exact, analytical, semi-analytical or numerical solution.
PubDate: Sep 2022
- Nano -connectedness and Strongly Nano -connectedness in Nano Topological
Spaces
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 S. Gunavathy, R. Alagar, Aiyared Iampan and Vediyappan Govindan This article's goals are to propose a new category of space, termed nano-ideal topological spaces, and to examine how they relate to conventional topological spaces. To determine their relationships in these spaces, we create certain closed sets, and these sets' fundamental characteristics and properties are provided. Additionally, we investigate two notions of ideal connectedness in nano topological spaces. In particular, we obtain certain features of such spaces and define -connectedness and strongly -connectedness of nano-topological spaces in terms of an arbitrary ideal. This study illustrates a novel kind of nano-topological space, and we define the relationships between the various classes of open sets, discuss how they can be characterised, and finally support some of their characterizations. The lower and upper approximations are used by the author to define nano topological space. As weak variants of nano open sets, the author also created nano -open sets, nano semi-open sets, and nano pre-open sets, and introduced continuity, the fundamental notion of topology, in nano topological spaces. We also introduce the notion of nano -continuity between nano topological spaces and investigate several properties of this type of near-nano continuity. Finally, we introduce two examples as applications in nano-topological spaces.
PubDate: Sep 2022
- Finite Domination Type for Monoid Presentations
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Elton Pasku and Anjeza Krakulli In [5], Squier, Otto and Kobayashi explored a homotopical property for monoids called finite derivation type (FDT) and proved that FDT is a necessary condition that a finitely presented monoid must satisfy if it is to have a finite canonical presentation. In a later development [2], Kobayashi proved that the property is equivalent to what is called in [2] finite domination type. It was indicated at the end of [2] that there are monoids which are not even finitely generated, and as a consequence are not of FDT. This indication inspired us to look for the possibility of defining a property of monoids which encapsulates both FDT and finite domination type. This is realized in the current paper by extending the notion of finite domination from monoids to rewriting systems. To achieve this, we follow the approach of Isbell [1], who defined the notion of the dominion of a subcategory of a category and characterized that dominion in terms of zigzags. The reason we follow this approach is that to every rewriting system which gives a monoid, there is always an associated category which contains three types of information at the same time: (i) all the possible ways in which the elements of the monoid are written as words over the generators; (ii) all the possible ways one can transform a word into another one representing the same element of the monoid by using the rewriting rules, each such way being in fact a path in the reduction graph. The last type of information (iii) encoded in the category is all the possible ways in which two parallel paths of the reduction graph are linked to each other by a series of compositions of whiskerings of other parallel paths.
This category turns out to have the advantage that it can "measure" the extent to which a set of parallel paths is sufficient to express any pair of parallel paths by composing whiskers from it. The gadget used to measure this is the Isbell dominion of the whisker category it generates. We then define the monoid to be of finite domination type (FDOT) if the generating data are finite and there is a finite set of morphisms whose dominion is the whole category. The first main result of our paper is that, like FDT, FDOT is an invariant of the monoid presentation, and the second is that FDT implies FDOT, while it remains open whether the converse is true or not. The importance of FDOT lies in the fact that not only does it generalize FDT, but the way it is defined has much in common with the latter, giving hope that FDOT is the right tool to put both into the same framework.
PubDate: Sep 2022
- Numerical Solution of the Two-Dimensional Elasticity Problem in Strains
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Khaldjigitov Abduvali and Djumayozov Umidjon Usually, the boundary value problems of the theory of elasticity are formulated with respect to displacements and are reduced to the well-known Lame equations; strains and stresses can then be calculated from the displacements obtained as the solution of Lame's equations. Also known are the Beltrami-Mitchell equations, which make it possible to formulate the boundary value problem of the theory of elasticity with respect to stresses. Currently, the boundary value problems of the theory of elasticity in stresses are studied in more detail in the two-dimensional case, and are usually solved numerically with the introduction of the Airy stress function. The direct solution of boundary value problems of elasticity theory with respect to stresses, however, requires further research. This work, similarly to the boundary value problem in stresses, is devoted to the formulation and numerical solution of boundary value problems of the theory of elasticity with respect to deformations. The proposed boundary value problem consists of six Beltrami-Mitchell-type equations depending on strains and three equilibrium equations expressed with respect to deformations. As boundary conditions, in addition to the usual conditions for surface forces, three additional conditions based on the equilibrium equations are introduced. The boundary value problem is considered in detail for a rectangular domain. The discrete analogue of the boundary value problem is constructed by the finite difference method, and the convergence of the difference schemes and an iterative method for their solution are studied. Software has been developed in C++ for solving these boundary value problems of elasticity in strains. A number of boundary value problems on the deformation of a rectangular plate are solved numerically under various boundary conditions.
The reliability of the obtained results is substantiated by comparing the numerical results with the exact solution, as well as with known solutions of plate tension problems with parabolic and uniformly distributed edge loads.
PubDate: Sep 2022
- Fuzzy Norm on Fuzzy -Normed Space
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Mashadi, Abdul Hadi and Sukono In various articles, the fuzzy -normed space concept is constructed from fuzzy normed space using an intuitionistic approach or a t-norm approach. However, fuzzy normed space can also be approached using fuzzy points. This paper shows that fuzzy -normed space can be constructed from fuzzy normed space using the fuzzy point approach to fuzzy sets. Furthermore, it is also discussed how to construct fuzzy ()-normed space from fuzzy -normed space using the fuzzy point approach. The method is as follows: from a fuzzy normed space, we construct a norm function that satisfies the properties of a fuzzy -norm, so that a fuzzy -normed space is derived; conversely, from a fuzzy -normed space, we construct a norm function that satisfies the properties of a fuzzy ()-norm, so that a fuzzy ()-normed space is obtained. Finally, we obtain two new theorems which state that a fuzzy -normed space can always be constructed from any fuzzy normed space, and a fuzzy ()-normed space from a fuzzy -normed space, using the fuzzy point approach to fuzzy sets.
PubDate: Sep 2022
- Empirical Power and Type I Error of Covariate Adjusted Nonparametric
Methods
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Jiabu Ye and Dejian Lai In clinical trials, practitioners collect baseline covariates for enrolled patients prior to treatment assignment. In recent guidance from the Food and Drug Administration and the European Medicines Agency, regulators encourage practitioners to utilize baseline information at the analysis stage to improve efficiency. However, the current guidance focuses on linear or non-linear modelling approaches; nonparametric statistical methods are not its focus. In this article, we conducted simulations of several covariate-adjusted nonparametric statistical tests. The Wilcoxon rank sum test is a widely used method for comparing non-normally distributed response variables between two groups, but its original form does not take into account the possible effect of covariates. We investigated the empirical power and the type I error of Wilcoxon-type test statistics under various settings of covariate adjustment commonly encountered in clinical trials. In addition to Wilcoxon-type test statistics, we also compared the simulation results to more advanced nonparametric test statistics such as the aligned rank test and the Jaeckel and Hettmansperger-McKean tests. The simulation results show that, when there is covariate imbalance, applying the Wilcoxon rank sum test without adjusting for the covariates becomes problematic. The survey of covariate adjustments for the various tests under investigation gives brief guidance to trial practitioners in real practice, particularly those whose baseline covariates are not well balanced.
PubDate: Sep 2022
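The central caution here, that the unadjusted Wilcoxon rank sum test misbehaves under covariate imbalance, can be reproduced in a small simulation. The sketch below uses the normal approximation to the rank-sum null distribution (continuous data, so no tie correction is needed) and a deliberately imbalanced baseline covariate with no treatment effect at all; sample sizes and effect magnitudes are illustrative.

```python
import math
import random

def rank_sum_z(x, y):
    # standardized Wilcoxon rank-sum statistic, normal approximation
    allv = sorted(x + y)
    rank = {v: i + 1 for i, v in enumerate(allv)}  # continuous data: no ties
    n1, n2 = len(x), len(y)
    w = sum(rank[v] for v in x)
    mu = n1 * (n1 + n2 + 1) / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return (w - mu) / sd

random.seed(7)
trials, rejections = 1000, 0
for _ in range(trials):
    # covariate imbalance: group 1 has systematically larger baseline covariate z
    z1 = [random.gauss(1, 1) for _ in range(30)]
    z2 = [random.gauss(0, 1) for _ in range(30)]
    # NO treatment effect: the response depends only on the covariate plus noise
    y1 = [z + random.gauss(0, 1) for z in z1]
    y2 = [z + random.gauss(0, 1) for z in z2]
    if abs(rank_sum_z(y1, y2)) > 1.96:  # nominal two-sided 5% test
        rejections += 1
print(rejections / trials)  # far above the nominal 0.05 level
```

Because the groups differ only through the covariate, every rejection here is a false positive, which is exactly the inflation a covariate-adjusted test is meant to remove.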
- Henstock-Kurzweil Integral for Banach Valued Function
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 T. G. Thange and S. S. Gangane In this paper, we study the Henstock-Kurzweil integral, a generalization of the Riemann integral; the Henstock-Kurzweil integral is a natural extension of the Riemann integral. We define the Henstock-Kurzweil integral of a Banach space valued function with respect to a function of bounded variation, which extends the real valued Henstock-Kurzweil integral with respect to an increasing function, and we investigate its elementary properties. We prove the convergence theorems and the Saks-Henstock lemma for the Henstock-Kurzweil integral of Banach valued functions with respect to a function of bounded variation. Equi-integrability for Banach space valued functions is defined, and an equi-integrability theorem for the Henstock-Kurzweil integral of Banach space valued functions with respect to a function of bounded variation is proved. Finally, the Bochner Henstock-Kurzweil integral of a Banach valued function with respect to a function of bounded variation is defined, and the relation between the Bochner Henstock-Kurzweil integral and the Henstock-Kurzweil integral is exhibited.
PubDate: Sep 2022
- Mathematical Analysis of Dynamic Models of Suspension Bridges with Delayed
Damping
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Akbar B. Aliyev and Yeter M. Farhadova Suspension bridges are a type of construction in which the deck is suspended under a series of suspension cables on vertical hangers. The first modern examples of this design began to appear in the early 1800s. Modern suspension bridges are lightweight and aesthetically pleasing, and can span longer distances than any other bridge form. Many papers have been devoted to the modelling of suspension bridges; for instance, Lazer and McKenna studied the problem of nonlinear oscillation in a suspension bridge. They introduced a (one-dimensional) mathematical model for the bridge that takes into account the fact that the coupling provided by the stays connecting the main cable to the deck of the road bed is fundamentally nonlinear. This gives rise to a system of semilinear hyperbolic equations, where the first equation describes the vibration of the road bed in the vertical plane and the second describes that of the main cable from which the road bed is suspended by the tie cables. Recently, interest in this field has been increasing at a high rate. In this paper, we investigate some mathematical models of suspension bridges with a strong delay in the linear aerodynamic resistance force. We establish the exponential decay of the solution for the corresponding homogeneous system and prove the existence of an absorbing set as well as a bounded attractor.
PubDate: Sep 2022
- Solutions of Nonlinear Fractional Differential Equations with
Nondifferentiable Terms
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Monica Botros, E.A.A. Ziada and I.L. EL-Kalla In this research, we employ a newly developed strategy based on a modified version of the Adomian decomposition method (ADM) to solve nonlinear fractional differential equations (FDE) with both differentiable and nondifferentiable terms. FDE have attracted the interest of many researchers due to the development of both the theory and applications of fractional calculus. Fractional differential equations can be used to model various fields of science and engineering such as fluid flow, viscoelasticity, electrochemistry, control, electromagnetics, and many others. Several fractional derivative definitions have been presented, including the Riemann-Liouville, Caputo, and Caputo-Fabrizio fractional derivatives. In this technique, we only need to calculate the first Adomian polynomial, avoiding the hurdles posed by the remaining polynomials of the nondifferentiable nonlinear terms. Furthermore, the proposed technique is easy to programme and produces the desired output with minimal work and time on the same processor. When compared to the exact solution, this method has the advantage of reducing calculation steps while producing accurate results. The supporting evidence shows that the modified Adomian decomposition method has an advantage over the traditional Adomian decomposition method, which can be seen very clearly with nonlinear fractional differential equations. Computational examples with difficult problems are used to demonstrate the new algorithm's efficiency. The results show that the modified ADM is powerful and converges faster than the original method. Convergence analysis is discussed, and uniqueness is also established.
PubDate: Sep 2022
- Some Results on Theory of Numbers, Partial Differential Equations and
Numerical Analysis
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 B. M. Cerna Maguiña, Dik D. Lujerio Garcia, Carlos Reyes Pareja and Torres Dominguez Cinthia In this article, given a number that ends in one and assuming that there are integer solutions for the given equations, we use the straight line passing through the center of gravity of the triangle bounded by the given vertices. Considering A ≥ 25, we manage to divide the domain of the curve into two disjoint subsets, and using Theorem 2.2 of this article, we find the subset containing the integer solution of the equation. A similar process is carried out in the other cases for the form of P; these curves are different, and to obtain a process similar to the one carried out previously, we proceed according to Observation 2.2. Our results minimize the number of operations to perform when our problem is implemented computationally. Furthermore, we obtain some conditions for finding the solution of the stated equations on a bounded open domain with piecewise smooth boundary. All the operations carried out to find the solutions assume that they exist, and we have found the conditions that the coefficients must satisfy. We finish by finding an optimal domain for the real solution of a given polynomial of degree five. This process can also be carried out to reduce the degree of a given polynomial and thus obtain information about its roots.
PubDate: Sep 2022
- The Exact Solutions of the Space and Time Fractional Telegraph Equations
by the Double Sadik Transform Method
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Prapart Pue-on The double integral transform is a robust tool that is important in handling scientific and engineering problems. Besides its simplicity of use and straightforward application, the ability to reduce problems to an easily solved algebraic equation is a substantial advantage of the tool. Among the several integral transforms, the double Sadik transform is acknowledged to be one of the most frequently used in solving differential and integral equations. This work investigates a generalized double integral transform called the double Sadik transform. The double Sadik transforms of partial fractional derivatives in the Caputo sense are derived, and the double Sadik transform method is introduced. The method is applied to solve initial boundary value problems for linear space- and time-fractional telegraph equations. Moreover, the suggested strategy can be used on non-linear problems via an iterative method and a decomposition concept. Some problems with known solutions are solved with relatively minimal computational cost. The results are represented using the Mittag-Leffler function and cover the solution of the classical telegraph equation. The obtained exact solutions not only show the accuracy and efficiency of the technique, but also reveal its reliability when compared to solutions obtained using other methods.
PubDate: Sep 2022
- On -Ideal Statistically Convergent of Double Sequences in n-Normed Spaces
over Non-Archimedean Fields
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 R. Sakthipriya and K. Suja The main aim of this work is to investigate some important properties of statistically convergent sequences in non-Archimedean fields. Statistical convergence has been discussed in various fields of mathematics, namely approximation theory, measure theory, probability theory, trigonometric series, number theory, etc. The concept of summability over valued fields is a significant area of mathematics with many applications in analytic continuation, quantum mechanics, probability theory, Fourier analysis, approximation theory, and fixed point theory. The theory of statistical convergence plays a notable role in summability theory and functional analysis. The purpose of this work is to provide certain characterizations of ideal statistical convergence of sequences and ideal statistical Cauchy sequences in n-normed spaces, and to establish the relevant results in non-Archimedean fields. The ideal statistically convergent sequence and the ideal statistically Cauchy sequence are defined, and a few related theorems are proved in the field . The results are extended to establish statistical convergence of double sequences in n-normed spaces, and some new results are proved. The central concept of this work is ideal statistical convergence of double sequences in an n-normed space over a complete, non-trivially valued, non-Archimedean field. Throughout this article, is a complete, non-trivially valued, non-Archimedean field.
PubDate: Sep 2022
- Mathematical Analysis of Priority Bi-serial Queue Network Model
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Deepak Gupta, Aarti Saini and A.K. Tripathi One of the most comprehensive theories of stochastic models is queueing theory. Through innovative analytical research with broad applicability, advanced theoretical models are being developed. In the present research, we investigate a queueing network model with low- and high-priority users and different server transition probabilities. The two service channels used in this study, and , are connected to the same server, . Customers with low and high priorities are invited by the server . The objective of the research is to design a model that helps minimize congestion in different systems. A Poisson distribution is used to characterize both the arrival and service patterns. The system operates in a stochastic domain. The differential difference equations have been established, and the consistency of the behaviour of the system has been examined. The generating function approach, the rules of calculus, and statistical formulae are used to assess the model's performance, and the outcomes are shown through numerical analyses and graphical presentations. This model can be applied in a number of real situations, including administration, manufacturing, hospitals, banking systems, etc. In such situations, the present study is quite beneficial for understanding the system and redesigning it.
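The paper's bi-serial priority network is more involved than the textbook case, but the kind of steady-state measures it derives have simple closed forms in the single-server M/M/1 special case. A minimal sketch for orientation (the rates and the helper name `mm1_measures` are illustrative, not from the paper):

```python
def mm1_measures(lam, mu):
    """Steady-state measures of an M/M/1 queue; requires lam < mu for stability."""
    if lam >= mu:
        raise ValueError("system is unstable: need lam < mu")
    rho = lam / mu            # server utilisation
    L = rho / (1 - rho)       # mean number of customers in the system
    Lq = rho ** 2 / (1 - rho) # mean queue length (excluding the one in service)
    W = 1 / (mu - lam)        # mean sojourn time in the system
    Wq = rho / (mu - lam)     # mean waiting time in the queue
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

m = mm1_measures(2.0, 5.0)    # arrival rate 2, service rate 5
```

Little's law (L = λW) holds for the returned values, which is a quick internal consistency check.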
PubDate: Sep 2022
- Using Clustering Methods to Detect the Revealed Preferences of Moroccans
towards the Electric Vehicles: Latent Class Analysis (LCA) and K-Modes
Algorithm (K-MA)
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Taoufiq El Harrouti, Mourad Azhari, Hajar Deqqaq, Abdellah Abouabdellah, Sanaa El Aidi and Habiba Chaoui Latent Class Analysis (LCA) and the k-Modes Algorithm (K-MA) are two unsupervised machine learning techniques. These methods aim to group individuals on the basis of their shared traits. They are applied to categorical data and can be used to detect people's opinions toward green forms of transportation, especially Electric Vehicles (EVs) as an alternative to conventional internal combustion engine vehicles. The LCA approach discovers group profiles (clusters) based on observed variables, whereas the K-MA technique is an adaptation of the k-means algorithm to categorical variables. In this study, we apply these two methods to identify Moroccans' preferences for the electrification of their means of transportation. Both algorithms divide the analyzed sample into two groups: the first group is more interested in EVs, while the second consists of individuals who are less concerned about ecologically sustainable transportation. In addition, we conclude that the LCA algorithm performs well and is superior to the K-MA, its discrimination power (65% vs 35%) being greater than that of the K-MA (52% vs 48%).
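The K-MA step described above (k-means adapted to categorical data via matching dissimilarity and column-wise modes) can be sketched as follows; the survey attributes and starting modes are hypothetical, not the study's data:

```python
from collections import Counter

def kmodes(data, modes, iters=10):
    """Tiny k-modes: matching dissimilarity + column-wise mode update."""
    for _ in range(iters):
        # assign each object to the nearest mode (count of mismatched attributes)
        labels = [min(range(len(modes)),
                      key=lambda k: sum(a != b for a, b in zip(row, modes[k])))
                  for row in data]
        # update each mode to the most frequent category per attribute
        new_modes = []
        for k in range(len(modes)):
            members = [row for row, l in zip(data, labels) if l == k]
            if not members:
                new_modes.append(modes[k])
                continue
            new_modes.append(tuple(Counter(col).most_common(1)[0][0]
                                   for col in zip(*members)))
        if new_modes == modes:
            break
        modes = new_modes
    return labels, modes

# hypothetical survey answers: (charging_access, eco_concern, budget)
data = [("yes", "high", "mid"), ("yes", "high", "high"),
        ("no", "low", "low"), ("no", "low", "mid")]
labels, modes = kmodes(data, [data[0], data[2]])
```

On this toy sample, the two clusters separate the EV-friendly responses from the rest after a single pass.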
PubDate: Sep 2022
- Asymptotically Minimax Goodness-of-fit Testing for Single-index Models
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Jean-Philippe Tchiekre, Christophe Pouet and Armel Fabrice E. Yodé In the context of the nonparametric multivariate regression model, we are interested in goodness-of-fit testing for single-index models. These are dimension reduction models and are therefore useful in multidimensional nonparametric statistics because of the well-known phenomenon called the curse of dimensionality. Fan and Li [5] proposed the first consistent goodness-of-fit test for the single-index model, using nonparametric kernel estimation and a central limit theorem for degenerate U-statistics of order higher than two. Since then, the minimax properties of this test have not been investigated. Following this work, we use the asymptotic minimax approach. We are interested in finding the asymptotic minimax rate of testing, which gives the minimal distance between the null and alternative hypotheses such that successful testing is possible. We propose a test procedure of level which can tend to zero when the sample size tends to infinity. We establish the minimax asymptotic properties of our test procedure by showing that it reaches the asymptotic minimax rate for the dimension , and that no test of level reaches this rate for . Because of its minimax asymptotic properties, our test is able to distinguish the null hypothesis from the closest possible alternative. These results were made possible by a large deviation result that we establish for a degenerate U-statistic of order two appearing in our decision variable.
PubDate: Sep 2022
- Accuracy Improvement of Block Backward Differentiation Formulas for
Solving Stiff Ordinary Differential Equations Using Modified Versions of
Euler's Method
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Nurfaezah Mohd Husin, Iskandar Shah Mohd Zawawi, Nooraini Zainuddin and Zarina Bibi Ibrahim In this study, the fully implicit 2-point block backward differentiation formulas (BBDF) method is successfully utilized for solving stiff ordinary differential equations (ODEs), taking into account the use of new starting methods, namely the modified Euler's method (MEM), the improved modified Euler's method (IMEM), and the new Euler's method (NEM). The reason for proposing the BBDF is that the method has been proven useful for stiff ODEs due to its A-stability; furthermore, it approximates the solutions at two points simultaneously at each step. The proposed method is implemented through Newton's iteration procedure, which involves the calculation of the Jacobian matrix. Accuracy is evaluated based on the method's performance in solving linear and non-linear initial value problems (IVPs) of first-order stiff ODEs with transient and steady-state solutions. Comparisons are made with the conventional BBDF approach to indicate the reliability of the proposed method. Numerical results indicate that not only does the classical Euler's method provide accurate solutions for the BBDF, but the modified versions of Euler's method also improve the accuracy of the BBDF, in terms of absolute error at certain step sizes and stages of iteration.
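As a rough illustration of why a second-order starting method helps, here is an improved (modified) Euler predictor-corrector compared against the classical Euler method on a simple stiff-flavoured test problem; the problem and step size are illustrative, not those of the paper:

```python
import math

def euler(f, y0, t0, t1, h):
    """Classical explicit Euler method."""
    y, t = y0, t0
    while t < t1 - 1e-12:
        y += h * f(t, y)
        t += h
    return y

def modified_euler(f, y0, t0, t1, h):
    """Heun's (improved Euler) method: an order-2 predictor-corrector."""
    y, t = y0, t0
    while t < t1 - 1e-12:
        k1 = f(t, y)                # slope at the left end (predictor)
        k2 = f(t + h, y + h * k1)   # slope at the predicted right end
        y += h * (k1 + k2) / 2      # trapezoidal average (corrector)
        t += h
    return y

f = lambda t, y: -5.0 * y           # mildly stiff linear test problem
exact = math.exp(-5.0)              # y(1) for y' = -5y, y(0) = 1
e1 = abs(euler(f, 1.0, 0.0, 1.0, 0.1) - exact)
e2 = abs(modified_euler(f, 1.0, 0.0, 1.0, 0.1) - exact)
```

At this step size the order-2 starter roughly halves the absolute error relative to classical Euler.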
PubDate: Sep 2022
- Towards a Model for Simulating Collision of Multiple Water Droplets Flow
Down a Leaf Surface
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Moa'ath N. Oqielat In the current article, a physics-based mathematical model is presented to generate realistic trajectories of water droplets across the Frangipani leaf surface; the model can also be applied to any other kind of leaf. This is the first of a series of two articles; in the second article, we will study the collision between the droplet and the liquid streak. The model has many applications in different scientific and engineering fields, such as modelling pesticide movement on leaf surfaces and modelling absorption and nutrition systems. The leaf surface consists of a triangular mesh structure constructed using techniques such as the well-known EasyMesh method. The surface is fitted, using techniques such as finite element methods and the Clough-Tocher method, to a set of 3D real-world data points collected by a laser scanner, and the motion of the droplet on each triangle is calculated using a derived equation of motion. The motion of the droplet is affected by different forces, such as gravity and drag. Simulations of the model were implemented in Matlab, and the results capture the droplet motion well.
PubDate: Sep 2022
- Stochastic-Fractal Analysis Modeling of Salts Precipitation from Aqueous
Solution
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Isela J. Reyna-Rosas, Josué F. Pérez-Sánchez, Edgardo Suárez-Domínguez, Alejandra Hernández-Alvarado, Susana Gonzalez-Santana and F. Izquierdo-Kulich Electrolytes are of interest because thin plate coatings are normally obtained from aqueous solutions. The properties of the surface are important because various properties, such as resistance and durability, depend on it. To understand the phenomenological processes, it is better to analyze simpler processes, such as the precipitation of sodium chloride. In this paper, a model is proposed to predict the temporal behavior of the fractal dimension of the patterns formed during salt precipitation by solvent evaporation on a scattering surface; for fractal box counting, the ImageJ software was used. The model was obtained by applying stochastic methods and fractal geometry, describing the internal fluctuations caused by precipitation and dissolution at the mesoscopic scale of solid crystalline particles. By fitting the proposed model to the experimental data, it is possible to estimate the velocity constants related to the microscopic precipitation processes of the particles that form the pattern. The model was validated and used to study the precipitation of carbonate salts and sodium chloride, respectively, yielding predictions consistent with the physicochemical properties of these salts. From the fit of the proposed models to the observed experimental data, the values of the velocity constants of the precipitation and dissolution processes were also estimated.
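The fractal box-counting measurement delegated to ImageJ in the abstract can be sketched directly; as a sanity check, a completely filled square should return a dimension of about 2. This is a generic sketch, not the authors' pipeline:

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a square binary image by box counting."""
    n = img.shape[0]
    counts = []
    for s in sizes:
        # count boxes of side s containing at least one occupied pixel
        c = sum(img[i:i + s, j:j + s].any()
                for i in range(0, n, s) for j in range(0, n, s))
        counts.append(c)
    # slope of log(count) against log(1/size) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

filled = np.ones((16, 16), dtype=bool)  # a filled square should give D close to 2
D = box_counting_dimension(filled)
```

For a precipitation pattern one would pass a thresholded micrograph instead of the filled test image.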
PubDate: Sep 2022
- Analysis of Heterogeneous Feedback Queue Model in Stochastic and in Fuzzy
Environment Using L-R Method
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Vandana Saini, Deepak Gupta and A.K. Tripathi In this paper, we analyse a feedback queue network in stochastic and fuzzy environments. We consider a model with three heterogeneous servers which are commonly attached to a single server at the start. At the initial stage, all queue performance measures are obtained in the steady state, that is, in the stochastic environment. The work is then extended to the fuzzy environment because, in practice, the characteristics of the system are not exact but uncertain in nature. In the present work, we use the probability generating function technique, triangular fuzzy numbers and classical formulae for the calculation of all queue characteristics, and the L-R method to calculate the queue characteristics in the fuzzy environment.
PubDate: Sep 2022
- The Initialization of Flexible K-Medoids Partitioning Method Using a
Combination of Deviation and Sum of Variable Values
Abstract: Publication date: Sep 2022
Source:Mathematics and Statistics Volume 10 Number 5 Kariyam, Abdurakhman, Subanar and Herni Utami This research proposes a new algorithm for clustering datasets using the flexible k-medoids partitioning method. The procedure is divided into two phases: selecting the initial medoids and partitioning the dataset. The initial medoids are selected based on a block representation of a combination of the sum and deviation of the variable values. The relative positions of the objects are separated when the sums of the values of the p variables differ, even though the objects have the same variance. Objects are selected flexibly from each block as the initial medoids to construct the initial groups; this process ensures that any identical objects will be in the same group. Candidates for the final medoids are determined randomly by selecting objects from each initial group. The final medoids are then identified as the combination of objects that produces the minimum total deviation within the clusters. The proposed method overcomes the empty groups that may arise in the simple and fast k-medoids algorithm, and it also overcomes identical objects landing in different groups, which may occur in the initialization of the simple k-medoids algorithm. Furthermore, artificial data and six real datasets, namely iris, ionosphere, soybean small, primary tumor, heart disease case 1 and zoo, were used to evaluate this method, and the results were compared with other algorithms based on the performance of the initial and final groups. The experimental results showed that the proposed method ensures that no initial groups are empty. For the real datasets, the adjusted Rand index and clustering accuracy of the final groups of the new algorithm outperform those of the other methods.
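For context, a plain alternating k-medoids pass (assignment, then per-cluster medoid update by minimum total deviation) can be sketched as below. This is not the paper's flexible sum/deviation-based initialization; note also that a naive version like this one can fail on exactly the empty-cluster case the paper is designed to avoid:

```python
def k_medoids(points, medoid_idx, dist, iters=20):
    """Plain alternating k-medoids (PAM-style update), given initial medoid indices."""
    for _ in range(iters):
        # assignment step: each point joins its nearest current medoid
        labels = [min(range(len(medoid_idx)),
                      key=lambda k: dist(p, points[medoid_idx[k]]))
                  for p in points]
        # update step: per cluster, pick the member minimising total deviation
        new_idx = []
        for k in range(len(medoid_idx)):
            members = [i for i, l in enumerate(labels) if l == k]
            new_idx.append(min(members,            # raises if a cluster empties
                               key=lambda i: sum(dist(points[i], points[j])
                                                 for j in members)))
        if new_idx == medoid_idx:
            break
        medoid_idx = new_idx
    return labels, medoid_idx

manhattan = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (9, 10)]
labels, med = k_medoids(pts, [0, 3], manhattan)
```

The `min(members, ...)` call would raise on an empty cluster, which is the failure mode a careful initialization rules out.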
PubDate: Sep 2022
- Construction of Rough Graph through Rough Membership Function
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 R. Aruna Devi and K. Anitha The rough membership function defines the degree of relationship between the conditional and decision attributes of an information system. It is defined by , where is the subset of under the relation and is the universe of discourse. It can be expressed in different forms, such as the cardinality form and the probabilistic form. In cardinality form, it is expressed as , whereas in probabilistic form it can be denoted as , where is the equivalence class of with respect to . This membership function is used to measure the value of uncertainty. In this paper, we introduce the concept of a graphical representation of rough sets. The rough graph was introduced by He Tong in 2006. We propose a novel method for the construction of a rough graph through the rough membership function : there is an edge between two vertices if . The rough graph is constructed for an information system, with objects considered as vertices. Rough paths, rough cycles and rough ladder graphs are introduced in this paper. We develop operations on rough graphs and also extend the properties of rough graphs.
PubDate: Nov 2022
- Central Automorphisms in n-abelian Groups
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 Rugare Kwashira The study of Aut(G), the group of automorphisms of G, has been undertaken by various authors. One way to facilitate this study is to investigate the structure of Autc(G), the subgroup of central automorphisms. For some classes of groups, algebraic properties like solvability, nilpotency, abelianness and nilpotency relative to an automorphism can be deduced through the study of the subgroups Autc(G) and Autc∗(G), where Autc∗(G) is the group of central automorphisms that fix Z(G) point-wise. For instance [6], if Autc(G) = Aut(G) then G is nilpotent of class 2, and if G is f-nilpotent for Autc∗(G), then for a group G the notions of relative nilpotency and nilpotency coincide [8]. The group is abelian if G is identity nilpotent only [8]. For an arbitrary group G, the subgroups Autc(G) and Autc∗(G) are trivial, but when G is a p-group, Autc(G) is non-trivial and the structure of Autc∗(G) has been described [4]. The study of the influence of types of subgroups on the structure of G is a powerful technique; thus, one can investigate the influence of maximal invariant subgroups of G on the structure of Autc∗(G). We consider a class of finite, non-commutative, n-abelian groups that are not necessarily p-groups. Here, n = 2l + 1 is a positive integer and l is an odd integer. The purpose of this paper is to explicitly describe the central automorphisms of G = Gl that fix the center element-wise, and consequently the algebraic structure of Autc∗(G). For this goal, we study the invariant normal subgroups M of G such that and M is maximal in G. It suffices to study Hom(G/M, Z(G)), the group of homomorphisms from the quotient G/M to the center Z(G). We explore the central automorphism group of pullbacks involving groups of the form Gl, and we extend our study to central automorphisms in the class of groups Gl in which the mapping is an automorphism.
For such groups, Autc∗(G) can be described through Hom(G/M, Z(G)), where M is a normal, maximal subgroup of G such that the quotient group G/M is abelian. We show that Hom and that Autc∗(G) is isomorphic to the cyclic group of prime order p. The class of groups studied in our paper falls within a larger class of groups with the special characterization that their non-normal subgroups are contranormal. The results of this paper can be generalized to this larger class of groups.
PubDate: Nov 2022
- Some Fixed Point Results in Bicomplex Valued Metric Spaces
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 Duduka Venkatesh and V. Naga Raju Fixed points are also called invariant points. Invariant point theorems are essential tools in solving problems arising in different branches of mathematical analysis. In the present paper, we establish three unique common invariant point theorems using two self-mappings, four self-mappings and six self-mappings in the bicomplex valued metric space. In the first theorem, we prove a common invariant point theorem for four self-mappings by using weaker conditions such as weak compatibility, a generalized contraction and the property. In the second theorem, we prove a common invariant point theorem for six self-mappings by using an inclusion relation, a generalized contraction, weakly compatible maps and commuting maps. Further, in the third theorem, we obtain a common coupled invariant point for two self-mappings using different contractions in the bicomplex valued metric space. These results extend and generalize the results of [11] to the bicomplex valued metric space. Moreover, we provide an example which supports the results.
PubDate: Nov 2022
- A Study on Intuitionistic Fuzzy Critical Path Problems Through Centroid
Based Ranking Method
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 T. Yogashanthi, Shakeela Sathish and K. Ganesan In this study, an intuitionistic fuzzy version of the critical path method is proposed to solve network problems with uncertain activity durations. The intuitionistic fuzzy set [1] is an extension of fuzzy set theory [2]; unlike a fuzzy set, it involves the degree of belonging, the degree of non-belonging (non-membership), and the degree of hesitancy, which helps the decision maker to adopt the best among the worst cases. Trapezoidal and triangular intuitionistic fuzzy numbers are used to describe the uncertain activity or task durations of the project network. These numbers are converted into their corresponding parametric forms, and by applying the proposed intuitionistic fuzzy arithmetic operations and a new ranking method based on the parametric form of intuitionistic fuzzy numbers, the intuitionistic fuzzy critical path, with a vagueness-reduced intuitionistic fuzzy completion duration of the project, is obtained. The proposed method is validated by comparing the obtained results with results available in the literature.
PubDate: Nov 2022
- Transparency Order and Cross-Correlation Analysis of Boolean Functions
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 Mayasar Ahmad Dar, Hiral Raja, Afshan Butt and Deepmala Sharma Transparency order is considered to be a cryptographically significant property that characterizes the resistance of S-boxes against differential power analysis attacks: an S-box with low transparency order is more resistant to these attacks. Until now, few attempts have been made to theoretically examine the transparency order and its relationship with other cryptographic properties; all constructions associated with transparency order rely on search algorithms. In this paper, we discuss a new interpretation of bent functions in terms of their transparency order. Using the concepts of vector concatenation and correlation characteristics, we find the transparency order of Boolean functions. The notion of complementary transparency order is given, and for a pair of Boolean functions we interpret complementary transparency order through their Walsh-Hadamard transforms. We establish a relationship between transparency order and the cross-correlation of a pair of Boolean functions, and a relationship between transparency order and -variable decomposition bent functions. We generalize the bounds on the sum-of-squares of autocorrelation in terms of the transparency order of Boolean functions using Walsh-Hadamard spectra. Further, the transparency order of a function fulfilling the propagation criterion with respect to a linear subspace is evaluated.
PubDate: Nov 2022
- Maximum Likelihood Estimation in the Inverse Weibull Distribution with
Type II Censored Data
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 Fatima A. Alshaikh and Ayman Baklizi We consider maximum likelihood estimation of the parameters, and of certain functions of the parameters, of the Inverse Weibull (IW) distribution based on type II censored data. The functions under consideration are the Mean Residual Life (MRL), which is very important in reliability studies, and the Tail Value at Risk (TVaR), which is an important measure of risk in actuarial studies. We investigated the performance of the MLE of the parameters and derived functions under various experimental conditions using simulation techniques, with bias and mean squared error as the performance criteria. Recommendations on the use of the MLE in this model are given. We found that the parameter estimators are almost unbiased, while the MRL and TVaR estimators are asymptotically unbiased. Moreover, the mean squared error of all estimators decreases for larger sample sizes and increases when the censoring proportion is increased for a fixed sample size. The conclusion is that the maximum likelihood method works well for the parameters and for derived functions of the parameters such as the MRL and TVaR. Two examples on real data sets are presented to illustrate the application of the methods used in this paper: the first is on the survival times of pigs, while the other is on fire losses.
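A minimal numpy-only sketch of type II censored maximum likelihood for the Inverse Weibull (the sample sizes, true parameters and the crude grid optimiser are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(1)

# Inverse Weibull: F(x) = exp(-(lam/x)**beta).  Simulate n lifetimes, then
# type II censor: only the first r order statistics are observed.
beta_true, lam_true, n, r = 2.0, 3.0, 400, 300
x = np.sort(lam_true * (-np.log(rng.uniform(size=n))) ** (-1.0 / beta_true))
obs, cpt = x[:r], x[r - 1]          # observed values and censoring point

def neg_loglik(theta):
    """Type II censored negative log-likelihood for the Inverse Weibull."""
    beta, lam = theta
    if beta <= 0 or lam <= 0:
        return np.inf
    logf = (np.log(beta) + beta * np.log(lam)
            - (beta + 1.0) * np.log(obs) - (lam / obs) ** beta)
    logS = np.log1p(-np.exp(-(lam / cpt) ** beta))  # log-survival at cpt
    return -(logf.sum() + (n - r) * logS)           # n - r censored units

def grid_mle(b_lo, b_hi, l_lo, l_hi, rounds=5, pts=25):
    """Crude derivative-free optimiser: repeatedly refined grid search."""
    for _ in range(rounds):
        bs, ls = np.linspace(b_lo, b_hi, pts), np.linspace(l_lo, l_hi, pts)
        _, b, l = min((neg_loglik((b, l)), b, l) for b in bs for l in ls)
        db, dl = (b_hi - b_lo) / pts, (l_hi - l_lo) / pts
        b_lo, b_hi, l_lo, l_hi = b - db, b + db, l - dl, l + dl
    return b, l

beta_hat, lam_hat = grid_mle(0.2, 6.0, 0.2, 10.0)
```

With 25% censoring the estimates land close to the true (2, 3), consistent with the near-unbiasedness the abstract reports; a production version would use a proper optimiser such as Newton's method.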
PubDate: Nov 2022
- Estimation of Nonparametric Path Fourier Series and Truncated Spline
Ensemble Models
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 Atiek Iriany and Adji Achmad Rinaldo Fernandes One method to ascertain whether there is a causal connection between exogenous and endogenous factors is path analysis. The linearity assumption shapes the model: if it holds, path analysis is parametric; if the form is non-linear or unknown and there is no knowledge of the data pattern, nonparametric path analysis is used. The goal of this study was to estimate the nonparametric path function using a combination of truncated spline and Fourier series methods. The findings demonstrate that the Fourier series and truncated spline can be employed in nonparametric path analysis only in cases where the linearity assumption is violated. The estimator of nonparametric regression-based path analysis was then obtained using the Ordinary Least Squares (OLS) approach, delivering an estimation result that is not unique because it makes use of a nonparametric approach. This paper can serve as reference material, especially for analysis in statistics, and it is hoped that the approach can be applied in various fields. Further research can develop this work with other models.
PubDate: Nov 2022
- Signal Modeling with IG Noise and Parameter Estimation Based on RJMCMC
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 Akhmad Fauzy, Suparman and Epha Diana Supandi Piecewise constant (PC) is a stochastic model that can be applied in various fields such as engineering and ecology. The stochastic model contains noise, and the accuracy of the stochastic model in representing a signal is influenced by the type of noise. This paper proposes inverse-gamma noise in the PC model and a procedure for estimating the model parameters. The model parameters are estimated using the Bayesian approach. Because the model parameters have a variable-dimension space, the Bayesian estimator cannot be determined analytically; it is therefore calculated using the reversible jump Markov Chain Monte Carlo (RJMCMC) algorithm. The performance of the RJMCMC algorithm is validated using synthetic data. The finding is a new PC model in which the noise has an inverse-gamma distribution, together with a parameter estimation procedure based on RJMCMC. The simulation study shows that the model parameter estimates generated by this algorithm are close to the true parameter values. This paper concludes that inverse-gamma noise can be used as an alternative noise in the PC model, and that RJMCMC is a valid algorithm that can estimate the parameters of the PC model with inverse-gamma noise. The novelty of this paper is the development of a new stochastic model and a procedure for estimating its parameters. In application, the findings have the potential to improve the fit of the stochastic model to the signal.
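The synthetic data side of such a study can be sketched by exploiting the fact that the reciprocal of a gamma variate is inverse-gamma distributed; the segment levels and noise parameters below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def piecewise_constant_ig(levels, seg_len, shape, scale):
    """Piecewise-constant signal with additive inverse-gamma noise.

    If G ~ Gamma(shape, rate=scale), then 1/G ~ Inverse-Gamma(shape, scale),
    with mean scale / (shape - 1) for shape > 1.
    """
    signal = np.repeat(levels, seg_len)                 # step function
    # numpy's gamma takes a scale parameter, so rate=scale means scale=1/scale
    noise = 1.0 / rng.gamma(shape, 1.0 / scale, size=signal.size)
    return signal + noise

y = piecewise_constant_ig([0.0, 5.0, 2.0], 2000, shape=4.0, scale=1.5)
```

Each segment mean is shifted by the noise mean 1.5 / (4 - 1) = 0.5, which an estimation procedure such as RJMCMC would have to account for.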
PubDate: Nov 2022
- Ruin Probability for Some Mixed Linear Exponential Family in Classical
Risk Process
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 Khanchit Chuarkham, Arthit Intarasit and Pakwan Riyapan This article presents the probability of ruin for the classical risk process in which the density function of the claims belongs to a mixed linear exponential family. This can be defined as , where , , is a positive integer with , , , and is the canonical parameter. The main results are as follows: the ordinary differential equation for the probability of ruin in the general case, obtained using the chain rule and mathematical induction, is given in Theorem 2.2; the ordinary differential equation for some mixed linear exponential family when , , , , , is demonstrated in Theorem 2.3; and an explicit solution for the probability of ruin when the mixed linear exponential family satisfies the conditions , , with , and is indicated in Theorem 2.4. Finally, we use MATLAB to generate numerical simulations of the probability of ruin in the risk process in which the number of claims is a Poisson process and the density function of the claims satisfies a mixed linear exponential family and a gamma distribution under the conditions of Theorem 2.4 with the parameters =1 and =0.2. The numerical results reveal that the relative frequency of ruin and the ruin probability satisfy the Lundberg inequality, which is a necessary condition for the ruin probability, and the absolute values of their differences are small, confirming that the main results are correct.
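A Monte Carlo check of the Lundberg inequality for the classical risk process with exponential claims (a simple member of the family considered; all parameter values below are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(42)

def ruin_frequency(u, theta, mu, lam, horizon, n_paths):
    """Monte Carlo ruin frequency for the classical risk process:
    Poisson(lam) claim arrivals, Exponential(mean mu) claim sizes and
    premium rate c = (1 + theta) * lam * mu (safety loading theta)."""
    c = (1.0 + theta) * lam * mu
    ruined = 0
    for _ in range(n_paths):
        t, surplus = 0.0, u
        while True:
            dt = rng.exponential(1.0 / lam)       # time to next claim
            t += dt
            if t > horizon:
                break                             # survived the horizon
            surplus += c * dt - rng.exponential(mu)  # premiums minus claim
            if surplus < 0.0:                     # checking at claim epochs
                ruined += 1                       # suffices: surplus only
                break                             # grows between claims
    return ruined / n_paths

psi_hat = ruin_frequency(u=5.0, theta=0.2, mu=1.0, lam=1.0,
                         horizon=200.0, n_paths=2000)
```

For exponential claims the exact value is psi(u) = exp(-theta*u/((1+theta)*mu))/(1+theta), about 0.36 here, and the estimate stays below the Lundberg bound exp(-R*u) with R = theta/((1+theta)*mu), about 0.43, matching the inequality the abstract verifies numerically.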
PubDate: Nov 2022
- Bipolar Soft Limit Points in Bipolar Soft Generalized Topological Spaces
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 Hind Y. Saleh, Baravan A. Asaad and Ramadhan A. Mohammed Soft set theory can be used as a mathematical tool for dealing with problems that contain uncertainty. A new mixed mathematical model, called the bipolar soft set, is created by merging soft sets with bipolarity, giving the concept of a binary model of grading. A bipolar soft set is characterized by two soft sets, one of which provides positive information and the other negative. Bipolar soft generalized topology is a generalization of bipolar soft topology. The importance of limit points in all branches of mathematics cannot be ignored; they form one of the most significant and fundamental concepts in topology, and on this basis the derived set concept is required to establish and develop certain properties. Accordingly, the limit point in bipolar soft generalized theory is defined. In this paper, we present the notion of bipolar soft generalized limit points and explain the relation between the bipolar soft generalized derived set and the bipolar soft generalized closure set. In addition, we discuss some structures of a bipolar soft generalized topological space such as the -interior point, -exterior point, -boundary point, -neighborhood point and basis on . Finally, we give comparisons among these concepts of bipolar soft generalized topological spaces () by using bipolar soft points (). Each concept introduced in this paper is illustrated with clear examples.
PubDate: Nov 2022
- Nonparametric REML-like Estimation in Linear Mixed Models with
Uncorrelated Homoscedastic Errors
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 E.-P. Ndong Nguéma and Betrand Fesuh Nono Restricted Maximum Likelihood (REML) is the most recommended approach for fitting a Linear Mixed Model (LMM) nowadays. Yet, like ML, REML suffers the drawback that it performs the fitting by assuming normality for both the random effects and the residual errors, a dubious assumption for many real data sets. There have been several attempts to justify the use of the REML likelihood equations outside of the Gaussian world, with varying degrees of success. Recently, a new fitting methodology, code-named 3S, was presented for LMMs with the only added assumption (beyond the basic ones) that the residual errors are uncorrelated and homoscedastic. Specifically, the 3S-A1 variant was designed and then shown, for Gaussian LMMs, to differ only slightly from ML estimation. In this article, using the same 3S framework, we develop another iterative nonparametric estimation methodology, code-named 3S-A1.RE, for the kind of LMMs just mentioned. We show that if the LMM is, indeed, Gaussian with i.i.d. residual errors, then the set of estimating equations defining any 3S-A1.RE iterative procedure is equivalent to the set of REML equations, while also including the nonnegativity constraints on all variance estimates and positive semi-definiteness of all covariance matrices. In numerical tests on simulated and real-world clustered and longitudinal data sets, our new methods proved highly competitive compared with traditional REML in the R statistical software.
PubDate: Nov 2022
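As a hedged illustration of the REML machinery the abstract above builds on: for the simplest LMM, a balanced random-intercept model, the REML equations have a closed-form maximizer given by the classical ANOVA estimators, truncated at zero to respect the nonnegativity constraint the abstract mentions. A minimal sketch on simulated data (the group counts and variances are made-up choices, and this is not the paper's 3S-A1.RE procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
g, m = 12, 6                                 # groups and observations per group (made up)
b = rng.normal(0.0, np.sqrt(2.0), g)         # true random intercepts, variance 2
y = 5.0 + np.repeat(b, m) + rng.normal(0.0, 1.0, g * m)

groups = y.reshape(g, m)
grand = y.mean()
# balanced one-way ANOVA mean squares
msb = m * ((groups.mean(axis=1) - grand) ** 2).sum() / (g - 1)
msw = ((groups - groups.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (m - 1))

# For this balanced model the REML equations have a closed-form solution:
# the ANOVA estimators, truncated at zero to keep variances nonnegative.
se2_hat = msw
sb2_hat = max((msb - msw) / m, 0.0)

def reml_criterion(sb2, se2):
    """Restricted log-likelihood (up to constants) of the random-intercept model."""
    n = g * m
    Z = np.kron(np.eye(g), np.ones((m, 1)))  # random-intercept design
    X = np.ones((n, 1))                      # fixed effects: intercept only
    V = sb2 * Z @ Z.T + se2 * np.eye(n)
    Vi = np.linalg.inv(V)
    XtViX = X.T @ Vi @ X
    beta = np.linalg.solve(XtViX, X.T @ Vi @ y)
    r = y - X @ beta
    return -0.5 * (np.linalg.slogdet(V)[1] + np.linalg.slogdet(XtViX)[1] + r @ Vi @ r)
```

The `reml_criterion` function spells out the restricted likelihood being maximized; the ANOVA estimates above should (and do) maximize it for this balanced design.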
- Enacting Alternating Least Squares Algorithm to Estimate Model Fit of SEM
Generalized Structured Component Analysis
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 Cylvia Nissa Steffani and Gunardi Structural Equation Modeling (SEM) is a statistical modeling technique that combines three methods, namely factor analysis, path analysis and regression analysis, to test a theoretical model in social science, psychology and management. Covariance-based SEM is a parametric SEM that must meet several parametric assumptions, such as multivariate normally distributed data, large sample sizes and independent observations; variance-based SEM, namely the Generalized Structured Component Analysis (GSCA) method, was developed to overcome these problems of covariance-based SEM. This study aims to implement the GSCA method on data for factors expected to affect the level of behavioral intention towards online food delivery services and to examine the significance of the mediating variable in the structural relationship. The results of hypothesis testing at the 95% confidence level showed that the quality of convenience motivation, prior online purchase experience, and attitude towards online food delivery services had a significant effect on behavioral intentions towards online food delivery services. The FIT value is 0.523, which indicates that the model is able to explain around 52.3% of the variation in the data. Furthermore, the hedonic motivation variable has a significant effect on convenience motivation. The post-usage usefulness and prior online purchase experience variables significantly affected attitudes towards online food delivery services. The proposed model using GSCA achieves a much better result (good fit) compared with the previous model using Confirmatory Factor Analysis (CFA), which had marginal fit.
PubDate: Nov 2022
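The alternating least squares (ALS) idea behind GSCA estimation, fixing one block of parameters and solving an ordinary least-squares problem for the other, then iterating, can be sketched in its simplest possible setting: a rank-one matrix factorization. This is only an illustration of the ALS principle, not the GSCA algorithm itself; the data and dimensions are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
u_true = rng.normal(size=20)
v_true = rng.normal(size=8)
M = np.outer(u_true, v_true) + 0.05 * rng.normal(size=(20, 8))  # low rank + noise

u = rng.normal(size=20)                    # random start
for _ in range(50):
    # with the other factor fixed, each update is an ordinary least-squares solve
    v = M.T @ u / (u @ u)
    u = M @ v / (v @ v)

residual = np.linalg.norm(M - np.outer(u, v))
```

Each half-step decreases the squared reconstruction error, which is why the alternation converges to a (local) least-squares fit.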
- Iterative Algorithms for Solving the Partial Eigenvalue Problem for
Symmetric Interval Matrices
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 Alimzhan A. Ibragimov and Dilafruz N. Khamroeva In this paper, we consider iterative methods for solving a partial eigenvalue problem for real symmetric interval matrices. Such matrices arise in modeling many technical problems where much of the data is subject to limited variation or uncertainty. In modeling most applied problems, when some parameter values fluctuate with a known amplitude, it is advisable to use interval methods. Our proposed algorithms are built on the power method and its modification, the so-called method of scalar products, for solving a partial eigenvalue problem of an interval symmetric matrix. These methods have not yet been studied in detail and are not justified for interval matrices. In the developed algorithms, boundary matrices are first determined by the Deif theorem, and then a partial eigenvalue problem is solved. We also study the convergence of the power method for the boundary matrices of a given interval symmetric matrix. The results of the computational experiment show that the interval eigenvalues obtained by the proposed algorithms are in good agreement with the results obtained by other researchers, and in some cases are even better. The numerical results are compared by the number of iterations and the width of the interval solution.
PubDate: Nov 2022
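The overall scheme described above, taking the boundary (endpoint) matrices of a symmetric interval matrix and running the power method on each to enclose the dominant eigenvalue, can be sketched as follows. The 2x2 interval matrix is a made-up example, and the claim that the endpoint matrices bound the interval eigenvalue relies on Deif-type conditions (here, entrywise nonnegativity):

```python
import numpy as np

def power_method(A, iters=200):
    """Dominant eigenvalue of a symmetric matrix via the power method."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x @ A @ x                         # Rayleigh quotient

# symmetric interval matrix [A_low, A_high] (made-up, entrywise nonnegative)
A_low = np.array([[4.0, 0.9],
                  [0.9, 3.0]])
A_high = np.array([[5.0, 1.1],
                   [1.1, 3.5]])

lam_low = power_method(A_low)                # lower endpoint of the dominant eigenvalue
lam_high = power_method(A_high)              # upper endpoint
```

The power method converges here because each endpoint matrix has a well-separated dominant eigenvalue; the Rayleigh quotient then returns it to machine precision.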
- Binomial-Geometric Mixture and Its Applications
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 Hussein Eledum and Alaa R. El-Alosey A mixture distribution is a combination of two or more probability distributions; it can be obtained from different distribution families or from the same family with different parameters. The underlying distributions may be discrete or continuous, so the resulting mixture probability function is correspondingly a mass or density function. In the last few years, there has been great interest in the problem of developing mixture distributions based on the binomial distribution. This paper uses the probability generating function method to develop a new two-parameter discrete distribution called the binomial-geometric (BG) distribution, a mixture of the binomial distribution in which the number of trials follows a geometric distribution. The quantile function, moments, moment generating function, Shannon entropy, order statistics, stress-strength reliability and simulation of random samples are some of the statistical highlights of the BG distribution that are explored. The model's parameters are estimated using the maximum likelihood method. To examine the accuracy of point estimates for the BG distribution parameters, a Monte Carlo simulation is performed under different scenarios. Finally, the BG distribution is fitted to two real lifetime count data sets from the medical field. The proposed BG distribution is overdispersed, right-skewed and can accommodate a constant hazard rate function. It is appropriate for modelling overdispersed right-skewed real-life count data sets and can be an alternative to the negative binomial and geometric distributions.
PubDate: Nov 2022
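The mixture construction can be made concrete by computing the BG mass function directly from its definition, summing the binomial pmf against geometric weights. The support convention for the geometric (starting at 0) and the truncation point are assumptions of this sketch, not taken from the paper:

```python
import math

def bg_pmf(x, p, q, nmax=500):
    """P(X = x) when X | N ~ Binomial(N, p) and N ~ Geometric(q) on {0, 1, ...};
    the mixture sum is truncated at nmax (an assumption of this sketch)."""
    total = 0.0
    for n in range(x, nmax + 1):
        pn = q * (1.0 - q) ** n                          # geometric weight at n
        total += pn * math.comb(n, x) * p ** x * (1.0 - p) ** (n - x)
    return total

probs = [bg_pmf(x, p=0.4, q=0.3) for x in range(80)]
```

Under this convention the mean is p(1 - q)/q by conditioning on N, which gives a quick sanity check on the truncated sum.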
- On the Generalized Quadratic-Quartic Cauchy Functional Equation and its
Stability over Non-Archimedean Normed Space
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 A. Ramachandran and S. Sangeetha Functional equations play a very important and interesting role in mathematics; they involve simple algebraic manipulations through which one can arrive at interesting solutions. The theory of functional equations is also used in the development of other areas such as analysis, algebra and geometry, and its methods and techniques are applied to solve problems in information theory, finance, geometry, wireless sensor networks and elsewhere. In recent decades, various types of stability of functional equations, such as Hyers-Ulam stability (HUS), Hyers-Ulam-Rassias stability (HURS) and generalized HUS, have been discussed by many authors for different types of functional equations, including mixed types, in various spaces. The stability problem for different functional equations has been widely studied, and many interesting results have been proved in the classical (Archimedean) case. In recent years, analogous stability results for these functional equations were investigated in non-Archimedean spaces. The aim of this study is to investigate the HUS of a mixed-type general quadratic-quartic Cauchy functional equation in non-Archimedean normed space. In this article, we prove the generalized HUS for the quadratic-quartic Cauchy functional equation over non-Archimedean normed space.
PubDate: Nov 2022
- Step, Ramp, Delta, and Differentiable Activation Functions Obtained Using
Percolation Equations
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 David S. McLachlan and Godfrey Sauti This paper presents two new analytical equations, the Two Exponent Phenomenological Percolation Equation (TEPPE) and the Single Exponent Phenomenological Percolation Equation (SEPPE) which, for the proper choice of parameters, approximate the widely used Heaviside Step Function. The plots of the equations presented in the figures in this paper show some, but by no means all, of the step, ramp, delta, and differentiable activation functions that can be obtained using the percolation equations. By adjusting the parameters these equations can give linear, concave, and convex ramp functions, which are basic signals in systems used in engineering and management. The equations are also Analytic Activation Functions, the form or nature of which can be varied by changing the parameters. Differentiating these functions gives delta functions, the height and width of which depend on the parameters used. The TEPPE and SEPPE and their derivatives are presented in terms of the conductivity () owing to their original use in describing the electrical properties of binary composites, but are applicable to other percolative phenomena. The plots in the figures presented are used to show the response (composite conductivity) for the parameters (higher conductivity component of the composite), (lower conductivity component of the composite) and , the volume fraction of the higher conductivity component in the composite. The additional parameters are the critical volume fraction, , which determines the position of the step or delta function on the axis and one or two exponents , and .
PubDate: Nov 2022
- Modified Mathematical Models in Biology by the Means of Caputo Derivative
of a Function with Respect to Another Exponential Function
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 Jamal Salah, Maryam Al Hashmi, Hameed Ur Rehman and Khaled Al Mashrafi In this article, the authors consider some well-known mathematical models of ordinary differential equations applied in biology, such as bacterial growth, the natural FC solution models for vegetables, the biological phospholipids pathway, glucose absorption by the body and the spread of epidemics. The ordinary differential equations of each model are fractionalized by means of the Caputo derivative of a function with respect to a certain exponential function. In each model, we embed the concept of fractionalization associated with a chosen exponential function in order to modify the given model. Consequently, various propositions are evoked by hypothetically allowing some modifications in several mathematical models of biology. The results are further visualized by providing graphs of the Mittag-Leffler function for various parameters. The analysis of the graphs explores the behavior of the solution for every modified model. In this study, the solutions of the modified models are all of Mittag-Leffler form, while all the original models are solved by means of the exponential function. Slight changes in the behavior of the solutions are due to the assumptions and the change of parameters.
PubDate: Nov 2022
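The Mittag-Leffler function that replaces the exponential in the fractionalized solutions can be evaluated by its defining power series, E_alpha(z) = sum over k of z^k / Gamma(alpha k + 1). A direct summation (adequate for moderate arguments only) recovers the exponential at alpha = 1:

```python
import math

def mittag_leffler(alpha, z, terms=50):
    """One-parameter Mittag-Leffler function E_alpha(z) by direct series
    summation. Keep alpha * terms below ~170 so math.gamma does not overflow;
    adequate for moderate |z| only, not a production algorithm."""
    return sum(z ** k / math.gamma(alpha * k + 1.0) for k in range(terms))
```

Two classical identities make handy checks: E_1(z) = e^z and E_2(z) = cosh(sqrt(z)) for z > 0.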
- Bootstrap-t Confidence Interval on Local Polynomial Regression Prediction
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 Abil Mansyur and Elmanani Simamora In local polynomial regression, estimating the prediction confidence interval using standard theory gives coverage probability close to the exact coverage probability; however, if the normality assumption is not met, the bootstrap method still makes estimation possible. The bootstrap works by resampling, where the sample data become a population and there is no need to know whether the distribution of the sample data is normal. Indiscriminate selection of smoothing parameters allows scatterplot results from local polynomial regressions to be rough and can even lead to misleading statistical conclusions, so the optimal smoothing parameters must be considered to obtain local polynomial regression predictions that are neither overfitting nor underfitting. We offer two new algorithms based on the nested bootstrap resampling method to determine the bootstrap-t confidence interval in predicting local polynomial regression. Both algorithms consider the search for optimal smoothing parameters. The first algorithm resamples pairs and residuals, and the second resamples residuals with residuals. The first algorithm provides a reasonable scatterplot and coverage probability on relatively large sample data. In contrast, the second algorithm is more powerful for every data size, including relatively small samples. The mean of the bootstrap-t confidence interval coverage probability shows that the second algorithm for second-degree local polynomial regression is better than the other three. Moreover, the larger the sample size, the closer the average coverage probability of the two algorithms is to the nominal coverage probability.
PubDate: Nov 2022
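The bootstrap-t (studentized bootstrap) principle used by both algorithms can be sketched in its simplest setting, a confidence interval for a mean: each resample is studentized by its own standard error, and the empirical quantiles of the studentized statistics replace the normal quantiles. This sketch omits the local polynomial fit and the nested smoothing-parameter search described in the abstract; the data are simulated:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=2.0, size=40)      # a skewed sample (made up)
n, B = len(x), 2000
xbar, se = x.mean(), x.std(ddof=1) / np.sqrt(n)

# bootstrap-t: studentize each resample with its own standard error
t_star = np.empty(B)
for i in range(B):
    xb = rng.choice(x, size=n, replace=True)
    t_star[i] = (xb.mean() - xbar) / (xb.std(ddof=1) / np.sqrt(n))

lo, hi = np.quantile(t_star, [0.025, 0.975])
ci = (xbar - hi * se, xbar - lo * se)        # note the quantile reversal
```

The quantile reversal (upper quantile sets the lower endpoint) is what lets the interval adapt to the skewness that a normal-theory interval would ignore.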
- Parameter Estimation for Weibull Burr Type X Model with Right Censored
Data
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 Amna R. Ashour, Noor A. Ibrahim, Mundher A. Khaleel and Pelumi E. Oguntunde Past studies have considered generalizing statistical distributions, aiming to make them more flexible and suitable for describing real-world phenomena. In this study, we explore the Weibull Burr Type X distribution, which extends the Burr Type X distribution using the Weibull generator. In particular, the performance of the maximum likelihood estimators of its parameters under right censored data was explored and compared. To assess the performance of the estimators with respect to bias and root mean square error, we used a Monte Carlo simulation study with varying sample sizes and censoring percentages. We illustrate the usefulness and potential of the Weibull Burr Type X distribution using a right censored data set, and we compare the fitness of this model to its sub-models using a real-world data set. The results showed that the Weibull Burr Type X distribution provides a better fit than the other competing models, indicating that the distribution is flexible and competitive. The Weibull Burr Type X distribution exhibits unimodal and decreasing shapes. The extra parameter in the distribution varies the model's tail weight and introduces skewness. We introduce this model as an alternative to other existing models for modelling right censored data in various research fields and areas of study.
PubDate: Nov 2022
- Qualitative Analysis of Food-Web Model through Diffusion-Driven
Instability
Abstract: Publication date: Nov 2022
Source:Mathematics and Statistics Volume 10 Number 6 Chetan Swarup Many food webs exist in the ecosystem, and their survival depends directly on the growth rate of the primary prey, which balances the entire ecosystem. The spatiotemporal dynamics of a three-species food web are proposed and analyzed in this paper, where the intermediate predator's predation term follows Holling Type IV and the top predator's predation term follows Holling Type II. To begin, we examine the system's stability using linear stability analysis: we first obtain the set of equilibrium solutions and then use the Jacobian method to investigate the system's stability at a biologically feasible equilibrium point. We investigate random movement of the species in the presence of diffusion, establish conditions for system stability, and derive the Turing instability condition. Following that, the Turing instability condition for the spatial food web system is calculated. Finally, numerical simulations are used to validate the findings. We discovered several intriguing spatial patterns (spot, stripe, and mixed patterns) that help us understand the dynamics of real-world food webs. As a result, the Turing instability analysis used in the complex food web system is especially relevant experimentally, because the associated consequences can be researched and applied to a wide range of mathematical, ecological, and biological models.
PubDate: Nov 2022
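The diffusion-driven (Turing) instability test applied above, an equilibrium that is stable for the reaction kinetics alone but destabilized by diffusion at some spatial wavenumber, can be sketched for a generic two-species system. The Jacobian and diffusion coefficients below are made-up activator-inhibitor numbers, not the paper's food-web model:

```python
import numpy as np

# Jacobian of (assumed) activator-inhibitor kinetics at the equilibrium
J = np.array([[0.5, -1.0],
              [2.0, -1.5]])
D = np.diag([0.01, 1.0])     # diffusion coefficients: inhibitor diffuses faster

# stable without diffusion: all eigenvalues of J in the left half plane
stable_without_diffusion = bool(np.all(np.linalg.eigvals(J).real < 0))

# Turing instability: some wavenumber k makes J - k^2 D unstable
ks = np.linspace(0.01, 10.0, 500)
growth = [np.max(np.linalg.eigvals(J - k ** 2 * D).real) for k in ks]
turing_unstable = stable_without_diffusion and max(growth) > 0.0
```

The wavenumbers where the growth rate is positive indicate the spatial scales at which patterns such as spots and stripes can emerge.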
- Form Invariance - An Alternative Answer to the Measurement Problem of Item
Response Theory
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Henrik Bernshausen, Christoph Fuhrmann, Hanns-Ludwig Harney, Klaus Harney and Andreas Muller The measurement problem of item response theory is the question of how to assign ability parameters to persons and difficulty parameters to items such that the comparison of abilities is independent of the specific set of difficulties. Correspondingly, the comparison of difficulties should be independent of the specific set of abilities. These requirements are called specific objectivity. They are the basis of the Rasch model, which measures abilities and difficulties on one and the same scale. The present paper asks the different question of how to assign ability parameters to persons in a way that the comparison of abilities is independent of the position on the scale where the measurement takes place. Correspondingly, the comparison of difficulties should also be independent of the position on the scale where the calibration of difficulties takes place. Again, both are measured on one and the same scale. These requirements are called form invariance. They lead to an item response function (IRF) different from that of the Rasch model. It integrates information beyond the mere score dependence and also shows specific objectivity (in a generalized mathematical form). The properties of the form-invariant item response function are compared to those of the Rasch model, and related to previous work by Warm, Jaynes and Samejima. Moreover, several numerical examples of its use are provided.
PubDate: May 2022
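For contrast with the form-invariant IRF, the Rasch model's IRF and its specific-objectivity property are easy to state in code: the log-odds difference between two abilities does not depend on which item is used for the comparison. The ability and difficulty values below are arbitrary:

```python
import math

def rasch_irf(theta, b):
    """Rasch item response function: P(correct) is logistic in theta - b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def logit(p):
    return math.log(p / (1.0 - p))

# specific objectivity: the comparison of two abilities is item-independent
diff_easy = logit(rasch_irf(1.0, -0.5)) - logit(rasch_irf(0.2, -0.5))
diff_hard = logit(rasch_irf(1.0, 2.0)) - logit(rasch_irf(0.2, 2.0))
```

Both differences equal theta_1 - theta_2 = 0.8 exactly, whatever the item difficulty; this is the algebraic core of specific objectivity.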
- The -prime Radicals in Posets
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 J. Catherine Grace John and B. Elavarasan A relation is a mathematical tool for describing set relationships. Relationships are common in databases and scheduling applications, and in science and engineering they support better human decision-making. To make these choices, we must first understand human expectations, the outcomes of various options, and the degree of confidence. With all of these data, partial orders will be generated. In several fields of engineering and computer science, partial order and lattice theory are now widely used. To mention a few, they are used in cloud computing (vector clocks, global predicate detection), concurrency theory (pomsets, occurrence nets), programming language semantics (fixed-point semantics), and data mining (concept analysis). Other theoretical disciplines benefit from them as well, such as combinatorics, number theory, and group theory. Partially ordered sets emerge naturally when dealing with multidimensional systems of qualitative ordinal variables in social science, especially in ranking, prioritising, and assessment problems. As an alternative to standard techniques, partial order theory and partially ordered sets can be used to generate composite indicators for evaluating well-being, quality of life, and multidimensional poverty. They can be applied in multi-criteria analysis or for decision-making purposes in the study of individual and social preferences, including in social choice theory. They are also valuable in social network analysis, where they may be utilized to apply mathematics to explore network topologies and dynamics. The Hasse diagram method, for example, produces a partial order with multiple incomparabilities (lack of order) between pairs of items. This is a common problem in ranking studies, and it can often be avoided by combining object attributes that lead to a complete order.
However, such a mix introduces subjectivity and bias into the ranking process. This work discusses the notion of a -prime radical of a partially ordered set with respect to an ideal. In posets, we investigate the concept of -primary ideals and characterise -primary ideals in relation to -prime radicals. In addition, an ideal's -primary decomposition is constructed.
PubDate: May 2022
- Simulation-Based Assessment of the Effectiveness of Tests for Stationarity
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Vasile-Alexandru Suchar and Luis Gustavo Nardin Non-stationarity potentially comes from many sources, and these impact the analysis of a wide range of systems in various fields. There is a large set of statistical tests for checking specific departures from stationarity. This study uses Monte Carlo simulations over artificially generated time series data to assess the effectiveness of 16 statistical tests in detecting the real state of a wide variety of time series (i.e., stationary or non-stationary) and in identifying their source of non-stationarity, if applicable. Our results show that these tests have low statistical power outside their scope of operation. Our results also corroborate previous studies showing that there are effective individual statistical tests to detect stationary time series, but no effective individual tests for detecting non-stationary time series. For example, Dickey-Fuller (DF) family tests are effective in detecting stationary time series or non-stationary time series with a positive unit root, but fail to detect a negative unit root as well as trends and breaks in the mean, variance, and autocorrelation. Stationarity and change point detection tests usually misclassify stationary time series as non-stationary. The Breusch-Godfrey (BG) serial correlation test, the ARCH homoscedasticity test, and the structural change (SC) tests can help to identify the source of non-stationarity to some extent. This outcome reinforces the current practice of running several tests to determine the real state of a time series, thus highlighting the importance of selecting complementary statistical tests to correctly identify the source of non-stationarity.
PubDate: May 2022
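The simulation design described above (generate series whose true state is known, apply a test many times, and record the rejection rate) can be sketched for a Dickey-Fuller-type test. The regression and the asymptotic 5% critical value (-2.86, for the version with a constant) follow the standard DF setup; the sample size and replication counts are arbitrary choices of this sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

def df_tstat(y):
    """Dickey-Fuller t-statistic: regress diff(y) on a constant and lagged y."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    beta = np.linalg.lstsq(X, dy, rcond=None)[0]
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

def rejection_rate(phi, n=200, reps=400, crit=-2.86):   # -2.86: asymptotic 5% CV
    count = 0
    for _ in range(reps):
        e = rng.normal(size=n)
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = phi * y[t - 1] + e[t]
        count += df_tstat(y) < crit
    return count / reps

power_stationary = rejection_rate(0.5)       # true state: stationary AR(1)
size_random_walk = rejection_rate(1.0)       # true state: unit root
```

Comparing the two rejection rates is exactly the power-versus-size contrast the study reports for the DF family.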
- Statistical Inference of Modified Kies Exponential Distribution Using
Censored Data
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Fathy H. Riad This paper deals with obtaining interval and point estimates for the Modified Kies exponential distribution in the case of progressive first failure (PFF) censored data. It uses two approaches, classical and non-classical methods of estimation, including the highest posterior density (HPD). As the classical approach, we obtained the logarithm of the likelihood function and the maximum likelihood estimates of the parameters. We calculated the confidence intervals for the parameters and the bootstrap confidence intervals. We employed the posterior distribution and Bayesian estimation (BE) under a symmetric loss function, computed via MCMC using the Metropolis-Hastings (M-H) algorithm. Some results based on simulated data are presented to illustrate the estimation methods. We used various censoring schemes and various sample sizes to determine whether the sample size affects the estimation measures, and different confidence intervals to determine the best and shortest intervals. The major findings of the paper are summarized in the conclusion section.
PubDate: May 2022
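The Metropolis-Hastings machinery used for the Bayesian estimates can be sketched on a toy conjugate problem (exponential data with a gamma prior on the rate), where the exact posterior is known and can be used to check the sampler. The prior hyperparameters, proposal scale, and chain length are arbitrary choices of this sketch, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(scale=0.5, size=50)      # data with true rate 2.0
n, s = len(x), x.sum()
a, b = 1.0, 1.0                              # Gamma(a, b) prior on the rate

def log_post(lam):
    # log posterior kernel: conjugacy gives Gamma(a + n, b + s)
    return (a + n - 1.0) * np.log(lam) - (b + s) * lam

chain, lam = [], 1.0
for _ in range(20000):
    prop = lam * np.exp(0.3 * rng.normal())  # random-walk proposal on the log scale
    # the log-scale walk is asymmetric in lam; the Jacobian adds log(prop/lam)
    if np.log(rng.uniform()) < log_post(prop) - log_post(lam) + np.log(prop / lam):
        lam = prop
    chain.append(lam)

post = np.array(chain[5000:])                # discard burn-in
ci = np.quantile(post, [0.025, 0.975])       # 95% equal-tailed credible interval
```

Because the posterior here is Gamma(a + n, b + s), the chain's mean can be checked against the analytic posterior mean (a + n)/(b + s).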
- A Bounded Maximal Function Operator and Its Acting on Functions
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Raghad S. Shamsah With a novel generation operator known as the spherical scaling wavelet projection operator, this study proposes new strategies for achieving the almost-everywhere convergence of scaling wavelet expansions of functions under generic hypotheses. The hypotheses of the results are based on three types of conditions: the function space of f, the kind of wavelet functions (spherical), and wavelet conditions. The results show that, under the assumption that the scaling wavelet function of a given multiresolution analysis is a spherical wavelet with 0-regularity, the almost-everywhere convergence of the expansions is achieved under a new kind of partial sums operator. We examine some properties of spherical scaling wavelet functions, such as rapidity of decrease and boundedness. After estimating the bounds of spherical scaling wavelet expansions, we examine the boundedness of this operator. The results are established on the almost-everywhere convergence of wavelet expansions of space functions. Several techniques were followed to achieve this convergence; for example, the boundedness of the spherical Hardy-Littlewood maximal operator is obtained using the maximal inequality and Riesz basis function conditions. The general wavelet expansions' convergence was demonstrated using the spherical scaling wavelet function and several of its fundamental features. In fact, the partial sums in these expansions are dominated in magnitude by the maximal function operator, which may be applied to establish convergence. The convergence here may be obtained by assuming minimal regularity for a spherical scaling wavelet function. The focus of this research is on recent advances in convergence theory for the partial sums operators of spherical wavelet expansions.
The employment of scaling wavelet basis functions defined on is regarded as a key to solving convergence problems that occur inside spaces of dimension .
PubDate: May 2022
- On Subclasses of Uniformly Convex Spirallike Functions Associated with
Poisson Distribution Series
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 K. Marimuthu and J. Uma Geometric function theory is one of the major areas of mathematics, highlighting the significance of geometric ideas and problems in complex analysis. Recently, univalent functions have been given particular attention and are used to construct linear operators that preserve the class of univalent functions and some of its subclasses. Similar attention has been given to distribution series. Many authors have studied certain subclasses of univalent and bi-univalent functions connected with distribution series such as the Pascal, binomial, Poisson, Mittag-Leffler-type Poisson, geometric, exponential, Borel, generalized and generalized discrete probability distributions, to name a few. Some important results on uniformly convex spirallike functions (UCF) and uniformly spirallike functions (USF) related to such distribution series are also of interest. The main aim of the present investigation is to obtain necessary and sufficient conditions for the Poisson distribution series to belong to the classes and . The inclusion properties associated with the Poisson distribution series are also taken up for study in this article, and proofs of some inequalities on integral functions connected to the Poisson distribution series are discussed. Further, some corollaries and results that follow from the theorems are analysed.
PubDate: May 2022
- Comparison of Non-Preemptive Priority Queuing Performance Using Fuzzy
Queuing Model and Intuitionistic Fuzzy Queuing Model with Different
Service Rates
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 S. Aarthi and M. Shanmugasundari This study provides non-preemptive priority fuzzy and intuitionistic fuzzy queuing models with unequal service rates. Non-preemptive priority queues are appropriate for performance evaluations of industrial, supply chain, stock management, workstation, data exchange, and telecommunications equipment. The parameters of non-preemptive priority queues may be fuzzy due to unpredictable causes. The primary goal of this research is to compare the performance of a non-preemptive queuing model under fuzzy queuing theory and under intuitionistic fuzzy queuing theory. The performance metrics in the fuzzy queuing theory model are given as a range of values, whereas the intuitionistic fuzzy queuing theory model offers a multitude of values. Both the arrival rate and the service rate are triangular and intuitionistic triangular fuzzy numbers in this case. An analysis is provided to identify the quality metrics using a developed methodology that works with the fuzzy values directly, without converting them into crisp values; to demonstrate the viability of the suggested method, two numerical problems are solved.
PubDate: May 2022
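A standard way to compute fuzzy queueing measures is the alpha-cut approach: at each alpha level, the triangular fuzzy arrival and service rates become intervals, and the monotonicity of the crisp formula gives the interval of the measure. A sketch for the expected queue length of a plain M/M/1 queue (fuzzy part only; the intuitionistic extension and the priority discipline are not shown, and the numbers are made up):

```python
def lq(lam, mu):
    """Expected queue length Lq of an M/M/1 queue (requires lam < mu)."""
    return lam ** 2 / (mu * (mu - lam))

def lq_alpha_cut(lam_tfn, mu_tfn, alpha):
    """Interval of Lq at level alpha for triangular fuzzy lam and mu,
    using the monotonicity of Lq (increasing in lam, decreasing in mu)."""
    l1, l2, l3 = lam_tfn
    m1, m2, m3 = mu_tfn
    lam_lo, lam_hi = l1 + alpha * (l2 - l1), l3 - alpha * (l3 - l2)
    mu_lo, mu_hi = m1 + alpha * (m2 - m1), m3 - alpha * (m3 - m2)
    return lq(lam_lo, mu_hi), lq(lam_hi, mu_lo)

interval_0 = lq_alpha_cut((3, 4, 5), (8, 10, 12), alpha=0.0)   # widest interval
crisp = lq_alpha_cut((3, 4, 5), (8, 10, 12), alpha=1.0)        # collapses to a point
```

At alpha = 1 the cut collapses to the modal (crisp) values, illustrating how the fuzzy model returns a range of metric values rather than a single number.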
- Exact Run Length Computation on EWMA Control Chart for Stationary Moving
Average Process with Exogenous Variables
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Wannaphon Suriyakat and Kanita Petcharat The exponentially weighted moving average (EWMA) control chart is a popular tool used to monitor and identify slight unnatural variations in manufacturing, industrial, and service processes. In general, control charts operate under the assumption of normally distributed observations of the quality characteristic of interest, but this assumption is not easy to maintain in practice. In such situations, the process data are correlated, such as stock prices in economics or air pollution data in environmental studies. The characteristics and performance of a control chart are measured by the average run length (ARL). In this article, we present a new explicit formula of the ARL for the EWMA control chart based on the MAX(q,r) process. The proposed explicit formula of the ARL for the MAX(q,r) process is proved using the Fredholm integral equation technique. Moreover, ARL values are also assessed using the numerical integral equation method based on the Gaussian, midpoint, and trapezoidal rules. Banach's fixed point theorem guarantees the existence and uniqueness of the solution. Furthermore, the accuracy of the proposed explicit formula is assessed by the absolute percentage relative error compared with the numerical integral equation method. The results show that the explicit formula's ARL values are similar to those obtained using the numerical integral equation method; the absolute percentage relative errors are less than 0.0001 percent. The essential conclusion is that the explicit formula outperforms the numerical method in computational time. Consequently, the proposed explicit formula and the numerical integral equation are alternative approaches for computing ARL values of the EWMA control chart. They can be applied in various fields, including economics, the environment, biology, and engineering.
PubDate: May 2022
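The numerical integral equation approach the abstract benchmarks against can be sketched for the simplest case: i.i.d. N(0,1) observations, where the in-control ARL L(u) solves the Fredholm equation L(u) = 1 + (1/lam) * integral over (-h, h) of L(z) phi((z - (1 - lam) u) / lam) dz, discretized by Gauss-Legendre quadrature (the Nystrom method). The MAX(q,r) structure of the paper is not included, and lam and h are illustrative values:

```python
import numpy as np

def ewma_arl(lam, h, m=60):
    """In-control ARL of a two-sided EWMA chart (started at 0) for i.i.d. N(0,1)
    data, via the Nystrom (Gauss-Legendre) solution of the Fredholm equation."""
    nodes, weights = np.polynomial.legendre.leggauss(m)
    z, w = h * nodes, h * weights            # rescale quadrature to (-h, h)
    phi = lambda t: np.exp(-0.5 * t * t) / np.sqrt(2.0 * np.pi)
    # K[i, j]: weighted density of moving from state z_i to node z_j in one step
    K = w * phi((z[None, :] - (1.0 - lam) * z[:, None]) / lam) / lam
    L = np.linalg.solve(np.eye(m) - K, np.ones(m))
    # one more application of the integral equation at the start value u = 0
    return 1.0 + np.sum(w * phi(z / lam) / lam * L)

arl_tight = ewma_arl(0.1, 0.4)
arl_wide = ewma_arl(0.1, 0.5)
```

Widening the control limit h should increase the in-control ARL, a useful monotonicity check on the discretization.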
- Likelihood and Bayesian Inference in the Lomax Distribution under
Progressive Censoring
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 A. Baklizi, A. Saadati Nik and A. Asgharzadeh The Lomax distribution has been used as a statistical model in several fields, especially for business failure data and reliability engineering. Accurate parameter estimation is very important because it is the basis for most inferences from this model. In this paper, we study this problem in detail. We developed several point and interval estimators for the parameters of this model, assuming the data are type II progressively censored. Specifically, we derive the maximum likelihood estimator and the associated Wald interval. Bayesian point and interval estimators were also considered; since they cannot be obtained in closed form, we used a Markov chain Monte Carlo technique, the so-called Metropolis-Hastings algorithm, to obtain approximate Bayes estimators and credible intervals. Lindley's asymptotic approximation to the Bayes estimator is obtained for the present problem. Moreover, we obtained the least squares and weighted least squares estimators for the parameters of the Lomax model. Simulation techniques were used to investigate and compare the performance of the various estimators and intervals developed in this paper. We found that Lindley's approximation to the Bayes estimator has the least mean squared error among all estimators, and that the Bayes interval obtained using Metropolis-Hastings has better overall performance than the Wald interval in terms of coverage probabilities and expected interval lengths. Therefore, Bayesian techniques are recommended for inference in this model. An example of real data on total rain volume is given to illustrate the application of the methods developed in this paper.
PubDate: May 2022
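As a sketch of the underlying model (inverse-transform sampling from the Lomax survival function; the parameter values are arbitrary, and the paper's progressive censoring and estimators are not reproduced):

```python
import random

# Lomax model sketch: survival S(x) = (1 + x/lam)**(-alpha), sampled by
# inverse transform. Parameters are illustrative, not from the paper.
def lomax_survival(x, alpha, lam):
    return (1 + x / lam) ** (-alpha)

def lomax_sample(alpha, lam, n, rng):
    # If U ~ Uniform(0, 1), then lam * ((1 - U)**(-1/alpha) - 1) is Lomax.
    return [lam * ((1 - rng.random()) ** (-1 / alpha) - 1) for _ in range(n)]

rng = random.Random(0)
xs = lomax_sample(alpha=3.0, lam=2.0, n=20000, rng=rng)
emp = sum(x > 1.0 for x in xs) / len(xs)
# Empirical tail frequency should sit near the theoretical survival value.
print(round(lomax_survival(1.0, 3.0, 2.0), 4), round(emp, 4))
```

The closeness of the empirical and theoretical survival values is a quick sanity check on the sampler.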
- A Descent Conjugate Gradient Method With Global Converges Properties for
Non-Linear Optimization
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Salah Gazi Shareef Iterative methods such as the conjugate gradient method are well-known approaches for solving non-linear unconstrained minimization problems, partly because of their capacity to handle large-scale unconstrained optimization problems rapidly, and partly because of their simple algebraic representation and implementation in computer programs. The conjugate gradient method has wide applications in many fields, such as machine learning and neural networks. Fletcher and Reeves [1] extended the approach to nonlinear problems in 1964; theirs is considered the first nonlinear conjugate gradient technique, and many other conjugate gradient methods have been proposed since. In this work, we propose a new conjugate gradient coefficient for finding the minimum of non-linear unconstrained optimization problems, based on the Hestenes-Stiefel parameter. Section one contains the derivation of the new method. In section two, we verify the descent and sufficient descent conditions. In section three, we study the global convergence of the proposed method. In the fourth section, we give numerical results on some known test functions and compare the new method with the Hestenes-Stiefel method to demonstrate its effectiveness. Finally, we give conclusions.
PubDate: May 2022
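A minimal sketch of a nonlinear conjugate gradient iteration with the classical Hestenes-Stiefel coefficient (the baseline parameter the paper builds on, not the author's new coefficient), applied to a small quadratic with exact line search:

```python
# CG with the Hestenes-Stiefel beta on f(x) = 0.5 x^T A x - b^T x.
# Illustrative baseline only; the paper proposes a different coefficient.
def cg_hestenes_stiefel(A, b, x, iters=10):
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    g = [gi - bi for gi, bi in zip(mv(A, x), b)]      # gradient A x - b
    d = [-gi for gi in g]
    for _ in range(iters):
        Ad = mv(A, d)
        alpha = -dot(g, d) / dot(d, Ad)               # exact line search
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = [gi + alpha * adi for gi, adi in zip(g, Ad)]
        if dot(g_new, g_new) < 1e-16:
            return x
        y = [gn - gi for gn, gi in zip(g_new, g)]
        beta = dot(g_new, y) / dot(d, y)              # Hestenes-Stiefel
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
print(cg_hestenes_stiefel(A, b, [0.0, 0.0]))  # approx [1/11, 7/11]
```

On a quadratic, CG with exact line search terminates in at most n steps, which is why this 2x2 example converges in two iterations.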
- Simulation Study of Bayesian Hurdle Poisson Regression on the Number of
Deaths from Chronic Filariasis in Indonesia
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Nur Kamilah Sa'diyah Ani Budi Astuti and Maria Bernadetha T. Mitakda Poisson regression is a standard model for explaining the relationship between predictors and a count response. When the data contain many zero values, the resulting overdispersion can be handled with the hurdle Poisson model. The Bayesian method estimates parameters well even for small sample sizes, regardless of the distribution. Because the response variable of the original data does not follow a Poisson distribution, the parameters are estimated by the Bayesian method. The performance of Bayesian hurdle Poisson regression is examined on simulated data over various sample sizes and overdispersion levels, generated from the parameters of the original data. The results show that the Bayesian hurdle Poisson regression model proposed in this study is suitable for large sample sizes and for varying levels of overdispersion, because a normal distribution is used as the prior. Even though the response variable of the simulation data is generated with a Poisson distribution, it still does not follow a Poisson distribution, in accordance with the original data. The parameters estimated from the simulation data are similar to those estimated from the original data (for both the MLE and the Bayesian hurdle Poisson regression estimators), indicating that the simulation scenario is appropriate.
PubDate: May 2022
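A minimal sketch of the hurdle Poisson probability mass function, the likelihood building block of the model (the paper's Bayesian estimation is not reproduced, and the parameter values below are arbitrary):

```python
import math

# Hurdle Poisson pmf sketch: pi0 is the probability of a zero count;
# positive counts follow a zero-truncated Poisson(lam).
def hurdle_poisson_pmf(k, pi0, lam):
    if k == 0:
        return pi0
    trunc = math.exp(-lam) * lam ** k / math.factorial(k)
    return (1 - pi0) * trunc / (1 - math.exp(-lam))

# The pmf sums to 1 over the support (tail beyond k = 60 is negligible):
total = sum(hurdle_poisson_pmf(k, 0.6, 2.0) for k in range(60))
print(round(total, 6))
```

Separating the zero process from the positive counts is what lets the hurdle model absorb the excess zeros that a plain Poisson cannot.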
- Weibull Distribution as the Choice Model for State-Specific Failure Rates
in HIV/AIDS Progression
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Nahashon Mwirigi Stanley Sewe Mary Wainaina and Richard Simwa This study considered the problem of selecting the best single model for state-specific failure rates in HIV/AIDS progression for patients on antiretroviral therapy, with age and gender as risk factors, using the exponential, two-parameter Weibull, and three-parameter Weibull distributions. CD4 count changes between any two consecutive visits, the mean waiting time (μ), and the transition rates (λ) for remaining in the same state or moving to a better or worse state were analyzed. Model selection criteria, namely the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the log-likelihood (LL), were applied in each disease state, with parameters obtained by the Maximum Likelihood Estimation (MLE) method. Plots of state-specific transition rates (λ) showed constant, increasing, decreasing, and unimodal trends. The three-parameter Weibull distribution was best for male patients and for patients aged 40-69 years transiting through states 1-2, 3-4, and 4-5, and 1-2, 3-4, and 5-6, respectively, and for male patients, female patients, and patients aged 40-69 remaining in the same state. The two-parameter Weibull distribution was best for female patients and for patients aged 20-39 years transiting through states 1-2, 2-3, 4-5, and 1-2, 2-3, 3-4, respectively. The exponential distribution proved inferior to the other two distributions.
PubDate: May 2022
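As a hedged illustration of the selection step (an AIC comparison on synthetic failure times, with the Weibull likelihood evaluated at the generating parameters rather than full MLEs for brevity; this is not the paper's HIV/AIDS data):

```python
import math, random

# AIC-based choice between exponential and two-parameter Weibull models.
def weibull_loglik(data, shape, scale):
    return sum(math.log(shape / scale) + (shape - 1) * math.log(x / scale)
               - (x / scale) ** shape for x in data)

def aic(loglik, n_params):
    return 2 * n_params - 2 * loglik

rng = random.Random(42)
data = [rng.weibullvariate(1.5, 2.0) for _ in range(500)]  # increasing hazard

# Exponential = Weibull with shape 1; evaluate at its MLE scale (the mean).
exp_aic = aic(weibull_loglik(data, 1.0, sum(data) / len(data)), 1)
wei_aic = aic(weibull_loglik(data, 2.0, 1.5), 2)  # generating parameters
print(wei_aic < exp_aic)  # AIC prefers the Weibull model here
```

With an increasing hazard in the data, the extra shape parameter more than pays for its AIC penalty, mirroring the paper's finding that the exponential model is inferior.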
- The Radii of Starlikeness for Concave Functions
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Munirah Rossdy Rashidah Omar and Shaharuddin Cik Soh Let denote the class of functions that are normalized, analytic, and univalent in the unit disc given by . The convex, starlike, and close-to-convex functions form the main subclasses of , denoted by , and , respectively. Many mathematicians have recently studied radius problems for various classes of functions contained in . Determining the radius of univalence, starlikeness, and convexity for specific special functions in is a relatively new topic in geometric function theory. Radius problems have been investigated since the 1920s and still attract considerable interest, particularly for certain special functions in ; indeed, many papers investigate the radius of starlikeness for numerous functions. With respect to the open unit disc and the class , the class of concave functions , denoted , is defined: it consists of the normalized analytic functions that have an opening angle of at . A univalent function is called concave provided that is concave, in other words, that is convex. To date, there is no literature on the radius of starlikeness for concave univalent functions associated with certain rational functions, the lune, the cardioid, and the exponential function. Hence, by employing the subordination method, we present new results on several radii of starlikeness for different subclasses of starlike functions within the class of concave univalent functions .
PubDate: May 2022
- Comparison between The Discrimination Frequency of Two Queueing Systems
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Said Taoufiki and Jamal El Achky Each of us has had the experience of being overtaken in a queue by another, less demanding customer; and each of us has stood behind a demanding customer and had to wait a long time. The discrimination frequencies that appear here are those of overtaking and of heavy workloads, two phenomena that accompany queues and have a great impact on customer satisfaction. Recently, authors have turned to measuring queueing fairness based on the idea that a customer may feel anger toward the queueing system, even if his wait is short, if he has had one of these two experiences. We have found this type of approach more in line with studies by sociologists and psychologists. The discrimination frequencies of a queue have been studied for certain single-server models, but for the multi-server case there is only one study, of a two-server Markovian queue. In this article, we generalize that study and demonstrate that the result found in the two-server case remains valid when comparing the discrimination frequencies of two Markovian queueing systems with several servers.
PubDate: May 2022
- Traumatic Systolic Blood Pressure Modeling: A Spectral Gaussian Process
Regression Approach with Robust Sample Covariates
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 David Kwamena Mensah Michael Arthur Ofori and Nathaniel Howard Physiological vital signs acquired during traumatic events are informative on the dynamics of the trauma and their relationship with other features such as sample-specific covariates. Non-time dependent covariates may introduce extra challenges in the Gaussian Process () regression, as their main predictors are functions of time. In this regard, the paper introduces the use of Orthogonalized Gnanadesikan-Kettering covariates for handling such predictors within the Gaussian process regression framework. Spectral Bayesian regression is usually based on symmetric spectral frequencies and this may be too restrictive in some applications, especially physiological vital signs modeling. This paper builds on a fast non-standard variational Bayes method using a modified Van der Waerden sparse spectral approximation that allows uncertainty in covariance function hyperparameters to be handled in a standard way. This allows easy extension of Bayesian methods to complex models where non-time dependent predictors are available and the relationship between the smoothness of trend and covariates is of interest. The utility of the methods is illustrated using both simulations and real traumatic systolic blood pressure time series data.
PubDate: May 2022
- Parameter Estimation for Additive Hazard Model Recurrent Event Using
Counting Process Approach
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Triastuti Wuryandari Gunardi and Danardono The Cox regression model is widely used for survival data analysis. The Cox model requires proportional hazards; if the proportional hazards assumption is doubtful, the additive hazard model can be used instead, in which the covariates act additively on the baseline hazard function. If survival times are observed more than once for one individual during the observation period, the data are called recurrent events. The additive hazard model measures the effect of a covariate as an absolute risk difference, while the proportional hazards model measures it as a relative hazard ratio. Estimation of the risk coefficients in the additive hazard model mimics that of the multiplicative hazard model, using partial likelihood methods. The derivation of these estimators, outlined in the technical notes, is based on the counting process approach, first developed by Aalen in 1975, which combines elements of stochastic integration, martingale theory, and counting process theory. The method is applied to a study of the effect of supplementation on infant growth and development. Based on the results, the factors that affect infant growth and development are gender, treatment, and mother's education.
PubDate: May 2022
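As a small illustration of the counting-process viewpoint, the Nelson-Aalen estimator of the cumulative hazard sums the increments dN(t)/Y(t) over event times; the toy data below are invented, and ties are handled naively for brevity:

```python
# Nelson-Aalen estimate of the cumulative hazard, the basic counting
# process quantity behind hazard-based survival models.
def nelson_aalen(times, events):
    """times: observed times; events: 1 = event, 0 = censored.
    Returns (time, cumulative hazard) pairs at each event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    H, out = 0.0, []
    for i in order:
        if events[i] == 1:
            H += 1.0 / at_risk          # increment dN(t) / Y(t)
            out.append((times[i], H))
        at_risk -= 1                    # this subject leaves the risk set
    return out

est = nelson_aalen([2, 3, 3, 5, 8], [1, 1, 0, 1, 1])
print(est)
```

The risk set Y(t) shrinks at both events and censorings, which is exactly how censored observations contribute information without contributing jumps.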
- Pricing of A European Call Option in Stochastic Volatility Models
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Said Taoufiki and Driss Gretete Volatility occupies a strategic place in the financial markets. In a context of crisis, with large market movements, traders have been drawn to volatility trading for the potential gain it provides. The Black-Scholes formula for the value of a European option on an underlying depends on a few parameters that are more or less easy to compute, except for the volatility realized at maturity, which poses a problem: it has no single value, nor an established way of being calculated. In this article, we exploit the martingale pricing method to find the expected present value of a given asset under a risk-neutral probability measure. We consider a bond-stock market that evolves according to the dynamics of the Black-Scholes model, with a risk-free interest rate varying in time. Our methodology leads to interesting formulas, derived by exact calculation, giving the present value of the volatility realized over the life of a European option in a stochastic volatility model.
PubDate: May 2022
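For reference, the constant-volatility Black-Scholes call price that the stochastic-volatility setting generalizes can be computed directly (the parameter values below are arbitrary):

```python
import math

# Black-Scholes price of a European call under constant volatility,
# the baseline model the abstract extends to stochastic volatility
# and a time-varying risk-free rate.
def bs_call(S, K, r, sigma, T):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal cdf
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

price = bs_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0)
print(round(price, 4))  # about 10.4506
```

Every input here is observable or contractual except sigma, which is precisely the realized-volatility difficulty the abstract describes.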
- On Generalized Bent and Negabent Functions
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Deepmala Sharma and Sampada Tiwari Over the last few years, generalized bent functions have gained much attention in research, as they have applications in various fields such as combinatorial design, sequence design theory, cryptography, and CDMA communication. A deep and broad study of generalized bent functions and their properties exists in the literature. Kumar et al. [11] first gave the concept of a generalized bent function, and many researchers have since studied their properties and characterizations. In [2], the authors introduced the concept of generalized (-ary) negabent functions and studied some of their properties. In this paper, we study the generalized (-ary) bent functions , where is the ring of integers modulo , is the vector space of dimension over , and ≥2 is any positive integer. We discuss several properties of generalized (-ary) bent functions with respect to their nega-Hadamard transform. We also study the relation between generalized nega-Hadamard transforms and generalized nega-autocorrelations. Furthermore, we prove necessary and sufficient conditions for the bentness and negabentness of a generalized (-ary) bent function generated by the secondary construction for , where .
PubDate: May 2022
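As a simpler, classical analogue of the transforms discussed (the ordinary Walsh-Hadamard transform and Boolean bentness, not the generalized nega-Hadamard setting of the paper), bentness of the Maiorana-McFarland function x1x2 + x3x4 can be checked directly:

```python
# Boolean bentness check via the Walsh-Hadamard transform: a function on
# n variables is bent iff every Walsh coefficient has magnitude 2**(n/2).
def walsh_spectrum(f_vals, n):
    return [sum((-1) ** (f_vals[x] ^ bin(w & x).count("1") % 2)
                for x in range(2 ** n)) for w in range(2 ** n)]

n = 4
# f(x) = x1*x2 XOR x3*x4, a classical bent function:
f = [((x >> 3) & (x >> 2) ^ (x >> 1) & x) & 1 for x in range(2 ** n)]
spec = walsh_spectrum(f, n)
print(all(abs(v) == 2 ** (n // 2) for v in spec))  # flat spectrum: bent
```

Parseval's relation forces the sum of squared Walsh coefficients to equal 2**(2n), so a perfectly flat spectrum of magnitude 2**(n/2) is the extreme case that defines bentness.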
- Three-Point Block Algorithm for Approximating Duffing Type Differential
Equations
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Ahmad Fadly Nurullah Rasedee Mohammad Hasan Abdul Sathar Najwa Najib Nurhidaya Mohamad Jan Siti Munirah Mohd and Siti Nor Aini Mohd Aslam The current study establishes a new numerical method for solving Duffing type differential equations. Duffing type differential equations are often linked to damping issues in physical systems, which can be found in control process problems. The proposed method is developed using a three-point block method in backward difference form, which offers an accurate approximation of Duffing type differential equations at less computational cost. Applying an Adams-like predictor-corrector formulation, the three-point block method is programmed with a recursive relationship between explicit and implicit coefficients to reduce computational cost. By establishing this recursive relationship, we obtain a corrector algorithm expressed in terms of the predictor, eliminating undesired redundancy in the calculation of the corrector. The proposed method thus allows a more efficient solution without any significant loss of accuracy. Four types of Duffing differential equations are selected to test the viability of the method, and numerical results show the efficiency of the three-point block method compared with conventional and more established methods. The outcome of this research is a new method for solving Duffing type differential equations and other ordinary differential equations found in science and engineering. An added advantage of the three-point block method is its adaptability to parallel programming.
PubDate: May 2022
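For context, a conventional one-step method such as RK4 can integrate the Duffing equation x'' + δx' + αx + βx³ = γ cos(ωt); this is the kind of baseline the proposed three-point block method is compared against (the block method itself is not reproduced here, and the default parameters are arbitrary):

```python
import math

# Conventional RK4 baseline for the Duffing equation
#   x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t),
# written as a first-order system in (x, v).
def duffing_rk4(x0, v0, t_end, n_steps, delta=0.2, alpha=-1.0, beta=1.0,
                gamma=0.3, omega=1.2):
    def f(t, x, v):
        return v, gamma * math.cos(omega * t) - delta * v - alpha * x - beta * x ** 3
    h, t, x, v = t_end / n_steps, 0.0, x0, v0
    for _ in range(n_steps):
        k1x, k1v = f(t, x, v)
        k2x, k2v = f(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
        k3x, k3v = f(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
        k4x, k4v = f(t + h, x + h * k3x, v + h * k3v)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += h
    return x, v

# Sanity check on a reduced case with a known solution: delta = gamma =
# beta = 0 and alpha = 1 give x'' + x = 0, so x(pi) = cos(pi) = -1.
x, v = duffing_rk4(1.0, 0.0, math.pi, 3000,
                   delta=0.0, alpha=1.0, beta=0.0, gamma=0.0)
print(round(x, 6))
```

A block method advances several points per step from stored back values, which is where its cost advantage over repeated one-step evaluations like these comes from.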
- On Invariants of Surfaces with Isometric on Sections
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Sharipov Anvarjon Soliyevich and Topvoldiyev Fayzulla Foziljonovich One direction of classical differential geometry studies the properties of geometric objects in their entirety; this is called geometry "in the large". Many problems of geometry "in the large" concern the existence and uniqueness of surfaces with given characteristics. Geometric features can be intrinsic curvature, extrinsic or Gaussian curvature, and other features associated with the surface. The existence of a polyhedron with given curvatures at its vertices, or with a given development, is also a problem of geometry "in the large". Therefore, finding invariants of polyhedra of a certain class, and solving the problem of the existence and uniqueness of polyhedra with given values of the invariant, are relevant tasks. This work is devoted to finding invariants of surfaces that are isometric on sections. In particular, we study the expansion properties of convex polyhedra that preserve isometry on sections. For such polyhedra, an invariant associated with the vertex of a convex polyhedral angle is found. Using this invariant, we can consider the question of restoring a convex polyhedron with given values of conditional curvature at the vertices. Isometry on sections differs from isometry of surfaces: isometry of surfaces does not imply isometry on sections, and vice versa. One of the invariants of surfaces isometric on sections is the area of the cylindrical image. This paper presents the properties of the area of a cylindrical image.
PubDate: May 2022
- ()-Anti-Intuitionistic Fuzzy Soft b-Ideals
in BCK/BCI-Algebras
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Aiyared Iampan M. Balamurugan and V. Govindan Among many algebraic structures, algebras of logic form an essential class of algebras. BCK and BCI-algebras are two classes of logical algebras. They were introduced by Imai and Iséki [6, 7] in 1966 and have been extensively investigated by many researchers. The concept of fuzzy soft sets is introduced in [17] to generalize standard soft sets [21]. The concept of intuitionistic fuzzy soft sets is introduced by Maji et al. [18], which is based on a combination of the intuitionistic fuzzy set [2] and soft set models. The first section will discuss the origins and importance of studies in this article. Section 2 will review the definitions of a BCK/BCI-algebra, a soft set, a fuzzy soft set, and an intuitionistic fuzzy soft set and show the essential properties of BCK/BCI-algebras to be applied in the next section. In Section 3, the concept of an anti-intuitionistic fuzzy soft b-ideal (AIFSBI) is discussed in BCK/BCI-algebras, and essential properties are provided. A set of conditions is provided for an AIFSBI to be an anti-intuitionistic fuzzy soft ideal (AIFSI). The definition of quasi-coincidence of an intuitionistic fuzzy soft point with an intuitionistic fuzzy soft set (IFSS) is considered in a more general form. In Section 4, the concepts of an ()-AFSBI and an ()-AIFSBI of are introduced, and some characterizations of ()-AIFSBI are discussed using the concept of an AIFSBI with thresholds. Finally, conditions are given for a ()-AIFSBI to be a (∈,∈)-AIFSBI.
PubDate: May 2022
- Half-Space Model Problem for Navier-Lamé Equations with Surface
Tension
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Sri Maryani Bambang H Guswanto and Hendra Gunawan Recently, partial differential equations (PDEs) have seen wide use, especially in fluid dynamics. The classical approach to the analysis of PDEs dominated the early nineteenth century. For PDEs, the fundamental theoretical question is whether the model problem, consisting of an equation and its associated side conditions, is well-posed, and there are many ways to investigate well-posedness. For this reason, in this paper we consider the -boundedness of the solution operator families for the Navier-Lamé equation, taking surface tension into account, in a bounded domain of -dimensional Euclidean space (≥ 2), as one way to study well-posedness. We investigate the -boundedness in the half-space case. The -boundedness implies not only the generation of an analytic semigroup but also, by Weis's operator-valued Fourier multiplier theorem for the time-dependent problem, maximal regularity for the initial boundary value problem. The maximal regularity class is known to be a powerful tool for proving the well-posedness of the model problem. This result can be used in further research, for example to analyze the boundedness of the solution operators of the model problem in the bent-half-space or general-domain case.
PubDate: May 2022
- Half-Sweep Refinement of SOR Iterative Method via Linear Rational Finite
Difference Approximation for Second-Order Linear Fredholm
Integro-Differential Equations
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Ming-Ming Xu Jumat Sulaiman and Nur Afza Mat Ali Numerical solutions of second-order linear Fredholm integro-differential equations have been considered and discussed based on several discretization schemes. In this paper, new schemes are developed from the hybrid of the three-point half-sweep linear rational finite difference (3HSLRFD) approach with the half-sweep composite trapezoidal (HSCT) approach. The main advantage of the established schemes is that they discretize the differential terms and the integral term of the second-order linear Fredholm integro-differential equation into algebraic equations and generate the corresponding linear system. Furthermore, the half-sweep (HS) concept is combined with the refinement of the successive over-relaxation (RSOR) iterative method to create the new half-sweep refinement of successive over-relaxation (HSRSOR) iterative method, which is implemented to obtain the numerical solution of the resulting system of linear algebraic equations. The classical full-sweep Gauss-Seidel (FSGS) and full-sweep successive over-relaxation (FSSOR) methods are also presented, serving as the control methods in this paper. In the end, we employ the FSGS, FSRSOR, and HSRSOR methods to obtain numerical solutions of three examples and make a detailed comparison in terms of the number of iterations, elapsed time, and maximum absolute error. Numerical results demonstrate that FSRSOR and HSRSOR require fewer iterations, run faster, and are more accurate than FSGS, and that HSRSOR is the most effective of the three. To sum up, this paper demonstrates the applicability and superiority of the new HSRSOR method based on the 3HSLRFD-HSCT schemes.
PubDate: May 2022
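A minimal full-sweep SOR iteration (the baseline; the half-sweep and refinement variants of the paper are not reproduced) on a small diagonally dominant system:

```python
# Plain successive over-relaxation (SOR) sketch; omega = 1 would reduce
# this to Gauss-Seidel. The system below is invented for illustration.
def sor_solve(A, b, omega=1.25, tol=1e-12, max_iter=10_000):
    n = len(b)
    x = [0.0] * n
    for it in range(1, max_iter + 1):
        max_diff = 0.0
        for i in range(n):
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i][i]
            max_diff = max(max_diff, abs(new - x[i]))
            x[i] = new                      # Gauss-Seidel style in-place update
        if max_diff < tol:
            return x, it
    return x, max_iter

# Diagonally dominant test system whose exact solution is (1, 1, 1):
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [3.0, 2.0, 3.0]
x, iters = sor_solve(A, b)
print([round(v, 8) for v in x], iters)
```

The half-sweep idea in the paper cuts work further by updating only alternate grid points per sweep; the relaxation factor omega above is an arbitrary choice, not the paper's optimized value.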
- On Some Properties of Fabulous Fraction Tree
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 A. Dinesh Kumar and R. Sivaraman Among the several properties that real numbers possess, this paper deals with the exciting formation of positive rational numbers arranged in the form of a tree, in which every number has two branches, to the left and right of the root number. This tree contains every positive rational number, and hence infinitely many entries; we call it the "Fraction Tree". We formally introduce the Fraction Tree and discuss several fascinating properties, including a proof of the one-to-one correspondence between the natural numbers and the entries of the tree. We provide the connection between the entries of the Fraction Tree and the Fibonacci numbers through specified paths, and relate the terms of the Fraction Tree to continued fractions. Five theorems concerning the entries of the Fraction Tree are proved in this paper. The simple rule used to construct the tree enables us to prove many mathematical properties; in this sense, one can witness the simplicity and beauty of building deep mathematics from simple, elegant formulations. The Fraction Tree discussed in this paper, technically called the Stern-Brocot tree, has had profound applications, as diverse as clock manufacturing in the early days: Brocot used the entries of the tree to choose the gear ratios of mechanical clocks several decades ago. A simple construction rule provides a mathematical structure worthy of so many properties and applications; this is the real beauty and charm of mathematics.
PubDate: May 2022
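The construction rule is simple enough to sketch directly: each level of the Stern-Brocot tree inserts the mediant (a+c)/(b+d) between neighbouring fractions a/b and c/d, starting from the boundary pseudo-fractions 0/1 and 1/0:

```python
from fractions import Fraction

# Stern-Brocot ("fraction") tree by levels; every positive rational
# appears exactly once, already in lowest terms.
def stern_brocot_levels(depth):
    bounds = [(0, 1), (1, 0)]          # boundary pseudo-fractions
    levels = []
    for _ in range(depth):
        new_bounds, level = [bounds[0]], []
        for (a, b), (c, d) in zip(bounds, bounds[1:]):
            m = (a + c, b + d)         # mediant of the two neighbours
            level.append(Fraction(*m))
            new_bounds += [m, (c, d)]
        bounds = new_bounds
        levels.append(level)
    return levels

levels = stern_brocot_levels(4)
# Level 1 is [1/1]; level 3 is [1/3, 2/3, 3/2, 3/1]. The zigzag path
# 1/1, 2/1, 3/2, 5/3, ... gives ratios of consecutive Fibonacci numbers.
print(levels[2])
```

Level k holds 2**(k-1) fractions, and the mediant rule is also what makes each entry automatically appear in lowest terms.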
- The Relative (Co)homology Theory through Operator Algebras
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 M. Kozae Samar A. Abo Quota and Alaa H. N. This paper introduces a new idea concerning unital involutive Banach algebras and their closed subsets. The aim is to study the cohomology theory of operator algebras, with the Banach algebra, denoted by , as the applied example. The definitions of the cyclic, simplicial, and dihedral cohomology groups of are introduced. We present the definition of the -relative dihedral cohomology group, given by: , and we show that the relation between the dihedral and -relative dihedral cohomology groups can be obtained from the sequence . Among the principal results is the study of some theorems in the relative dihedral cohomology of Banach algebras, such as a Connes-Tsygan exact sequence: the relation between the relative Banach dihedral and cyclic cohomology groups ( and ) of is proved as the sequence . We also state and prove some basic notions in the relative cohomology of a unital Banach algebra and establish their properties. We show the Morita invariance theorem in the relative case with maps and , and prove the Connes-Tsygan exact sequence relating the relative cyclic and dihedral (co)homology of . We prove the Mayer-Vietoris sequence of in a new form in the Banach B-relative dihedral cohomology: . It should be borne in mind that the cohomology theory of operator algebras has also been applied to the study of the spread of Covid-19.
PubDate: May 2022
- Three-Dimensional Control Charts for Regulating Processes Described by a
Two-Dimensional Normal Distribution
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Kamola Saxibovna Ablazova In the initial phase of the statistical management of processes, the stability of the technological process is determined from the available samples. If the process is not stable, possible causes are eliminated and the process is brought into a statistically controlled state; simple Shewhart control charts are used at this stage, and in practice various methods (ISO standards, national standards) bring the process to a stable state. After the process has become stable, the limits of the control charts are found for further management, and the process is then managed with the help of new samples. The article considers a process modeled by a two-dimensional normal distribution. New control charts are found for checking the normality and correlation of the components of the two-dimensional random variable. The process is regulated using these charts, preserving the shape of the density of the individual components of the normal vector and the linearity of the relation between them. In constructing the control charts, a Kolmogorov-Smirnov-type goodness-of-fit criterion and the Fisher criterion for the strength of the linear relationship between the components were used. A concrete example shows the introduction of these charts in production: we used them to assess the quality of products coming from a machine that produces sleeves. The results can be used in the initial phase of regulation and during control checks of the process under study. The article presents statistical methods for analyzing problems in factory practice and solutions for their elimination.
PubDate: May 2022
- Effect of Parameter Estimation on the Performance of Shewhart -joint Chart
Looked at in Terms of the Run Length Distribution
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Ugwu Samson O. Nduka Uchenna C. Eze Nnaemeka M. Odoh Paschal N. and Ugwu Gibson C. It is common practice to use spread charts to monitor process variation and thereafter use the -chart to monitor the process mean, applying these charts independently with estimated 3-sigma limits. Recently, some authors considered the application of the and R-charts together as a single charting scheme, the -chart, when both standards are known (Case KK), when only the mean standard is known (Case KU), and when both standards are unknown (Case UU), using the average run length (ARL) as the performance criterion. However, because of the skewed nature of the run length (RL) distribution, many authors have frowned upon the use of the ARL as a sole performance measure and encouraged the use of percentiles of the RL distribution instead. Therefore, the cdfs of the RLs of the chart under the cases mentioned are derived in this work, and the percentiles are used to examine the chart for Case KU; the as-yet-unconsidered case, Case UK, where only the process variance is known, is included for comparison. These are the contributions to the existing literature. The -chart performed better in Case KU than in Case UK, and the unconditional in-control median run length described the behavior of the chart better than the in-control ARL.
PubDate: Mar 2022
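For a chart with known parameters the run length is geometric, which makes the contrast between the ARL and the median run length easy to see (a hedged illustration of why percentiles are preferred; the paper's estimated-parameter cdfs are more involved and are not reproduced):

```python
import math

# Geometric run-length sketch: with known limits, P(RL = n) equals
# (1 - p)**(n - 1) * p, so percentiles have a closed form.
def run_length_percentile(p, q):
    """Smallest n with P(RL <= n) >= q for a geometric run length."""
    return math.ceil(math.log(1 - q) / math.log(1 - p))

p = 0.0027           # classical in-control 3-sigma signal probability
arl = 1 / p          # about 370
mrl = run_length_percentile(p, 0.5)
print(round(arl), mrl)  # the skewed RL distribution puts the median near 257
```

Because the geometric distribution is right-skewed, the median sits well below the mean, which is the paper's point that the median run length is the more honest summary.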
- Some Results on Number Theory and Analysis
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 B. M. Cerna Maguiña Dik D. Lujerio Garcia Héctor F. Maguiña and Miguel A. Tarazona Giraldo In this work, we obtain bounds for the sum of the integer solutions of quadratic polynomials of two variables of the form where is a given natural number that ends in one. This allows us to decide the primality of a natural number that ends in one. Also we get some results on twin prime numbers. In addition, we use special linear functionals defined on a real Hilbert space of dimension , in which the relation is obtained: , where is a real number for . When or , we manage to address Fermat's Last Theorem and the equation , proving that both equations do not have positive integer solutions. For , the Cauchy-Schwartz Theorem and Young's inequality were proved in an original way.
PubDate: Mar 2022
- The Non-Abelian Tensor Square Graph Associated to a Symmetric Group and
its Perfect Code
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Athirah Zulkarnain Hazzirah Izzati Mat Hassim Nor Haniza Sarmin and Ahmad Erfanian A graph is formed by a set of vertices and a set of edges, and can be associated with a group by using the group's properties to define its vertices and edges: the vertex set comprises elements of the group, while the edge set is determined by prescribed conditions. The non-abelian tensor square graph of a group is defined with vertex set the set of non-tensor-centre elements of G; two distinct vertices are connected by an edge if and only if the non-abelian tensor square of these two elements is not equal to the identity of the non-abelian tensor square. This study investigates the non-abelian tensor square graph of the symmetric group of order six. In addition, some properties of this graph are computed, including the diameter, the domination number, and the chromatic number. The perfect code of the non-abelian tensor square graph of the symmetric group of order six is also found in this paper.
PubDate: Mar 2022
- Data Encryption Using Face Antimagic Labeling and Hill Cipher
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 B. Vasuki L. Shobana and B. Roopa An approach to encrypting and decrypting messages is obtained by relating the concepts of graph labeling and cryptography. Among the various types of labelings given in [3], our interest is in face antimagic labeling, introduced by Mirka Miller in 2003 [1]. Baca [2] defines a connected plane graph with edge set and face set as face antimagic if there exist positive integers and and a bijection such that the induced mapping , where for a face , is the sum of for all edges surrounding , is also a bijection. Cryptography offers many cryptosystems, such as the affine cipher, Hill cipher, RSA, and knapsack; among these, the Hill cipher is chosen for our encryption and decryption. In the Hill cipher [8], plaintext letters are grouped into two-letter blocks, with a dummy letter X inserted at the end if needed to make all blocks the same length, and each letter is replaced by its ordinal number. Each plaintext block is then replaced by a numeric ciphertext block , where and are different linear combinations of and modulo 26: (mod 26) and (mod 26), subject to the standard invertibility condition modulo 26. Each number is translated back into a letter, producing the ciphertext. In this paper, face antimagic labeling on double duplication of graphs, together with the Hill cipher, is used to encrypt and decrypt messages.
PubDate: Mar 2022
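The Hill cipher step described above can be sketched independently of the labeling scheme. Below is a minimal 2x2 Hill cipher in Python (the key matrix [[3, 3], [2, 5]] is an illustrative choice, not the paper's labeling-derived key); it shows the block encryption, the invertibility condition modulo 26, and decryption via the inverse key.

```python
from math import gcd

A = [[3, 3], [2, 5]]               # key matrix; det = 9 and gcd(9, 26) = 1

def mod_inv(a, m=26):
    # modular inverse by brute-force search (fine for m = 26)
    return next(x for x in range(1, m) if (a * x) % m == 1)

def encrypt_block(p, k):
    # multiply the 2x1 plaintext block by the key matrix modulo 26
    return [(k[0][0] * p[0] + k[0][1] * p[1]) % 26,
            (k[1][0] * p[0] + k[1][1] * p[1]) % 26]

def hill(text, key):
    text = text.upper().replace(" ", "")
    if len(text) % 2:
        text += "X"                # pad with a dummy X to even length
    nums = [ord(c) - 65 for c in text]
    out = []
    for i in range(0, len(nums), 2):
        out += encrypt_block(nums[i:i + 2], key)
    return "".join(chr(n + 65) for n in out)

def inverse_key(k):
    det = (k[0][0] * k[1][1] - k[0][1] * k[1][0]) % 26
    assert gcd(det, 26) == 1, "key not invertible mod 26"
    d = mod_inv(det)
    # adjugate times the inverse determinant, all modulo 26
    return [[( d * k[1][1]) % 26, (-d * k[0][1]) % 26],
            [(-d * k[1][0]) % 26, ( d * k[0][0]) % 26]]

cipher = hill("HELP", A)           # "HIAT"
plain  = hill(cipher, inverse_key(A))
```

Decryption is just encryption with the inverse key, which exists precisely because the determinant is coprime to 26.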
- Principal Canonical Correlation Analysis with Missing Data in Small
Samples
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Toru Ogura and Shin-ichi Tsukada Missing data occur in various fields, such as clinical trials and social science. Canonical correlation analysis, often used to analyze the correlation between two random vectors, cannot be performed directly on a dataset with missing data. Canonical correlation coefficients (CCCs) can, however, be calculated from a covariance matrix. When the covariance matrix can be estimated by excluding (complete-case and available-case analyses) or imputing (multivariate imputation by chained equations, k-nearest neighbor (kNN), and iterative robust model-based imputation) missing data, CCCs are estimated from this covariance matrix. CCCs are biased even when estimated from fully observed data, and estimated CCCs are usually even larger than the population CCCs when a covariance matrix estimated from a dataset with missing data is used. The purpose of this study is to bring the CCCs estimated from a dataset with missing data close to the population CCCs. The procedure involves three steps. First, principal component analysis is performed on the covariance matrix from the dataset with missing data to obtain the eigenvectors. Second, the covariance matrix is transformed using the first to fourth eigenvectors. Finally, the CCCs are calculated from the transformed covariance matrix. CCCs derived using this procedure are called principal CCCs (PCCCs), and simulation studies and numerical examples confirmed the effectiveness of the PCCCs estimated from a dataset with missing data. In many cases in the simulation results, the bias and root-mean-squared error of the PCCC estimated from the missing data based on kNN were the smallest. In the numerical example, the first PCCC estimated from the missing data based on kNN is close to the first CCC estimated from the fully observed dataset when the correlation between the two vectors is low. Therefore, PCCCs based on kNN are recommended.
PubDate: Mar 2022
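Since CCCs can be computed directly from a partitioned covariance matrix, the core computation the abstract relies on can be sketched as follows (a minimal illustration on simulated data, not the authors' PCCC procedure): the squared CCCs are the eigenvalues of S11^-1 S12 S22^-1 S21. The data-generating coefficients and seed below are illustrative assumptions.

```python
import numpy as np

def canonical_correlations(S, p):
    # S: (p+q)x(p+q) covariance matrix; first p rows/cols form the X block
    S11, S12 = S[:p, :p], S[:p, p:]
    S21, S22 = S[p:, :p], S[p:, p:]
    M = np.linalg.solve(S11, S12) @ np.linalg.solve(S22, S21)
    eig = np.sort(np.linalg.eigvals(M).real)[::-1]   # eigenvalues = squared CCCs
    return np.sqrt(np.clip(eig, 0.0, 1.0))

# toy data: Y correlates strongly with X0, weakly with X1, not at all with X2
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3))
Y = X[:, :2] @ np.array([[0.8, 0.0], [0.0, 0.3]]) + 0.5 * rng.standard_normal((1000, 2))
S = np.cov(np.hstack([X, Y]).T)
r = canonical_correlations(S, 3)      # descending canonical correlations
```

Only min(p, q) = 2 canonical correlations are nonzero here; the third eigenvalue is numerically zero because the cross-covariance has rank at most 2.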
- The Non-Trivial Zeros of The Riemann Zeta Function through Taylor Series
Expansion and Incomplete Gamma Function
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Jamal Salah Hameed Ur Rehman and Iman Al-Buwaiqi The Riemann zeta function is defined for all complex numbers s except for its pole at s = 1. Euler and Riemann found that the function equals zero at all negative even integers: −2, −4, −6, ... (commonly known as the trivial zeros), and that it has an infinite number of zeros in the critical strip of complex numbers between the lines Re(s) = 0 and Re(s) = 1. Moreover, it was well known to Riemann that all non-trivial zeros exhibit symmetry with respect to the critical line Re(s) = 1/2. As a result, Riemann conjectured that all of the non-trivial zeros lie on the critical line; this conjecture is known as the Riemann hypothesis. The Riemann zeta function plays a momentous part in number theory and has applications in applied statistics, probability theory, and physics. It is closely related to one of the most challenging unsolved problems in mathematics (the Riemann hypothesis), which was listed as the 8th of Hilbert's 23 problems. The function is useful in number theory for investigating the anomalous behavior of prime numbers: if the hypothesis is proven correct, it would sharpen our knowledge of the distribution of the prime numbers. Numerous approaches have been applied towards the solution of this problem, both numerical and geometrical, as well as the Taylor series of the Riemann zeta function and the asymptotic properties of its coefficients. Despite the fact that around 10^13 non-trivial zeros have been verified to lie on the critical line, we cannot assume that the Riemann Hypothesis (RH) is necessarily true unless a lucid proof is provided.
Indeed, there are differing viewpoints not only on the Riemann Hypothesis's reliability, but also on certain basic conclusions; see for example [16], in which the author justifies the location of non-trivial zeros subject to the simultaneous occurrence of , omitting the impact of an indeterminate form that appears in Riemann's approach. In this study, we also consider the simultaneous occurrence, but we adopt an element-wise approach to the Taylor series by expanding for all = 1, 2, 3, ... at the real parts of the non-trivial zeta zeros lying in the critical strip: for a non-trivial zero of , we first expand each term at and then at . In this sequel, we then evoke the simultaneous occurrence of the non-trivial zeta function zeros on the critical strip by means of different representations of the zeta function. Consequently, this supports the view that the Riemann Hypothesis is likely to be true.
PubDate: Mar 2022
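For readers who want to experiment, the zeta function can be evaluated numerically inside the critical strip through the Dirichlet eta (alternating) series, a standard representation distinct from the paper's Taylor-expansion approach; the sketch below is a rough truncated-series computation, not a rigorous one, and the truncation lengths are arbitrary choices.

```python
import math

def zeta(s, terms=200000):
    """Riemann zeta via the alternating (Dirichlet eta) series,
    valid for Re(s) > 0, s != 1:  zeta(s) = eta(s) / (1 - 2**(1 - s))."""
    eta = sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))
    return eta / (1 - 2 ** (1 - s))

z2 = zeta(2.0)
target = math.pi ** 2 / 6          # Euler: zeta(2) = pi^2 / 6

# near the first non-trivial zero on the critical line, s = 1/2 + 14.1347...i,
# the truncated series is small in magnitude (convergence is slow here)
z0 = zeta(0.5 + 14.134725j, terms=100000)
```

The real evaluation converges quickly (the alternating-series error is below the first omitted term); on the critical line the same series converges only slowly, which is one reason specialized expansions like the paper's are of interest.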
- On The Unconditional Run Length Distribution and Percentiles for The
-chart When The In-control Process Parameter Is Estimated
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Ugwu Samson. O Uchenna Nduka .C Ezra Precious .N Ugwu Gibson .C Odoh Paschal .N and Nwafor Cynthia. N It is well known that the median is a better measure of location in skewed distributions. The run-length (RL) distribution is skewed; hence, the median run length measures chart performance better than the average run length. Some authors have advocated examining the entire set of percentiles of the RL distribution when assessing chart performance. Such works already exist for the Shewhart −chart, the CUSUM chart, the CUSUM and EWMA charts, Hotelling's chi-square chart, and the two simple Shewhart multivariate non-parametric charts. Similar work on the -chart for the one- and two-sided cases is lacking in the literature; this work fills that gap. Therefore, a detailed comparative study of the one-sided upper and the two-sided -control charts for some m reference samples at fixed sample size and false alarm rate is carried out here using the information from the unconditional RL cdf curve and its percentiles (mainly the median). The order of the RL cdf curves of the one-sided upper -chart is independent of the state of the process, unlike in the two-sided case. The one-sided upper chart outperformed the two-sided one both in control and in detecting positive shifts. The two-sided -chart is more sensitive in detecting incremental shifts than decremental shifts.
PubDate: Mar 2022
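When the in-control parameter is known, the run length of a Shewhart-type chart is geometric with per-sample signal probability p, so every percentile has a closed form; the sketch below shows why the median (257 at the usual 3-sigma false-alarm rate p = 0.0027) differs markedly from the average run length (about 370). The paper's unconditional, estimated-parameter distribution generalizes this known-parameter baseline.

```python
import math

def rl_percentile(p, q):
    """q-th percentile of the geometric run-length distribution:
    smallest n with P(RL <= n) = 1 - (1 - p)**n >= q."""
    return math.ceil(math.log(1 - q) / math.log(1 - p))

p = 0.0027                 # per-sample false-alarm probability (3-sigma limits)
arl    = 1 / p             # average run length, about 370
median = rl_percentile(p, 0.5)   # median run length, 257
p90    = rl_percentile(p, 0.9)   # 90th percentile
```

The strong right skew (median well below the mean) is exactly why the abstract argues for percentile-based, rather than ARL-based, comparison of charts.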
- Some Inequalities for -times Differentiable
Strongly Convex Functions
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Duygu Dönmez Demir and Gülsüm Şanal The theory of inequalities is in a process of continuous development and has become a quite effective and powerful tool in various branches of mathematics for solving many problems. Convex functions are closely related to the theory of inequalities, and many important inequalities are consequences of applications of convex functions. Recently, results obtained for convex functions have been extended to strongly convex functions. In our previous studies, the perturbed trapezoid inequality obtained for convex functions was extended to functions that are differentiable -times. This study deals with some general identities introduced for -times differentiable strongly convex functions. Besides, new inequalities related to the general perturbed trapezoid inequality are constructed. These inequalities are obtained for classes of functions for which the absolute values of the th derivatives are strongly convex. It is seen that the new classes of strongly convex functions reduce to those obtained for convex functions under certain conditions. Considering the upper bounds obtained for strongly convex functions, it is concluded that they are sharper than those obtained for convex functions.
PubDate: Mar 2022
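For reference, strong convexity sharpens ordinary convexity by a quadratic correction term; a standard form of the definition (with modulus c > 0, the symbol c being a labeling choice made here for illustration) is:

```latex
% f : I \to \mathbb{R} is strongly convex with modulus c > 0 if,
% for all x, y \in I and all t \in [0, 1],
f\bigl(t x + (1 - t) y\bigr)
  \le t f(x) + (1 - t) f(y) - c\, t (1 - t) (x - y)^2 .
% Equivalently, f is strongly convex with modulus c iff
% x \mapsto f(x) - c x^2 is convex; letting c \to 0 recovers the
% ordinary convex case, which is why bounds for strongly convex
% functions refine those for convex functions.
```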
- Modified Profile Likelihood Estimation in the Lomax Distribution
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Maisoun Sewailem and Ayman Baklizi In this paper, we consider improving maximum likelihood inference for the scale parameter of the Lomax distribution. The improvement is based on modifications to the maximum likelihood estimator derived from the Barndorff-Nielsen modification of the profile likelihood function. We apply these modifications to obtain improved estimators for the scale parameter of the Lomax distribution in the presence of a nuisance shape parameter. Due to the complicated expression for Barndorff-Nielsen's modification, several approximations to it are considered in this paper, including the modification based on the empirical covariances and the approximation based on suitably derived approximate ancillary statistics. We obtained the approximations for the Lomax profile likelihood function and the corresponding modified maximum likelihood estimators. They are not available in simple closed forms and are obtained numerically as roots of some complicated likelihood equations. Comparisons between the maximum profile likelihood estimator and the modified profile likelihood estimators in terms of their biases and mean squared errors were carried out using simulation techniques. We found that the approximation based on the empirical covariances has the best performance according to the criteria used. Therefore, we recommend using this modified version of the maximum likelihood estimator for the Lomax scale parameter, especially for small sample sizes with heavy censoring, which is quite common in industrial life testing experiments and reliability studies. An example based on real data is given to illustrate the methods considered in this paper.
PubDate: Mar 2022
- Fractional Variational Orthogonal Collocation Method for the Solution of
Fractional Fredholm Integro-Differential Equation Using Mamadu-Njoseh
Polynomials
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Jonathan Tsetimi and Ebimene James Mamadu The use of orthogonal polynomials as basis functions via a suitable approximation scheme for the solution of many problems in science and technology has been increasing. In many numerical schemes, the convergence depends solely on the nature of the basis function adopted. The Mamadu-Njoseh polynomials are orthogonal polynomials developed in 2016 with respect to a weight function, and they bear the same convergence rate as Chebyshev polynomials. Thus, in this paper, the fractional variational orthogonal collocation method (FVOCM) is proposed for the solution of the fractional Fredholm integro-differential equation using Mamadu-Njoseh polynomials (MNP) as basis functions. The proposed method is a blend of the variational iteration method (VIM) and the orthogonal collocation method (OCM). The VIM is one of the popular methods available for seeking the solution of both linear and nonlinear differential problems, requiring neither linearization nor perturbation to arrive at the required solution; collocating at the roots of orthogonal polynomials gives rise to the OCM. In the proposed method, the VIM generates the required approximations, producing a series that is then collocated orthogonally to determine the unknown parameters. The numerical results show that the method yields highly accurate and reliable approximations with a high convergence rate. We also establish the existence and uniqueness of the solution of the method. All computations in this research were performed with MAPLE 18 software.
PubDate: Mar 2022
- Solution of 1st Order Stiff Ordinary Differential Equations Using Feed
Forward Neural Network and Bayesian Regularization Algorithm
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Rootvesh Mehta Sandeep Malhotra Dhiren Pandit and Manoj Sahni A stiff equation is a differential equation for which certain numerical methods are not stable unless the step length is taken to be extraordinarily small. A stiff differential equation includes terms that can cause rapid variation in the solution, and when integrating it numerically the requisite step length can be incredibly small. The phenomenon of stiffness is observed when the step size must be kept unacceptably small even in regions where the solution curve is very smooth, for instance where the curve straightens out to approach a line with slope almost zero. A lot of work on solving stiff ordinary differential equations (ODEs) has been done by researchers using the many numerical methods that currently exist, and extensive research has compared their rates of convergence, numbers of computations, accuracy, and capability to solve certain types of test problems. In the present work, a method based on an advanced Feed Forward Neural Network (FFNN) and the Bayesian regularization algorithm is implemented to solve first-order stiff ordinary differential equations and systems of ordinary differential equations. Using the proposed method, the problems are solved for various time steps and comparisons are made with available analytical solutions and other existing methods. A problem is simulated using the proposed FFNN model, and good accuracy is achieved with less computational effort and time. The outcomes support the use of artificial neural network methods for solving various types of stiff differential equations in the near future.
PubDate: Mar 2022
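The step-length restriction that defines stiffness can be seen in a few lines. The sketch below is a standard textbook illustration, not the paper's FFNN method: it applies explicit and implicit Euler to y' = -1000 y with a step h = 0.01 that lies far outside the explicit method's stability region.

```python
# y' = lam * y with lam = -1000, y(0) = 1: explicit Euler is stable only
# for h < 2/|lam| = 0.002, while implicit Euler is stable for any h > 0.
def explicit_euler(h, steps, lam=-1000.0):
    y = 1.0
    for _ in range(steps):
        y = y + h * lam * y          # y_{n+1} = (1 + h*lam) * y_n
    return y

def implicit_euler(h, steps, lam=-1000.0):
    y = 1.0
    for _ in range(steps):
        y = y / (1 - h * lam)        # y_{n+1} = y_n / (1 - h*lam)
    return y

big   = abs(explicit_euler(0.01, 100))   # amplification factor -9: blows up
small = implicit_euler(0.01, 100)        # factor 1/11 per step: decays to ~0
```

The true solution decays to essentially zero almost immediately, yet the explicit iterate grows like 9^n; this is exactly why stiff solvers (or alternatives such as the trained FFNN of the paper) are needed.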
- A Branch and Bound Algorithm to Solve Travelling Salesman Problem (TSP)
with Uncertain Parameters
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 S. Dhanasekar Saroj Kumar Dash and Neena Uthaman Computational complexity theory lies at the core of theoretical computer science and mathematics. It is usually concerned with the classification of computational problems into P and NP by their inherent difficulty, and for many such problems no efficient algorithms are known. The Travelling Salesman Problem (TSP) is one of the most discussed problems in combinatorial mathematics; its main objective is to find a Hamiltonian cycle of minimum cost or time. Many algorithms exist to solve it, yet since none of them is fully efficient, researchers are still working to produce better ones. If the description of the parameters is vague, then fuzzy notions, which include a membership value, are applied to model the parameters. Still, this modeling does not give an exact representation of the vagueness. The intuitionistic fuzzy set, which includes a non-membership value along with a membership value in its domain, is therefore applied to model the parameters. When the decision variables of the TSP (cost, time, or distance) are modeled as intuitionistic fuzzy numbers, the TSP is called the intuitionistic fuzzy TSP (InFTSP). We develop an intuitionistic fuzzified version of the branch and bound method of Little et al. to solve the intuitionistic fuzzy TSP. This method is effective because it involves only simple arithmetic operations on intuitionistic fuzzy numbers and their ranking. Ordering of intuitionistic fuzzy numbers is vital in optimization problems since it is equivalent to the ordering of alternatives. In this article, we use the weighted arithmetic mean method to order the fuzzy numbers; it satisfies the linearity property, which is a very important characteristic of a ranking function. Numerical examples are solved to validate the given algorithm and the results are discussed.
PubDate: Mar 2022
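A plain (crisp-cost) branch and bound for the TSP can be sketched briefly; the paper's method replaces the numeric costs below with intuitionistic fuzzy numbers compared via a ranking function, but the search-and-prune skeleton is the same. The 4-city matrix is a common small test instance whose optimal tour cost is 80.

```python
import math

def tsp_branch_and_bound(cost):
    n = len(cost)
    best = {"tour": None, "cost": math.inf}

    def extend(path, visited, so_far):
        if so_far >= best["cost"]:             # bound: prune dominated branches
            return
        if len(path) == n:
            total = so_far + cost[path[-1]][path[0]]   # close the cycle
            if total < best["cost"]:
                best["cost"], best["tour"] = total, path[:]
            return
        for city in range(n):
            if city not in visited:
                visited.add(city)
                extend(path + [city], visited, so_far + cost[path[-1]][city])
                visited.remove(city)

    extend([0], {0}, 0)
    return best["cost"], best["tour"]

C = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
opt_cost, opt_tour = tsp_branch_and_bound(C)   # optimal cost 80
```

The bound used here (cost accumulated so far) is the crudest possible; Little-style reduction bounds prune far more but share this recursive structure.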
- Weighted Least Squares Estimation for AR(1) Model With Incomplete Data
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Mohamed Khalifa Ahmed Issa Time series forecasting is the main objective in many applications, such as weather prediction, analysis of natural phenomena, and financial or economic analysis. In real-life data analysis, missing data are a common problem caused by human error, technical faults, catastrophic natural phenomena, and so on. When one or more observations are missing, it is important to estimate the model as well as the missing values, leading to a better understanding of the data and more accurate prediction. Different time series require different techniques to obtain good estimates of those missing values. Traditionally, missing values are simply replaced by mean or mode imputation, deleted, or handled by other methods that are not adequate for addressing missing values, as they can cause bias. Among the most popular models for time-series data are autoregressive models, which forecast future values in terms of previous ones. The first-order autoregressive AR(1) model is one in which the current value depends on the immediately preceding value, so estimating the parameters of AR(1) with missing observations is an important topic in time series analysis. Many approaches have been developed for estimation problems in time series, such as ordinary least squares (OLS) and Yule-Walker (YW). Here, a method is proposed to estimate the parameter of the model using weighted least squares (WLS), and the properties of the WLS estimator are investigated. Moreover, a comparison between these methods for the AR(1) model with missing observations is conducted through a Monte Carlo simulation at various sample sizes and different proportions of missing observations, in terms of mean square error (MSE) and mean absolute error (MAE).
The results of the simulation study indicate that the WLS estimator can be considered the preferable method of estimation. A real time series with missing observations was also estimated.
PubDate: Mar 2022
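As a baseline for the estimators compared in the abstract, the AR(1) coefficient can be estimated by least squares over the consecutive pairs that remain observed; the sketch below uses plain OLS on such pairs (the paper's WLS estimator additionally weights the terms, which is not reproduced here). The 10% missingness pattern, sample size, and seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
phi, n = 0.6, 5000
x = np.zeros(n)
for t in range(1, n):                    # simulate AR(1): x_t = phi*x_{t-1} + e_t
    x[t] = phi * x[t - 1] + rng.standard_normal()

miss = rng.random(n) < 0.10              # 10% missing completely at random
obs = ~miss

# least squares using only consecutive pairs where both values are observed
pairs = obs[1:] & obs[:-1]
y, z = x[1:][pairs], x[:-1][pairs]
phi_hat = (z @ y) / (z @ z)
```

With about 90% of pairs surviving, the estimate stays close to the true coefficient; as the missing proportion grows, the usable pairs shrink quadratically, which is where weighting schemes start to matter.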
- Introduction to Applied Algebra: Book Review of Chapter 8-Linear Equations
(System of Linear Equations)
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Elvis Adam Alhassan Kaiyu Tian and Adjabui Michael This chapter review presents two ideas and techniques for solving systems of linear equations in a simple, straightforward manner, enabling the student as well as the instructor to follow it independently with very little guidance. The focus is on using simple approaches, namely determinants and elementary row operations, to solve systems of linear equations. We found the solution set of several systems of linear equations by Cramer's rule: each variable, in the order in which it appears, is the ratio of the determinant of the matrix formed by replacing the corresponding column of the coefficient matrix with the right-hand-side vector to the determinant of the coefficient matrix. Similarly, we used the three types of elementary row operations, namely row swap, scalar multiplication, and row sum, to reduce systems to row echelon form and then reduced row echelon form, from which the solution set is read off. Technical examples of systems of linear equations were used to illustrate the two approaches, and in each approach we started from the coefficient matrix of the system.
PubDate: Mar 2022
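Both techniques reviewed in the chapter can be sketched compactly: Cramer's rule (column replacement and determinant ratios) and elimination by elementary row operations with back substitution. The 2x2 system below is an illustrative example, not one from the chapter.

```python
import copy

def det(M):
    # determinant by cofactor expansion along the first row (fine for small systems)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def cramer(A, b):
    d = det(A)
    assert d != 0, "singular coefficient matrix"
    sols = []
    for j in range(len(A)):
        Aj = copy.deepcopy(A)
        for i in range(len(A)):
            Aj[i][j] = b[i]            # replace column j with the RHS vector
        sols.append(det(Aj) / d)
    return sols

def gauss(A, b):
    # row-echelon reduction (row swap + row sum) followed by back substitution
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))   # partial pivot: row swap
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * v for a, v in zip(M[r], M[c])]  # row sum
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

A = [[2.0, 1.0], [1.0, 3.0]]
b = [5.0, 10.0]                        # solution: x = 1, y = 3
x_cramer = cramer(A, b)
x_gauss = gauss(A, b)
```

Cramer's rule is instructive but costs a determinant per variable; elimination does the whole system in one pass, which is why it scales better.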
- On Tensor Product and Colorability of Graphs
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Veninstine Vivik J Sheeba Merlin G P. Xavier and Nila Prem JL The graph coloring problem (GCP) plays a vital role in the allocation of resources, leading to proper utilization and savings in labor, space, time, and cost. The GCP consists of assigning a minimum number of colors to the nodes of a graph such that adjacent nodes receive different colors; the smallest such number is known as the chromatic number . This work considers the tensor product of two graphs, which yields a more complex graph and raises the question of dealing with that complexity. Load balancing on such complex networks is a heavy task. Among the various methods in graph theory, coloring is a comparatively simple tool for untangling intricate networks; in particular, node coloring classifies the nodes of a network into the least number of classes. Coloring is therefore applied to balance allocations in such complex networks. We construct the tensor products of two graphs, namely the path with the wheel and helm, and the cycle with the sunlet and closed helm graphs, and study their structure. Coloring is then applied to the nodes of the resulting graphs to determine their optimal bounds. Hence we obtain the chromatic number for the tensor products of , , and .
PubDate: Mar 2022
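The two ingredients of the study, forming a tensor product and coloring the result, can be sketched on a small example (C3 x K2, which is the 6-cycle; the paper's wheel, helm, and sunlet products are larger but are built the same way):

```python
from itertools import product

def tensor_product(E1, n1, E2, n2):
    """Vertices are pairs (u, v); (u1, v1) ~ (u2, v2) iff u1 ~ u2 and v1 ~ v2."""
    V = list(product(range(n1), range(n2)))
    idx = {v: i for i, v in enumerate(V)}
    E = set()
    for (a, b) in E1:
        for (c, d) in E2:
            # each pair of factor edges yields two product edges
            E.add(frozenset((idx[(a, c)], idx[(b, d)])))
            E.add(frozenset((idx[(a, d)], idx[(b, c)])))
    return len(V), [tuple(e) for e in E if len(e) == 2]

def chromatic_number(n, E):
    # brute force: smallest k admitting a proper coloring (fine for tiny graphs)
    for k in range(1, n + 1):
        for col in product(range(k), repeat=n):
            if all(col[u] != col[v] for u, v in E):
                return k
    return n

E_c3 = [(0, 1), (1, 2), (2, 0)]     # triangle C3
E_k2 = [(0, 1)]                     # single edge K2
n, E = tensor_product(E_c3, 3, E_k2, 2)
chi = chromatic_number(n, E)        # C3 x K2 is C6, so chi = 2
```

The result chi = 2 is consistent with the general bound chi(G x H) <= min(chi(G), chi(H)): a proper coloring of either factor lifts to the product.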
- Application of the Fast Expansion Method in Space-Related Problems
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Mikhail Ivanovich Popov Aleksey Vasilyevich Skrypnikov Vyacheslav Gennadievich Kozlov Alexey Viktorovich Chernyshov Alexander Danilovich Chernyshov Sergey Yurievich Sablin Vladimir Valentinovich Nikitin and Roman Alexandrovich Druzhinin In this paper, numerical and approximate analytical solutions are obtained for the problem of the motion of a spacecraft from a starting point to a final point in a given time. Both the unpowered and powered portions of the flight are considered. For the numerical solution, a finite-difference scheme of second-order accuracy is constructed. The space-related problem considered in the study is essentially nonlinear, which necessitates trigonometric interpolation methods that replace the calculation of the Fourier coefficients by integral formulas with the solution of an interpolation system. One of the simplest options for trigonometric sine interpolation on a semi-closed segment [–a, a), where the right end is not included in the general system of interpolation points, is considered. In order to maintain the orthogonality conditions for the sines, an even number 2M of calculation points is distributed uniformly on the segment. The sine interpolation theorem is proved and a compact formula is given for calculating the interpolation coefficients. A general theory of fast sine expansion is given. It is shown that in this case the Fourier coefficients decrease much faster with increasing index than the Fourier coefficients in the classical case. This property allows reducing the number of terms retained in the Fourier series, as well as the amount of computer calculation, while increasing the accuracy of calculations. The obtained solutions are analyzed and compared with the exact solution of a test problem.
For the same calculation error, the computing time of the fast expansion method is hundreds of times smaller than that of the classical finite-difference method.
PubDate: Mar 2022
- Generalized Family of Group Chain Sampling Plans Using Minimum Angle
Method (MAM)
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Mohd Azri Pawan Teh Nazrina Aziz and Zakiyah Zain This research develops a generalized family of group chain sampling plans using the minimum angle method (MAM), a method in which both the producer's and the consumer's risks are considered when designing the sampling plans. Three sampling plans are nested under the family of group chain acceptance sampling: group chain sampling plans (GChSP-1), new two-sided group chain sampling plans (NTSGChSP-1), and two-sided group chain sampling plans (TSGChSP-1). The methodology uses random values of the fraction defective for both producer and consumer, and the optimal number of groups is obtained using the Scilab software. The findings reveal that some of the design parameters yield the value corresponding to the smallest angle, while others fail to obtain it. The values obtained in this research guarantee that the producer and the consumer are each protected with at most 10% risk.
PubDate: Mar 2022
- New Group Chain Sampling Plan (NGChSP-1) for Generalized Exponential
Distribution
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Nazrina Aziz Tan Jia Xin Zakiyah Zain and Mohd Azri Pawan Teh Acceptance criteria are the conditions imposed on a sampling plan to determine whether a lot is accepted or rejected. The group chain sampling plan (GChSP-1) was constructed with 5 acceptance criteria; the modified group chain sampling plan (MGChSP-1) was derived with 3 acceptance criteria; later, the new group chain sampling plan (NGChSP-1) was introduced with 4 acceptance criteria, balancing the acceptance criteria between the GChSP-1 and MGChSP-1. Producers favor a sampling plan with more acceptance criteria because it reduces the probability of rejecting a good lot (producer's risk), whereas consumers may prefer a sampling plan with fewer acceptance criteria as it reduces the probability of accepting a bad lot (consumer's risk). This disparity in acceptance criteria creates a conflict between the two main stakeholders in acceptance sampling. In the literature, there are numerous methods for developing sampling plans; to date, the NGChSP-1 has been developed using the minimum angle method. In this paper, the NGChSP-1 is constructed with the method of minimizing the consumer's risk for the generalized exponential distribution, where mean product lifetime is used as the quality parameter. Six phases are involved in developing the NGChSP-1 for different design parameters. Results show that the minimum number of groups decreases as the values of the design parameters increase. The performance comparison shows that the NGChSP-1 is a better sampling plan than the GChSP-1 because it has a smaller number of groups and a lower probability of lot acceptance. The NGChSP-1 should offer a better alternative to industrial practitioners in sectors involving product life tests.
PubDate: Mar 2022
- Reversible Jump MCMC Algorithm for Transformed Laplacian AR: Application
in Modeling CO2 Emission Data
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Suparman Hery Suharna Mahyudin Ritonga Fitriana Ibrahim Tedy Machmud Mohd Saifullah Rusiman Yahya Hairun and Idrus Alhaddad The autoregressive (AR) model is applied to model various types of data. For confidential data, data obfuscation is very important to protect the data from being known by unauthorized parties. This paper aims at modeling data with transformations in the AR model, where the noise has a Laplace distribution. The AR model parameters include the order, the coefficients, and the variance of the noise. Estimation of the AR model parameters is carried out in a Bayesian framework using the reversible jump Markov Chain Monte Carlo (MCMC) algorithm. This paper shows that the posterior distribution of the AR model parameters has a complicated form, so the Bayes estimator cannot be determined analytically; instead, Bayes estimators for the AR model parameters are calculated using the reversible jump MCMC algorithm. The algorithm was validated through a simulation study: it accurately estimates the parameters of the transformed AR model with Laplacian noise and produces an AR model that satisfies the stationarity conditions. The novelty of this paper is the use of transformations in the Laplacian AR model to secure research data when the results are published in a scientific journal. As an example application, the Laplacian AR model was used to model CO2 emission data. The results of this paper can be applied to modeling and forecasting confidential data in various sectors.
PubDate: Mar 2022
- A New Algorithm for Spectral Conjugate Gradient in Nonlinear Optimization
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Ahmed Anwer Mustafa Nonlinear conjugate gradient (CG) algorithms have been used to solve large-scale unconstrained optimization problems. Because of their minimal memory needs and global convergence qualities, they are widely used in a variety of fields, and the approach has lately undergone many investigations and modifications for its improvement. The idea behind the conjugate gradient is significant in everyday terms: whatever we do, we strive for the best outcome, such as the highest profit, the lowest loss, the shortest road, or the shortest time, referred to in mathematics as minima and maxima, and one family of methods for finding them is gradient descent. For multidimensional unconstrained objective functions, the spectral conjugate gradient (SCG) approach is a strong tool. In this study, we describe a new SCG technique and quantify its performance. Based on suitable assumptions, we establish the descent condition, the sufficient descent theorem, the conjugacy condition, and global convergence criteria using the strong Wolfe-Powell line search. Numerical data and graphs were produced using benchmark functions, which are often used for comparisons on classical test problems, to demonstrate the efficacy of the recommended approach. According to the numerical results, the suggested strategy is more efficient than some current techniques. In addition, we show how the new method may be utilized to improve solutions and outcomes.
PubDate: Mar 2022
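For orientation, the classical conjugate gradient iteration on a quadratic, with the Fletcher-Reeves update for the search direction, is sketched below; the paper's spectral variant additionally rescales the direction by a spectral parameter, which is not reproduced here.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Minimize 0.5 x'Ax - b'x for symmetric positive definite A,
    i.e. solve Ax = b."""
    x = np.zeros_like(b)
    r = b - A @ x                        # residual = negative gradient
    d = r.copy()                         # initial search direction
    while np.linalg.norm(r) > tol:
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)       # exact line search along d
        x = x + alpha * d
        r_new = r - alpha * Ad
        beta = (r_new @ r_new) / (r @ r) # Fletcher-Reeves conjugacy update
        d = r_new + beta * d
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

On an n-dimensional quadratic the iteration terminates in at most n steps in exact arithmetic, which is the property nonlinear and spectral CG variants try to inherit through line-search conditions such as strong Wolfe-Powell.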
- Estimating Weibull Parameters Using Maximum Likelihood Estimation and
Ordinary Least Squares: Simulation Study and Application on Meteorological
Data
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Nawal Adlina Mohd Ikbal Syafrina Abdul Halim and Norhaslinda Ali Inefficient estimation of distribution parameters for the current climate will lead to misleading results for the future climate. Maximum likelihood estimation (MLE) is widely used to estimate distribution parameters; however, MLE does not perform well for small sample sizes. Hence, the objective of this study is to compare the efficiency of MLE with ordinary least squares (OLS) through a simulation study and a real-data application to wind speed data, based on the model selection criteria Akaike information criterion (AIC) and Bayesian information criterion (BIC). The Anderson-Darling (AD) test is also performed to validate the proposed distribution. In summary, OLS is better than MLE when dealing with small sample sizes and when estimating the shape parameter, while MLE is capable of estimating the scale parameter. Both methods perform well at large sample sizes.
PubDate: Mar 2022
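The OLS approach mentioned in the abstract amounts to linear regression on the linearized Weibull CDF (a Weibull probability plot); the sketch below recovers the shape and scale from simulated data. The plotting positions, sample size, and seed are illustrative choices, and the MLE alternative, which requires solving the score equation iteratively, is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
shape, scale, n = 2.0, 10.0, 500
u = rng.random(n)
data = scale * (-np.log(1 - u)) ** (1 / shape)   # inverse-CDF Weibull sampling

# linearized CDF: ln(-ln(1 - F(t))) = k*ln(t) - k*ln(lambda),
# so the slope of the regression is the shape k
t = np.sort(data)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)      # median-rank plotting positions
y = np.log(-np.log(1 - F))
xlog = np.log(t)
k_hat, intercept = np.polyfit(xlog, y, 1)
scale_hat = np.exp(-intercept / k_hat)
```

The slope estimates the shape parameter directly, which is consistent with the abstract's observation that OLS is particularly suited to shape estimation.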
- Evolution Equations of Pseudo Spherical Images for Timelike Curves in
Minkowski 3-Space
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 H. S. Abdel-Aziz H. Serry and M. Khalifa Saad The pseudo spherical images of non-lightlike curves in Minkowski geometry are curves on the unit pseudo sphere that are intimately related to the curvatures of the original curves. These images are obtained by means of the Frenet-Serret frame vector fields associated with the curves, a classical and well-known construction in the Lorentzian geometry of curves. In this paper, we introduce the pseudo spherical images of a timelike curve in Minkowski 3-space. The main purpose of this work is to obtain the time evolution equations of the orthonormal frame and the curvatures of these images, using the compatibility conditions for the evolutions. Finally, the theoretical results obtained in this study are stated in several theorems and explained in two computational examples with the corresponding graphs.
PubDate: Jul 2022
- 2-Odd Labeling of Graphs Using Certain Number Theoretic Concepts and Graph
Operations
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 Ajaz Ahmad Pir Tabasum Mushtaq and A. Parthiban Graph theory plays a significant role in a variety of real-world systems. Graph concepts such as labeling and coloring are used to depict a variety of processes and relationships in material, social, biological, physical, and information systems. Specifically, graph labeling is used in communication network addressing, fault-tolerant system design, automatic channel allocation, etc. A 2-odd labeling assigns distinct integers to the nodes in such a manner that the positive difference of the labels of adjacent nodes is either 2 or an odd integer. A graph is a 2-odd graph if and only if it admits a 2-odd labeling. Studying certain important modifications through various graph operations on a given graph is interesting and challenging. These operations mainly modify the underlying graph's structure, so understanding the complex operations that can be performed on a graph or a set of graphs is essential. The motivation behind this article is to apply the concept of 2-odd labeling to graphs generated by various graph operations. Further, certain results on 2-odd labeling are also derived using some well-known number-theoretic concepts such as the twin prime conjecture and Goldbach's conjecture, besides recalling a few interesting applications of graph labeling and graph coloring.
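The 2-odd condition described above is easy to check mechanically. A minimal sketch (the example graph, labels, and distinctness check are illustrative, not taken from the paper):

```python
def is_2odd_labeling(edges, label):
    # Labels must be distinct integers, and for every edge (u, v)
    # |label[u] - label[v]| must be exactly 2 or an odd integer
    if len(set(label.values())) != len(label):
        return False
    for u, v in edges:
        d = abs(label[u] - label[v])
        if d != 2 and d % 2 == 0:
            return False
    return True

# Path on 4 vertices labeled 0-2-4-6: every adjacent difference is 2
path = [(0, 1), (1, 2), (2, 3)]
print(is_2odd_labeling(path, {0: 0, 1: 2, 2: 4, 3: 6}))  # True
```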
PubDate: Jul 2022
- Numerical Solution of Nonlinear Fredholm Integral Equations Using
Half-Sweep Newton-PKSOR Iteration
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 Labiyana Hanif Ali Jumat Sulaiman Azali Saudi and Xu Ming Ming This paper is concerned with producing an efficient numerical method for solving nonlinear Fredholm integral equations using Half-Sweep Newton-PKSOR (HSNPKSOR) iteration. The computation of numerical methods for solving nonlinear equations usually involves an immense amount of computational complexity. By implementing a Half-Sweep approach, we attempt to reduce the complexity of the calculation and produce a more efficient method. For this purpose, the steps of the solution process are discussed, beginning with the derivation of the nonlinear Fredholm integral equations using a quadrature scheme to obtain the half-sweep approximation equation. Then, the generated approximation equation is used to develop a nonlinear system. Following that, the formulation of the HSNPKSOR iterative method is constructed to solve nonlinear Fredholm integral equations. To verify the performance of the proposed method, the experimental results were compared with the Full-Sweep Newton-KSOR (FSNKSOR), Half-Sweep Newton-KSOR (HSNKSOR), and Full-Sweep Newton-PKSOR (FSNPKSOR) methods using three parameters: number of iterations, iteration time, and maximum absolute error. Several examples are used in this study to illustrate the efficiency of the tested methods. Based on the numerical experiments, the results show that the HSNPKSOR method is effective in solving nonlinear Fredholm integral equations, mainly in terms of iteration time, compared to the other tested methods.
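The HSNPKSOR method itself is not reproduced here. As a generic illustration of the same pattern — quadrature discretization, an outer Newton iteration, and an inner relaxation (Gauss-Seidel/SOR-type) solver — the sketch below solves the test equation u(x) = x + 0.5 ∫₀¹ x t u(t)² dt, whose exact solution is u(x) = (4 − 2√2)x, on a full-sweep trapezoidal grid. The equation, grid size, and relaxation factor are all illustrative choices.

```python
import math

lam = 0.5          # kernel multiplier in u(x) = x + lam * ∫ x t u(t)^2 dt
n = 20             # number of subintervals
xs = [i / n for i in range(n + 1)]
w = [(0.5 / n) if i in (0, n) else (1.0 / n) for i in range(n + 1)]  # trapezoid

def F(u):
    # Residual of the discretized nonlinear system
    s = sum(w[j] * xs[j] * u[j] ** 2 for j in range(n + 1))
    return [u[i] - xs[i] - lam * xs[i] * s for i in range(n + 1)]

def newton_sor(u, omega=1.0, tol=1e-10):
    for _ in range(50):                       # outer Newton iterations
        f = F(u)
        if max(abs(v) for v in f) < tol:
            break
        # Jacobian: J[i][j] = delta_ij - 2*lam*xs[i]*w[j]*xs[j]*u[j]
        d = [0.0] * (n + 1)                   # solve J d = -f by SOR sweeps
        for _ in range(200):
            for i in range(n + 1):
                off = sum((-2 * lam * xs[i] * w[j] * xs[j] * u[j]) * d[j]
                          for j in range(n + 1) if j != i)
                jii = 1.0 - 2 * lam * xs[i] * w[i] * xs[i] * u[i]
                d[i] += omega * ((-f[i] - off) / jii - d[i])
        u = [u[i] + d[i] for i in range(n + 1)]
    return u

u = newton_sor(xs[:])                 # start from the initial guess u(x) = x
alpha = 4 - 2 * math.sqrt(2)          # exact solution is u(x) = alpha * x
print(abs(u[-1] - alpha))             # small quadrature-level error
```

With omega = 1 the inner solver is plain Gauss-Seidel; the Jacobian here is strictly diagonally dominant, so the inner sweeps converge.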
PubDate: Jul 2022
- Dynamics of Nonlinear Operator Generated by Lebesgue Quadratic Stochastic
Operator with Exponential Measure
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 Nur Zatul Akmar Hamzah Siti Nurlaili Karim Mathuri Selvarajoo and Noor Azida Sahabudin Quadratic stochastic operator (QSO) is a branch of nonlinear operator studies initiated by Bernstein in 1924 through his presentation on population genetics. The study of QSO is still ongoing due to the incomplete understanding of the trajectory behavior of such operators given certain conditions and measures. In this paper, we intend to introduce and investigate a class of QSO named Lebesgue QSO which gets its name from the Lebesgue measure as the measure is used to define the probability measure of such QSO. The broad definition of Lebesgue QSO allows the construction of a new measure as its family of probability measure. We construct a class of Lebesgue QSO with exponential measure generated by 3-partition with three different parameters defined on continual state space . Also, we present the dynamics of such QSO by describing the fixed points and periodic points of the system of equations generated by the defined QSO using a functional analysis approach. The investigation is concluded by the regularity of the operator, where such Lebesgue QSO is either regular or nonregular depending on the parameters and defined measurable partitions. The result of this research allows us to define a new family of functions of the probability measure of Lebesgue QSO and compare their dynamics with the existing Lebesgue QSO.
PubDate: Jul 2022
- A Study on Sylow Theorems for Finding out Possible Subgroups of a Group in
Different Types of Order
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 Md. Abdul Mannan Md. Amanat Ullah Uttam Kumar Dey and Mohammad Alauddin This paper presents a study of the Sylow theorems for different algebraic structures: groups, the order of a group and subgroups, along with the associated notions of the automorphism groups of the dihedral groups, split extensions of groups, and vector spaces arising from the varying properties of real and complex numbers. The Sylow theorems are used throughout this work in their generalized form. Here we discuss the possible subgroups of a group of different types of order, which gives practical knowledge of the applications of the Sylow theorems. In algebraic structures we deal with the operations of addition and multiplication, and in order structures with relations such as greater than and less than. It is through the study of the Sylow theorems that we realize the importance of definitions such as exact sequences and split extensions of groups, Sylow p-subgroups and semi-direct products. It has thus been found necessary and convenient to study these structures in detail: once a given situation is found to satisfy the basic axioms of a structure, the known properties of that structure apply immediately. Finally, we find the possible subgroups of a group of different types of order in both the abelian and non-abelian cases.
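Sylow's third theorem — the number n_p of Sylow p-subgroups divides the p'-part of |G| and satisfies n_p ≡ 1 (mod p) — already narrows the possible subgroup counts considerably. A small sketch of this counting argument (the example orders are illustrative):

```python
def possible_sylow_counts(order, p):
    # By Sylow's third theorem, the number n_p of Sylow p-subgroups
    # divides the p'-part m of |G| and satisfies n_p ≡ 1 (mod p)
    m = order
    while m % p == 0:
        m //= p
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]

# |G| = 15: n_3 = n_5 = 1, so both Sylow subgroups are normal and
# every group of order 15 is cyclic
print(possible_sylow_counts(15, 3), possible_sylow_counts(15, 5))  # [1] [1]
```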
PubDate: Jul 2022
- The Power and Its Graph Simulations on Discrete and Continuous
Distributions
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 Budi Pratikno Nailatul Azizah and Avita Nur Azizah We determined the power and its graph simulations for the discrete Poisson and the Chi-square distributions. There are four important steps in the research methodology, summarized as follows: (1) determine the sufficient statistics (if possible), (2) create the rejection region (the UMPT test is sometimes used), (3) derive the formula of the power, and (4) plot the graphs using the data (in simulation). The formula of the power and its curves are then created using code. The results showed that the power in the discrete illustration (binomial distribution) depended on the number of trials and the bound of the rejection region. The power curve is sigmoid (an S-curve) and tends to zero when the shape parameter is greater than 0.4; it decreases (starting from = 0.2) as the parameter theta increases. In the Poisson context, the power curve of the Poisson distribution is not an S-curve, and it depends only on the shape parameter . We note that the power curve of the Poisson rapidly approaches one for greater than 2 and less than 10. In this case, the size of the Poisson distribution is greater than 0.05, which is not reasonable even though the power is close to one; in this context, we have to choose the maximum power and minimum size. In the context of the Chi-square distribution, the graphs of the power and size functions depend on the rejection region boundary (). Here, we note that the skewness of the power curve is positive as the increases. Similarly, the size also depends on the (and a constant), and it decreases as the increases.
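For the binomial illustration, the power of the one-sided test that rejects when X ≥ k can be computed directly from the binomial tail. A sketch (the values of n and k are arbitrary choices, not the paper's):

```python
from math import comb

def power_binomial(n, k, theta):
    # Power of the one-sided test that rejects H0 when X >= k,
    # where X ~ Binomial(n, theta)
    return sum(comb(n, x) * theta**x * (1 - theta)**(n - x)
               for x in range(k, n + 1))

# Power curve over theta = 0.1, ..., 0.9 for n = 20 trials, bound k = 14;
# plotting these pairs gives the sigmoid (S-shaped) curve described above
curve = [(round(t, 1), power_binomial(20, 14, round(t, 1)))
         for t in [0.1 * i for i in range(1, 10)]]
print(curve)
```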
PubDate: Jul 2022
- Analysis of Limiting Ratios of Special Sequences
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 A. Dinesh Kumar and R. Sivaraman In this paper, we determine the limit of the ratio of the (n+1)th term to the nth term of famous sequences in mathematics: the Fibonacci sequence, Fibonacci-like sequences, Pell's sequence, the generalized Fibonacci sequence, the Padovan sequence, the generalized Padovan sequence, the Narayana sequence, the generalized Narayana sequence, generalized recurrence relations of Fibonacci type, polygonal numbers, the Catalan sequence, Cayley numbers, harmonic numbers and partition numbers. We define this ratio as the limiting ratio of the corresponding sequence. Sixteen different classes of special sequences are considered in this paper, and we determine the limiting ratio for each of them. In particular, we show that the limiting ratio of both the Fibonacci sequence and Fibonacci-like sequences is the fascinating real number called the Golden Ratio, approximately 1.618. We show that the limiting ratio of Pell's sequence is the real number called the Silver Ratio, and that the limiting ratios of the generalized Fibonacci sequences are the metallic ratios. We also obtain the limiting ratios of the Padovan and generalized Padovan sequences. The limiting ratio of the Narayana sequence happens to be the number called the super Golden Ratio, approximately 1.4655, and we show that the limiting ratios of the generalized Narayana sequences are the numbers known as the super metallic ratios. We also show that the limiting ratio of the generalized recurrence relation of Fibonacci type is 2, that of the polygonal numbers and harmonic numbers is 1, and that of the famous Catalan sequence and the Cayley numbers is 4. Finally, assuming Rademacher's formula, we show that the limiting ratio of the partition numbers is the natural logarithmic base e. We prove fourteen theorems to derive the limiting ratios of these well-known sequences.
From these limiting ratio values, we can understand the asymptotic behavior of the terms of all these amusing sequences of numbers. The limiting ratio values also provide an opportunity for application to many counting and practical problems.
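The limiting ratios of the linearly recurrent sequences above can be approximated by simply iterating the recurrences. A sketch (seeds and iteration count are illustrative):

```python
import math

def limiting_ratio(recurrence, seed, n=60):
    # Iterate a linear recurrence and return the ratio of the last
    # two terms, an approximation of the limiting ratio
    terms = list(seed)
    for _ in range(n):
        terms.append(recurrence(terms))
    return terms[-1] / terms[-2]

fib = limiting_ratio(lambda t: t[-1] + t[-2], [1, 1])          # Golden Ratio
pell = limiting_ratio(lambda t: 2 * t[-1] + t[-2], [1, 2])     # Silver Ratio
narayana = limiting_ratio(lambda t: t[-1] + t[-3], [1, 1, 1])  # super Golden
print(fib, pell, narayana)
```

The first two agree with the closed forms (1 + √5)/2 and 1 + √2; the third approaches the super Golden Ratio ≈ 1.4656, the real root of x³ = x² + 1.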
PubDate: Jul 2022
- A New Ranking Approach for Solving Fuzzy Transportation Problem with
Pentagonal Fuzzy Number
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 V. Vidhya and K. Ganesan In any decision-making process, imprecision is a significant issue. Various tools and approaches have been created to deal with the ambiguous environment of collective decision-making, and fuzzy set theory is one of the most recent approaches for coping with imprecision. The Fuzzy Transportation Problem (FTP) is a well-known network-structured linear programming problem that arises in a variety of situations and has received a lot of attention recently. Many authors have defined and solved the fuzzy transportation problem with frequently utilized fuzzy numbers such as triangular or trapezoidal fuzzy numbers. On the other hand, real-world problems usually involve more than four variables. To tackle these concerns, the pentagonal fuzzy number is applied to the problems. This article proposes an approach to solving transportation problems whose parameters are pentagonal fuzzy numbers without requiring an initial feasible solution. An algorithm based on the core and spread method and an extended MODI method is developed to determine the optimal solution to the problem. The proposed process is based on an approximation method and gives a more efficient result. An illustrative example is used to validate the model. As a result, the proposed methodology is both simpler and more computationally efficient than existing approaches.
PubDate: Jul 2022
- Subset Intersection Group Testing Strategy
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 Maheswaran Srinivasan Often, items can be classified as defective or non-defective, and the objective is to identify all the defective items, if any, in the population. The concept of group testing deals with identifying all such defective items using a minimum number of tests. This paper proposes probabilistic group testing through a subset intersection group testing strategy. The proposed 'Subset Intersection Group Testing Strategy' divides the whole population, if it tests positive, into different rows and columns and individually tests all the defective rows and columns. Under this strategy, the number of group tests is either one, when no defective is found, or 1+r+c, where r and c denote the numbers of rows and columns, when at least one defective is found. The proposed algorithms are validated using simulation for different combinations of group size and the incidence probability of an item being defective (p), and implications are drawn. The results indicate that the average number of total tests required is small when p is small and increases considerably as p increases; therefore, for smaller values of p, the proposed strategy is more effective. An attempt is also made to estimate an upper bound for the number of tests under this strategy in various scenarios.
PubDate: Jul 2022
- Construction and Selection of Double Inspection Single Sampling Plan for
an Independent Process Using Bivariate Poisson Distribution
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 D. Senthilkumar and P. Sabarish Several sampling concepts are in active use in production industries for inspecting samples and analysing the performance of the population; sampling plans reduce errors in production and help produce error-free products. In this study, the construction and selection of a Double Inspection Single Sampling Plan (DISSP) by attributes is investigated using the bivariate Poisson distribution. The DISSP methodology is based on two quality characteristics of the same sample size, and the plan parameters (n, C1, C2) are determined from the operating characteristics under the conventional two-point condition on the planning table parameters (AQL and LQL). The plan is designed, based on selected quality requirements and risks, to allow manufacturers to easily determine the required sample size and the corresponding acceptance criteria. A comparison of the efficiency of the proposed plan with an existing single sampling plan is made, and a numerical example is given to illustrate the operating tables. The study also shows the advantages of the proposed double inspection sampling plan through performance curves such as the Operating Characteristic, Average Outgoing Quality, and Average Total Inspection curves.
PubDate: Jul 2022
- Bayesian Model Averaging in Modeling of State Specific Failure Rates in
HIV/AIDS Progression
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 Nahashon Mwirigi Prof. Richard Simwa Dr. Mary Wainaina and Dr. Stanley Sewe In modeling HIV/AIDS progression, we carried out a comprehensive investigation into the risk factors for state-specific failure rates to identify the influential covariates using the Bayesian Model Averaging (BMA) method. BMA provides, via Markov Chain Monte Carlo (MCMC), a posterior probability for each variable that belongs to the model. It accounts for model uncertainty by averaging all plausible models, using their posterior probabilities as the weights for model-averaged predictions and estimates of the required parameters. Patients' age and gender, among other covariates, have been found to strongly influence the state-specific failure rates; however, the impact of each factor on the state-specific failure rates had not been quantified. This paper seeks to evaluate and quantify the contribution of the patient's age and gender, the CD4 cell count during any two consecutive visits, and state movement to the state-specific failure rates for patients transiting to the same, a better, or a worse state. We used the RStudio statistical programming software to implement the method, applying the BMS and BMA packages. For patients transiting to the same state, state movement had a comparatively large coefficient with a posterior inclusion probability (PIP) of 0.8788 (87.88%) and was hence the most critical variable, followed by the observation-two CD4 cell count with a PIP of 0.1416 (14.16%); age and gender were last, with PIPs of 0.0556 (5.56%) and 0.0510 (5.10%) respectively. For patients transiting to a better state, the patients' age group dominated with a PIP of 0.9969 (99.69%), followed by patients' gender with a PIP of 0.0608 (6.08%); patients' CD4 cell count during the second observation had the lowest PIP of 0.0399 (3.99%).
For patients transiting to a worse disease state, patients' CD4 cell count during the second observation proved to be the most important, with a PIP of 0.6179 (61.79%), followed by state movement with a PIP of 0.2599 (25.99%); patients' gender trailed with a PIP of 0.0467 (4.67%).
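The paper's MCMC-based BMA (via the BMS and BMA R packages) is not reproduced here. As a sketch of the underlying idea of weighting models by posterior probability, the snippet below uses the standard BIC approximation to posterior model weights; the BIC values and predictions are made up.

```python
import math

def bma_weights(bics):
    # Approximate posterior model probabilities from BIC values:
    # w_i ∝ exp(-0.5 * (BIC_i - BIC_min))
    b0 = min(bics)
    raw = [math.exp(-0.5 * (b - b0)) for b in bics]
    s = sum(raw)
    return [r / s for r in raw]

def bma_predict(predictions, bics):
    # Model-averaged prediction: posterior-weighted mean over models
    return sum(wi * pi for wi, pi in zip(bma_weights(bics), predictions))

w = bma_weights([100.0, 102.0, 110.0])
print(w)  # the lowest-BIC model receives the dominant weight
```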
PubDate: Jul 2022
- Double Duplication of Special Classes of Cycle Related Graphs
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 R. Kuppan and L. Shobana Let be a simple, finite, connected, plane graph with the vertex set , the edge set and the face set . Martin Baca [1] defined a connected plane graph with vertex set , edge set and face set to be face antimagic if there exist positive integers and and a bijection : such that the induced mapping : , where for a face , is the sum of all for all edges surrounding , is also a bijection. This paper proves the existence of face antimagic labeling for the double duplication of all vertices by edges of the gear graph for , the grid graph for , where is even, the prism graph for , and the double duplication of all vertices by edges of the strong face of the triangular snake graph for . The face antimagic labeling of the double duplication of special graphs can be used to encrypt and decrypt messages, which serves as a real-time application. In [3], we used the face antimagic labeling of the strong face of the duplication of all vertices by edges of a tree for to encrypt and decrypt thirteen secret numbers; this can be extended to the double duplication of graphs to encode and decode numbers, which in turn can be used in military bases, ATMs and so on.
PubDate: Jul 2022
- Uncertainty Optimization-Based Rough Set for Incomplete Information
Systems
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 Arvind Kumar Sinha and Pradeep Shende Often the information in the surrounding world is incomplete, and such incomplete information gives rise to uncertainties. Pawlak's rough set model is an approach to approximation under uncertainty. It uses a tolerance relation to obtain a single granulation of the incomplete information system for approximation. In this work, we extend the single granulation rough set for incomplete information systems to an uncertainty optimization-based rough set (UOBRS). The proposed approach minimizes the uncertainty using multiple tolerance relations. We list the properties of the UOBRS for incomplete information systems and compare the UOBRS with the classical single granulation rough set (SGRS) and the multi-granular rough set (MGRS). We also list the basic properties of the UOBMGRS. We introduce the application of the UOBRS to attribute subset selection in the case of incomplete information systems, using the measure of approximation quality to assess the uncertainties of the attributes. Comparing the approximation quality of the attributes under the UOBRS with that under the SGRS and MGRS, we obtain higher approximation quality with fewer attributes using the UOBRS. The proposed method is a novel approach to dealing with incomplete information systems for more effective dataset analysis.
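A minimal sketch of the single-granulation, tolerance-relation approximations that the UOBRS extends; the toy decision table and target set are illustrative, with None marking a missing attribute value.

```python
def tolerant(x, y):
    # Two objects are tolerant if they agree on every attribute
    # where neither value is missing (None)
    return all(a is None or b is None or a == b for a, b in zip(x, y))

def approximations(table, target):
    # Lower/upper approximation of a target set of object indices
    # under the tolerance relation of an incomplete table
    n = len(table)
    classes = {i: {j for j in range(n) if tolerant(table[i], table[j])}
               for i in range(n)}
    lower = {i for i in range(n) if classes[i] <= target}
    upper = {i for i in range(n) if classes[i] & target}
    return lower, upper

# Incomplete information table: rows are objects, None is a missing value
table = [(1, 0), (1, None), (0, 1), (None, 1)]
lo, up = approximations(table, {0, 1})
print(lo, up)  # {0} {0, 1, 3}
```

Objects whose whole tolerance class lies inside the target set form the lower approximation; those whose class merely meets it form the upper one, and the gap between the two is the boundary of uncertainty.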
PubDate: Jul 2022
- Perfect Codes in the Spanning and Induced Subgraphs of the Unity Product
Graph
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 Mohammad Hassan Mudaber Nor Haniza Sarmin and Ibrahim Gambo The unity product graph of a ring is a graph which is obtained by setting the set of unit elements of as the vertex set. The two distinct vertices and are joined by an edge if and only if . The subgraphs of a unity product graph which are obtained by the vertex and edge deletions are said to be its induced and spanning subgraphs, respectively. A subset of the vertex set of induced (spanning) subgraph of a unity product graph is called perfect code if the closed neighbourhood of , forms a partition of the vertex set as runs through . In this paper, we determine the perfect codes in the induced and spanning subgraphs of the unity product graphs associated with some commutative rings with identity. As a result, we characterize the rings in such a way that the spanning subgraphs admit a perfect code of order cardinality of the vertex set. In addition, we establish some sharp lower and upper bounds for the order of to be a perfect code admitted by the induced and spanning subgraphs of the unity product graphs.
PubDate: Jul 2022
- Cluster Analysis on Various Cluster Validity Indexes with Average Linkage
Method and Euclidean Distance (Study on Compliant Paying Behavior of Bank
X Customers in Indonesia 2021)
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 Solimun Solimun and Adji Achmad Rinaldo Fernades This study aims to examine the differences among various cluster validity indexes in the grouping of credit customers at Bank X, Malang City, Indonesia, using the average linkage and Euclidean distance methods. The study uses primary data on the variables service quality, environment, mode, willingness to pay, and compliant paying behavior, obtained through a questionnaire with a Likert scale via purposive sampling distributed to 100 respondents. The data are then cluster-analyzed using the ward linkage and Euclidean distance methods on various cluster validity indexes, with the Silhouette, Krzanowski-Lai, Dunn, Gap, Davies-Bouldin, Index C, Global Silhouette and Goodman-Kruskal indexes used in this study as analysis tools. The study uses R software. The results show that the Krzanowski-Lai, Dunn, Gap, Global Silhouette, and Goodman-Kruskal indexes yield the same cluster members, as do the Silhouette and Davies-Bouldin indexes. The best cluster indexes are the Silhouette and Davies-Bouldin indexes. All validity indexes produce the same between-cluster and within-cluster variance. The novelty of this study is the simultaneous comparison of 8 validity indexes, namely the Silhouette, Krzanowski-Lai, Dunn, Gap, Davies-Bouldin, Index C, Global Silhouette, and Goodman-Kruskal indexes.
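As a sketch of one of the compared indexes, the snippet below hand-computes the mean Silhouette width for one-dimensional points; the data are made up, and the study itself uses R rather than Python.

```python
def silhouette(points, labels):
    # Mean silhouette width for 1-D points with Euclidean distance:
    # s(i) = (b_i - a_i) / max(a_i, b_i), where a_i is the mean
    # intra-cluster distance and b_i the mean distance to the
    # nearest other cluster (points assumed distinct)
    def mean_dist(p, group):
        return sum(abs(p - q) for q in group) / len(group)
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    scores = []
    for p, l in zip(points, labels):
        own = [q for q in clusters[l] if q != p]
        if not own:
            continue  # singleton clusters contribute no score here
        a = mean_dist(p, own)
        b = min(mean_dist(p, c) for k, c in clusters.items() if k != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated clusters score close to 1
print(silhouette([1.0, 2.0, 10.0, 11.0], [0, 0, 1, 1]))
```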
PubDate: Jul 2022
- Analysis of IBFS for Transportation Problem by Using Various Methods
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 S. K. Sharma and Keshav Goel The supply, demand and transportation cost in a transportation problem cannot always be handled directly by the existing methods, and various methods have been proposed in the literature for calculating the transportation cost. In this paper, we compare various methods for measuring the optimal cost; the objective is to obtain the IBFS of real-life problems by various methods, including AMM (Arithmetic Mean Method), ASM (Assigning Shortest Minimax Method), etc. The Initial Basic Feasible Solution (IBFS) is one of the most important parts of analyzing the optimal cost of a transportation problem. We analyze the transportation cost for many applications of the transportation problem, such as image registration and warping, reflector design, seismic tomography and reflection seismology. The TP is used to find the best solution in such a way that products produced at several sources (origins) are supplied to the various destinations. The main objective of a transportation problem is to fulfil all the requirements of the destinations at the lowest possible cost, and all transport companies look forward to adopting new approaches for minimizing the cost. Along these lines, a necessary as well as sufficient condition for the transportation problem to have a feasible solution is that total supply equals total demand. A numerical example is solved by the different approaches for obtaining an IBFS.
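The AMM and ASM methods named above are not reproduced here. As a baseline illustration of computing an IBFS, the sketch below applies the classical North-West Corner rule to a made-up balanced cost matrix (total supply = total demand = 270):

```python
def northwest_corner(supply, demand):
    # Classical North-West Corner rule for an initial basic feasible
    # solution: fill cells from the top-left, exhausting one row or
    # column at a time
    s, d = supply[:], demand[:]
    alloc = [[0] * len(d) for _ in s]
    i = j = 0
    while i < len(s) and j < len(d):
        q = min(s[i], d[j])
        alloc[i][j] = q
        s[i] -= q
        d[j] -= q
        if s[i] == 0:
            i += 1
        else:
            j += 1
    return alloc

cost = [[4, 3, 5], [6, 5, 4], [8, 10, 7]]
alloc = northwest_corner([90, 80, 100], [70, 120, 80])
total = sum(c * a for row_c, row_a in zip(cost, alloc)
            for c, a in zip(row_c, row_a))
print(alloc, total)  # total transportation cost 1500
```

This IBFS ignores costs entirely, which is why cost-aware rules (and optimality methods such as MODI) usually improve on it.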
PubDate: Jul 2022
- Advancement of Generalized Method of Moment Estimation (GMM) For Spatial
Dynamic Panel Simultaneous Equations Models with Fixed Time Effect
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 Dwi Endah Kusrini Setiawan Heri Kuswanto and Budi Nurani Ruchjana This research paper aims to form and estimate spatial dynamic panel simultaneous equations models (SDPS) with a fixed time effect that potentially exhibit heteroscedasticity. In the model formed, the individual effect is not eliminated but placed in the error term to accommodate heteroscedasticity. GMM with the two-stage least squares (2SLS) method for a single equation is deliberately chosen as the estimation method for the SDPS model because it can eliminate heterogeneity in the model. The effectiveness of the estimation is assessed through the RMSE (Root Mean Square Error) and the mean and standard deviation (SD) of the estimation bias, obtained by running 100 Monte Carlo simulations with different parameter pairs and different pairs of N and T. It can be concluded that changes in the parameter scenarios have little effect on the mean and SD of the bias. The SDPS model shows that consistency of the estimated parameter values can be achieved easily if T is increased. Changes in N and T indicate that the greater N and T are, the smaller the RMSE value tends to be.
PubDate: Jul 2022
- Solving Lorenz System by Using Lower Order Symmetrized Runge-Kutta Methods
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 N. Adan N. Razali N. A. Zainuri N. A. Ismail A. Gorgey and N. I. Hamdan Runge-Kutta is a widely used numerical method for solving the nonlinear Lorenz system. This study focuses on solving the Lorenz equations with the classical parameter values using the lower order symmetrized Runge-Kutta methods, the Implicit Midpoint Rule (IMR) and the Implicit Trapezoidal Rule (ITR). We show the construction of the symmetrized method and present numerical experiments based on the two methods without symmetrization and with one- and two-step active symmetrization in a constant step size setting. For the numerical experiments, we use MATLAB to solve and plot the graphical solutions of the Lorenz system. Comparing the oscillatory behaviour of the solutions, it appears that IMR and two-step active IMR turn out to be chaotic while the rest are non-chaotic. We also compare the accuracy and efficiency of the methods; the results show that IMR performs better than its symmetrizers, while two-step active ITR performs better than ITR and one-step active ITR. Based on the results, we conclude that different implicit numerical methods with different steps of active symmetrization can significantly impact the solutions of the nonlinear Lorenz system. Since most studies on solving the Lorenz system are based on explicit time schemes, we hope this study motivates other researchers to analyze the Lorenz equations further using Runge-Kutta methods based on implicit time schemes.
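The MATLAB experiments are not reproduced here. A minimal sketch of the Implicit Midpoint Rule (IMR) applied to the Lorenz system with the classical parameters, with the implicit stage solved by fixed-point iteration (step size and iteration count are illustrative choices, without symmetrization):

```python
def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Lorenz system with the classical parameter values
    x, y, z = v
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def imr_step(v, h, iters=50):
    # Implicit Midpoint Rule: v_{n+1} = v_n + h * f((v_n + v_{n+1}) / 2),
    # solved here by simple fixed-point iteration (converges for small h)
    w = v
    for _ in range(iters):
        mid = tuple((a + b) / 2 for a, b in zip(v, w))
        f = lorenz(mid)
        w = tuple(a + h * fi for a, fi in zip(v, f))
    return w

v = (1.0, 1.0, 1.0)
for _ in range(1000):          # integrate to t = 5 with h = 0.005
    v = imr_step(v, 0.005)
print(v)                       # trajectory stays on the bounded attractor
```

A Newton inner solver would be the more robust choice for larger step sizes; fixed-point iteration keeps the sketch short.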
PubDate: Jul 2022
- Markowitz Random Set and Its Application to the Paris Stock Market Prices
Abstract: Publication date: Jul 2022
Source:Mathematics and Statistics Volume 10 Number 4 Ahssaine Bourakadi Naima Soukher Baraka Achraf Chakir and Driss Mentagui In this paper, we combine random set theory and portfolio theory through the estimation of the lower bound of the Markowitz random set, based on the mean-variance analysis of asset portfolios, which represents the efficient frontier of a portfolio. There are several Markowitz optimization approaches, of which the best known and most used in modern portfolio theory are Markowitz's approach, the Markowitz-Sharpe approach and the Markowitz-Perold approach; generally, these methods are based on minimizing the variance of the portfolio return. The method used in this paper is completely different from those mentioned above, because it is based on the theory of random sets, which gives us the mathematical structure and the graphical representation of the Markowitz set. The graphical representation of the Markowitz set gives an idea of the investment region. This region, called the investment zone, contains the stocks in which a rational investor can choose to invest. Mathematical and statistical estimation techniques are used in this paper to find the explicit form of the Markowitz random set and to study its elements as functions of the signs of the estimated parameters. Finally, we apply the results to the returns of a portfolio composed of 200 assets from the Paris stock market. The results obtained from this simulation give an idea of the stocks to recommend to investors: to optimize their choices, these are the stocks located above the curve of the hyperbola that represents the Markowitz set.
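The random-set estimation itself is not reproduced here. As a sketch of the underlying mean-variance idea, the snippet below computes the global minimum-variance portfolio of two hypothetical risky assets (all means, variances and the covariance are made up):

```python
def min_variance_weight(var1, var2, cov):
    # Weight on asset 1 in the global minimum-variance portfolio
    # of two risky assets (the leftmost point of the frontier hyperbola)
    return (var2 - cov) / (var1 + var2 - 2 * cov)

def portfolio(w, mu1, mu2, var1, var2, cov):
    # Mean and variance of the portfolio w * asset1 + (1 - w) * asset2
    mean = w * mu1 + (1 - w) * mu2
    var = w * w * var1 + (1 - w) ** 2 * var2 + 2 * w * (1 - w) * cov
    return mean, var

w_star = min_variance_weight(0.04, 0.09, 0.01)
print(w_star)  # ≈ 0.727: the lower-variance asset gets the larger weight
```

Sweeping w and plotting (standard deviation, mean) traces the frontier hyperbola mentioned in the abstract.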
PubDate: Jul 2022
- Limit Theorems for The Sums of Random Variables in A Special Form
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Azam A. Imomov and Zuhriddin A. Nazarov In this paper, we consider some functionals of the sums of independent identically distributed random variables. These functionals of sums are important in probabilistic models and stochastic branching systems, and in connection with such applications we are interested in whether the law of large numbers and the central limit theorem hold for these sums. The main hypotheses of the paper are the existence of second-order moments of the variables and the fulfillment of the Lindeberg condition. The object of study consists of specially generated random variables built from sums of non-bound random variables. In total, 6 different sums of a special form are studied in the paper; these sums have not previously been studied by other authors. The purpose of the paper is to examine whether these sums of a special form satisfy the conditions of the law of large numbers and the central limit theorem, and the main result is to show that the law of large numbers and the conclusions of the classical limit theorem hold in some cases. The results obtained are of theoretical importance; the central limit theorem analogues proved here are applications of the Lindeberg theorem. The results can be applied to determining the fluctuations of branching systems with immigration as well as the asymptotic state of autoregression processes, and they can also be used in practical courses on probability theory. The results of the paper will be an important guide for young researchers, and the theorems proved here can be used in probability theory, stochastic branching systems and other practical problems.
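A quick simulation of the kind of CLT behaviour the paper studies, for plain (not the paper's specially generated) sums of iid uniforms; sample sizes and seed are arbitrary.

```python
import math
import random

def standardized_sum(n, rng):
    # Sum of n iid Uniform(0,1) variables, centered and scaled:
    # (S_n - n/2) / sqrt(n/12) is approximately N(0, 1) by the CLT
    s = sum(rng.random() for _ in range(n))
    return (s - n / 2) / math.sqrt(n / 12)

rng = random.Random(42)
zs = [standardized_sum(100, rng) for _ in range(5000)]
mean = sum(zs) / len(zs)
var = sum(z * z for z in zs) / len(zs) - mean ** 2
print(mean, var)  # close to 0 and 1 respectively
```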
PubDate: Jan 2022
- Fuzzy EOQ Model for Time Varying Deterioration and Exponential Time
Dependent Demand Rate under Inflation
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 K. Geetha and S. P. Reshma In this study we discuss a fuzzy EOQ model for deteriorating products with time-varying deterioration under inflation and an exponential time-dependent demand rate. Shortages are not allowed in this fuzzy EOQ model, and the impact of inflation is investigated. An inventory model is used to determine whether the order quantity is more than or equal to a predetermined quantity for deteriorating items. The optimal solution for the model is derived by taking a truncated Taylor series approximation to obtain a closed-form optimal solution. The cost of deterioration, cost of ordering, cost of holding, and the time taken to settle the delay in account are represented by triangular fuzzy numbers, which are used to estimate the optimal order quantity and cycle duration. Furthermore, we use the graded mean integration method and the signed distance approach to defuzzify these values. To validate the model, numerical examples are discussed for all cases with the help of a sensitivity analysis over different parameters. Finally, it is established that a higher decay rate results in a shorter optimal cycle time as well as a higher overall relevant cost. The presented model can also accommodate demand as a quadratic function of time, stock-level- and time-dependent demand, selling price, and other variables.
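The two defuzzification methods named in the abstract have commonly quoted closed forms for a triangular fuzzy number A = (a1, a2, a3). The sketch below uses those standard forms; the numeric holding cost is a made-up illustration, not data from the paper.

```python
# Defuzzification of a triangular fuzzy number A = (a1, a2, a3) by the
# two methods named in the abstract, in their commonly quoted forms.
# The numeric holding cost below is a hypothetical illustration.

def graded_mean_integration(a1, a2, a3):
    # P(A) = (a1 + 4*a2 + a3) / 6
    return (a1 + 4 * a2 + a3) / 6

def signed_distance(a1, a2, a3):
    # d(A, 0) = (a1 + 2*a2 + a3) / 4
    return (a1 + 2 * a2 + a3) / 4

holding_cost = (2.0, 3.0, 4.5)       # hypothetical fuzzy holding cost
gmi = graded_mean_integration(*holding_cost)
sd = signed_distance(*holding_cost)
print(round(gmi, 4), round(sd, 4))   # 3.0833 3.125
```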
PubDate: Jan 2022
- Newton-PKSOR with Quadrature Scheme in Solving Nonlinear Fredholm Integral
Equations
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Labiyana Hanif Ali, Jumat Sulaiman and Azali Saudi In this study, we apply the Newton method with a new version of KSOR, called PKSOR, to form NPKSOR for solving nonlinear Fredholm integral equations of the second kind. PKSOR updates the KSOR method with two relaxation parameters. The properties of KSOR help enlarge the domain of admissible relaxation parameter values. In PKSOR, the single relaxation parameter of KSOR is split into two different relaxation parameters, resulting in a lower number of iterations compared with the KSOR method. By combining the Newton method with PKSOR, we form a more efficient method for solving nonlinear Fredholm integral equations. The discretization in this study uses a first-order quadrature scheme to develop a nonlinear system. We formulate the solution of the nonlinear system by reducing it to a linear system and then solving it with iterative methods to obtain an approximate solution. Furthermore, we compare the results of the proposed method with the NKSOR and NGS methods on three examples. Based on our findings, the NPKSOR method is more efficient than the NKSOR and NGS methods: by considering two relaxation parameters, it boosts the convergence rate of the iteration, resulting in a lower number of iterations and less computational time.
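For orientation, here is the classical SOR iteration for a linear system, the single-parameter baseline of the relaxation family to which KSOR and PKSOR belong. The PKSOR update itself (with its two relaxation parameters) is not reproduced; this sketch only shows how a relaxation parameter enters the inner linear solves.

```python
import numpy as np

# Classical SOR iteration for A x = b: each sweep blends the Gauss-
# Seidel update with the previous iterate via the relaxation
# parameter omega.

def sor(A, b, omega=1.25, tol=1e-10, max_iter=500):
    x = np.zeros(len(b))
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x, it + 1

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
x, iters = sor(A, b)
print(np.round(x, 6))   # close to the exact solution [1, 2, 3]
```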
PubDate: Jan 2022
- Modelling of Cointegration with Student's T-errors
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Nimitha John and Balakrishna Narayana Two or more non-stationary time series are said to be cointegrated if a certain linear combination of them becomes stationary. Identification of cointegrating relationships among the relevant time series helps researchers develop efficient forecasting methods. The classical approach to analyzing such series is to express the cointegrated time series in the form of error correction models with Gaussian errors. However, the modeling and analysis of cointegration in the presence of non-normal errors needs to be developed, as most real time series in finance and economics deviate from the assumption of normality. This paper focuses on modeling a bivariate cointegration with Student's t-distributed errors. The cointegrating vector obtained from the error correction equation is estimated by the method of maximum likelihood. A unit root test for a first-order non-stationary process with Student's t-errors is also defined. The resulting estimators are used to construct test procedures for testing the unit root and the cointegration associated with two time series. The likelihood equations are solved using numerical approaches because the estimating equations do not have an explicit solution. A simulation study is carried out to illustrate the finite sample properties of the model; the simulation experiments show that the estimates perform reasonably well. The applicability of the model is illustrated by analyzing time series of Bombay Stock Exchange indices and crude oil prices, for which the proposed model is found to be a good fit.
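A toy version of the setting can be simulated directly: a random-walk regressor and a stationary Student's-t error make the pair cointegrated. The sketch below recovers the cointegrating coefficient by simple least squares for illustration; the paper's maximum likelihood procedure is not reproduced.

```python
import numpy as np

# Toy bivariate cointegration: x is a random walk (I(1)) and
# y = beta * x + u with stationary Student's-t errors, echoing the
# paper's error assumption.

rng = np.random.default_rng(42)
n, beta = 5000, 2.0
x = np.cumsum(rng.standard_t(df=5, size=n))   # non-stationary regressor
u = rng.standard_t(df=5, size=n)              # stationary t error
y = beta * x + u

beta_hat = (x @ y) / (x @ x)                  # OLS through the origin
resid = y - beta_hat * x                      # approximately stationary
print(round(beta_hat, 2))                     # close to 2.0
```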
PubDate: Jan 2022
- Expectation-Maximization Algorithm Estimation Method in Automated Model
Selection Procedure for Seemingly Unrelated Regression Equations Models
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Nur Azulia Kamarudin, Suzilah Ismail and Norhayati Yusof Model selection is the process of choosing a model from a set of possible models. A model's ability to generalise means it can fit both current and future data. Despite the emergence of numerous procedures for selecting models automatically, there has been a lack of studies on procedures for selecting multiple-equation models, particularly seemingly unrelated regression equations (SURE) models. Hence, this study concentrates on an automated model selection procedure for the SURE model that integrates the expectation-maximization (EM) algorithm estimation method, named SURE(EM)-Autometrics. This procedure extends Autometrics, which is applicable only to single equations. To assess the performance of SURE(EM)-Autometrics, a simulation analysis was conducted under two strengths of correlation among equations and two levels of significance for a two-equation model with up to 18 variables in the initial general unrestricted model (GUM). Three econometric models were utilised as a testbed for the true specification search. The results were divided into four categories, where a tight significance level of 1% contributed a high percentage of models in which all equations contained variables precisely matching the true specifications. Then, an empirical comparison of four model selection techniques was conducted using water quality index (WQI) data. System selection, which selects all equations in the model simultaneously, proved more efficient than single-equation selection. SURE(EM)-Autometrics dominated the comparison by ranking at the top for most of the error measures. Hence, integrating EM algorithm estimation is appropriate for improving the performance of automated model selection procedures for multiple-equation models.
PubDate: Jan 2022
- The Power of Test of Jennrich Statistic with Robust Methods in Testing the
Equality of Correlation Matrices
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Bahtiar Jamili Zaini and Shamshuritawati Md Sharif The Jennrich statistic is a method that can be used to test the equality of 2 or more independent correlation matrices. However, the Jennrich statistic becomes problematic in the presence of outliers, which can lead to invalid results: outliers affect the Type I error and reduce the power of the test. To overcome the presence of outliers, this study suggests robust methods as an alternative and integrates robust estimators into the Jennrich statistic, thereby improving the performance of correlation matrix hypothesis testing under outlier problems. This study therefore proposes 3 statistical tests, namely the Js-statistic, Jm-statistic, and Jmad-statistic, that can be used to test the equality of 2 or more correlation matrices. The performance of the proposed methods is assessed using the power of the test. The results show that the Jm-statistic and Jmad-statistic can overcome outlier problems in the Jennrich statistic when testing the correlation matrix hypothesis. The Jmad-statistic is also superior in testing the correlation matrix hypothesis for different sample sizes, especially those involving 10% outliers.
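The classical two-sample Jennrich statistic, written here from its commonly quoted form, is the building block the robust variants modify. Under the null of equal population correlation matrices it is asymptotically chi-squared with p(p-1)/2 degrees of freedom; the paper's Js, Jm and Jmad variants change the correlation estimates feeding into R1 and R2 and are not reproduced here.

```python
import numpy as np

# Sketch of the classical two-sample Jennrich test: R1, R2 are sample
# correlation matrices from independent samples of sizes n1, n2.

def jennrich(R1, R2, n1, n2):
    p = R1.shape[0]
    Rbar = (n1 * R1 + n2 * R2) / (n1 + n2)      # pooled correlation
    c = n1 * n2 / (n1 + n2)
    Z = np.sqrt(c) * np.linalg.solve(Rbar, R1 - R2)
    S = np.eye(p) + Rbar * np.linalg.inv(Rbar)  # elementwise product
    dg = np.diag(Z)
    return 0.5 * np.trace(Z @ Z) - dg @ np.linalg.solve(S, dg)

R = np.array([[1.0, 0.3], [0.3, 1.0]])
print(jennrich(R, R, 50, 60))   # identical matrices give 0.0
```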
PubDate: Jan 2022
- Solving Multi-Response Problem Using Goal Programming Approach and
Quantile Regression
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Sara Abdel Baset, Ramadan Hamed, Maha El-Ashram and Zakaria Abdel Samea Response surface methodology (RSM) is a group of mathematical and statistical techniques helpful for improving, developing and optimizing processes. It also has important uses in the design, development and formulation of new products, as well as in the enhancement of existing products. RSM is used to discover response functions that meet and fulfill all quality diagnostics simultaneously. Most applications have more than one response, and the main problem is multi-response optimization (MRO). The classical methods used to solve the MRO problem do not guarantee optimal designs and solutions; besides, they take a long time and depend on the researcher's judgment. Some researchers have therefore used goal programming-based methods; however, these still do not guarantee an optimal solution. This study aims to form a goal programming model derived from a chance-constrained approach using quantile regression to deal with outliers and non-normal errors. Quantile regression describes the relationship between the responses and the control variables at distinct points of the conditional response distribution, and the model also accounts for uncertainty. An illustrative example and a simulation study are presented for the suggested model.
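The quantile-regression ingredient rests on the check (pinball) loss: the tau-th sample quantile minimizes the average check loss. This sketch illustrates only that building block; the paper embeds quantile regression inside a chance-constrained goal-programming model.

```python
import numpy as np

# The check (pinball) loss that defines quantile regression, verified
# on skewed, outlier-prone data: the minimizer of the average loss is
# the tau-th sample quantile.

def check_loss(tau, y, q):
    e = y - q
    return np.mean(np.where(e >= 0, tau * e, (tau - 1) * e))

rng = np.random.default_rng(1)
y = rng.exponential(scale=2.0, size=20000)   # skewed data
tau = 0.75
grid = np.linspace(0.0, 10.0, 2001)
q_star = grid[np.argmin([check_loss(tau, y, q) for q in grid])]
print(round(q_star, 1), round(np.quantile(y, tau), 1))  # nearly equal
```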
PubDate: Jan 2022
- Some Properties of BP-Space
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Ahmed Talip Hussein and Emad Allawi Shallal Y. Imai and K. Iseki [4], and K. Iseki [5], introduced classes of abstract algebras called BCK-algebras and BCI-algebras. It is known that the class of BCK-algebras is a proper subclass of the class of BCI-algebras. Q. P. Hu [2] and X. Li [3] introduced a wider class of abstract algebras, BCH-algebras, and showed that the class of BCI-algebras is a proper subclass of the class of BCH-algebras. Moreover, J. Neggers and H. S. Kim [9] introduced the notion of d-algebras, which are another generalization of BCK-algebras, and investigated relations between d-algebras and BCK-algebras. Various topologies on such lattices have been studied, but the question of making the binary operation of a d-algebra continuous was not discussed. In this paper we define a Tb-algebra, obtain several properties of this structure and its most significant features, and arrive at a new class of spaces, called BP-spaces, for which we obtain the following results. Let be a B-space and a periodic proportional map. Then is a compact set in and = , . Also, if is invariant under , then , and are invariant under for every Q in if is also. If the function is closed (one-to-one), then , () is invariant under , and the set of interior points of is invariant under if the function is open and .
PubDate: Jan 2022
- Solving Differential Equations of Fractional Order Using Combined Adomian
Decomposition Method with Kamal Integral Transformation
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Muhamad Deni Johansyah, Asep K Supriatna, Endang Rusyaman and Jumadil Saputra A differential equation is an equation that involves derivatives of a dependent variable with respect to one or more independent variables. A derivative represents a rate of change, and a differential equation describes the relationship between a quantity that changes with respect to changes in another quantity. The Adomian decomposition method is an iterative method that can be used to solve differential equations of integer or fractional order, linear or nonlinear, ordinary or partial. This method can be combined with integral transformations such as the Laplace, Sumudu, Natural, Elzaki, Mohand, Kashuri-Fundo, and Kamal transforms. The main objective of this research is to solve differential equations of fractional order using a combination of the Adomian decomposition method and the Kamal integral transformation, and the solutions obtained with this combined method are investigated. The main finding of our study is that the combined method is very accurate in solving differential equations of fractional order. The present results are original and new for solving differential equations of fractional order, and an illustrative example is solved to show the efficiency of the proposed method.
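The Kamal transform itself is easy to check numerically against its commonly quoted table values K{t^n} = n! v^(n+1). This sketch only illustrates the transform; the Adomian decomposition step of the combined method is omitted.

```python
import math
from scipy.integrate import quad

# Numerical check of the Kamal transform,
# K{f}(v) = integral over [0, inf) of f(t) * exp(-t/v) dt,
# against the table values K{t^n} = n! v^(n+1).

def kamal(f, v):
    val, _ = quad(lambda t: f(t) * math.exp(-t / v), 0.0, math.inf)
    return val

v = 2.0
print(round(kamal(lambda t: 1.0, v), 4))    # 0! v^1 = 2.0
print(round(kamal(lambda t: t, v), 4))      # 1! v^2 = 4.0
print(round(kamal(lambda t: t * t, v), 4))  # 2! v^3 = 16.0
```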
PubDate: Jan 2022
- Fuzzy Number – A New Hypothesis and Solution of Fuzzy Equations
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Vijay C. Makwana, Vijay. P. Soni, Nayan I. Patel and Manoj Sahni In this paper, a new hypothesis of fuzzy number is proposed which is more precise and direct. The new approach treats a fuzzy number as an equivalence class on the set of real numbers R, with its algebraic structure and properties developed through both theoretical study and computational results. The newly defined hypothesis provides a well-structured summary that offers both a deeper knowledge of the theory of fuzzy numbers and an extensive view of its algebra. We define a field of the newly defined fuzzy numbers, which opens a new direction for fuzzy mathematics. It is shown that, by using the newly defined fuzzy number and its membership function, we are able to solve fuzzy equations in an uncertain environment, and we illustrate the solution of fuzzy linear and quadratic equations; this can be extended to higher-order polynomial equations in the future. Linear fuzzy equations have numerous applications in science and engineering, and iterative methods for systems of fuzzy linear equations can be developed in a simple and natural way using this new methodology. This is an innovative and purposeful study of fuzzy numbers, together with the replacement of the ordinary fuzzy number by the newly defined one.
PubDate: Jan 2022
- A Griffith Crack at the Interface of an Isotropic and Orthotropic Half
Space Bonded Together
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 A. K. Awasthi, Rachna and Harpreet Kaur Over the past 53 years, many efforts have been devoted to developing and demonstrating the properties of reinforced composite materials. The ever-increasing use of composite materials in engineering structures requires proper analysis of the mechanical response of these structures. In the present work, we obtain the exact form of the stress and displacement components for a Griffith crack at the interface of an isotropic and an orthotropic half-space bonded together. These components were previously evaluated in the vicinity of the crack tips by the Fourier transform method; here they are evaluated with the help of Fredholm integral equations. Following the problem of Lowengrub and Sneddon, the problem is reduced to dual integral equations, whose solution by the method of Srivastava and Lowengrub leads to coupled Fredholm integral equations, which are then reduced to decoupled Fredholm integral equations of the second kind. The physical interest in fracture design criteria lies in the stress and crack-opening displacement components, and from the solutions obtained the stress and displacement components can be readily calculated in exact form.
PubDate: Jan 2022
- Outcomes of Common Fixed Point Theorems in S-metric Space
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Katta Mallaiah and Veladi Srinivas In the present paper, we establish two unique common fixed point theorems with a new contractive condition for four self-mappings in the S-metric space. First, we establish a common fixed point theorem by using weaker conditions such as compatible mappings of type-(E) and subsequentially continuous mappings. In the next theorem, we use another set of weaker conditions, namely sub-compatible and subsequentially continuous mappings, which are weaker than occasionally weakly compatible mappings. Moreover, it is observed that the mappings in these two theorems are subsequentially continuous but neither continuous nor reciprocally continuous. These two results extend and generalize the existing results of [7] and [9] in the S-metric space. Furthermore, we provide some suitable examples to justify our outcomes.
PubDate: Jan 2022
- An Approach to Solve Multi Attribute Decision-making Problem Based on the
New Possibility Measure of Picture Fuzzy Numbers
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 K. Deva and S. Mohanaselvi A picture fuzzy set is a more powerful tool for dealing with uncertainty in the given information than fuzzy sets and intuitionistic fuzzy sets, and it has active applications in decision-making. The aim of this study is to develop a new possibility measure for ranking picture fuzzy numbers; some of its basic properties are then proved. The proposed method provides the same ranking order as the score function in the literature, and the new possibility measure can provide additional information for the relative comparison of picture fuzzy numbers. A picture fuzzy multi-attribute decision-making problem is solved based on the possibility matrix generated by the proposed method after aggregation with the picture fuzzy Einstein weighted averaging operator. To verify the importance of the proposed method, a picture fuzzy multi-attribute decision-making strategy is presented along with an application to selecting a suitable alternative. The superiority of the proposed method and the limitations of existing methods are discussed with the help of a comparative study. Finally, a numerical example and comparative analysis are provided to illustrate the practicality and feasibility of the proposed method.
PubDate: Jan 2022
- A Basic Dimensional Representation of Artin Braid Group , and a General
Burau Representation
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Arash Pourkia Braid groups and their representations are a central object of study, not only in low-dimensional topology but also in many other branches of mathematics and theoretical physics. The Burau representation of the Artin braid group, which has two versions, reduced and unreduced, has been the focus of extensive study since its discovery in the 1930s. It remains one of the most important representations of the braid group, partly because of its connection to the Alexander polynomial, one of the first and most useful invariants for knots and links. In the present work, we show that interesting representations of the braid group can be achieved using a simple and intuitive approach, in which we analyse the path of the strands in a braid and encode the over-crossings, under-crossings and non-crossings into some parameters. More precisely, at each crossing where one strand crosses over another, we assign t to the top strand and b to the bottom strand. The parameter t is a relative weight given to the top strand relative to the bottom one, and similarly b is a relative weight given to the bottom strand relative to the top one; these weights determine the positions of t and b in the matrix representation. We show that this simple path-analysing approach leads to an interesting simple representation. Next, we show that, following the same intuitive approach and introducing only one additional parameter, we can greatly improve the representation into one with a much smaller kernel. This more general representation includes the unreduced Burau representation as a special case. Our path-analysing approach has the advantage of being a very simple and intuitive method that captures the fundamental interactions of the strands in a braid.
In this approach we follow each strand in a braid and create a history for the strand as it interacts with other strands via over-crossings, under-crossings and non-crossings. This leads directly to the desired representations.
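The unreduced Burau representation mentioned above has a standard concrete form: the generator sigma_i acts as the identity except for a 2x2 block [[1-t, t], [1, 0]] in rows and columns (i, i+1). The sketch below checks the braid relation numerically at a sample value of t; it illustrates the classical representation, not the paper's generalization.

```python
import numpy as np

# Unreduced Burau matrices for the Artin braid group B_n, checked
# against the braid relation s1 s2 s1 = s2 s1 s2 for n = 3.

def burau(i, n, t):
    M = np.eye(n)
    M[i - 1:i + 1, i - 1:i + 1] = np.array([[1 - t, t], [1.0, 0.0]])
    return M

t = 0.7
s1, s2 = burau(1, 3, t), burau(2, 3, t)
print(np.allclose(s1 @ s2 @ s1, s2 @ s1 @ s2))   # True: braid relation
```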
PubDate: Jan 2022
- On Recent Advances in Divisor Cordial Labeling of Graphs
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Vishally Sharma and A. Parthiban An assignment of integers to the vertices of a graph subject to certain constraints is called a vertex labeling of . Different graph labeling techniques are used in coding theory, cryptography, radar, missile guidance, X-ray crystallography, etc. A DCL of is a bijective function from the node set of to such that, for each edge , we allot 1 if divides or divides and 0 otherwise, and the absolute difference between the number of edges labeled 1 and the number of edges labeled 0 does not exceed 1, i.e., . If permits a DCL, then it is called a DCG. A complete graph is a graph on nodes in which any 2 nodes are adjacent, and a lilly graph is formed by joining , sharing a common node, i.e., , where is a complete bipartite graph and is a path on nodes. In this paper, we propose an interesting conjecture concerning DCL for a given , besides discussing certain general results concerning DCL of complete-graph-related graphs. We also prove that admits a DCL for all . Further, we establish the DCL of some related graphs in the context of graph operations such as duplication of a node by an edge, duplication of a node by a node, extension of a node by a node, switching of a node, the degree splitting graph, and the barycentric subdivision of the given .
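The divisor cordial condition is easy to verify by brute force on a small graph. The sketch below checks the path P4, a small example chosen for illustration and not one of the paper's specific families.

```python
from itertools import permutations

# Brute-force check of the divisor cordial condition: a bijection
# f: V -> {1..n} labels edge uv with 1 if f(u) | f(v) or f(v) | f(u),
# else 0, and |e1 - e0| <= 1 must hold.

def is_dcl(edges, f):
    ones = sum(1 if f[u] % f[v] == 0 or f[v] % f[u] == 0 else 0
               for u, v in edges)
    zeros = len(edges) - ones
    return abs(ones - zeros) <= 1

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3)]                 # the path P4
found = any(is_dcl(E, dict(zip(V, p))) for p in permutations(range(1, 5)))
print(found)   # True: P4 admits a divisor cordial labeling
```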
PubDate: Jan 2022
- Viscosity Analysis of Lubricating Oil Through the Solution of Exponential
Fractional Differential Equations
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Endang Rusyaman, Kankan Parmikanti, Diah Chaerani and Khoirunnisa Rohadatul Aisy Muslihin Lubricating oil remains a primary need for people dealing with machines. An important property of lubricating oil is its viscosity, which is closely related to surface tension. Fluid viscosity measures the friction within the fluid, while surface tension is the tendency of the fluid surface to stretch due to the attractive (cohesive) forces between its molecules. We want to know how, and to what extent, the viscosity and surface tension of lubricating oil are related. This paper discusses the analysis of a model in the form of an exponential fractional differential equation that states the relationship between the surface tension and viscosity of lubricating oil. The Modified Homotopy Perturbation Method (MHPM) is used to determine the solution of the fractional differential equation. This study indicates a relationship between viscosity and surface tension in the form of a fractional differential equation for which the existence and uniqueness of the solution are guaranteed. From the analysis of the solution function, both analytically and geometrically, supported by empirical data, it can be concluded that there is a strong exponential relationship between viscosity and surface tension in lubricating oil.
PubDate: Jan 2022
- A Goal Programming Approach for Generalized Calibration Weights Estimation
in Stratified Random Sampling
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Siham Rabee, Ramadan Hamed, Ragaa Kassem and Mahmoud Rashwaan The calibration estimation approach is a widely used method for increasing the precision of estimates of population parameters. It works by modifying the design weights as little as possible, minimizing a given distance function between the design weights and the calibrated weights subject to a set of constraints related to specified auxiliary variables. This paper proposes a goal programming approach for generalized calibration estimation, in which multiple study variables are considered by incorporating multiple auxiliary variables. Almost all of the calibration estimation literature proposes calibrated estimators for the population mean of only one study variable; to the researchers' knowledge, no study has considered the calibration estimation approach for multiple study variables. According to the correlation structure between the study variables, the estimation of the calibrated weights is formulated in two different models. The theory of the proposed approach is presented and the calibrated weights are estimated. A simulation study is conducted to evaluate the performance of the proposed approach in different scenarios against some existing calibration estimators. The simulation results for the four generated populations show that the proposed approach is more flexible and efficient than the classical methods.
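The classical single-constraint case has a closed form: with the chi-square distance, the calibrated weights are a linear adjustment of the design weights that reproduces the known auxiliary total exactly. The sketch below shows only this textbook case with invented data; the paper's multi-variable goal-programming models are not reproduced.

```python
import numpy as np

# Single-constraint linear (chi-square distance) calibration: adjust
# design weights d so the calibrated weights w reproduce the known
# auxiliary total X exactly.  All numbers are illustrative.

rng = np.random.default_rng(7)
d = np.full(100, 10.0)                          # design weights (n=100)
x = rng.gamma(shape=2.0, scale=5.0, size=100)   # auxiliary variable
X_total = 10500.0                               # assumed known total of x

lam = (X_total - d @ x) / (d @ (x * x))
w = d + lam * d * x                             # calibrated weights
print(round(w @ x, 1))                          # reproduces X_total
```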
PubDate: Jan 2022
- Explicit Formulas and Numerical Integral Equation of ARL for SARX(P,r)L
Model Based on CUSUM Chart
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Suvimol Phanyaem The Cumulative Sum (CUSUM) chart is widely used and has many applications in different fields such as finance, medicine, and engineering. In real applications, there are many situations in which the observations of random processes are serially correlated, such as hospital admissions in the medical field, a share price in economics, or daily rainfall in the environmental field. The characteristic commonly used to evaluate the performance of control charts is the Average Run Length (ARL). The primary goals of this paper are to derive an explicit formula and to develop a numerical integral equation for the ARL of the CUSUM chart when the observations follow a seasonal autoregressive model with an exogenous variable, SARX(P,r)L, with exponential white noise. A Fredholm integral equation is used to derive the explicit formula for the ARL, and numerical methods including the midpoint rule, the trapezoidal rule, Simpson's rule, and the Gaussian quadrature rule are used to approximate the numerical integral equation for the ARL. The uniqueness of the solution is guaranteed by Banach's fixed point theorem. In addition, the proposed explicit formula is compared with the numerical methods in terms of the absolute percentage difference, to verify the accuracy of the ARL results, and in terms of computational (CPU) time. The results indicate that the ARL from the explicit formula is close to that of the numerical integral equation, with an absolute percentage difference of less than 1%, showing excellent agreement between the two, while the explicit formula outperforms the numerical integral equation methods in terms of CPU time.
Consequently, the proposed explicit formula and the numerical integral equation are alternative methods for finding the ARL of the CUSUM control chart and should be of use in fields such as biology, engineering, physics, medicine, and the social sciences.
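The quadrature approach used for the ARL equation belongs to the Nystrom family: discretize the integral with a quadrature rule and solve the resulting linear system. The sketch below applies the trapezoidal rule to a toy Fredholm equation of the second kind with a known exact solution; the paper's ARL kernel is not reproduced.

```python
import numpy as np

# Nystrom (trapezoidal) solution of a linear Fredholm equation of the
# second kind, f(x) = g(x) + integral over [0,1] of K(x, y) f(y) dy.
# The toy kernel K(x, y) = x*y with g(x) = x has exact solution
# f(x) = 1.5 x.

m = 201
y = np.linspace(0.0, 1.0, m)
w = np.full(m, y[1] - y[0])
w[0] *= 0.5
w[-1] *= 0.5                        # trapezoidal weights

K = np.outer(y, y)                  # K(x, y) = x * y
g = y                               # g(x) = x
f = np.linalg.solve(np.eye(m) - K * w, g)   # (I - K W) f = g
print(round(f[-1], 4))              # f(1), close to the exact 1.5
```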
PubDate: Jan 2022
- Solving Ordinary Differential Equations (ODEs) Using Least Square Method
Based on Wang Ball Curves
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Abdul Hadi Bhatti and Sharmila Binti Karim Numerical methods are regularly developed to obtain better approximate solutions of ordinary differential equations (ODEs). The best approximate solution of an ODE is obtained by reducing the error between the approximate and exact solutions. To improve the error accuracy, representations by Wang Ball curves are proposed through the investigation of their control points using the least squares method (LSM). The control points of the Wang Ball curves are calculated by minimizing the residual function using LSM, where the residual error is measured by the sum of squares of the residual function at the Wang Ball curve's control points. The approximate solution of the ODE is then obtained from the determined control points. Two numerical examples, an initial value problem (IVP) and a boundary value problem (BVP), are presented to demonstrate the proposed method in terms of error. The results show that the error accuracy is improved compared to an existing study based on Bézier curves. A convergence analysis is also conducted for the proposed method using a two-point boundary value problem.
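The control-point idea can be illustrated with the Bernstein (Bézier) basis that the paper compares against: represent the solution as a curve, form the residual of the ODE at collocation points, and solve for the control points by least squares. The Wang Ball basis itself is not reproduced here, so this is an illustrative stand-in under that substitution.

```python
import numpy as np
from math import comb, exp

# Least-squares solution of the IVP y' = -y, y(0) = 1 on [0, 1] with a
# degree-5 Bernstein (Bezier) curve: minimize the sum of squared
# residuals y'(t_k) + y(t_k) over the free control points.

n = 5
t = np.linspace(0.0, 1.0, 101)               # collocation points

def bern(i, m, t):
    if i < 0 or i > m:
        return np.zeros_like(t)
    return comb(m, i) * t**i * (1 - t)**(m - i)

B = np.column_stack([bern(i, n, t) for i in range(n + 1)])
# derivative of a Bernstein sum: n * sum_i (c_{i+1} - c_i) B_{i,n-1}
D = np.column_stack([n * (bern(i - 1, n - 1, t) - bern(i, n - 1, t))
                     for i in range(n + 1)])

R_op = D + B                                  # residual operator y' + y
c0 = 1.0                                      # y(0) = c_0 = 1
sol, *_ = np.linalg.lstsq(R_op[:, 1:], -R_op[:, 0] * c0, rcond=None)
c = np.concatenate(([c0], sol))
print(round(c[-1], 3))                        # y(1), close to exp(-1)
```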
PubDate: Jan 2022
- Prediction Variance Properties of Third-Order Response Surface Designs in
the Hypersphere
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Abimibola Victoria Oladugba and Brenda Mbouamba Yankam Variance dispersion graphs (VDGs) and fraction of design space (FDS) graphs are two graphical methods that effectively describe and evaluate the points of best and worst prediction capability of a design using scaled prediction variance. These graphs are often utilized as an alternative to single-value criteria such as D- and E-optimality when those fail to describe the true nature of a design. In this paper, the VDGs and FDS graphs of third-order orthogonal uniform composite designs (OUCD4) and orthogonal array composite designs (OACD4), based on scaled prediction variance in the spherical region for 2 to 7 factors, are studied throughout the design region and over a fraction of the design space. Single-value criteria such as D-, A- and G-optimality are also studied. The results show that the OUCD4 is more optimal than the OACD4 in terms of D-, A- and G-optimality. The OUCD4 is shown to possess a more stable and uniform scaled prediction variance throughout the design region and over a fraction of the design space than the OACD4, although the stability of both designs deteriorates slightly towards the extremes.
PubDate: Jan 2022
- Study of the New Finite Mixture of Weibull Extension Model:
Identifiability, Properties and Estimation
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Noura S. Mohamed, Moshira A. Ismail and Sanaa A. Ismail Finite mixture models have been used in many fields of statistical analysis, such as pattern recognition, clustering and survival analysis, and have been extensively applied in scientific areas such as marketing, economics, medicine, genetics and the social sciences. Introducing mixtures of new generalized lifetime distributions that exhibit important hazard shapes is a major field of research aimed at fitting and analyzing a wider variety of data sets. The main objective of this article is to present a full mathematical study of the properties of a new finite mixture of the three-parameter Weibull extension model, considered as a generalization of the standard Weibull distribution. The proposed mixture model exhibits a bathtub-shaped hazard rate, among other shapes important in reliability applications. We analytically prove the identifiability of the new mixture and investigate its mathematical properties and hazard rate function. Maximum likelihood estimation of the model parameters is considered. The Kolmogorov-Smirnov test statistic is used to fit two famous data sets from mechanical engineering to the proposed model, the Aarset data and the Meeker and Escobar data. Results show that the two-component version of the proposed mixture is a superior fit compared to various one-component and two-component lifetime distributions. The proposed mixture is a significant statistical tool for studying lifetime data sets in numerous fields of study.
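The mixture hazard rate mentioned above follows from the general identity h(t) = f(t)/S(t) with the mixture density and survival function. The sketch below uses plain two-parameter Weibull components as a stand-in for the paper's three-parameter Weibull extension, with illustrative parameters chosen so that a decreasing-hazard plus an increasing-hazard component yields a bathtub-like shape.

```python
import numpy as np

# Hazard of a two-component mixture: h(t) = f(t) / S(t) with
# f = w f1 + (1-w) f2 and S = w S1 + (1-w) S2.  Plain Weibull
# components stand in for the Weibull extension model.

def weib_pdf(t, k, lam):
    return (k / lam) * (t / lam) ** (k - 1) * np.exp(-((t / lam) ** k))

def weib_sf(t, k, lam):
    return np.exp(-((t / lam) ** k))

def mix_hazard(t, w, p1, p2):
    f = w * weib_pdf(t, *p1) + (1 - w) * weib_pdf(t, *p2)
    S = w * weib_sf(t, *p1) + (1 - w) * weib_sf(t, *p2)
    return f / S

t = np.linspace(0.05, 3.0, 500)
h = mix_hazard(t, 0.5, (0.5, 1.0), (4.0, 2.0))   # DFR + IFR components
i = int(np.argmin(h))
print(0 < i < len(h) - 1)   # True: interior minimum, bathtub-like shape
```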
PubDate: Jan 2022
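A two-component mixture of Weibull extension densities, as described in the abstract above, can be sketched briefly. The parameterization below follows one common form of the three-parameter Weibull extension (the Xie-Tang-Goh model); the paper's exact parameterization may differ, so treat the density and the parameter values as illustrative assumptions.

```python
# Two-component mixture of three-parameter Weibull extension densities.
import numpy as np
from scipy.integrate import quad

def weibull_ext_pdf(t, lam, alpha, beta):
    # Weibull extension density (Xie-Tang-Goh parameterization, assumed):
    # f(t) = lam*beta*(t/alpha)^(beta-1) * exp[(t/alpha)^beta
    #        + lam*alpha*(1 - exp((t/alpha)^beta))]
    z = (t / alpha) ** beta
    return lam * beta * (t / alpha) ** (beta - 1) * np.exp(z + lam * alpha * (1 - np.exp(z)))

def mixture_pdf(t, p, theta1, theta2):
    # convex combination of two component densities, mixing weight p
    return p * weibull_ext_pdf(t, *theta1) + (1 - p) * weibull_ext_pdf(t, *theta2)

# sanity check: the mixture density integrates to one
total, _ = quad(mixture_pdf, 0, np.inf,
                args=(0.4, (2.0, 1.0, 1.5), (0.5, 2.0, 2.0)))
```

Fitting such a mixture by maximum likelihood, as the paper does, would then amount to maximizing the log of `mixture_pdf` summed over the observed lifetimes.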
- A Simulation of an Elastic Filament Using Kirchhoff Model
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Saimir Tola, Alfred Daci and Gentian Zavalani This paper presents numerical simulations and comparisons between different approaches to elastic thin rods. Elastic rods are ideal for modeling the stretching, bending, and twisting deformations of long, thin elastic materials. The static solution of Kirchhoff's equations [2] is produced using the ODE45 solver, where the Kirchhoff and reference-system equations are combined simultaneously. Comparison solutions are based on Euler's elastica theory [1], which determines the deformed centerline of the rod by solving a boundary-value problem, and on the Discrete Elastic Rod (DER) method using the Bishop frame [5,6], which is grounded in discrete differential geometry: it starts with a discrete energy formulation and obtains the forces and equations of motion by taking derivatives of the energies. Instead of discretizing smooth equations, DER solves discrete equations and obeys geometric exactness. Using DER, we measure torsion as the difference of angles between the material frame and the Bishop frame of the rod, so no additional degree of freedom is needed to represent torsional behavior. We found excellent agreement between our Kirchhoff-based solution and the numerical results obtained by the other methods. Our numerical results include a simulation of the rod under the action of a terminal moment and illustrations of gravity effects.
PubDate: Jan 2022
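The terminal-moment case mentioned in the abstract above admits a classical closed-form check: under a pure end moment M, Kirchhoff/elastica theory predicts constant curvature kappa = M/(EI), so the planar centerline is a circular arc of radius EI/M. The sketch below integrates the centerline numerically and compares it with that arc; the values of EI, M and L are illustrative, not taken from the paper.

```python
# Planar rod under a pure terminal moment M: constant curvature arc.
import numpy as np

EI, M, L = 1.0, 0.5, 2.0           # bending stiffness, end moment, rod length
kappa = M / EI                     # constant curvature along the rod
s = np.linspace(0.0, L, 2001)      # arclength samples
theta = kappa * s                  # tangent angle theta(s)

# centreline from x'(s) = cos(theta), y'(s) = sin(theta) (left Riemann sums)
x = np.concatenate(([0.0], np.cumsum(np.cos(theta[:-1]) * np.diff(s))))
y = np.concatenate(([0.0], np.cumsum(np.sin(theta[:-1]) * np.diff(s))))

# analytic circular arc: centre (0, 1/kappa), radius 1/kappa
x_exact = np.sin(kappa * s) / kappa
y_exact = (1.0 - np.cos(kappa * s)) / kappa
err = max(np.max(np.abs(x - x_exact)), np.max(np.abs(y - y_exact)))
```

Any of the three methods compared in the paper (Kirchhoff/ODE45, elastica BVP, DER) should reproduce this arc in the pure-moment limit, which makes it a convenient validation case.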
- Stratification Methods for an Auxiliary Variable Model-Based Allocation
under a Superpopulation Model
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Bhuwaneshwar Kumar Gupt, Mankupar Swer, Md. Irphan Ahamed, B. K. Singh and Kh. Herachandra Singh In this paper, the problem of optimum stratification of heteroscedastic populations in stratified sampling is considered for a known allocation under Simple Random Sampling With and Without Replacement (SRSWR and SRSWOR) designs. The allocation used is one of the model-based allocations proposed by Gupt [1,2] under a superpopulation model considered by Hanurav [3], Rao [4], and Gupt and Rao [5], which the author (Gupt [1,2]) modified to a more general form. The problem of finding optimum boundary points of stratification (OBPS) is based on an auxiliary variable that is highly correlated with the study variable. Equations giving the OBPS are derived by minimizing the variance of the estimator of the population mean. Since these equations are implicit and difficult to solve, some methods of finding approximately optimum boundary points of stratification (AOBPS) are also obtained as solutions of the equations giving the OBPS. In deriving the equations and the methods, basic statistical definitions, tools of calculus, analytic functions and tools of algebra are used. The proposed methods of stratification are tested on a few generated populations and one live population; all are found to be efficient and suitable for practical application. Although the proposed methods are obtained under a heteroscedastic superpopulation model with heteroscedasticity level one, they show robustness in empirical investigations across varied levels of heteroscedasticity.
The stratification methods proposed here are new in that they are derived for an allocation, under the superpopulation model, that has not previously been used in the construction of strata in stratified sampling. The proposed methods may interest researchers amid the active theoretical research in stratified sampling, and, given the high efficiencies they exhibit, the work may provide a practically feasible solution in the planning of socio-economic surveys.
PubDate: Jan 2022
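The notion of boundary points of stratification on an auxiliary variable, discussed in the abstract above, can be illustrated with the classical Dalenius-Hodges cumulative-sqrt(f) rule. This is NOT the paper's model-based OBPS/AOBPS method (whose equations are not reproduced in the abstract); it is the textbook approximation, shown only to make the idea of stratum boundaries concrete. The population below is a simulated skewed auxiliary variable.

```python
# Dalenius-Hodges cum-sqrt(f) rule: approximate stratum boundaries that
# equalize the cumulative square root of the frequency across strata.
import numpy as np

def cum_sqrt_f_boundaries(x, n_strata, n_bins=50):
    freq, edges = np.histogram(x, bins=n_bins)
    csf = np.cumsum(np.sqrt(freq))               # cumulative sqrt(frequency)
    targets = csf[-1] * np.arange(1, n_strata) / n_strata
    idx = np.searchsorted(csf, targets)          # bins where targets are hit
    return edges[idx + 1]                        # upper edges of those bins

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=0.8, size=10000)   # skewed auxiliary variable
boundaries = cum_sqrt_f_boundaries(x, n_strata=4)    # 3 interior cut points
```

Units falling below the first boundary form stratum 1, and so on; the paper's contribution is to replace this generic rule with boundaries that are optimal for its specific model-based allocation.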
- Accuracy and Efficiency of Symmetrized Implicit Midpoint Rule for Solving
the Water Tank System Problems
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 M. F. Zairul Fuaad, N. Razali, H. Hishamuddin and A. Jedi The accuracy and efficiency of numerical solutions to water tank system problems can be assessed by comparing the Symmetrized Implicit Midpoint Rule (IMR) with the standard IMR. Static and dynamic analyses are part of a mathematical model that uses energy conservation to generate a nonlinear ordinary differential equation: static analysis provides the optimal working points, while dynamic analysis gives an overview of the system behaviour. The procedure is tested on two water tank designs, namely cylindrical and rectangular tanks, with two different sets of parameters. Results show that the two-step active Symmetrized IMR applied to the proposed mathematical model is precise and efficient and can be used for the design of appropriate controls. The cylindrical water tank model empties in the shortest time. Across the range of parameters used for practical model applications, the various water tank models show an increase in accuracy and efficiency. The numerical results show that, for a fixed step size, the two-step Symmetrized IMR provides better stability, accuracy and efficiency than other numerical methods.
PubDate: Jan 2022
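The base scheme underlying the abstract above, the implicit midpoint rule, can be sketched on a simple draining-tank ODE dh/dt = -k*sqrt(h) (Torricelli's law for the water level h). The paper's two-step *active symmetrized* variant is not reconstructed here; this shows only the plain IMR step it is built on, with illustrative values of k, h0 and dt.

```python
# One step of the implicit midpoint rule (IMR) on dh/dt = f(h),
# h_{n+1} = h_n + dt * f((h_n + h_{n+1})/2), solved by fixed-point iteration.
import math

def imr_step(f, h, dt, iters=50):
    h_next = h
    for _ in range(iters):
        h_next = h + dt * f(0.5 * (h + h_next))
    return h_next

k = 0.1
f = lambda h: -k * math.sqrt(max(h, 0.0))   # Torricelli outflow

h, dt = 1.0, 0.1
levels = [h]
for _ in range(100):
    h = imr_step(f, h, dt)
    levels.append(h)

# exact solution while the tank is nonempty: h(t) = (sqrt(h0) - k*t/2)^2
t_end = 100 * dt
exact = (math.sqrt(1.0) - k * t_end / 2) ** 2
```

The IMR is second-order and symmetric, which is why symmetrization techniques such as the paper's two-step scheme can be layered on it to further improve stability and accuracy at a fixed step size.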