Mathematics and Statistics
Open Access journal. ISSN (Print): 2332-2071; ISSN (Online): 2332-2144. Published by Horizon Research Publishing.
- Weibull Distribution as the Choice Model for State-Specific Failure Rates
in HIV/AIDS Progression
Abstract: Publication date: May 2022
Source: Mathematics and Statistics, Volume 10, Number 3. Nahashon Mwirigi, Stanley Sewe, Mary Wainaina, and Richard Simwa. This study considered the problem of selecting the best single model for modeling state-specific failure rates in HIV/AIDS progression for patients on antiretroviral therapy, with age and gender as risk factors, using the exponential, two-parameter, and three-parameter Weibull distributions. CD4 count changes in any two consecutive visits, the mean waiting time (μ), and the transition rates (λ) for remaining in the same state or transiting to a better or a worse state were analyzed. Several model selection criteria, namely the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Log-Likelihood (LL), were used in each specific disease state. The Maximum Likelihood Estimation (MLE) method was applied to obtain the parameters of the distributions used. Plots of state-specific transition rates (λ) depicted constant, increasing, decreasing, and unimodal trends. The three-parameter Weibull distribution was the best for male patients and patients aged 40-69 years transiting in the states 1-2, 3-4, and 4-5, and 1-2, 3-4, and 5-6, respectively, and for male patients, female patients, and patients aged 40-69 remaining in the same state. The two-parameter Weibull distribution was the best for female patients and patients aged 20-39 years transiting in the states 1-2, 2-3, 4-5, and 1-2, 2-3, 3-4, respectively. The exponential distribution proved inferior to the other two distributions.
PubDate: May 2022
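The model-comparison step described in the abstract above can be sketched numerically. This is an illustration only, not the authors' code or data: the waiting times below are synthetic, and the shape and scale values are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

def compare_models(t):
    """Fit candidate waiting-time models by MLE and rank them with
    AIC = 2k - 2*log-likelihood (lower is better)."""
    aic = {}
    loc, scale = stats.expon.fit(t, floc=0)            # exponential, k = 1
    aic["exponential"] = 2 * 1 - 2 * stats.expon.logpdf(t, loc, scale).sum()
    c, loc, scale = stats.weibull_min.fit(t, floc=0)   # 2-parameter Weibull, k = 2
    aic["weibull-2p"] = 2 * 2 - 2 * stats.weibull_min.logpdf(t, c, loc, scale).sum()
    c, loc, scale = stats.weibull_min.fit(t)           # 3-parameter Weibull, k = 3
    aic["weibull-3p"] = 2 * 3 - 2 * stats.weibull_min.logpdf(t, c, loc, scale).sum()
    return aic

# synthetic waiting times from a clearly non-exponential Weibull
rng = np.random.default_rng(42)
waits = stats.weibull_min.rvs(2.5, scale=3.0, size=500, random_state=rng)
aic = compare_models(waits)
```

For data with an increasing hazard, the Weibull fits should beat the exponential on AIC, mirroring the ranking reported in the abstract.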
- The Radii of Starlikeness for Concave Functions
Abstract: Publication date: May 2022
Source: Mathematics and Statistics, Volume 10, Number 3. Munirah Rossdy, Rashidah Omar, and Shaharuddin Cik Soh. Let denote the class of normalized, analytic, and univalent functions in the unit disc given by . The convex, starlike, and close-to-convex functions form the main subclasses of , denoted by , and , respectively. Many mathematicians have recently studied radius problems for various classes of functions contained in . The determination of the radii of univalence, starlikeness, and convexity for specific special functions in is a relatively new topic in geometric function theory. The problem of determining such radii has been studied since the 1920s, and mathematicians remain very interested in it, particularly for certain special functions in ; indeed, many papers investigate the radius of starlikeness for numerous functions. With respect to the open unit disc and the class , the class of concave functions , known as , is defined. It is identified as a normalised analytic function that meets the requirement of having an opening angle of at . A univalent function is called concave provided that is concave; in other words, is convex. To date there is no literature on determining the radius of starlikeness for concave univalent functions related to certain rational functions, the lune, the cardioid, and the exponential function. Hence, by employing the subordination method, we present new findings on several radii of starlikeness for different subclasses of starlike functions for the class of concave univalent functions .
PubDate: May 2022
- Comparison between The Discrimination Frequency of Two Queueing Systems
Abstract: Publication date: May 2022
Source: Mathematics and Statistics, Volume 10, Number 3. Said Taoufiki and Jamal El Achky. Each of us has had the experience of being overtaken in a queue by another, less demanding customer; and each of us has ended up behind a demanding customer and had to wait a long time. The discrimination frequencies that appear here are those of overtaking and of heavy workloads, two phenomena that accompany queues and have a great impact on customer satisfaction. Recently, authors have turned to measuring queueing fairness based on the idea that a customer may feel anger towards the queueing system, even without a long time on hold, after one of these two experiences. We have found that this type of approach is more in line with studies provided by sociologists and psychologists. The discrimination frequencies in a queue have been studied for certain single-server models, but for the multi-server case there is only one study, of a two-server Markovian queue. In this article, we generalize that study and demonstrate that the result found in the two-server case remains valid, by comparing the discrimination frequencies of two Markovian queueing systems with several servers.
PubDate: May 2022
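The notion of overtaking discussed above can be made concrete by simulation. This is a sketch, not the paper's analytical model: the arrival rate, service rate, and FCFS routing below are all assumptions.

```python
import random

def mm2_overtaking(lam=1.0, mu=0.7, n=20000, seed=7):
    """Simulate an M/M/2 FCFS queue and estimate the fraction of customers
    who are overtaken, i.e. some later arrival departs strictly earlier."""
    rng = random.Random(seed)
    t = 0.0
    free = [0.0, 0.0]                    # times at which each server is next free
    departures = []
    for _ in range(n):
        t += rng.expovariate(lam)        # Poisson arrival stream
        k = 0 if free[0] <= free[1] else 1   # FCFS: take the earliest-free server
        start = max(t, free[k])
        free[k] = start + rng.expovariate(mu)   # exponential service time
        departures.append(free[k])
    overtaken, min_later = 0, float("inf")
    for d in reversed(departures):       # min_later = earliest departure after i
        if min_later < d:
            overtaken += 1
        min_later = min(min_later, d)
    return overtaken / n

freq = mm2_overtaking()
```

Even with service in arrival order, a later customer assigned to the other server can finish first, which is exactly the discrimination event the abstract counts.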
- Traumatic Systolic Blood Pressure Modeling: A Spectral Gaussian Process
Regression Approach with Robust Sample Covariates
Abstract: Publication date: May 2022
Source: Mathematics and Statistics, Volume 10, Number 3. David Kwamena Mensah, Michael Arthur Ofori, and Nathaniel Howard. Physiological vital signs acquired during traumatic events are informative on the dynamics of the trauma and their relationship with other features such as sample-specific covariates. Non-time-dependent covariates may introduce extra challenges in Gaussian process (GP) regression, as its main predictors are functions of time. In this regard, the paper introduces the use of orthogonalized Gnanadesikan-Kettenring covariates for handling such predictors within the GP regression framework. Spectral Bayesian regression is usually based on symmetric spectral frequencies, and this may be too restrictive in some applications, especially the modeling of physiological vital signs. This paper builds on a fast non-standard variational Bayes method using a modified Van der Waerden sparse spectral approximation that allows uncertainty in the covariance function hyperparameters to be handled in a standard way. This allows easy extension of Bayesian methods to complex models where non-time-dependent predictors are available and the relationship between the smoothness of the trend and the covariates is of interest. The utility of the methods is illustrated using both simulations and real traumatic systolic blood pressure time series data.
PubDate: May 2022
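For readers unfamiliar with GP regression, the basic closed form the abstract builds on is the posterior mean of a zero-mean GP. This is the plain textbook version, not the spectral variational method of the paper; the kernel, length-scale, and noise level are assumptions.

```python
import numpy as np

def gp_posterior_mean(x_train, y_train, x_test, ell=1.0, sf=1.0, noise=0.1):
    """Posterior mean of a zero-mean GP with a squared-exponential kernel:
    m(x*) = K(x*, X) (K(X, X) + noise^2 I)^{-1} y."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sf ** 2 * np.exp(-0.5 * (d / ell) ** 2)
    K = k(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
    return k(x_test, x_train) @ np.linalg.solve(K, y_train)

# recover a smooth trend from noise-free sine observations
x = np.linspace(0.0, 6.0, 30)
y = np.sin(x)
m = gp_posterior_mean(x, y, np.array([1.5, 3.0]))
```

Sparse spectral approximations such as the one in the paper replace the dense kernel matrix above with a low-rank trigonometric basis to cut the cubic cost of the solve.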
- Parameter Estimation for Additive Hazard Model Recurrent Event Using
Counting Process Approach
Abstract: Publication date: May 2022
Source: Mathematics and Statistics, Volume 10, Number 3. Triastuti Wuryandari, Gunardi, and Danardono. The Cox regression model is widely used for survival data analysis, but it requires proportional hazards. If the proportional hazards assumption is doubtful, the additive hazard model can be used, in which the covariates act additively on the baseline hazard function. If survival time is observed more than once for one individual during the observation period, the event is called a recurrent event. The additive hazard model measures the effect of a covariate as an absolute risk difference, while the proportional hazards model measures it as a relative hazard ratio. The estimation of the risk coefficients in the additive hazard model mimics that of the multiplicative hazard model, using partial likelihood methods. The derivation of these estimators, outlined in the technical notes, is based on the counting process approach, first developed by Aalen in 1975, which combines elements of stochastic integration, martingale theory, and counting process theory. The method is applied to a study of the effect of supplementation on infant growth and development. Based on the results, the factors that affect the growth and development of the infant are gender, treatment, and mother's education.
PubDate: May 2022
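The counting-process idea the abstract relies on is easiest to see in the Nelson-Aalen estimator of the cumulative hazard, a standard building block rather than the authors' full additive-model fit; the toy data below are invented.

```python
import numpy as np

def nelson_aalen(times, events):
    """Counting-process (Nelson-Aalen) estimate of the cumulative hazard:
    at each event time t add dN(t) / Y(t), the number of events over the
    number still at risk; censored observations only leave the risk set."""
    order = np.argsort(times)
    times = np.asarray(times, dtype=float)[order]
    events = np.asarray(events, dtype=int)[order]
    at_risk = len(times)
    chaz, out_t, out_h = 0.0, [], []
    for t in np.unique(times):
        mask = times == t
        d = int(events[mask].sum())      # events observed at t
        if d > 0:
            chaz += d / at_risk
            out_t.append(float(t))
            out_h.append(chaz)
        at_risk -= int(mask.sum())       # events and censorings leave the risk set
    return out_t, out_h
```

Here dN(t) and Y(t) are exactly the counting process and at-risk process of the martingale formulation mentioned in the abstract.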
- Pricing of A European Call Option in Stochastic Volatility Models
Abstract: Publication date: May 2022
Source: Mathematics and Statistics, Volume 10, Number 3. Said Taoufiki and Driss Gretete. Volatility occupies a strategic place in the financial markets. In a context of crisis, and with the great movements of the markets, traders have been forced to turn to volatility trading for the potential gain it provides. The Black-Scholes formula for the value of a European option to purchase the underlying depends on a few parameters which are more or less easy to calculate, except for the volatility realized at maturity, which poses a problem because there is neither a single value nor an established way to calculate it. In this article, we exploit the martingale pricing method to find the expected present value of a given asset relative to a risk-neutral probability measure. We consider a bond-stock market that evolves according to the dynamics of the Black-Scholes model, with a risk-free interest rate varying with time. Our methodology has led us to interesting formulas, derived by exact calculation, giving the present value of the volatility realized over the life of a European option in a stochastic volatility model.
PubDate: May 2022
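The classical Black-Scholes benchmark the abstract starts from can be written in a few lines. This is the standard constant-rate, constant-volatility formula, not the paper's stochastic-volatility extension; the parameter values are illustrative.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call: the risk-neutral
    discounted expectation E[e^{-rT} max(S_T - K, 0)] in closed form."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

price = bs_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0)  # about 10.45
```

In a stochastic volatility model, sigma above is replaced by the (random) volatility realized over the option's life, which is exactly the quantity the abstract prices.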
- On Generalized Bent and Negabent Functions
Abstract: Publication date: May 2022
Source: Mathematics and Statistics, Volume 10, Number 3. Deepmala Sharma and Sampada Tiwari. Over the last few years, generalized bent functions have gained a lot of attention in research, as they have many applications in various fields such as combinatorial design, sequence design theory, cryptography, CDMA communication, etc. A deep and broad study of generalized bent functions and their properties has been carried out in the literature. Kumar et al. [11] first gave the concept of a generalized bent function, and many researchers have since studied the properties and characterizations of generalized bent functions. In [2], the authors introduced the concept of generalized (-ary) negabent functions and studied some of their properties. In this paper, we study the generalized (-ary) bent functions , where is the ring of integers mod , is the vector space of dimension over , and ≥2 is any positive integer. We discuss several properties of generalized (-ary) bent functions with respect to their nega-Hadamard transform. We also study the relation between generalized nega-Hadamard transforms and generalized nega-autocorrelations. Furthermore, we prove necessary and sufficient conditions for the bentness and negabentness of a generalized (-ary) bent function generated by the secondary construction for , where .
PubDate: May 2022
- Three-Point Block Algorithm for Approximating Duffing Type Differential
Equations
Abstract: Publication date: May 2022
Source:Mathematics and Statistics Volume 10 Number 3 Ahmad Fadly Nurullah Rasedee Mohammad Hasan Abdul Sathar Najwa Najib Nurhidaya Mohamad Jan Siti Munirah Mohd and Siti Nor Aini Mohd Aslam The current study was conducted to establish a new numerical method for solving Duffing type differential equations. Duffing type differential equations are often linked to damping issues in physical systems, which can be found in control process problems. The proposed method is developed using a three-point block method in backward difference form, which offers an accurate approximation of Duffing type differential equations with less computational cost. Applying an Adam's like predictor-corrector formulation, the three point block method is programmed with a recursive relationship between explicit and implicit coefficients to reduce computational cost. By establishing this recursive relationship, we established a corrector algorithm in terms of the predictor. This eliminates any undesired redundancy in the calculation when obtaining the corrector. The proposed method allows a more efficient solution without any significant loss of accuracy. Four types of Duffing differential equations are selected to test the viability of the method. Numerical results will show efficiency of the three-point block method compared against conventional and more established methods. The outcome of this research is a new method for successfully solving Duffing type differential equation and other ordinary differential equations that are found in the field of science and engineering. An added advantage of the three-point block method is its adaptability to parallel programming.
PubDate: May 2022
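For context, the Duffing equation the abstract targets is x'' + δx' + αx + βx³ = γ cos(ωt). The sketch below integrates it with classical RK4, the kind of conventional baseline such block methods are compared against; it is not the authors' three-point block scheme, and the coefficient values are arbitrary.

```python
from math import cos

def duffing_rk4(alpha=1.0, beta=1.0, delta=0.2, gamma=0.3, omega=1.0,
                x0=1.0, v0=0.0, h=0.01, steps=1000):
    """Integrate x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t)
    with the classical RK4 scheme on the equivalent first-order system."""
    def f(t, x, v):
        return v, gamma * cos(omega * t) - delta * v - alpha * x - beta * x ** 3
    t, x, v = 0.0, x0, v0
    for _ in range(steps):
        k1x, k1v = f(t, x, v)
        k2x, k2v = f(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
        k3x, k3v = f(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
        k4x, k4v = f(t + h, x + h * k3x, v + h * k3v)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += h
    return x, v

# with beta = delta = gamma = 0 the equation reduces to x'' = -x, so x(t) = cos(t)
```

A block method advances several solution points per step from stored backward differences, which is where the cost advantage claimed in the abstract comes from.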
- On Invariants of Surfaces with Isometric on Sections
Abstract: Publication date: May 2022
Source: Mathematics and Statistics, Volume 10, Number 3. Sharipov Anvarjon Soliyevich and Topvoldiyev Fayzulla Foziljonovich. In one direction of classical differential geometry, the properties of geometric objects are studied over their entire range; this is called geometry "in the large". Many problems of geometry "in the large" are connected with the existence and uniqueness of surfaces with given characteristics. The geometric characteristics can be intrinsic curvature, extrinsic or Gaussian curvature, and other features associated with the surface. The existence of a polyhedron with given curvatures at the vertices, or with a given development, is also a problem of geometry "in the large". Therefore, the problem of finding invariants of polyhedra of a certain class, and of solving the problem of the existence and uniqueness of polyhedra with given values of the invariant, is relevant. This work is devoted to finding invariants of surfaces isometric on sections. In particular, we study the expansion properties of convex polyhedra that preserve isometry on sections. For such polyhedra, an invariant associated with the vertex of a convex polyhedral angle is found. Using this invariant, we can consider the question of restoring a convex polyhedron with given values of conditional curvature at the vertices. Isometry on sections differs from isometry of surfaces: isometry of surfaces does not imply isometry on sections, and vice versa. One of the invariants of surfaces isometric on sections is the area of the cylindrical image. This paper presents the properties of the area of a cylindrical image.
PubDate: May 2022
- ()-Anti-Intuitionistic Fuzzy Soft b-Ideals
in BCK/BCI-Algebras
Abstract: Publication date: May 2022
Source: Mathematics and Statistics, Volume 10, Number 3. Aiyared Iampan, M. Balamurugan, and V. Govindan. Among the many algebraic structures, algebras of logic form an essential class. BCK- and BCI-algebras are two classes of logical algebras, introduced by Imai and Iséki [6, 7] in 1966 and extensively investigated by many researchers since. The concept of fuzzy soft sets was introduced in [17] to generalize standard soft sets [21], and the concept of intuitionistic fuzzy soft sets was introduced by Maji et al. [18], based on a combination of the intuitionistic fuzzy set [2] and soft set models. The first section discusses the origins and importance of the studies in this article. Section 2 reviews the definitions of a BCK/BCI-algebra, a soft set, a fuzzy soft set, and an intuitionistic fuzzy soft set, and presents the essential properties of BCK/BCI-algebras applied in the following sections. In Section 3, the concept of an anti-intuitionistic fuzzy soft b-ideal (AIFSBI) in BCK/BCI-algebras is discussed and its essential properties are provided; a set of conditions is given for an AIFSBI to be an anti-intuitionistic fuzzy soft ideal (AIFSI). The definition of quasi-coincidence of an intuitionistic fuzzy soft point with an intuitionistic fuzzy soft set (IFSS) is considered in a more general form. In Section 4, the concepts of an ()-AFSBI and an ()-AIFSBI of are introduced, and some characterizations of ()-AIFSBIs are discussed using the concept of an AIFSBI with thresholds. Finally, conditions are given for a ()-AIFSBI to be a (∈,∈)-AIFSBI.
PubDate: May 2022
- Half-Space Model Problem for Navier-Lamé Equations with Surface
Tension
Abstract: Publication date: May 2022
Source: Mathematics and Statistics, Volume 10, Number 3. Sri Maryani, Bambang H. Guswanto, and Hendra Gunawan. Recently we have seen growing use of partial differential equations (PDEs), especially in fluid dynamics. The classical approach to the analysis of PDEs dominated in the early nineteenth century. As we know, the fundamental theoretical question for PDEs is whether the model problem, consisting of an equation and its associated side conditions, is well-posed, and there are many ways to investigate this. For that reason, in this paper we consider the R-boundedness of the solution operator families for the Navier-Lamé equation, taking surface tension into account, in a bounded domain of -dimensional Euclidean space (≥ 2), as one way to study well-posedness. We investigate the R-boundedness in the half-space case. R-boundedness implies not only the generation of an analytic semigroup but also the maximal regularity for the initial boundary value problem, via Weis's operator-valued Fourier multiplier theorem for the time-dependent problem. It is well known that the maximal regularity class is a powerful tool for proving the well-posedness of the model problem. This result can be used in further research, for example to analyze the boundedness of the solution operators of the model problem in the bent half-space or general domain case.
PubDate: May 2022
- Half-Sweep Refinement of SOR Iterative Method via Linear Rational Finite
Difference Approximation for Second-Order Linear Fredholm
Integro-Differential Equations
Abstract: Publication date: May 2022
Source: Mathematics and Statistics, Volume 10, Number 3. Ming-Ming Xu, Jumat Sulaiman, and Nur Afza Mat Ali. Numerical solutions of second-order linear Fredholm integro-differential equations have been considered and discussed based on several discretization schemes. In this paper, new schemes are developed based on a hybrid of the three-point half-sweep linear rational finite difference (3HSLRFD) approach with the half-sweep composite trapezoidal (HSCT) approach. The main advantage of the established schemes is that they discretize the differential and integral terms of second-order linear Fredholm integro-differential equations into algebraic equations and generate the corresponding linear system. Furthermore, the half-sweep (HS) concept is combined with the refinement of the successive over-relaxation (RSOR) iterative method to create the new half-sweep refinement of successive over-relaxation (HSRSOR) iterative method, which is implemented to obtain the numerical solution of the resulting system of linear algebraic equations. Apart from that, the classical or full-sweep Gauss-Seidel (FSGS) and full-sweep successive over-relaxation (FSSOR) iterative methods are presented as control methods in this paper. In the end, we employ the FSGS, FSRSOR, and HSRSOR methods to obtain numerical solutions of three examples and make a detailed comparison in terms of number of iterations, elapsed time, and maximum absolute error. Numerical results demonstrate that the FSRSOR and HSRSOR methods require fewer iterations and less elapsed time, and are more accurate, than FSGS; moreover, HSRSOR is the most effective of the three. To sum up, this paper has successfully demonstrated the applicability and superiority of the new HSRSOR method based on the 3HSLRFD-HSCT schemes.
PubDate: May 2022
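The basic SOR iteration at the heart of the methods compared above fits in a few lines. This is plain full-sweep SOR on an invented test system, not the half-sweep refinement scheme of the paper; the relaxation factor is an arbitrary assumption.

```python
import numpy as np

def sor(A, b, omega=1.25, tol=1e-10, max_iter=500):
    """Successive over-relaxation for Ax = b; omega = 1 reduces to Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)
    for it in range(1, max_iter + 1):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]          # off-diagonal contribution
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(A @ x - b) < tol:
            return x, it
    return x, max_iter

# diagonally dominant tridiagonal test system with known solution x = 1
n = 20
A = (np.diag(np.full(n, 4.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
b = A @ np.ones(n)
x, iters = sor(A, b)
```

The half-sweep idea in the paper applies such an iteration to only every other grid point, roughly halving the work per sweep.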
- On Some Properties of Fabulous Fraction Tree
Abstract: Publication date: May 2022
Source: Mathematics and Statistics, Volume 10, Number 3. A. Dinesh Kumar and R. Sivaraman. Among the several properties that real numbers possess, this paper deals with the exciting formation of the positive rational numbers arranged in the form of a tree, in which every number has two branches, to the left and right of the root number. This tree contains every positive rational number, and hence infinitely many entries. We call this tree the "Fraction Tree". We formally introduce the Fraction Tree and discuss several fascinating properties, including a proof of the one-to-one correspondence between the natural numbers and the entries of the Fraction Tree. In this paper, we provide the connection between the entries of the Fraction Tree and the Fibonacci numbers through some specified paths, and we relate the terms of the Fraction Tree to continued fractions. Five interesting theorems related to the entries of the Fraction Tree are proved. The simple rule used to construct the Fraction Tree enables us to prove many mathematical properties; in this sense, one can witness the simplicity and beauty of making deep mathematics through simple and elegant formulations. The Fraction Tree discussed in this paper, technically called the Stern-Brocot tree, has profound applications in science, as diverse as clock manufacturing in the early days. In particular, Brocot used the entries of the Fraction Tree to decide the gear ratios of mechanical clocks several decades ago. A simple construction rule provides a mathematical structure worthy of so many properties and applications; this is the real beauty and charm of mathematics.
PubDate: May 2022
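The construction rule behind the Fraction Tree (Stern-Brocot tree) is the mediant operation, and it can be sketched directly; the level-by-level representation below is one convenient choice, not the paper's notation.

```python
from fractions import Fraction

def stern_brocot_levels(depth):
    """First `depth` levels of the Stern-Brocot tree. Each node is the
    mediant (p1+p2)/(q1+q2) of its bracketing interval's endpoints; the
    sentinels 0/1 and 1/0 bracket all positive rationals."""
    intervals = [((0, 1), (1, 0))]
    levels = []
    for _ in range(depth):
        level, nxt = [], []
        for (p1, q1), (p2, q2) in intervals:
            m = (p1 + p2, q1 + q2)               # the mediant
            level.append(Fraction(m[0], m[1]))
            nxt += [((p1, q1), m), (m, (p2, q2))]
        levels.append(level)
        intervals = nxt
    return levels

# level 3 reads 1/3, 2/3, 3/2, 3: every positive rational appears exactly once
```

Because mediants of neighbouring fractions are always in lowest terms, each positive rational occurs exactly once, which is the bijection with the natural numbers mentioned in the abstract.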
- The Relative (Co)homology Theory through Operator Algebras
Abstract: Publication date: May 2022
Source: Mathematics and Statistics, Volume 10, Number 3. M. Kozae, Samar A. Abo Quota, and Alaa H. N. This paper introduces a new idea concerning unital involutive Banach algebras and their closed subsets. The paper aims to study the cohomology theory of operator algebras; we study the Banach algebra, denoted by , as an applied example of an operator algebra. The definitions of the cyclic, simplicial, and dihedral cohomology groups of are introduced. We present the definition of the -relative dihedral cohomology group, given by , and show that the relation between the dihedral and -relative dihedral cohomology groups can be obtained from the sequence . Among the principal results is the study of some theorems in the relative dihedral cohomology of Banach algebras, such as a Connes-Tsygan exact sequence: the relation between the relative Banach dihedral and cyclic cohomology groups ( and ) of is proved via the sequence . We also study and prove some basic notions in the relative cohomology of a Banach algebra with unity and establish its properties. We show the Morita invariance theorem in the relative case with maps and , and prove the Connes-Tsygan exact sequence relating the relative cyclic and dihedral (co)homology of . We prove the Mayer-Vietoris sequence of in a new form in the Banach B-relative dihedral cohomology: . It should be borne in mind that the study of the cohomology theory of operator algebras is concerned with studying the spread of Covid-19.
PubDate: May 2022
- Three-Dimensional Control Charts for Regulating Processes Described by a
Two-Dimensional Normal Distribution
Abstract: Publication date: May 2022
Source: Mathematics and Statistics, Volume 10, Number 3. Kamola Saxibovna Ablazova. In the statistical management of processes, in the initial phase the stability of the technological process is determined from the available samples. If the process is not stable, it is brought into a statistically controlled state by eliminating possible causes, using simple Shewhart control charts. In practice, several methods (ISO standards, standards of various states) bring the process to a stable state. After the process has become stable, the limits of the control charts are found for further management, and the process is then managed with the help of new samples. The article considers a process modeled by a two-dimensional normal distribution. New control charts have been found to check the normality and correlation of the components of the two-dimensional random variable. The process is regulated using these charts, preserving the shape of the density of the individual components of the normal vector and the linearity of the relationship between these components. When constructing the control charts, a Kolmogorov-Smirnov-type goodness-of-fit criterion and the Fisher criterion on the strength of the linear coupling of the components were used. A concrete example shows the course of introducing these charts in production. The results can be used in the initial phase of regulation and during control checks of the process under study. We used these control charts to assess product quality and for quality control of output from the machine that produces sleeves. The paper presents statistical methods for analyzing problems in factory practice and solutions for their elimination.
PubDate: May 2022
- Effect of Parameter Estimation on the Performance of Shewhart -joint Chart
Looked at in Terms of the Run Length Distribution
Abstract: Publication date: Mar 2022
Source: Mathematics and Statistics, Volume 10, Number 2. Ugwu Samson O., Nduka Uchenna C., Eze Nnaemeka M., Odoh Paschal N., and Ugwu Gibson C. Using spread charts to monitor process variation and thereafter using the -chart to monitor the process mean is a common practice, and these charts are commonly applied independently using estimated 3-sigma limits. Recently, some authors considered the application of the and R-charts together as a single charting scheme, the -chart, when both standards are known (Case KK), when only the mean standard is known (Case KU), and when both standards are unknown (Case UU), using the average run length (ARL) performance criterion. However, because of the skewed nature of the run length (RL) distribution, many authors have frowned on the use of the ARL as a sole performance measure and advocated the percentiles of the RL distribution instead. Therefore, the cdfs of the RLs of the chart under the cases mentioned are derived in this work, and the percentiles are used to examine the chart for Case KU; the as yet unconsidered case, Case UK, where only the process variance is known, is included for comparison. These are the contributions to the existing literature. The -chart performed better in Case KU than in Case UK, and the unconditional in-control median run length described the behavior of the chart better than the in-control ARL.
PubDate: Mar 2022
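The point about percentiles versus the ARL is easy to see in the known-parameter case, where the run length is geometric. This is a textbook illustration, not the paper's Case KU/UK derivation; the 0.0027 signal probability is the usual 3-sigma approximation.

```python
from math import ceil, log

def rl_percentile(p, q):
    """q-th percentile of the geometric run-length distribution with
    per-sample signal probability p: smallest n with 1 - (1-p)**n >= q."""
    return ceil(log(1.0 - q) / log(1.0 - p))

p0 = 0.0027                     # approx. in-control signal probability, 3-sigma limits
arl = 1.0 / p0                  # in-control ARL, about 370
mrl = rl_percentile(p0, 0.5)    # in-control median run length
p10 = rl_percentile(p0, 0.1)    # 10th percentile: 10% of charts alarm this early
```

The median (257) sits well below the ARL (370), and a tenth of in-control charts signal by sample 39, which is exactly why the skewed RL distribution makes the ARL misleading on its own.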
- Some Results on Number Theory and Analysis
Abstract: Publication date: Mar 2022
Source: Mathematics and Statistics, Volume 10, Number 2. B. M. Cerna Maguiña, Dik D. Lujerio Garcia, Héctor F. Maguiña, and Miguel A. Tarazona Girlado. In this work, we obtain bounds for the sum of the integer solutions of quadratic polynomials of two variables of the form , where is a given natural number that ends in one. This allows us to decide the primality of a natural number that ends in one. We also obtain some results on twin prime numbers. In addition, we use special linear functionals defined on a real Hilbert space of dimension , in which the relation is obtained, where is a real number for . When or , we manage to address Fermat's Last Theorem and the equation , proving that both equations have no positive integer solutions. For , the Cauchy-Schwarz inequality and Young's inequality are proved in an original way.
PubDate: Mar 2022
- The Non-Abelian Tensor Square Graph Associated to a Symmetric Group and
its Perfect Code
Abstract: Publication date: Mar 2022
Source: Mathematics and Statistics, Volume 10, Number 2. Athirah Zulkarnain, Hazzirah Izzati Mat Hassim, Nor Haniza Sarmin, and Ahmad Erfanian. A graph is formed by a set of vertices and edges. A graph can be associated with a group by using the group's properties to define its vertices and edges: the vertex set comprises elements of the group, while the edge set is determined by the chosen properties and requirements. The non-abelian tensor square graph of a group G is defined with vertex set the set of non-tensor-centre elements of G; two distinct vertices are then connected by an edge if and only if the non-abelian tensor square of these two elements is not equal to the identity of the non-abelian tensor square. This study investigates the non-abelian tensor square graph for the symmetric group of order six. In addition, some properties of this group's non-abelian tensor square graph are computed, including the diameter, the domination number, and the chromatic number. The perfect code for the non-abelian tensor square graph of the symmetric group of order six is also found in this paper.
PubDate: Mar 2022
- Data Encryption Using Face Antimagic Labeling and Hill Cipher
Abstract: Publication date: Mar 2022
Source: Mathematics and Statistics, Volume 10, Number 2. B. Vasuki, L. Shobana, and B. Roopa. An approach to encrypting and decrypting messages is obtained by relating the concepts of graph labeling and cryptography. Among the various types of labelings given in [3], our interest is in face antimagic labeling, introduced by Mirka Miller in 2003 [1]. Baca [2] defines a connected plane graph with edge set and face set as face antimagic if there exist positive integers and and a bijection such that the induced mapping , where for a face , is the sum of all for all edges surrounding , is also a bijection. In cryptography there are many cryptosystems, such as the affine cipher, the Hill cipher, RSA, the knapsack cryptosystem, and so on. Among these, the Hill cipher is chosen for our encryption and decryption. In the Hill cipher [8], plaintext letters are grouped into two-letter blocks, with a dummy letter X inserted at the end if needed to make all blocks the same length, and each letter is then replaced with its ordinal number. Each plaintext block is then replaced by a numeric ciphertext block , where and are different linear combinations of and modulo 26: (mod 26) and (mod 26), with the condition that is one. Each number is translated into a ciphertext letter, which results in the ciphertext. In this paper, face antimagic labeling on double duplication of graphs, together with the Hill cipher, is used to encrypt and decrypt the message.
PubDate: Mar 2022
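The 2x2 Hill cipher described above is short enough to sketch in full. This is the standard textbook cipher, not the paper's graph-labeling-based key derivation; the key matrix below is an arbitrary example whose determinant is coprime to 26.

```python
def hill_encrypt(text, key):
    """2x2 Hill cipher mod 26; pads with a dummy 'X' to an even length."""
    text = text.upper().replace(" ", "")
    if len(text) % 2:
        text += "X"
    (a, b), (c, d) = key
    out = []
    for i in range(0, len(text), 2):
        p1, p2 = ord(text[i]) - 65, ord(text[i + 1]) - 65
        out.append(chr((a * p1 + b * p2) % 26 + 65))   # c1 = a*p1 + b*p2 (mod 26)
        out.append(chr((c * p1 + d * p2) % 26 + 65))   # c2 = c*p1 + d*p2 (mod 26)
    return "".join(out)

def hill_decrypt(cipher, key):
    """Decrypt by encrypting with the key matrix inverted mod 26
    (requires gcd(det, 26) = 1)."""
    (a, b), (c, d) = key
    det = (a * d - b * c) % 26
    inv = pow(det, -1, 26)                 # modular inverse of the determinant
    ikey = ((d * inv % 26, -b * inv % 26), (-c * inv % 26, a * inv % 26))
    return hill_encrypt(cipher, ikey)

msg = hill_encrypt("GRAPHLABEL", ((3, 3), (2, 5)))
```

The gcd(det, 26) = 1 condition is exactly the "is one" requirement in the abstract: without it the key matrix has no inverse mod 26 and decryption fails.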
- Principal Canonical Correlation Analysis with Missing Data in Small
Samples
Abstract: Publication date: Mar 2022
Source: Mathematics and Statistics, Volume 10, Number 2. Toru Ogura and Shin-ichi Tsukada. Missing data occur in various fields, such as clinical trials and social science. Canonical correlation analysis, often used to analyze the correlation between two random vectors, cannot be performed on a dataset with missing data. Canonical correlation coefficients (CCCs) can, however, also be calculated from a covariance matrix: when the covariance matrix can be estimated by excluding (complete-case and available-case analyses) or imputing (multivariate imputation by chained equations, k-nearest neighbor (kNN), and iterative robust model-based imputation) missing data, the CCCs are estimated from this covariance matrix. CCCs are biased even with fully observed data, and estimated CCCs are usually even larger than the population CCCs when a covariance matrix estimated from a dataset with missing data is used. The purpose is to bring the CCCs estimated from a dataset with missing data close to the population CCCs. The procedure involves three steps. First, principal component analysis is performed on the covariance matrix from the dataset with missing data to obtain the eigenvectors. Second, the covariance matrix is transformed using the first to fourth eigenvectors. Finally, the CCCs are calculated from the transformed covariance matrix. CCCs derived using this procedure are called principal CCCs (PCCCs), and simulation studies and numerical examples confirmed the effectiveness of PCCCs estimated from datasets with missing data. In many of the simulation results, the bias and root-mean-squared error of the PCCC estimated from missing data based on kNN imputation were the smallest. In the numerical example, the first PCCC estimated from the missing data based on kNN is close to the first CCC estimated from the fully observed dataset when the correlation between the two vectors is low. Therefore, PCCCs based on kNN imputation are recommended.
PubDate: Mar 2022
- The Non-Trivial Zeros of The Riemann Zeta Function through Taylor Series
Expansion and Incomplete Gamma Function
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Jamal Salah Hameed Ur Rehman and Iman Al-Buwaiqi The Riemann zeta function ζ(s) is defined for all complex numbers s except for the pole at s = 1. Euler and Riemann found that the function equals zero for all negative even integers −2, −4, −6, ... (commonly known as the trivial zeros) and has an infinite number of zeros in the critical strip of complex numbers between the lines Re(s) = 0 and Re(s) = 1. Moreover, it was well known to Riemann that all non-trivial zeros exhibit symmetry with respect to the critical line Re(s) = 1/2. As a result, Riemann conjectured that all of the non-trivial zeros lie on the critical line; this conjecture is known as the Riemann hypothesis. The Riemann zeta function plays a momentous part in analytic number theory and has applications in applied statistics, probability theory, and physics. It is closely related to one of the most challenging unsolved problems in mathematics (the Riemann hypothesis), which was classified as the 8th of Hilbert's 23 problems. The function is useful in number theory for investigating the anomalous behavior of the prime numbers; if the hypothesis is proven correct, far more can be said about the distribution of the primes. Numerous approaches have been applied towards the solution of this problem, both numerical and geometrical, as well as the Taylor series of the Riemann zeta function and the asymptotic properties of its coefficients. Despite the fact that around 10^13 non-trivial zeros have been verified to lie on the critical line, we cannot assume that the Riemann Hypothesis (RH) is necessarily true unless a lucid proof is provided.
Indeed, there are differing viewpoints not only on the Riemann Hypothesis's reliability, but also on certain basic conclusions; see for example [16], in which the author justifies the location of the non-trivial zeros subject to a simultaneous-occurrence argument while omitting the impact of an indeterminate form that appears in Riemann's approach. In this study, we also consider the simultaneous occurrence, but we adopt an element-wise approach to the Taylor series by expanding each term n^{-s}, n = 1, 2, 3, ..., at the real parts of the non-trivial zeta zeros lying in the critical strip: for a non-trivial zero of ζ(s), we first expand each term at one of the two symmetric zeros and then at the other. In this sequel, we then evoke the simultaneous occurrence of the non-trivial zeta function zeros on the critical strip by means of different representations of the zeta function. Consequently, this suggests that the Riemann Hypothesis is likely to be true.
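For reference, the standard objects discussed above can be written out explicitly; these are textbook facts, not results specific to this paper's derivation:

```latex
\zeta(s) = \sum_{n=1}^{\infty} n^{-s}, \qquad \operatorname{Re}(s) > 1,
\quad \text{continued analytically to } \mathbb{C}\setminus\{1\},
```

```latex
\zeta(s) = 2^{s}\pi^{s-1}\sin\!\left(\tfrac{\pi s}{2}\right)\Gamma(1-s)\,\zeta(1-s).
```

The sine factor in the functional equation produces the trivial zeros at s = −2, −4, −6, ..., and the equation relates s to 1 − s, which is the source of the symmetry of the non-trivial zeros about the critical line Re(s) = 1/2.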
PubDate: Mar 2022
- On The Unconditional Run Length Distribution and Percentiles for The
-chart When The In-control Process Parameter Is Estimated
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Ugwu Samson O. Uchenna Nduka C. Ezra Precious N. Ugwu Gibson C. Odoh Paschal N. and Nwafor Cynthia N. It is well known that the median is a better measure of location for skewed distributions. The run-length (RL) distribution is skewed; hence, the median run length measures chart performance better than the average run length. Some authors have advocated examination of the entire set of percentiles of the RL distribution in assessing chart performance. Such work already exists for the Shewhart −chart, the CUSUM chart, combined CUSUM and EWMA charts, Hotelling's chi-square chart, and the two simple Shewhart multivariate non-parametric charts. Similar work on the -chart for the one- and two-sided cases is lacking in the literature; this work fills that gap. Therefore, a detailed comparative study of the one-sided upper and the two-sided -control charts for m reference samples at fixed sample size and false alarm rate is carried out here using the information from the unconditional RL cdf curve and its percentiles (mainly the median). The order of the RL cdf curves of the one-sided upper -chart is independent of the state of the process, unlike in the two-sided case. The one-sided upper chart outperformed the two-sided one both in the in-control case and in detecting positive shifts. The two-sided -chart is more sensitive to incremental shifts than to decremental shifts.
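The gap between median and average run length is easy to see in the simplest case. For a Shewhart-type chart with known parameters and per-sample signal probability p, the run length is geometric, so any percentile has a closed form; this sketch illustrates why the median sits well below the ARL for this skewed distribution (it does not reproduce the paper's estimated-parameter, unconditional-RL computations):

```python
import math

def rl_percentile(p, q):
    """q-th percentile of a geometric run-length distribution with
    per-sample signal probability p: smallest n with P(RL <= n) >= q."""
    return math.ceil(math.log(1.0 - q) / math.log(1.0 - p))

p = 0.0027            # in-control signal probability of a 3-sigma chart
arl = 1.0 / p         # average run length, about 370.4
mrl = rl_percentile(p, 0.5)   # median run length, noticeably smaller
```

The skewness means roughly half of all in-control runs signal well before the ARL would suggest, which is the motivation for reporting percentiles.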
PubDate: Mar 2022
- Some Inequalities for n-times Differentiable
Strongly Convex Functions
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Duygu Dönmez Demir and Gülsüm Şanal The theory of inequalities is in continuous development and has become a quite effective and powerful tool in various branches of mathematics for solving many problems. Convex functions are closely related to the theory of inequalities, and many important inequalities result from applications of convex functions. Recently, results obtained for convex functions have been extended to strongly convex functions. In our previous studies, the perturbed trapezoid inequality obtained for convex functions was extended to n-times differentiable functions. This study deals with some general identities introduced for n-times differentiable strongly convex functions. Besides, new inequalities related to the general perturbed trapezoid inequality are constructed. These inequalities are obtained for classes of functions whose n-th derivatives, in absolute value, are strongly convex. It is seen that the new classes of strongly convex functions reduce to those obtained for convex functions under certain conditions. Considering the upper bounds obtained for strongly convex functions, it is concluded that they are tighter than those obtained for convex functions.
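For context, the standard definition that these function classes build on is strong convexity with modulus c > 0 (in the scalar case); the paper's specific identities are not reproduced here:

```latex
f\bigl(t x + (1-t) y\bigr) \le t f(x) + (1-t) f(y) - c\, t(1-t)\,(x-y)^2,
\qquad x, y \in I,\; t \in [0,1].
```

Setting c = 0 recovers ordinary convexity, which is why results for strongly convex functions reduce to the convex case under the limiting condition, and why the extra negative term tightens the resulting upper bounds.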
PubDate: Mar 2022
- Modified Profile Likelihood Estimation in the Lomax Distribution
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Maisoun Sewailem and Ayman Baklizi In this paper, we consider improving maximum likelihood inference for the scale parameter of the Lomax distribution. The improvement is based on modifications to the maximum likelihood estimator derived from the Barndorff-Nielsen modification of the profile likelihood function. We apply these modifications to obtain improved estimators for the scale parameter of the Lomax distribution in the presence of a nuisance shape parameter. Due to the complicated expression for Barndorff-Nielsen's modification, several approximations to it are considered in this paper, including the modification based on empirical covariances and the approximation based on suitably derived approximate ancillary statistics. We obtained the approximations for the Lomax profile likelihood function and the corresponding modified maximum likelihood estimators. They are not available in simple closed forms and can be obtained numerically as roots of some complicated likelihood equations. Comparisons between the maximum profile likelihood estimator and the modified profile likelihood estimators in terms of their biases and mean squared errors were carried out using simulation techniques. We found the approximation based on empirical covariances to have the best performance according to the criteria used. Therefore, we recommend using this modified version of the maximum likelihood estimator for the Lomax scale parameter, especially for small sample sizes with heavy censoring, which is quite common in industrial life testing experiments and reliability studies. An example based on real data is given to illustrate the methods considered in this paper.
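The unmodified baseline that the paper improves on can be sketched directly. For the Lomax density f(x) = (α/λ)(1 + x/λ)^{−(α+1)}, the shape α has the closed-form conditional MLE α̂(λ) = n / Σ log(1 + x_i/λ), which gives a one-dimensional profile log-likelihood in the scale λ; the sample sizes, parameter values, and the grid search below are illustrative choices, and the Barndorff-Nielsen adjustment itself is not implemented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate Lomax(shape=alpha, scale=lam) data by inverse-CDF sampling:
# F(x) = 1 - (1 + x/lam)^(-alpha)  =>  x = lam * ((1-U)^(-1/alpha) - 1)
alpha_true, lam_true, n = 2.0, 3.0, 2000
u = rng.uniform(size=n)
x = lam_true * ((1.0 - u) ** (-1.0 / alpha_true) - 1.0)

def profile_loglik(lam):
    """Profile log-likelihood of the scale lam: the nuisance shape is
    replaced by its conditional MLE  alpha_hat = n / sum(log(1 + x/lam))."""
    s = np.sum(np.log1p(x / lam))
    a = n / s
    return n * np.log(a) - n * np.log(lam) - (a + 1.0) * s

# Maximize over a grid (a simple stand-in for a proper 1-D optimizer)
grid = np.linspace(0.2, 12.0, 2000)
lam_hat = grid[np.argmax([profile_loglik(lam) for lam in grid])]
```

The paper's modified estimators adjust this profile likelihood before maximizing, which matters most for small, heavily censored samples.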
PubDate: Mar 2022
- Fractional Variational Orthogonal Collocation Method for the Solution of
Fractional Fredholm Integro-Differential Equation Using Mamadu-Njoseh
Polynomials
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Jonathan Tsetimi and Ebimene James Mamadu The use of orthogonal polynomials as basis functions, via a suitable approximation scheme, for the solution of many problems in science and technology has been on the increase. In many numerical schemes, convergence depends solely on the nature of the basis function adopted. The Mamadu-Njoseh polynomials are orthogonal polynomials developed in 2016 with reference to a weight function, and they bear the same convergence rate as the Chebyshev polynomials. Thus, in this paper, the fractional variational orthogonal collocation method (FVOCM) is proposed for the solution of the fractional Fredholm integro-differential equation using Mamadu-Njoseh polynomials (MNP) as basis functions. The proposed method is a blend of the variational iteration method (VIM) and the orthogonal collocation method (OCM). The VIM is one of the popular methods available to researchers for seeking the solution of both linear and nonlinear differential problems, requiring neither linearization nor perturbation to arrive at the required solution. Collocating at the roots of orthogonal polynomials yields the OCM. In the proposed method, the VIM is initiated to generate the required approximations, producing a series that is then collocated at the orthogonal polynomial roots to derive the unknown parameters. The numerical results show that the method yields a highly accurate and reliable approximation with a high convergence rate. We also present the existence and uniqueness of the solution of the method. All computational frameworks in this research are performed via MAPLE 18 software.
PubDate: Mar 2022
- Solution of 1st Order Stiff Ordinary Differential Equations Using Feed
Forward Neural Network and Bayesian Regularization Algorithm
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Rootvesh Mehta Sandeep Malhotra Dhiren Pandit and Manoj Sahni A stiff equation is a differential equation for which certain numerical methods are unstable unless the step length is taken to be extremely small. A stiff differential equation includes terms that can cause rapid variation in the solution, so when integrating it numerically the requisite step length can become impractically small. Much of the variation is observed where the solution curve straightens out to approach a line with slope almost zero; the phenomenon of stiffness is observed when the step size is forced to be unacceptably small precisely in a region where the solution curve is very smooth. A lot of work on solving stiff ordinary differential equations (ODEs) has been done by researchers using the numerous numerical methods that currently exist. Extensive research has compared their rates of convergence, number of computations, accuracy, and capability to solve certain types of test problems. In the present work, a Feed Forward Neural Network (FFNN) with a Bayesian regularization algorithm is implemented to solve first-order stiff ordinary differential equations and systems of ordinary differential equations. Using the proposed method, the problems are solved for various time steps, and comparisons are made with available analytical solutions and other existing methods. A problem is simulated using the proposed FFNN model, and accuracy is acquired with less calculation effort and time. The outcomes encourage the use of artificial neural network methods to solve various types of stiff differential equations in the near future.
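The step-length restriction described above can be demonstrated in a few lines on the classic test problem y′ = −1000·y, y(0) = 1 (this is a standard illustration of stiffness, not the paper's FFNN method): explicit Euler is unstable unless h < 2/1000, while the implicit (backward) Euler step stays stable at the same step size:

```python
# y' = lam * y with lam = -1000: exact solution decays to 0 almost instantly.
lam, h, steps = -1000.0, 0.01, 50
y_exp, y_imp = 1.0, 1.0
for _ in range(steps):
    y_exp = y_exp * (1.0 + h * lam)   # explicit Euler: multiplies by -9, blows up
    y_imp = y_imp / (1.0 - h * lam)   # implicit Euler: divides by 11, decays
```

After 50 steps the explicit iterate has exploded while the implicit one has decayed toward the true solution, which is why stiff solvers (and mesh-free approaches like the paper's FFNN) are attractive for such problems.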
PubDate: Mar 2022
- A Branch and Bound Algorithm to Solve Travelling Salesman Problem (TSP)
with Uncertain Parameters
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 S. Dhanasekar Saroj Kumar Dash and Neena Uthaman Computational complexity theory is the core of theoretical computer science and mathematics. It is usually concerned with classifying computational problems into P and NP by their inherent difficulty; no efficient algorithms are known for the hard problems in the latter class. The Travelling Salesman Problem (TSP) is one of the most discussed problems in combinatorial mathematics. The main objective of the TSP is to find a Hamiltonian cycle for which the cost or time is minimum. Many algorithms exist to solve it, but since none of them is efficient in general, many researchers are still working to produce better algorithms. If the description of the parameters is vague, fuzzy notions, which include a membership value, are applied to model the parameters. Still, this modeling does not give an exact representation of the vagueness. The intuitionistic fuzzy set, which includes a non-membership value along with the membership value in its domain, is therefore applied to model the parameters. When the decision variables in the TSP (the cost, time, or distance) are modeled as intuitionistic fuzzy numbers, the TSP is called the intuitionistic fuzzy TSP (InFTSP). We develop an intuitionistic fuzzified version of Little's branch and bound method to solve the intuitionistic fuzzy TSP. This method is effective because it involves only simple arithmetic operations on intuitionistic fuzzy numbers and a ranking of intuitionistic fuzzy numbers. Ordering of intuitionistic fuzzy numbers is vital in optimization problems since it is equivalent to the ordering of alternatives. In this article, we use the weighted arithmetic mean method to order the fuzzy numbers; it satisfies the linearity property, which is a very important characteristic of a ranking function. Numerical examples are solved to validate the given algorithm and the results are discussed.
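The crisp-cost skeleton of branch and bound for the TSP can be sketched as follows; the bound used here (cost so far plus the cheapest outgoing edge of every city still to be left) is one simple choice among many, and the paper's intuitionistic fuzzy version would replace the ordinary cost comparisons with a ranking of intuitionistic fuzzy numbers, which is not reproduced:

```python
import math

def tsp_branch_and_bound(d):
    """Depth-first branch and bound for the TSP on cost matrix d,
    starting and ending at city 0.  Returns the optimal tour cost."""
    n = len(d)
    cheapest = [min(d[i][j] for j in range(n) if j != i) for i in range(n)]
    best = [math.inf]

    def search(path, visited, cost):
        if len(path) == n:
            best[0] = min(best[0], cost + d[path[-1]][0])  # close the tour
            return
        # Lower bound: every remaining edge costs at least the cheapest
        # edge leaving its source city.
        bound = cost + cheapest[path[-1]] + sum(
            cheapest[i] for i in range(n) if i not in visited)
        if bound >= best[0]:
            return                                         # prune this branch
        for j in range(n):
            if j not in visited:
                search(path + [j], visited | {j}, cost + d[path[-1]][j])

    search([0], {0}, 0)
    return best[0]

# Classic 4-city symmetric instance; the optimal tour 0-1-3-2-0 costs 80
d = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
opt = tsp_branch_and_bound(d)
```

Because branches are pruned only when a valid lower bound meets the incumbent, the optimum is never discarded; the fuzzified version keeps this structure and changes only how costs are added and compared.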
PubDate: Mar 2022
- Weighted Least Squares Estimation for AR(1) Model With Incomplete Data
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Mohamed Khalifa Ahmed Issa Time series forecasting is the main objective in many applications such as weather prediction, analysis of natural phenomena, and financial or economic analysis. In real-life data analysis, missing data are a problem the researcher faces because of human error, technical damage, catastrophic natural phenomena, etc. When one or more observations are missing, it is important to estimate the model as well as the missing values, which leads to a better understanding of the data and more accurate prediction. Different time series require different techniques to obtain good estimates of those missing values. Traditionally, missing values are simply replaced by mean or mode imputation, deleted, or handled by other methods that are not adequate, as they can cause bias. Among the most popular models for time-series data are the autoregressive models, which forecast future values in terms of previous ones. The first-order autoregressive model, AR(1), is the one in which the current value depends on the immediately preceding value, so estimating the parameters of AR(1) with missing observations is an important topic in time series analysis. Many approaches have been developed to address estimation problems in time series, such as ordinary least squares (OLS) and Yule-Walker (YW). Here, a method is suggested for estimating the parameters of the model by weighted least squares (WLS), and the properties of the WLS estimator are investigated. Moreover, a comparison between these methods for the AR(1) model with missing observations is conducted through a Monte Carlo simulation at various sample sizes and different proportions of missing observations, in terms of mean square error (MSE) and mean absolute error (MAE). The results of the simulation study indicate that the WLS estimator can be considered the preferable method of estimation. Real time-series data with missing observations were also estimated.
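The estimators being compared can be sketched on a complete AR(1) series. The OLS estimator of the coefficient is the standard regression of y_t on y_{t−1}; the weight function shown for WLS is a hypothetical illustrative choice, not the paper's, and the missing-data mechanism is omitted:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate a complete AR(1) series: y_t = phi * y_{t-1} + e_t
phi_true, n = 0.6, 2000
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()

y_lag, y_cur = y[:-1], y[1:]

# OLS estimator of phi
phi_ols = np.sum(y_cur * y_lag) / np.sum(y_lag ** 2)

# WLS estimator with illustrative weights (a hypothetical choice):
# down-weight observations with large lagged magnitude
w = 1.0 / (1.0 + np.abs(y_lag))
phi_wls = np.sum(w * y_cur * y_lag) / np.sum(w * y_lag ** 2)
```

Any weights that are independent of the innovations keep the estimator consistent; the paper's contribution is choosing weights that perform well when observations are missing.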
PubDate: Mar 2022
- Introduction to Applied Algebra: Book Review of Chapter 8-Linear Equations
(System of Linear Equations)
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Elvis Adam Alhassan Kaiyu Tian and Adjabui Michael This chapter review presents two ideas and techniques for solving systems of linear equations in a simple, straightforward manner, to enable the student as well as the instructor to follow it independently with very little guidance. The focus is on using simpler and easier approaches, namely determinants and elementary row operations, to solve systems of linear equations. With determinants, we found the solution set of several systems by Cramer's rule: each variable, in the order in which it appears, is the ratio of the determinant of the matrix formed by replacing the corresponding column of the coefficient matrix with the right-hand-side vector to the determinant of the coefficient matrix. Similarly, we used the three types of elementary row operations, namely row swap, scalar multiplication, and row sum, to reduce systems through row echelon form to reduced row echelon form and so find their solution sets. Technical forms of systems of linear equations were used to illustrate the two approaches. In each approach we started by finding the coefficient matrix of the system.
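The determinant-ratio technique described above (Cramer's rule) can be sketched directly; row reduction, the chapter's other approach, is what general-purpose solvers use under the hood:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A), where
    A_i is A with column i replaced by the right-hand-side vector b."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("coefficient matrix is (near-)singular")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b        # replace column i by b
        x[i] = np.linalg.det(Ai) / d
    return x

# 2x + y - z = 8, -3x - y + 2z = -11, -2x + y + 2z = -3  =>  (2, 3, -1)
A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
b = [8, -11, -3]
x = cramer_solve(A, b)
```

Cramer's rule is instructive but costs one determinant per variable, so elimination through (reduced) row echelon form is preferred for anything beyond small systems.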
PubDate: Mar 2022
- On Tensor Product and Colorability of Graphs
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Veninstine Vivik J Sheeba Merlin G P. Xavier and Nila Prem JL The graph coloring problem (GCP) plays a vital role in the allotment of resources, leading to proper utilization and savings in labor, space, time, and cost. The GCP for a graph assigns colors to its nodes so that adjacent nodes receive different colors; the minimum number of colors needed is known as the chromatic number. This work considers the tensor product of two graphs, which yields a complex graph, and addresses the complexity that arises. Load balancing on such complex networks is a hefty task, and among the various methods in graph theory, coloring is a comparatively simple tool for unveiling intricate, challenging networks. Furthermore, node coloring helps in classifying the nodes of any network into the least number of classes, so coloring is applied to balance the allocations in such complex networks. We construct the tensor products of a path with the wheel and helm graphs, and of a cycle with the sunlet and closed helm graphs, and characterize their structure. Coloring is then applied to the nodes of the resulting graphs to determine optimal bounds. Hence we obtain the chromatic number for the tensor products of , , and .
PubDate: Mar 2022
- Application of the Fast Expansion Method in Space–Related Problems
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Mikhail Ivanovich Popov Aleksey Vasilyevich Skrypnikov Vyacheslav Gennadievich Kozlov Alexey Viktorovich Chernyshov Alexander Danilovich Chernyshov Sergey Yurievich Sablin Vladimir Valentinovich Nikitin and Roman Alexandrovich Druzhinin In the paper, numerical and approximate analytical solutions are obtained for the problem of the motion of a spacecraft from a starting point to a final point in a given time. The unpowered and powered portions of the flight are considered. For the numerical solution, a finite-difference scheme of second-order accuracy is constructed. The space-related problem considered in the study is essentially nonlinear, which necessitates trigonometric interpolation methods that replace the calculation of Fourier coefficients by integral formulas with the solution of an interpolation system. One of the simplest options for trigonometric sine interpolation on a semi-closed segment [−a, a), where the right end is not included in the general system of interpolation points, is considered. In order to maintain the orthogonality conditions for the sines, an even number 2M of calculation points is distributed uniformly over the segment. The sine interpolation theorem is proved and a compact formula is given for calculating the interpolation coefficients. A general theory of fast sine expansion is given. It is shown that in this case the Fourier coefficients decrease much faster with increasing index than the Fourier coefficients in the classical case. This property allows reducing the number of terms retained in the Fourier series as well as the amount of computer calculation, while increasing the accuracy of the calculations. The obtained solutions are analyzed and compared with the exact solution of a test problem.
For the same calculation error, the computer time taken by the fast expansion method is hundreds of times less than that of the classical finite-difference method.
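The key idea, computing sine-series coefficients from a plain sum over interpolation nodes instead of an integral, can be sketched with the standard discrete sine interpolation on an open interval (0, a); the paper's scheme on the semi-closed segment [−a, a) differs in the node layout but uses the same discrete-orthogonality mechanism:

```python
import numpy as np

# With interior nodes x_j = j*a/N, j = 1..N-1, the functions
# sin(k*pi*x/a) are discretely orthogonal, so each coefficient is a sum.
a, N = 1.0, 16
j = np.arange(1, N)
x = j * a / N

def sine_coeffs(f):
    """b_k in f(x) ~ sum_k b_k sin(k*pi*x/a), for k = 1..N-1."""
    fx = f(x)
    return np.array([(2.0 / N) * np.sum(fx * np.sin(k * np.pi * j / N))
                     for k in range(1, N)])

# A pure k = 2 mode is recovered exactly: b_2 = 1, all other b_k = 0
b = sine_coeffs(lambda t: np.sin(2 * np.pi * t / a))
```

Because the coefficients come from an exact discrete sum, no quadrature error is introduced, which is part of why the fast expansion can match the finite-difference accuracy at far lower cost.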
PubDate: Mar 2022
- Generalized Family of Group Chain Sampling Plans Using Minimum Angle
Method (MAM)
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Mohd Azri Pawan Teh Nazrina Aziz and Zakiyah Zain This research develops a generalized family of group chain sampling plans using the minimum angle method (MAM). The MAM is a method in which both the producer's and the consumer's risks are considered when designing the sampling plans. Three sampling plans are nested under the family of group chain acceptance sampling: group chain sampling plans (GChSP-1), new two-sided group chain sampling plans (NTSGChSP-1), and two-sided group chain sampling plans (TSGChSP-1). The methodology applies random values of the fraction defective for both producer and consumer, and the optimal number of groups is obtained using the Scilab software. The findings reveal that some of the design parameters yield an optimal number of groups corresponding to the smallest angle, while some of the values fail to do so. The plans obtained in this research guarantee that the producer and the consumer are each protected, with a risk of at most 10%, from defective items.
PubDate: Mar 2022
- New Group Chain Sampling Plan (NGChSP-1) for Generalized Exponential
Distribution
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Nazrina Aziz Tan Jia Xin Zakiyah Zain and Mohd Azri Pawan Teh Acceptance criteria are the conditions imposed on any sampling plan to determine whether a lot is accepted or rejected. The group chain sampling plan (GChSP-1) was constructed with 5 acceptance criteria; the modified group chain sampling plan (MGChSP-1) was derived with 3 acceptance criteria; later, the new group chain sampling plan (NGChSP-1) was introduced with 4 acceptance criteria, balancing the acceptance criteria between the GChSP-1 and MGChSP-1. Producers favor a sampling plan with more acceptance criteria because it reduces the probability of rejecting a good lot (producer's risk), whereas consumers may prefer a sampling plan with fewer acceptance criteria, as it reduces the probability of accepting a bad lot (consumer's risk). This disparity in acceptance criteria creates a conflict between the two main stakeholders in acceptance sampling. In the literature, numerous methods are available for developing sampling plans; to date, the NGChSP-1 has been developed using the minimum angle method. In this paper, the NGChSP-1 is constructed by the method of minimizing the consumer's risk for the generalized exponential distribution, where mean product lifetime is used as the quality parameter. Six phases are involved in developing the NGChSP-1 for different design parameters. Results show that the minimum number of groups decreases as the values of the design parameters increase. The performance comparison shows that the NGChSP-1 is a better sampling plan than the GChSP-1 because it has a smaller number of groups and a lower probability of lot acceptance. The NGChSP-1 should offer a better alternative to industrial practitioners in sectors involving product life tests.
PubDate: Mar 2022
- Reversible Jump MCMC Algorithm for Transformed Laplacian AR: Application
in Modeling CO2 Emission Data
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Suparman Hery Suharna Mahyudin Ritonga Fitriana Ibrahim Tedy Machmud Mohd Saifullah Rusiman Yahya Hairun and Idrus Alhaddad The autoregressive (AR) model is applied to model various types of data. For confidential data, masking the data is very important to protect it from unauthorized parties. This paper aims to model data with transformations in the AR model. In this AR model, the noise has a Laplace distribution. The AR model parameters include the order, the coefficients, and the variance of the noise. Estimation of the AR model parameters is carried out by a Bayesian method using the reversible jump Markov Chain Monte Carlo (MCMC) algorithm. This paper shows that the posterior distribution of the AR model parameters has a complicated form, so the Bayes estimator cannot be determined analytically. Bayes estimators for the AR model parameters are therefore calculated using the reversible jump MCMC algorithm, which was validated through a simulation study. The algorithm can accurately estimate the parameters of the transformed AR model with Laplacian noise, and it produces an AR model that satisfies the stationarity conditions. The novelty of this paper is the use of transformations in the Laplacian AR model to secure research data when the results are published in a scientific journal. As an example application, the Laplacian AR model was used to model CO2 emission data. The results can be applied to modeling and forecasting confidential data in various sectors.
PubDate: Mar 2022
- A New Algorithm for Spectral Conjugate Gradient in Nonlinear Optimization
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Ahmed Anwer Mustafa Nonlinear conjugate gradient (CG) algorithms have been used to solve large-scale unconstrained optimization problems. Because of their minimal memory needs and global convergence qualities, they are widely used in a variety of fields, and the approach has lately undergone many investigations and modifications to enhance it. Optimization is significant in daily life: whatever we do, we strive for the best outcome, such as the highest profit, the lowest loss, the shortest road, or the shortest time, which are referred to as minima and maxima in mathematics. For multidimensional unbounded objective functions, the spectral conjugate gradient (SCG) approach is a strong tool. In this study, we describe a new SCG technique and quantify its performance. Under suitable assumptions, we established the descent condition, a sufficient descent theorem, the conjugacy condition, and global convergence criteria using the strong Wolfe-Powell line search. Numerical data and graphs were produced using classical benchmark functions to demonstrate the efficacy of the recommended approach. According to the numerical results, the suggested strategy is more efficient than some current techniques. In addition, we show how the new method may be utilized to improve solutions and outcomes.
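The family of methods being improved can be sketched with the plain Fletcher-Reeves nonlinear CG iteration; this is a generic baseline with an Armijo backtracking line search and a descent safeguard, not the paper's spectral variant or its strong Wolfe-Powell search, and the test function is an arbitrary convex quadratic:

```python
import numpy as np

def fletcher_reeves(grad, f, x, iters=500):
    """Fletcher-Reeves nonlinear conjugate gradient with Armijo
    backtracking (generic baseline sketch)."""
    g = grad(x)
    d = -g
    for _ in range(iters):
        t = 1.0
        for _ in range(60):                       # backtracking line search
            if f(x + t * d) <= f(x) + 1e-4 * t * (g @ d):
                break
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        if g_new @ g_new < 1e-20:                 # gradient ~ 0: converged
            return x
        beta = (g_new @ g_new) / (g @ g)          # Fletcher-Reeves beta
        d = -g_new + beta * d
        if g_new @ d >= 0:                        # safeguard: restart if not descent
            d = -g_new
        g = g_new
    return x

f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] - 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] - 2.0)])
x_min = fletcher_reeves(grad, f, np.array([0.0, 0.0]))
```

Spectral variants modify the search direction with a spectral scaling parameter; the skeleton of the iteration (direction update, line search, beta formula) is the same.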
PubDate: Mar 2022
- Estimating Weibull Parameters Using Maximum Likelihood Estimation and
Ordinary Least Squares: Simulation Study and Application on Meteorological
Data
Abstract: Publication date: Mar 2022
Source:Mathematics and Statistics Volume 10 Number 2 Nawal Adlina Mohd Ikbal Syafrina Abdul Halim and Norhaslinda Ali Inefficient estimation of distribution parameters for the current climate will lead to misleading results for the future climate. Maximum likelihood estimation (MLE) is widely used to estimate the parameters; however, MLE does not perform well for small sample sizes. Hence, the objective of this study is to compare the efficiency of MLE with ordinary least squares (OLS) through a simulation study and a real-data application to wind speed data, based on the model selection criteria Akaike information criterion (AIC) and Bayesian information criterion (BIC). The Anderson-Darling (AD) test is also performed to validate the proposed distribution. In summary, OLS is better than MLE when dealing with small sample sizes and when estimating the shape parameter, while MLE is capable of estimating the scale parameter. Both methods perform well at large sample sizes.
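The OLS approach for the Weibull can be sketched via the usual Weibull-plot linearization: since F(x) = 1 − exp(−(x/λ)^k), the transform ln(−ln(1 − F)) = k·ln x − k·ln λ is a straight line, so a least-squares fit on plotting positions estimates both parameters. The sample size, true parameters, and median-rank formula below are illustrative choices, and the MLE alternative would maximize the Weibull likelihood instead:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.sort(rng.weibull(2.0, size=500))       # shape k = 2, scale lam = 1

n = len(x)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank plotting positions
X = np.log(x)
Y = np.log(-np.log(1.0 - F))

k_hat, intercept = np.polyfit(X, Y, 1)        # slope estimates the shape k
lam_hat = np.exp(-intercept / k_hat)          # intercept = -k * ln(lam)
```

This regression-based estimator needs no iterative optimization, which is part of why it behaves well at small sample sizes.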
PubDate: Mar 2022
- Limit Theorems for The Sums of Random Variables in A Special Form
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Azam A. Imomov and Zuhriddin A. Nazarov In this paper, we consider some functionals of sums of independent identically distributed random variables. Such functionals of sums are important in probabilistic models and stochastic branching systems, and in connection with these applications we are interested in whether the law of large numbers and the central limit theorem hold for these sums. The main hypotheses of the paper are the existence of second-order moments of the variables and the fulfillment of the Lindeberg condition. The object of study consists of specially constructed random variables built from sums of independent random variables. In total, six different sums of a special form are studied in the paper; these sums have not previously been studied by other authors. The purpose of the paper is to examine whether these sums of a special form satisfy the conclusions of the law of large numbers and the central limit theorem. The main result is that the law of large numbers and the conclusions of the classical limit theorem hold in some cases. The results obtained are of theoretical importance; the central limit theorem analogues proved here are applications of the Lindeberg theorem. The results can be applied to determining the fluctuation of branching systems with immigration as well as the asymptotic state of autoregression processes. The main results can also be used in practical lessons on probability theory and will be a useful guide for young researchers; the theorems proved here can be used in probability theory, stochastic branching systems, and other practical problems.
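The classical benchmark that the paper's special sums are compared against can be checked by Monte Carlo: standardized sums of iid variables with finite variance should be approximately N(0, 1). The choice of Uniform(0, 1) summands and the sample sizes below are illustrative, and this does not reproduce the paper's six special sums:

```python
import random

random.seed(0)

# Standardized sums Z = (S_n - n*mu) / (sigma * sqrt(n)) for iid U(0,1):
# mu = 1/2, sigma^2 = 1/12.  By the CLT, Z is approximately N(0, 1).
n, reps = 400, 2000
mu, sigma = 0.5, (1.0 / 12.0) ** 0.5
z = [(sum(random.random() for _ in range(n)) - n * mu) / (sigma * n ** 0.5)
     for _ in range(reps)]

# About 95% of a standard normal sample falls within +/- 1.96
share_within = sum(abs(v) < 1.96 for v in z) / reps
```

The Lindeberg condition is what extends this behavior beyond the identically distributed case, which is the regime the paper's theorems address.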
PubDate: Jan 2022
- Fuzzy EOQ Model for Time Varying Deterioration and Exponential Time
Dependent Demand Rate under Inflation
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 K. Geetha and S. P. Reshma In this study we discuss a fuzzy EOQ model for deteriorating products with time-varying deterioration under inflation and an exponential time-dependent demand rate. Shortages are not allowed in this fuzzy EOQ model, and the impact of inflation is investigated. An inventory model is used to determine whether the order quantity is more than or equal to a predetermined quantity for deteriorating items. The optimal solution for the model is derived by taking a truncated Taylor series approximation to obtain a closed-form optimal solution. The cost of deterioration, the cost of ordering, the cost of holding, and the time taken to settle the delay in account are modeled using triangular fuzzy numbers, which are then used to estimate the optimal order quantity and cycle duration. Furthermore, we use the graded mean integration method and the signed distance approach to defuzzify these values. To validate the model, numerical examples are discussed for all cases with the help of a sensitivity analysis for different parameters. Finally, it is established that a higher decay rate results in a shorter ideal cycle time as well as a higher overall relevant cost. The presented model can accommodate demand as a quadratic function of time, stock-level- and time-dependent demand, selling price, and other variables.
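The two defuzzification rules named above have standard closed forms for a triangular fuzzy number (a, b, c); these are the usual textbook formulas, shown without the EOQ model itself:

```python
def graded_mean_integration(a, b, c):
    """Graded mean integration representation of a triangular
    fuzzy number (a, b, c)."""
    return (a + 4.0 * b + c) / 6.0

def signed_distance(a, b, c):
    """Signed distance of a triangular fuzzy number (a, b, c) from zero."""
    return (a + 2.0 * b + c) / 4.0
```

Both reduce to the crisp value b when a = b = c; they differ in how heavily the mode b is weighted against the spread, so the two defuzzified optima in the paper need not coincide for asymmetric costs.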
PubDate: Jan 2022
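The two defuzzification methods named above have well-known closed forms for a triangular fuzzy number (a, b, c): graded mean integration gives (a + 4b + c)/6 and the signed distance gives (a + 2b + c)/4. A minimal sketch, with a hypothetical fuzzy holding cost whose numbers are illustrative and not from the paper:

```python
def graded_mean_integration(a, b, c):
    """Graded mean integration representation of triangular fuzzy number (a, b, c)."""
    return (a + 4 * b + c) / 6.0

def signed_distance(a, b, c):
    """Signed distance of triangular fuzzy number (a, b, c) from the origin."""
    return (a + 2 * b + c) / 4.0

# Defuzzify a hypothetical fuzzy holding cost (8, 10, 13) per unit per year.
print(graded_mean_integration(8, 10, 13))  # 10.1666...
print(signed_distance(8, 10, 13))          # 10.25
```

The two methods generally give slightly different crisp values, which is why the abstract compares the optimal order quantity and cycle time under both.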
- Newton-PKSOR with Quadrature Scheme in Solving Nonlinear Fredholm Integral
Equations
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Labiyana Hanif Ali Jumat Sulaiman and Azali Saudi In this study, we apply Newton's method with a new version of KSOR, called PKSOR, to form NPKSOR for solving nonlinear Fredholm integral equations of the second kind. PKSOR updates the KSOR method with two relaxation parameters. A property of KSOR is the enlargement of the domain from which the relaxation parameter can take its value. In PKSOR, the single relaxation parameter of KSOR is replaced by two different relaxation parameters, which results in a lower number of iterations than the KSOR method. By combining Newton's method with PKSOR, we aim to form a more efficient method for solving nonlinear Fredholm integral equations. The discretization in this study uses a first-order quadrature scheme to develop a nonlinear system. We formulate the solution of the nonlinear system by reducing it to a linear system and then solving it with iterative methods to obtain an approximate solution. Furthermore, we compare the results of the proposed method with the NKSOR and NGS methods on three examples. Based on our findings, the NPKSOR method is more efficient than the NKSOR and NGS methods: considering two relaxation parameters boosts the convergence rate of the iteration, resulting in a lower number of iterations and reduced computational time.
PubDate: Jan 2022
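A hedged sketch of the overall scheme: quadrature discretization of a nonlinear Fredholm equation of the second kind, a Newton outer iteration, and an inner SOR solve that alternates two relaxation parameters in the spirit of PKSOR. The kernel, test equation, and parameter values below are illustrative assumptions, not the paper's examples:

```python
def npksor_demo(omega1=1.1, omega2=0.9):
    """Newton outer iteration + two-parameter SOR inner solve (PKSOR-style,
    illustrative only) for a nonlinear Fredholm equation discretized by the
    trapezoidal rule: u(x) - 0.5*int_0^1 x*t*u(t)^2 dt = 7x/8, exact u(x) = x."""
    n = 41
    h = 1.0 / (n - 1)
    t = [i * h for i in range(n)]
    w = [h] * n
    w[0] = w[-1] = h / 2.0          # trapezoidal weights
    f = [7.0 * ti / 8.0 for ti in t]
    u = f[:]                        # initial guess

    for _ in range(8):              # Newton iterations
        integral = 0.5 * sum(w[j] * t[j] * u[j] ** 2 for j in range(n))
        F = [u[i] - f[i] - t[i] * integral for i in range(n)]
        # Jacobian: J[i][k] = (i == k) - t[i] * w[k] * t[k] * u[k]
        d = [0.0] * n               # Newton step: solve J d = -F by SOR
        for _sweep in range(400):
            change = 0.0
            for i in range(n):
                omega = omega1 if i % 2 == 0 else omega2  # two relaxation parameters
                s = sum(-t[i] * w[k] * t[k] * u[k] * d[k]
                        for k in range(n) if k != i)
                jii = 1.0 - t[i] * w[i] * t[i] * u[i]
                di = (1.0 - omega) * d[i] + omega * (-F[i] - s) / jii
                change = max(change, abs(di - d[i]))
                d[i] = di
            if change < 1e-13:
                break
        u = [ui + di for ui, di in zip(u, d)]
    return t, u

t, u = npksor_demo()
print(max(abs(ui - ti) for ui, ti in zip(u, t)) < 1e-3)  # True: recovers u(x) = x
```

The right-hand side 7x/8 is manufactured so the exact solution is u(x) = x; the remaining error is the O(h^2) quadrature error of the trapezoidal rule.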
- Modelling of Cointegration with Student's T-errors
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Nimitha John and Balakrishna Narayana Two or more non-stationary time series are said to be cointegrated if a certain linear combination of them becomes stationary. Identification of cointegrating relationships among the relevant time series helps researchers develop efficient forecasting methods. The classical approach to analyzing such series is to express the cointegrated time series in the form of error correction models with Gaussian errors. However, the modeling and analysis of cointegration in the presence of non-normal errors needs to be developed, as most real time series in finance and economics deviate from the assumption of normality. This paper focuses on modeling a bivariate cointegration with Student's t distributed errors. The cointegrating vector obtained from the error correction equation is estimated by the method of maximum likelihood. A unit root test for a first-order non-stationary process with Student's t errors is also defined. The resulting estimators are used to construct test procedures for testing the unit root and the cointegration associated with two time series. The likelihood equations are solved numerically because the estimating equations have no explicit solution. A simulation study is carried out to illustrate the finite sample properties of the model, and the experiments show that the estimates perform reasonably well. The applicability of the model is illustrated by analyzing time series of Bombay Stock Exchange indices and crude oil prices, for which the proposed model is found to be a good fit.
PubDate: Jan 2022
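A minimal simulation sketch of the setting (illustrative values; not the paper's estimator, which is full maximum likelihood): a random-walk regressor with heavy-tailed Student's t errors, where even plain OLS estimates the cointegrating coefficient very accurately because of superconsistency:

```python
import math
import random

random.seed(7)

def student_t(df):
    """Draw from Student's t via normal / sqrt(chi-square/df); the chi-square
    draw uses the gamma distribution with shape df/2 and scale 2."""
    z = random.gauss(0.0, 1.0)
    chi2 = random.gammavariate(df / 2.0, 2.0)
    return z / math.sqrt(chi2 / df)

# x_t: a random walk (non-stationary); y_t = beta * x_t + t-distributed error,
# so (y, x) are cointegrated with cointegrating vector (1, -beta).
beta, n = 2.0, 2000
x, level = [], 0.0
for _ in range(n):
    level += random.gauss(0.0, 1.0)
    x.append(level)
y = [beta * xi + student_t(5) for xi in x]

# OLS (no intercept) on the levels: superconsistent for beta.
beta_hat = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
print(round(beta_hat, 3))
```

The residuals y - beta_hat * x are then stationary, which is what a residual-based cointegration test checks.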
- Expectation-Maximization Algorithm Estimation Method in Automated Model
Selection Procedure for Seemingly Unrelated Regression Equations Models
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Nur Azulia Kamarudin Suzilah Ismail and Norhayati Yusof Model selection is the process of choosing a model from a set of possible models. A model's ability to generalise means it can fit both current and future data. Despite the emergence of numerous procedures for selecting models automatically, there has been a lack of studies on procedures for selecting multiple-equation models, particularly seemingly unrelated regression equations (SURE) models. Hence, this study concentrates on an automated model selection procedure for the SURE model that integrates the expectation-maximization (EM) algorithm estimation method, named SURE(EM)-Autometrics. This procedure extends Autometrics, which is applicable only to a single equation. To assess the performance of SURE(EM)-Autometrics, a simulation analysis was conducted under two strengths of correlation among equations and two levels of significance for a two-equation model with up to 18 variables in the initial general unrestricted model (GUM). Three econometric models were utilised as a testbed for the true-specification search. The results were divided into four categories, where a tight significance level of 1% contributed a high percentage of models in which all equations contained variables precisely matching the true specifications. An empirical comparison of four model selection techniques was then conducted using water quality index (WQI) data. System selection, which selects all equations in the model simultaneously, proved more efficient than single-equation selection, and SURE(EM)-Autometrics dominated the comparison by ranking at the top for most of the error measures. Hence, integrating EM algorithm estimation is appropriate for improving the performance of automated model selection procedures for multiple-equation models.
PubDate: Jan 2022
- The Power of Test of Jennrich Statistic with Robust Methods in Testing the
Equality of Correlation Matrices
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Bahtiar Jamili Zaini and Shamshuritawati Md Sharif The Jennrich statistic is a method that can be used to test the equality of two or more independent correlation matrices. However, the Jennrich statistic becomes problematic in the presence of outliers, which can lead to invalid results. When outliers exist in the data, they affect the Type I error of the Jennrich statistic and reduce the power of the test. To overcome the presence of outliers, this study suggests robust methods as an alternative and integrates robust estimators into the Jennrich statistic, thereby improving the performance of correlation matrix hypothesis tests in the presence of outlier problems. This study therefore proposes three statistical tests, namely the Js-statistic, the Jm-statistic, and the Jmad-statistic, for testing the equality of two or more correlation matrices. The performance of the proposed methods is assessed using the power of the test. The results show that the Jm-statistic and the Jmad-statistic can overcome outlier problems in the Jennrich statistic when testing correlation matrix hypotheses. The Jmad-statistic is also superior in testing the correlation matrix hypothesis for different sample sizes, especially those involving 10% outliers.
PubDate: Jan 2022
- Solving Multi-Response Problem Using Goal Programming Approach and
Quantile Regression
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Sara Abdel Baset Ramadan Hamed Maha El-Ashram and Zakaria Abdel Samea Response surface methodology (RSM) is a group of mathematical and statistical techniques useful for improving, developing, and optimizing processes. It also has important applications in the design, development, and formulation of new products, and is of great help in the enhancement of existing products. RSM is used to discover response functions that meet and fulfill all quality diagnostics simultaneously. Most applications have more than one response, so the main problem is multi-response optimization (MRO). The classical methods used to solve the MRO problem do not guarantee optimal designs and solutions; besides, they take a long time and depend on the researcher's judgment. Therefore, some researchers have used goal-programming-based methods; however, these still do not guarantee an optimal solution. This study aims to form a goal programming model derived from a chance-constrained approach using quantile regression to deal with outliers and non-normal errors. The model describes the relationship between responses and control variables at distinct points of the conditional response distribution and accounts for uncertainty; an illustrative example and a simulation study are presented for the suggested model.
PubDate: Jan 2022
- Some Properties of BP-Space
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Ahmed Talip Hussein and Emad Allawi Shallal Y. Imai and K. Iseki [4], and K. Iseki [5], introduced two classes of abstract algebras called BCK-algebras and BCI-algebras; it is known that the class of BCK-algebras is a proper subclass of the class of BCI-algebras. Q. P. Hu [2] and X. Li [3] introduced a wider class of abstract algebras, BCH-algebras, and showed that the class of BCI-algebras is a proper subclass of the class of BCH-algebras. Moreover, J. Neggers and H. S. Kim [9] introduced the notion of d-algebras, another generalization of BCK-algebras, and investigated the relations between d-algebras and BCK-algebras. They considered various topologies in the study of lattices, but they did not discuss how to make the binary operation of a d-algebra continuous. Topological notions are well known and have been made precise by numerous mathematicians, and topological algebraic structures have been studied by several authors. We define a Tb-algebra, obtain several of its properties, including the most significant ones, and arrive at a new kind of space, called a BP-space, for which we obtain the following results. Let be a B-space and be periodic proportional; then is a compact set in and = , . Also, if is invariant under , then , and are invariant under for every Q in , provided is. If the function is closed (one-to-one), then and () are invariant under , and the set of interior points of is invariant under if the function is open and .
PubDate: Jan 2022
- Solving Differential Equations of Fractional Order Using Combined Adomian
Decomposition Method with Kamal Integral Transformation
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Muhamad Deni Johansyah Asep K Supriatna Endang Rusyaman and Jumadil Saputra A differential equation is an equation that involves derivatives of a dependent variable with respect to one or more independent variables. A derivative represents a rate of change, so a differential equation expresses a relationship between a changing quantity and the change in another quantity. The Adomian decomposition method is an iterative method that can be used to solve differential equations of integer or fractional order, linear or nonlinear, ordinary or partial. It can be combined with integral transformations such as the Laplace, Sumudu, Natural, Elzaki, Mohand, Kashuri-Fundo, and Kamal transforms. The main objective of this research is to solve differential equations of fractional order using a combination of the Adomian decomposition method with the Kamal integral transformation, and the solutions obtained by this combined method are investigated. The main finding of our study is that the combined method is very accurate in solving differential equations of fractional order. The present results are original and new for solving such equations, and an illustrative example is solved to show the efficiency of the proposed method.
PubDate: Jan 2022
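For reference, the Kamal transform used in the combination has the standard definition below, together with two rules (monomials and the first derivative, the latter following by integration by parts) that reduce the derivative terms of an initial value problem to algebra before the Adomian iteration is applied; the fractional-order derivative rule is omitted here:

```latex
% Kamal transform: definition and two standard operational rules.
K[f(t)](v) = \int_0^\infty f(t)\, e^{-t/v}\, dt, \qquad v > 0,
\qquad
K[t^n](v) = n!\, v^{\,n+1},
\qquad
K[f'(t)](v) = \frac{1}{v}\, K[f(t)](v) - f(0).
```

Applying the transform to the equation, solving algebraically for K[u], and inverting term by term yields the series whose nonlinear terms the Adomian polynomials handle.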
- Fuzzy Number – A New Hypothesis and Solution of Fuzzy Equations
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Vijay C. Makwana Vijay. P. Soni Nayan I. Patel and Manoj Sahni In this paper, a new hypothesis of fuzzy numbers is proposed that is more precise and direct. The new approach treats a fuzzy number as an equivalence class on the set of real numbers R, with its algebraic structure and properties developed through both theoretical study and computational results. The newly defined hypothesis provides a well-structured summary that offers both a deeper knowledge of the theory of fuzzy numbers and an extensive view of its algebra. We define a field of the newly defined fuzzy numbers, which opens new directions for fuzzy mathematics. It is shown that, using the newly defined fuzzy number and its membership function, fuzzy equations can be solved in an uncertain environment; we illustrate the solution of fuzzy linear and quadratic equations, and this can be extended to higher-order polynomial equations in future work. Linear fuzzy equations have numerous applications in science and engineering, and the new methodology may lead to simple iterative methods for systems of fuzzy linear equations. This is an innovative and purposeful study of fuzzy numbers in which the newly defined fuzzy number replaces the ordinary fuzzy number.
PubDate: Jan 2022
- A Griffith Crack at the Interface of an Isotropic and Orthotropic Half
Space Bonded Together
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 A. K. Awasthi Rachna and Harpreet Kaur Over the past 53 years, many efforts have been devoted to developing and demonstrating the properties of reinforced composite materials, and the ever-increasing use of composite materials in engineering structures requires proper analysis of the mechanical response of those structures. In the proposed work, we obtain the stress and displacement components, in exact form, for a Griffith crack at the interface of an isotropic and an orthotropic half-space bonded together. These components were previously evaluated in the vicinity of the crack tips by the Fourier transform method; here they are evaluated with the help of Fredholm integral equations. Starting from the problem of Lowengrub and Sneddon, we reduce the problem to dual integral equations; solving these by the method of Srivastava and Lowengrub leads to a coupled Fredholm integral equation, which is then reduced to decoupled Fredholm integral equations of the second kind. The physical interest in fracture design criteria lies in the stress and crack-opening displacement components, and from our solution the stress and displacement components can easily be calculated in exact form.
PubDate: Jan 2022
- Outcomes of Common Fixed Point Theorems in S-metric Space
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Katta Mallaiah and Veladi Srinivas In the present paper, we establish two common fixed point theorems, each with a unique common fixed point, under a new contractive condition for four self-mappings in S-metric space. First, we establish a common fixed point theorem using weaker conditions such as compatible mappings of type-(E) and subsequentially continuous mappings. In the next theorem, we use another set of weaker conditions, sub-compatible and sub-sequentially continuous mappings, which are weaker than occasionally weakly compatible mappings. Moreover, it is observed that the mappings in these two theorems are sub-sequentially continuous but neither continuous nor reciprocally continuous. These two results extend and generalize the existing results of [7] and [9] in S-metric space. Furthermore, we provide suitable examples to justify our outcomes.
PubDate: Jan 2022
- An Approach to Solve Multi Attribute Decision-making Problem Based on the
New Possibility Measure of Picture Fuzzy Numbers
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 K. Deva and S. Mohanaselvi A picture fuzzy set is a more powerful tool for dealing with uncertainty in the given information than fuzzy sets and intuitionistic fuzzy sets, and has active applications in decision-making. The aim of this study is to develop a new possibility measure for ranking picture fuzzy numbers, and some of its basic properties are proved. The proposed method provides the same ranking order as the score function in the literature; moreover, the new possibility measure can provide additional information for the relative comparison of picture fuzzy numbers. A picture fuzzy multi attribute decision-making problem is solved based on the possibility matrix generated by the proposed method after aggregation using the picture fuzzy Einstein weighted averaging aggregation operator. To verify the importance of the proposed method, a picture fuzzy multi attribute decision-making strategy is presented along with an application for selecting a suitable alternative. The superiority of the proposed method and the limitations of existing methods are discussed with the help of a comparative study. Finally, a numerical example and comparative analysis are provided to illustrate the practicality and feasibility of the proposed method.
PubDate: Jan 2022
- A Basic Dimensional Representation of Artin Braid Group , and a General
Burau Representation
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Arash Pourkia Braid groups and their representations are at the center of study not only in low-dimensional topology but also in many other branches of mathematics and theoretical physics. The Burau representation of the Artin braid group, which has two versions, reduced and unreduced, has been the focus of extensive study and research since its discovery in the 1930s. It remains one of the most important representations of the braid group, partly because of its connections to the Alexander polynomial, one of the first and most useful invariants for knots and links. In the present work, we show that interesting representations of the braid group can be achieved using a simple and intuitive approach, in which we analyse the path of the strands in a braid and encode the over-crossings, under-crossings, and no-crossings into parameters. More precisely, at each crossing where, for example, the strand crosses over the strand , we assign t to the top strand and b to the bottom strand. We consider the parameter t as a relative weight given to strand relative to , hence the position for t in the matrix representation; similarly, the parameter b is a relative weight given to strand relative to , hence the position for b in the matrix representation. We show that this simple path-analysing approach leads to an interesting simple representation. Next, we show that, following the same intuitive approach and introducing only one additional parameter, we can greatly improve the representation into one with a much smaller kernel. This more general representation includes the unreduced Burau representation as a special case. Our new path-analysing approach has the advantage of applying a very simple and intuitive method that captures the fundamental interactions of the strands in a braid: we intuitively follow each strand and create a history for it as it interacts with the other strands via over-crossings, under-crossings, and no-crossings, which leads directly to the desired representations.
PubDate: Jan 2022
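For context, the unreduced Burau representation mentioned above sends the generator sigma_i to the identity matrix with the 2x2 block [[1-t, t], [1, 0]] in rows and columns i, i+1. A small sketch (independent of the paper's new representation) that checks the braid relation sigma1 sigma2 sigma1 = sigma2 sigma1 sigma2 in B_3 numerically at a sample value of t:

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def burau_sigma(i, n, t):
    """Unreduced Burau matrix of sigma_{i+1} in B_n: identity with the block
    [[1-t, t], [1, 0]] in rows/columns i, i+1 (0-indexed i)."""
    M = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    M[i][i], M[i][i + 1] = 1.0 - t, t
    M[i + 1][i], M[i + 1][i + 1] = 1.0, 0.0
    return M

t = 0.5  # a sample value of the parameter
s1 = burau_sigma(0, 3, t)
s2 = burau_sigma(1, 3, t)
lhs = matmul(matmul(s1, s2), s1)   # sigma1 sigma2 sigma1
rhs = matmul(matmul(s2, s1), s2)   # sigma2 sigma1 sigma2
print(all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
          for i in range(3) for j in range(3)))  # True: braid relation holds
```

The far commutation relation sigma_i sigma_j = sigma_j sigma_i for |i - j| >= 2 holds trivially because the 2x2 blocks then act on disjoint rows.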
- On Recent Advances in Divisor Cordial Labeling of Graphs
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Vishally Sharma and A. Parthiban An assignment of integers to the vertices of a graph subject to certain constraints is called a vertex labeling of . Different types of graph labeling techniques are used in coding theory, cryptography, radar, missile guidance, X-ray crystallography, etc. A DCL of is a bijective function from the node set of to such that, for each edge , we allot 1 if divides or divides and 0 otherwise, and the absolute difference between the number of edges labeled 1 and the number of edges labeled 0 does not exceed 1, i.e., . If permits a DCL, then it is called a DCG. A complete graph is a graph on nodes in which any 2 nodes are adjacent, and a lily graph is formed by joining , sharing a common node, i.e., , where is a complete bipartite graph and is a path on nodes. In this paper, we propose an interesting conjecture concerning DCL for a given , besides discussing certain general results concerning the DCL of complete-graph-related graphs. We also prove that admits a DCL for all . Further, we establish the DCL of some -related graphs in the context of graph operations such as duplication of a node by an edge, duplication of a node by a node, extension of a node by a node, switching of a node, the degree splitting graph, and the barycentric subdivision of the given .
PubDate: Jan 2022
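For a complete graph the labels 1..n form a bijection onto the vertices and every pair of labels is an edge, so every labeling yields the same counts e(1) and e(0), and a single count settles the divisor cordial question for K_n. A small illustrative checker (not code from the paper):

```python
from itertools import combinations

def divisor_cordial_counts(n):
    """For K_n labeled 1..n, count edges labeled 1 (one endpoint label divides
    the other) and edges labeled 0 (neither divides the other)."""
    e1 = sum(1 for a, b in combinations(range(1, n + 1), 2) if b % a == 0)
    e0 = n * (n - 1) // 2 - e1
    return e1, e0

for n in range(3, 8):
    e1, e0 = divisor_cordial_counts(n)
    print(n, e1, e0, abs(e1 - e0) <= 1)  # True iff K_n is divisor cordial
```

For example, K_5 gives e(1) = e(0) = 5 and is divisor cordial, while K_4 gives e(1) = 4, e(0) = 2 and is not (since every pair a < b is an edge, only b % a == 0 can mark an edge 1).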
- Viscosity Analysis of Lubricating Oil Through the Solution of Exponential
Fractional Differential Equations
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Endang Rusyaman Kankan Parmikanti Diah Chaerani and Khoirunnisa Rohadatul Aisy Muslihin Lubricating oil is still a primary need for people working with machines. An important property of lubricating oil is viscosity, which is closely related to surface tension. Fluid viscosity measures the friction in the fluid, while surface tension is the tendency of the fluid to stretch due to attractive forces between the molecules (cohesion). We want to know how, and to what extent, the viscosity and surface tension of lubricating oil are related. This paper discusses the analysis of a model in the form of an exponential fractional differential equation that states the relationship between surface tension and viscosity of lubricating oil. The Modified Homotopy Perturbation Method (MHPM) is used to determine the solution of the fractional differential equation. This study indicates a relationship between viscosity and surface tension in the form of a fractional differential equation for which the existence and uniqueness of the solution are guaranteed. From the analysis of the solution function, both analytically and geometrically, supported by empirical data, it can be concluded that there is a strong exponential relationship between viscosity and surface tension in lubricating oil.
PubDate: Jan 2022
- A Goal Programming Approach for Generalized Calibration Weights Estimation
in Stratified Random Sampling
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Siham Rabee Ramadan Hamed Ragaa Kassem and Mahmoud Rashwaan The calibration estimation approach is a widely used method for increasing the precision of estimates of population parameters. It works by modifying the design weights as little as possible, minimizing a given distance function to the calibrated weights subject to a set of constraints related to specified auxiliary variables. This paper proposes a goal programming approach for generalized calibration estimation, in which multiple study variables are considered by incorporating multiple auxiliary variables. Almost all of the calibration estimation literature proposes calibrated estimators for the population mean of only one study variable; to the researchers' knowledge, no study has considered the calibration estimation approach for multiple study variables. According to the correlation structure between the study variables, estimation of the calibrated weights is formulated in two different models. The theory of the proposed approach is presented and the calibrated weights are estimated. A simulation study is conducted to evaluate the performance of the proposed approach in different scenarios against some existing calibration estimators. The simulation results for the four generated populations show that the proposed approach is more flexible and efficient than the classical methods.
PubDate: Jan 2022
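For a single auxiliary variable and the chi-square distance, the calibration problem described above has a closed-form solution: minimizing sum((w_k - d_k)^2 / d_k) subject to sum(w_k x_k) = X gives w_k = d_k (1 + lambda x_k) with lambda = (X - sum d_k x_k) / sum d_k x_k^2. A minimal sketch with made-up design weights and totals (the paper's generalized multi-variable goal programming formulation is not reproduced here):

```python
def calibrate_weights(d, x, total_x):
    """Chi-square-distance calibration: w_k = d_k * (1 + lam * x_k), with lam
    chosen so that sum(w_k * x_k) equals the known auxiliary total."""
    sum_dx = sum(dk * xk for dk, xk in zip(d, x))
    sum_dxx = sum(dk * xk * xk for dk, xk in zip(d, x))
    lam = (total_x - sum_dx) / sum_dxx
    return [dk * (1.0 + lam * xk) for dk, xk in zip(d, x)]

d = [10.0, 10.0, 12.0, 8.0]   # design weights (hypothetical sample)
x = [3.0, 5.0, 2.0, 7.0]      # auxiliary variable values on the sample
w = calibrate_weights(d, x, total_x=150.0)
print(round(sum(wk * xk for wk, xk in zip(w, x)), 6))  # 150.0: constraint met
```

The calibrated weights stay close to the design weights while reproducing the known auxiliary total exactly, which is the source of the precision gain.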
- Explicit Formulas and Numerical Integral Equation of ARL for SARX(P,r)L
Model Based on CUSUM Chart
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Suvimol Phanyaem The Cumulative Sum (CUSUM) chart is widely used and has many applications in finance, medicine, engineering, and other fields. In real applications, there are many situations in which the observations of random processes are serially correlated, such as hospital admissions in the medical field, share prices in the economic field, or daily rainfall in the environmental field. The common characteristic used to evaluate the performance of control charts is the Average Run Length (ARL). The primary goals of this paper are to derive an explicit formula and to develop a numerical integral equation for the ARL of the CUSUM chart when the observations follow a seasonal autoregressive model with an exogenous variable, SARX(P,r)L, with exponential white noise. A Fredholm integral equation is used to derive the explicit formula of the ARL, and numerical methods, including the midpoint rule, the trapezoidal rule, Simpson's rule, and the Gaussian rule, are used to approximate the numerical integral equation of the ARL. The uniqueness of the solution is guaranteed by Banach's fixed-point theorem. In addition, the proposed explicit formula is compared with the numerical methods in terms of the absolute percentage difference, to verify the accuracy of the ARL results, and in terms of computational (CPU) time. The results indicate that the ARL from the explicit formula is close to that of the numerical integral equation, with an absolute percentage difference of less than 1%, showing excellent agreement between the explicit formulas and the numerical solutions. An important conclusion of this study is that the explicit formulas outperform the numerical integral equation methods in terms of CPU time. Consequently, the proposed explicit formulas and the numerical integral equation are alternative methods for finding the ARL of the CUSUM control chart and would be of use in fields such as biology, engineering, physics, medicine, and the social sciences.
PubDate: Jan 2022
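A hedged sketch of the Nyström idea behind such numerical ARL approximations: discretize a Fredholm equation of the second kind with the midpoint rule and solve the resulting linear system by successive substitution. The kernel below is a simple separable one with a manufactured exact solution, chosen for illustration only; it is not the SARX(P,r)L CUSUM kernel from the paper:

```python
# Nystrom solve of a linear Fredholm equation of the second kind with the
# midpoint rule: u(x) = f(x) + int_0^1 K(x,t) u(t) dt, here K(x,t) = x*t and
# f(x) = 2x/3 so that the exact solution is u(x) = x.
n = 50
h = 1.0 / n
t = [(j + 0.5) * h for j in range(n)]   # midpoint nodes
f = [2.0 * tj / 3.0 for tj in t]

u = f[:]   # successive substitution on the Nystrom system (a contraction here)
for _ in range(200):
    integral = sum(h * t[j] * u[j] for j in range(n))
    u = [f[i] + t[i] * integral for i in range(n)]

err = max(abs(u[i] - t[i]) for i in range(n))
print(err < 1e-3)  # True: midpoint-rule solution matches u(x) = x to O(h^2)
```

The iteration converges because the kernel's contraction factor (about 1/3 here) is below one; the ARL equations in the paper are handled with the same discretize-then-solve structure but chart-specific kernels.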
- Solving Ordinary Differential Equations (ODEs) Using Least Square Method
Based on Wang Ball Curves
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Abdul Hadi Bhatti and Sharmila Binti Karim Numerical methods are regularly developed to obtain better approximate solutions of ordinary differential equations (ODEs). The best approximate solution of an ODE is obtained by reducing the error between the approximate and exact solutions. To improve the error accuracy, representations by Wang Ball curves are proposed through the investigation of their control points using the Least Square Method (LSM). The control points of the Wang Ball curves are calculated by minimizing the residual function using LSM, where the residual error is measured by the sum of squares of the residual function over the Wang Ball curve's control points. The approximate solution of the ODE is obtained by determining the control points of the Wang Ball curves. Two numerical examples, an initial value problem (IVP) and a boundary value problem (BVP), are presented to demonstrate the proposed method in terms of error. The results show that the error accuracy is improved compared to an existing study based on Bézier curves. A convergence analysis is also successfully conducted for the proposed method on a two-point boundary value problem.
PubDate: Jan 2022
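A minimal least-squares-residual sketch in the spirit of the method, with a plain monomial trial function standing in for the Wang Ball control-point parametrization (an assumption for illustration): fit u(x) = 1 + a x + b x^2 to u' = u, u(0) = 1 by minimizing the squared residual at collocation points. The residual r(x) = u'(x) - u(x) = a(1 - x) + b(2x - x^2) - 1 is linear in (a, b), so the 2x2 normal equations solve the problem directly:

```python
import math

m = 101
xs = [i / (m - 1) for i in range(m)]      # collocation points on [0, 1]
p1 = [1.0 - x for x in xs]                # coefficient of a in the residual
p2 = [2.0 * x - x * x for x in xs]        # coefficient of b in the residual

# Normal equations of min ||a*p1 + b*p2 - 1||^2, solved by Cramer's rule.
a11 = sum(v * v for v in p1)
a12 = sum(v * w for v, w in zip(p1, p2))
a22 = sum(v * v for v in p2)
b1, b2 = sum(p1), sum(p2)
det = a11 * a22 - a12 * a12
a = (b1 * a22 - b2 * a12) / det
b = (a11 * b2 - a12 * b1) / det

u1 = 1.0 + a + b                          # approximate u(1); exact value is e
print(abs(u1 - math.e) < 0.05)            # True: close despite only 2 parameters
```

The trial function satisfies the initial condition by construction, so only the ODE residual is minimized; replacing the monomial basis with a Wang Ball basis changes the unknowns to control points but not the least-squares structure.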
- Prediction Variance Properties of Third-Order Response Surface Designs in
the Hypersphere
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Abimibola Victoria Oladugba and Brenda Mbouamba Yankam Variance dispersion graphs (VDGs) and fraction of design space (FDS) graphs are two graphical methods that effectively describe and evaluate the points of best and worst prediction capability of a design using scaled prediction variance properties. These graphs are often utilized as an alternative to single-value criteria such as D- and E-optimality when those fail to describe the true nature of designs. In this paper, the VDGs and FDS graphs of third-order orthogonal uniform composite designs (OUCD4) and orthogonal array composite designs (OACD4), using scaled prediction variance properties in the spherical region for 2 to 7 factors, are studied throughout the design region and over a fraction of design space. Single-value criteria such as D-, A-, and G-optimality are also studied. The results show that the OUCD4 is more optimal than the OACD4 in terms of D-, A-, and G-optimality. The OUCD4 also possesses a more stable and uniform scaled prediction variance throughout the design region and over a fraction of design space than the OACD4, although the stability of both designs deteriorates slightly towards the extremes.
PubDate: Jan 2022
- Study of the New Finite Mixture of Weibull Extension Model:
Identifiability, Properties and Estimation
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Noura S. Mohamed Moshira A. Ismail and Sanaa A. Ismail Finite mixture models have been used in many fields of statistical analysis, such as pattern recognition, clustering, and survival analysis, and have been extensively applied in scientific areas such as marketing, economics, medicine, genetics, and the social sciences. Introducing mixtures of new generalized lifetime distributions that exhibit important hazard shapes is a major field of research aimed at fitting and analyzing a wider variety of data sets. The main objective of this article is to present a full mathematical study of the properties of the new finite mixture of the three-parameter Weibull extension model, considered as a generalization of the standard Weibull distribution. The proposed mixture model exhibits a bathtub-shaped hazard rate, among other shapes important in reliability applications. We analytically prove the identifiability of the new mixture and investigate its mathematical properties and hazard rate function. Maximum likelihood estimation of the model parameters is considered. The Kolmogorov-Smirnov test statistic is used to fit two famous data sets from mechanical engineering to the proposed model, the Aarset data and the Meeker and Escobar data. Results show that the two-component version of the proposed mixture is a superior fit compared to various one-component and two-component lifetime distributions. The new mixture is a significant statistical tool for studying lifetime data sets in numerous fields.
PubDate: Jan 2022
- A Simulation of an Elastic Filament Using Kirchhoff Model
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Saimir Tola Alfred Daci and Gentian Zavalani This paper presents numerical simulations of, and comparisons between, different approaches to elastic thin rods. Elastic rods are ideal for modeling the stretching, bending, and twisting deformations of long, thin elastic materials. The static solution of Kirchhoff's equations [2] is produced using the ODE45 solver, with the Kirchhoff and reference-system equations integrated simultaneously. It is compared with formulations based on Euler's elastica theory [1], which determines the deformed centerline of the rod by solving a boundary-value problem, and with the Discrete Elastic Rod (DER) method using the Bishop frame [5,6], which is based on discrete differential geometry: it starts with a discrete energy formulation and obtains the forces and equations of motion by taking derivatives of the energies. Instead of discretizing smooth equations, DER solves discrete equations and obeys geometric exactness. Using DER, we measure torsion as the difference of angles between the material frame and the Bishop frame of the rod, so that no additional degree of freedom is needed to represent torsional behavior. We found excellent agreement between our Kirchhoff-based solution and the numerical results obtained by the other methods. Our numerical results include a simulation of the rod under the action of a terminal moment and illustrations of gravity effects.
PubDate: Jan 2022
- Stratification Methods for an Auxiliary Variable Model-Based Allocation
under a Superpopulation Model
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 Bhuwaneshwar Kumar Gupt Mankupar Swer Md. Irphan Ahamed B. K. Singh and Kh. Herachandra Singh In this paper, the problem of optimum stratification of heteroscedastic populations in stratified sampling is considered for a known allocation under Simple Random Sampling With and Without Replacement (SRSWR & SRSWOR) designs. The known allocation used in the problem is one of the model-based allocations proposed by Gupt [1,2] under a superpopulation model considered by Hanurav [3], Rao [4], and Gupt and Rao [5], which was modified by the author (Gupt [1,2]) to a more general form. The problem of finding optimum boundary points of stratification (OBPS) considered here is based on an auxiliary variable which is highly correlated with the study variable. Equations giving the OBPS have been derived by minimizing the variance of the estimator of the population mean. Since the equations giving OBPS are implicit and difficult to solve, some methods of finding approximately optimum boundary points of stratification (AOBPS) have also been obtained as approximate solutions of the equations giving OBPS. In deriving the equations giving OBPS and the methods of finding AOBPS, basic statistical definitions, tools of calculus, analytic functions and tools of algebra are used. The efficiencies of the proposed methods of stratification are examined by testing them on a few generated populations and a live population. All the proposed methods of stratification are found to be efficient and suitable for practical applications. Although the proposed methods are obtained under a heteroscedastic superpopulation model with level of heteroscedasticity one, the methods have shown robustness in empirical investigation at varied levels of heteroscedasticity.
The stratification methods proposed here are new, as they are derived for an allocation, under the superpopulation model, which has not previously been used in the construction of strata in stratified sampling. The proposed methods may interest researchers amidst the vigorously progressing theoretical research in the area of stratified sampling. Moreover, by virtue of the high efficiencies exhibited by the methods, the work may provide a practically feasible solution in the planning of socio-economic surveys.
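As a point of reference for stratification on an auxiliary variable, the classical Dalenius-Hodges cum-sqrt(f) rule approximates optimum boundaries by cutting the cumulative square root of the frequency into equal parts. This is a hedged stdlib sketch of that textbook rule on made-up skewed data, not the paper's model-based allocation or its AOBPS methods:

```python
import math

def cum_sqrt_f_boundaries(values, n_strata, n_bins=50):
    """Dalenius-Hodges cum-sqrt(f) rule: bin the auxiliary variable, accumulate
    sqrt(frequency), and cut the cumulative total into equal parts."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    freq = [0] * n_bins
    for v in values:
        freq[min(int((v - lo) / width), n_bins - 1)] += 1
    cum, total = [], 0.0
    for f in freq:
        total += math.sqrt(f)
        cum.append(total)
    bounds, bin_i = [], 0
    for h in range(1, n_strata):
        cut = total * h / n_strata
        while cum[bin_i] < cut:
            bin_i += 1
        bounds.append(lo + (bin_i + 1) * width)  # right edge of the cut bin
    return bounds

values = [i ** 2 for i in range(1, 101)]   # skewed auxiliary variable (illustrative)
bounds = cum_sqrt_f_boundaries(values, n_strata=3)
```

The rule is crude (it ignores the allocation entirely) but gives a quick baseline against which model-based boundaries can be compared.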
PubDate: Jan 2022
- Accuracy and Efficiency of Symmetrized Implicit Midpoint Rule for Solving
the Water Tank System Problems
Abstract: Publication date: Jan 2022
Source:Mathematics and Statistics Volume 10 Number 1 M. F. Zairul Fuaad N. Razali H. Hishamuddin and A. Jedi The accuracy and efficiency of numerical solutions to water tank system problems can be determined by comparing the Symmetrized Implicit Midpoint Rule (IMR) with the plain IMR. The mathematical model uses energy conservation to generate a nonlinear ordinary differential equation, which is examined through static and dynamic analyses. Static analysis provides optimal working points, while dynamic analysis gives an overview of the system behaviour. The procedure is tested on two water tank designs, namely cylindrical and rectangular tanks, with two different parameter sets. Results show that the two-step Symmetrized IMR applied to the proposed mathematical model is precise and efficient and can be used for the design of appropriate controls. The cylindrical water tank model empties fastest. Across the various water tank models, the approach shows an increase in accuracy and efficiency over the range of parameters used for practical model applications. The numerical results show that the two-step Symmetrized IMR provides better stability, accuracy and efficiency for fixed step sizes compared with other numerical methods.
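The Implicit Midpoint Rule at the heart of the comparison can be sketched in a few lines. The tank model below is the textbook Torricelli draining law with illustrative parameters, not the paper's energy-conservation model:

```python
import math

def implicit_midpoint(f, y0, t0, t1, n):
    """Implicit midpoint rule y_{k+1} = y_k + h*f(t_k + h/2, (y_k + y_{k+1})/2),
    with the implicit midpoint state solved per step by fixed-point iteration."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        ym = y                      # initial guess for the midpoint state
        for _ in range(50):
            ym_new = y + 0.5 * h * f(t + 0.5 * h, ym)
            if abs(ym_new - ym) < 1e-12:
                break
            ym = ym_new
        y = y + h * f(t + 0.5 * h, ym)
        t += h
    return y

# Hypothetical cylindrical tank drained by gravity (Torricelli's law):
# dh/dt = -(a/A) * sqrt(2*g*h); outlet area a and tank area A are made up.
g, a, A = 9.81, 0.01, 1.0
f = lambda t, h_level: -(a / A) * math.sqrt(2 * g * max(h_level, 0.0))
h_final = implicit_midpoint(f, 2.0, 0.0, 60.0, 600)   # level after 60 s
```

The symmetrized variant studied in the paper composes such steps to cancel low-order error terms; the plain rule above is only the building block.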
PubDate: Jan 2022
- Unbounded Toeplitz Operators with Rational Symbols
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Domenico P.L. Castrigiano Unbounded (and bounded) Toeplitz operators (TO) with rational symbols are analysed in detail, showing that they are densely defined, closed, and have finite-dimensional kernels and deficiency spaces. The latter spaces, as well as the domains, ranges, spectral and Fredholm points, are determined. In particular, in the symmetric case, i.e., for a real rational symbol, the deficiency spaces and indices are explicitly available. The concluding section gives a brief overview of the research on unbounded TO in order to locate the present contribution. Regarding properties of unbounded TO in general, it furnishes some new results recalling the close relationship to Wiener-Hopf operators and, in the case of semiboundedness, to singular operators of Hilbert transformation type. Specific symbols considered in the literature admit further analysis. Some conclusions are drawn for semibounded integrable and real square-integrable symbols. There is an approach to semibounded TO which starts from closable semibounded forms related to a Toeplitz matrix. The Friedrichs extension of the TO associated with such a form is studied. Finally, analytic TO and Toeplitz-like operators, which in general differ from the TO treated here, are briefly examined.
PubDate: Sep 2021
- Unique Common Tripled Fixed Point for Three Mappings in Generalized Metric Spaces
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 K. Kumara Swamy Swatmaram Bipan Hazarika and P. Sumati Kumari It has been a century since the Banach fixed point theorem was established, and the result is in some ways the progenitor of the field. It therefore seems worthwhile to revisit fixed point theorems, which are numerous and prevalent in mathematics: they appear in advanced mathematics, economics, micro-structures, geometry, dynamics, computational mathematics, and differential equations. The space considered here broadens and extrapolates the concept of a metric space. Its characteristic, in essence, is to capture the topological features of three points rather than two via the perimeter of a triangle, whereas a metric indicates the distance between two points. The class of such spaces is significantly larger than the class of metric spaces. Hence we utilise this generalized space to obtain a common tripled fixed point for three mappings using rational-type contractions. Recently, Khomadram et al. developed coupled fixed point theorems in these spaces via rational-type contractions. The main aim of our paper is to broaden and extrapolate Khomadram's results into tripled fixed point theorems. Examples are offered to support our findings.
PubDate: Sep 2021
- Numerical Solution of Ostrovsky Equation over Variable Topography Passes
through Critical Point Using Pseudospectral Method
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Nik Nur Amiza Nik Ismail Azwani Alias and Fatimah Noor Harun Internal solitary waves have been documented in several parts of the world. This paper looks at the effects of variable topography and rotation on the evolution of internal waves of depression. Here, the wave is considered to be propagating in a two-layer fluid system, with the background topography assumed to be rapidly and slowly varying. Therefore, the appropriate mathematical model to describe this situation is the variable-coefficient Ostrovsky equation. In particular, the study is interested in the transition of the internal solitary wave of depression when there is a polarity change under the influence of background rotation. The numerical results using the pseudospectral method show that, over time, the internal solitary wave of elevation transforms into an internal solitary wave of depression as it propagates down a decreasing slope and changes its polarity. However, if background rotation is considered, the internal solitary waves decompose into a wave packet whose envelope amplitude decreases slowly due to the decreasing bottom surface. The numerical solutions show that the combined effect of variable topography and rotation when passing through the critical point affects the features and speed of the travelling solitary waves.
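The core of a pseudospectral method is differentiation in Fourier space: transform the periodic solution, multiply mode k by ik, and transform back. A minimal stdlib sketch on a 16-point periodic grid (a naive O(N^2) DFT for clarity, not the FFT a production Ostrovsky solver would use):

```python
import cmath
import math

def dft(a):
    N = len(a)
    return [sum(a[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(A):
    N = len(A)
    return [sum(A[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def spectral_derivative(u):
    """Differentiate a periodic sample on [0, 2*pi) by multiplying each
    Fourier mode by i*k; the Nyquist mode is zeroed as is customary."""
    N = len(u)
    U = dft(u)
    ks = list(range(N // 2)) + [0] + list(range(-N // 2 + 1, 0))
    dU = [1j * k * Uk for k, Uk in zip(ks, U)]
    return [z.real for z in idft(dU)]

N = 16
x = [2 * math.pi * n / N for n in range(N)]
du = spectral_derivative([math.sin(xi) for xi in x])   # should reproduce cos(x)
```

For band-limited data such as sin(x) the spectral derivative is exact to rounding error, which is why pseudospectral schemes resolve solitary waves with so few grid points.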
PubDate: Sep 2021
- A Note on Some Integrals by Malmsten and Bierens de Haan
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Robert Reynolds and Allan Stauffer Carl Johan Malmsten (1846) and David Bierens de Haan (1847) published work containing some interesting integrals. While no formal derivations of the integrals in Bierens de Haan's book Nouvelles tables d'intégrales définies are available in the current literature, deriving and evaluating such formulae is useful in all aspects of science and engineering wherever such formulae are used. Formulae in the book of Bierens de Haan are used in connection with certain potential problems, for instance where one must determine the vector potential of two parallel, infinitely long, tubular rectangular conductors carrying currents in opposite directions. In the current work we supply formal derivations for some of these integrals, along with deriving some special cases as new integrals, in order to expand upon the book of Bierens de Haan and aid potential research where these formulae are applicable. Updating a book of integrals is always a useful exercise, as it keeps the volume accurate and more useful for potential readers and researchers. Formal derivations are also useful as they help in verifying the correctness of integrals in such volumes. The definite integral derived in this work is given by (1) in terms of the Lerch function, where the parameters a, k, m, and p are general complex numbers subject to their restrictions. This formal derivation is then used to derive the correct version of a definite integral transform along with new formulae. Some of the results in this work are new.
PubDate: Sep 2021
- Structural Properties of the Essential Ideal Graph of Z_n
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 P Jamsheena and A V Chithra Let A be a commutative ring with unity. The essential ideal graph of A is the graph whose vertex set consists of all nonzero proper ideals of A, with two distinct vertices adjacent whenever their sum is an essential ideal. An essential ideal of A is an ideal having nonzero intersection with every other nonzero ideal of A. The maximal ideals of A, and their intersection, the Jacobson radical of A, also play a role. The comaximal ideal graph of A is the simple graph whose vertices are the proper ideals of A not contained in the Jacobson radical, with two vertices joined by an edge whenever the corresponding ideals are comaximal. In this paper, we study the structural properties of the essential ideal graph using ring-theoretic concepts. We obtain a characterization for the essential ideal graph to be isomorphic to the comaximal ideal graph. Moreover, we derive a structure theorem and determine graph parameters such as the clique number, chromatic number and independence number. We also characterize the perfectness of the graph and determine when it is split and claw-free, Eulerian and Hamiltonian. In addition, we show that the finite essential ideal graph of any non-local ring is isomorphic to that of Z_n for some n.
PubDate: Sep 2021
- An Approximate Solution to Predator-Prey Models Using the Differential
Transform Method and Multi-step Differential Transform Method, in
Comparison with Results of the Classical Runge-Kutta Method
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Adeniji A A Noufe H. A Mkolesia A C and Shatalov M Y Predator-prey models are the building blocks of ecosystems, as biomasses are grown out of their resource masses. Different relationships exist between these models as interacting species compete, undergo metamorphosis, and migrate strategically in search of resources to sustain their struggle to exist. To investigate these assumptions numerically, ordinary differential equations are formulated, and a variety of methods are used to obtain and compare approximate solutions against exact solutions, although most numerical methods often require heavy, time-consuming computations. In this paper, the traditional differential transform method (DTM) is implemented to obtain a numerical approximate solution to predator-prey models. The solution obtained with DTM converges only locally within a small domain. The multi-step differential transform method (MSDTM) is a technique that improves on DTM by enlarging the interval of convergence of the series expansion. One-predator-one-prey and two-predator-one-prey models are considered, with a quadratic term signifying other food sources for feeding. The numerical and graphical results show that the DTM solution diverges away from the initial point. The advantage of the new algorithm is that the obtained series solution converges over wide time regions; the solutions obtained from DTM and MSDTM are compared with solutions obtained using the classical Runge-Kutta method of order four. The results demonstrate that MSDTM computes quickly, is reliable, and gives good results compared to the solutions obtained using the classical Runge-Kutta method.
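For readers unfamiliar with the baseline, the classical fourth-order Runge-Kutta step used for comparison looks like this. The one-predator-one-prey system below includes a quadratic (logistic) prey term, but its coefficients are made up for the sketch and are not the paper's:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y), y a list."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Illustrative predator-prey system: prey x with logistic self-limitation
# (the quadratic term), predator p feeding on x.
def lv(t, y):
    x, p = y
    return [1.0 * x - 0.1 * x * x - 0.5 * x * p,
            -0.8 * p + 0.4 * x * p]

y, h = [2.0, 1.0], 0.01
for i in range(1000):           # integrate to t = 10
    y = rk4_step(lv, i * h, y, h)
```

A DTM series solution would be compared against trajectories like `y` over the same time window; the multi-step variant restarts the series on each subinterval to keep it within its radius of convergence.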
PubDate: Sep 2021
- The Fractional Residual Power Series Method for Solving a System of Linear
Fractional Fredholm Integro-differential Equations
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Prapart Pue-on In this manuscript, the fractional residual power series (FRPS) method is employed to solve a system of linear fractional Fredholm integro-differential equations. The significant role of this system in various fields has attracted the attention of researchers for a decade. The fractional derivative here is defined in the Caputo sense. The proposed method relies on the generalized Taylor series expansion as well as the fact that the fractional derivative of a constant is zero. The process starts by constructing a residual function from an assumed finite-order approximate power series solution that satisfies the initial conditions. Then, utilizing suitable conditions, the residual functions are converted into a linear system for the power series coefficients. Solving the linear system reveals the coefficients of the fractional power series solution. Finally, by substituting these coefficients into the assumed form of the solution, the approximate fractional power series solutions are derived. This technique has the advantage of being applicable directly to the problem while requiring little computation time. It is not only easy to implement, but also provides productive results after a few iterations. Some problems with known solutions emphasize the procedure's simplicity and reliability. Moreover, the obtained exact solutions demonstrate the efficiency and accuracy of the presented method.
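The two facts the method rests on can be stated explicitly. For the Caputo derivative of order 0 < α ≤ 1 (standard identities, not taken from the paper):

```latex
D^{\alpha} c = 0 \quad (c\ \text{constant}),
\qquad
D^{\alpha} t^{k} \;=\; \frac{\Gamma(k+1)}{\Gamma(k-\alpha+1)}\, t^{k-\alpha}
\quad (k \ge 1).
```

Substituting a fractional power series ansatz \(u(t) = \sum_{n \ge 0} c_n t^{n\alpha}\) into the residual function therefore turns each equation into algebraic conditions on the coefficients \(c_n\), which is exactly the linear system the abstract describes.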
PubDate: Sep 2021
- Estimating the Entropy and Residual Entropy of a Lomax Distribution under
Generalized Type-II Hybrid Censoring
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Mahmoud Riad Mahmoud Moshera A. M. Ahmad and Badiaa S. Kh. Mohamed The Lomax distribution (or Pareto II) was first introduced by K. S. Lomax in 1954. It can be readily applied to a wide range of situations, including the analysis of business failure lifetime data, economics and actuarial science, income and wealth inequality, sizes of cities, engineering, and lifetime and reliability modeling. In his pioneering paper, Shannon (1948) defined the notion of entropy as a mathematical measure of information, sometimes called Shannon entropy in his honor. He laid the groundwork for a new branch of mathematics in which the notion of entropy plays a fundamental role over different areas of application such as statistics, information theory, financial analysis, and data compression. Ebrahimi and Pellerey [14] introduced the residual entropy function, because the entropy should not be applied to a system that has already survived for some units of time; the residual entropy is therefore used to measure ageing and to characterize, classify and order lifetime distributions. In this paper, the estimation of the entropy and residual entropy of a two-parameter Lomax distribution under a generalized Type-II hybrid censoring scheme is introduced. The maximum likelihood estimate of the entropy is provided and the Bayes estimate of the residual entropy is obtained. Simulation studies assessing the performance of the estimates for different sample sizes are described, and conclusions are discussed.
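For the uncensored case, the Shannon entropy of the Lomax(α, λ) distribution has the standard closed form H = ln(λ/α) + (α + 1)/α, which a quick Monte Carlo check via inverse-CDF sampling confirms. This is a sketch of the entropy itself, not the paper's censored-data MLE or Bayes estimators:

```python
import math
import random

def lomax_entropy(alpha, lam):
    """Closed-form Shannon entropy of Lomax(alpha, lam):
    H = ln(lam/alpha) + (alpha + 1)/alpha."""
    return math.log(lam / alpha) + (alpha + 1.0) / alpha

def lomax_entropy_mc(alpha, lam, n=100_000, seed=7):
    """Monte Carlo check of H = E[-ln f(X)], sampling X by the inverse CDF
    X = lam * ((1 - U)^(-1/alpha) - 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.random()
        x = lam * ((1.0 - u) ** (-1.0 / alpha) - 1.0)
        log_f = math.log(alpha / lam) - (alpha + 1.0) * math.log(1.0 + x / lam)
        total -= log_f
    return total / n

exact = lomax_entropy(2.0, 3.0)
approx = lomax_entropy_mc(2.0, 3.0)
```

Under censoring the expectation can no longer be computed this directly, which is what motivates the likelihood-based and Bayesian estimators the paper develops.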
PubDate: Sep 2021
- A Moment Based Approximation for Expected Number of Renewals for
Non-Negligible Repair
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Dilcu Barnes and Saeed Maghsoodloo This paper focuses on the renewal function, which is simply the mathematical expectation of the number of renewals in a stochastic process. Renewal functions are important and have various applications in many fields. However, obtaining an analytical expression for the renewal function may be very complicated and even impossible, so researchers have focused on developing approximation methods. The purpose of this paper is to explore the renewal functions for non-negligible repair for the most common underlying reliability distributions, using the first four raw moments of the failure and repair distributions. The article gives the approximate number of cycles, number of failures and the resulting availability for particular distributions, assuming that the Mean Time to Repair is not negligible and that the Time to Restore (repair) has a probability density function denoted r(t). Under these assumptions, the expected number of failures, the expected number of cycles and the resulting availability are obtained by taking the Laplace transforms of the corresponding renewal functions. An approximation method for obtaining these quantities from the raw moments of the failure and repair distributions is provided. Results show that the method produces very accurate results, especially for large values of time t.
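When no analytical expression is tractable, the renewal function M(t) = E[N(t)] can always be checked by simulation. A stdlib sketch with illustrative cycle distributions (exponential failures plus a fixed repair time), not the paper's moment-based approximation:

```python
import random

def expected_renewals(t_max, draw_cycle, n_sims=20_000, seed=1):
    """Monte Carlo estimate of the renewal function M(t) = E[N(t)], where each
    cycle length is one draw of failure time plus repair time."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_sims):
        t, n = 0.0, 0
        while True:
            t += draw_cycle(rng)
            if t > t_max:
                break
            n += 1
        total += n
    return total / n_sims

# Sanity check: exponential failures (rate 1) with negligible repair form a
# Poisson process, for which M(t) = t exactly.
m10 = expected_renewals(10.0, lambda rng: rng.expovariate(1.0))
# With non-negligible repair (a fixed 0.5 per cycle, chosen for illustration),
# fewer renewals fit into the same horizon.
m10_repair = expected_renewals(10.0, lambda rng: rng.expovariate(1.0) + 0.5)
```

Simulations like this are the usual yardstick for judging how accurate a moment-based approximation of M(t) is at a given t.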
PubDate: Sep 2021
- Prediction Variance Capabilities of Third-Order Response Surface Designs
for Cuboidal Regions
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Brenda Mbouamba Yankam and Abimibola Victoria Oladugba Experimenters often evaluate the steadiness and consistency of designs over the region of interest by means of prediction variance capabilities, using the variance dispersion graph and the fraction of design space graph. These two graphs effectively describe the prediction variance capabilities of a design in the region of interest. However, the prediction variance capabilities of third-order response surface designs have not been studied in the literature. In this paper, the prediction variance capabilities of two third-order response surface designs, termed augmented orthogonal uniform composite designs and orthogonal array composite designs, are examined in the cuboidal region for 3≤k≤7 with center points. The prediction variance capabilities are evaluated using the variance dispersion graph and the fraction of design space graph. Also, D-, E-, G- and T-optimality criteria are used to evaluate these designs in terms of single-value criteria. The results show that the augmented orthogonal uniform composite designs have better prediction variance capabilities in the cuboidal region in terms of the variance dispersion graphs for factors 3 and 4, and better prediction variance capabilities for 3≤k≤7 compared to the orthogonal array composite designs in terms of the fraction of design space graph. The augmented orthogonal uniform composite designs are also shown to be superior to the orthogonal array composite designs in terms of the D-, E-, G- and T-optimality criteria.
This shows that the prediction variance capabilities of third-order response surface designs can be clearly visualized by means of the variance dispersion graph and the fraction of design space graph, which should be considered over single-value criteria, even though single-value criteria convey some degree of design performance. The augmented orthogonal uniform composite design should often be preferred in experimentation over the orthogonal array composite design, since it performs better.
PubDate: Sep 2021
- Triangle Conics, Cubics and Possible Applications in Cryptography
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Veronika Starodub Ruslan V. Skuratovskii and Sergii S. Podpriatov We research triangle cubics and conics in classical geometry with elements of projective geometry. In recent years, N.J. Wildberger has actively dealt with this topic from an algebraic perspective. Triangle conics were also studied in detail by H.M. Cundy and C.F. Parry. The main task of the article is the development of a method for creating curves that pass through triangle centers. During the research, it was noticed that some different triangle centers in distinct triangles coincide. The simplest example: the incenter of a base triangle is the orthocenter of its excentral triangle. This is the key to the algorithm: we can match points belonging to one curve (the base curve) with corresponding points of another triangle, and thereby obtain a new, fascinating geometrical object. A number of new triangle conics and cubics are derived and their properties in Euclidean space are considered. In addition, corollaries of the obtained theorems in projective geometry are discussed, which shows that all of the discovered results can be transferred to the projective plane. It is well known that many modern cryptosystems are naturally formulated in terms of elliptic curves; we investigate the class of curves applicable in cryptography.
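The coincidence quoted as the simplest example is easy to verify numerically: the orthocenter of the excentral triangle lands exactly on the incenter of the base triangle. A stdlib sketch using an arbitrary example triangle (the coordinates are made up):

```python
def sub(P, Q):
    return (P[0] - Q[0], P[1] - Q[1])

def dot(P, Q):
    return P[0] * Q[0] + P[1] * Q[1]

def dist(P, Q):
    return dot(sub(P, Q), sub(P, Q)) ** 0.5

def incenter(A, B, C):
    a, b, c = dist(B, C), dist(C, A), dist(A, B)   # opposite side lengths
    s = a + b + c
    return ((a * A[0] + b * B[0] + c * C[0]) / s,
            (a * A[1] + b * B[1] + c * C[1]) / s)

def excenters(A, B, C):
    a, b, c = dist(B, C), dist(C, A), dist(A, B)
    def exc(wa, wb, wc):                            # barycentric-style weights
        s = wa + wb + wc
        return ((wa * A[0] + wb * B[0] + wc * C[0]) / s,
                (wa * A[1] + wb * B[1] + wc * C[1]) / s)
    return exc(-a, b, c), exc(a, -b, c), exc(a, b, -c)

def orthocenter(P1, P2, P3):
    # Intersect two altitudes: (H - P1).(P3 - P2) = 0 and (H - P2).(P3 - P1) = 0.
    d1, d2 = sub(P3, P2), sub(P3, P1)
    b1, b2 = dot(P1, d1), dot(P2, d2)
    det = d1[0] * d2[1] - d1[1] * d2[0]
    return ((b1 * d2[1] - b2 * d1[1]) / det,
            (d1[0] * b2 - d2[0] * b1) / det)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)       # arbitrary base triangle
I = incenter(A, B, C)
H = orthocenter(*excenters(A, B, C))               # orthocenter of excentral triangle
```

Matching centers across triangles this way is precisely the point-matching step the article's curve-construction algorithm builds on.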
PubDate: Sep 2021
- Category of Submodules of a Uniserial Module
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Fitriani Indah Emilia Wijayanti Budi Surodjo Sri Wahyuni and Ahmad Faisol Let R be a ring, K, M be R-modules, L a uniserial R-module, and X a submodule of L. The triple (K,L,M) is said to be X-sub-exact at L if the sequence K→X→M is exact. Let σ(K,L,M) be the set of all submodules Y of L such that (K,L,M) is Y-sub-exact. The sub-exact sequence is a generalization of an exact sequence. We collect all triples (K,L,M) such that (K,L,M) is an X-sub-exact sequence, where X is a maximal element of σ(K,L,M). In a uniserial module, all submodules are comparable under inclusion, so we can find the maximal element of σ(K,L,M). In this paper, we prove that the set σ(K,L,M) forms a category, which we denote by CL. Furthermore, we prove that CY is a full subcategory of CL for every submodule Y of L. Next, we show that if L is a uniserial module, then CL is a pre-additive category, and every morphism in CL has a kernel under some conditions. Since a factor module of L is not a submodule of L, morphisms in CL need not have cokernels, so CL is not an abelian category. Moreover, we investigate monic and epic X-sub-exact sequences: the triple (K,L,M) is monic X-sub-exact if and only if the induced triple of Z-modules is a monic sub-exact sequence for every R-module N, and (K,L,M) is epic X-sub-exact if and only if the induced triple of Z-modules is a monic sub-exact sequence for every R-module N.
PubDate: Sep 2021
- Mellin Transform of an Exponential Fourier Transform Expressed in Terms of
the Lerch Function
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Robert Reynolds and Allan Stauffer The aim of this paper is to provide a table of definite integrals which includes both known and new integrals. This work is important because we provide formal derivations for integrals in [7] not currently present in the literature, along with new integrals. By deriving new integrals we hope to expand the current list of integral formulae, which could assist in research where applicable. The authors apply their contour integral method [9] to an integral in [8] to achieve a new integral formula in terms of the Lerch function. In the present work, the authors provide a formal derivation of an interesting exponential Fourier transform and express it in terms of the Lerch function. The exponential Fourier transform has many real-world applications, notably in electrical engineering, in the study of electrical transients [10], and in civil engineering, in the stress analysis of boundary loads on soil [11]. The definite integral derived in this work is given by (1), subject to the stated restrictions on the variables. This formal derivation is then used to derive the correct version of a definite integral transform along with new formulae. Some of the results in this work are new.
PubDate: Sep 2021
- The Relative Rank of Transformation Semigroups with Restricted Range on a
Finite Chain
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Kittisak Tinpun Let S be a semigroup and let G be a subset of S. G is a generating set of S if every element of S is a product of elements of G. The rank of S is the minimal size, or minimal cardinality, of a generating set of S. In the last twenty years, the rank of semigroups has been studied worldwide by many researchers, which led to a related notion: the relative rank of S modulo a subset U is the minimal size of a set whose union with U generates S; such a set is called a generating set of S modulo U. The idea of the relative rank generalizes the concept of the rank of a semigroup and was first introduced by Howie, Ruskuc and Higgins in 1998. Let X be a finite chain and let Y be a subchain of X. We consider the semigroup of full transformations on X under the composition of functions, and within it the set of all transformations from X to Y, the so-called transformation semigroup with restricted range Y, first introduced and studied by Symons in 1975. Many results for the full transformation semigroup have been extended to this setting. In this paper, we focus on the relative rank of the transformation semigroup with restricted range and of its semigroup of all orientation-preserving transformations. In Section 2.1, we determine the relative rank modulo the semigroup of all order-preserving or order-reversing transformations. In Section 2.2, we describe the corresponding results for the semigroup of orientation-preserving transformations. In Section 2.3, we determine the relative rank modulo the semigroup of all orientation-preserving or orientation-reversing transformations. Moreover, we obtain that the two latter relative ranks are equal.
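The notion of rank is concrete enough to compute for tiny cases by closing a candidate generating set under composition. The sketch below verifies the standard background fact (not a result of this paper) that the full transformation semigroup T_3, with 3^3 = 27 maps, is generated by a transposition, a 3-cycle and a single rank-2 idempotent, so rank(T_3) = 3:

```python
def generated_semigroup(gens):
    """Close a set of self-maps of {0,...,n-1} (stored as tuples) under
    composition in both orders, returning the generated semigroup."""
    elems = set(gens)
    frontier = set(gens)
    while frontier:
        new = set()
        for f in frontier:
            for g in elems:
                for h in (tuple(f[g[i]] for i in range(len(f))),   # f o g
                          tuple(g[f[i]] for i in range(len(f)))):  # g o f
                    if h not in elems:
                        new.add(h)
        elems |= new
        frontier = new
    return elems

# Transposition (0 1), 3-cycle (0 1 2), and the rank-2 idempotent 1 -> 0.
gens = [(1, 0, 2), (1, 2, 0), (0, 0, 2)]
T3 = generated_semigroup(gens)
```

Relative rank computations follow the same pattern: one closes U together with trial sets and records the smallest trial set whose closure reaches all of S.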
PubDate: Sep 2021
- On New Generalized Fuzzy Directed Divergence Measure and Its Application
in Decision Making Problem
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Bhagwan Dass Vijay Prakash Tomar Krishan Kumar and Vikas Ranga The concept of fuzzy sets presented by Zadeh has achieved enormous success in numerous fields. Uncertainty in the real world is ubiquitous, and entropy is an important tool for dealing with uncertainty and fuzziness. In this article, we propose a new measure of directed divergence on fuzzy sets. Extensions of fuzzy sets, and versions integrated with other theories, have been applied by several researchers. To prove the validity of the measure, some axioms are verified. Using the proposed measure, we develop a decision-making criterion and give a suitable method; properties of the proposed measure are discussed. In the real world, multicriteria decision making is a very practical method with a wide range of uses: it finds the best choice among the given alternatives under the given criteria. In recent years, many researchers have applied fuzzy directed divergence extensively to multicriteria decision making, and some have studied applications of parameterized hesitant fuzzy soft set theory in decision making. In this article, we investigate the multicriteria decision-making problem under a fuzzy environment. An application of the introduced measure to a decision-making problem is given, together with a numerical example. The analysis is illustrated by an example of the newly defined approach concerning a student's admission preference for a postgraduate science course.
PubDate: Sep 2021
- Choice of Strata Boundaries for Allocation Proportional to Stratum Cluster
Totals in Stratified Cluster Sampling
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Bhuwaneshwar Kumar Gupt F. Lalthlamuanpuii and Md. Irphan Ahamed In survey planning, situations sometimes arise that call for cluster sampling, because of the spatial relationship between elements of the population, the physical features of the land over which the elements are dispersed, or the unavailability of a reliable list of elements. At the same time, techniques and strategies are required to ensure the precision of the sample in representing the parent population. Although several theoretical and practical works exist on cluster sampling, stratified sampling and stratified cluster sampling, the problem of stratified cluster sampling for a study variable based on an auxiliary variable, which is required in practice, has so far never been approached. For the first time, this paper deals with the problem of optimum stratification of a population of clusters in cluster sampling, with clusters of equal size, for a study characteristic y based on a highly correlated concomitant variable x, for allocation proportional to stratum cluster totals under a superpopulation model. Equations giving optimum strata boundaries (OSB) for dividing the population, in which the sampling unit is a cluster, are obtained by minimising the sampling variance of the estimator of the population mean. As the equations are implicit in nature, a few methods of finding approximately optimum strata boundaries (AOSB) are deduced from the equations giving OSB. In deriving the equations, mathematical tools of calculus and algebra are used in addition to the statistical method of conditional expectation of variance. All the proposed methods of stratification are examined empirically by illustration on live data, the populations of villages in the Lunglei and Serchhip districts of Mizoram State, India, and are found to stratify the population efficiently. 
The proposed methods may provide a practically feasible solution in planning socio-economic surveys.
PubDate: Sep 2021
- Analytical Solutions of ARL for SAR(p)_{L} Model on a Modified
EWMA Chart
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Piyatida Phanthuna and Yupaporn Areepong A modified exponentially weighted moving average (EWMA) scheme, an extension of the EWMA chart, is an instrument for the immediate detection of small shifts. The objective of this research is to derive an explicit formula for the average run length (ARL) on a modified EWMA control chart for observations from a seasonal autoregressive model of order p (SAR(p)L) with exponential residuals. A numerical integral equation method is used to approximate the ARL as a check on the accuracy of the explicit formulas. The results of the two methods show that their ARL solutions are close, with the percentage of absolute relative change (ARC) below 0.002. Furthermore, shift detection by the modified EWMA chart under the SAR(p)L model is tested as the chart parameters are changed; the ARL and relative mean index (RMI) results improve as the parameters are increased. In addition, the modified EWMA control chart is compared in performance with the EWMA scheme, and the results favour the modified EWMA chart for small shifts. Finally, the explicit formula can be applied to various real-world data; for example, two data sets on information and communication technology are used to validate the techniques and demonstrate their capability.
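ARL values like these are routinely cross-checked by Monte Carlo. The sketch below simulates a plain (unmodified) EWMA chart on independent exponential observations with illustrative chart parameters; it is not the paper's SAR(p)L model, modified chart, or integral-equation method:

```python
import random

def ewma_arl(mean_shift, lam=0.2, L=3.0, n_runs=200, cap=2000, seed=3):
    """Monte Carlo average run length of a plain EWMA chart monitoring
    exponential observations with in-control mean 1.0. Control limits use the
    asymptotic EWMA standard deviation; all parameters are illustrative."""
    mu0, sigma0 = 1.0, 1.0
    half_width = L * sigma0 * (lam / (2.0 - lam)) ** 0.5
    ucl, lcl = mu0 + half_width, mu0 - half_width
    rng = random.Random(seed)
    total = 0
    for _ in range(n_runs):
        z, rl = mu0, cap                        # run length, capped for safety
        for t in range(1, cap + 1):
            x = rng.expovariate(1.0 / (mu0 + mean_shift))
            z = lam * x + (1.0 - lam) * z       # EWMA recursion
            if z > ucl or z < lcl:
                rl = t
                break
        total += rl
    return total / n_runs

arl0 = ewma_arl(0.0)    # in-control: signals should be rare
arl1 = ewma_arl(0.5)    # upward mean shift: signals should come quickly
```

An explicit ARL formula replaces exactly this kind of simulation, trading sampling noise for a closed-form (or integral-equation) evaluation.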
PubDate: Sep 2021
- Hesitant Fuzzy Network Approach for Alternatives Selection with Incomplete
Weight Information
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Shahira Shafie and Abdul Malek Yaakob Networked rule bases in fuzzy systems, known as fuzzy networks, carry multiple stages of development in decision-making processes that involve uncertainty in the data used across various fields. A fuzzy network promotes transparency in multicriteria decision making (MCDM), whereby the criteria are divided into cost and benefit subsystems to ensure good assessment performance. By considering hesitant fuzzy sets (HFS), which permit a set of possible values to represent the membership degree of an element, we develop a novel approach that applies a fuzzy network and the maximizing deviation method to solve MCDM problems. The fuzzy network addresses transparency in the formulation, and the maximizing deviation method can recover weight information in MCDM problems whether it is partially known or fully unknown. The proposed method is applied to a case study of stock evaluation involving opinions from several decision makers, and its performance is compared using Spearman's rho correlation.
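The core idea of the maximizing deviation method can be sketched in a few lines. This simplified version works on a crisp decision matrix rather than hesitant fuzzy elements (the hesitant-fuzzy distance measures of the paper are replaced here by plain absolute differences, and the 3x3 matrix is invented for illustration):

```python
import numpy as np

def max_deviation_weights(D):
    """Criteria weights by the maximizing-deviation principle for a
    crisp decision matrix D (rows = alternatives, cols = criteria):
    a criterion that separates the alternatives more (larger total
    pairwise deviation) receives a larger weight."""
    m, n = D.shape
    dev = np.zeros(n)
    for j in range(n):
        col = D[:, j]
        dev[j] = np.abs(col[:, None] - col[None, :]).sum()
    return dev / dev.sum()

D = np.array([[0.6, 0.5, 0.9],
              [0.7, 0.5, 0.2],
              [0.5, 0.5, 0.7]])
w = max_deviation_weights(D)
print(w)  # criterion 2 has zero deviation across alternatives -> zero weight
```

A criterion on which all alternatives score identically carries no discriminating information, so the method assigns it zero weight, which is the behaviour the paper exploits when weights are fully unknown.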
PubDate: Sep 2021
- Starter Set Generation Based on Factorial Numbers for Half Wing of
Butterfly Representation
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Sharmila Karim and Haslinda Ibrahim Permutation remains an interesting subject to explore and is widely applied in many areas. This paper presents the use of factorial numbers for generating starter sets, which are in turn used for listing permutations. Previously, starter sets were generated from their permutations by exchange-based and cycling-based methods; in the new algorithm, this process is replaced by factorial numbers. The underlying theory is that there is a fixed number of distinct starter sets, and that every permutation has a decimal rank, starting from zero, in lexicographic order. A decimal rank is converted to a factorial number, and the factorial number is then mapped to its corresponding starter set; after that, the half wing of butterfly representation is constructed. The advantage of using factorial numbers is that recursive function calls for starter set generation are avoided; in other words, any starter set can be generated directly from its decimal rank. This new algorithm is still at an early stage and under development for generating the half wing of butterfly representation. The case n=5 is demonstrated for the new algorithm in lexicographic order. In conclusion, this development is applicable only to generating starter sets in lexicographic order, since factorial numbers apply to lexicographic-order permutation.
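The decimal-to-factorial-number-to-permutation mapping the abstract describes is the standard Lehmer-code (factoradic) decoding, which can be sketched as follows (the paper's starter-set construction on top of this mapping is specific to the half wing of butterfly representation and is not reproduced here):

```python
def decimal_to_factoradic(d, n):
    """Convert a decimal rank d (0 <= d < n!) to its n-digit factorial
    number representation, most significant digit first."""
    digits = []
    for base in range(1, n + 1):
        digits.append(d % base)
        d //= base
    return digits[::-1]

def factoradic_to_permutation(digits, items):
    """Decode a factoradic rank into the corresponding permutation in
    lexicographic order: each digit selects (and removes) an item."""
    items = list(items)
    return [items.pop(dgt) for dgt in digits]

# for n = 5: rank 0 is the identity, rank 5! - 1 = 119 is the reversal
print(factoradic_to_permutation(decimal_to_factoradic(0, 5), [1, 2, 3, 4, 5]))
print(factoradic_to_permutation(decimal_to_factoradic(119, 5), [1, 2, 3, 4, 5]))
# prints [1, 2, 3, 4, 5] then [5, 4, 3, 2, 1]
```

Because any rank decodes directly, no recursion over previously generated permutations is needed, which is exactly the advantage the abstract claims.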
PubDate: Sep 2021
- Robust Multivariate Location Estimation in the Existence of Casewise and
Cellwise Outliers
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Yik-Siong Pang Nor Aishah Ahad and Sharipah Soaad Syed Yahaya Multivariate outliers can exist in two forms: casewise and cellwise. Collected data typically contain unknown proportions and types of outliers, which can jeopardize location estimation and affect research findings. When the two coexist in the same data set, the traditional distance-based trimmed mean and coordinate-wise trimmed mean are unable to estimate location well: the distance-based trimmed mean suffers from leftover cellwise outliers after trimming, whereas the coordinate-wise trimmed mean is affected by remaining casewise outliers. Thus, this paper proposes a new robust multivariate location estimator, the α-distance-based trimmed median, to deal with both types of outliers simultaneously. Simulated data were used to illustrate the feasibility of the new procedure by comparison with the classical mean, the classical median, and the α-distance-based trimmed mean. The classical mean performed best on clean data but poorly on contaminated data, while the classical median outperformed the distance-based trimmed mean in the presence of both casewise and cellwise outliers yet was still affected by their combined effect. Based on the simulation results, the proposed estimator yields better location estimates on contaminated data than the other three estimators considered, and can thus mitigate the issues caused by outliers.
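A sketch of the baseline α-distance-based trimmed mean that the paper compares against helps fix ideas (the proposed trimmed-median variant differs; the trimming level α, the casewise contamination pattern, and the sample sizes below are illustrative assumptions):

```python
import numpy as np

def distance_trimmed_mean(X, alpha=0.1):
    """alpha-distance-based trimmed mean: drop the alpha fraction of
    rows with the largest squared Mahalanobis distances from the
    sample mean, then average the remaining rows."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)  # squared distances
    keep = d2 <= np.quantile(d2, 1 - alpha)
    return X[keep].mean(axis=0)

rng = np.random.default_rng(2)
X = rng.normal(0, 1, size=(200, 3))   # clean N(0, I) data
X[:10] += 15.0                        # plant ten casewise outliers
print(np.abs(distance_trimmed_mean(X, 0.1)).max())  # close to the true 0
print(np.abs(X.mean(axis=0)).max())                 # inflated by outliers
```

Whole-row (casewise) trimming like this is exactly what fails under cellwise contamination, where only scattered cells of otherwise good rows are corrupted, which motivates the paper's combined estimator.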
PubDate: Sep 2021
- On the Representation of the Weight Enumerator of
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Mans L Mananohas Charles E Mongi Dolfie Pandara Chriestie E J C Montolalu and Muhammad P M Mo'o The weight enumerator of a code is a homogeneous polynomial that provides a great deal of information about the code; hence, for the development of a code, research on the weight enumerator is very important. In this study, we focus on the code . Let be the weight enumerator of the code . Fujii and Oura showed that is generated by and . Indeed, we show that is an element of the polynomial ring . We know that the weight enumerator of every self-dual doubly-even (Type II) code is generated by and . Recall that is a Type II code; thus, is an element of the polynomial ring and . One of the motivations of this research is to investigate the connection between these two polynomial rings in representing . Let and be the coefficients of the polynomials that represent as an element of and , respectively. We find that is an element of the polynomial . In addition, we show that there are no weight enumerators of Type II codes generated by and that can be written uniquely as isobaric polynomials in five homogeneous polynomial elements of degrees 8, 24, 24, 24, 24.
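For readers new to weight enumerators, the object itself is easy to compute for a small code. The sketch below brute-forces the weight distribution of the extended [8,4,4] Hamming code, the smallest Type II (self-dual, doubly-even) code; the specific code the paper studies is not identified here because its name was stripped in extraction:

```python
import numpy as np
from itertools import product
from collections import Counter

def weight_enumerator(G):
    """Weight distribution {w: A_w} of the binary linear code with
    generator matrix G, i.e. the coefficients of the weight enumerator
    W(x, y) = sum_w A_w x^(n-w) y^w, by enumerating all 2^k codewords."""
    k, n = G.shape
    counts = Counter()
    for msg in product([0, 1], repeat=k):
        cw = np.mod(np.array(msg) @ G, 2)
        counts[int(cw.sum())] += 1
    return dict(sorted(counts.items()))

# generator matrix of the extended [8,4,4] Hamming code
G = np.array([[1, 0, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 1, 1, 0]])
print(weight_enumerator(G))  # {0: 1, 4: 14, 8: 1}
```

All nonzero weights are multiples of 4, which is the doubly-even property; Gleason's theorem then says the enumerator lies in a two-generator polynomial ring, the situation the abstract generalizes.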
PubDate: Sep 2021
- The Theory of Pure Algebraic (Co)Homology
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Alaa Hassan Noreldeen Wageeda M. M. and O. H. Fathy Polynomial algebra is essential in commutative algebra, since it can serve as a fundamental model for differentiation. For module differentials and Loday's differential commutative graded algebra, the simplicial homology of polynomial algebra was defined. In this article, the definitions of the simplicial, cyclic, and dihedral homology of pure algebra are presented; the simplicial and cyclic homology are defined for the algebra of polynomials and of Laurent polynomials. The long exact sequences of both cyclic homology and simplicial homology are presented, and the Morita invariance property of cyclic homology is established. A relationship is introduced representing the connection between dihedral and cyclic (co)homology in polynomial algebra; besides, a relationship is examined defining the connection between dihedral and cyclic (co)homology of the Laurent polynomial algebra. Furthermore, the Morita invariance property of dihedral homology in polynomial algebra is investigated, as is the Morita property of dihedral homology in Laurent polynomials. For dihedral homology, the long exact sequence of the short sequence is obtained; the long exact sequence of the short sequence is likewise obtained from the reflexive (co)homology of polynomial algebra. The authors note that studying polynomial algebra can assist calculations related to COVID-19 vaccines.
PubDate: Sep 2021
- A Monte Carlo Study for Dealing with Multicollinearity and Autocorrelation
Problems in Linear Regression Using Two Stage Ridge Regression Method
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Hussein Eledum and Hytham Hussein Awadallah In the multiple linear regression model, the problem of multicollinearity may occur together with autocorrelation; several estimation methods have therefore been developed to deal with this case, Two-Stage Ridge Regression (TR) being one of them. This article's main objective is to run a Monte Carlo simulation to investigate the impact of both problems, multicollinearity and autocorrelation, on the performance of the TR estimator in the multiple linear regression model. The simulation is carried out under different levels of multicollinearity and different autocorrelation coefficients, taking into account different sample sizes. Some new properties of the TR method, including its expectation, variance, and mean square error, are derived. The study also develops techniques to estimate the biasing parameter for the TR by modifying some popular techniques used in ridge regression (RR). The mean square error is used as the basis for evaluation and comparison. The empirical findings from the simulations reveal that the TR estimator performs better than the RR, and that the values of the biasing parameter under the TR are always less than those under the RR. This paper contributes to the existing literature by developing new estimation methods that overcome the presence of mixed problems in a linear regression model and by studying their properties.
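The ordinary ridge estimator that TR builds on is compact enough to sketch. The example below shows ridge shrinkage stabilizing a severely multicollinear design (TR's first stage, a transformation removing AR(1) autocorrelation, is omitted; the simulated design and the biasing parameter k = 1 are illustrative assumptions):

```python
import numpy as np

def ridge(X, y, k):
    """Ridge regression estimator b(k) = (X'X + kI)^(-1) X'y;
    k = 0 reduces to ordinary least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(3)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)   # severe multicollinearity
X = np.column_stack([x1, x2])
y = X @ np.array([1.0, 1.0]) + rng.normal(scale=0.5, size=n)
b_ols = ridge(X, y, 0.0)     # unstable along the near-collinear direction
b_ridge = ridge(X, y, 1.0)   # shrunk toward a stable solution
print(b_ols, b_ridge)
```

The well-determined sum b1 + b2 stays near its true value 2 under ridge, while the ill-determined difference b1 - b2 is shrunk heavily; the paper's Monte Carlo study quantifies this trade-off when autocorrelation is present as well.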
PubDate: Sep 2021
- Methods of Stratification for a Generalised Auxiliary Variable Optimum
Allocation
Abstract: Publication date: Sep 2021
Source:Mathematics and Statistics Volume 9 Number 5 Md. Irphan Ahamed Bhuwaneshwar Kumar Gupt and Manoshi Phukon In stratified sampling, ever since Dalenius [1] undertook the problem of optimum stratification, research in the area has been progressing in various perspectives and dimensions to date. Amid these multifaceted developments, noteworthy directions include different sample selection methods and allocations, study-variable-based stratification, auxiliary-variable-based stratification, superpopulation models, the extension to two study variables for a single auxiliary variable, and the extension to two stratification variables for a single study variable. However, with regard to the optimum stratification of heteroscedastic populations, as live populations generally are, it was Gupt and Ahamed [2,3] who considered the problem for a few allocations under a heteroscedastic regression superpopulation (HRS) model. As a sequel to that work, this paper considers the problem of optimum stratification for an objective variable y based on a concomitant variable x under the HRS model, for an allocation proposed by Gupt [4,5] and termed the Generalised Auxiliary Variable Optimum Allocation (GAVOA). Methods of stratification, in the form of equations and approximate solutions to them, which stratify populations at optimum strata boundaries (OSB) and approximately optimum strata boundaries (AOSB) respectively, are obtained. Mathematical analysis is used in minimizing the sampling variance of the estimator of the population mean and in deriving all the proposed methods of stratification. The proposed equations divide heteroscedastic populations, whether symmetrical, moderately skewed, or highly skewed, at OSB, but they are implicit in nature and not easy to solve.
Therefore, a few methods of finding AOSB are deduced from the equations through analytically justified steps of approximation. The methods may provide practically feasible solutions in survey planning for stratifying heteroscedastic populations of any level of heteroscedasticity, and the work may contribute, to some extent, to theory in the research area. The methods are empirically examined on a few generated heteroscedastic data sets of varied shapes with assumed levels of heteroscedasticity and are found to perform with high efficiency. The proposed methods of stratification are restricted to the particular allocation used.
PubDate: Sep 2021
- Singular Non-circular Complex Elliptically Symmetric Distributions: New
Results and Applications
Abstract: Publication date: Nov 2021
Source:Mathematics and Statistics Volume 9 Number 6 Habti Abeida Absolutely continuous, non-singular complex elliptically symmetric distributions (referred to as nonsingular CES distributions) have been extensively studied in various applications under the assumption of nonsingularity of the scatter matrix, for which the probability density functions (p.d.f.'s) exist. These p.d.f.'s, however, cannot be used to characterize CES distributions with a singular scatter matrix (referred to as singular CES distributions). This paper presents a generalization of the singular real elliptically symmetric (RES) distributions studied by Díaz-García et al. to singular CES distributions. An explicit expression for the p.d.f. of a multivariate non-circular complex random vector with a singular CES distribution is derived. The stochastic representation of the singular non-circular CES (NC-CES) distributions and of quadratic forms in an NC-CES random vector is proved. As special cases, explicit expressions for the p.d.f.'s of multivariate complex random vectors with singular non-circular complex normal (NC-CN) and singular non-circular complex compound-Gaussian (NC-CCG) distributions are also derived, along with some useful properties of singular NC-CES distributions and their conditional distributions. Based on these results, the p.d.f.'s of the non-circular complex t-distribution, K-distribution, and generalized Gaussian distribution under singularity are presented. These general results degenerate to those of singular circular CES (C-CES) distributions when the pseudo-scatter matrix is equal to the zero matrix. Finally, the results are applied to the problem of estimating the parameters of a complex-valued non-circular multivariate linear model in the presence of either singular NC-CES or C-CES distributed noise terms, by proposing widely linear estimators.
PubDate: Nov 2021
- Properties of Sakaguchi Kind Functions Associated with Bessel Function
Abstract: Publication date: Nov 2021
Source:Mathematics and Statistics Volume 9 Number 6 H. Priya and B. Srutha Keerthi The aim of the paper is to obtain the first and second Hankel determinants. We make use of a few lemmas based on Caratheodory's class of analytic functions. We establish a new Sakaguchi class of univalent functions, estimate the sharp bounds for the initial coefficients using the Bessel function expansion, and discuss the coefficients for the second Hankel determinant. The results are obtained for the Sakaguchi kind, and they progress through the successive stages of Hankel determinants. Various types of technologies, such as wire, optical, or other electromagnetic systems, are used to transmit data from one device to another. Filters play an important role in this process, as they can remove distorted signals. By using different parameter values for functions belonging to the Sakaguchi kind, low-pass and high-pass filters can be designed via the coefficient estimates.
PubDate: Nov 2021
- An Asymptotic Test for A Single Outlier in Linear Regression Models
Abstract: Publication date: Nov 2021
Source:Mathematics and Statistics Volume 9 Number 6 Ugah Tobias Ejiofor Mba Emmanuel Ikechukwu Eze Micheal Chinonso Arum Kingsley Chinedu Mba Ifeoma Christy Urama Chinasa and Comfort Njideka Ekene-Okafor It is not uncommon to find an outlier in the response variable in linear regression. Such a deviant value needs to be detected and scrutinized to find out why it is not in agreement with its fitted value. Srikantan [1] developed a test statistic for detecting the presence of an outlier in the response variable in a multiple linear regression model. Approximate critical values of this test statistic are available, obtained from the first-order Bonferroni upper bound. The exact critical values are not available, and as a result, tests carried out on the basis of the approximate critical values may not be very accurate. In this paper, we obtain more accurate and precise critical values of this test statistic for large sample sizes (herein called asymptotic critical values) to improve tests that use them. The procedure uses the exact probability density function of the test statistic to obtain its asymptotic critical values, which we then compare with the approximate critical values. An application to simulation results for linear regression models is used to examine the power of the test statistic. The asymptotic critical values obtained were found to be more accurate and precise, and the test performed better under them: its power was higher when the asymptotic critical values were used.
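Srikantan's statistic itself is not reproduced in the abstract; to illustrate the single-outlier setting, here is the generic maximum internally studentized residual, a close relative whose critical values are likewise usually approximated via a Bonferroni bound (the simulated model, the planted-outlier position, and the +10 shift are illustrative assumptions):

```python
import numpy as np

def max_studentized_residual(X, y):
    """Largest absolute internally studentized residual and its index:
    the usual statistic for testing for a single outlier in the
    response of a linear regression (generic, not Srikantan's exact
    normalization)."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
    e = y - H @ y                           # residuals
    s2 = e @ e / (n - p)
    r = e / np.sqrt(s2 * (1 - np.diag(H)))
    return float(np.abs(r).max()), int(np.abs(r).argmax())

rng = np.random.default_rng(4)
n = 50
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 2 + 3 * x + rng.normal(scale=1.0, size=n)
y[7] += 10.0                                # plant one response outlier
stat, idx = max_studentized_residual(X, y)
print(stat, idx)                            # idx should be 7
```

The test rejects "no outlier" when the statistic exceeds a critical value; the accuracy of that critical value is precisely what the paper improves.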
PubDate: Nov 2021
- Power Comparisons of Normality Tests Based on L-moments and Classical
Tests
Abstract: Publication date: Nov 2021
Source:Mathematics and Statistics Volume 9 Number 6 Ivana Mala Vaclav Sladek and Diana Bilkova Normality tests are used in statistical analysis to determine whether a normal distribution is acceptable as a model for the data analysed. A wide range of available tests employs different properties of the normal distribution to compare empirical and theoretical distributions. In the present paper, we perform a Monte Carlo simulation to analyse test power. We compare commonly known and applied tests (standard and robust versions of the Jarque-Bera test, the Lilliefors test, the chi-square goodness-of-fit test, the Shapiro-Francia test, the Cramer-von Mises goodness-of-fit test, the Shapiro-Wilk test, the D'Agostino test, and the Anderson-Darling test) with a test based on robust L-moments: a Jarque-Bera-type test in which the moment characteristics of skewness and kurtosis are replaced with their robust versions, L-skewness and L-kurtosis. Distributions with heavy tails (lognormal, Weibull, loglogistic, and Student) are used to draw random samples, to show the performance of the tests when applied to data with outliers. Small-sample properties (from 10 observations) are analysed, up to large samples of 200 observations. Our results concerning the properties of the classical tests are in line with the conclusions of other recent articles. We concentrate on the properties of the test based on L-moments. This normality test is comparable to well-performing and reliable tests; however, it is outperformed by the most powerful Shapiro-Wilk and Shapiro-Francia tests. It works well for the Student (symmetric) distribution, comparably with the most frequently used Jarque-Bera tests. As expected, the test is robust to the presence of outliers, in contrast to sensitive tests based on product moments or correlations. The test turns out to be universally reliable.
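The L-skewness and L-kurtosis that replace classical skewness and kurtosis in the L-moment test can be computed directly from order statistics. A minimal sketch (the test's normalization constants and critical values are not reproduced, only the sample L-moment ratios):

```python
from math import comb
import numpy as np

def l_moment_ratios(x):
    """Sample L-skewness t3 = l3/l2 and L-kurtosis t4 = l4/l2 from the
    first four sample L-moments, via probability-weighted moments
    b_r = mean over i of [C(i-1, r)/C(n-1, r)] * x_(i)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b = []
    for r in range(4):
        w = np.array([comb(i - 1, r) for i in range(1, n + 1)]) / comb(n - 1, r)
        b.append((w * x).mean())
    b0, b1, b2, b3 = b
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l3 / l2, l4 / l2

rng = np.random.default_rng(5)
t3, t4 = l_moment_ratios(rng.normal(size=5000))
print(t3, t4)  # for normal data: t3 near 0, t4 near 0.1226
```

Because L-moments are linear in the order statistics, a single extreme outlier perturbs t3 and t4 far less than it perturbs the classical third and fourth product moments, which is the robustness the paper exploits.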
PubDate: Nov 2021
- Some Results on Number Theory and Differential Equations
Abstract: Publication date: Nov 2021
Source:Mathematics and Statistics Volume 9 Number 6 B. M. Cerna Maguiña Dik D. Lujerio Garcia and Héctor F. Maguiña In this work, using the basic tools of functional analysis, we obtain a technique that yields important results related to quadratic equations in two variables representing a natural number, and to differential equations. We show the possible ways to write an even number ending in six as the sum of two odd numbers and establish conditions for those odd numbers to be prime; also, making use of a suitable linear functional, we obtain representations of natural numbers of the form in order to obtain positive integer solutions of the quadratic equation where is a natural number ending in one. Finally, we show with three examples the use of the proposed technique to solve some ordinary and partial linear differential equations. We believe that the third corollary of our first result of this investigation can help in proving the strong Goldbach conjecture.
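The decompositions of an even number ending in six into two odd primes, which the paper characterizes analytically, can be enumerated by brute force for small inputs (this sketch is only a numerical illustration of the objects studied, not the paper's functional-analytic technique):

```python
def is_prime(n):
    """Trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def prime_pairs_ending_six(m):
    """All ways to write an even number m ending in 6 as p + q with
    p <= q both odd primes."""
    assert m % 10 == 6 and m > 6
    return [(p, m - p) for p in range(3, m // 2 + 1, 2)
            if is_prime(p) and is_prime(m - p)]

print(prime_pairs_ending_six(36))  # [(5, 31), (7, 29), (13, 23), (17, 19)]
```

Every even m > 2 that has been checked admits at least one such pair, which is the content of the strong Goldbach conjecture the abstract mentions.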
PubDate: Nov 2021
- Combined Adomian Decomposition Method with Integral Transform
Abstract: Publication date: Nov 2021
Source:Mathematics and Statistics Volume 9 Number 6 Betty Subartini Ira Sumiati Sukono Riaman and Ibrahim Mohammed Sulaiman At present, three numerical methods have mainly been used in the literature to solve fractional-order chaotic systems: frequency-domain approximation, the predictor-corrector approach, and the Adomian decomposition method (ADM). ADM is capable of dealing with linear and nonlinear problems in the time domain and is among the efficient approaches for solving linear and nonlinear equations. Numerical solution is one of the critical problems in theoretical research on, and applications of, fractional-order systems. In this work, the solution is decomposed into an infinite series that converges to the exact solution, and an integral transformation of the differential equation is implemented. The aim of this study is to combine the Adomian decomposition approach with different integral transformations, including the Laplace, Sumudu, Natural, Elzaki, Mohand, and Kashuri-Fundo transforms. The study's key finding is that employing the combined method to solve fractional ordinary differential equations yields good results: the combined numerical methods considered produce excellent numerical performance. The proposed combined method therefore has practical implications for solving fractional-order differential equations in the sciences and social sciences, such as finding analytical and numerical solutions for secure communication systems, biological systems, financial risk models, physical phenomena, neuron models, and engineering applications.
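The decomposition idea is easiest to see on an integer-order linear problem. For y' = y, y(0) = 1, ADM sets u0 = y(0) and u_{k+1}(t) = the integral of u_k from 0 to t, and the partial sums converge to e^t (a minimal sketch: the paper treats fractional systems and pairs ADM with Laplace/Sumudu/Natural/Elzaki/Mohand/Kashuri-Fundo transforms, none of which appear here):

```python
from fractions import Fraction

def integrate_poly(c):
    """Integrate the polynomial with coefficients c[k] of t^k from 0 to t."""
    return [Fraction(0)] + [ck / Fraction(k + 1) for k, ck in enumerate(c)]

def adm_exponential(n_terms=10):
    """ADM components for y' = y, y(0) = 1: u0 = 1, u_{k+1} = integral
    of u_k.  Returns the coefficients of the partial-sum polynomial."""
    u = [Fraction(1)]
    total = [Fraction(1)]
    for _ in range(n_terms - 1):
        u = integrate_poly(u)
        total += [Fraction(0)] * (len(u) - len(total))
        total = [a + b for a, b in zip(total, u)]
    return total

def eval_poly(c, x):
    return sum(float(ck) * x ** k for k, ck in enumerate(c))

coeffs = adm_exponential(10)    # coefficients 1/k! for k = 0..9
print(eval_poly(coeffs, 1.0))   # approximately e = 2.71828...
```

Each ADM component here reproduces one Taylor term, so ten components already match e to about seven digits; the integral transforms the paper combines with ADM accelerate and systematize this step for fractional operators.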
PubDate: Nov 2021
- Comparison of Distance and Linkage in Integrated Cluster Analysis with Multiple Discriminant Analysis
Abstract: Publication date: Nov 2021
Source:Mathematics and Statistics Volume 9 Number 6 Ni Made Ayu Astari Badung Adji Achmad Rinaldo Fernandes and Waego Hadi Nugroho This study aims to compare distance measures (Euclidean, Manhattan, and Mahalanobis distance) and linkage methods (average, single, and complete linkage) in cluster analysis integrated with Multiple Discriminant Analysis, applied to Home Ownership Credit bank consumers in Indonesia. The data used are secondary data from the 5C assessment of bank consumers in Indonesia, containing notes on the 5C assessment as well as three credit collectability classes (current, special mention, and substandard) for Home Ownership Credit customers. The population was all Home Ownership Credit customers of all banks in Indonesia; the sampling technique was purposive random sampling, with a sample of 300 customers drawn from customer data at three bank branches in Indonesia. This is a quantitative study using cluster analysis integrated with multiple discriminant analysis. The best method for classifying Home Ownership Credit bank customers based on the 5C assessment variables is integrated cluster analysis with Multiple Discriminant Analysis based on the Mahalanobis distance with 2 clusters, namely a high cluster and a low cluster. The novelty lies in using integrated cluster analysis with Multiple Discriminant Analysis to compare distance and linkage measures, with Home Ownership Credit bank customers in Indonesia as the objects of study.
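How the choice of distance and linkage enters agglomerative clustering can be shown from scratch (a toy sketch on invented 2-D data, not the paper's 5C banking variables, Mahalanobis distance, or the discriminant-analysis stage):

```python
import numpy as np

def agglomerative(X, n_clusters, linkage='single', metric='euclidean'):
    """Minimal agglomerative clustering: repeatedly merge the two
    clusters with the smallest inter-cluster distance, where the
    linkage rule (single = min, complete = max) and the point metric
    (euclidean or manhattan) are the two choices being compared."""
    if metric == 'euclidean':
        d = lambda a, b: np.linalg.norm(a - b)
    else:  # manhattan
        d = lambda a, b: np.abs(a - b).sum()
    agg = min if linkage == 'single' else max
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                dist = agg(d(X[a], X[b])
                           for a in clusters[i] for b in clusters[j])
                if best is None or dist < best[0]:
                    best = (dist, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10.5]])
print(agglomerative(X, 2, 'single'))
print(agglomerative(X, 2, 'complete', 'manhattan'))
```

On well-separated data every distance/linkage combination recovers the same two groups; the paper's comparison matters precisely when clusters overlap and the covariance structure favours the Mahalanobis distance.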
PubDate: Nov 2021
- Modeling of Path Nonparametric Truncated Spline Linear, Quadratic, and
Cubic in Model on Time Paying Bank Credit
Abstract: Publication date: Nov 2021
Source:Mathematics and Statistics Volume 9 Number 6 Erlinda Citra Lucki Efendi Adji Achmad Rinaldo Fernandes and Maria Bernadetha Theresia Mitakda This study aims to estimate nonparametric truncated spline path functions of linear, quadratic, and cubic orders at one and two knot points, and to determine the best model for the variables that affect the timely payment of House Ownership Credit (HOC). In addition, this study tests hypotheses to determine the variables that have a significant effect on punctuality in paying HOC. The data used are primary data. The variables are service quality and lifestyle as exogenous variables, willingness to pay as a mediating variable, and on-time payment as an endogenous variable. The analysis is a nonparametric path analysis using R software. The results show that the best model is the nonparametric truncated spline linear path model with 2 knot points, which has the smallest GCV value, 25.9059, and an R2 value of 96.96%. In addition, the hypothesis tests on the function estimates show a significant effect for the relationships between service quality and willingness to pay, service quality and on-time payment, lifestyle and willingness to pay, and lifestyle and on-time payment. The novelty of this research is the modeling and hypothesis testing of a development of nonparametric regression, namely nonparametric truncated spline paths of linear, quadratic, and cubic orders.
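The truncated power basis behind linear/quadratic/cubic truncated splines is simple to construct and fit by least squares (the paper works in R and in a path-analysis setting; this Python sketch uses an invented piecewise-linear response with knots at 3 and 7 purely for illustration):

```python
import numpy as np

def truncated_spline_design(x, knots, degree=1):
    """Design matrix of a truncated power spline: columns
    1, x, ..., x^d, then (x - k)_+^d for each knot k; the (.)_+
    terms let the polynomial change shape at the knots."""
    cols = [x ** p for p in range(degree + 1)]
    cols += [np.maximum(x - k, 0.0) ** degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 10, 120))
# true function: slope 1, then 2 after x = 3, then -1 after x = 7
y = np.piecewise(x, [x < 3, (x >= 3) & (x < 7), x >= 7],
                 [lambda t: t,
                  lambda t: 3 + 2 * (t - 3),
                  lambda t: 11 - (t - 7)])
y += rng.normal(scale=0.2, size=x.size)

B = truncated_spline_design(x, knots=[3.0, 7.0], degree=1)
beta, *_ = np.linalg.lstsq(B, y, rcond=None)
print(np.round(beta, 2))  # roughly [0, 1, 1, -3]: slope changes of +1 and -3
```

The knot coefficients estimate the slope changes at each knot, which is why adding knots (one vs. two) and raising the order (linear vs. quadratic vs. cubic), as the paper does, trades flexibility against the GCV penalty.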
PubDate: Nov 2021
- An Improved Simple Averaging Approach for Estimating Parameters in Simple
Linear Regression Model
Abstract: Publication date: Nov 2021
Source:Mathematics and Statistics Volume 9 Number 6 Jetsada Singthongchai Noppakun Thongmual and Nirun Nitisuk This research concerns estimating the parameters of the simple linear regression model. Regression models are applied for prediction in many fields. The ordinary least squares (OLS) and maximum likelihood (ML) approaches are employed to estimate the parameters of the simple linear regression model when its assumptions are not violated; this research is interested in the case where the assumptions are violated. The Simple Averaging (SA) approach is an alternative for estimating the parameters when the assumptions cannot be relied upon. We improve the SA approach based on the median, giving the improved Simple Averaging (ISA) approach. The two approaches are compared under the root mean square error (RMSE), which reflects the accuracy of prediction in simple linear regression. Using sample data, the results show that the ISA approach is better than the SA approach, its RMSE being smaller. Our study therefore suggests the ISA approach for estimating the parameters of the simple linear regression model, both because it is more accurate than the SA approach and because it simplifies the estimation. Hence, the ISA approach is an alternative for estimating parameters in the simple linear regression model when the assumptions cannot be relied upon.
PubDate: Nov 2021
- Some Results on Integer Solutions of Quadratic Polynomials in Two
Variables
Abstract: Publication date: Nov 2021
Source:Mathematics and Statistics Volume 9 Number 6 B. M. Cerna Maguiña and Janet Mamani Ramos Although several articles study quadratic equations in two variables, they do so in a general way. We focus on the study of natural numbers ending in one, because the other cases can be studied similarly; we have given the subject a different approach, which is why our bibliographic citations are few. In this work, using basic tools of functional analysis, we obtain some results on the integer solutions of quadratic polynomials in two variables that represent a given natural number. To determine whether a natural number ending in one is prime, we must solve equations (i) , (ii) , (iii) . If these equations have no integer solution, then the number P is prime. The advantage of this technique is that, to determine whether a natural number p is prime, it is not necessary to know the prime numbers less than or equal to the square root of p. The objective of this work was to reduce the number of possibilities assumed by the integer variables in equations (i), (ii), and (iii), respectively. Although this objective was achieved, we believe that the lower bounds for the sums of the solutions of equations (i), (ii), (iii) were not optimal, since in our recent research we have obtained lower bounds that further reduce the domain of the integer variables solving these equations; we will show those results in a future article. The methodology used was deductive and inductive. We would have liked to have a supercomputer to construct prime numbers of many millions of digits, but this is not possible, since we do not have the support of our respective authorities. We believe that the contribution of this work to number theory is the creation of linear functionals for the study of integer solutions of quadratic polynomials in two variables that represent a given natural number.
Large prime numbers can be used to encode any type of information safely, and the scheme shown in this article could be useful for that process.
PubDate: Nov 2021
- Derivation of New Degrees for Best COCUNP Weighted Approximation: II
Abstract: Publication date: Nov 2021
Source:Mathematics and Statistics Volume 9 Number 6 Malik Saad Al-Muhja Habibulla Akhadkulov and Nazihah Ahmad Approximation theory is a branch of analysis and applied mathematics that requires the approximation process to preserve certain -shaped properties defined on a finite interval , such as convexity on all or parts of the interval. The (co)convex and unconstrained polynomial (COCUNP) approximation is one of the key estimates of approximation theory, raised by Kopotun over the last ten years. Numerous studies have applied modern methods of weighted approximation to construct the best degree of approximation. In developing COCUNP, a novel technique, the Lebesgue-Stieltjes integral-i technique, is used to resolve certain disadvantages, such as Riemann-integrable functions not having a degree of best approximation in norm space. To achieve the main goal, a Derivation of New Degrees (DOND) of the best COCUNP approximation is constructed. The theoretical results reveal that, in general, the new degrees of best approximation yield smaller errors than the existing literature for the same estimate. In conclusion, this study successfully develops DOND for the best (co)convex polynomial (COCP) weighted approximation.
PubDate: Nov 2021
- Seidel Laplacian and Seidel Signless Laplacian Spectrum of the
Zero-divisor Graph on the Ring of Integers Modulo
Abstract: Publication date: Nov 2021
Source:Mathematics and Statistics Volume 9 Number 6 Magi P M Sr. Magie Jose and Anjaly Kishore Let be a simple graph of order and let be the Seidel matrix of , defined by if the vertices and are adjacent, if the vertices and are not adjacent, and if . Let be the diagonal matrix where denotes the degree of the vertex of . The Seidel Laplacian matrix of a graph is defined as , and the Seidel signless Laplacian matrix as . The zero-divisor graph of a commutative ring , denoted by , is a simple undirected graph whose vertices are the non-zero zero-divisors, with two distinct vertices adjacent if and only if . In this paper, we find the Seidel polynomial and Seidel Laplacian polynomial of the join of two regular graphs using the Schur complement and the coronal of a square matrix. We also describe the computation of the Seidel Laplacian and Seidel signless Laplacian eigenvalues of the join of more than two regular graphs, using the well-known Fiedler's lemma, and apply these results to describe these eigenvalues for the zero-divisor graph on . Further, we find the Seidel Laplacian and Seidel signless Laplacian spectrum of the zero-divisor graph of for some values of , say , where are distinct primes. We also prove that 0 is a simple Seidel Laplacian eigenvalue of for any .
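The objects involved are concrete enough to compute numerically. The sketch below builds the zero-divisor graph of Z_12 and its Seidel Laplacian spectrum, using the standard definitions from the Seidel-matrix literature, S = J - I - 2A and SL = D_S - S with D_S = (n-1)I - 2D (the abstract's own formulas were stripped during extraction, so these definitions are an assumption):

```python
import numpy as np
from math import gcd

def zero_divisor_graph_adjacency(n):
    """Adjacency matrix of the zero-divisor graph of Z_n: vertices are
    the nonzero zero-divisors of Z_n, with u ~ v iff u*v = 0 (mod n)."""
    verts = [v for v in range(1, n) if gcd(v, n) > 1]
    m = len(verts)
    A = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            if (verts[i] * verts[j]) % n == 0:
                A[i, j] = A[j, i] = 1
    return A

def seidel_laplacian_eigs(A):
    """Eigenvalues of the Seidel Laplacian SL = D_S - S, where
    S = J - I - 2A and D_S = diag(n - 1 - 2 deg(v))."""
    m = A.shape[0]
    S = np.ones((m, m)) - np.eye(m) - 2 * A
    D_S = np.diag((m - 1) - 2 * A.sum(axis=1))
    return np.sort(np.linalg.eigvalsh(D_S - S))

A = zero_divisor_graph_adjacency(12)   # zero-divisors: 2,3,4,6,8,9,10
eigs = seidel_laplacian_eigs(A)
print(np.round(eigs, 6))  # includes eigenvalue 0 (rows of SL sum to zero)
```

Every row of SL sums to zero by construction, so 0 is always an eigenvalue; the paper's contribution is proving it is always simple for these zero-divisor graphs and describing the rest of the spectrum in closed form via joins of regular graphs.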
PubDate: Nov 2021
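As a quick illustration of the definitions in the abstract above, the following sketch (not the paper's computation) builds the zero-divisor graph of Z_12, forms its Seidel Laplacian SL = D_S - S under the standard conventions stated there, and checks that every row of SL sums to zero, so the all-ones vector is an eigenvector for the eigenvalue 0. The choice n = 12 is an arbitrary small example.

```python
# Sketch: zero-divisor graph of Z_12 and its Seidel Laplacian row sums.
from math import gcd

m = 12
# Non-zero zero-divisors of Z_m: x != 0 with gcd(x, m) > 1.
verts = [x for x in range(1, m) if gcd(x, m) > 1]
n = len(verts)

# Adjacency: distinct vertices x, y are adjacent iff x*y = 0 (mod m).
adj = [[(verts[i] * verts[j]) % m == 0 and i != j for j in range(n)]
       for i in range(n)]
deg = [sum(row) for row in adj]

# Seidel matrix: -1 for adjacent pairs, +1 for distinct non-adjacent pairs,
# 0 on the diagonal.
S = [[0 if i == j else (-1 if adj[i][j] else 1) for j in range(n)]
     for i in range(n)]
# Seidel Laplacian SL = D_S - S with D_S = diag(n - 1 - 2*deg(v)).
SL = [[(n - 1 - 2 * deg[i] if i == j else 0) - S[i][j] for j in range(n)]
      for i in range(n)]

row_sums = [sum(row) for row in SL]
print(verts)      # vertices of the zero-divisor graph of Z_12
print(row_sums)   # all zeros => 0 is a Seidel Laplacian eigenvalue
```

Since each row of S sums to (n - 1 - 2d_i), the diagonal of D_S cancels it exactly, which is why 0 always appears in the Seidel Laplacian spectrum.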
- Some New Results on Equivalent Cauchy Sequences and Their Applications to
Meir-Keeler Contraction in Partial Rectangular Metric Space
Abstract: Publication date: Nov 2021
Source:Mathematics and Statistics Volume 9 Number 6 Sidite Duraj Eriola Sila and Elida Hoxha The study of fixed points in metric spaces plays a crucial role in the development of Functional Analysis. It evolves by generalizing the metric space or improving the contractive conditions. Recently, the partial rectangular metric space and its topology have been the center of study for many researchers. They have defined open and closed balls, equivalent Cauchy sequences, Cauchy sequences, and convergent sequences, which are used as tools in many of the achieved results. In this paper, two facts about equivalent Cauchy sequences in a partial rectangular metric space are established by using an ultra-altering distance function. Furthermore, some results on Cauchy sequences in a partial rectangular metric space are highlighted. It is proved that, under some conditions, equivalent Cauchy sequences are Cauchy sequences in a partial rectangular metric space. Some fixed point results for orbitally continuous functions are obtained as applications of our new conditions on Cauchy sequences and equivalent Cauchy sequences in a partial rectangular metric space. Some examples are given to illustrate the obtained results.
PubDate: Nov 2021
- Trigonometric Ratios Using Algebraic Methods
Abstract: Publication date: Nov 2021
Source:Mathematics and Statistics Volume 9 Number 6 Sameen Ahmed Khan The main aim of this article is to start with an expository introduction to the trigonometric ratios and then proceed to the latest results in the field. Historically, the exact ratios were obtained using geometric constructions. The geometric methods have their own limitations, arising from certain theorems. In view of these limitations, we focus on the powerful techniques of the theory of equations in deriving the exact trigonometric ratios using surds. Cubic and higher-order equations arise naturally while deriving the exact trigonometric ratios. These equations are best expressed using the expansions of the cosine and sine of multiple angles in terms of the Chebyshev polynomials of the first and second kind, respectively. So, we briefly present the essential properties of the Chebyshev polynomials. The equations lead to the question of reduced polynomials, which is addressed using Euler's totient function. We therefore describe the techniques from the theory of equations and reduced polynomials. The trigonometric ratios of certain rational angles (when measured in degrees) give rise to rational trigonometric ratios. We discuss these along with the related theorems. This is a frontline area of research connecting trigonometry and number theory. Results from number theory and the theory of equations are presented wherever required.
PubDate: Nov 2021
- The Combinatorial Expressions and Probability of Random Generation of
Binary Palindromic Digit Combinations
Abstract: Publication date: Nov 2021
Source:Mathematics and Statistics Volume 9 Number 6 Vladislav V. Lyubimov The aim of this paper is to obtain three types of expressions for calculating the probability of palindromic digit combinations occurring in a finite, equally likely combination of zeros and ones. In calculating this probability, the classical definition of probability is applied. The main results of the paper are formulated in the form of three theorems. Moreover, the consequences of these theorems and typical examples of calculating the probability of palindromic digit combinations in a binary-code data string are considered. All formulated theorems and their consequences are accompanied by proofs. The numerical results of the paper can be used in the analysis of numerical computer data written as a binary code string in BIN-format files. It should also be noted that the combinatorial expressions described in the article for counting palindromic digit combinations in the binary number system can be used in number theory and in various branches of computer science. Developing these results toward an expression for counting the palindromic digit combinations contained in two-dimensional data arrays is also of immediate theoretical and practical interest. However, such results are not presented in this work; they may be considered in subsequent publications.
PubDate: Nov 2021
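The paper's three expressions are not reproduced in the abstract above, but the classical special case is easy to check: among the 2**n equally likely binary strings of length n, a palindrome is fixed by its first ceil(n/2) digits, so there are 2**ceil(n/2) palindromes and the probability of drawing one is 2**(-floor(n/2)). A brute-force verification:

```python
# Brute-force check of the classical palindrome count for binary strings.
def count_palindromes(n):
    return sum(1 for k in range(2 ** n)
               if (s := format(k, f"0{n}b")) == s[::-1])

for n in range(1, 11):
    expected = 2 ** ((n + 1) // 2)          # 2**ceil(n/2)
    assert count_palindromes(n) == expected
    print(n, expected, expected / 2 ** n)   # n, #palindromes, probability
```

For example, for n = 6 there are 8 palindromes among 64 strings, giving probability 1/8 = 2**(-3).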
- A Modified Perry's Conjugate Gradient Method Based on Powell's Equation
for Solving Large-Scale Unconstrained Optimization
Abstract: Publication date: Nov 2021
Source:Mathematics and Statistics Volume 9 Number 6 Mardeen Sh. Taher and Salah G. Shareef The conjugate gradient method remains popular among researchers focused on solving large-scale unconstrained optimization problems and nonlinear equations, because it avoids the computation and storage of certain matrices, so its memory requirements are very small. In this work, a modified Perry conjugate gradient method that achieves global convergence under standard assumptions is presented and analyzed. The new method builds on Perry's method using the equation introduced by Powell in 1978. The weak Wolfe–Powell conditions are used for the line search, and under this line search and suitable conditions we prove both the descent and sufficient descent conditions. In particular, numerical results show that the new conjugate gradient method is more effective and competitive than other standard conjugate gradient methods, including the Hestenes–Stiefel (H/S) method, Perry's method, and the Dai–Yuan (D/Y) method. The comparison is carried out on a group of standard test problems of various dimensions from the CUTEst test library, and the comparative performance of the methods is evaluated by the total number of iterations and the total number of function evaluations.
PubDate: Nov 2021
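The Perry–Powell details are not given in the abstract above; for orientation only, here is the classical linear conjugate gradient iteration on a two-variable quadratic (standard CG, not the modified Perry method the paper analyzes; the matrix A and vector b are made-up illustrative data):

```python
# Classical linear CG for f(x) = 0.5*x'Ax - b'x, i.e. solving Ax = b.
def cg(A, b, x=None, tol=1e-12, max_iter=50):
    n = len(b)
    x = x or [0.0] * n
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    r = [bi - ai for bi, ai in zip(b, mv(A, x))]   # residual b - Ax
    p = r[:]                                       # first search direction
    for _ in range(max_iter):
        if dot(r, r) < tol:
            break
        Ap = mv(A, p)
        alpha = dot(r, r) / dot(p, Ap)             # exact line search step
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r_new = [ri - alpha * api for ri, api in zip(r, Ap)]
        beta = dot(r_new, r_new) / dot(r, r)       # Fletcher-Reeves update
        p = [rn + beta * pi for rn, pi in zip(r_new, p)]
        r = r_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive definite
b = [1.0, 2.0]
x = cg(A, b)
print(x)  # minimizer of f, i.e. the solution of Ax = b
```

On an n-variable quadratic, this iteration converges in at most n steps; the Perry and Powell modifications aim at comparable behavior on general nonlinear objectives with inexact line searches.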
- An Analytical Study for Caputo Fractional Derivative on Unsteady Casson
Fluid with Thermal Radiation Effect
Abstract: Publication date: Nov 2021
Source:Mathematics and Statistics Volume 9 Number 6 Ridhwan Reyaz Ahmad Qushairi Mohamad Yeaou Jiann Lim Muhammad Saqib Zaiton Mat Isa and Sharidan Shafie Studies on Casson fluid are essential to the development of the manufacturing and engineering fields, where it is widely used. Meanwhile, the fractional derivative has become known as a constructive concept that can be beneficial in the future. In this study, a fractional derivative model of Casson fluid flow is investigated. A fractional Casson fluid model with the effect of thermal radiation is derived together with the momentum and energy equations. The Caputo definition of the fractional derivative is used in the mathematical formulation. Casson fluid with constant wall temperature over an oscillating plate in the presence of thermal radiation is considered. Solutions were obtained using the Laplace transform and are presented in terms of the Wright function. Graphical analysis of the velocity and temperature profiles was conducted with variations in parameter values such as the fractional parameter, Grashof number, Prandtl number, and radiation parameter. Numerical computations were carried out to investigate the behaviour of the skin friction and Nusselt number. It is found that when the fractional parameter is increased, the velocity and temperature profiles also increase. The presence of the fractional parameter in both velocity and temperature profiles captures the transition of both profiles from an unsteady state to a steady state, providing a new perspective on Casson fluid flow. An increment in both profiles is also observed when the thermal radiation parameter is increased. The present results are validated against published results and found to be in agreement with them.
PubDate: Nov 2021
- Relative Coprime Probability and Graph for Some Nonabelian Groups of Small
Order and Their Associated Graph Properties
Abstract: Publication date: Nov 2021
Source:Mathematics and Statistics Volume 9 Number 6 Nurfarah Zulkifli and Nor Muhainiah Mohd Ali Let G be a finite group. The probability that two elements, selected at random, one from a subgroup H and one from G, have orders whose greatest common divisor (gcd) equals one is called the relative coprime probability. Meanwhile, the relative coprime graph is the graph whose vertices are the elements of the group, with two distinct vertices adjacent if and only if their orders are coprime and at least one of them lies in the subgroup. This research focuses on determining the relative coprime probability and graph for cyclic subgroups of some nonabelian groups of small order, and their associated graph properties, by referring to the definitions and theorems given by previous researchers. Various results on the relative coprime probability for nonabelian groups of small order are obtained. As for the relative coprime graph, the results show that the domination number for each group is one, whereas the number of edges and the independence number vary from group to group. The graphs that can be formed are either a star graph, a planar graph, or a complete subgraph, depending on the order of the subgroup of the group.
PubDate: Nov 2021
- Tensor Multivariate Trace Inequalities and Their Applications
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Shih Yu Chang and Hsiao-Chun Wu In linear algebra, the trace of a square matrix is defined as the sum of the elements on the main diagonal. The trace of a matrix is the sum of its eigenvalues (counted with multiplicities), and it is invariant under a change of basis. This characterization can be used to define the trace of a tensor in general. Trace inequalities are mathematical relations between different multivariate trace functionals involving linear operators. These relations are straightforward equalities if the involved linear operators commute; however, they can be difficult to prove when non-commuting linear operators are involved. Given two Hermitian tensors H1 and H2 that do not commute, does there exist a method to transform one of the two tensors so that they commute, without completely destroying the structure of the original tensor? The spectral pinching method is a tool to resolve this problem. In this work, we apply the spectral pinching method to prove several trace inequalities that extend the Araki–Lieb–Thirring (ALT) inequality, the Golden–Thompson (GT) inequality, and the logarithmic trace inequality to arbitrarily many tensors. Our approach relies on complex interpolation theory as well as asymptotic spectral pinching, providing a transparent mechanism for treating generic tensor multivariate trace inequalities. As an example application of our tensor extension of the Golden–Thompson inequality, we give a tail bound for the independent sum of tensors. Such a bound plays a fundamental role in high-dimensional probability and statistical data analysis.
PubDate: May 2021
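The matrix fact the tensor definition above generalizes is easy to verify by hand: for a 2x2 matrix the eigenvalues are the roots of t**2 - (a+d)t + (ad - bc), so their sum is exactly the trace a + d. A small check with an illustrative matrix:

```python
# Verify trace = sum of eigenvalues for an illustrative 2x2 matrix [[a,b],[c,d]].
from math import sqrt

a, b, c, d = 5.0, 2.0, 1.0, 3.0
trace = a + d
disc = sqrt((a - d) ** 2 + 4 * b * c)      # discriminant of the characteristic poly
eig1, eig2 = (trace + disc) / 2, (trace - disc) / 2
print(trace, eig1 + eig2)                  # both equal 8.0
```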
- No Finite Time Blowup for 3D Incompressible Navier Stokes Equations via
Scaling Invariance
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Terry E. Moschandreou The Clay Mathematics Institute problem on the Navier–Stokes equations, the breakdown of smooth solutions, is examined here on an arbitrary cube subset of three-dimensional space with periodic boundary conditions. The incompressible Navier–Stokes equations are presented in a new and conventionally different way, by naturally reducing them to an operator form which is then further analyzed. It is shown that a reduction to a general 2D Navier–Stokes system decoupled from a 1D non-linear partial differential equation can be obtained. This is executed using integration over n-dimensional compact intervals, which allows the decoupling. The operator form is considered in a physical geometric vorticity case and in a more general case. In the general case, the equations are revealed to have smooth solutions which exhibit finite-time blowup on a fine measure-zero set; using the Prékopa–Leindler and Gagliardo–Nirenberg inequalities, it is shown that for any non-zero-measure set in the form of a cube subset of 3D space there is no finite-time blowup for the starred velocity for large cube dimension and small d. In particular, vortices are shown to exist, and it is shown that zero is in the attractor of the 3D Navier–Stokes equations.
PubDate: May 2021
- Comparing the Performance of AdaBoost, XGBoost, and Logistic Regression
for Imbalanced Data
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Sharmeen Binti Syazwan Lai Nur Huda Nabihan Binti Md Shahri Mazni Binti Mohamad Hezlin Aryani Binti Abdul Rahman and Adzhar Bin Rambli An imbalanced data problem occurs in the absence of a good class distribution between classes. Imbalanced data cause the classifier to be biased toward the majority class, as standard classification algorithms assume that the training set is balanced. Therefore, it is crucial to find a classifier that can deal with imbalanced data for any given classification task. The aim of this research is to find the best method among AdaBoost, XGBoost, and logistic regression for dealing with imbalanced simulated and real datasets. The performance of the three methods on both simulated and real imbalanced datasets is compared using five performance measures, namely sensitivity, specificity, precision, F1-score, and g-mean. The results on the simulated datasets show that logistic regression performs better than AdaBoost and XGBoost on highly imbalanced datasets, whereas on the real imbalanced datasets, AdaBoost and logistic regression demonstrated similarly good performance. All methods seem to perform well on datasets that are not severely imbalanced. Compared to AdaBoost and XGBoost, logistic regression is found to predict better for datasets with severe imbalance ratios. However, all three methods perform poorly for data with a 5% minority class and a sample size of n = 100. In this study, it is found that different methods perform best for data with different minority percentages.
PubDate: May 2021
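The five performance measures named in the abstract above are all functions of a binary confusion matrix. A minimal sketch with made-up counts (TP, FN, FP, TN are illustrative, not data from the study):

```python
# The five measures from the study, computed from an illustrative confusion matrix.
from math import sqrt

tp, fn, fp, tn = 30, 10, 20, 140   # made-up counts; minority class is "positive"

sensitivity = tp / (tp + fn)       # recall on the minority class
specificity = tn / (tn + fp)
precision   = tp / (tp + fp)
f1          = 2 * precision * sensitivity / (precision + sensitivity)
g_mean      = sqrt(sensitivity * specificity)

print(sensitivity, specificity, precision, round(f1, 3), round(g_mean, 3))
```

The g-mean, the geometric mean of sensitivity and specificity, is popular for imbalanced data precisely because it penalizes a classifier that sacrifices the minority class for overall accuracy.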
- Block Method for the Solution of First Order Nonlinear ODEs and Its
Application to HIV Infection of CD4+T Cells Model
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Adeyeye Oluwaseun and Omar Zurni Some of the issues relating to the human immunodeficiency virus (HIV) epidemic can be expressed as a system of nonlinear first-order ordinary differential equations. This includes modelling the spread of the HIV virus in infecting the CD4+T cells that help the human immune system fight diseases. However, real-life differential equation models usually fail to have an exact solution, which is also the case for the nonlinear model considered in this article. Thus, an approximate method, known as the block method, is developed to solve the system of first-order nonlinear differential equations. To develop the block method, a linear block approach was adopted, and the basic properties required to classify the method as convergent were investigated. The block method was found to be convergent, which establishes its usability for the solution of the model. The solution obtained from the newly developed method was compared to previous methods that have been adopted to solve the same model. In order to have a justifiable basis of comparison, two step-length values were substituted to obtain one-step and two-step block methods. The results show the newly developed block method obtaining accurate results in comparison to previous studies. Hence, this article introduces a new method suitable for the direct solution of first-order differential equation models without the need to simplify them to a system of linear algebraic equations. Likewise, its convergence properties and accuracy give the block method an edge over existing methods.
PubDate: May 2021
- Stationary and Non-Stationary Models of Extreme Ground-Level Ozone in
Peninsular Malaysia
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Siti Aisyah Zakaria Nor Azrita Mohd Amin Noor Fadhilah Ahmad Radi and Nasrul Hamidin High ground-level ozone (GLO) concentrations adversely affect human health, vegetation, and the ecosystem. Therefore, continuous monitoring of GLO trends is good practice for addressing air quality issues arising from high GLO concentrations. The purpose of this study is to introduce stationary and non-stationary models of extreme GLO. The method is applied to 25 selected stations in Peninsular Malaysia, using the maximum daily 8-hour GLO concentration data from 2000 to 2016. The parameters of the distribution are estimated using maximum likelihood estimation. A comparison between the stationary (constant) model and the non-stationary (linear and cyclic) models is performed using the likelihood ratio test (LRT). The LRT compares the deviance statistic to a chi-square distribution; a large deviance provides significant evidence for the non-stationary model, whether with a linear or a cyclic trend. The best-fitting model among the selected models is chosen by Akaike's Information Criterion. The results show that all 25 stations conform to a non-stationary model, either linear or cyclic: 14 stations show a significant linear trend in the location parameter, while 11 stations follow the cyclic model. This study is important for identifying trends in the ozone phenomenon for better air quality risk management.
PubDate: May 2021
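The likelihood ratio test described in the abstract above reduces to comparing the deviance D = 2*(l1 - l0) of two nested fits against a chi-square quantile. A minimal sketch; the two maximized log-likelihoods below are made-up illustrative values, not results from the study:

```python
# LRT for nested stationary vs. non-stationary fits (illustrative numbers).
l0 = -512.4        # maximized log-likelihood, stationary model
l1 = -507.9        # maximized log-likelihood, linear trend in location
extra_params = 1   # one extra parameter => 1 degree of freedom

deviance = 2 * (l1 - l0)
critical = 3.841   # chi-square 95th percentile with 1 df

print(deviance, deviance > critical)  # exceeds the threshold -> prefer the trend model
```

Here the deviance of 9.0 exceeds 3.841, so at the 5% level the non-stationary model would be preferred.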
- Numerical Treatment for Solving Fuzzy Volterra Integral Equation by Sixth
Order Runge-Kutta Method
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Rawaa Ibrahim Esa Rasha H Ibraheem and Ali F. Jameel There has recently been considerable focus on finding reliable and more effective numerical methods for solving different mathematical problems with integral equations. The Runge–Kutta methods of numerical analysis are a family of iterative methods, both implicit and explicit, of different orders of accuracy, used for temporal discretization and adapted here for the numerical solution of integral equations. Fuzzy integral equations (FIEs) appear widely in scientific analysis and engineering applications because of the incomplete information in their mathematical models and parameters under a fuzzy domain. In this paper, the sixth-order Runge–Kutta method is used to solve fuzzy Volterra integral equations of the second kind numerically. The proposed method is reformulated and updated for solving fuzzy second-kind Volterra integral equations in general form by using the properties and descriptions of fuzzy set theory. Furthermore, a fuzzy Volterra integral equation, based on the parametric form of fuzzy numbers, is transformed into two crisp integral equations of the second kind under fuzzy properties. We apply the modified method to a specific example with a linear fuzzy Volterra integral equation to illustrate the strength and accuracy of this process. A comparison of the evaluated numerical results with the exact solution for each fuzzy level set is displayed in tables and figures. The results indicate that the proposed approach is remarkably feasible and easy to use.
PubDate: May 2021
- Approximation Treatment for Linear Fuzzy HIV Infection Model by
Variational Iteration Method
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Hafed H Saleh Azmi A. and Ali. F. Jameel There has recently been considerable focus on finding reliable and more effective approximate methods for solving biological mathematical models in the form of differential equations. One of the well-known approximate, or semi-analytical, methods for solving linear and nonlinear ordinary differential equations, as well as partial differential equations, in various fields of mathematics is the Variational Iteration Method (VIM). This paper looks at the use of fuzzy differential equations in human immunodeficiency virus (HIV) infection modeling. The main advantage of the method lies in its flexibility and ability to solve nonlinear equations easily. VIM is introduced to provide approximate solutions for a linear ordinary differential equation system comprising the fuzzy HIV infection model. The model captures the uncertainty in the immune cell counts and the viral load intensity that triggers fuzziness in patients infected by HIV. The immune cells concerned are CD4+T-cells and cytotoxic T-lymphocytes (CTLs). The dynamics of the immune cell level and viral burden are analyzed and compared across three classes of patients with low, moderate, and high immune systems. A modification and formulation of VIM in the fuzzy domain, based on the properties of fuzzy set theory, is presented. A model was established in this regard, accompanied by plots that demonstrate the reliability and simplicity of the method. The numerical results of the model indicate that this approach is effective and easily used in the fuzzy domain.
PubDate: May 2021
- On the Up-to-date Course of Mathematical Logic for the Future Math
Teachers
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 E. N. Sinyukova S. V. Drahanyuk and O. O. Chepok The all-round development of students' everyday logic should be considered one of the most important tasks of general secondary education as a whole, and of general secondary mathematics education in particular. We discuss how teachers' training institutions of higher education should organize the training of future math teachers for institutions of general secondary education. The main goal is to ensure their ability to carry out all their future professional activities, including the necessary participation in forming the everyday logic of their pupils. The authors believe that the vocational educational program for training future secondary school math teachers must contain a separate course of mathematical logic comprising at least 90 training hours (3 ECTS credits). Although the content of the course cannot be independent of the general arrangement of mathematics education in the corresponding country, it ought to be a subject of discussion for the international mathematics community and for managers in the sphere of higher mathematics education. Simultaneously, the role, the place, and the expedient structure of such a course in the corresponding training programs should be under discussion. The article represents the authors' point of view on the problems indicated above. The research is qualitative in character as a whole; only some of its conclusions have statistical corroboration.
PubDate: May 2021
- A Goal Programming Approach for Multivariate Calibration Weights
Estimation in Stratified Random Sampling
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Siham Rabee Ramadan Hamed Ragaa Kassem and Mahmoud Rashwaan Calibration estimation is one of the most important ways to improve the precision of survey estimates. It is a method in which the design weights are modified as little as possible, by minimizing a given distance measure, subject to a set of constraints related to suitable auxiliary information. This paper proposes a new approach for Multivariate Calibration Estimation (MCE) of the population mean of a study variable under a stratified random sampling scheme using two auxiliary variables. Almost all of the literature on calibration estimation uses the Lagrange multiplier technique to estimate the calibrated weights. The Lagrange multiplier technique, however, requires all equations included in the model to be differentiable, and non-differentiable functions may be faced in some cases. Hence, it is worth looking for another technique that provides more flexibility in dealing with the problem. Accordingly, in this paper, a goal programming approach is newly suggested for MCE. The theory of the proposed calibration estimation is presented and the calibrated weights are estimated. A comparison study is conducted using actual and generated data to evaluate the performance of the proposed multivariate calibration estimator against other existing calibration estimators. The results of this study show that the proposed goal programming approach for MCE is more flexible and efficient than other calibration estimation methods of the population mean.
PubDate: May 2021
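For contrast with the goal-programming approach proposed above, the classical Lagrange-multiplier route has a closed form under the chi-square distance. A minimal sketch with a single auxiliary variable (the standard Deville–Särndal linear calibration, not the paper's method; all data below are made up):

```python
# Closed-form chi-square-distance calibration with one auxiliary variable.
d = [10.0, 10.0, 12.0, 8.0]      # design weights (made-up sample)
x = [3.0, 5.0, 2.0, 4.0]         # auxiliary variable on the sample
t_x = 170.0                      # known population total of x

# Lagrange multiplier: lambda = (t_x - sum d*x) / sum d*x^2,
# giving calibrated weights w_i = d_i * (1 + lambda * x_i).
lam = (t_x - sum(di * xi for di, xi in zip(d, x))) / \
      sum(di * xi * xi for di, xi in zip(d, x))
w = [di * (1 + lam * xi) for di, xi in zip(d, x)]

# Calibration constraint: the new weights reproduce the auxiliary total.
print(sum(wi * xi for wi, xi in zip(w, x)))  # 170.0
```

With two auxiliary variables, lam becomes a 2-vector obtained from a 2x2 linear system; the goal-programming formulation replaces this algebra with an optimization model that tolerates non-differentiable distances.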
- Per Capita Expenditure Modeling Using Spatial EBLUP Approach – SAE
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Luthfatul Amaliana Ani Budi Astuti and Nur Silviyah Rahmi The per capita expenditure of an area is a welfare indicator of its community and a reflection of its economic capacity to meet basic needs. Bali is the second richest province in Indonesia. This study aims to model the per capita expenditure of Bali at the sub-district level using the Spatial-EBLUP (SEBLUP) approach in small area estimation (SAE). SAE modeling is an indirect estimation approach capable of increasing the effectiveness of sample sizes and minimizing variance. The heterogeneity of an area is influenced by the surrounding areas: everything is related to everything else, but nearer things are more influential than distant ones. Therefore, a spatial effect can be included in the random effect of a small area model, which is then called the SEBLUP model. The selection of a spatial weights matrix is very important in spatial data modeling, as it represents the neighborhood relationship of each spatial observation unit. A SEBLUP model needs a spatial weights matrix, which can be based on distance (radial distance and power distance), contiguity (queen), or a combination of distance and contiguity (radial distance and queen contiguity). Applying the SEBLUP approach to the per capita expenditure of Bali shows that the SEBLUP model with the radial-distance spatial weights matrix is the best model, having the smallest ARMSE. South Denpasar Sub-district is the most prosperous sub-district, with the highest per capita expenditure in Bali, while Abang Sub-district has the lowest.
PubDate: May 2021
- An Approximation to Zeros of the Riemann Zeta Function Using Fractional
Calculus
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 A. Torres-Hernandez and F. Brambila-Paz In this paper an approximation to the zeros of the Riemann zeta function has been obtained for the first time using a fractional iterative method which originates from a unique feature of fractional calculus. This iterative method, valid for one and several variables, uses the property that the fractional derivative of a constant is not always zero. This allows us to construct a fractional iterative method for finding the zeros of functions while avoiding expressions that involve hypergeometric functions, Mittag-Leffler functions, or infinite series. Furthermore, we can find multiple zeros of a function using a single initial condition. This partially solves the intrinsic problem of iterative methods, for which it is generally necessary to provide N initial conditions to find N solutions. Consequently, the method is suitable for approximating nontrivial zeros of the Riemann zeta function when the absolute value of its imaginary part tends to infinity. Some examples of its implementation are presented, and finally 53 different values near the zeros of the Riemann zeta function are shown.
PubDate: May 2021
- Almost Interior Gamma-ideals and Fuzzy Almost Interior Gamma-ideals in
Gamma-semigroups
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Wichayaporn Jantanan Anusorn Simuen Winita Yonthanthum and Ronnason Chinram Ideal theory plays an important role in the study of many algebraic structures, for example, rings, semigroups, and semirings. The algebraic structure of a Г-semigroup is a generalization of the classical semigroup, and many results on semigroups have been extended to Г-semigroups; in particular, the ideal theory of Г-semigroups has been widely investigated. In this paper, we first study some novel ideals of Г-semigroups. In Section 2, we define almost interior Г-ideals and weakly almost interior Г-ideals of Г-semigroups by using the ideas of interior Г-ideals and almost Г-ideals of Г-semigroups. Every almost interior Г-ideal of a Г-semigroup S is clearly a weakly almost interior Г-ideal of S, but the converse is not true in general. The notions of both almost interior Г-ideals and weakly almost interior Г-ideals generalize the notion of an interior Г-ideal of a Г-semigroup S. We investigate basic properties of both kinds of ideals. The notion of fuzzy sets was introduced by Zadeh in 1965. A fuzzy set is an extension of the classical notion of a set: fuzzy sets are somewhat like sets whose elements have degrees of membership. In the remainder of this paper, we focus on some novel fuzzy ideals of Г-semigroups. In Section 3, we introduce fuzzy almost interior Г-ideals and fuzzy weakly almost interior Г-ideals of Г-semigroups and investigate their properties. Finally, we give some relationships between almost interior Г-ideals [weakly almost interior Г-ideals] and fuzzy almost interior Г-ideals [fuzzy weakly almost interior Г-ideals] of Г-semigroups.
PubDate: May 2021
- On the Stochastic Processes on 7-Dimensional Spheres
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Nurfa Risha and Muhammad Farchani Rosyid We studied isometric stochastic flows of a Stratonovich stochastic differential equation on seven-dimensional spheres, i.e., on the standard sphere S7 and on the Gromoll-Meyer exotic sphere. The two spheres are homeomorphic but not diffeomorphic. The standard sphere can be constructed as a quotient manifold under one action of S3, whereas the Gromoll-Meyer exotic sphere arises as a quotient manifold with respect to another action of S3. The corresponding continuous-time stochastic process and its properties on the Gromoll-Meyer exotic sphere can be obtained by constructing a homeomorphism h between the two spheres. The stochastic flow can then be regarded as the same stochastic flow on S7, but viewed in the Gromoll-Meyer differential structure. The flow on the standard sphere and the corresponding flow on the Gromoll-Meyer sphere constructed in this paper have the same regularities: there is no difference between the appearance of the stochastic flow on S7 viewed in the standard differential structure and the appearance of the same stochastic flow viewed in the Gromoll-Meyer differential structure. Furthermore, since the inverse mapping h^(-1) is differentiable, the pull-back of the Riemannian metric tensor G on the standard sphere is also differentiable. This implies, for instance, that the Fokker-Planck equation associated with the stochastic flow and the Fokker-Planck equation associated with the stochastic differential equation have the same regularities, provided that the function β is C1-differentiable. Therefore both differential structures on S7 give the same description of the dynamics of the distribution function of the stochastic process under study on seven-dimensional spheres.
PubDate: May 2021
- Reducing Approximation Error with Rapid Convergence Rate for Non-Negative
Matrix Factorization (NMF)
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Jayanta Biswas Pritam Kayal and Debabrata Samanta Non-Negative Matrix Factorization (NMF) is utilized in many important applications. This paper presents the development of an efficient low-rank approximate NMF algorithm for feature extraction related to text mining and spectral data analysis. NMF can also be used for clustering. NMF factorizes a positive matrix A into two positive matrices W and H such that A=WH. The proposal uses the k-means clustering algorithm to determine the centroid of each cluster and assigns the centroid coordinates of each cluster as one column of the W matrix, so that the initial choice of W is positive. The H matrix is determined with a gradient descent algorithm based on thin QR optimization. The performance of the proposed NMF algorithm is illustrated with comparative results. The accurate choice of the initial positive W matrix reduces the approximation error, and the use of the thin QR algorithm in combination with the gradient descent approach provides a rapid convergence rate for NMF. The proposed algorithm is implemented with randomly generated matrices in the MATLAB environment. The number of significant singular values of the generated matrix is selected as the number of clusters. The error and convergence rate of the proposed algorithm are compared with those of current algorithms in this research. The accurate measurement of execution time for an individual program is not possible in MATLAB; the average execution time over 200 iterations is therefore calculated with an increasing iteration count of the proposed algorithm, and the comparative results are presented.
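The initialization and update scheme described above can be sketched in a few lines; this is an illustrative reconstruction in NumPy, not the authors' MATLAB code, and the matrix sizes, learning rate, and the projected-gradient update for H (a simplified stand-in for the thin-QR-based step) are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans_centroids(A, k, iters=50):
    """Tiny k-means on the columns of A; returns an m x k matrix of centroids."""
    n = A.shape[1]
    C = A[:, rng.choice(n, size=k, replace=False)].copy()
    for _ in range(iters):
        dist = ((A[:, None, :] - C[:, :, None]) ** 2).sum(axis=0)  # (k, n)
        labels = dist.argmin(axis=0)
        for j in range(k):
            if (labels == j).any():
                C[:, j] = A[:, labels == j].mean(axis=1)
    return C

def nmf(A, k, iters=300, lr=1e-3):
    """NMF A ~= W H: W is initialized from k-means centroids of the columns
    of A (as the abstract describes), and H is updated by projected gradient
    descent -- a simplification of the paper's thin-QR-based step."""
    W = np.maximum(kmeans_centroids(A, k), 1e-9)      # positive initial W
    H = np.full((k, A.shape[1]), 1e-3)                # small positive start
    for _ in range(iters):
        G = W.T @ (W @ H - A)            # gradient of 0.5 * ||A - W H||_F^2 in H
        H = np.maximum(H - lr * G, 0.0)  # projected step keeps H nonnegative
    return W, H

A = np.abs(rng.random((20, 30)))          # random positive test matrix
W, H = nmf(A, k=5)
rel_err = np.linalg.norm(A - W @ H) / np.linalg.norm(A)
```

With W fixed, the objective 0.5‖A − WH‖² is convex in H, so a small-step projected gradient descent decreases the error monotonically; the k-means initialization supplies a positive, data-driven W.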
PubDate: May 2021
- Application of Supersaturated Design to Study the Spread of Electronic
Games
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Alanazi Talal Abdulrahman Randa Alharbi Osama Alamri Dalia Alnagar and Bader Alruwaili A supersaturated design (SSD) is an important method that relies on factorial designs whose number of factors exceeds the number of experimental runs. The analysis of supersaturated designs is challenging because the design matrix has a complicated structure. Identifying the variables that contain the active factors plays an essential role when a supersaturated design is used to analyse the data. A variable selection technique to screen active effects in the SSDs, together with regression analysis, is applied to our case study. This study set out to examine statistically the actual reasons for the spread of electronic games in Saudi society. An online survey provided quantitative data from 200 participants. Respondents were randomly divided into two conditions (Yes+, No-) and asked to respond to one of two sets of the causes of electronic games. The responses were analysed using the contrast method with supersaturated designs and regression methods in the SPSS software to determine the actual causes that led to the spread of electronic games. The findings indicated that some parents, because of their constant preoccupation, resort to such games in order to relieve themselves of their child's inconvenience; this, together with insufficient awareness among parents of the dangers of these games and excessive pampering, is the factor that led to the spread of electronic games in Saudi society. On this basis, it is recommended that Saudi government professionals develop an operational plan to study these causes and take action. No recent studies address the external environmental aspects that could influence gaming among individuals; hence, further research is required in this field.
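The screening of active effects by variable selection can be illustrated with a generic forward-selection regression on synthetic two-level data; this sketch is not the authors' SPSS contrast analysis, and the run size, factor count, and active factors are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic supersaturated setting: 12 runs, 10 two-level factors (+/-1),
# with only factors 0 and 3 truly active.
n, p = 12, 10
X = rng.choice([-1.0, 1.0], size=(n, p))
y = 4.0 * X[:, 0] - 3.0 * X[:, 3] + 0.1 * rng.standard_normal(n)

def forward_select(X, y, max_terms=5):
    """Greedy forward selection: repeatedly add the column most correlated
    with the current least-squares residual."""
    chosen, resid = [], y.copy()
    for _ in range(max_terms):
        scores = [abs(X[:, j] @ resid) for j in range(X.shape[1])]
        j = int(np.argmax(scores))
        if j in chosen:
            break
        chosen.append(j)
        Xs = X[:, chosen]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        resid = y - Xs @ beta            # residual is orthogonal to chosen columns
    return chosen

active = forward_select(X, y)
```

The screening succeeds when the few truly active factors dominate the residual correlations, which is the working assumption behind SSD analysis.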
PubDate: May 2021
- Volume Minimization of a Closed Coil Helical Spring Using ALO, GWO, DA,
FA, FPA, WOA, CSO, BA, PSO and GSA
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Rejula Mercy. J and S. Elizabeth Amudhini Stephen Springs are important machine members often used to exert force, absorb energy and provide flexibility. In mechanical systems, wherever flexibility or a relatively large load under the given circumstances is required, some form of spring is used. In this paper, non-traditional optimization algorithms, namely the Ant Lion Optimizer, Grey Wolf Optimizer, Dragonfly Algorithm, Firefly Algorithm, Flower Pollination Algorithm, Whale Optimization Algorithm, Cat Swarm Optimization, Bat Algorithm, Particle Swarm Optimization and Gravitational Search Algorithm, are proposed to obtain the global optimal solution of the closed coil helical spring design problem. The problem has three design variables, eight inequality constraints and three bounds. The objective function U is to minimize the volume of the closed coil helical spring subject to the constraints. The design variables considered are the wire diameter d, the mean coil diameter D and the number of active coils N of the spring. The proposed methods are tested and their performance is evaluated. Ten non-traditional optimization methods are used to find the minimum volume, and the problem is computed in the MATLAB environment. The experimental results show that Particle Swarm Optimization outperforms the other methods: PSO gives better results in terms of consistency and the minimum value of time and volume of the closed coil helical spring. Compared to other optimization methods, PSO has advantages such as simplicity and efficiency. In the future, PSO could be extended to solve other mechanical element problems.
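A minimal particle swarm optimizer of the kind compared in the paper can be sketched as follows; the bounds and the unconstrained volume-style objective (N + 2)·D·d² are illustrative stand-ins, since the paper's eight inequality constraints are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

def pso(f, bounds, n_particles=30, iters=400, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: inertia weight w, cognitive and
    social coefficients c1, c2, positions clipped to the box bounds."""
    lo, hi = np.array(bounds).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Illustrative stand-in for the spring volume in terms of wire diameter d,
# mean coil diameter D and number of active coils N (constraints omitted).
def volume(x):
    d, D, N = x
    return (N + 2.0) * D * d ** 2

best_x, best_val = pso(volume, [(0.05, 2.0), (0.25, 1.3), (2.0, 15.0)])
```

In a full treatment, the eight inequality constraints would enter the objective through a penalty term added to volume(x).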
PubDate: May 2021
- A New Three-Parameter Weibull Inverse Rayleigh Distribution: Theoretical
Development and Applications
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Adeyinka Solomon Ogunsanya Waheed Babatunde Yahya Taiwo Mobolaji Adegoke Christiana Iluno Oluwaseun R. Aderele and Matthew Iwada Ekum In this work, a three-parameter Weibull Inverse Rayleigh (WIR) distribution is proposed. The new WIR distribution is an extension of the one-parameter Inverse Rayleigh distribution that incorporates a transformation of the Weibull distribution and the Log-logistic as quantile function. The statistical properties of the proposed distribution, such as the quantile function, order statistics, monotone likelihood ratio property, hazard and reverse hazard functions, moments, skewness, kurtosis, and linear representation, were studied theoretically. The maximum likelihood estimators cannot be derived in explicit form, so we employed the iterative Newton-Raphson method to obtain them. The Bayes estimators for the scale and shape parameters of the WIR distribution under the squared error, Linex, and entropy loss functions are provided. The Bayes estimators cannot be obtained explicitly either; hence we adopted a numerical approximation method known as Lindley's approximation in order to obtain them. Simulation procedures were used to assess the effectiveness of the different estimators. The applications of the new WIR distribution were demonstrated on three real-life data sets. Further results showed that the new WIR distribution performed credibly well when compared with five related existing skewed distributions. It was observed that the Bayesian estimates perform better than those of the classical method.
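The Newton-Raphson route to maximum likelihood can be illustrated on the ordinary two-parameter Weibull distribution; the WIR distribution itself is not in standard libraries, so this is a generic sketch with simulated data, not the authors' estimator:

```python
import math, random

random.seed(3)

# Simulated Weibull(shape=2, scale=3) sample via inverse-CDF sampling.
n = 2000
data = [3.0 * (-math.log(1.0 - random.random())) ** (1.0 / 2.0) for _ in range(n)]

def weibull_mle(x, k0=1.0, tol=1e-9, max_iter=100):
    """Newton-Raphson on the profile score equation for the Weibull shape k,
    followed by the closed-form scale estimate."""
    logs = [math.log(v) for v in x]
    mlog = sum(logs) / len(x)

    def score(k):
        s = sl = 0.0
        for v, lv in zip(x, logs):
            p = v ** k
            s += p
            sl += p * lv
        return sl / s - 1.0 / k - mlog   # zero at the MLE of k

    k = k0
    for _ in range(max_iter):
        h = 1e-6
        deriv = (score(k + h) - score(k - h)) / (2 * h)  # numerical derivative
        step = score(k) / deriv
        k = max(k - step, k / 2.0)       # keep the shape estimate positive
        if abs(step) < tol:
            break
    scale = (sum(v ** k for v in x) / len(x)) ** (1.0 / k)
    return k, scale

k_hat, scale_hat = weibull_mle(data)
```

The same iterate-until-the-score-vanishes pattern applies to the WIR likelihood, only with a longer score expression.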
PubDate: May 2021
- Statistical Analyses on Factors Affecting Retirement Savings Decision in
Malaysia
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Nurul Sima Mohamad Shariff and Waznatul Widad Mohamad Ishak The retirement savings decision concerns an individual's judgment on savings, planning, and preparation for retirement. Several factors may affect this decision, among them demographic factors and other determinants such as financial knowledge and management, future expectations, social influences, and risk tolerance. Given this interest, this study aims to examine the impact of such factors on the retirement savings decision. Furthermore, this study also discusses the retirement savings decision among Malaysians of different age groups. The data were collected through a survey strategy using a set of questionnaires. The questions were divided into several sections covering the demographic profile, Likert-scale questions on the factors, and the retirement savings decision. The sampling technique used in this study is random sampling with 385 respondents. Several statistical procedures were utilized, namely the reliability test, the Kruskal-Wallis H test, and the ordered probit model. This study found that age, financial knowledge and management, future expectations, and social influences were significant determinants of the retirement savings decision in Malaysia.
PubDate: May 2021
- Relative Complexity Index for Decision-Making Method
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Harliza Mohd Hanif Daud Mohamad and Rosma Mohd Dom The complexity of a method has been discussed in the decision-making area, since complexity may impose disadvantages such as loss of information and a high degree of uncertainty. However, there is no empirical justification for determining the complexity level of a method. This paper focuses on introducing a method of measuring the complexity of decision-making methods. In the computational area, there is an established method of measuring complexity, named Big-O notation, and this paper adopts it for determining the complexity level of decision-making methods. However, Big-O has rarely been applied to decision-making methods, and applying it alone may not differentiate the complexity levels of two different decision-making methods. Hence, this paper introduces a Relative Complexity Index (RCI) to address this problem. The basic properties of the Relative Complexity Index are also discussed. After its introduction, the Relative Complexity Index is applied to the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) method.
PubDate: May 2021
- Z-Score Functions of Dual Hesitant Fuzzy Set and Its Applications in
Multi-Criteria Decision Making
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Zahari Md Rodzi Abd Ghafur Ahmad Nur Sa’aidah Ismail Wan Normila Mohamad and Sarahiza Mohmad A dual hesitant fuzzy set (DHFS) consists of two parts: a membership hesitant function and a non-membership hesitant function. This set supports a more flexible assignment of degrees to each element in the domain and can address two types of hesitancy in this situation. It can be considered a powerful tool for expressing uncertain information in the decision-making process. Z-score functions, namely the z-arithmetic mean, z-geometric mean, and z-harmonic mean, have been proposed with five important bases: the hesitant degree of a dual hesitant fuzzy element (DHFE), the DHFE deviation degree, the parameter α (the importance of the hesitant degree), the parameter β (the importance of the deviation degree), and the parameter ϑ (the importance of membership (positive view) or non-membership (negative view)). A comparison of the z-score with existing score functions was made to show some of their drawbacks. Next, the z-score function is applied to solve multi-criteria decision making (MCDM) problems. To illustrate the proposed method's effectiveness, an example of MCDM, specifically in pattern recognition, is shown.
PubDate: May 2021
- Two Observations in the Application of Logarithm Theory and their
Implications for Economic Modeling and Analysis
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 Oluremi Davies Ogun The contents of this paper apply to research in the fields of economics, statistics, the physical and life sciences, other social sciences, accounting and finance, business management, and mathematics, both core and applied. First, I discuss the misconception, and the implications thereof, inherent in the conventional practice of entering interest rates as natural or untransformed series in data analysis, most especially in regression models. The trends and variabilities of transformed and untransformed interest rate series were shown to be similar, enhancing the likelihood of similar performances in regressions. By extension, therefore, the indicated conventional practice unnecessarily and unjustifiably precludes elasticity inference on the coefficients of interest rates, amounting to procedural inefficiency, as an independent computation of elasticity becomes the only available option. Percentages are not the equivalent of percentage changes; thus only series already in growth terms, that is, percentage changes, should be spared log transformation. Secondly, the paper stresses the imperative of avoiding unwieldy and theory-incongruent expressions in post-preliminary data analysis, by flagging the idea that regression models, in particular those of the growth variety, should as much as practicable sync with the dictates of modern time series econometrics in the specification of final equations.
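The elasticity point can be made concrete with a small synthetic regression: in a log-log specification the slope is the elasticity itself, whereas a levels specification yields only a marginal effect and the elasticity would need a separate computation. The data-generating parameters below are illustrative:

```python
import math, random

random.seed(4)

# Synthetic log-log relationship y = 2 * x^0.8, so the true elasticity is 0.8.
xs = [random.uniform(1.0, 10.0) for _ in range(500)]
ys = [2.0 * x ** 0.8 * math.exp(random.gauss(0.0, 0.01)) for x in xs]

def ols_slope(u, v):
    """Simple least-squares slope of v regressed on u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = sum((a - mu) ** 2 for a in u)
    return num / den

# Regressing log(y) on log(x) recovers the elasticity directly as the slope.
elasticity = ols_slope([math.log(x) for x in xs], [math.log(y) for y in ys])
```

A series already expressed as a percentage change (a growth rate) needs no such transformation, which is the distinction the paper draws for interest rates.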
PubDate: May 2021
- On Some Properties of Leibniz's Triangle
Abstract: Publication date: May 2021
Source:Mathematics and Statistics Volume 9 Number 3 R. Sivaraman One of the greatest mathematicians of all time, Gottfried Leibniz, introduced an amusing triangular array of numbers called Leibniz's harmonic triangle, similar to Pascal's triangle but with different properties. I introduced the entries of Leibniz's triangle through Beta integrals. In this paper, I prove that the Beta integral formulation gives exactly the same entries as those obtained through Pascal's triangle. The Beta integral formulation leads us to establish several significant properties related to Leibniz's triangle in a quite elegant way. I show that the sum of alternating terms in any row of Leibniz's triangle is either zero or a harmonic number. A separate section is devoted to proving interesting results regarding centralized Leibniz's triangle numbers, including a closed expression, the asymptotic behavior of successive centralized Leibniz's triangle numbers, the connection between centralized Leibniz's triangle numbers and Catalan numbers as well as centralized binomial coefficients, and the convergence of series whose terms are centralized Leibniz's triangle numbers. All the results discussed in that section are new and proved for the first time. Finally, I prove two exceedingly important theorems, namely the Infinite Hockey Stick theorem and the Infinite Triangle Sum theorem. Though these two theorems were known in the literature, proving them using the Beta integral formulation is quite new and makes the proofs short and elegant. Thus, by a simple re-formulation of the entries of Leibniz's triangle through Beta integrals, I prove existing as well as new theorems in a much more compact way. These ideas will throw new light upon understanding the fabulous Leibniz number triangle.
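The Beta integral formulation can be checked directly: the entry in row n, position k of Leibniz's harmonic triangle is 1/((n+1)·C(n,k)), which equals the Beta integral ∫₀¹ x^k (1−x)^(n−k) dx = k!(n−k)!/(n+1)!. A short exact-arithmetic verification (a sketch, not the paper's computations):

```python
from fractions import Fraction
from math import comb, factorial

def leibniz_entry(n, k):
    """Leibniz harmonic triangle entry: 1 / ((n+1) * C(n, k))."""
    return Fraction(1, (n + 1) * comb(n, k))

def beta_integral(n, k):
    """Exact value of B(k+1, n-k+1) = integral_0^1 x^k (1-x)^(n-k) dx,
    computed from factorials."""
    return Fraction(factorial(k) * factorial(n - k), factorial(n + 1))

# The Beta integral reproduces every triangle entry exactly.
rows = [[leibniz_entry(n, k) for k in range(n + 1)] for n in range(6)]
```

The defining recurrence of the triangle, each entry being the sum of the two entries directly below it, follows from the same integral identity.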
PubDate: May 2021
- The Seasonal Reproduction Number of p.vivax Malaria Dynamics in Korea
Abstract: Publication date: Mar 2021
Source:Mathematics and Statistics Volume 9 Number 2 Anne M. Fernando Ana Vivas Barber and Sunmi Lee Understanding the dynamics of malaria can help in reducing the impact of the disease. Previous research proved that including animals in the human transmission model, or 'zooprophylaxis', is effective in reducing transmission of malaria in the human population. This model studies Plasmodium vivax malaria and has variables for the animal population and mosquito attraction to animals. The existing time-independent malaria population ODE model is extended to a time-dependent model, and the differences are explored. We introduce a seasonal mosquito population, a Gaussian profile based on data, as a variant of the previous models. The seasonal reproduction number is found using the next generation matrix, and endemic and stability analyses are carried out using dynamical systems theory. The model includes short- and long-term human incubation periods, sensitivity analysis is performed on the parameters, and all simulations are over a three-year period. Simulations show, for each year, larger peaks in the infected populations and in the seasonal reproduction number during the summer months, and we analyze which parameters have more sensitivity in the model and in the seasonal reproduction number. The analysis provides conditions for the disease-free equilibrium (DFE), and the system is found to be locally asymptotically stable around the DFE when R0 < 1.
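The next-generation-matrix computation of a reproduction number can be sketched generically; the SEIR-type compartments and rate values below are assumptions for illustration, not the paper's p. vivax model:

```python
import numpy as np

# Illustrative rates: beta = transmission, sigma = progression out of the
# exposed class, gamma = recovery.
beta, sigma, gamma = 0.3, 0.2, 0.1

F = np.array([[0.0, beta],    # new infections enter the exposed class
              [0.0, 0.0]])
V = np.array([[sigma, 0.0],   # transitions between infected compartments
              [-sigma, gamma]])

def reproduction_number(F, V):
    """R0 is the spectral radius of the next generation matrix F V^{-1}."""
    ngm = F @ np.linalg.inv(V)
    return max(abs(np.linalg.eigvals(ngm)))

R0 = reproduction_number(F, V)   # equals beta / gamma for this layout
```

For the seasonal version, beta would be replaced by a time-dependent profile (such as the Gaussian mosquito abundance), yielding a seasonally varying reproduction number.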
PubDate: Mar 2021
- A New Solution for The Enzymatic Glucose Fuel Cell Model with Morrison
Equation via Haar Wavelet Collocation Method
Abstract: Publication date: Mar 2021
Source:Mathematics and Statistics Volume 9 Number 2 Kuntida Kawinwit Akapak Charoenloedmongkhon and Sanoe Koonprasert Integral equations are essential tools in various areas of applied mathematics, and computational approaches to solving them are important in scientific research. The Haar wavelet collocation method (HWCM) with operational matrices of integration is one well-known method that has been applied to solve systems of linear integral equations. In this paper, an approximate analytical method based on the Haar wavelet collocation method is applied to a system of diffusion-convection partial differential equations with initial and boundary conditions. This system models the enzymatic glucose fuel cell with the chemical reaction rate of the Morrison equation. The enzymatic glucose fuel cell model describes the concentrations of glucose and hydrogen ions that can be converted into energy. During the process, the model reduces to a linear integral equation system involving computational Haar matrices, which can be computed by HWCM code in the Maple program. Illustrative examples are provided to demonstrate the precision and effectiveness of the proposed method. The results are shown as numerical solutions for glucose and hydrogen ion concentrations.
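The Haar matrix at collocation points, the basic building block of HWCM, can be constructed as follows; this is a generic Python sketch rather than the authors' Maple code, with the size 2M and point placement following the usual collocation convention:

```python
import numpy as np

def haar_matrix(M):
    """Haar coefficient matrix of size 2M x 2M, with the Haar functions
    evaluated at the collocation points t_l = (l - 0.5) / (2M) on [0, 1)."""
    N = 2 * M
    t = (np.arange(1, N + 1) - 0.5) / N
    H = np.zeros((N, N))
    H[0, :] = 1.0                       # scaling function h_1 = 1
    i, j = 1, 0
    while i < N:
        m = 2 ** j                      # wavelet level
        for k in range(m):              # translations within the level
            lo, mid, hi = k / m, (k + 0.5) / m, (k + 1) / m
            H[i, (t >= lo) & (t < mid)] = 1.0
            H[i, (t >= mid) & (t < hi)] = -1.0
            i += 1
            if i >= N:
                break
        j += 1
    return H

H = haar_matrix(4)   # 8 x 8 Haar matrix
```

A function is then expanded as a combination of the rows of H; because distinct Haar rows are mutually orthogonal at these points, the resulting collocation system is straightforward to solve.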
PubDate: Mar 2021
- A Dirac Delta Operator
Abstract: Publication date: Mar 2021
Source:Mathematics and Statistics Volume 9 Number 2 Juan Carlos Ferrando If T is a (densely defined) self-adjoint operator acting on a complex Hilbert space H and I stands for the identity operator, we introduce the delta function operator at T. When T is a bounded operator, then is an operator-valued distribution. If T is unbounded, is a more general object that still retains some properties of distributions. We provide an explicit representation of in some particular cases, derive various operative formulas involving and give several applications of its usage in Spectral Theory as well as in Quantum Mechanics.
PubDate: Mar 2021
- On Non-Associative Rings
Abstract: Publication date: Mar 2021
Source:Mathematics and Statistics Volume 9 Number 2 Ida Kurnia Waliyanti Indah Emilia Wijayanti and M. Farchani Rosyid The Jordan ring is one example of a non-associative ring. We can construct a Jordan ring from an associative ring by defining the Jordan product. In this paper, we discuss the properties of non-associative rings by studying the properties of Jordan rings. All of the ideals of a non-associative ring R are non-associative, except the ideal generated by the associators in R. Hence, a quotient ring can be constructed, where is the ideal generated by the associators in R. The fundamental homomorphism theorem for rings can be applied to non-associative rings, and with a little modification we find that is isomorphic to . Furthermore, we define a module over a non-associative ring, investigate its properties, and give some examples of such modules. We show that if M is a module over a non-associative ring R, then M is also a module over if is contained in the annihilator of R. Moreover, we define the tensor product of modules over a non-associative ring. The tensor product of modules over a non-associative ring is commutative and associative up to isomorphism, but not element by element.
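The standard construction mentioned above, passing from an associative product to the Jordan product a∘b = (ab + ba)/2, can be checked concretely on 2×2 matrices; the matrices below are chosen only to witness commutativity together with the failure of associativity:

```python
from fractions import Fraction

def mat_mul(A, B):
    """2x2 matrix multiplication over the rationals (the associative product)."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def jordan(A, B):
    """Jordan product A o B = (AB + BA) / 2 built from the associative
    matrix product -- the standard way to obtain a Jordan ring."""
    S = mat_add(mat_mul(A, B), mat_mul(B, A))
    return [[x * Fraction(1, 2) for x in row] for row in S]

F = Fraction
A = [[F(0), F(1)], [F(0), F(0)]]   # elementary matrix e12
B = [[F(0), F(0)], [F(1), F(0)]]   # elementary matrix e21
C = [[F(1), F(0)], [F(0), F(0)]]   # elementary matrix e11

commutative = jordan(A, B) == jordan(B, A)                     # always holds
associative = jordan(jordan(A, B), C) == jordan(A, jordan(B, C))
```

The product is commutative by construction, while associativity fails already for these elementary matrices, which is what makes the resulting structure genuinely non-associative.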
PubDate: Mar 2021
- Solving One-Dimensional Porous Medium Equation Using Unconditionally
Stable Half-Sweep Finite Difference and SOR Method
Abstract: Publication date: Mar 2021
Source:Mathematics and Statistics Volume 9 Number 2 Jackel Vui Lung Chew Jumat Sulaiman and Andang Sunarto The porous medium equation is a nonlinear parabolic partial differential equation that describes many physical occurrences. Solutions of the porous medium equation are important for investigating nonlinear processes involving fluid flow, heat transfer, diffusion of gas particles or population dynamics. As part of the development of a family of efficient iterative methods to solve the porous medium equation, the Half-Sweep technique has been adopted. Prior works in the existing literature on the application of Half-Sweep to successfully approximate the solutions of several types of mathematical problems are the underlying motivation of this research. This work aims to solve the one-dimensional porous medium equation efficiently by incorporating the Half-Sweep technique in the formulation of an unconditionally stable implicit finite difference scheme. The noticeable unique property of Half-Sweep is its ability to secure a low computational complexity in computing numerical solutions. This work applies the Half-Sweep finite difference scheme to the general porous medium equation, up to the formulation of a nonlinear approximation function. The Newton method is used to linearize the formulated Half-Sweep finite difference approximation, so that the linear system can be constructed in matrix form. Next, the Successive Over-Relaxation method with a single parameter is applied to efficiently solve the generated linear system per time step. To evaluate the efficiency of the developed method, named the Half-Sweep Newton Successive Over-Relaxation (HSNSOR) method, criteria such as the number of iterations, the program execution time and the magnitude of the absolute errors were investigated.
According to the numerical results, the numerical solutions obtained by the HSNSOR are as accurate as those of the Half-Sweep Newton Gauss-Seidel (HSNGS), which is in the same family of Half-Sweep iterations, and the benchmark Newton-Gauss-Seidel (NGS) method. The improvement in the numerical results produced by the HSNSOR is significant: it requires fewer iterations and a shorter program execution time than the HSNGS and NGS methods.
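The per-time-step linear solve can be illustrated with a generic Successive Over-Relaxation iteration on a tridiagonal system of the kind an implicit diffusion step produces; this is a plain full-sweep sketch, not the paper's half-sweep implementation, and the matrix and relaxation factor are assumptions:

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10000):
    """Successive Over-Relaxation for A x = b with relaxation factor omega;
    omega = 1 reduces to Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, it + 1
    return x, max_iter

# Diagonally dominant tridiagonal system typical of an implicit 1-D
# diffusion step (illustrative; a half-sweep scheme would use only
# alternate grid points, roughly halving the work).
n = 50
A = (np.diag(np.full(n, 2.5))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
b = np.ones(n)

x_sor, iters = sor_solve(A, b, omega=1.5)
```

Setting omega = 1 recovers the Gauss-Seidel benchmark the abstract compares against; a well-chosen omega > 1 reduces the iteration count.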
PubDate: Mar 2021
- Some Remarks and Propositions on Riemann Hypothesis
Abstract: Publication date: Mar 2021
Source:Mathematics and Statistics Volume 9 Number 2 Jamal Salah In 1859, Bernhard Riemann, a German mathematician, presented a paper to the Berlin Academy that would change mathematics forever. The mystery of prime numbers was the focus. At the core of the presentation was a concept that had not yet been proven by Riemann, one that to this day baffles mathematicians. If the Riemann hypothesis holds true, it could change the way we do business, because prime numbers are the key element of banking and e-commerce security. It would also have a significant influence on the cutting edge of science, impacting quantum mechanics, chaos theory, and the future of computation. In this article, we look at some well-known results on the Riemann zeta function in a different light. We explore the proofs of the zeta integral representation, analytic continuation and the first functional equation. Initially, we observe the omission of a logically undefined term in the integral representation of the zeta function by means of the Gamma function. We then propound some modifications in order to reasonably justify the location of the non-trivial zeros on the critical line Re(s) = 1/2 by assuming that ζ(s) and ζ(1-s) simultaneously equal zero. Consequently, we conditionally prove the Riemann Hypothesis.
PubDate: Mar 2021
- On Three-Dimensional Mixing Geometric Quadratic Stochastic Operators
Abstract: Publication date: Mar 2021
Source:Mathematics and Statistics Volume 9 Number 2 Ftameh Khaled and Pah Chin Hee It is widely recognized that the theory of quadratic stochastic operators frequently arises due to its enormous contribution as a source of analysis for the investigation of dynamical properties and for modeling in diverse domains. In this paper, we are motivated to construct a class of quadratic stochastic operators called mixing quadratic stochastic operators generated by a geometric distribution on the infinite state space . We also study the regularity of such operators by investigating the limit behavior for each case of the parameter. Some non-regular cases are proved for a new definition of mixing operators by using the shifting definition, where the new parameters satisfy the shifted conditions. A mixing quadratic stochastic operator was established on 3-partitions of the state space and considered for a special case of the parameter Ɛ. We found that the mixing quadratic stochastic operator is a regular transformation for and non-regular for . Also, the trajectories converge to one of the fixed points. Stability and instability of the fixed points were investigated by finding the eigenvalues of the Jacobian matrix at these fixed points. We approximate the parameter Ɛ by the parameter , where we established the regularity of the quadratic stochastic operators for some inequalities that satisfy . We conclude this paper by comparing with previous studies, in which we found that some such quadratic stochastic operators are non-regular.
PubDate: Mar 2021
- Formulation of a New Implicit Method for Group Implicit BBDF in Solving
Related Stiff Ordinary Differential Equations
Abstract: Publication date: Mar 2021
Source:Mathematics and Statistics Volume 9 Number 2 Norshakila Abd Rasid Zarina Bibi Ibrahim Zanariah Abdul Majid and Fudziah Ismail This paper proposes a new alternative approach to the implicit diagonal block backward differentiation formula (BBDF) for solving linear and nonlinear first-order stiff ordinary differential equations (ODEs). We generate the solver by manipulating the number of back values to achieve the highest order possible using the interpolation procedure. The algorithm is developed and implemented in C++. The numerical integrator approximates a few solution points concurrently, together with off-step points, in a block scheme over a non-overlapping solution interval at a single iteration. The lower triangular matrix form of the implicit diagonal yields fewer differentiation coefficients and ultimately reduces the execution time when running the code. We choose two intermediate points as off-step points appropriately, which are proven to guarantee the method's zero stability. The off-step points help to increase the accuracy by optimizing the local truncation error. The proposed solver satisfies the theoretical consistency and zero-stability requirements, leading to a convergent multistep method of third algebraic order. We used well-known, standard linear and nonlinear stiff IVP problems from the literature to validate and measure the algorithm's accuracy and processor time efficiency. The performance metrics are validated by comparison with a proven solver, and the output shows that the alternative method is better than the existing one.
PubDate: Mar 2021
- The Varying Threshold Values of Logistic Regression and Linear
Discriminant for Classifying Fraudulent Firm
Abstract: Publication date: Mar 2021
Source:Mathematics and Statistics Volume 9 Number 2 Samingun Handoyo Ying-Ping Chen Gugus Irianto and Agus Widodo The aim of this research is to find the best performance of both logistic regression and linear discriminant classifiers when their thresholds take various values. The performance tools used for evaluating the classifier models are the confusion matrix, precision-recall, the F1 score and the receiver operating characteristic (ROC) curve. The Audit-risk data set is used for the implementation of the proposed method. Data screening and dimension reduction using principal component analysis (PCA) are the first steps conducted before the data are divided into training and testing sets. After the training process for obtaining the classifier model parameters has been completed, the performance measures are calculated only on the testing set, where various constants are added to the threshold value of both classifier models. The logistic regression classifier has the best performance, with a precision-recall of 94%, an F1-score of 91.7%, and an area under the curve (AUC) of 0.906, where the threshold values lie in the interval between 0.002 and 0.018. On the other hand, the linear discriminant classifier has its best performance when the threshold value is 0.035, with a precision-recall of 94%, an F1-score of 91.7%, and an AUC of 0.846.
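The varying-threshold evaluation can be sketched on synthetic data with a plain gradient-descent logistic regression; this is an illustration, not the Audit-risk data or the paper's SPSS/PCA workflow:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic binary data: one feature, with class 1 shifted upward.
n = 400
y = rng.integers(0, 2, n)
x = rng.normal(loc=2.0 * y, scale=1.0)
X = np.column_stack([np.ones(n), x])     # intercept plus one feature

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Plain gradient-descent logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def f1_at_threshold(p, y, t):
    """Precision, recall and F1 when predicting class 1 for p >= t."""
    pred = (p >= t).astype(int)
    tp = int(((pred == 1) & (y == 1)).sum())
    fp = int(((pred == 1) & (y == 0)).sum())
    fn = int(((pred == 0) & (y == 1)).sum())
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

w = fit_logistic(X, y)
probs = 1.0 / (1.0 + np.exp(-X @ w))
# Sweep candidate thresholds and keep the best F1, mirroring the abstract's
# varying-threshold evaluation (done on the training data here for brevity).
best_t, best_f1 = max(((t, f1_at_threshold(probs, y, t)[2])
                       for t in np.linspace(0.05, 0.95, 19)),
                      key=lambda tf: tf[1])
```

The same sweep applies to any score-producing classifier, including linear discriminant analysis, by thresholding its discriminant score instead of a probability.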
PubDate: Mar 2021
- Polya's Problem Solving Strategy in Trigonometry: An Analysis of Students'
Difficulties in Problem Solving
Abstract: Publication date: Mar 2021
Source:Mathematics and Statistics Volume 9 Number 2 Dwi Sulistyaningsih Eko Andy Purnomo and Purnomo This study investigates the errors made by students, and their various causal factors, in working on trigonometry problems that apply the sine and cosine rules. Samples were taken randomly from high school students. Data were collected in two ways, namely a written test based on Polya's strategy and interviews with students who made mistakes. Students' errors were analyzed with the Newman concept. The results show that all types of errors occurred, with a distribution of 3.83, 19.15, 24.74, 24.89 and 27.39% for reading errors (RE), comprehension errors (CE), transformation errors (TE), process skill errors (PSE), and encoding errors (EE), respectively. The RE, CE, TE, PSE, and EE are marked, respectively, by errors in reading symbols or important information; misunderstanding information and not understanding what is known and what is asked; inability to turn problems into mathematical models along with incorrect use of signs in arithmetic operations; inaccuracy in the answering process and a lack of understanding of fraction operations; and inability to deduce answers. An anomaly occurs in that students with medium trigonometry achievement turn out to make more mistakes than students with low achievement.
PubDate: Mar 2021
- Instrument Test Development of Mathematics Skill on Elementary School
Abstract: Publication date: Mar 2021
Source:Mathematics and Statistics Volume 9 Number 2 Viktor Pandra Badrun Kartowagiran and Sugiman The aims of this research are: 1) to produce a valid and reliable test instrument for mathematics skills in elementary school, and 2) to determine the characteristics of that instrument. The instrument development in this research uses a modified version of the development model of Wilson, Oriondo and Antonio. The testing sample comprises 160 students in each grade. The results are: 1) the Aiken V validity index is 0.979 in grade IV and 0.988 in grade V, and the instrument coefficients in grades IV and V are 0.883 and 0.954; 2) the model fitting the data is the 1PL model, with parameter b (difficulty level). The parameter analysis of the test items in grades IV and V shows that all items are in the good category, lying between -2 and 2, which indicates that all items are acceptable and reliable for measuring the development of elementary school students' mathematics skills.
PubDate: Mar 2021
- Numerical Solution for Fuzzy Diffusion Problem via Two Parameter
Alternating Group Explicit Technique
Abstract: Publication date: Mar 2021
Source:Mathematics and Statistics Volume 9 Number 2 A. A. Dahalan and J. Sulaiman Computational techniques have become a significant area of study in physics and engineering, and the finite difference method was among the first used to evaluate such problems numerically. In 2002, an explicit finite difference technique was used to solve the fuzzy partial differential equation (FPDE) based on the Seikkala derivative. This article investigates the application of an iterative technique, in particular the Two Parameter Alternating Group Explicit (TAGE) method, to the finite difference approximation arising from the fuzzy heat equation. It broadens the use of the TAGE iterative technique to fuzzy problems owing to the reliability of the approach. The development and execution of the TAGE technique for the full-sweep (FS) and half-sweep (HS) schemes are also presented; the HS scheme is intended to reduce the computational complexity of the iterative methods by roughly half or more. Additionally, numerical outcomes from two experimental problems are compared with the Alternating Group Explicit (AGE) approach to assess feasibility. In conclusion, the family of TAGE techniques has been used to solve the linear system arising from a one-dimensional fuzzy diffusion (1D-FD) discretization by a finite difference scheme. The findings suggest that the HSTAGE approach is superior in iteration count, execution time, and Hausdorff distance relative to the FSTAGE and AGE approaches: the number of iterations decreases by approximately 71.60-72.95%, and the execution time improves by 74.05-86.42%.
Since TAGE is well suited to concurrent processing, a key benefit is that it comprises sets of independent tasks that can be performed at the same time. The suggested technique is expected to be useful for further exploration in solving multi-dimensional FPDEs.
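For orientation, the finite difference approximation that TAGE-type solvers accelerate starts from the plain explicit scheme for the 1-D diffusion equation. A minimal sketch of that underlying scheme (not the TAGE iteration itself, and with made-up initial data):

```python
def explicit_diffusion_step(u, r):
    """One explicit finite-difference step for the 1-D diffusion equation
    u_t = alpha * u_xx, with r = alpha*dt/dx**2 and fixed end values
    (Dirichlet boundaries); stability of the explicit scheme needs r <= 1/2."""
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
    return new

# illustrative: a heat spike in the middle of a rod, ends held at zero
u = [0.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(10):
    u = explicit_diffusion_step(u, r=0.25)
```

Each time step yields a sparse linear system in the implicit/fuzzy setting; the TAGE family solves that system iteratively in groups that can run concurrently.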
PubDate: Mar 2021
- Prospective Filipino Teachers' Disposition to Mathematics
Abstract: Publication date: Mar 2021
Source:Mathematics and Statistics Volume 9 Number 2 Restituto M. Llagas Jr. Studying mathematics involves acquiring a positive disposition toward mathematics and seeing mathematics as an effective way of looking at real-life situations. This study aimed to correlate the disposition to mathematics of prospective Filipino teachers with some teacher-related variables. The participants were prospective Filipino teachers at the University of Northern Philippines (UNP) and at the Divine Word College of Vigan (DWCV). Two instruments were utilized in the study: a self-report questionnaire and the Mathematics Dispositional Functioning Inventory developed by Beyers [1]. Frequency and percentage, weighted mean, and chi-square were utilized for data analysis. Results show that the participants' overall disposition to mathematics is "Positive"; the cognitive, affective, and conative aspects each received a positive disposition, although some items show an uncertain disposition to mathematics. The participants' profile variables have no significant relationship with their cognitive and conative dispositions to mathematics. A training plan was conceptualized to disseminate the results of the study, to enhance awareness and understanding of dispositions, to equip participants with appropriate methods for solving mathematical problems, and to provide enrichment activities that foster a positive disposition to mathematics and consequently improve prospective teachers' and students' performance. Teachers are influential in developing students' effective ways of learning, doing, and thinking about mathematics, and understanding how attitudes are learned helps establish the association between a teacher's disposition and students' attitudes and performance. Thus, fostering dispositions to mathematics through training improves prospective Filipino teachers' and students' performance.
PubDate: Mar 2021
- On Application of Max-Plus Algebra to Synchronized Discrete Event System
Abstract: Publication date: Mar 2021
Source:Mathematics and Statistics Volume 9 Number 2 A. A. Aminu S. E. Olowo I. M. Sulaiman N. Abu Bakar and M. Mamat Max-plus algebra is a discrete algebraic system built on the operations max (⊕) and plus (⊗), which play the roles of addition and multiplication in conventional algebra. This algebraic structure is a semiring whose elements are the real numbers together with ε = -∞ and e = 0. The synchronized discrete event problem, on the other hand, is one in which events are scheduled to meet a deadline; it has two aspects, namely the events running simultaneously and the lengthiest event completing at the deadline. A recent survey of max-plus linear algebra shows that the operations max (⊕) and plus (⊗) play a significant role in the modeling of human activities; however, numerous studies show that the literature on applications of max-plus algebra to real-life problems remains very limited. This motivates the basic algebraic results and techniques of this research. This paper proposes the discrepancy method of max-plus for solving an n×n system of linear equations, and further shows that such a system has either a unique solution, infinitely many solutions, or no solution. The proposed concept was also extended to the job-shop problem in a synchronized event. The results obtained show that the method is very efficient for solving n×n systems of linear equations and is also applicable to job-shop problems.
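The max-plus product underlying such models replaces sum/product with max/sum. A minimal sketch with an illustrative scheduling reading (the matrices are made up, not from the paper):

```python
NEG_INF = float('-inf')  # epsilon, the max-plus additive identity

def maxplus_matmul(A, B):
    """Max-plus matrix product: entry (i, j) is max over k of A[i][k] + B[k][j],
    replacing the sum/product of conventional matrix multiplication."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# A[i][k]: processing time of input task k on machine i; x: task start times.
# The product gives each machine's earliest completion time (longest path),
# which is exactly the synchronization constraint in a discrete event system.
A = [[2, 5],
     [3, 3]]
x = [[0],
     [1]]
print(maxplus_matmul(A, x))  # [[6], [4]]
```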
PubDate: Mar 2021
- A Novel Concept of Uncertainty Optimization Based Multi-Granular Rough Set
and Its Application
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 Pradeep Shende and Arvind Kumar Sinha Data are being generated at an exponential pace with the advancement of information technology, and such data contain highly uncertain and vague information. The rough set approximation is a way to find information in a dataset under uncertainty and to classify its objects. This work presents a mathematical approach to evaluating dataset uncertainties and its application to data reduction. We extend the multi-granulation variable precision rough set in the context of uncertainty optimization, developing an uncertainty optimization-based multi-granular rough set (UOMGRS) to minimize the uncertainties in the dataset more effectively. Using UOMGRS, we find the most informative attributes in the feature space. It is desirable to minimize the rough set boundary region using the attributes with the highest approximation quality; we therefore group the attributes whose relative quality of approximation is maximal, so as to maximize the positive region and minimize the uncertain region. We compare UOMGRS with the single-granulation rough set (SGRS) and the multi-granular rough set (MGRS). By our proposed method, we require only an average of 62% of the attributes for approximation, whereas SGRS and MGRS need an average of at least 72% of the attributes in the dataset to approximate its concepts. Our proposed method thus requires less data for the classification of objects and helps minimize the uncertainties in the dataset more efficiently.
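The lower/upper approximations and the boundary region referred to above have a standard set-theoretic form. A minimal sketch on a toy universe (the granules and target concept are made up for illustration):

```python
def rough_approximations(blocks, target):
    """Lower and upper rough-set approximations of the concept `target`
    relative to `blocks`, the equivalence classes (granules) induced by an
    attribute set.  Objects in upper - lower form the uncertain boundary
    region that approaches like the paper's seek to minimize."""
    target = set(target)
    lower = {x for b in blocks if set(b) <= target for x in b}
    upper = {x for b in blocks if set(b) & target for x in b}
    return lower, upper

blocks = [{1, 2}, {3}, {4, 5}]        # granules of a 5-object universe
lower, upper = rough_approximations(blocks, {2, 3})
print(lower, upper)                    # {3} {1, 2, 3}
# quality of approximation: |lower| / |universe|
print(len(lower) / 5)                  # 0.2
```

Attributes inducing finer granules enlarge the lower approximation and shrink the boundary, which is the intuition behind selecting attributes of highest approximation quality.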
PubDate: Jul 2021
- The Class of Noetherian Rings With Finite Valuation Dimension
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 Samsul Arifin Hanni Garminia and Pudji Astuti Not long ago, Ghorbani and Nazemian [2015] introduced the concept of valuation dimension, which measures how far a ring differs from being a valuation ring. They showed that every Artinian ring has finite valuation dimension, and that any commutative ring with finite valuation dimension is semiperfect; however, there are semiperfect rings with infinite valuation dimension. Given these facts, it is of interest to investigate further which rings have finite valuation dimension. In this article we give conditions that are necessary and sufficient for a Noetherian ring to have finite valuation dimension. In particular, we prove that a Noetherian ring has finite valuation dimension if and only if it is Artinian or valuation. Since any ring of finite valuation dimension is semiperfect, our investigation is confined to semiperfect Noetherian rings. Furthermore, as a semiperfect ring is a finite product of local rings, the inquiry splits into two cases: the examined ring is local, or it is a product of at least two local rings. First, a local Noetherian ring has finite valuation dimension if and only if it is Artinian or valuation. Secondly, a Noetherian ring that is a product of two or more local rings is shown to have finite valuation dimension if and only if it is Artinian.
PubDate: Jul 2021
- Derivation of Some Entries in the Tables of David Bierens De Haan and
Anatolii Prudnikov: An Exercise in Integration Theory
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 Robert Reynolds and Allan Stauffer It is always useful to improve the catalogue of definite integrals available in tables. In this paper we use our previous work on Lobachevsky integrals to derive entries in the tables of Bierens De Haan and Anatolii Prudnikov, featuring errata and new integral formulas for interested readers. In this work we derive the definite integral given by (1) in terms of the Lerch function. The importance of this work lies in the derivation of known and new results not presently found in the literature. We applied our contour integral method to an integral in Prudnikov and derived a closed-form solution in terms of a special function. The advantage of using a special function is the added benefit of analytic continuation, which widens the range of computation of the parameters. Special functions have significance in mathematical analysis, functional analysis, geometry, physics, and other applications. They arise in the solutions of differential equations and integrals of elementary functions, and are linked to the theory of Lie groups and Lie algebras, as well as certain topics in mathematical physics.
PubDate: Jul 2021
- A Convergence Algorithm of Boundary Elements for the Laplace Operator's
Dirichlet Eigenvalue Problem
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 Ali Naji Shaker Various boundary element techniques have been used to obtain solutions of the eigenvalue problem for a partial differential equation. A number of mathematical concepts related to the eigenvalue problem are discussed in this paper. Initially, we studied basic approaches such as the Dirichlet distribution, the Dirichlet process, and the mixed Dirichlet model. Four different eigenvalue problems were summarized, viz. the Dirichlet, Neumann, mixed Dirichlet-Neumann, and periodic eigenvalue problems. The Dirichlet eigenvalue problem was analyzed briefly for three different cases of the value of λ. We state the result for the multinomial, whose prior is the Dirichlet distribution. The result on eigenvalues of the ordinary differential equation was extrapolated, and the basic computation of λ, which follows an iterative method, was also performed.
PubDate: Jul 2021
- Quasi-Chebyshevity in
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 Jamila Jawdat and Ayat Kamal This paper deals with Quasi-Chebyshevity in the Bochner function spaces Lp(μ, X), where X is a Banach space. For W a nonempty closed subset of X and x ∊ X, an element w0 in W is called a "best approximation" to x from W if ||x - w0|| ≤ ||x - w|| for all w in W. All best approximation points of x from W form a set usually denoted by PW (x). The set W is called "proximinal" in X if PW (x) is nonempty for each x in X. Now, W is said to be "Quasi-Chebyshev" in X whenever, for each x in X, the set PW (x) is nonempty and compact in X. This subject has been studied in general Banach spaces by several authors, and some results have been obtained. In this work, we study Quasi-Chebyshevity in the Bochner Lp-spaces. The main result in this paper is that, given a Quasi-Chebyshev subspace W in X, Lp(μ, W) is Quasi-Chebyshev in Lp(μ, X) if and only if L1(μ, W) is Quasi-Chebyshev in L1(μ, X). As a consequence, one gets that if W is reflexive in X and X satisfies the sequential KK-property, then Lp(μ, W) is Quasi-Chebyshev in Lp(μ, X).
PubDate: Jul 2021
- Robust Estimation for Proportional Odds Model through Monte Carlo
Simulation
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 Faiz Zulkifli Zulkifley Mohamed Nor Afzalina Azmee and Rozaimah Zainal Abidin Ordinal regression models an ordinal response variable as a function of several explanatory variables. The most commonly used model for ordinal regression is the proportional odds model (POM). The classical technique for estimating the unknown parameters of this model is the maximum likelihood (ML) estimator; however, this method is not suitable in the presence of extreme observations, and a robust regression method is needed to handle extreme points in the data. This study proposes the Huber M-estimator as a robust method to estimate the parameters of the POM with a logistic link function and polytomous explanatory variables. The performance of the ML estimator and the proposed robust method is assessed through an extensive Monte Carlo simulation study conducted in the statistical software R. The measures used for comparison are bias, RMSE, and Lipsitz's goodness-of-fit test. Various sample sizes, percentages of contamination, and residual standard deviations are considered in the simulation study. Preliminary results show that the Huber estimates provide the best results for parameter estimation and overall model fit. Huber's estimator reaches a 50% breakdown point for data containing extreme points quite far from the bulk of the observations. In addition, extreme points only about twice as far out as most points have no major impact on the ML estimates: the ML and Huber estimates may coincide when the model's residual values lie between -2 and 2, and this may also occur for data with a contamination percentage below 5%.
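The robustness mechanism behind a Huber M-estimator can be shown in a few lines. A minimal sketch; the tuning constant k = 1.345 is the classical choice for ~95% efficiency under normal errors, an assumption here rather than a value from the paper:

```python
def huber_psi(r, k=1.345):
    """Huber's influence (psi) function: identity for small residuals,
    clipped at +/-k for extreme ones, so outliers get bounded influence."""
    return max(-k, min(k, r))

def huber_weights(residuals, k=1.345):
    """IRLS weights w(r) = psi(r)/r used when fitting an M-estimator:
    clean points keep weight 1, extreme points are down-weighted."""
    return [1.0 if abs(r) <= k else k / abs(r) for r in residuals]

# residuals of 0.5 and -1.0 are untouched; the outlier at 4.0 is down-weighted
print(huber_weights([0.5, -1.0, 4.0]))  # [1.0, 1.0, 0.336...]
```

This clipping is why residuals within roughly [-2, 2] yield near-identical ML and Huber fits, as the abstract notes.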
PubDate: Jul 2021
- Unsteady Couette Flow Past between Two Horizontal Riga Plates with Hall
and Ion Slip Effects
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 S. Nasrin R. N. Mondal and M. M. Alam The Riga plate is a spanwise array of electrodes and permanent magnets forming a plane surface; it produces electromagnetic hydrodynamic fluid behavior and is mostly used in industrial processes involving fluid flow. In cases where an external magnetic or electric field must be applied, better flow is obtained by involving the Riga plate. The Riga plate acts as an agent to reduce skin friction and enhance heat transfer; it also diminishes turbulent effects, making efficient flow control possible and increasing machine performance. Accordingly, the unsteady Couette flow with Hall and ion-slip current effects between two Riga plates has been investigated numerically. The solutions are obtained by the explicit finite difference method, and results are computed for several values of the dimensionless parameters, such as the pressure gradient parameter, the Hall and ion-slip parameters, the modified Hartmann number, the Prandtl number, and the Eckert number. In this article, the influence of the modified Hartmann number on the flow profiles is substantial owing to the Riga plate. Expressions for the skin friction and the Nusselt number have been computed, and the effects of the relevant parameters on the various distributions have been sketched and presented graphically.
PubDate: Jul 2021
- On the Gaussian Approximation to Bayesian Posterior Distributions
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 Christoph Fuhrmann Hanns-Ludwig Harney Klaus Harney and Andreas Müller The present article derives the minimal number N of observations needed to approximate a Bayesian posterior distribution by a Gaussian. The derivation is based on an invariance requirement for the likelihood. This requirement is defined by a Lie group that leaves the likelihood unchanged when applied both to the observation(s) and to the parameter to be estimated; it leads, in turn, to a class of specific priors. In general, the criterion for the Gaussian approximation is found to depend on (i) the Fisher information related to the likelihood, and (ii) the lowest non-vanishing order in the Taylor expansion of the Kullback-Leibler distance between the likelihood and the one evaluated at the maximum-likelihood estimator given by the observations. Two examples are presented, widespread in various statistical analyses. In the first one, a chi-squared distribution, both the observations and the parameter are defined all over the real axis. In the other one, the binomial distribution, the observation is a binary number, while the parameter is defined on a finite interval of the real axis. Analytic expressions for the required minimal N are given in both cases. The necessary N is an order of magnitude larger for the chi-squared model (continuous observations) than for the binomial model (binary observations). The difference is traced back to symmetry properties of the likelihood function. We see considerable practical interest in our results, since the normal distribution is the basis of the parametric methods of applied statistics widely used in diverse areas of research (education, medicine, physics, astronomy, etc.). An analytical criterion for whether the normal distribution is applicable appears relevant for practitioners in these fields.
PubDate: Jul 2021
- Inference on P[Y < X] for Geometric Extreme Exponential Distribution
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 Reza Pakyari The Geometric Extreme Exponential (GEE) distribution is one of the statistical models that can be useful in fitting and describing lifetime data. In this paper, the problem of estimating the reliability R = P(Y < X) when X and Y are independent GEE random variables with a common scale parameter but different shape parameters has been considered. The probability R = P(Y < X) is also known as the stress-strength reliability parameter and describes the case where a component of strength X is subjected to stress Y. The reliability R = P(Y < X) has applications in engineering, finance, and the biomedical sciences. We present the maximum likelihood estimator of R and study its asymptotic behavior. We first study the asymptotic distribution of the maximum likelihood estimators of the GEE parameters, and prove that these estimators, and hence the estimator of R, are asymptotically normal. A bootstrap confidence interval for R is also presented. Monte Carlo simulations are performed to assess the performance of the proposed estimation method and the validity of the confidence interval. We found the performance of the maximum likelihood estimator, and also of the bootstrap confidence interval, satisfactory even for small sample sizes. Analysis of a dataset has been given for illustrative purposes.
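The quantities involved can be illustrated with stdlib tools. A minimal sketch using the nonparametric plug-in estimator of R and a percentile bootstrap interval; this stands in for, and is not, the paper's GEE maximum likelihood procedure, and the samples are made up:

```python
import random

def reliability_estimate(x_sample, y_sample):
    """Plug-in (Mann-Whitney-type) estimate of R = P(Y < X):
    the fraction of (x, y) pairs with y < x."""
    pairs = sum(1 for x in x_sample for y in y_sample if y < x)
    return pairs / (len(x_sample) * len(y_sample))

def bootstrap_ci(x_sample, y_sample, level=0.95, n_boot=2000, seed=1):
    """Percentile bootstrap confidence interval for R."""
    rng = random.Random(seed)
    stats = sorted(
        reliability_estimate(rng.choices(x_sample, k=len(x_sample)),
                             rng.choices(y_sample, k=len(y_sample)))
        for _ in range(n_boot))
    lo = stats[int((1 - level) / 2 * n_boot)]
    hi = stats[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi

x = [2.1, 3.4, 4.0, 5.2, 2.8]   # strength observations (illustrative)
y = [1.0, 2.5, 1.8, 3.0]        # stress observations (illustrative)
print(reliability_estimate(x, y))
print(bootstrap_ci(x, y))
```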
PubDate: Jul 2021
- Finitely Generated Modules' Uniserial Dimensions Over a Discrete
Valuation Domain
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 Samsul Arifin Hanni Garminia and Pudji Astuti In this article we present methods for calculating the uniserial dimension of a module that is finitely generated over a discrete valuation domain (DVD). The notion of the uniserial dimension of a module over a commutative ring, which measures how far the module deviates from being uniserial, was recently proposed by Nazemian et al. They showed that if R is a Noetherian commutative ring, then every finitely generated module over R has uniserial dimension. Ghorbani and Nazemian have shown that R is a Noetherian (resp. Artinian) ring if and only if the ring R × R has (resp. finite) valuation dimension. Finitely generated modules over a valuation domain are further examined from here; however, since that setting remains too broad, further study of the uniserial dimensions of modules finitely generated over a DVD is needed. In the case of a DVD R, a finitely generated module over R can, as is well known, be decomposed into a direct sum of a torsion module and a free module. Therefore, we first present methods for determining the uniserial dimension of a primary module, followed by methods for a general finitely generated module. The major finding of this work is that the uniserial dimension of such a module is a function of the elementary divisors of its torsion part and the rank of its free part.
PubDate: Jul 2021
- Time Series Forecasting with Trend and Seasonal Patterns using NARX
Network Ensembles
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 Hermansah Dedi Rosadi Abdurakhman and Herni Utami In this research, we propose a Nonlinear Auto-Regressive network with exogenous inputs (NARX) model with a different approach, namely determining the main input variables by stepwise regression and the exogenous inputs by deterministic seasonal dummies. There are two ways to construct deterministic seasonal dummies: binary dummy variables and sine-cosine dummy variables. The hidden layer contains approximately half the number of input variables plus one neurons. Each network was trained with the resilient backpropagation learning algorithm and the hyperbolic tangent activation function. Three ensemble operators are used, namely mean, median, and mode, to address overfitting and the weaknesses of a single NARX model. Furthermore, we provide an empirical study using actual data, with forecasting accuracy measured by the Mean Absolute Percentage Error (MAPE). The empirical results show that the NARX model with binary dummy exogenous inputs is the most accurate for trend and seasonal data patterns with multiplicative properties. For trend and seasonal patterns with additive properties, the NARX model with sine-cosine dummy exogenous inputs is more accurate, except when the mean ensemble operator is used. For trend and non-seasonal data patterns, the most accurate NARX model is obtained with the mean ensemble operator. This research also shows that the median and mode ensemble operators, which are rarely used, are more accurate than the mean ensemble operator for data with trend and seasonal patterns; the median ensemble operator requires the least average computation time, followed by the mode operator. Moreover, the accuracy of all the proposed NARX models consistently outperforms the exponential smoothing and ARIMA methods.
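The two seasonal dummy codings compared in the abstract can be sketched directly. A minimal illustration (the number of harmonics in the trigonometric coding is an assumption, not the paper's setting):

```python
import math

def binary_dummies(t, s):
    """Binary seasonal dummies: s - 1 indicators for a period-s season
    at time t (t = 1, 2, ...); the last season is the reference level."""
    season = (t - 1) % s
    return [1.0 if season == j else 0.0 for j in range(s - 1)]

def sincos_dummies(t, s, harmonics=2):
    """Sine-cosine (trigonometric) seasonal dummies: a compact Fourier
    representation of the same period-s seasonal pattern."""
    out = []
    for k in range(1, harmonics + 1):
        out.append(math.sin(2 * math.pi * k * t / s))
        out.append(math.cos(2 * math.pi * k * t / s))
    return out

# monthly data (s = 12): 11 binary columns vs 4 trigonometric columns
print(len(binary_dummies(1, 12)), len(sincos_dummies(1, 12)))  # 11 4
```

The trigonometric coding feeds far fewer exogenous inputs to the network, one reason the two codings behave differently on multiplicative vs additive seasonal patterns.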
PubDate: Jul 2021
- An Analysis about Fourier Series Estimator in Nonparametric Regression for
Longitudinal Data
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 M. Fariz Fadillah Mardianto Gunardi and Herni Utami The Fourier series is a function often used in mathematics and statistics, especially for modeling, and it can be constructed as an estimator in nonparametric regression. Nonparametric regression is applied not only to cross-sectional data but also to longitudinal data, and estimators such as the kernel and the spline have been developed for the longitudinal case. In this study, we develop an inferential analysis related to the Fourier series estimator in nonparametric regression for longitudinal data. Nonparametric regression based on the Fourier series can model relationships with fluctuating or oscillating patterns, represented by sine and cosine functions. For point estimation, Penalized Weighted Least Squares (PWLS) is used to determine an estimator for the parameter vector in nonparametric regression; unlike previous studies, PWLS is used here to obtain a smooth estimator. The result is an estimator of the nonparametric regression curve for longitudinal data based on the Fourier series approach. In addition, this study investigates the asymptotic properties of these curve estimators, especially linearity and consistency. Case studies based on previous research, together with a new case study, confirm that the Fourier series estimator in nonparametric regression performs well in longitudinal data modeling. This study matters for developing further statistical inference, such as interval estimation and hypothesis testing, related to nonparametric regression with the Fourier series estimator for longitudinal data.
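The sine-and-cosine representation mentioned above amounts to regressing on a trigonometric basis. A minimal sketch of one design-matrix row; the trend term and the number of harmonics K are illustrative modelling choices, not the paper's exact specification:

```python
import math

def fourier_design_row(t, K, period):
    """One design-matrix row for a Fourier series regression
    m(t) = b0 + b1*t + sum_{k=1..K} (a_k*cos(k*w*t) + c_k*sin(k*w*t)),
    with w = 2*pi/period.  Stacking such rows over the observation times
    gives the linear system that a (penalized, weighted) least squares
    fit -- such as PWLS -- solves for the coefficients."""
    w = 2 * math.pi / period
    row = [1.0, t]
    for k in range(1, K + 1):
        row += [math.cos(k * w * t), math.sin(k * w * t)]
    return row

print(fourier_design_row(0.0, K=2, period=12))  # [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```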
PubDate: Jul 2021
- Time Sensitive Analysis of Antagonistic Stochastic Processes and
Applications to Finance and Queueing
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 Jewgeni H. Dshalalow Kizza Nandyose and Ryan T. White This paper deals with a class of antagonistic stochastic games of three players A, B, and C, of whom the first two are active players and the third is a passive player. The active players exchange hostile attacks of random magnitudes at random times with each other and with player C. Player C does not respond to any attacks (which are regarded as collateral damage). Two sustainability thresholds M and T are set so that when the total damages to players A and B cross M and T, respectively, the underlying player is ruined. At some point (the ruin time), one of the two active players will be ruined; player C's damages are sustainable and partly rebuilt. Of interest are the ruin time and the status of all three players at the ruin time, as well as at any time t prior to it. We obtain an analytic formula for the joint distribution of the named processes and demonstrate its closed form in various analytic and computational examples. In some situations pertaining to stock option trading, stock prices (player C) can fluctuate, so it is of interest to predict the first time an underlying stock price drops, or significantly drops, so that the trader can exercise the call option prior to the drop and before maturity T. Player A monitors the prices at observation times, assigning damage 0 to itself if the stock price appreciates or does not change, and a positive integer if the price drops. The observation times are themselves damages to player B with threshold T. The "ruin" time is when threshold M is crossed (i.e., there is a big price drop or a series of drops) or when the maturity T expires, whichever comes first; a prior action is thus needed and its time is predicted. We illustrate the applicability of the game on a number of other practical models, including queueing systems with vacations and (N,T)-policy.
PubDate: Jul 2021
- Three Dimensional Fractional Fourier-Mellin Transform, and its
Applications
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 Arvind Kumar Sinha and Srikumar Panda The main objective of the paper is to study the three-dimensional fractional Fourier-Mellin transform (3DFRFMT), its basic properties, and its applicability, mainly in radar systems, the reconstruction of grayscale images, the detection of the human face, etc. The fractional Fourier transform alone is based on a time-frequency distribution, whereas the fractional Mellin transform alone is based on a scale-covariant transformation; both can detect action within a definite range. The fractional Fourier transform is applicable for controlling the range of shift, whereas the fractional Mellin transform is used to manage the range of rotation and scaling of the function. Combining both transformations, we get an elegant expression for the 3DFRFMT, which can be used in several fields. The paper introduces the concept of the three-dimensional fractional Fourier-Mellin transform and its applications. Modulation is among the most useful concepts for integral transforms in signal systems, radar technology, pattern recognition, and much more, while Parseval's identity corresponds to the conservation of energy. We therefore establish the modulation, Parseval, scaling, and analytic theorems for the three-dimensional fractional Fourier-Mellin transform, and give examples of the transform applied to some functions. Finally, we provide applications of the three-dimensional fractional Fourier-Mellin transform to solving homogeneous and non-homogeneous Mboctara partial differential equations, which can be applied with advantage to different types of problems in signal processing systems. The transform is also beneficial in maritime strategy as a correlator for controlling movements in a specific three-dimensional space, and the concept is a powerful tool for dealing with information system problems.
Having obtained the generalization, we can explore many more ideas in applying three-dimensional fractional Fourier-Mellin transforms to real-world problems.
PubDate: Jul 2021
- Modified Variational Iteration Method for Solving Nonlinear Partial
Differential Equation Using Adomian Polynomials
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 S. A. Ojobor and A. Obihia The aim of this paper is to solve the Cauchy problems of a nonlinear partial differential equation (PDE) numerically by a modified variational iteration approach. The standard variational iteration method (VIM) is first studied, and then modified by using the standard Adomian polynomials to decompose the nonlinear terms of the PDE, yielding the new iterative scheme, the modified variational iteration method (MVIM). The VIM was used to solve the nonlinear parabolic partial differential equation iteratively to obtain some results; the modified VIM was then used to solve the nonlinear PDEs with the aid of the Maple 18 software. The results show that the new MVIM scheme encourages rapid convergence for the problem under consideration. It is observed that for the values considered the MVIM converges to the exact result faster than the VIM, though both attained a maximum error of order 10^-9. The resulting numerical evidence is competitive with the standard VIM in convergence, accuracy, and effectiveness, and shows that the modified VIM approximates the above nonlinear equation better than the traditional VIM. On the basis of the analysis and computation, we strongly advocate the modified VIM, with finite Adomian polynomials as the decomposer of nonlinear terms, as a numerical method for partial differential equations and other mathematical equations.
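For a concrete sense of the decomposition step: for a quadratic nonlinearity N(u) = u², the Adomian polynomials are simply the Cauchy-product coefficients of the squared series. A minimal sketch with made-up component values (the paper's PDE and iterates are not reproduced here):

```python
def adomian_quadratic(u, n):
    """Adomian polynomial A_n for the quadratic nonlinearity N(u) = u^2.
    A_n is the coefficient of lambda^n in (sum_i u_i * lambda^i)^2, i.e.
    the Cauchy product sum_{i=0..n} u_i * u_{n-i}.  `u` holds numeric values
    of the series components u_0, u_1, ... (in the MVIM these come from the
    successive iterates)."""
    return sum(u[i] * u[n - i] for i in range(n + 1))

u = [1.0, 2.0, 3.0]  # u0, u1, u2 (illustrative values)
print([adomian_quadratic(u, n) for n in range(3)])
# A0 = u0^2, A1 = 2*u0*u1, A2 = u1^2 + 2*u0*u2  ->  [1.0, 4.0, 10.0]
```

Each iteration of the MVIM then substitutes A_0, A_1, ... for the nonlinear term instead of evaluating it on the full current iterate.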
PubDate: Jul 2021
- Z-Score Functions of Hesitant Fuzzy Sets
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 Zahari Md Rodzi Abd Ghafur Ahmad Norul Fadhilah Ismail and Nur Lina Abdullah The hesitant fuzzy set (HFS) concept is an extension of the fuzzy set (FS) in which the membership degree of a given element, called a hesitant fuzzy element (HFE), is defined as a set of possible values. A large number of studies concentrate on HFE and HFS measurements, not only because of their crucial importance in theoretical studies, but also because they are required in almost every application field. The score function of an HFE is a useful method for converting data into a single value. Moreover, a score function provides a much easier way to determine each alternative's ranking order for multi-criteria decision-making (MCDM). This study introduces a new hesitant degree of HFE and the z-score function of HFE, which consists of the z-arithmetic mean, z-geometric mean, and z-harmonic mean. The z-score function is developed on four main bases: the hesitant degree of the HFE, the deviation value of the HFE, the importance of the hesitant degree of the HFE, α, and the importance of the deviation value of the HFE, β. The three proposed scores are compared with existing score functions to identify the proposed z-score function's flexibility. An algorithm based on the z-score function was developed to provide an algorithmic solution to MCDM. An example with secondary data on supplier selection for automated companies is used to prove the algorithm's capability in ranking order for MCDM.
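For orientation, the classical score functions that the paper's z-variants refine can be sketched as plain means of an HFE's possible membership values. The element values below are hypothetical, and this sketch omits the paper's α/β weighting of the hesitant degree and deviation value:

```python
from statistics import geometric_mean, harmonic_mean

def arithmetic_score(hfe):
    """Classical score of a hesitant fuzzy element: the arithmetic mean
    of its possible membership degrees."""
    return sum(hfe) / len(hfe)

def geometric_score(hfe):
    return geometric_mean(hfe)

def harmonic_score(hfe):
    return harmonic_mean(hfe)

# two hypothetical alternatives, each evaluated as an HFE
alternatives = {'h1': [0.2, 0.4, 0.6], 'h2': [0.4, 0.5]}

# rank alternatives by score, higher is better, as in MCDM
ranking = sorted(alternatives,
                 key=lambda k: arithmetic_score(alternatives[k]),
                 reverse=True)
```

Here h2 outranks h1 (0.45 vs 0.4) under the arithmetic score even though h1 contains the largest single value, which is exactly the kind of ambiguity the hesitant degree and deviation terms of the z-score are designed to resolve.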
PubDate: Jul 2021
- Two-Sided Group Chain Sampling Plans Based on Truncated Life Test for
Generalized Exponential Distribution
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 Nazrina Aziz Zahirah Hasim and Zakiyah Zain Acceptance sampling is an important technique in quality assurance; its main goal is to reach the most accurate decision on accepting a lot using minimum resources. In practice, this often translates into minimizing the sample sizes required for inspection while satisfying the maximum allowable consumer and producer risks. Numerous sampling plans have been developed over the past decades, the most recent being the incorporation of grouping to enable simultaneous inspection in two-sided chain sampling, which considers information from preceding and succeeding samples. This combination offers improved decision accuracy with reduced inspection resources. To date, the two-sided group chain sampling plan (TSGCh) based on a truncated life test has only been explored for the Pareto distribution of the 2nd kind. This article introduces a TSGCh sampling plan for products whose lifetime follows the generalized exponential distribution. It focuses on minimizing the consumer's risk and operates with three acceptance criteria. The equations derived from the set conditions, involving the generalized exponential and binomial distributions, are solved mathematically to develop this sampling plan. Its performance is measured in terms of the probability of lot acceptance and the minimum number of groups. A comparison with the established new two-sided group chain (NTSGCh) plan indicates that the proposed TSGCh sampling plan performs better in terms of sample size requirement and consumer protection. Thus, this new acceptance sampling plan can reduce inspection time, resources and costs via a smaller sample size (number of groups), while providing the desired consumer protection.
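The binomial ingredient of such plans is the operating characteristic: the probability of accepting a lot given each item's chance of failing before the truncation time. The following is a simplified single-stage sketch, not the paper's three-criterion two-sided chain rule, and the generalized exponential parameters are illustrative:

```python
from math import comb, exp

def ge_failure_probability(t, alpha, lam):
    """Generalized exponential cdf F(t) = (1 - exp(-lam*t))**alpha:
    probability that an item fails before the truncated test time t."""
    return (1 - exp(-lam * t)) ** alpha

def lot_acceptance_probability(g, r, c, p):
    """Binomial OC function of a simple group plan: g groups of r items
    each (n = g*r items), accept the lot if at most c items fail."""
    n = g * r
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(c + 1))

p = ge_failure_probability(t=0.5, alpha=2.0, lam=1.0)
pa = lot_acceptance_probability(g=2, r=5, c=1, p=p)
```

Minimizing the number of groups g subject to a ceiling on the consumer's risk (the acceptance probability at the unacceptable quality level) is the optimization the paper carries out for its chain-based criteria.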
PubDate: Jul 2021
- Approximate Solution of Higher Order Fuzzy Initial Value Problems of
Ordinary Differential Equations Using Bezier Curve Representation
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 Sardar G Amen Ali F Jameel and Abdul Malek Yaakob The Bezier curve is a parametric curve used in computer graphics and related areas. The curve, which is connected to Bernstein polynomials, is named after Pierre Bézier, who used it in the 1960s to design the body curves of Renault's cars. There has recently been considerable focus on finding reliable and more effective approximate methods for solving different mathematical problems with differential equations. Fuzzy differential equations (FDEs) are used extensively in various scientific analyses and engineering applications. They appear because of incomplete information in mathematical models whose parameters are under uncertainty. This article discusses the use of Bezier curves for solving higher order fuzzy initial value problems (FIVPs) in the form of ordinary differential equations. A Bezier curve approach is analyzed and updated with the concepts and properties of fuzzy set theory for solving fuzzy linear problems. The control points of the Bezier curve are obtained by minimizing the residual function based on the least squares method. Numerical examples involving second and third order linear FIVPs are presented and compared with the exact solutions, in the form of tables and two-dimensional plots, to show the capability of the method. The findings show that the proposed method is exceptionally viable and straightforward to apply.
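A Bezier curve with control points P0..Pn can be evaluated by repeated linear interpolation (de Casteljau's algorithm). This small sketch is generic background on the curve itself, not the paper's fuzzy residual-minimization step that chooses the control points:

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t (0 <= t <= 1) by repeatedly
    interpolating between consecutive control points (de Casteljau)."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# a quadratic Bezier curve: it interpolates the two end control points
P = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
mid = de_casteljau(P, 0.5)
```

In the FIVP setting, the unknown control points play the role of the coefficients being fitted: the residual of the differential equation along the curve is minimized in the least squares sense to pin them down.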
PubDate: Jul 2021
- The Effect of Independent Parameter on Accuracy of Direct Block Method
Abstract: Publication date: Jul 2021
Source:Mathematics and Statistics Volume 9 Number 4 Iskandar Shah Mohd Zawawi Zarina Bibi Ibrahim and Khairil Iskandar Othman Block methods that approximate the solution at several points in block form are commonly used to solve higher order differential equations. Inspired by the literature and ongoing research in this field, this paper explores a new derivation of the block backward differentiation formula that employs an independent parameter to provide sufficient accuracy when solving second order ordinary differential equations directly. Three backward steps and five independent parameters are considered in generating the variable coefficients of the formulas. To ascertain that only one parameter exists in the derived formula, the order of the method is determined. The independent parameter retains the favorable convergence properties, although its values affect the zero stability and truncation error. The method is able to compute the approximate solutions at two points concurrently. Another advantage of the method is its ability to solve second order problems directly, without recourse to the technique of reducing them to a system of first order equations. The aim of the error analysis is to observe the effect of the independent parameter on the accuracy, in the sense that with certain appropriate values of the parameter the accuracy is improved. The performance of the method is tested on some initial value problems, and the numerical results confirm that the maximum error and average error obtained by the proposed method are smaller at certain step sizes than those of other conventional direct methods.
PubDate: Jul 2021
- Applications of the Differential Transformation Method and Multi-Step
Differential Transformation Method to Solve a Rotavirus Epidemic Model
Abstract: Publication date: Jan 2021
Source:Mathematics and Statistics Volume 9 Number 1 Pakwan Riyapan Sherif Eneye Shuaib Arthit Intarasit and Khanchit Chuarkham Epidemic models are essential in understanding the transmission dynamics of diseases. These models are often formulated using differential equations, and a variety of methods (approximate, exact and purely numerical) are used to find their solutions. However, most of these methods are computationally intensive or require symbolic computations. This article presents the Differential Transformation Method (DTM) and the Multi-Step Differential Transformation Method (MSDTM) for finding approximate series solutions of an SVIR rotavirus epidemic model. The SVIR model is formulated using nonlinear first-order ordinary differential equations, where S, V, I and R are the susceptible, vaccinated, infected and recovered compartments. We begin by discussing the theoretical background and the mathematical operations of the DTM and MSDTM. Next, the DTM and MSDTM are applied to compute the solutions of the SVIR rotavirus epidemic model. Lastly, to investigate the efficiency and reliability of both methods, the solutions obtained from the DTM and MSDTM are compared with those from the fourth-order Runge-Kutta (RK4) method. The solutions from the DTM and MSDTM are in good agreement with the solutions from the RK4 method, and the comparison shows that the MSDTM is more efficient and converges to the RK4 solution better than the DTM. The advantage of the DTM and MSDTM over other methods is that they do not require a perturbation parameter to work and do not generate secular terms. Therefore the application of both methods is recommended.
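The core DTM idea is that the transform Y(k) = y^(k)(0)/k! turns a differential equation into an algebraic recurrence on the series coefficients. A scalar sketch (assuming the toy equation y' = y rather than the SVIR system) illustrates the principle; the MSDTM applies the same recurrence piecewise on subintervals to keep the truncated series accurate over longer horizons:

```python
from math import exp, isclose

def dtm_exponential(y0, K):
    """Differential transform of y' = y with y(0) = y0.
    Y[k] stores y^(k)(0)/k!; transforming y' = y gives the recurrence
    (k+1) Y[k+1] = Y[k], i.e. Y[k+1] = Y[k] / (k+1)."""
    Y = [y0]
    for k in range(K):
        Y.append(Y[k] / (k + 1))
    return Y

def dtm_eval(Y, t):
    """Sum the finite Taylor-like series produced by the DTM."""
    return sum(c * t**k for k, c in enumerate(Y))

Y = dtm_exponential(1.0, 15)
approx = dtm_eval(Y, 1.0)   # should be close to e
```

For a system such as SVIR, each compartment gets its own coefficient array and the nonlinear terms (for example S·I) transform into discrete convolutions of those arrays.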
PubDate: Jan 2021
- On One Mathematical Model of Cooling Living Biological Tissue
Abstract: Publication date: Jan 2021
Source:Mathematics and Statistics Volume 9 Number 1 B. K. Buzdov When cooling living biological tissue (an active, non-inert medium), cryomedicine uses cryoinstruments with various forms of cooling surface. Cryoinstruments are located on the surface of the biological tissue or penetrate completely into it. With a decrease in the temperature of the cooling surface, an unsteady temperature field appears in the tissue, which in the general case depends on three spatial coordinates and time. To date, there are a large number of scientific publications that consider mathematical models of the cryodestruction of biological tissue. However, in the overwhelming majority of them, the Pennes equation (or one of its modifications) is taken as the basis of the mathematical model, in which the dependence of the heat sources of biological tissue on the desired temperature field is linear. This character of the dependence does not allow one to describe the actually observed spatial localization of heat. In addition, Pennes' model does not take into account the fact that the freezing of the intercellular fluid occurs much earlier than the freezing of the intracellular fluid, and the heat corresponding to these two processes is released at different times. In the proposed work, a new mathematical model of the cooling and freezing of living biological tissue is built with a flat rectangular applicator located on its surface. The model takes into account the above features, is a three-dimensional boundary-value problem of the Stefan type with nonlinear heat sources of a special type, and has applications in cryosurgery. A method is proposed for the numerical study of the problem posed, based on the use of locally one-dimensional difference schemes without explicitly separating the boundary of the influence of cold and the boundaries of the phase transition.
The method was previously successfully tested by the author in solving other two-dimensional problems arising in cryomedicine.
PubDate: Jan 2021
- Fixed Point Theorems in Complex Valued Quasi b-Metric Spaces for
Satisfying Rational Type Contraction
Abstract: Publication date: Jan 2021
Source:Mathematics and Statistics Volume 9 Number 1 J. Uma Maheswari A. Anbarasan and M. Ravichandran In complex valued metric spaces, common fixed point theorems satisfying rational contraction mappings have been proved. Within contraction mapping theory, several researchers have demonstrated many fixed point theorems, common fixed point theorems and coupled fixed point theorems using complex valued metric spaces. In b-metric spaces, the fixed point theorem was proved by the principle of contraction mapping. The notion of complex valued b-metric spaces is a generalization of complex valued metric spaces, and fixed point theorems were explained there using rational contractions. A metric space in which the symmetry condition d(x, y) = d(y, x) is dropped is called a quasi-metric space; every metric space is thus a special kind of quasi-metric space. Quasi-metric spaces have been discussed by many researchers. Banach introduced the theory of contraction mappings and proved the fixed point theorem in metric spaces. We now introduce the new notion of complex valued quasi b-metric spaces involving rational type contraction, and prove unique fixed point theorems for continuous as well as non-continuous functions, illustrated with an example.
PubDate: Jan 2021
- Generalized Relation between the Roots of Polynomial and Term of
Recurrence Relation Sequence
Abstract: Publication date: Jan 2021
Source:Mathematics and Statistics Volume 9 Number 1 Vipin Verma and Mannu Arya Many researchers have been working on recurrence relations, an important topic not only in mathematics but also in physics, economics and various applications in computer science. There are many useful results on recurrence relation sequences, but the main problem is that to find any term of a recurrence relation sequence we need to find all previous terms. Many important theorems have been obtained on recurrence relations. In this paper we give a special identity for generalized kth order recurrence relations. These identities are very useful for finding any term of a recurrence relation sequence of any order.
The authors define a special formula by which any term of a recurrence relation sequence can be found directly. Since finding any term of such a sequence normally requires finding all previous terms, this result is very important. The relation between the coefficients of a recurrence relation and the roots of its characteristic polynomial is well known for second order relations; in this paper the same property is given for recurrence relations of all higher orders, the only condition being that the roots are distinct. So this paper generalizes the relation between the coefficients of a recurrence relation and the roots of a polynomial. Theorem: Let c1 and c2 be arbitrary real numbers and suppose the equation x^2 - c1x - c2 = 0 (1) has distinct roots x1 and x2. Then the sequence {a_n} is a solution of the recurrence relation a_n = c1a_{n-1} + c2a_{n-2} (2) for n = 0, 1, 2, ... if and only if a_n = β1x1^n + β2x2^n, where β1 and β2 are arbitrary constants. Proof: First suppose a_n = β1x1^n + β2x2^n; we shall prove that {a_n} is a solution of recurrence relation (2). Since x1 and x2 are roots of equation (1), both satisfy x^2 = c1x + c2. Consider c1a_{n-1} + c2a_{n-2} = β1x1^{n-2}(c1x1 + c2) + β2x2^{n-2}(c1x2 + c2) = β1x1^n + β2x2^n = a_n. So the sequence is a solution of the recurrence relation. Now we prove the second part of the theorem. Let {a_n} be a solution of (2) with initial terms a0 and a1, and set β1 + β2 = a0 (3) and β1x1 + β2x2 = a1 (4). Multiplying (3) by x1 and subtracting from (4) gives β2 = (a1 - a0x1)/(x2 - x1), and similarly we can find β1. Since the roots are distinct, non-trivial values of β1 and β2 can be found, and the result is valid. Example: Let {a_n} be the sequence satisfying a_n = 6a_{n-1} - 11a_{n-2} + 6a_{n-3} for n ≥ 3 with a0 = 0, a1 = 1, a2 = 2. Find a10 for this sequence. Solution: The characteristic polynomial of the sequence is x^3 - 6x^2 + 11x - 6 = 0. Solving this equation, the roots are 1, 2 and 3; using the above theorem we have a_n = β1 + β2·2^n + β3·3^n (7). Using a0 = 0, a1 = 1, a2 = 2 in (7) we have β1 + β2 + β3 = 0 (8), β1 + 2β2 + 3β3 = 1 (9), β1 + 4β2 + 9β3 = 2 (10). Solving (8), (9) and (10) we have β1 = -3/2, β2 = 2, β3 = -1/2. This implies a_n = -3/2 + 2·2^n - (1/2)·3^n.
Putting n = 10 gives a10 = -27478. Recurrence relations are a very useful topic of mathematics, and many real-life problems can be solved by them, but there is a major difficulty: if we want to find the 100th term of a sequence, we first need to find all 99 previous terms. The above theorem is very useful here: if the coefficients of the recurrence relation of a given sequence satisfy its conditions, we can apply the theorem and find any term of the sequence directly, without finding all previous terms.
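The worked example can be checked numerically: iterating the recurrence must agree term by term with the direct closed-form expression from the theorem. A short sketch:

```python
def recurrence_terms(n_max):
    """Iterate a_n = 6a_{n-1} - 11a_{n-2} + 6a_{n-3} with a0=0, a1=1, a2=2
    (characteristic roots 1, 2 and 3)."""
    a = [0, 1, 2]
    for n in range(3, n_max + 1):
        a.append(6 * a[n - 1] - 11 * a[n - 2] + 6 * a[n - 3])
    return a

def closed_form(n):
    """Direct term from the distinct-roots theorem:
    a_n = -3/2 + 2*2**n - (1/2)*3**n, written in exact integer arithmetic."""
    return (-3 + 2 ** (n + 2) - 3 ** n) // 2

a = recurrence_terms(10)
```

The closed form reproduces every iterated term and gives a10 = -27478 directly, without touching a3 through a9.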
PubDate: Jan 2021
- Fuzzy Time Series Forecasting Model Based on Intuitionistic Fuzzy Sets via
Delegation of Hesitancy Degree to the Major Grade De-i-fuzzification
Method
Abstract: Publication date: Jan 2021
Source:Mathematics and Statistics Volume 9 Number 1 Nik Muhammad Farhan Hakim Nik Badrul Alam Nazirah Ramli and Norhuda Mohammed Fuzzy time series is a powerful tool for forecasting time series data under uncertainty. Fuzzy time series was first initiated with fuzzy sets and then generalized by intuitionistic fuzzy sets. Intuitionistic fuzzy sets consider the degree of hesitation, in which the degree of non-membership is incorporated. In this paper, a fuzzy time series forecasting model based on intuitionistic fuzzy sets, via delegation of hesitancy degree to the major grade de-i-fuzzification approach, was developed. The proposed model was implemented on data of student enrollments at the University of Alabama. The forecasted output was obtained using the fuzzy logical relationships of the output, and the performance of the forecasted output was compared with that of a fuzzy time series forecasting model based on fuzzy sets using the mean square error, root mean square error, mean absolute error and mean absolute percentage error. The results showed that the forecasting model based on fuzzy sets induced from intuitionistic fuzzy sets performs better than the fuzzy time series forecasting model based on fuzzy sets.
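The four error measures used to compare the models are standard and easy to sketch; the series values below are hypothetical, not the Alabama enrollment data:

```python
def forecast_errors(actual, forecast):
    """MSE, RMSE, MAE and MAPE (in %) for paired series, the four measures
    used to compare fuzzy time series forecasting models."""
    errs = [a - f for a, f in zip(actual, forecast)]
    n = len(errs)
    mse = sum(e * e for e in errs) / n
    rmse = mse ** 0.5
    mae = sum(abs(e) for e in errs) / n
    mape = 100 * sum(abs(e) / a for e, a in zip(errs, actual)) / n
    return {'MSE': mse, 'RMSE': rmse, 'MAE': mae, 'MAPE': mape}

m = forecast_errors([100, 200], [90, 210])
```

A model is judged better when all four values are smaller; MAPE is the scale-free one, so it is the easiest to compare across datasets of different magnitudes.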
PubDate: Jan 2021
- A Note on Lienard-Chipart Criteria and its Application to Epidemic Models
Abstract: Publication date: Jan 2021
Source:Mathematics and Statistics Volume 9 Number 1 Auni Aslah Mat Daud An important part of the study of epidemic models is the local stability analysis of the equilibrium points. The linear algebra method commonly employed is the well-known Routh-Hurwitz criteria. The criteria give necessary and sufficient conditions for all of the roots of the characteristic polynomial to be negative or have negative real parts. To date, there are no epidemic models in the literature which employ the Lienard-Chipart criteria. This note recommends an alternative linear algebra method, namely the Lienard-Chipart criteria, to significantly simplify the local stability analysis of epidemic models. Although the Routh-Hurwitz criteria are a correct method for local stability analysis, the Lienard-Chipart criteria have advantages over them: only about half of the Hurwitz determinant inequalities are required, with the remaining conditions concerning only the signs of alternate coefficients of the characteristic polynomial. The Lienard-Chipart criteria are especially useful for polynomials with symbolic coefficients, as the determinants usually become significantly more complicated than the original coefficients as the degree of the polynomial increases. The Lienard-Chipart and Routh-Hurwitz criteria have similar performance for systems of dimension five or less. Theoretically, for systems of dimension higher than five, verifying the Lienard-Chipart criteria should be much easier than verifying the Routh-Hurwitz criteria, and the advantage of the Lienard-Chipart criteria may become clear. Examples of local stability analysis using the Lienard-Chipart criteria for two recently proposed models are demonstrated to show the advantages of the simplified Lienard-Chipart criteria over the Routh-Hurwitz criteria.
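One of the equivalent Lienard-Chipart forms checks the signs of alternate coefficients together with only the even-order Hurwitz minors, roughly halving the determinant work of full Routh-Hurwitz. A numeric sketch for a monic characteristic polynomial (numpy assumed; symbolic coefficients, where the saving really matters, would use a CAS instead):

```python
import numpy as np

def hurwitz_minor(coeffs, k):
    """k-th leading principal minor of the Hurwitz matrix of
    p(s) = s^n + a1 s^(n-1) + ... + an, with coeffs = [1, a1, ..., an].
    Entry (i, j) of the Hurwitz matrix is a_{2j - i} (1-based indices)."""
    n = len(coeffs) - 1
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            idx = 2 * (j + 1) - (i + 1)
            if 0 <= idx <= n:
                H[i, j] = coeffs[idx]
    return np.linalg.det(H[:k, :k])

def lienard_chipart_stable(coeffs):
    """Lienard-Chipart form: a_n > 0, a_{n-2} > 0, ... together with the
    even-order Hurwitz minors Delta_2, Delta_4, ... > 0."""
    n = len(coeffs) - 1
    if any(coeffs[i] <= 0 for i in range(n, 0, -2)):
        return False
    return all(hurwitz_minor(coeffs, k) > 0 for k in range(2, n + 1, 2))

# (s+1)(s+2)(s+3) = s^3 + 6s^2 + 11s + 6: all roots in the left half-plane
stable = lienard_chipart_stable([1, 6, 11, 6])
```

For the degree-3 case only Δ2 is evaluated, whereas full Routh-Hurwitz would require Δ1, Δ2 and Δ3; the gap widens as the dimension of the epidemic model grows.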
PubDate: Jan 2021
- Application of Fuzzy Linear Regression with Symmetric Parameter for
Predicting Tumor Size of Colorectal Cancer
Abstract: Publication date: Jan 2021
Source:Mathematics and Statistics Volume 9 Number 1 Muhammad Ammar Shafi Mohd Saifullah Rusiman and Siti Nabilah Syuhada Abdullah The colon and rectum form the final portion of the digestive tube in the human body. Colorectal cancer (CRC) occurs due to bacteria produced from undigested food in the body. However, the factors and symptoms needed to predict the tumor size of colorectal cancer are still ambiguous. The problem with using linear regression arises with the use of uncertain and imprecise data. Since the concept of fuzzy set theory can deal with data that are not precise point values (uncertain data), this study applied the latest fuzzy linear regression to predict the tumor size of CRC. The parameters, errors and interpretation of both models are also included. Furthermore, secondary data of 180 colorectal cancer patients who received treatment in a general hospital, with twenty-five independent variables of different combinations of variable types, were considered to find the best model to predict the tumor size of CRC. Two models, fuzzy linear regression (FLR) and fuzzy linear regression with symmetric parameter (FLRWSP), were compared using two statistical error measurements to get the best model for predicting the tumor size of colorectal cancer. Following the stated methodology, FLRWSP was found to be the best model, with the least mean square error (MSE) and root mean square error (RMSE).
PubDate: Jan 2021
- Impact of Sleep on Usage of the Smart Phone at the Bedtime – A Case
Study
Abstract: Publication date: Jan 2021
Source:Mathematics and Statistics Volume 9 Number 1 Navya Pratyusha M Rajyalakshmi K Apparao B V and Charankumar G Pittsburgh Sleep Quality Index (PSQI) scoring (Buysse et al. 1989) is a powerful method to measure the sleep quality index based on the scores of various factors, namely duration of sleep, sleep disturbance, sleep latency, day dysfunction due to sleepiness, sleep efficiency, need of medication to sleep and overall sleep quality. We focused mainly on smartphone usage and its impact on the quality of sleep at bedtime. Many studies have shown that the usage of smartphones at bedtime affects sleep quality, health and productivity. In the present study, we collected data randomly from middle-aged adults and observed the relation between gender and the quality of sleep using the phi coefficient. It is clearly observed that as we move from males to females, we move negatively from good sleep quality to poor sleep quality, indicating that males have poorer sleep quality than females. We also performed an analysis of variance to test whether there is an association between smartphone usage at bedtime and the quality of sleep.
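The phi coefficient for a 2x2 gender-by-sleep-quality table is computed as below; the counts are hypothetical, chosen only so that the sign of the association matches the reported direction (males poorer):

```python
from math import sqrt

def phi_coefficient(table):
    """Phi coefficient of a 2x2 contingency table [[a, b], [c, d]]:
    phi = (ad - bc) / sqrt((a+b)(c+d)(a+c)(b+d))."""
    (a, b), (c, d) = table
    num = a * d - b * c
    den = sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# rows: male, female; columns: good sleep quality, poor sleep quality
# hypothetical counts: 30/50 males vs 15/50 females report poor quality
phi = phi_coefficient([[20, 30], [35, 15]])
```

With this row/column coding, a negative phi means that moving from the male row to the female row is associated with moving from poor toward good sleep quality, which is how the sign is read in the study.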
PubDate: Jan 2021
- Fourier Method in Initial Boundary Value Problems for Regions with
Curvilinear Boundaries
Abstract: Publication date: Jan 2021
Source:Mathematics and Statistics Volume 9 Number 1 Leontiev V. L. The algorithm of the generalized Fourier method associated with the use of orthogonal splines is presented on the example of an initial boundary value problem for a region with a curvilinear boundary. It is shown that the sequence of finite Fourier series formed by the algorithm converges at each moment to the exact solution of the problem, an infinite Fourier series. The structure of these finite Fourier series is similar to that of the partial sums of the infinite Fourier series. As the number of grid nodes increases in the region under consideration, the approximate eigenvalues and eigenfunctions of the boundary value problem converge to the exact eigenvalues and eigenfunctions, and the finite Fourier series approach the exact solution of the initial boundary value problem. The method provides arbitrarily accurate approximate analytical solutions of the problem, similar in structure to the exact solution, and therefore belongs to the group of analytical methods for constructing solutions in the form of orthogonal series. The obtained theoretical results are confirmed by the results of solving a test problem for which both the exact solution and the analytical solutions of the discrete problems for any number of grid nodes are known. The solution of the test problem confirms the findings of the theoretical convergence study: the proposed algorithm of the method of separation of variables associated with orthogonal splines yields approximate analytical solutions of the initial boundary value problem in the form of finite Fourier series with any desired accuracy. For any number of grid nodes, the method leads to a generalized finite Fourier series which corresponds with high accuracy to the partial sum of the Fourier series of the exact solution of the problem.
PubDate: Jan 2021
- The Performance Analysis of a New Modification of Conjugate Gradient
Parameter for Unconstrained Optimization Models
Abstract: Publication date: Jan 2021
Source:Mathematics and Statistics Volume 9 Number 1 I M Sulaiman M Mamat M Y Waziri U A Yakubu and M Malik The Conjugate Gradient (CG) method is a prominent iterative mathematical technique for the optimization of both linear and non-linear systems, owing to its simplicity, low memory requirement, low computational cost and global convergence properties. However, some of the classical CG methods have drawbacks, including weak global convergence and poor numerical performance in terms of both the number of iterations and CPU time. To overcome these drawbacks, researchers have proposed new variants of the CG parameters with efficient numerical results and nice convergence properties. The variants of the CG method include the scaled CG method, hybrid CG method, spectral CG method, three-term CG method, and many more. The hybrid conjugate gradient algorithm is among the efficient variants in the class of conjugate gradient methods mentioned above. An interesting feature of the hybrid modifications is that they inherit the nice convergence properties and efficient numerical performance of the existing CG methods. In this paper, we propose a new hybrid CG algorithm that inherits the features of the Rivaie et al. (RMIL*) and Dai (RMIL+) conjugate gradient methods. The proposed algorithm generates a descent direction under the strong Wolfe line search conditions. Preliminary results on some benchmark problems show that the proposed method is efficient and promising.
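To illustrate the class of methods, here is a minimal nonlinear CG sketch using an RMIL-type parameter clipped at zero. It is an assumption-laden illustration, not the paper's exact hybrid scheme: it substitutes a backtracking Armijo line search for the strong Wolfe conditions and adds a steepest-descent restart safeguard:

```python
import numpy as np

def cg_rmil_sketch(f, grad, x0, tol=1e-6, max_iter=1000):
    """Nonlinear CG with beta = g^T (g - g_prev) / ||d_prev||^2 clipped
    at 0 (an RMIL-type choice) and a backtracking Armijo line search.
    Illustrative only; not the paper's hybrid RMIL*/RMIL+ scheme."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:          # safeguard: restart with steepest descent
            d = -g
        t, c = 1.0, 1e-4           # backtracking Armijo line search
        while f(x + t * d) > f(x) + c * t * g.dot(d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new.dot(g_new - g) / d.dot(d))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# convex quadratic test problem with minimum at (1, 2)
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] - 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] - 2.0)])
x_star = cg_rmil_sketch(f, grad, [0.0, 0.0])
```

Clipping beta at zero is the simplest nonnegativity restriction in the spirit of the "plus" variants; the paper's hybrid parameter combines the RMIL* and RMIL+ formulas and is analyzed under the strong Wolfe conditions instead.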
PubDate: Jan 2021
- Some Properties on Fréchet-Weibull Distribution with Application to
Real Life Data
Abstract: Publication date: Jan 2021
Source:Mathematics and Statistics Volume 9 Number 1 Deepshikha Deka Bhanita Das Bhupen K Baruah and Bhupen Baruah The research, development and extensive use of generalized forms of distributions for analyzing and modeling applied-science research data have been growing tremendously. The Weibull and Fréchet distributions are widely discussed for reliability and survival analysis using experimental data from the physical, chemical, environmental and engineering sciences. Both distributions are applicable to extreme value theory as well as to small and large data sets. Recently, researchers have developed several probability distributions to model experimental data, as these parent models are not adequate to fit some experiments. Modified forms of the Weibull and Fréchet distributions are more flexible for modeling experimental data. This article introduces a generalized form of the Weibull distribution, known as the Fréchet-Weibull Distribution (FWD), obtained by using the T-X family, which yields a more flexible distribution for modeling experimental data. Here the pdf and cdf, together with the survival function S(t), the hazard rate function h(t), the asymptotic behaviour of the pdf and survival function, and the possible shapes of the pdf, cdf, S(t) and h(t) of the FWD have been studied, and the parameters are estimated using the maximum likelihood method (MLM). Some statistical properties of the FWD, such as the mode, moments, skewness, kurtosis, variation, quantile function, moment generating function, characteristic function and entropies, are investigated. Finally, the FWD has been applied to two sets of observations from mechanical engineering, showing the superiority of the FWD over other related distributions. This study will provide a useful tool for analyzing and modeling datasets in the mechanical engineering sciences and other related fields.
PubDate: Jan 2021
- Corporate Domination Number of the Cartesian Product of Cycle and Path
Abstract: Publication date: Jan 2021
Source:Mathematics and Statistics Volume 9 Number 1 S. Padmashini and S. Pethanachi Selvam Domination in graphs means dominating a graph G by a set of vertices D (a subset of the vertex set of G) such that each vertex in G is either in D or adjacent to a vertex in D. D is called a perfect dominating set if each vertex v not in D is adjacent to exactly one vertex of D. We consider a subset C which consists of both vertices and edges of G, where V and E denote the vertex set and the edge set of the graph G. C is said to be a corporate dominating set if every vertex v not in C is adjacent to exactly one vertex of C, where C is built from two sets P and Q: P consists of all vertices in the vertex set of an edge-induced subgraph G[E1] (E1 a subset of E) such that at most one vertex is common to any two open neighborhoods of different vertices in V(G[E1]), and Q consists of all vertices in a vertex set V1, a subset of V, such that no vertex is common to any two open neighborhoods of different vertices in V1. The corporate domination number of G is the minimum cardinality of the elements in C. In this paper, we determine the exact value of the corporate domination number for the Cartesian product of the cycle and the path.
PubDate: Jan 2021