Mathematics and Statistics
Open Access journal | ISSN (Print) 2332-2071 | ISSN (Online) 2332-2144 | Published by Horizon Research Publishing
- Applications of the Differential Transformation Method and Multi-Step
Differential Transformation Method to Solve a Rotavirus Epidemic Model
Abstract: Publication date: Jan 2021
Source: Mathematics and Statistics, Volume 9, Number 1. Pakwan Riyapan, Sherif Eneye Shuaib, Arthit Intarasit and Khanchit Chuarkham. Epidemic models are essential in understanding the transmission dynamics of diseases. These models are often formulated using differential equations. A variety of methods, including approximate, exact and purely numerical ones, are used to solve these differential equations; however, most are computationally intensive or require symbolic computation. This article presents the Differential Transformation Method (DTM) and the Multi-Step Differential Transformation Method (MSDTM) for finding approximate series solutions of an SVIR rotavirus epidemic model. The SVIR model is formulated as a system of nonlinear first-order ordinary differential equations, where S, V, I and R are the susceptible, vaccinated, infected and recovered compartments. We begin by discussing the theoretical background and the mathematical operations of the DTM and MSDTM. Next, the DTM and MSDTM are applied to compute solutions of the SVIR rotavirus epidemic model. Lastly, to investigate the efficiency and reliability of both methods, their solutions are compared with those of the fourth-order Runge-Kutta (RK4) method. The solutions from the DTM and MSDTM are in good agreement with the RK4 solutions, and the comparison shows that the MSDTM agrees with the RK4 solutions more closely and efficiently than the DTM. An advantage of the DTM and MSDTM over other methods is that they require no perturbation parameter and generate no secular terms, which makes both methods attractive for the study of epidemic models.
PubDate: Jan 2021
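The DTM recursion described in this abstract can be illustrated on a scalar toy problem. The sketch below applies the DTM to the logistic equation y' = y(1 - y) rather than the authors' SVIR system (the model equations are not reproduced in the abstract); the recursion for the transform coefficients Y(k) = y^(k)(0)/k! is standard, and the multi-step variant would simply restart the series on each subinterval.

```python
# DTM on y' = y(1 - y), y(0) = 0.5 -- a toy stand-in for the SVIR system.
# Transforming each term of the ODE gives the recursion
#   (k+1) Y(k+1) = Y(k) - sum_{l=0}^{k} Y(l) Y(k-l).
import math

def dtm_logistic(y0, n_terms):
    """Return the first n_terms differential-transform coefficients."""
    Y = [y0]
    for k in range(n_terms - 1):
        conv = sum(Y[l] * Y[k - l] for l in range(k + 1))  # transform of y^2
        Y.append((Y[k] - conv) / (k + 1))
    return Y

def dtm_eval(Y, t):
    """Evaluate the truncated Taylor (DTM) series at time t."""
    return sum(c * t**k for k, c in enumerate(Y))

Y = dtm_logistic(0.5, 15)
approx = dtm_eval(Y, 0.1)
# exact logistic solution for comparison, as RK4 would approximate it
exact = 0.5 * math.exp(0.1) / (1 + 0.5 * (math.exp(0.1) - 1))
```

Truncating after a handful of terms already matches the exact solution near t = 0; the MSDTM idea is to re-expand on successive subintervals so the series stays accurate over long horizons.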
- On One Mathematical Model of Cooling Living Biological Tissue
Abstract: Publication date: Jan 2021
Source: Mathematics and Statistics, Volume 9, Number 1. B. K. Buzdov. When cooling living biological tissue (an active, non-inert medium), cryomedicine uses cryo-instruments with various forms of cooling surface. Cryo-instruments are located on the surface of biological tissue or completely penetrate into it. With a decrease in the temperature of the cooling surface, an unsteady temperature field appears in the tissue, which in the general case depends on three spatial coordinates and time. To date, there are a large number of scientific publications that consider mathematical models of cryodestruction of biological tissue. However, in the overwhelming majority of them, the Pennes equation (or some modification of it) is taken as the basis of the mathematical model, in which the heat sources of the biological tissue depend linearly on the unknown temperature field. Such a dependence cannot describe the actually observed spatial localization of heat. In addition, Pennes' model does not take into account the fact that the freezing of the intercellular fluid occurs much earlier than the freezing of the intracellular fluid, and the heat corresponding to these two processes is released at different times. In the present work, a new mathematical model of cooling and freezing of living biological tissue is built for a flat rectangular applicator located on its surface. The model takes into account the above features, is a three-dimensional boundary-value problem of Stefan type with nonlinear heat sources of a special form, and has applications in cryosurgery. A method is proposed for the numerical study of the problem, based on the use of locally one-dimensional difference schemes without explicitly separating the boundary of the influence of cold and the boundaries of the phase transition.
The method was previously successfully tested by the author in solving other two-dimensional problems arising in cryomedicine.
PubDate: Jan 2021
- Fixed Point Theorems in Complex Valued Quasi b-Metric Spaces for
Satisfying Rational Type Contraction
Abstract: Publication date: Jan 2021
Source: Mathematics and Statistics, Volume 9, Number 1. J. Uma Maheswari, A. Anbarasan and M. Ravichandran. In complex valued metric spaces, common fixed point theorems satisfying rational contraction mappings have been proved. Within contraction mapping theory, several researchers have demonstrated fixed-point theorems, common fixed-point theorems and coupled fixed-point theorems using complex valued metric spaces. In b-metric spaces, the fixed point theorem was proved by the contraction mapping principle. The notion of complex valued b-metric spaces was then introduced as a generalization of complex valued metric spaces, and fixed point theorems using rational contractions were established in that setting. A metric space in which the symmetry condition d(x, y) = d(y, x) is dropped is called a quasi-metric space; every metric space is thus a special kind of quasi-metric space. Quasi-metric spaces have been discussed by many researchers. Banach introduced the theory of contraction mappings and proved the fixed point theorem in metric spaces. We now introduce the new notion of complex valued quasi b-metric spaces involving rational type contractions and prove unique fixed point theorems for both continuous and non-continuous functions, illustrated with an example.
PubDate: Jan 2021
- Generalized Relation between the Roots of Polynomial and Term of
Recurrence Relation Sequence
Abstract: Publication date: Jan 2021
Source: Mathematics and Statistics, Volume 9, Number 1. Vipin Verma and Mannu Arya. Many researchers have worked on recurrence relations, an important topic not only in mathematics but also in physics, economics and various applications in computer science. There are many useful results on recurrence relation sequences, but the main problem is that to find any term of a recurrence relation sequence we must first find all previous terms. Many important theorems on recurrence relations have been obtained. In this paper we give a special identity for generalized kth-order recurrence relations. These identities are very useful for finding any term, of any order, of a recurrence relation sequence.
The authors define a special formula by which any term of a recurrence relation sequence can be found directly, without computing all previous terms; since normally all previous terms are needed, this result is very important. The relation between the coefficients of a recurrence relation and the roots of its characteristic polynomial is well known for second-order relations; in this paper the same property is given for recurrence relations of all higher orders, valid whenever the roots are distinct. The paper is therefore a generalization of the relation between the coefficients of a recurrence relation and the roots of a polynomial. Theorem: Let c1 and c2 be arbitrary real numbers and suppose the equation x^2 = c1 x + c2 (1) has distinct roots x1 and x2. Then the sequence {a_n} is a solution of the recurrence relation a_n = c1 a_{n-1} + c2 a_{n-2} (2) if and only if a_n = β1 x1^n + β2 x2^n for n = 0, 1, 2, ..., where β1 and β2 are arbitrary constants. Proof: First suppose a_n = β1 x1^n + β2 x2^n; we shall prove that {a_n} is a solution of recurrence relation (2). Since x1 and x2 are roots of equation (1), both satisfy x_i^2 = c1 x_i + c2. Consider c1 a_{n-1} + c2 a_{n-2} = β1 x1^{n-2}(c1 x1 + c2) + β2 x2^{n-2}(c1 x2 + c2) = β1 x1^n + β2 x2^n = a_n. So the sequence is a solution of the recurrence relation. Now we prove the second part of the theorem. Let {a_n} be a solution of (2) with initial terms a0 and a1, and set β1 + β2 = a0 (3) and β1 x1 + β2 x2 = a1 (4). Multiplying (3) by x1 and subtracting from (4) gives β2 = (a1 - a0 x1)/(x2 - x1), and similarly β1 = (a0 x2 - a1)/(x2 - x1). Since the roots are distinct, non-trivial values of β1 and β2 are defined, and the result is valid. Example: Let {a_n} be the sequence with a_n = 6a_{n-1} - 11a_{n-2} + 6a_{n-3} for n ≥ 3 and a0 = 0, a1 = 1, a2 = 2; find a10. Solution: The characteristic polynomial of the sequence is x^3 - 6x^2 + 11x - 6 = 0, with roots 1, 2 and 3. Using the third-order analogue of the theorem above, a_n = β1 + β2 2^n + β3 3^n (7). Using a0 = 0, a1 = 1, a2 = 2 in (7) gives β1 + β2 + β3 = 0 (8), β1 + 2β2 + 3β3 = 1 (9), β1 + 4β2 + 9β3 = 2 (10). Solving (8), (9) and (10) gives β1 = -3/2, β2 = 2, β3 = -1/2, so a_n = -3/2 + 2·2^n - (1/2)·3^n.
Putting n = 10 gives a10 = -27478. Recurrence relations are a very useful topic in mathematics, and many real-life problems can be solved with them, but they carry a major practical difficulty: to find the 100th term of a sequence we would normally need to compute all 99 previous terms first. The theorem above removes this difficulty. If the coefficients of the recurrence relation of a given sequence satisfy the conditions of the theorem, we can apply it and find any term of the sequence directly, without computing all the previous terms.
PubDate: Jan 2021
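The paper's worked example can be checked directly. The snippet below is an independent reconstruction from the stated roots 1, 2, 3 and initial terms a0 = 0, a1 = 1, a2 = 2, comparing the closed form against step-by-step iteration:

```python
# Example from the abstract: a_n = 6a_{n-1} - 11a_{n-2} + 6a_{n-3},
# a0=0, a1=1, a2=2.  Characteristic polynomial
#   x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3),
# so the closed form is a_n = -3/2 + 2*2^n - (1/2)*3^n.

def a_iterative(n):
    """Compute a_n the slow way, term by term."""
    a = [0, 1, 2]
    for k in range(3, n + 1):
        a.append(6 * a[k - 1] - 11 * a[k - 2] + 6 * a[k - 3])
    return a[n]

def a_closed(n):
    """Direct formula from the distinct roots 1, 2, 3."""
    return -1.5 + 2 * 2**n - 0.5 * 3**n

val_closed = a_closed(10)     # should equal a10 = -27478
val_iter = a_iterative(10)
```

Both routes give a10 = -27478, matching the abstract, and the closed form needs no earlier terms.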
- Fuzzy Time Series Forecasting Model Based on Intuitionistic Fuzzy Sets via
Delegation of Hesitancy Degree to the Major Grade De-i-fuzzification
Method
Abstract: Publication date: Jan 2021
Source: Mathematics and Statistics, Volume 9, Number 1. Nik Muhammad Farhan Hakim Nik Badrul Alam, Nazirah Ramli and Norhuda Mohammed. Fuzzy time series is a powerful tool for forecasting time series data under uncertainty. Fuzzy time series was first formulated with fuzzy sets and later generalized by intuitionistic fuzzy sets. Intuitionistic fuzzy sets consider the degree of hesitation, in which the degree of non-membership is incorporated. In this paper, a fuzzy time series forecasting model based on intuitionistic fuzzy sets, via delegation of the hesitancy degree to the major grade de-i-fuzzification approach, was developed. The proposed model was implemented on data of student enrollments at the University of Alabama. The forecasted output was obtained using the fuzzy logical relationships of the output, and its performance was compared with the fuzzy time series forecasting model based on fuzzy sets using the mean square error, root mean square error, mean absolute error and mean absolute percentage error. The results showed that the forecasting model based on fuzzy sets induced from intuitionistic fuzzy sets performs better than the fuzzy time series forecasting model based on fuzzy sets.
PubDate: Jan 2021
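The four error measures named in the abstract are standard; a minimal reference implementation (not taken from the paper) is:

```python
# MSE, RMSE, MAE and MAPE -- the measures used to compare the
# forecasting models in the abstract above.
import math

def mse(actual, forecast):
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    return math.sqrt(mse(actual, forecast))

def mae(actual, forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error, in percent; actual values must be nonzero."""
    return 100 / len(actual) * sum(abs((a - f) / a)
                                   for a, f in zip(actual, forecast))
```

The model with the smaller values across these measures is judged the better forecaster.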
- A Note on Lienard-Chipart Criteria and its Application to Epidemic Models
Abstract: Publication date: Jan 2021
Source: Mathematics and Statistics, Volume 9, Number 1. Auni Aslah Mat Daud. An important part of the study of epidemic models is the local stability analysis of the equilibrium points. The linear algebra method commonly employed is the well-known Routh-Hurwitz criteria, which give necessary and sufficient conditions for all of the roots of the characteristic polynomial to be negative or have negative real parts. To date, there are no epidemic models in the literature which employ the Lienard-Chipart criteria. This note recommends an alternative linear algebra method, namely the Lienard-Chipart criteria, to significantly simplify the local stability analysis of epidemic models. Although the Routh-Hurwitz criteria are a correct method for local stability analysis, the Lienard-Chipart criteria have advantages over them: only about half of the Hurwitz determinant inequalities are required, with the remaining conditions of each set concerning only the signs of alternate coefficients of the characteristic polynomial. The Lienard-Chipart criteria are especially useful for polynomials with symbolic coefficients, as the determinants usually become significantly more complicated than the original coefficients as the degree of the polynomial increases. The Lienard-Chipart and Routh-Hurwitz criteria have similar performance for systems of dimension five or less. Theoretically, for systems of dimension higher than five, verifying the Lienard-Chipart criteria should be much easier than verifying the Routh-Hurwitz criteria, and the advantage of the Lienard-Chipart criteria may become clear. Examples of local stability analysis using the Lienard-Chipart criteria for two recently proposed models are presented to show the advantages of the simplified Lienard-Chipart criteria over the Routh-Hurwitz criteria.
PubDate: Jan 2021
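Both criteria are mechanical enough to sketch. The code below builds the Hurwitz matrix of a polynomial p(s) = a0 s^n + ... + an (a0 > 0) and checks stability two ways; the Liénard-Chipart set used here (all coefficients positive plus the odd-order leading minors positive) is one of its equivalent formulations, chosen for illustration:

```python
# Routh-Hurwitz: all leading principal minors D1..Dn of the Hurwitz
# matrix positive.  Lienard-Chipart (one equivalent set): all
# coefficients positive and only D1, D3, ... positive -- about half
# the determinant work, as the note above explains.

def hurwitz_matrix(a):
    n = len(a) - 1
    return [[a[2 * (j + 1) - (i + 1)] if 0 <= 2 * (j + 1) - (i + 1) <= n else 0
             for j in range(n)] for i in range(n)]

def det(m):
    """Laplace expansion; fine for the small n of epidemic models."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def leading_minor(H, k):
    return det([row[:k] for row in H[:k]])

def routh_hurwitz_stable(a):
    H = hurwitz_matrix(a)
    return all(leading_minor(H, k) > 0 for k in range(1, len(H) + 1))

def lienard_chipart_stable(a):
    H = hurwitz_matrix(a)
    return (all(c > 0 for c in a)
            and all(leading_minor(H, k) > 0 for k in range(1, len(H) + 1, 2)))
```

For p(s) = s^3 + 6s^2 + 11s + 6, whose roots are -1, -2, -3, both tests pass; for s^3 + s^2 + s + 10 both correctly fail even though every coefficient is positive.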
- Application of Fuzzy Linear Regression with Symmetric Parameter for
Predicting Tumor Size of Colorectal Cancer
Abstract: Publication date: Jan 2021
Source: Mathematics and Statistics, Volume 9, Number 1. Muhammad Ammar Shafi, Mohd Saifullah Rusiman and Siti Nabilah Syuhada Abdullah. The colon and rectum form the final portion of the digestive tube in the human body. Colorectal cancer (CRC) occurs due to bacteria produced from undigested food in the body. However, the factors and symptoms needed to predict the tumor size of colorectal cancer are still ambiguous. The problem with using linear regression arises from the use of uncertain and imprecise data. Since the concept of fuzzy set theory can deal with data that are not precise point values (uncertain data), this study applied the latest fuzzy linear regression to predict the tumor size of CRC. The parameters, errors and interpretation of both models are also included. Furthermore, secondary data of 180 colorectal cancer patients who received treatment in a general hospital, with twenty-five independent variables of different variable types, were considered to find the best model to predict the tumor size of CRC. Two models, fuzzy linear regression (FLR) and fuzzy linear regression with symmetric parameter (FLRWSP), were compared using two statistical error measures to obtain the best model for predicting the tumor size of colorectal cancer. FLRWSP was found to be the best model, with the least mean square error (MSE) and root mean square error (RMSE), following the stated methodology.
PubDate: Jan 2021
- Impact of Sleep on Usage of the Smart Phone at the Bedtime – A Case
Study
Abstract: Publication date: Jan 2021
Source: Mathematics and Statistics, Volume 9, Number 1. Navya Pratyusha M, Rajyalakshmi K, Apparao B V and Charankumar G. Pittsburgh Sleep Quality Index (PSQI) scoring (Buysse et al. 1989) is a powerful method to measure the sleep quality index based on the scores of various factors, namely duration of sleep, sleep disturbance, sleep latency, day dysfunction due to sleepiness, sleep efficiency, need of medication to sleep and overall sleep quality. We focused mainly on smart phone usage at bedtime and its impact on the quality of sleep. Many studies have shown that the usage of smart phones at bedtime affects sleep quality, health and productivity. In the present study, we collected data randomly from middle-aged adults and observed the relation between gender and quality of sleep using the phi coefficient. A negative association was observed as we move from males to females, from good sleep quality to poor sleep quality, indicating that males have poorer sleep quality than females. We also performed an analysis of variance to test whether there is an association between smart phone usage at bedtime and the quality of sleep.
PubDate: Jan 2021
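The phi coefficient used in this study is computed from a 2x2 contingency table; a minimal sketch (with a hypothetical gender-by-sleep-quality table, not the authors' data):

```python
# Phi coefficient for a 2x2 table
#   [[a, b],      e.g.  rows: male / female
#    [c, d]]            cols: good sleep / poor sleep
# phi = (ad - bc) / sqrt((a+b)(c+d)(a+c)(b+d)), ranging from -1 to 1.
import math

def phi_coefficient(a, b, c, d):
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# hypothetical illustrative counts, NOT the paper's survey data
phi = phi_coefficient(12, 18, 20, 10)
```

A negative phi (with this row/column coding) corresponds to the paper's reading that sleep quality shifts as one moves between the gender groups.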
- Fourier Method in Initial Boundary Value Problems for Regions with
Curvilinear Boundaries
Abstract: Publication date: Jan 2021
Source: Mathematics and Statistics, Volume 9, Number 1. Leontiev V. L. The algorithm of the generalized Fourier method associated with the use of orthogonal splines is presented through the example of an initial boundary value problem for a region with a curvilinear boundary. It is shown that the sequence of finite Fourier series formed by the algorithm converges, at each moment in time, to the exact solution of the problem, an infinite Fourier series. The structure of these finite Fourier series is similar to that of the partial sums of the infinite Fourier series. As the number of grid nodes in the region with a curvilinear boundary increases, the approximate eigenvalues and eigenfunctions of the boundary value problem converge to the exact eigenvalues and eigenfunctions, and the finite Fourier series approach the exact solution of the initial boundary value problem. The method provides arbitrarily accurate approximate analytical solutions, similar in structure to the exact solution, and therefore belongs to the group of analytical methods for constructing solutions in the form of orthogonal series. The theoretical results are confirmed by a test problem for which both the exact solution and the analytical solutions of the discrete problems for any number of grid nodes are known. The solution of the test problem confirms the findings of the theoretical convergence study: the proposed algorithm of the method of separation of variables associated with orthogonal splines yields approximate analytical solutions of the initial boundary value problem in the form of finite Fourier series with any desired accuracy. For any number of grid nodes, the method leads to a generalized finite Fourier series which corresponds with high accuracy to the partial sum of the Fourier series of the exact solution of the problem.
PubDate: Jan 2021
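A plain-Fourier analogue of the construction can be sketched for the heat equation on an interval; this is classical separation of variables, not the paper's orthogonal-spline method, and the quadrature rule and mode count are illustrative choices:

```python
# Finite Fourier series for u_t = u_xx on (0, pi), u(0,t) = u(pi,t) = 0:
#   u_N(x, t) = sum_{n=1}^{N} b_n exp(-n^2 t) sin(n x),
# which approaches the exact (infinite-series) solution as N grows.
import math

def fourier_sine_coeffs(f, n_modes, n_quad=2000):
    """b_n = (2/pi) * integral_0^pi f(x) sin(nx) dx, via the midpoint rule."""
    h = math.pi / n_quad
    xs = [(i + 0.5) * h for i in range(n_quad)]
    return [2 / math.pi * h * sum(f(x) * math.sin(n * x) for x in xs)
            for n in range(1, n_modes + 1)]

def heat_solution(b, x, t):
    """Evaluate the finite Fourier series at (x, t)."""
    return sum(bn * math.exp(-(n + 1) ** 2 * t) * math.sin((n + 1) * x)
               for n, bn in enumerate(b))

f = lambda x: math.sin(x) + 0.5 * math.sin(3 * x)   # initial condition
b = fourier_sine_coeffs(f, 5)
u = heat_solution(b, 1.0, 0.2)
exact = math.exp(-0.2) * math.sin(1.0) + 0.5 * math.exp(-9 * 0.2) * math.sin(3.0)
```

For this initial condition the exact solution has only two modes, so the five-mode finite series reproduces it to quadrature accuracy; the paper's contribution is recovering the same structure on regions with curvilinear boundaries via orthogonal splines.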
- The Performance Analysis of a New Modification of Conjugate Gradient
Parameter for Unconstrained Optimization Models
Abstract: Publication date: Jan 2021
Source: Mathematics and Statistics, Volume 9, Number 1. I M Sulaiman, M Mamat, M Y Waziri, U A Yakubu and M Malik. The Conjugate Gradient (CG) method is a prominent iterative technique for the optimization of both linear and non-linear systems, owing to its simplicity, low memory requirement, low computational cost, and global convergence properties. However, some of the classical CG methods have drawbacks, including weak global convergence and poor numerical performance in terms of both the number of iterations and the CPU time. To overcome these drawbacks, researchers have proposed new variants of the CG parameter with efficient numerical results and nice convergence properties, including the scaled CG method, hybrid CG method, spectral CG method, three-term CG method, and many more. The hybrid CG algorithm is among the efficient variants in this class; interesting features of the hybrid modifications include inheriting the nice convergence properties and efficient numerical performance of existing CG methods. In this paper, we propose a new hybrid CG algorithm that inherits the features of the Rivaie et al. (RMIL*) and Dai (RMIL+) conjugate gradient methods. The proposed algorithm generates a descent direction under the strong Wolfe line search conditions. Preliminary results on some benchmark problems show that the proposed method is efficient and promising.
PubDate: Jan 2021
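A generic nonlinear CG iteration with an RMIL-type parameter can be sketched as follows. The β formula and its truncation at zero are modeled on the RMIL family the abstract cites, but the exact hybrid rule of the paper is not reproduced here, and a backtracking Armijo search stands in for the strong Wolfe search:

```python
# Nonlinear CG with beta_k = g_k^T (g_k - g_{k-1}) / ||d_{k-1}||^2,
# truncated at 0 (a "+"-style safeguard).  Illustrative sketch only.

def cg_minimize(f, grad, x0, tol=1e-8, max_iter=2000):
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(max_iter):
        if sum(gi * gi for gi in g) ** 0.5 < tol:
            break
        slope = sum(gi * di for gi, di in zip(g, d))
        if slope >= 0:                       # restart if not a descent direction
            d = [-gi for gi in g]
            slope = -sum(gi * gi for gi in g)
        t, fx = 1.0, f(x)                    # Armijo backtracking line search
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * slope:
            t *= 0.5
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        beta = sum(gn * (gn - go) for gn, go in zip(g_new, g)) \
               / sum(di * di for di in d)
        beta = max(beta, 0.0)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# test problem: f(x) = 0.5 x^T A x - b^T x with A = diag(1, 10), b = (1, 1);
# the minimizer is x* = (1, 0.1)
A = [1.0, 10.0]
fq = lambda x: 0.5 * sum(a * xi * xi for a, xi in zip(A, x)) - sum(x)
gq = lambda x: [a * xi - 1.0 for a, xi in zip(A, x)]
xmin = cg_minimize(fq, gq, [5.0, 5.0])
```

The safeguarded β keeps the direction well behaved, which is the practical motivation behind hybrid rules such as RMIL+.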
- Some Properties on Fréchet-Weibull Distribution with Application to
Real Life Data
Abstract: Publication date: Jan 2021
Source: Mathematics and Statistics, Volume 9, Number 1. Deepshikha Deka, Bhanita Das, Bhupen K Baruah and Bhupen Baruah. The development and extensive use of generalized forms of distributions for analyzing and modeling applied-sciences research data have been growing tremendously. The Weibull and Fréchet distributions are widely discussed for reliability and survival analysis using experimental data from the physical, chemical, environmental and engineering sciences. Both distributions are applicable to extreme value theory as well as to small and large data sets. Recently, researchers have developed several probability distributions to model experimental data, since these parent models are not adequate for some experiments; modified forms of the Weibull and Fréchet distributions are more flexible for modeling experimental data. This article introduces a generalized form of the Weibull distribution, known as the Fréchet-Weibull Distribution (FWD), obtained by using the T-X family, which yields a more flexible distribution for modeling experimental data. The pdf and cdf, together with the survival function S(t), the hazard rate function h(t), the asymptotic behaviour of the pdf and survival function, and the possible shapes of the pdf, cdf, S(t) and h(t) of the FWD are studied, and the parameters are estimated using the maximum likelihood method (MLM). Some statistical properties of the FWD, such as the mode, moments, skewness, kurtosis, variation, quantile function, moment generating function, characteristic function and entropies, are investigated. Finally, the FWD is applied to two sets of observations from mechanical engineering, showing the superiority of the FWD over other related distributions. This study provides a useful tool for analyzing and modeling datasets in the mechanical engineering sciences and related fields.
PubDate: Jan 2021
- Corporate Domination Number of the Cartesian Product of Cycle and Path
Abstract: Publication date: Jan 2021
Source: Mathematics and Statistics, Volume 9, Number 1. S. Padmashini and S. Pethanachi Selvam. Domination in graphs is to dominate the graph G by a set of vertices D (a subset of the vertex set of G) such that each vertex in G is either in D or adjacent to a vertex in D. D is called a perfect dominating set if each vertex v not in D is adjacent to exactly one vertex of D. We consider a subset C which consists of both vertices and edges; let V and E denote the vertex set and the edge set of the graph G. Then C is said to be a corporate dominating set if every vertex v not in C is adjacent to exactly one vertex of C, where the set P consists of all vertices in the vertex set of an edge-induced subgraph G[E1] (E1 a subset of E) such that at most one vertex is common to the open neighborhoods of any two different vertices in V(G[E1]), and the set Q consists of all vertices in a vertex subset V1 of V such that no vertex is common to the open neighborhoods of any two different vertices in V1. The corporate domination number of G is the minimum cardinality of a corporate dominating set C. In this paper, we determine the exact value of the corporate domination number for the Cartesian product of a cycle and a path.
PubDate: Jan 2021
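As a baseline, the ordinary domination number of the Cartesian product of a cycle and a path can be found by brute force on small cases; the corporate dominating set itself mixes vertices and edges and is not attempted here:

```python
# Brute-force (ordinary) domination number of C_m x P_n -- the classical
# quantity the corporate variant refines.
from itertools import combinations, product

def cartesian_cycle_path(m, n):
    """Adjacency of the Cartesian product C_m x P_n as vertex -> neighbours."""
    adj = {(i, j): set() for i, j in product(range(m), range(n))}
    for i, j in adj:
        adj[(i, j)].add(((i + 1) % m, j))          # cycle edges
        adj[(i, j)].add(((i - 1) % m, j))
        if j + 1 < n:
            adj[(i, j)].add((i, j + 1))            # path edges
        if j - 1 >= 0:
            adj[(i, j)].add((i, j - 1))
    return adj

def domination_number(adj):
    """Smallest k such that some k-subset dominates every vertex."""
    verts = list(adj)
    for k in range(1, len(verts) + 1):
        for D in combinations(verts, k):
            covered = set(D).union(*(adj[v] for v in D))
            if len(covered) == len(verts):
                return k

gamma = domination_number(cartesian_cycle_path(3, 2))   # triangular prism
```

For the triangular prism C_3 x P_2 two vertices suffice (and one cannot dominate all six), so the routine returns 2; exhaustive search like this is only feasible for small m and n, which is why closed-form results such as the paper's are valuable.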
- Finite Difference Method for Pricing of Indonesian Option under a Mixed
Fractional Brownian Motion
Abstract: Publication date: Sep 2020
Source: Mathematics and Statistics, Volume 8, Number 5. Chatarina Enny Murwaningtyas, Sri Haryatmi Kartiko, Gunardi and Herry Pribawanto Suryawan. This paper deals with Indonesian option pricing using mixed fractional Brownian motion to model the underlying stock price. There has been research on Indonesian option pricing using Brownian motion, and another study states that logarithmic returns of the Jakarta composite index exhibit long-range dependence. Motivated by this long-range dependence in the logarithmic returns of Indonesian stock prices, we use mixed fractional Brownian motion to model the logarithmic returns of stock prices. The Indonesian option differs from other options in its exercise time: the option can be exercised at maturity or at any time before maturity with profit less than ten percent of the strike price, and it is exercised automatically if the stock price hits a barrier price. The mathematical model is therefore unique, and we apply the method of partial differential equations to study it. An implicit finite difference scheme has been developed to solve the partial differential equation used to obtain Indonesian option prices. We study the stability and convergence of the implicit finite difference scheme and present several examples of numerical solutions. Based on the theoretical analysis and the numerical solutions, the scheme proposed in this paper is efficient and reliable.
PubDate: Sep 2020
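The implicit finite difference idea can be sketched on the classical Black-Scholes PDE for a European put; this stand-in omits the mixed-fBm dynamics and the early-exercise and barrier features that make the Indonesian option model unique:

```python
# Fully implicit scheme for V_t + 0.5 s^2 S^2 V_SS + r S V_S - r V = 0
# (European put), stepping backward from the payoff at maturity and
# solving a tridiagonal system per step with the Thomas algorithm.
import math

def implicit_fd_put(K, r, sigma, T, S_max, M, N):
    dS, dt = S_max / M, T / N
    V = [max(K - i * dS, 0.0) for i in range(M + 1)]          # payoff at T
    a = [0.5 * dt * (r * i - sigma**2 * i**2) for i in range(M + 1)]
    b = [1 + dt * (sigma**2 * i**2 + r) for i in range(M + 1)]
    c = [-0.5 * dt * (r * i + sigma**2 * i**2) for i in range(M + 1)]
    for n in range(N):
        t = T - (n + 1) * dt
        rhs = V[1:M]
        rhs[0] -= a[1] * K * math.exp(-r * (T - t))           # V(0, t) boundary
        cp, dp = [0.0] * (M - 1), [0.0] * (M - 1)             # Thomas sweep
        cp[0], dp[0] = c[1] / b[1], rhs[0] / b[1]
        for i in range(1, M - 1):
            m_ = b[i + 1] - a[i + 1] * cp[i - 1]
            cp[i] = c[i + 1] / m_
            dp[i] = (rhs[i] - a[i + 1] * dp[i - 1]) / m_
        x = [0.0] * (M - 1)
        x[-1] = dp[-1]
        for i in range(M - 3, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        V = [K * math.exp(-r * (T - t))] + x + [0.0]          # V(S_max, t) = 0
    return V

V = implicit_fd_put(K=100, r=0.05, sigma=0.2, T=1.0, S_max=300, M=300, N=300)
price_at_100 = V[100]        # grid node at S = 100 (dS = 1)
```

The implicit scheme is unconditionally stable, which is why it is the natural choice for option-pricing PDEs; the computed at-the-money price agrees with the Black-Scholes closed form (about 5.57 for these parameters) to grid accuracy.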
- Probabilistic Inventory Model under Flexible Trade Credit Plan Depending
upon Ordering Amount
Abstract: Publication date: Sep 2020
Source: Mathematics and Statistics, Volume 8, Number 5. Piyali Mallick and Lakshmi Narayan De. In this work, we propose a stochastic inventory model for situations in which delay in payment is acceptable. Most inventory models on this topic suppose that the supplier offers the retailer a fixed delay period, within which the retailer can sell the goods, accumulate revenue and earn interest; they also assume that the trade credit period is independent of the order quantity. A few investigators have developed EOQ models under permissible delay in payments where the trade credit is connected to the order quantity: when the order quantity is less than the quantity at which delay in payment is permitted, payment for the items must be made immediately; otherwise, the fixed credit period applies. However, all these models were completely deterministic in nature. In reality, the trade credit period cannot be fixed; if it were, the retailer would have no interest in buying more than the quantity at which delay in payment is permitted. To reflect this situation, we assume that the trade credit period is not static but fluctuates with the order quantity. The demand during any scheduling period follows a probability distribution. We calculate the total variable cost per unit of time, and the optimum ordering policy can be found with the aid of three theorems (proofs are provided). An algorithm to determine the best ordering rule with the assistance of these propositions is established, and numerical examples are provided for clarification. A sensitivity investigation of all the parameters of the model is presented and discussed. Some previously published results are special cases of the results obtained in this paper.
PubDate: Sep 2020
- Determining Day of Given Date Mathematically
Abstract: Publication date: Sep 2020
Source: Mathematics and Statistics, Volume 8, Number 5. R. Sivaraman. Computing the day of the week from a given date of any century has been a great quest among astronomers and mathematicians for a long time. In recent centuries, thanks to the efforts of some great mathematicians, we now know methods of accomplishing this task. In doing so, people have developed various methods, some of which are very concise and compact but come with little accessible explanation. The chief purpose of this paper is to address this issue. Also, almost all known calculations involve either the use of tables or pre-determined codes assigned to months, years or centuries. In this paper, I establish a mathematical proof for determining the day of any given date, applicable to any number of years, even back to BCE times. I provide a detailed mathematical derivation of the month codes, which are the key factors in determining the day of any given date. Though the procedures for determining the day of a given date are quite well known, the way in which they were arrived at is not; this paper goes into great detail on that aspect. To be precise, I explain the formula obtained by the German mathematician Zeller in detail and try to simplify it further, reducing its complexity while remaining as effective as the original formula. Explanations of leap years and other astronomical facts are presented to aid the derivation of the compact form of Zeller's formula. Special cases and illustrations are provided wherever necessary to clarify the computations for a better understanding of the concepts.
PubDate: Sep 2020
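The formula the paper builds on can be stated compactly. Below is the common form of Zeller's congruence, in which January and February are counted as months 13 and 14 of the previous year:

```python
# Zeller's congruence (Gregorian calendar):
#   h = (q + floor(13(m+1)/5) + K + floor(K/4) + floor(J/4) + 5J) mod 7
# where q = day, m = month (Mar=3 .. Feb=14), K = year mod 100,
# J = century, and h = 0 means Saturday.

def zeller_day(year, month, day):
    if month < 3:                 # Jan/Feb belong to the previous year
        month += 12
        year -= 1
    K, J = year % 100, year // 100
    h = (day + (13 * (month + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]
```

For example, zeller_day(2000, 1, 1) returns "Saturday", which is correct; the paper's contribution is deriving the month-code term floor(13(m+1)/5) rather than taking it from a table.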
- Stochastic Latent Residual Approach for Consistency Model Assessment
Abstract: Publication date: Sep 2020
Source: Mathematics and Statistics, Volume 8, Number 5. Hani Syahida Zulkafli, George Streftaris and Gavin J. Gibson. Hypoglycaemia is a condition in which blood sugar levels in the body are too low; it is usually a side effect of insulin treatment in diabetic patients. Symptoms of hypoglycaemia vary not only between individuals but also within individuals, making it difficult for patients to recognize their hypoglycaemia episodes. Given this, and because the symptoms are not exclusive to hypoglycaemia, it is very important for patients to be able to identify when they are having a hypoglycaemia episode. Consistency models are statistical models that quantify the consistency of individual symptoms reported during hypoglycaemia. Because there are variations of the consistency model, it is important to identify which model best fits the data. The aim of this paper is to assess and verify the models. We developed an assessment method based on stochastic latent residuals and performed posterior predictive checking as the model verification. It was found that a grouped-symptom consistency model with a multiplicative form of symptom propensity and episode intensity threshold fits the data better and has more reliable predictive ability than the other models. This model can be used to assist patients and medical practitioners in quantifying patients' symptom-reporting capability, hence promoting awareness of hypoglycaemia episodes so that corrective actions can be taken quickly.
PubDate: Sep 2020
- Construction a Diagnostic Test in the Form of Two-tier Multiple Choice on
Calculus Material
Abstract: Publication date: Sep 2020
Source: Mathematics and Statistics, Volume 8, Number 5. Edy Nurfalah, Irvana Arofah, Ika Yuniwati, Andi Haslinah and Dwi Retno Lestari. This work is a research development of two-tier multiple-choice diagnostic test instruments on calculus material. The purposes of this study are: 1) obtaining the construction of a two-tier multiple-choice diagnostic test based on content and construct validity; 2) obtaining the quality of the two-tier multiple-choice diagnostic tests based on the reliability value. The method used focuses on the construction of diagnostic tests, with the development research adapted from the Retnawati development model. The research found: 1) based on content and construct validity, the constructed two-tier multiple-choice diagnostic test is proven valid; 2) based on the reliability value, the compiled two-tier diagnostic test instruments are of good quality. The content validity is evidenced by the average validity index (V): the two-tier multiple-choice diagnostic test instrument obtained an average validity index of 0.9333 and the interview guideline instrument a validity index of 0.7556, both of which approach the value 1. For construct validity, three dominant factors were obtained based on the scree plot, corresponding to the number of factors in the calculus material examined in this study.
PubDate: Sep 2020
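The abstract reports average validity indices V of 0.9333 and 0.7556; assuming these are Aiken's V (a common choice for expert-rated content validity, though the paper's exact formula is not given in the abstract), the index for one item is computed as:

```python
# Aiken's content-validity index for a single item:
#   V = sum(r_i - lo) / (n * (c - 1))
# where r_i are the n raters' scores on a scale from lo to hi and
# c = hi - lo + 1 is the number of scale categories.  V near 1 means
# the raters agree the item is highly relevant.

def aiken_v(ratings, lo, hi):
    n, c = len(ratings), hi - lo + 1
    return sum(r - lo for r in ratings) / (n * (c - 1))

# hypothetical example: three raters scoring one item on a 1..5 scale
v = aiken_v([5, 5, 4], 1, 5)     # 11/12, roughly 0.917
```

Averaging the per-item V values over all items gives an aggregate index comparable to the 0.9333 and 0.7556 reported above.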
- Fuzzy Sumudu Decomposition Method for Fuzzy Delay Differential Equations
with Strongly Generalized Differentiability
Abstract: Publication date: Sep 2020
Source: Mathematics and Statistics, Volume 8, Number 5. N. A. Abdul Rahman. Fuzzy delay differential equations have always been a tremendous way to model real-life problems, and they have been developed throughout the last decade. Many types of fuzzy derivatives have been considered, including the recently introduced concept of strongly generalized differentiability. However, under this interpretation very few methods have been introduced, obstructing the further development of fuzzy delay differential equations. This paper aims to provide solutions for fuzzy nonlinear delay differential equations, with the derivatives interpreted using the concept of strongly generalized differentiability. Under this interpretation, the calculations lead to two cases, i.e. two solutions, one of which is decreasing in diameter. To this end, a method resulting from the elegant combination of the fuzzy Sumudu transform and the Adomian decomposition method is used, termed the fuzzy Sumudu decomposition method. A detailed procedure for solving fuzzy nonlinear delay differential equations with the mentioned type of derivatives is constructed. A numerical example is then provided to demonstrate the applicability of the method. It is shown that the solution is not unique, in accordance with the concept of strongly generalized differentiability; the two solutions can later be chosen by researchers with regard to the characteristics of the problem. Finally, conclusions are drawn.
PubDate: Sep 2020
- Hankel Determinant H_{2}(3) for Certain Subclasses of Univalent
Functions
Abstract: Publication date: Sep 2020
Source:Mathematics and Statistics Volume 8 Number 5 Andy Liew Pik Hern Aini Janteng and Rashidah Omar Let S be the class of functions which are analytic, normalized and univalent in the unit disk. The main subclasses of S are the starlike functions, convex functions, close-to-convex functions, quasiconvex functions, starlike functions with respect to (w.r.t.) symmetric points and convex functions w.r.t. symmetric points, the last two denoted by S*_s and K_s respectively. In the recent past, many mathematicians have studied the Hankel determinant for numerous classes of functions contained in S. For f(z) = z + a_2 z^2 + a_3 z^3 + ... in S, the qth Hankel determinant H_q(n) is the determinant of the q×q matrix whose (i,j) entry is a_{n+i+j-2}, with a_1 = 1; in particular H_2(n) = a_n a_{n+2} - a_{n+1}^2. The functional H_2(1) = a_3 - a_2^2 is the well-known Fekete-Szegő functional, which has been discussed since the 1930s, and mathematicians still take great interest in it, especially in altered versions of it. Indeed, many papers explore the determinants H_2(2) and H_3(1). From the explicit form of the functional H_3(1), it involves H_2(k) for k from 1 to 3. Exceptionally, one of these determinants, H_2(3) = a_3 a_5 - a_4^2, has not been discussed much yet. In this article, we deal with this Hankel determinant. Since it consists of coefficients of functions f belonging to the classes S*_s and K_s, we may find the bounds of |H_2(3)| for these classes. Likewise, sharp results for S*_s and K_s with a_2 = 0 are obtained.
PubDate: Sep 2020
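For a normalized function f(z) = z + a_2 z^2 + a_3 z^3 + ... (so a_1 = 1), the second Hankel determinant discussed above reduces to H_2(n) = a_n a_{n+2} - a_{n+1}^2, with H_2(1) the Fekete-Szegő functional and H_2(3) = a_3 a_5 - a_4^2. A minimal numerical sketch (the Koebe function example is illustrative and not from the paper):

```python
def hankel_h2(n, a):
    """H_2(n) = a_n * a_{n+2} - a_{n+1}^2 for a normalized function (a_1 = 1)."""
    return a[n] * a[n + 2] - a[n + 1] ** 2

# The Koebe function k(z) = z/(1-z)^2 has coefficients a_n = n.
koebe = {n: n for n in range(1, 8)}
print(hankel_h2(1, koebe))  # Fekete-Szego functional a_3 - a_2^2 = 3 - 4 = -1
print(hankel_h2(3, koebe))  # H_2(3) = a_3*a_5 - a_4^2 = 15 - 16 = -1
```

The Koebe function attains |a_3 - a_2^2| = 1, the sharp Fekete-Szegő bound for S, which makes it a convenient sanity check.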
- Integration of Cluster Centers and Gaussian Distributions in Fuzzy C-Means
for the Construction of Trapezoidal Membership Function
Abstract: Publication date: Sep 2020
Source:Mathematics and Statistics Volume 8 Number 5 Siti Hajar Khairuddin Mohd Hilmi Hasan and Manzoor Ahmed Hashmani Fuzzy C-Means (FCM) is one of the most widely used techniques for fuzzy clustering and has proven to be robust and efficient in various applications. Image segmentation, stock market analysis and web analytics are examples of popular applications which use FCM. One limitation of FCM is that it only produces Gaussian membership functions (MFs). The literature shows that, depending on the data used, some types of membership functions may perform better than others. This means that having only the Gaussian membership function as an option limits the capability of fuzzy systems to produce accurate outcomes. Hence, this paper presents a method to generate another popular shape of MF, the trapezoidal shape (trapMF), from FCM, allowing FCM more flexibility in producing outputs. The construction of the trapMF uses the mathematical theory of Gaussian distributions, confidence intervals and inflection points. The cluster centers or means (μ) and standard deviations (σ) from the Gaussian output are used to determine the four trapezoidal parameters: lower limit a, upper limit d, lower support limit b, and upper support limit c, with the assistance of the function trapmf() in the Matlab fuzzy toolbox. The result shows that the mathematical theory of Gaussian distributions can be applied to generate trapMFs from FCM.
PubDate: Sep 2020
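A plausible sketch of the construction described above, mapping a Gaussian MF's mean and standard deviation to trapezoidal parameters via the inflection points at μ ± σ and a k-sigma confidence interval for the support. The exact multipliers are an assumption, not the paper's derivation; trapmf below mirrors the standard trapezoidal MF as in MATLAB's trapmf():

```python
def gauss_to_trapmf(mu, sigma, k_support=3.0):
    # Support [a, d] from the k-sigma interval (multiplier is an assumed choice);
    # shoulders [b, c] from the Gaussian's inflection points at mu +/- sigma.
    a, d = mu - k_support * sigma, mu + k_support * sigma
    b, c = mu - sigma, mu + sigma
    return a, b, c, d

def trapmf(x, a, b, c, d):
    """Standard trapezoidal membership function (same shape as MATLAB's trapmf)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

params = gauss_to_trapmf(5.0, 1.0)
print(params)                 # (2.0, 4.0, 6.0, 8.0)
print(trapmf(5.0, *params))   # 1.0 on the plateau
print(trapmf(3.0, *params))   # 0.5 on the rising edge
```

The shoulders at μ ± σ preserve the Gaussian's steepest-slope region as the trapezoid's ramps, which is one natural way to keep the two shapes comparable.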
- Homotopy Perturbation Method for Solving Linear Fuzzy Delay Differential
Equations Using Double Parametric Approach
Abstract: Publication date: Sep 2020
Source:Mathematics and Statistics Volume 8 Number 5 Ali F Jameel Sardar G Amen Azizan Saaban Noraziah H Man and Fathilah M Alipiah Delay differential equations (DDEs) are in broad use in many scientific research areas and engineering applications. They arise because the rate of change in their mathematical models depends not only on the present state but also on certain past states. In this work, we propose an algorithm of an approximate method to solve linear fuzzy delay differential equations using the Homotopy Perturbation Method with the double parametric form of fuzzy numbers. The detailed algorithm of the approach, including the fuzzification and defuzzification analysis, is provided. In the initial conditions of the proposed problem there are uncertainties, represented by triangular fuzzy numbers. A double parametric form of fuzzy numbers is defined and applied for the first time in this topic for the present analysis. The method is simple and able to handle delay differential equations without complicated Adomian polynomials or restrictive nonlinear assumptions. The approximate solution is compared with the exact solution to confirm the validity and efficiency of the method in handling linear fuzzy delay differential equations. To show the features of the proposed method, a numerical example involving a first-order fuzzy delay differential equation is illustrated. These findings indicate that the suggested approach is very successful and simple to implement.
PubDate: Sep 2020
- Modified Average Sample Number for Improved Double Sampling Plan Based on
Truncated Life Test Using Exponentiated Distributions
Abstract: Publication date: Sep 2020
Source:Mathematics and Statistics Volume 8 Number 5 O. S. Deepa The reliability of a product has become a critical issue in the worldwide business market. Generally, acceptance sampling guarantees the quality of the product. In an acceptance sampling plan, increasing the sample size may minimize the customer's risk of accepting bad lots and the producer's risk of rejecting good lots to a certain level, but will increase the cost of inspection. Hence, truncation of the life test time may be introduced to reduce the cost of inspection. The Modified Average Sample Number (MASN) for the Improved Double Sampling Plan (IDSP) based on truncated life tests is considered for the popular exponentiated family, namely the exponentiated gamma, exponentiated Lomax and exponentiated Weibull distributions. The modified ASN creates a bandwidth for the average sample number which is very useful for the consumer and producer: the interval for the average sample number gives the consumer a choice between a maximum and minimum sample size, which is of much benefit without any loss for the producer. The probability of acceptance and the average sample number based on the modified double sampling plan are computed for the lower and upper limits for the exponentiated family. Optimal parameters of the IDSP under various exponentiated families with different shape parameters were computed. The proposed plan is compared with traditional double sampling and modified double sampling using the Gamma, Weibull and Birnbaum-Saunders distributions, and the comparison shows that the proposed plan based on the exponentiated family performs better than all the other plans. Tables are provided for all distributions, and a comparative study of the tables based on the proposed exponentiated family and earlier existing plans is also carried out.
PubDate: Sep 2020
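The classical double sampling plan underlying the discussion can be sketched as follows. This computes the probability of acceptance (OC value) and the ordinary ASN for a baseline plan with binomial sampling; the IDSP and MASN refinements of the paper are not reproduced, and the plan parameters are illustrative:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

def double_sampling(n1, c1, n2, c2, p):
    """OC value and ASN of a classical double sampling plan (baseline, not the IDSP).
    Accept if d1 <= c1; reject if d1 > c2; otherwise draw a second sample of n2
    and accept if d1 + d2 <= c2."""
    pa = binom_cdf(c1, n1, p)          # accepted on the first sample
    p_second = 0.0                     # probability a second sample is needed
    for d1 in range(c1 + 1, c2 + 1):
        prob_d1 = binom_pmf(d1, n1, p)
        p_second += prob_d1
        pa += prob_d1 * binom_cdf(c2 - d1, n2, p)
    asn = n1 + n2 * p_second           # average sample number
    return pa, asn

pa, asn = double_sampling(n1=50, c1=1, n2=100, c2=3, p=0.02)
print(round(pa, 4), round(asn, 2))
```

Sweeping p over a grid of lot fraction defectives traces out the OC curve, which is the object the truncated-life-test plans above are designed around.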
- -action Induced by Shift Map on 1-Step Shift of Finite Type over Two
Symbols and k-type Transitive
Abstract: Publication date: Sep 2020
Source:Mathematics and Statistics Volume 8 Number 5 Nor Syahmina Kamarudin and Syahida Che Dzul-Kifli The dynamics of a multidimensional dynamical system may sometimes be inherited from the dynamics of its classical dynamical system. In the multidimensional case, we introduce a new action on a space X induced by a continuous map f. We then look at how topological transitivity of f affects the k-type transitivity of the induced action. To verify this, we look specifically at the spaces called 1-step shifts of finite type over two symbols, which are equipped with the shift map σ. We apply some topological theories to prove that the action on 1-step shifts of finite type over two symbols induced by the shift map is k-type transitive for all k whenever σ is topologically transitive. We found a counterexample which shows that not all such maps are k-type transitive for all k. However, we have also found some sufficient conditions for k-type transitivity for all k. In conclusion, the action on 1-step shifts of finite type over two symbols induced by the shift map is k-type transitive for all k whenever the shift map either is topologically transitive or satisfies the sufficient conditions. This study helps to develop the study of k-chaotic behaviours of actions on multidimensional dynamical systems and their applications in symbolic dynamics.
PubDate: Sep 2020
- Comparison for the Approximate Solution of the Second-Order Fuzzy
Nonlinear Differential Equation with Fuzzy Initial Conditions
Abstract: Publication date: Sep 2020
Source:Mathematics and Statistics Volume 8 Number 5 Ali F Jameel Akram H. Shather N.R. Anakira A. K. Alomari and Azizan Saaban This research focuses on the approximate solutions of second-order fuzzy differential equations with fuzzy initial conditions, using two different methods that depend on the properties of fuzzy set theory. The methods in this research, based on the optimum homotopy asymptotic method (OHAM) and the homotopy analysis method (HAM), are implemented and analyzed to obtain the approximate solution of a second-order nonlinear fuzzy differential equation. The concept of homotopy from topology is used in both methods to produce a convergent series solution for the proposed problem. Nevertheless, in contrast to perturbative approaches, these methods do not rely upon small or large parameters, so we can easily monitor the convergence of the approximation series. Furthermore, these techniques do not require any discretization or linearization, unlike numerical methods, and thus need fewer calculations; they can solve high-order problems directly without reducing them to a first-order system of equations. The obtained results of the proposed problem are presented, followed by a comparative study of the two implemented methods. The use of the methods investigated and their validity and applicability in the fuzzy domain are illustrated by a numerical example. Finally, the convergence and accuracy of the proposed methods for the provided example are presented through the error estimates between the exact and approximate solutions, displayed in the form of tables and figures.
PubDate: Sep 2020
- Construction of Bivariate Copulas on a Multivariate Exponentially Weighted
Moving Average Control Chart
Abstract: Publication date: Sep 2020
Source:Mathematics and Statistics Volume 8 Number 5 Sirasak Sasiwannapong Saowanit Sukparungsee Piyapatr Busababodhin and Yupaporn Areepong The control chart is an important tool in multivariate statistical process control (MSPC), used for monitoring, controlling, and improving processes. In this paper, we propose six types of copula combinations for use on a Multivariate Exponentially Weighted Moving Average (MEWMA) control chart. Observations from an exponential distribution, with dependence measured by Kendall's tau for moderate and strong positive and negative dependence among the observations, were generated using Monte Carlo simulations to measure the Average Run Length (ARL) as the performance metric, which should be sufficiently large when the process is in control on a MEWMA control chart. In this study, we evaluate the performance of the MEWMA control chart based on these copula combinations by using Monte Carlo simulations. The results show that the out-of-control (ARL1) values for some dependence levels were less than for others in almost all cases. The performance of the Farlie-Gumbel-Morgenstern×Ali-Mikhail-Haq copula combination was superior to the others for all shifts with strong positive dependence among the observations. Moreover, when the magnitudes of the shift were very large, the performance metric values for observations with moderate and strong positive and negative dependence followed the same pattern.
PubDate: Sep 2020
- Test Efficiency Analysis of Parametric, Nonparametric, Semiparametric
Regression in Spatial Data
Abstract: Publication date: Sep 2020
Source:Mathematics and Statistics Volume 8 Number 5 Diah Ayu Widyastuti Adji Achmad Rinaldo Fernandes Henny Pramoedyo Nurjannah and Solimun Regression analysis has three approaches to estimating the regression curve: parametric, nonparametric, and semiparametric. Several studies have discussed modeling with the three approaches on cross-section data, where observations are assumed to be independent of each other. In this study, we propose a new method for estimating parametric, nonparametric, and semiparametric regression curves on spatial data. In spatial data, each observation point has coordinates that indicate its position, so observations are assumed to have different variations. The model developed in this research accommodates the influence of the predictor variables on the response variable globally for all observations, while adding the coordinates of each observation point locally. Based on the Mean Square Error (MSE) as the criterion for selecting the best model, modeling with the nonparametric approach produces the smallest MSE value, so these application data are modeled more precisely by the nonparametric truncated spline approach. Eight possible models are formed in this research, and the nonparametric model is better than the parametric model because its MSE value is smaller. In the semiparametric regression model that is formed, the variable X2 is a parametric component while X1 and X3 are nonparametric components (Model 2). The regression curve estimated with the nonparametric approach tends to be more efficient than Model 2 because the linearity assumption test shows that the relationships of all the predictor variables to the response variable are non-linear. Thus, in this study, spatial data with non-linear relationships between the predictors and the response tend to be better modeled with a nonparametric approach.
PubDate: Sep 2020
- A Modified Robust Support Vector Regression Approach for Data Containing
High Leverage Points and Outliers in the Y-direction
Abstract: Publication date: Sep 2020
Source:Mathematics and Statistics Volume 8 Number 5 Habshah Midi and Jama Mohamed The support vector regression (SVR) model is currently a very popular non-parametric method for estimating linear and non-linear relationships between response and predictor variables. However, there is a possibility of selecting vertical outliers as support vectors, which can unduly affect the regression estimates. Outliers from abnormal data points may result in bad predictions, and when both vertical outliers and high leverage points are present in the data, the problem is further complicated. In this paper, we introduce a modified robust SVR technique for the simultaneous presence of these two problems. Three types of SVR models, i.e. eps-regression (ε-SVR), nu-regression (v-SVR) and bound-constraint eps-regression (ε-BSVR), with eight different kernel functions, are integrated into the newly proposed algorithm. Based on 10-fold cross-validation and some model performance measures, the best model with a suitable kernel function is selected. To make the selected model robust, we developed a new double SVR (DSVR) technique based on fixed parameters, which can be used to detect and down-weight influential observations or anomalous points in the data set. The effectiveness of the proposed technique is verified using a simulation study and some well-known contaminated data sets.
PubDate: Sep 2020
- Generalised Modified Taylor Series Approach of Developing k-step Block
Methods for Solving Second Order Ordinary Differential Equations
Abstract: Publication date: Nov 2020
Source:Mathematics and Statistics Volume 8 Number 6 Oluwaseun Adeyeye and Zurni Omar Various algorithms have been proposed for developing block methods, the most widely adopted being the numerical integration and collocation approaches. However, there is another conventional approach, the Taylor series approach, although it was utilised at inception only for the development of linear multistep methods for first order differential equations. Thus, this article explores the adoption of this approach through a modification of the aforementioned conventional Taylor series approach. A new methodology is then presented for developing block methods for accurately solving second order ordinary differential equations, coined the Modified Taylor Series (MTS) Approach. A further step is taken by presenting a generalised form of the MTS Approach that produces any k-step block method for solving second order ordinary differential equations. The computational complexity of this generalised approach is calculated, and the result shows that the generalised algorithm involves less computational burden and hence is suitable for adoption when developing block methods for solving second order ordinary differential equations. In summary, an alternative and easy-to-adopt approach to developing k-step block methods for directly solving second order ODEs with fewer computations is introduced in this article.
PubDate: Nov 2020
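The direct Taylor series idea behind the MTS approach can be illustrated on the test problem y'' = -y, where the higher derivatives needed by the expansion reduce to the state itself. This is a plain one-step Taylor integrator, not the authors' k-step block scheme:

```python
import math

def taylor_step(y, yp, h):
    # For y'' = -y we have y''' = -y' and y'''' = y, so a 4th-order
    # Taylor expansion of y and y' needs only the current state (y, y').
    y1 = y + h*yp - h**2/2*y - h**3/6*yp + h**4/24*y
    yp1 = yp - h*y - h**2/2*yp + h**3/6*y + h**4/24*yp
    return y1, yp1

# y(0) = 1, y'(0) = 0 gives the exact solution y(t) = cos(t).
y, yp, h = 1.0, 0.0, 0.1
for _ in range(10):
    y, yp = taylor_step(y, yp, h)
print(abs(y - math.cos(1.0)))  # small truncation error at t = 1
```

The point of the direct treatment, here as in the MTS approach, is that the second-order equation is never rewritten as a first-order system.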
- Rainfall Modelling using Generalized Extreme Value Distribution with
Cyclic Covariate
Abstract: Publication date: Nov 2020
Source:Mathematics and Statistics Volume 8 Number 6 Jasmine Lee Jia Min and Syafrina Abdul Halim Increased flood risk is recognized as one of the most significant threats in most parts of the world, resulting in severe flooding events which have caused significant property and human life losses. As an increasing number of extreme flash flood events has been observed in Klang Valley, Malaysia recently, this paper focuses on modelling extreme daily rainfall over the 30 years from 1975 to 2005 in Klang Valley using the generalized extreme value (GEV) distribution. A cyclic covariate is introduced in the distribution because of the seasonal rainfall variation in the series. One stationary model (GEV) and three nonstationary models (NSGEV1, NSGEV2, and NSGEV3) are constructed to assess the impact of cyclic covariates on the extreme daily rainfall events. The best GEV model is selected using Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and the likelihood ratio test (LRT). The return level is then computed using the selected fitted GEV model. Results indicate that the NSGEV3 model, with the cyclic covariate trend present in the location and scale parameters, provides a better fit to the extreme rainfall data. The results showed the capability of the nonstationary GEV with cyclic covariates in capturing extreme rainfall events. The findings would be useful for engineering design and flood risk management purposes.
PubDate: Nov 2020
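The return level computation mentioned above follows the standard stationary GEV formula z_T = μ - (σ/ξ)[1 - (-log(1 - 1/T))^(-ξ)]. A sketch with illustrative parameters (not the fitted Klang Valley values):

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level of a stationary GEV distribution:
    z_T = mu - (sigma/xi) * (1 - (-log(1 - 1/T))**(-xi)) for xi != 0,
    with the Gumbel limit z_T = mu - sigma*log(-log(1 - 1/T)) as xi -> 0."""
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-12:
        return mu - sigma * math.log(y)   # Gumbel (xi = 0) limit
    return mu - (sigma / xi) * (1.0 - y ** (-xi))

# Illustrative location/scale/shape, not the paper's fitted parameters.
print(round(gev_return_level(mu=60.0, sigma=15.0, xi=0.1, T=100), 2))
```

For a nonstationary model such as NSGEV3, μ and σ would themselves be functions of the cyclic covariate, so the return level varies over the cycle rather than being a single number.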
- Fuzzy Estimations for Detecting Abrupt Changes: Cases on Tourism Series
Abstract: Publication date: Nov 2020
Source:Mathematics and Statistics Volume 8 Number 6 Nurhaida Subanar Abdurakhman and Agus Maman Abadi This article deals with the problem of detecting abrupt changes in time series based on the Change Point Model (CPM) framework. We propose a fuzzification in a Fuzzy Time Series (FTS) model to eliminate the trend in a contaminated dependent series. The independent residuals are then used as input to the CPM method. In simulating an abrupt change, an ARIMA(1,1,1) model and its variance are considered. The abrupt change is modelled as an AO (Additive Outlier) type of outlier. The minimum weight, or break size, of the abrupt change is defined based on the ARIMA variance formulated in this article. The percentage of uncorrelated residuals obtained by the FTS model and the percentage of correct detections of the proposed procedure are shown by simulation. The proposed detection algorithm is applied to detect abrupt changes in monthly tourism series from the literature, i.e., for Taiwan and for Bali. The first series shows a slowly increasing trend with one abrupt change, while the second series exhibits not only a slowly increasing trend but also a strong seasonal pattern with two abrupt changes. For comparison, we detect the changes in the empirical examples with an existing automatic detection procedure using the tso package in R. For the first example, the results show that both detection procedures give exactly the same location of one change point, which the package recognises as an AO type of outlier. The abrupt change is related to the period of the SARS outbreak in Taiwan. In the second example, the proposed procedure locates 4 change points which form two locations of change, i.e., the first two change points are within 2 time points of each other, as are the last two. The locations are close to the times of the Bali bombing events. Meanwhile, the automatic procedure recognizes only one AO outlier in the series.
PubDate: Nov 2020
- Fitting a Curve, Cutting Surface, and Adjusting the Shapes of Developable
Hermite Patches
Abstract: Publication date: Nov 2020
Source:Mathematics and Statistics Volume 8 Number 6 Kusno The formulation of developable patches is beneficial for modeling plate-metal sheets in metal-based industrial objects. Meanwhile, installing developable patches on a frame of the items and making a hole in the objects' surface still need some practical techniques to be developed. For these reasons, this research aims to introduce some methods for fitting a curve segment, cutting the developable patches, and adjusting their formulas. Using these methods, one can design various profile shapes of rubber filler installed on a frame of the objects and create a fissure or hole on the patches' surface. The steps are as follows. First, we define the planes containing the patches' generatrixes and orthogonal to the boundary curves. Then, we fit Hermite and Bézier curves, by arranging some control point data on these planes, to model the rubber filler shapes. Second, we numerically evaluate a method for cutting the patches with a plane and adjusting the patches' form by modifying their formula from a linear interpolation form into a combination of curve and vector forms. As a result, we can present some equations and procedures for plotting the required curves, cutting surfaces, and modifying the extensible or narrowable shape of Hermite patches. These methods offer some advantages and contribute to designing the surfaces of metal-sheet-based objects, especially modeling various forms of rubber filler profiles installed on a frame of the objects and making hole shapes in the plate-metal sheets.
PubDate: Nov 2020
- Algorithmic Verification of Constraint Satisfaction Method on Timetable
Problem
Abstract: Publication date: Nov 2020
Source:Mathematics and Statistics Volume 8 Number 6 Viliam Ďuriš Various problems in the real world can be viewed as the Constraint Satisfaction Problem (CSP), based on several mathematical principles. This paper is a guideline for the complete automation of the Timetable Problem (TTP) formulated as a CSP, which we are able to solve algorithmically, so that the problem can be solved on a computer. The theory presents the fundamental concepts and characteristics of the CSP along with an overview of the basic algorithms used for its solution, formulates the TTP as a CSP, and delineates the basic properties and requirements to be met in the timetable. The theory in our paper is mostly based on the work of Jeavons, Cohen, Gyssens, Cooper, and Koubarakis, on the basis of which we have constructed a computer programme which verifies the validity and functionality of the constraint satisfaction method for solving the Timetable Problem. The solution of the TTP, which is characterized by its basic characteristics and requirements, was implemented in a program using a tree-based search algorithm, and our main contribution is an algorithmic verification of the abilities and reliability of constraints when solving a TTP. The created program was also used to verify the time complexity of the algorithmic solution.
PubDate: Nov 2020
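The tree-based search used to solve the TTP as a CSP can be sketched as a plain backtracking solver over binary constraints. The toy timetable data below are hypothetical, not the paper's instance:

```python
def consistent(assignment, constraints):
    """A partial assignment is consistent if every fully-assigned
    constrained pair satisfies its predicate."""
    return all(pred(assignment[u], assignment[v])
               for u, v, pred in constraints
               if u in assignment and v in assignment)

def backtrack(variables, domains, constraints, assignment=None):
    """Depth-first tree search: assign variables one by one, undoing
    any choice that violates a constraint."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment, constraints):
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

# Toy timetable: three lessons, two slots; two lessons sharing a teacher
# must not share a slot (hypothetical data).
lessons = ["math", "physics", "art"]
slots = {lesson: [1, 2] for lesson in lessons}
diff = lambda a, b: a != b
cons = [("math", "physics", diff)]   # same teacher
timetable = backtrack(lessons, slots, cons)
print(timetable)  # e.g. {'math': 1, 'physics': 2, 'art': 1}
```

Real timetabling adds many more constraint types (room capacity, teacher availability), but they all plug into the same predicate interface.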
- Determining the Order of a Moving Average Model of Time Series Using
Reversible Jump MCMC: A Comparison between Laplacian and Gaussian Noises
Abstract: Publication date: Nov 2020
Source:Mathematics and Statistics Volume 8 Number 6 Suparman Abdellah Salhi and Mohd Saifullah Rusiman The moving average (MA) model is a time series model often used for pattern forecasting and recognition. It contains a noise term that is often assumed to have a Gaussian distribution. However, in various applications, the noise often does not have this distribution. This paper suggests using Laplacian noise in the MA model instead. Gaussian and Laplacian noises were also compared to ascertain the right noise for the model. Moreover, the Bayesian method was used to estimate the parameters, such as the order and coefficients of the model, as well as the noise variance. The posterior distribution has a complex form because the parameter space is a union of subspaces of different dimensions. Therefore, to overcome this problem, the reversible jump Markov Chain Monte Carlo (MCMC) algorithm is adopted. A simulation study was conducted to evaluate its performance, and once it had been shown to work properly, it was applied to model human heart rate data. The results showed that the MCMC algorithm can estimate the parameters of the MA model developed with Laplace-distributed noise. Moreover, when compared with the Gaussian, the Laplacian noise resulted in a higher-order model and produced a smaller variance.
PubDate: Nov 2020
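Simulating an MA(q) process with Laplacian noise, as compared in the paper, can be sketched as follows. The Laplace draws use a simple inverse-CDF transform and the coefficients are illustrative; the reversible jump estimation itself is not reproduced:

```python
import math
import random

def laplace(rng, scale=1.0):
    """Draw from Laplace(0, scale) by inverting the CDF of a uniform variate."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def simulate_ma(coeffs, n, rng, noise):
    """MA(q) process: x_t = e_t + sum_j theta_j * e_{t-j}."""
    q = len(coeffs)
    e = [noise(rng) for _ in range(n + q)]   # q extra draws warm up the lags
    return [e[t + q] + sum(coeffs[j] * e[t + q - 1 - j] for j in range(q))
            for t in range(n)]

rng = random.Random(42)
x = simulate_ma([0.6, -0.3], n=500, rng=rng, noise=laplace)
print(len(x))
```

Swapping `noise=laplace` for a Gaussian draw (e.g. `lambda r: r.gauss(0, 1)`) gives the competing simulation, which is the comparison the paper runs at the estimation stage.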
- Comparison of Parameter Estimators for Generalized Pareto Distribution
under Peak over Threshold
Abstract: Publication date: Nov 2020
Source:Mathematics and Statistics Volume 8 Number 6 Wilhemina Adoma Pels Atinuke Olusola Adebanji and Sampson Twumasi-Ankrah The study focused on the Generalized Pareto Distribution (GPD) under the Peak Over Threshold (POT) approach. Twenty-one estimation methods were considered for extreme value modeling and their performances were compared. Our goal is to identify the best method under various conditions by means of a systematic simulation study. Some other estimators, initially not created under the POT framework (NON-POT), were also compared concurrently with the ones under the POT framework. The simulation results under varying shape parameters showed the Zhang estimator as "best" in performance for NON-POT in estimating both the shape and scale parameters for heavy-tailed cases. In the POT framework, the Zhang estimator again performed "best" in estimating very heavy tails for the shape and very short tails for the scale, regardless of the value of the scale parameter. When varying the sample size under the NON-POT framework, the Zhang estimator performed "best" for heavy tails, while in the POT framework, the Pickands estimator performed "best" at estimating the shape parameter for large sample sizes and the Zhang estimator for small sample sizes.
PubDate: Nov 2020
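One of the compared estimators, the Pickands estimator of the GPD shape parameter, is simple enough to sketch. The sample below is drawn from an exact GPD by inverse transform, and the choice k = 500 is illustrative:

```python
import math
import random

def gpd_sample(rng, xi, scale):
    """Inverse-CDF draw from GPD(xi, scale), xi != 0."""
    u = rng.random()
    return (scale / xi) * ((1.0 - u) ** (-xi) - 1.0)

def pickands(xs, k):
    """Pickands shape estimator from three upper order statistics:
    xi_hat = log((X_{(n-k+1)} - X_{(n-2k+1)}) / (X_{(n-2k+1)} - X_{(n-4k+1)})) / log 2."""
    x = sorted(xs)
    n = len(x)
    num = x[n - k] - x[n - 2 * k]
    den = x[n - 2 * k] - x[n - 4 * k]
    return math.log(num / den) / math.log(2)

rng = random.Random(7)
data = [gpd_sample(rng, xi=0.5, scale=1.0) for _ in range(10000)]
print(round(pickands(data, k=500), 3))  # should land near the true xi = 0.5
```

Because the estimator uses only quantile spacings, it is distribution-free above the threshold, which is also why it is noisier than likelihood-based competitors such as the Zhang estimator.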
- Convergence Almost Everywhere of Non-convolutional Integral Operators in
Lebesgue Spaces
Abstract: Publication date: Nov 2020
Source:Mathematics and Statistics Volume 8 Number 6 Yakhshiboev M. U. The case of one-dimensional and multidimensional non-convolutional integral operators in Lebesgue spaces is considered in this paper. The convergence in norm and almost everywhere of non-convolutional integral operators in Lebesgue spaces has been insufficiently studied. The kernels of non-convolutional integral operators need not have a monotone majorant, so the well-known results on the convergence almost everywhere of convolutional averages are not applicable here. The kernels of non-convolutional integral operators allow for different kernel behaviors (which is important in applications) and cover, as a particular case, the situation of convolutional integral operators. We are interested in the limiting behavior of the averaged function. Theorems on convergence almost everywhere in the case of one-dimensional and multidimensional non-convolutional integral operators in Lebesgue spaces are proved. The theorems proved are more general ones (including for convolutional integral operators) and cover a wide class of kernels.
PubDate: Nov 2020
- Generalization of the Reachability Problem on Directed Graphs
Abstract: Publication date: Nov 2020
Source:Mathematics and Statistics Volume 8 Number 6 Vladimir A. Skorokhodov The problem of reachability on graphs with restrictions is studied. Such restrictions mean that only those paths that satisfy certain conditions are valid paths on the graph. Because of this, for classical optimization problems one has to consider only a subset of feasible paths on the graph, which significantly complicates their solution. Reachability constraints arise naturally in various applied problems, for example, in the problem of navigation in telecommunication networks with areas of strong signal attenuation, or when modeling technological processes in which there is a condition on the order of actions or the compatibility of operations. General concepts of a graph with non-standard reachability and of a valid path on it are introduced. It is shown that classical graphs, as well as graphs with restrictions on passing through selected subsets of arcs, are special cases of graphs with non-standard reachability. A general approach to solving the shortest path problem on a graph with non-standard reachability is developed. This approach consists in constructing an auxiliary graph and reducing the shortest path problem on a graph with non-standard reachability to a similar problem on the auxiliary graph. The theorem on the correspondence between the paths of the original and auxiliary graphs is proved.
PubDate: Nov 2020
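The auxiliary-graph reduction described above can be sketched for one simple restriction, namely that paths may use at most a given number of arcs from a restricted subset, by running BFS on (node, state) pairs. The graph data are hypothetical:

```python
from collections import deque

def shortest_constrained(graph, restricted, src, dst, limit=1):
    """Shortest path (in arc count) using at most `limit` arcs from `restricted`.
    The auxiliary graph has vertices (node, used); ordinary BFS on it solves
    the constrained problem on the original graph."""
    start = (src, 0)
    dist = {start: 0}
    q = deque([start])
    while q:
        node, used = q.popleft()
        if node == dst:
            return dist[(node, used)]
        for nxt in graph.get(node, []):
            u2 = used + ((node, nxt) in restricted)
            if u2 <= limit and (nxt, u2) not in dist:
                dist[(nxt, u2)] = dist[(node, used)] + 1
                q.append((nxt, u2))
    return None  # dst unreachable by any valid path

g = {"a": ["d", "b"], "b": ["d"], "d": []}
restricted = {("a", "d")}
print(shortest_constrained(g, restricted, "a", "d", limit=1))  # 1 (direct arc allowed)
print(shortest_constrained(g, restricted, "a", "d", limit=0))  # 2 (forced through b)
```

The state component is what makes the auxiliary graph explicit: more elaborate reachability conditions just need a richer state and a matching transition rule.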
- On Some Global Solution of the Basic Equations in the Geodesic Mappings'
Theory of Riemannian Spaces
Abstract: Publication date: Nov 2020
Source:Mathematics and Statistics Volume 8 Number 6 E. N. Sinyukova and O. L. Chepok It is well known that the concepts of a geodesic line and a geodesic mapping are among the most fundamental concepts of the classical theory of Riemannian spaces. In geometry, the concept of a Riemannian space was formed as a generalization of the concept of a smooth surface in three-dimensional Euclidean space. It has turned out to be possible to extend to Riemannian spaces the concept of a geodesic point of a curve and to represent a geodesic line of a Riemannian space as a curve that consists exclusively of geodesic points. This fact has allowed an understanding of not only the local but also the global character of the basic equations of the theory of geodesic mappings of Riemannian spaces, which were originally obtained as a result of local investigations. An example of a global solution of the so-called new form of the basic equations in the theory of geodesic mappings of Riemannian spaces is built in the article. A sphere, considered as a subset of Euclidean space, forms its topological background. The investigations are based on the concept of an equidistant Riemannian space. They are carried out using an atlas that consists of two charts obtained with the help of a stereographic projection.
PubDate: Nov 2020
- The Implementation of First Order and Second Order with Mixed Measurement
to Identify Farmers Satisfaction
Abstract: Publication date: Nov 2020
Source:Mathematics and Statistics Volume 8 Number 6 Retno Ayu Cahyoningtyas Solimun and Adji Achmad Rinaldo Fernandes The purpose of this research is to develop structural modeling with metric and nonmetric measurement scales, and to compare the efficiency of first-order and second-order models. The application of structural modeling in agriculture concerns the satisfaction of farmers in East Java. The data used in this study are perceptions, gathered by distributing questionnaires to farmers in East Java Province in 2020. The respondents in this study came from 155 districts in East Java Province. The sampling technique chosen is probability sampling, specifically proportional area random sampling. The results show that the first-order model is better than the second-order model because it has the lowest MSE value and the highest R2. The path analyses for the first-order and second-order models produce the same result: there is a significant positive effect of the gratitude variable on the farmer satisfaction variable. That is, the more gratitude felt by farmers, the greater the satisfaction of East Java farmers. On the other hand, the test results showed that demographic variables did not significantly influence the gratitude variable.
PubDate: Nov 2020
- Measuring Given Partial Information about Intuitionistic Fuzzy Sets
Abstract: Publication date: Nov 2020
Source:Mathematics and Statistics Volume 8 Number 6 Priya Arora and V. P. Tomar Background: Measuring information and removing uncertainty are essential to human thinking and to many real-world objectives. Information is most useful when it is free from uncertainty and fuzziness. Shannon was the first to use the term entropy as a measure of uncertainty, and he gave an expression for entropy based on a probability distribution. Zadeh used Shannon's idea in developing the concept of fuzzy sets. Later, Atanassov generalized the concept of a fuzzy set and developed intuitionistic fuzzy sets. Purpose: Sometimes we do not have complete information about a fuzzy set or an intuitionistic fuzzy set; only partial information is known, i.e., either only a few values of the membership or non-membership functions are known, or a relationship between them is known, or some inequalities governing these parameters are known. Kapur measured the partial information given by a fuzzy set. In this paper, we attempt to quantify the partial information given by intuitionistic fuzzy sets, considering all the cases. Methodologies: We analyze some well-known definitions and axioms used in the field of fuzzy theory. Principal Results: We devise methods to measure the incomplete information given about intuitionistic fuzzy sets. Major Conclusions: By devising methods of measuring partial information about an IFS, we can use this information to form an idea of the given set and use it wisely to make good decisions.
PubDate: Nov 2020
- Evaluating the Performance of Unit Root Tests in Single Time Series
Processes
Abstract: Publication date: Nov 2020
Source:Mathematics and Statistics Volume 8 Number 6 Jonathan Kwaku Afriyie Sampson Twumasi-Ankrah Kwasi Baah Gyamfi Doris Arthur and Wilhemina Adoma Pels Unit root tests for stationarity are relevant in almost every practical time series analysis, and deciding which unit root test to use is a topic of active interest. In this study, we compare the performance of the three commonly used unit root tests: the Augmented Dickey-Fuller (ADF), Phillips-Perron (PP), and Kwiatkowski-Phillips-Schmidt-Shin (KPSS) tests. According to the literature, these unit root tests sometimes disagree in selecting the appropriate order of integration for a given series, so the decision to use a particular test relies essentially on the judgment of the researcher. If we wish to annul this subjective decision, we have to locate an objective basis that clearly characterizes which test is the most appropriate for a particular type of time series. Thus, this study seeks to unravel this problem by providing a guide on which unit root test to utilize when they disagree. A simulation study of eight (8) univariate time series models with eight (8) different sample sizes, three (3) differencing orders, and nine (9) different parameter values was performed. The results show that the performance of all three tests improved as the sample size increased. Comparing overall performance, the KPSS was the "best" unit root test to use when there is disagreement.
PubDate: Nov 2020
- A Mathematical Model of Horizontal Averaged Groundwater Pollution
Measurement with Several Substances due to Chemical Reaction
Abstract: Publication date: Nov 2020
Source:Mathematics and Statistics Volume 8 Number 6 Jirapud Limthanakul and Nopparat Pochai Chloride is a well-known chemical compound that is very useful in industry and agriculture. Chloride can be transformed into hypochlorite, chlorite, chlorate and perchlorate, and chloride and its derivative substances are not dangerous if used at optimal levels. Groundwater contaminated with chloride and its derivatives affects human health; for example, drinking water contaminated with chloride in excess of 250 mg/L can cause heart problems and contribute to high blood pressure. To address this problem, we use mathematical models to describe groundwater contamination with chloride and its derivatives. A transient groundwater flow model provides the hydraulic head of the groundwater, giving the groundwater level. Next, the flow velocity and direction are found by feeding the result of the first model into a second model: the groundwater velocity model provides the x- and z-direction velocity components. After this computation, the result is plugged into the last model to approximate the chloride concentration in groundwater: the groundwater contamination dispersion model provides the chloride, hypochlorite, chlorite, chlorate and perchlorate concentrations. Explicit finite difference techniques are proposed to approximate the model solutions: an explicit method is used to solve the hydraulic head model, a forward-space scheme describes the groundwater velocity model, and a forward-time central-space scheme is used to predict the transient groundwater contamination models. The simulations can be used to indicate when each simulated zone becomes a hazardous zone or a protection zone.
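The model chain ends in an advection-dispersion computation solved by forward-time central-space (FTCS) differences. As a hedged illustration of that final step only, here is a 1-D sketch; the velocity, dispersion coefficient, grid, and the 250 mg/L boundary value are assumptions for the example, not values from the paper:

```python
# Sketch: forward-time, central-space (FTCS) scheme for a 1-D
# advection-dispersion equation  c_t + u c_x = D c_xx,
# a simplified analogue of the contaminant dispersion model above.
import numpy as np

nx, nt = 101, 400
dx, dt = 1.0, 0.1
u, D = 0.5, 1.0            # velocity and dispersion coefficient
# Stability of FTCS requires D*dt/dx**2 <= 0.5 (and a small Courant number)
assert D * dt / dx**2 <= 0.5

c = np.zeros(nx)
c[0] = 250.0               # constant-concentration inflow boundary (mg/L)
for _ in range(nt):
    cn = c.copy()
    # central differences in space, forward difference in time
    c[1:-1] = (cn[1:-1]
               - u * dt / (2 * dx) * (cn[2:] - cn[:-2])
               + D * dt / dx**2 * (cn[2:] - 2 * cn[1:-1] + cn[:-2]))
    c[0] = 250.0           # hold the inflow boundary fixed
    c[-1] = c[-2]          # zero-gradient outflow boundary
print(round(c[10], 2))     # concentration a short distance downstream
```

Comparing `c` against a threshold such as 250 mg/L at each grid point is the 1-D analogue of classifying zones as hazardous or protected.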
PubDate: Nov 2020
- Construction of Lorenz Curves Based on Empirical Distribution Laws of
Economic Indicators
Abstract: Publication date: Nov 2020
Source:Mathematics and Statistics Volume 8 Number 6 Aleksandr Bochkov Dmitrii Pervukhin Aleksandr Grafov and Veronika Nikitina The quality of construction of Lorenz curves depends on the features of the information used. As a rule, the information is a sample of values of the studied indicator, which is checked for unevenness. The economic indicators of income and cost, and the features of their samples, are considered. A feature of the cost indicator is highlighted: the presence of a clot in the sample of its values, that is, a concentration of values on a small segment of the entire sample range. It is shown that the established order of constructing empirical laws from such samples does not give the desired effect when constructing Lorenz curves, owing to the loss of informativeness of the sample in the places of the clot. The purpose of this article is to improve the quality of the Lorenz curve by increasing the informativeness of a sample with a clot through a clustering procedure applied when constructing the empirical law. A step-by-step clustering procedure is proposed for dividing the entire sample range into intervals to construct the empirical distribution law, which is an element of the novelty of this study. A specific example shows how this procedure improves the quality of the constructed Lorenz curve. In addition, it is shown that Lorenz curves for economic indicators can be constructed directly on the basis of the empirical distribution law while taking its features into account.
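Constructing a Lorenz curve directly from a sample, as the abstract's closing remark suggests, can be sketched as follows; the sample values are invented, and no clot-handling clustering is attempted:

```python
# Sketch: an empirical Lorenz curve and Gini coefficient computed
# directly from a sample of an economic indicator.
import numpy as np

def lorenz_curve(values):
    """Return cumulative population and income shares (both start at 0)."""
    v = np.sort(np.asarray(values, dtype=float))
    cum = np.cumsum(v)
    x = np.arange(0, v.size + 1) / v.size        # population share
    y = np.concatenate(([0.0], cum / cum[-1]))   # indicator share
    return x, y

def gini(values):
    x, y = lorenz_curve(values)
    # Gini = 1 - 2 * area under the Lorenz curve (trapezoidal rule)
    return 1.0 - 2.0 * np.trapz(y, x)

sample = [10, 20, 20, 50, 100, 300]   # illustrative indicator values
x, y = lorenz_curve(sample)
print(round(gini(sample), 3))         # → 0.573
```

A clot in the sample would show up as many nearly identical sorted values; the paper's clustering procedure re-bins the range so that such segments retain their information when the empirical law is built.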
PubDate: Nov 2020
- Solution of Newell-Whitehead-Segal Equation of Fractional Order by
Using Sumudu Decomposition Method
Abstract: Publication date: Nov 2020
Source:Mathematics and Statistics Volume 8 Number 6 Shams A. Ahmed and Mohamed Elbadri The Newell-Whitehead-Segal (NWS) equation has been used to describe many natural phenomena arising in fluid mechanics and has hence attracted considerable attention. Past studies have focused on obtaining numerical or analytical solutions of this kind of equation by employing methods such as the Modified Homotopy Analysis Transform Method (MHATM), the Adomian Decomposition Method (ADM), the Homotopy Analysis Sumudu Transform Method (HASTM), the Fractional Complex Transform coupled with He's polynomials (FCT-HPM), and the Fractional Residual Power Series Method (FRPSM). This research aims to demonstrate an efficient analytical method, the Sumudu Decomposition Method (SDM), for the study of analytical and numerical solutions of the NWS equation of fractional order. Coupling the Adomian decomposition method with the Sumudu transform simplifies the calculation. From the numerical results obtained, it is evident that the SDM is easy to execute and offers more accurate results for the NWS equation than other methods such as FCT-HPM and FRPSM. Therefore, the coupling of the Adomian decomposition technique with the Sumudu transform method is easy to apply and, when applied to nonlinear differential equations of fractional order, yields accurate results.
PubDate: Nov 2020
- On the Effect of Vaccination, Screening and Treatment in Controlling
Typhoid Fever Spread Dynamics: Deterministic and Stochastic Applications
Abstract: Publication date: Nov 2020
Source:Mathematics and Statistics Volume 8 Number 6 Temitope Olu Ogunlade Oluwatayo Michael Ogunmiloro Segun Nathaniel Ogunyebi Grace Ebunoluwa Fatoyinbo Joshua Otonritse Okoro Opeyemi Roselyn Akindutire Omobolaji Yusuf Halid and Adenike Oluwafunmilola Olubiyi This work concerns a deterministic and stochastic model describing the transmission of typhoid fever infection in a human host community, where the vaccination of susceptible births and immigrants, as well as the screening and treatment of carriers and infected individuals, are considered in the model build-up. The well-posedness of the deterministic model and the computation of its basic reproduction number Rtyp are obtained and analysed. The deterministic model is further transformed into a stochastic model, where the drift and diffusion parts of the model are obtained, and the existence and uniqueness of the stochastic model are discussed. Numerical simulations involving the model parameters of Rtyp showed that vaccination of susceptible births and immigrants, together with screening and treatment of carriers and infected humans, is effective in bringing the threshold Rtyp (Rtyp ≈ 0.7944) below 1. The results of other simulations suggest that more health policies should be implemented, as a low Rtyp may not be guaranteed because vaccination wanes over time. In addition, numerical simulations of the stochastic model equations describing the sub-populations of the total human host community are carried out using the computational software MATLAB.
PubDate: Nov 2020
- Superstability and Solution of The Pexiderized Trigonometric Functional
Equation
Abstract: Publication date: May 2020
Source:Mathematics and Statistics Volume 8 Number 3 Gwang Hui Kim The present work continues the study of the superstability and solution of the Pexider type functional equation , the mixed functional equation represented by a sum of the sine, cosine, tangent, hyperbolic trigonometric, and exponential functions. The stability of the cosine (d'Alembert) functional equation and the Wilson equation was researched by many authors: Baker [7], Badora [5], Kannappan [14], Kim ([16, 19]), and Fassi et al. [11]. The stability of sine type equations was researched by Cholewa [10] and Kim ([18], [20]), and the stability of the difference type equation for the above equation was studied by Kim ([21], [22]). In this paper, we investigate the superstability of the sine functional equation and the Wilson equation from the Pexider type difference functional equation , the mixed equation represented by the sine, cosine, tangent, hyperbolic trigonometric, and exponential functions. We additionally obtain that the Wilson equation and the cosine functional equation in the obtained results can be represented by the composition of a homomorphism. Here, the domain (G; +) of the functions is a noncommutative semigroup (or a 2-divisible Abelian group), and A is a unital commutative normed algebra with unit 1A. The obtained results can be applied and extended to the stability of difference type functional equations consisting of the (hyperbolic) secant, cosecant, and logarithmic functions.
PubDate: May 2020
- On (2; 2)-regular Non-associative Ordered Semigroups via Its Semilattices
and Generated (Generalized Fuzzy) Ideals
Abstract: Publication date: May 2020
Source:Mathematics and Statistics Volume 8 Number 3 Yousef Al-Qudah Faisal Yousafzai Mohammed M. Khalaf and Mohammad Almousa The main motivation behind this paper is to study some structural properties of a non-associative structure, as such structures have not attracted much attention compared to associative ones. In this paper, we introduce the concept of an ordered A*G**-groupoid and show that this class is more general than that of an ordered AG-groupoid with left identity. We also define the generated left (right) ideals in an ordered A*G**-groupoid and characterize a (2; 2)-regular ordered A*G**-groupoid in terms of these ideals. We then study the structural properties of an ordered A*G**-groupoid in terms of its semilattices, its (2; 2)-regular class, and generated commutative monoids. Subsequently, we compare -fuzzy left/right ideals of an ordered AG-groupoid, and respective examples are provided. Relations between the -fuzzy idempotent subsets of an ordered A*G**-groupoid and its -fuzzy bi-ideals are discussed. As an application of our results, we obtain characterizations of a (2; 2)-regular ordered A*G**-groupoid in terms of semilattices and -fuzzy left (right) ideals. These concepts will help in verifying existing characterizations and in achieving new and more general results in future work.
PubDate: May 2020
- Differential Invariants of One Parametrical Group of Transformations
Abstract: Publication date: May 2020
Source:Mathematics and Statistics Volume 8 Number 3 Abdishukurova Guzal Narmanov Abdigappar and Sharipov Xurshid The concept of a differential invariant, along with the concept of invariant differentiation, is key in modern geometry [1]-[10]. In the Erlangen program [3], Felix Klein proposed a unified approach to the description of various geometries. According to this program, one of the main problems of geometry is to construct invariants of geometric objects with respect to the action of the group defining the geometry. This approach is largely based on the ideas of Sophus Lie, who introduced continuous groups of transformations, now known as Lie groups, into geometry. In particular, when considering classification and equivalence problems in differential geometry, differential invariants with respect to the action of Lie groups should be considered. In this case, the equivalence problem for geometric objects is reduced to finding a complete system of scalar differential invariants. The interpretation of a k-th order differential invariant as a function on the space of k-jets of sections of the corresponding bundle makes it possible to operate with them efficiently, and new differential invariants can be obtained by invariant differentiation. Differential invariants with respect to a given Lie group generate differential equations for which this group is a symmetry group. This allows one to apply the well-known integration methods to such equations, in particular the Lie-Bianchi theorem [4]. Depending on the type of geometry, the orders of the first nontrivial differential invariants can differ. For example, in the space R3 equipped with the Euclidean metric, the complete system of differential invariants of a curve consists of its curvature and torsion, which are second and third order invariants, respectively. 
Note that scalar differential invariants are the only type of invariants whose components do not change under a change of coordinates; for this reason, scalar differential invariants are effectively used in solving equivalence problems. In this paper, differential invariants of a one-parameter Lie group of transformations of the space of two independent and three dependent variables are studied. A method for constructing an invariant differential operator is shown, and the obtained results are applied to finding differential invariants of surfaces.
PubDate: May 2020
- High-speed Dynamic Programming Algorithms in Applied Problems of a Special
Kind
Abstract: Publication date: May 2020
Source:Mathematics and Statistics Volume 8 Number 3 V. I. Struchenkov and D. A. Karpov The article discusses the solution of applied problems for which the dynamic programming method, developed by R. Bellman in the middle of the last century, was previously proposed. Dynamic programming algorithms are successfully used to solve applied problems, but as the dimension of a task grows, reducing the computation time remains relevant. This is especially important when designing systems in which dynamic programming is embedded in a computational cycle that is repeated many times. The article therefore analyzes various possibilities for increasing the speed of the dynamic programming algorithm. For some problems, using the Bellman optimality principle, recurrence formulas were obtained for calculating the optimal trajectory without any step-by-step analysis of the set of options for its construction. It is shown that many applied problems, when solved by dynamic programming, allow not only the rejection of unpromising paths leading to a specific state but also the rejection of hopeless states themselves. The article proposes a new algorithm implementing R. Bellman's principle for solving such problems and establishes the conditions of its applicability. The results of solving two-parameter problems of various dimensions presented in the article show that excluding hopeless states can reduce the computation time by a factor of 10 or more.
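The idea of rejecting hopeless states can be illustrated on a toy staged problem. The sketch below runs a 0/1 knapsack dynamic program and discards dominated states (those no lighter and no more valuable than another) at every stage; the problem and data are invented for illustration and this is not the authors' algorithm:

```python
# Sketch: staged DP over (weight, value) states for a 0/1 knapsack,
# rejecting "hopeless" (dominated) states at each stage.
def knapsack(items, capacity):
    states = [(0, 0)]                      # feasible (weight, value) pairs
    for w, v in items:
        merged = states + [(sw + w, sv + v)
                           for sw, sv in states if sw + w <= capacity]
        merged.sort(key=lambda s: (s[0], -s[1]))  # light first, rich first
        states, best = [], -1
        for sw, sv in merged:
            if sv > best:                  # keep only nondominated states
                states.append((sw, sv))
                best = sv
        # states that are heavier but not more valuable have been rejected
    return max(sv for _, sv in states)

items = [(3, 4), (4, 5), (2, 3)]           # (weight, value)
print(knapsack(items, capacity=6))          # → 8
```

Rejecting dominated states shrinks the state set at every stage without losing the optimum, which is the source of the speed-ups the article measures on its two-parameter problems.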
PubDate: May 2020
- Hermite-Hadamard Type Inequalities for Composite Log-Convex Functions
Abstract: Publication date: May 2020
Source:Mathematics and Statistics Volume 8 Number 3 Nik Muhammad Farhan Hakim Nik Badrul Alam Ajab Bai Akbarally and Silvestru Sever Dragomir Hermite-Hadamard type inequalities related to convex functions are widely studied in functional analysis. Researchers have refined convex functions as quasi-convex, h-convex, log-convex, m-convex, (a,m)-convex and many more, and Hermite-Hadamard type inequalities have subsequently been obtained for these refined convex functions. In this paper, we first review the Hermite-Hadamard type inequality for both convex and log-convex functions. Then the definition of a composite convex function and the Hermite-Hadamard type inequalities for composite convex functions are reviewed. Motivated by these works, we refine these notions to obtain the definition of composite log-convex functions, namely the composite--1 log-convex function. Some examples related to this definition, such as GG-convexity and HG-convexity, are given. We also define k-composite log-convexity and k-composite--1 log-convexity. We then prove a lemma and obtain some Hermite-Hadamard type inequalities for composite log-convex functions. Two corollaries are also proved using the obtained theorem: the first by applying the exponential function and the second by applying the properties of k-composite log-convexity. An application to GG-convex functions is given, in which we compare the inequalities obtained in this paper with those obtained in previous studies. The inequalities can be applied in calculating geometric means in statistics and other fields.
PubDate: May 2020
- New Possibilities of Application of Artificial Intelligence Methods for
High-Precision Solution of Boundary Value Problems
Abstract: Publication date: May 2020
Source:Mathematics and Statistics Volume 8 Number 3 Leonid N. Yasnitsky and Sergey L. Gladkiy One of the main problems in modern mathematical modeling is obtaining high-precision solutions of boundary value problems. This study proposes a new approach that combines artificial intelligence methods with a classical analytical method. The analytical method of fictitious canonic regions is used as the basis for obtaining reliable solutions of boundary value problems. The novelty of the approach lies in the application of artificial intelligence methods, namely genetic algorithms, to select the optimal location of the fictitious canonic regions, ensuring maximum accuracy. A general genetic algorithm has been developed to find the global minimum governing the choice and location of fictitious canonic regions, and several variants of its crossover and mutation functions are proposed. The approach is applied to two test boundary value problems: a stationary heat conduction problem and an elasticity theory problem. The results showed the effectiveness of the proposed approach: the genetic algorithm needed no more than a hundred generations to achieve high-precision solutions. Moreover, the error in solving the stationary heat conduction problem was so insignificant that the solution can be considered exact. Thus, the study showed that the proposed approach, combining the analytical method of fictitious canonic regions with genetic optimization algorithms, allows complex boundary value problems to be solved with high accuracy. This approach can be used in the mathematical modeling of critical structures, where the accuracy and reliability of the results are the main criteria for evaluating a solution. 
Further development of this approach will make it possible to solve more complicated 3D problems with high accuracy, as well as problems of other types, for example thermoelasticity, which are of great importance in the design of engineering structures.
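The genetic-algorithm component can be shown in miniature. The sketch below minimizes an invented two-variable objective with truncation selection, averaging crossover and Gaussian mutation; the objective, rates and population size are all assumptions for illustration, not the authors' algorithm for placing fictitious canonic regions:

```python
# Sketch: a minimal real-coded genetic algorithm minimizing a toy
# objective, analogous in spirit to optimizing the placement
# parameters of fictitious canonic regions.
import random

random.seed(1)

def objective(x):                  # stand-in "solution error" to minimize
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

def evolve(pop_size=30, dims=2, gens=100, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dims)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=objective)
        survivors = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [(p + q) / 2 for p, q in zip(a, b)]   # crossover
            if random.random() < 0.3:                     # mutation
                i = random.randrange(dims)
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.5)))
            children.append(child)
        pop = survivors + children
    return min(pop, key=objective)

best = evolve()
print([round(v, 2) for v in best])   # should approach the minimum (2, -1)
```

Keeping the sorted survivors each generation makes the scheme elitist, so the best candidate never worsens, mirroring the steady convergence over about a hundred generations reported above.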
PubDate: May 2020
- Structural Equation Modeling (SEM) Analysis with WarpPLS Approach Based on
Theory of Planned Behavior (TPB)
Abstract: Publication date: May 2020
Source:Mathematics and Statistics Volume 8 Number 3 Ni Wayan Surya Wardhani Waego Hadi Nugroho Adji Achmad Rinaldo Fernandes and Solimun WANT-E is a tool created to purify methane gas from organic waste, intended as a substitute for renewable gas fuel. Because the WANT-E product is new, research into public interest in it is necessary. This study uses primary data obtained from questionnaires with variables based on the Theory of Planned Behavior (TPB), namely behavioral attitudes, subjective norms, perceived behavioral control, and behavioral interest, distributed to members of the community of Cibeber Village, Cikalong Subdistrict, Tasikmalaya Regency who use LPG gas cylinders or stoves; the sampling technique used is the judgment sampling method. The analysis used is SEM with the WarpPLS approach, which determines the effects of the relationships between variables. The analysis found positive effects of behavioral attitudes on subjective norms, of behavioral attitudes on perceived behavioral control, of subjective norms on behavioral interest, and of perceived behavioral control on behavioral interest. Indirect effects were also found, with subjective norms and perceived behavioral control mediating between behavioral attitudes and behavioral interest.
PubDate: May 2020
- The Indicatrix of the Surface in Four-Dimensional Galilean Space
Abstract: Publication date: May 2020
Source:Mathematics and Statistics Volume 8 Number 3 Artykbaev Abdullaaziz and Nurbayev Abdurashid Ravshanovich This article discusses geometric quantities associated with the concept of a surface and the indicatrix of a surface in four-dimensional Galilean space. Here, a second-order curve in the plane serves as the surface indicatrix. It is shown that, with the help of a motion of Galilean space, the second-order curve can be brought to canonical form. Motion in Galilean space differs radically from motion in Euclidean space: Galilean motions include parallel translation, rotation about an axis, and sliding, and sliding produces a deformation in the Euclidean sense. The surface indicatrix is deformed by a Galilean motion, and when the indicatrix is deformed, so is the surface. In classifying the points of a three-dimensional surface in four-dimensional Galilean space, the classification of the indicatrix of the surface at each point was used. This reveals a cyclic character of surface points absent from Euclidean geometry. The geometric characteristics of surface curves were determined using the indicatrix, and the geometric meaning of the identified properties in the Euclidean case is established. It is shown that a Galilean motion produces a surface deformation in the Euclidean sense; the deformation of the surface is indicated by the fact that the Gaussian curvature remains unchanged.
PubDate: May 2020
- Characterizations of Some Special Curves in Lorentz-Minkowski Space
Abstract: Publication date: May 2020
Source:Mathematics and Statistics Volume 8 Number 3 M. Khalifa Saad R. A. Abdel-Baky F. Alharbi and A. Aloufi In the theory of space curves, the helix is one of the most elementary and interesting topics. The helix, moreover, attracts the attention of natural scientists as well as mathematicians because of its various appearances, for example in DNA, carbon nanotubes, screws and springs. There are also many applications of helical curves and helical structures in science, such as fractal geometry and the fields of computer-aided design and computer graphics, where helices can be used for tool path description, the simulation of kinematic motion, or the design of highways. The problem of determining a parametric representation of the position vector of an arbitrary space curve from its intrinsic equations is still open in the Euclidean space E3 and in the Minkowski space . In this paper, we introduce some characterizations of a non-null slant helix which has a spacelike or timelike axis in . We use vector differential equations established by means of the Frenet equations in Minkowski space . We also investigate some differential geometric properties of these curves according to these vector differential equations. Besides, we give some examples to confirm our findings.
PubDate: May 2020
- On the Geometry of Hamiltonian Symmetries
Abstract: Publication date: May 2020
Source:Mathematics and Statistics Volume 8 Number 3 Narmanov Abdigappar and Parmonov Hamid The problem of integrating the equations of mechanics is among the most important tasks of mathematics and mechanics. Before Poincare's book "Curves Defined by Differential Equations", integration tasks were treated as analytical problems of finding formulas for solutions of the equations of motion. After the appearance of this book, it became clear that integration problems are related to the behavior of the trajectories as a whole; this, of course, stimulated methods of the qualitative theory of differential equations. At present, the main method for this problem is the symmetry method. Newton used ideas of symmetry for the problem of central motion. Lagrange then revealed that the classical integrals of the gravitational many-body problem are associated with the invariance of the equations of motion with respect to the Galileo group. Emmy Noether showed that each integral of the equations of motion corresponds to a group of transformations preserving the action. The phase flow of the Hamiltonian system in which a first integral serves as the Hamiltonian maps solutions of the original equations into solutions. The Liouville theorem on the integrability of Hamilton's equations was built on this idea; it states that the phase flows of involutive integrals generate an Abelian group of symmetries. Hamiltonian methods have become increasingly important in the study of the equations of continuum mechanics, including fluids, plasmas and elastic media. In this paper we consider the Hamiltonian system describing the motion of a particle attracted to a fixed point with a force varying as the inverse cube of the distance from the point. We are concerned with just one aspect of this problem, namely questions of symmetry groups and Hamiltonian symmetries. 
The Hamiltonian symmetries of this system are found, and it is proven that the Hamiltonian symmetry group of the problem contains a two-dimensional Abelian Lie group. The singular foliation generated by the infinitesimal symmetries invariant under the phase flow of the system is also constructed. In the present paper, smoothness is understood as smoothness of class C∞.
PubDate: May 2020
- Lightlike Hypersurfaces of an Indefinite Kaehler Manifold with an ()-type
Connection
Abstract: Publication date: May 2020
Source:Mathematics and Statistics Volume 8 Number 3 Jae Won Lee Dae Ho Jin and Chul Woo Lee Jin [1] defined an ()-type connection on semi-Riemannian manifolds. The semi-symmetric non-metric connection and the non-metric ∅-symmetric connection are two important examples of this connection, with () = (1; 0) and () = (0; 1), respectively. In semi-Riemannian geometry there is little literature on lightlike geometry, so we develop new theory for non-degenerate submanifolds in semi-Riemannian geometry. The goal of this paper is to study a characterization of a (Lie) recurrent lightlike hypersurface M of an indefinite Kaehler manifold with an ()-type connection when the characteristic vector field is tangent to M. In the special case that the indefinite Kaehler manifold of constant holomorphic sectional curvature is an indefinite complex space form, we investigate a lightlike hypersurface of an indefinite complex space form with an ()-type connection when the characteristic vector field is tangent to M. Moreover, we show that the total space, the complex space form, is characterized by the screen conformal lightlike hypersurface with an ()-type connection. With a semi-symmetric non-metric connection, we show that an indefinite complex space form is flat.
PubDate: May 2020
- Adomian Decomposition Method with Modified Bernstein Polynomials for
Solving Nonlinear Fredholm and Volterra Integral Equations
Abstract: Publication date: May 2020
Source:Mathematics and Statistics Volume 8 Number 3 Mohammad Almousa Many different problems in mathematics, physics, and engineering can be expressed in the form of integral equations. Among these are diffraction problems, population growth, heat transfer, particle transport, electrical engineering, elasticity, control, elastic waves, diffusion, quantum mechanics, heat radiation, electrostatics and contact problems, so the solutions obtained by mathematical methods play an important role in these fields. The two most basic types of integral equations are the Fredholm (FIEs) and Volterra (VIEs) equations. In many instances, ordinary and partial differential equations can be converted into Fredholm and Volterra integral equations, which are solved more effectively. We aim through this research to present an improved Adomian decomposition method based on modified Bernstein polynomials (ADM-MBP) to solve nonlinear integral equations of the second kind. We introduce an efficient method constructed on modified Bernstein polynomials; the formulation is developed to solve nonlinear Fredholm and Volterra integral equations of the second kind. The method is tested on several examples of nonlinear integral equations, and Maple software was used to obtain their solutions. The results demonstrate the reliability of the proposed method, which is generally very convenient for finding solutions of Fredholm and Volterra integral equations of the second kind.
PubDate: May 2020
- MTSD-TCC: A Robust Alternative to Tukey's Control Chart (TCC) Based on the
Modified Trimmed Standard Deviation (MTSD)
Abstract: Publication date: May 2020
Source:Mathematics and Statistics Volume 8 Number 3 Moustafa Omar Ahmed Abu-Shawiesh Muhammad Riaz and Qurat-Ul-Ain Khaliq In this study, a robust alternative to Tukey's control chart (TCC) based on the modified trimmed standard deviation (MTSD), namely the MTSD-TCC, is proposed. The performance of the proposed chart and the competing TCC is measured using run length properties such as the average run length (ARL), the standard deviation of the run length (SDRL), and the median run length (MDRL), under both normal and contaminated cases. We observe that the proposed robust MTSD-TCC chart is quite efficient at detecting process shifts. It is also evident from the simulation results that the MTSD-TCC offers superior detection ability for different trimming levels compared to the TCC under contaminated process setups. As a result, the proposed robust MTSD-TCC chart is recommended for process monitoring. A numerical example using real-life data is provided to illustrate the implementation of the proposed chart, and it supports the results of the simulation study to some extent.
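For readers unfamiliar with the baseline chart, here is a minimal sketch of Tukey-style control limits built from quartiles. The 1.5 multiplier, the simulated in-control data, and the monitored values are illustrative assumptions; the paper's MTSD-based limits are not reproduced here:

```python
# Sketch: Tukey-style control limits from the quartiles of an
# in-control reference sample, with signal detection on new data.
import numpy as np

rng = np.random.default_rng(7)
phase1 = rng.normal(10.0, 1.0, size=100)      # in-control reference data

q1, q3 = np.percentile(phase1, [25, 75])
iqr = q3 - q1
lcl, ucl = q1 - 1.5 * iqr, q3 + 1.5 * iqr     # Tukey fences as limits

phase2 = np.array([9.8, 10.4, 14.9, 10.1])    # monitored observations
signals = [(i, x) for i, x in enumerate(phase2) if not lcl <= x <= ucl]
print(f"limits = ({lcl:.2f}, {ucl:.2f}); signals = {signals}")
```

Because quartiles are resistant to outliers, limits of this form degrade less under contamination than mean-and-sigma limits; the paper's MTSD variant pushes the same robustness idea further via trimming.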
PubDate: May 2020
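As an illustration of the two charting ideas compared in this abstract, the sketch below computes classic Tukey-style control limits from the quartiles and IQR, and an alternative set of limits built from a symmetrically trimmed standard deviation centred at the median. The trimming fraction, the multipliers, and the centring choice are illustrative assumptions; they are not the paper's exact MTSD-TCC formula.

```python
import numpy as np

def tukey_limits(x, k=1.5):
    # Classic Tukey's control chart: limits from the quartiles and the IQR.
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

def trimmed_sd_limits(x, trim=0.1, k=3.0):
    # Hypothetical MTSD-style limits: centre at the median and use the
    # standard deviation of the symmetrically trimmed sample for spread.
    xs = np.sort(np.asarray(x, float))
    g = int(trim * len(xs))
    core = xs[g:len(xs) - g] if g > 0 else xs
    s = core.std(ddof=1)
    m = np.median(xs)
    return m - k * s, m + k * s

# Demo: a stable process with one gross outlier
x = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.05, 9.95, 10.1, 9.9, 30.0])
```

Both charts flag the contaminated point; the trimmed-SD limits are not inflated by it, which is the robustness property the paper studies.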
- Application of Parameterized Hesitant Fuzzy Soft Set Theory in Decision
Making
Abstract: Publication date: May 2020
Source:Mathematics and Statistics Volume 8 Number 3 Zahari Md Rodzi and Abd Ghafur Ahmad In this paper, by combining hesitant fuzzy soft sets (HFSSs) with fuzzy parameterization, we introduce a new hybrid model, fuzzy parameterized hesitant fuzzy soft sets (FPHFSSs). The benefit of this theory is that the degree of importance of each parameter is supplied to the HFSSs directly by the decision makers, and all the information is represented in a single set during the decision making process. We then study its basic operations, such as AND, OR, complement, union and intersection, and prove basic properties such as associativity, distributivity and de Morgan's laws for FPHFSSs. Next, in order to solve multi-criteria decision making (MCDM) problems, we present an arithmetic mean score and a geometric mean score, incorporated with the hesitant degree of FPHFSSs, in TOPSIS. This algorithm avoids the drawback of some existing approaches, which either pad a shorter hesitant fuzzy element with extra values to make it equivalent in length to another hesitant fuzzy element, or duplicate its elements to obtain two sequences of the same length; such approaches break the original data structure and modify the data. Finally, to demonstrate the efficacy and viability of our approach, we compare our algorithm with existing methods.
PubDate: May 2020
- The Consistency of Blindfolding in the Path Analysis Model with Various
Number of Resampling
Abstract: Publication date: May 2020
Source:Mathematics and Statistics Volume 8 Number 3 Solimun and Adji Achmad Rinaldo Fernandes Regression analysis cannot handle complex relationships involving several response variables and intervening endogenous variables; path analysis can. Path analysis rests on several assumptions, one of which is residual normality. If the residual normality assumption is not met, parameter estimation can produce biased, imprecise and inconsistent estimators. The problem of unmet residual normality can be overcome by resampling. Therefore, in this study, a simulation study was conducted applying resampling with the blindfolding method to conditions where the normality assumption is not met, with various numbers of resamples in the path analysis. The simulation results show that consistency is reached at different numbers of resamples for different levels of closeness of relationship: about 1000 resamples at a low level of closeness, 500 at a moderate level, and 1400 at a high level.
PubDate: May 2020
- Hybrid Flow-Shop Scheduling (HFS) Problem Solving with Migrating Birds
Optimization (MBO) Algorithm
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2A Yona Eka Pratiwi Kusbudiono Abduh Riski and Alfian Futuhul Hadi Increasingly rapid industrial development has resulted in increasingly intense competition between industries. Companies are required to maximize performance in various fields, especially by meeting customer demand with the agreed timeliness. Scheduling is the allocation of resources over time to produce a collection of jobs. PT. Bella Agung Citra Mandiri is a manufacturing company engaged in making spring beds. The work stations in the company consist of 5 stages: ram per with three machines, per clamping with one machine, mattress firing with two machines, mattress sewing with three machines, and packing with one machine. The problem solved in this study is Hybrid Flowshop Scheduling, and the optimization method used is the metaheuristic Migrating Birds Optimization. To avoid the problems faced by the company, scheduling is needed that minimizes the makespan while taking the number of parallel machines into account. The results of this study are schedules for 16 jobs and for 46 jobs: the makespan for 16 jobs is reduced by 26 minutes 39 seconds, while for 46 jobs it is reduced by 3 hours 31 minutes 39 seconds.
PubDate: Mar 2020
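The hybrid flow-shop structure described above (a fixed sequence of stages, each with identical parallel machines) can be made concrete with a small greedy list-scheduling sketch: each job visits the stages in order and takes the earliest-free machine at each stage. The job data are illustrative, not the company's; evaluating the makespan of a candidate sequence like this is the inner step that a metaheuristic such as MBO would repeat many times.

```python
import heapq

def hfs_makespan(sequence, proc, machines):
    # proc[j][s] = processing time of job j at stage s;
    # machines[s] = number of identical parallel machines at stage s.
    # Greedy list schedule: jobs enter each stage in the given sequence
    # order and take the earliest-free machine there.
    ready = {j: 0.0 for j in sequence}   # time job j is ready for its next stage
    for s in range(len(machines)):
        free = [0.0] * machines[s]       # next-free times of this stage's machines
        heapq.heapify(free)
        for j in sequence:
            t = heapq.heappop(free)      # earliest-available machine
            finish = max(t, ready[j]) + proc[j][s]
            ready[j] = finish
            heapq.heappush(free, finish)
    return max(ready.values())
```

For example, two jobs of lengths 2 and 3 on one machine give a makespan of 5, while two parallel machines reduce it to 3.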
- Fourth-order Compact Iterative Scheme for the Two-dimensional Time
Fractional Sub-diffusion Equations
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2A Muhammad Asim Khan and Norhashidah Hj. Mohd Ali The fractional diffusion equation is an important mathematical model for describing phenomena of anomalous diffusion in transport processes. A high-order compact iterative scheme is formulated for solving the two-dimensional time fractional sub-diffusion equation. The spatial derivative is evaluated using the Crank-Nicolson scheme with a fourth-order compact approximation, and the Caputo derivative is used for the time fractional derivative to obtain a discrete implicit scheme. The order of convergence of the proposed method is established. Numerical examples are provided to verify the high-order accuracy of the proposed scheme.
PubDate: Mar 2020
- Parameter Estimations of the Generalized Extreme Value Distributions for
Small Sample Size
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2A Razira Aniza Roslan Chin Su Na and Darmesah Gabda The standard maximum likelihood method performs poorly when estimating GEV parameters from small samples. This study aims to explore Generalized Extreme Value (GEV) parameter estimation using several methods, focusing on small sample sizes of an extreme event. We conducted a simulation study to illustrate the performance of different methods, namely maximum likelihood estimation (MLE), the probability weighted moment (PWM) method and the penalized maximum likelihood method (PMLE), in estimating the GEV parameters. Based on the simulation results, we then applied the superior method to modelling the annual maximum stream flow in Sabah. The simulation study shows that the PMLE gives better estimates than MLE and PWM, as it has smaller bias and root mean square error (RMSE). As an application, we then compute the estimated return level of river flow in Sabah.
PubDate: Mar 2020
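Of the three estimators compared in this abstract, the PWM estimator is the easiest to sketch because it is essentially closed-form. The following is a minimal Hosking-style PWM fit for the GEV; the rational approximation for the shape parameter and the parameterization are the standard ones from the PWM literature, offered here as an illustrative sketch rather than the authors' code.

```python
import math

def gev_pwm(sample):
    # Hosking-style probability-weighted-moment (PWM) estimates of the
    # GEV location (xi), scale (alpha) and shape (k) parameters.
    x = sorted(sample)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum((j / (n - 1)) * x[j] for j in range(n)) / n
    b2 = sum((j * (j - 1) / ((n - 1) * (n - 2))) * x[j] for j in range(n)) / n
    c = (2 * b1 - b0) / (3 * b2 - b0) - math.log(2) / math.log(3)
    k = 7.8590 * c + 2.9554 * c * c          # standard rational approximation
    g = math.gamma(1 + k)
    alpha = (2 * b1 - b0) * k / (g * (1 - 2 ** (-k)))
    xi = b0 + alpha * (g - 1) / k
    return xi, alpha, k

# Demo: pseudo-sample from the standard Gumbel (shape k = 0) quantile function
sample = [-math.log(-math.log((j + 0.5) / 100)) for j in range(100)]
xi, alpha, k = gev_pwm(sample)
```

On the Gumbel pseudo-sample the fit should recover a shape near 0, scale near 1 and location near 0, which is the kind of sanity check a simulation study builds on.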
- An Alternative Approach for Finding Newton's Direction in Solving
Large-Scale Unconstrained Optimization for Problems with an Arrowhead
Hessian Matrix
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2A Khadizah Ghazali Jumat Sulaiman Yosza Dasril and Darmesah Gabda In this paper, we propose an alternative way to find the Newton direction in solving large-scale unconstrained optimization problems whose Hessian is an arrowhead matrix. The alternative approach is a two-point Explicit Group Gauss-Seidel (2EGGS) block iterative method. To check the validity of our proposed Newton direction, we combined the Newton method with 2EGGS iteration for solving unconstrained optimization problems and compared it with the Newton method combined with Gauss-Seidel (GS) point iteration and with Jacobi point iteration. The numerical experiments were carried out using three different artificial test problems whose Hessians are arrowhead matrices. In conclusion, the numerical results show that our proposed method is superior to the reference methods in terms of the number of inner iterations and the execution time.
PubDate: Mar 2020
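An arrowhead Hessian (diagonal plus one dense row and column) also admits a direct O(n) solve by eliminating the arrow variable, which makes a useful baseline against which iterative schemes such as 2EGGS can be judged. The sketch below is a generic arrowhead solver, not the authors' 2EGGS method.

```python
def solve_arrowhead(d, z, a, r1, r2):
    # Solve [[diag(d), z], [z^T, a]] [y; t] = [r1; r2] in O(n),
    # exploiting the arrowhead structure instead of a dense factorization:
    # eliminate y = D^{-1}(r1 - z t), then solve a scalar equation for t.
    s1 = sum(zi * ri / di for zi, ri, di in zip(z, r1, d))
    s2 = sum(zi * zi / di for zi, di in zip(z, d))
    t = (r2 - s1) / (a - s2)
    y = [(ri - zi * t) / di for zi, ri, di in zip(z, r1, d)]
    return y, t

# Demo: a small Newton-style system H p = r with an arrowhead H
y, t = solve_arrowhead([2.0, 3.0], [1.0, 1.0], 5.0, [1.0, 2.0], 3.0)
```

Substituting the solution back into the three equations of the demo system reproduces the right-hand side, confirming the elimination.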
- Robust Method in Multiple Linear Regression Model on Diabetes Patients
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2A Mohd Saifullah Rusiman Siti Nasuha Md Nor Suparman and Siti Noor Asyikin Mohd Razali This paper focuses on the application of robust methods in a multiple linear regression (MLR) model of diabetes data. The objectives of this study are to identify the significant variables that affect diabetes by using the MLR model with and without robust methods, and to measure the performance of the MLR model with and without robust methods. Robust methods are used in order to overcome the outlier problem in the data. Three robust methods are used in this study: the least quartile difference (LQD), median absolute deviation (MAD) and least trimmed squares (LTS) estimators. The results show that multiple linear regression with the LTS estimator is the best model, since it has the lowest mean square error (MSE) and mean absolute error (MAE). In conclusion, plasma glucose concentration in an oral glucose tolerance test is positively affected by body mass index, diastolic blood pressure, triceps skin fold thickness, diabetes pedigree function, age and the yes/no diabetes indicator according to WHO criteria, while it is negatively affected by the number of pregnancies. This finding can be used as a guideline for medical doctors in the early prevention of type 2 diabetes.
PubDate: Mar 2020
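The LTS idea (fit on the h observations with the smallest squared residuals) can be sketched with random elemental starts followed by concentration steps, a simplified cousin of the FAST-LTS algorithm. The data, trimming count and start counts below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lts_fit(X, y, h=None, n_starts=20, seed=0):
    # Rough least-trimmed-squares (LTS) sketch: fit lines through random
    # elemental subsets, then repeatedly refit on the h observations with
    # the smallest squared residuals (concentration steps).
    rng = np.random.default_rng(seed)
    n, p = X.shape
    h = h or (n + p + 1) // 2
    best_obj, best_beta = np.inf, None
    for _ in range(n_starts):
        idx = rng.choice(n, size=p, replace=False)
        beta = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
        for _ in range(10):                      # concentration steps
            keep = np.argsort((y - X @ beta) ** 2)[:h]
            beta = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        obj = np.sort((y - X @ beta) ** 2)[:h].sum()
        if obj < best_obj:
            best_obj, best_beta = obj, beta
    return best_beta

# Demo: a straight line y = 1 + 2x with three gross outliers
X = np.column_stack([np.ones(20), np.arange(20.0)])
y = 1.0 + 2.0 * np.arange(20.0)
y[:3] += 50.0
beta = lts_fit(X, y)
```

Ordinary least squares would be pulled toward the three shifted points; the trimmed fit recovers the clean line, which is the behaviour the abstract reports for the LTS estimator.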
- Weakly Special Classes of Modules
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2A Puguh Wahyu Prasetyo Indah Emilia Wijayanti Halina France-Jackson and Joe Repka In the development of the radical theory of rings, there are two kinds of radical constructions: the lower radical construction and the upper radical construction. The class π of all prime rings forms a special class, and the upper radical class of π forms a radical class called the prime radical. An upper radical class generated by a special class of rings is called a special radical class. On the other hand, the class of all semiprime rings is a weakly special class of rings. Moreover, one can construct a special class of modules from a given special class of rings. This motivates the question of how to construct a weakly special class of modules from a given weakly special class of rings. This research is qualitative, and its results are derived from the fundamental axioms and properties of radical classes of rings, especially special and weakly special radical classes. In this paper, we introduce the notion of a weakly special class of modules, a generalization of the notion of a special class of modules, based on the definition of semiprime modules. Furthermore, some properties and examples of weakly special classes of modules are given. The main results of this work are the definition of a weakly special class of modules and its properties.
PubDate: Mar 2020
- Bayesian Estimation in Piecewise Constant Model with Gamma Noise by Using
Reversible Jump MCMC
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2A Suparman A piecewise constant model is often applied to model data in many fields, and several kinds of noise can be added to it. This paper proposes a piecewise constant model with gamma multiplicative noise and a method to estimate the parameters of the model. The estimation is done in a Bayesian framework. A prior distribution for the model parameters is chosen, and it is multiplied by the likelihood function of the data to build a posterior distribution for the parameters. Because the number of segments is itself a parameter, the posterior distribution has a form too complex for the Bayes estimator to be calculated easily. A reversible jump Markov Chain Monte Carlo (MCMC) method is therefore used to find the Bayes estimator of the model parameters. The result of this paper is the development of the piecewise constant model and of the method to estimate its parameters. An advantage of this method is that it estimates all the parameters of the piecewise constant model simultaneously.
PubDate: Mar 2020
- Approximate Analytical Solutions of Nonlinear Korteweg-de Vries Equations
Using Multistep Modified Reduced Differential Transform Method
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2A Che Haziqah Che Hussin Ahmad Izani Md Ismail Adem Kilicman and Amirah Azmi This paper proposes and investigates the application of the Multistep Modified Reduced Differential Transform Method (MMRDTM) for solving the nonlinear Korteweg-de Vries (KdV) equation. The proposed technique has the advantage of producing an analytical approximation as a fast-converging sequence with a reduced number of calculated terms. The MMRDTM modifies the reduced differential transform method (RDTM) by replacing the nonlinear term with the corresponding Adomian polynomials and then adopting a multistep approach. Consequently, the obtained approximations not only involve a smaller number of calculated terms for the nonlinear KdV equation, but also converge rapidly over a broad time frame. We provide three examples to illustrate the advantages of the proposed method in obtaining approximate solutions of the KdV equation. To depict the solutions and show the validity and precision of the MMRDTM, graphical results are included.
PubDate: Mar 2020
- The Performance of Different Correlation Coefficient under Contaminated
Bivariate Data
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2A Bahtiar Jamili Zaini and Shamshuritawati Sharif Bivariate data consist of 2 random variables obtained from the same population. The relationship between the 2 variables can be measured by a correlation coefficient computed from the sample data, which gives the strength and direction of their linear relationship. However, the classical correlation coefficients are inadequate in the presence of outliers. Therefore, this study focuses on the performance of different correlation coefficients under contaminated bivariate data. We compared the performance of 5 types of correlation coefficients: the classical Pearson, Spearman and Kendall's tau correlations, and the robust median and median absolute deviation correlations. Results show that when there is no contamination in the data, all 5 correlation methods indicate a strong relationship between the 2 random variables. Under data contamination, however, the median absolute deviation correlation still indicates a strong relationship, unlike the other methods.
PubDate: Mar 2020
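A small numerical sketch of the comparison: Pearson's correlation collapses when a single gross outlier is added, while a MAD-based robust correlation barely moves. The MAD correlation below is a Gnanadesikan-Kettenring-style construction chosen for illustration; the paper's exact definitions of the median and MAD correlations may differ.

```python
import numpy as np

def mad(x):
    # Median absolute deviation (unscaled)
    return np.median(np.abs(x - np.median(x)))

def pearson(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

def mad_corr(x, y):
    # Robust correlation: standardize by median/MAD, then compare the
    # spreads of u+v and u-v (a Gnanadesikan-Kettenring-style estimator).
    u = (x - np.median(x)) / mad(x)
    v = (y - np.median(y)) / mad(y)
    a, b = mad(u + v), mad(u - v)
    return (a * a - b * b) / (a * a + b * b)

# Demo: a perfect linear relationship, then one contaminated point
x = np.arange(30.0)
y_clean = x.copy()
y_out = x.copy()
y_out[-1] = -100.0
```

On the clean pair both coefficients are essentially 1; after contaminating one point, Pearson's coefficient drops to near 0 while the MAD correlation stays near 1.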
- Stochastic Decomposition Result of an Unreliable Queue with Two Types of
Services
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2 Gautam Choudhury Akhil Goswami Anjana Begum and Hemanta Kumar Sarmah The single server queue with two types of heterogeneous services and generalized vacation for an unreliable server has been extended in several directions, to which several researchers have paid attention. One of the most important results for such models is the "stochastic decomposition result", which allows the system behaviour to be analyzed by considering separately the distribution of the system (queue) size with no vacation and the additional system (queue) size due to vacation. Our intention is to provide a unified approach to establishing the stochastic decomposition result for queues with two types of general heterogeneous service and generalized vacations for an unreliable server with delayed repair, covering several generalizations. Our results are based on the embedded Markov chain technique, a powerful and popular method widely used in applied probability, especially in queueing theory; its fundamental idea is to simplify the description of the state from a two-dimensional to a one-dimensional state space. Finally, the derived results are shown to include several generalizations of existing well-known results for vacation models, which may lead to remarkable simplification when solving similar complex models.
PubDate: Mar 2020
- Approximations for Theories of Abelian Groups
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2 Inessa I. Pavlyuk and Sergey V. Sudoplatov Approximations of syntactic and semantic objects play an important role in various fields of mathematics. They can create theories and structures in one given class by means of others, usually simpler. For instance, in certain situations, infinite objects can be approximated by finite or strongly minimal ones. Thus, complicated objects can be collected using simplified ones. Among these objects, Abelian groups, their first order theories, connections and dynamics are of interest. Theories of Abelian groups are characterized by Szmielew invariants, leading to the study and description of approximations in terms of these invariants. In the paper we apply a general approach for approximating theories to the class of theories of Abelian groups, which characterizes the approximability of a theory of Abelian groups by a given family of theories of Abelian groups in terms of Szmielew invariants and their limits. We describe some forms of approximations for theories of Abelian groups. In particular, approximations of theories of Abelian groups by theories of finite ones are characterized. In addition, we describe approximations by quasi-cyclic and torsion-free Abelian groups and their combinations with respect to given families of prime numbers. Approximations and closures of families of theories with respect to standard Abelian groups for various sets of prime numbers are also described.
PubDate: Mar 2020
- Groundwater-quality Assessment Models with Total Nitrogen Transformation
Effects
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2 Supawan Yena and Nopparat Pochai Nitrogen is emitted extensively by industrial companies, increasing nitrogen compounds such as ammonia, nitrate and nitrite in soil and water as a result of nitrogen cycle reactions. Groundwater contamination with nitrates and nitrites impacts human health, and mathematical models can describe it. A hydraulic head model provides the hydraulic head of the groundwater; a groundwater velocity model provides the x- and y-direction velocity components; and a contaminant distribution model provides the nitrogen, nitrate and nitrite concentrations. Finite difference techniques are used to approximate the model solutions: an alternating direction explicit method for the hydraulic head model, a centered-space scheme for the groundwater velocity model, and a forward-time centered-space scheme for the contaminant transport model. We simulate different scenarios to explain the pollution by leachate water underground, paying attention to the toxic nitrogen compounds, ammonia, nitrate and nitrite, blended in the water.
PubDate: Mar 2020
- Improved Frequency Table with Application to Environmental Data
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2 Mohammed M. B. Adam M. B. Zulkafli H. S. and Ali N. This paper proposes three different statistics to represent the magnitude of the observations in each class when estimating statistical measures from a frequency table for continuous data. Existing frequency tables use the midpoint as the magnitude of the observations in each class, which results in an error called grouping error. Using the midpoint rests on the assumption that the observations in each class are uniformly distributed and concentrated around the midpoint, which is not always valid. In this research, frequency tables constructed using the three proposed statistics, the arithmetic mean, the median and the midrange, and using the midpoint are respectively named Method 1, Method 2, Method 3 and the Existing Method. The four methods are compared by root mean squared error (RMSE) in simulation studies using three distributions: normal, uniform and exponential. The simulation results are validated using real data, the Glasgow weather data. The findings indicate that using the arithmetic mean to represent the magnitude of the observations in each class of the frequency table leads to minimal error relative to the other statistics. It is followed by the median for data simulated from the normal and exponential distributions, and by the midrange for data simulated from the uniform distribution. Meanwhile, among seven different rules for choosing the number of classes used in constructing the frequency tables, the Freedman-Diaconis rule is recommended.
PubDate: Mar 2020
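The grouping-error effect is easy to demonstrate: build a frequency table, represent each class by a chosen statistic, and compare the resulting estimate of the mean against the ungrouped sample mean. The seven-class choice and the skewed pseudo-sample below are illustrative assumptions (the paper also studies other class-number rules and distributions).

```python
import numpy as np

def grouped_mean(data, n_classes=7, stat="midpoint"):
    # Estimate the mean from a frequency table, representing each class
    # by its midpoint, class mean, class median or class midrange.
    edges = np.linspace(min(data), max(data), n_classes + 1)
    idx = np.clip(np.digitize(data, edges[1:-1]), 0, n_classes - 1)
    total, n = 0.0, len(data)
    for c in range(n_classes):
        members = [v for v, i in zip(data, idx) if i == c]
        if not members:
            continue
        if stat == "midpoint":
            rep = 0.5 * (edges[c] + edges[c + 1])
        elif stat == "mean":
            rep = float(np.mean(members))
        elif stat == "median":
            rep = float(np.median(members))
        else:  # midrange
            rep = 0.5 * (min(members) + max(members))
        total += rep * len(members)
    return total / n

# Demo: right-skewed pseudo-sample from the Exp(1) quantile function
data = -np.log(1.0 - (np.arange(200) + 0.5) / 200.0)
true_mean = data.mean()
err_mid = abs(grouped_mean(data, stat="midpoint") - true_mean)
err_mean = abs(grouped_mean(data, stat="mean") - true_mean)
```

On skewed data the midpoint representative is biased, while the class arithmetic mean reproduces the sample mean exactly, which is the ordering the paper reports.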
- Solvability, Completeness and Computational Analysis of A Perturbed
Control Problem with Delays
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2 Ludwik Byszewski Denis Blackmore Alexander A. Balinsky Anatolij K. Prykarpatski and Mirosław Luśtyk As a first step, we provide a precise mathematical framework for the class of control problems with delays (which we refer to as the control problem) under investigation in a Banach space setting, followed by careful definitions of the key properties to be analyzed, such as solvability and complete controllability. Then, we recast the control problem in a reduced form that is especially amenable to the innovative analytical approach that we employ. We then study in depth the solvability and completeness of the (reduced) nonlinearly perturbed linear control problem with delay parameters. The main tool in our approach is the use of a Borsuk-Ulam type fixed point theorem to analyze the topological structure of a suitably reduced control problem solution, with a focus on estimating the dimension of the corresponding solution set and proving its completeness. Next, we investigate its analytical solvability under some special, mildly restrictive conditions imposed on the linear control and nonlinear functional perturbation. Then, we describe a novel computational projection-based discretization scheme of our own devising for obtaining accurate approximate solutions of the control problem, along with useful error estimates. The scheme effectively reduces the infinite-dimensional problem to a sequence of solvable finite-dimensional matrix-valued tasks. Finally, we include an application of the scheme to a special degenerate case of the problem wherein the Banach-Steinhaus theorem is brought to bear in the estimation process.
PubDate: Mar 2020
- The Way of Pooling p-values
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2 Fausto Galetto Pooling p-values arises in both practical (science and engineering applications) and theoretical (statistical) settings. The p-value (sometimes p value) is a probability used as a statistical decision quantity: in practical applications, it is used to decide whether an experimenter should believe that the collected data confirm or disconfirm a hypothesis about the "reality" of a phenomenon. It is a real number, the realization of a random variable that is uniformly distributed under the null hypothesis, related to the data provided by the measurement of a phenomenon. Almost all statistical software provides p-values when statistical hypotheses are considered, e.g. in analysis of variance and regression methods. Combining the p-values from various samples is crucial, because the numbers of degrees of freedom (df) of the samples we want to combine influence our decision: forgetting this can have dangerous consequences. One way of pooling p-values is provided by a formula of Fisher; unfortunately, this method does not consider the number of degrees of freedom. We show other ways of doing so and prove that theory is more important than any formula that does not consider the phenomenon on which we have to decide: the distribution of the random variables is fundamental in order to pool data from various samples. Managers, professors and scholars should remember Deming's profound knowledge and Juran's ideas; profound knowledge means "understanding variation (type of variation)" in any process, production or managerial; not understanding variation causes the cost of poor quality (more than 80% of sales value) and does not permit real improvement.
PubDate: Mar 2020
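Fisher's formula, which this abstract criticizes for ignoring the degrees of freedom behind each p-value, is simple to state: under the null hypothesis, -2 times the sum of the log p-values follows a chi-square distribution with 2k degrees of freedom for k independent p-values. The sketch below implements exactly that baseline, using the closed-form chi-square survival function available for even degrees of freedom.

```python
import math

def fisher_pooled_p(pvalues):
    # Fisher's method: under H0, -2 * sum(ln p_i) ~ chi-square with 2k df.
    # For even df (2k) the chi-square survival function has the closed
    # form exp(-x/2) * sum_{j<k} (x/2)^j / j!.
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    term, s = 1.0, 1.0
    for j in range(1, k):
        term *= (x / 2.0) / j
        s += term
    return math.exp(-x / 2.0) * s
```

Pooling a single p-value returns it unchanged, and pooling two p-values of 0.05 gives roughly 0.0175, stronger evidence than either alone; the paper's point is that this formula treats all p-values alike regardless of the df that produced them.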
- Analysis of the Element's Arrangement Structures in Discrete Numerical
Sequences
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2 Anton Epifanov This paper contains the results of an analysis of the laws of functioning of discrete dynamical systems, whose mathematical models, via the apparatus of geometric images of automata, are numerical sequences interpreted as sequences of the second coordinates of points of the geometric images of automata. The geometric images of the laws of functioning of an automaton are reduced to numerical sequences and numerical graphs, and the problem of estimating the complexity of the structures of such sequences is considered. To analyze the structure of the sequences, recurrence forms are used that characterize the relative positions of elements in the sequence. The parameters of the recurrence forms considered characterize the lengths of the initial segments of sequences determined by recurrence forms of fixed orders, the number of changes of recurrence forms required to determine the entire sequence, the places where recurrence forms change, etc. All these parameters are systematized into a special spectrum of dynamic parameters used for the recurrent determination of sequences, which serves as a means of estimating the complexity of sequences. The paper also analyzes recurrent sequences (for example, the Fibonacci numbers), whose properties are studied via characteristic sequences. The properties of sequences defining approximations of fundamental mathematical constants (the number e, the number π, the golden ratio, the Euler constant, the Catalan constant, values of the Riemann zeta function, etc.) are studied. Complexity estimates are constructed for characteristic sequences that single out numbers with specific properties in the natural series, as well as for characteristic sequences that reflect combinations of properties of numbers.
PubDate: Mar 2020
- Orthogonal Splines in Approximation of Functions
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2 Leontiev V. L. The problem of approximating a surface given by the values of a function of two arguments at a finite number of points of a certain region is, in the classical formulation, reduced to solving a system of algebraic equations with dense or banded matrices. In the case of complex surfaces, such a problem requires a significant number of arithmetic operations and significant computer time. A curvilinear boundary of a domain of general type does not allow classical orthogonal polynomials or trigonometric functions to be used for this problem. This paper is devoted to the application of orthogonal splines to the approximation of functions in the form of finite Fourier series. Orthogonal functions with compact supports make such approximations possible in regions with arbitrary boundary geometry in multidimensional cases. The fields of application of classical orthogonal polynomials, trigonometric functions and orthogonal splines in approximation problems are compared, and the advantages of orthogonal splines in multidimensional problems are shown. The function approximation problem is formulated in variational form, a system of equations for the coefficients of the linear approximation with a diagonal matrix is formed, and expressions for the Fourier coefficients and approximations in the form of a finite Fourier series are written out. Examples of approximations are considered, and the efficiency of orthogonal splines is shown. The development of this direction, associated with the use of other orthogonal splines, is discussed.
PubDate: Mar 2020
- Numerical Simulation of a Two-Dimensional Vertically Averaged Groundwater
Quality Assessment in Homogeneous Aquifer Using Explicit Finite Difference
Techniques
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2 Supawan Yena and Nopparat Pochai Leachate contamination from a landfill causes pollution that flows down into the groundwater. There are many methods to measure groundwater quality, and mathematical models are often used to describe groundwater flow. This research focuses on the effect of landfill construction on groundwater quality around a rural area. Three mathematical models are combined. The first is a two-dimensional groundwater flow model, which provides the hydraulic head of the groundwater. The second is a velocity potential model, which provides the groundwater flow velocity. The third is a two-dimensional vertically averaged groundwater pollution dispersion model, which provides the groundwater pollutant concentration. A forward-time scheme with centered space differences in the interior and forward and backward differences at the boundaries is used to approximate the hydraulic head and the flow velocities in the x- and y-directions. The approximated groundwater flow velocity is then input into the two-dimensional vertically averaged groundwater pollution dispersion model, and the same forward-time centered-space scheme, with forward and backward differences at the boundaries, is used to approximate the groundwater pollutant concentration. The proposed explicit forward-time centered-space finite difference techniques give approximate solutions in good agreement for the groundwater flow model, the velocity potential model and the groundwater pollution dispersion model.
PubDate: Mar 2020
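A minimal version of an explicit forward-time centered-space (FTCS) update can be written for a generic 2D advection-diffusion equation dc/dt = D(c_xx + c_yy) - u c_x - v c_y. The grid, coefficients and boundary treatment below are illustrative (fixed zero boundaries, constant velocity), not the paper's full vertically averaged model; the usual explicit stability limit (dt at most dx^2/(4D) for pure diffusion) applies.

```python
import numpy as np

def ftcs_step(c, u, v, D, dx, dy, dt):
    # One forward-time step: centered differences for both the diffusion
    # and advection terms on the interior; boundary values are left fixed.
    cn = c.copy()
    cxx = (c[2:, 1:-1] - 2 * c[1:-1, 1:-1] + c[:-2, 1:-1]) / dx**2
    cyy = (c[1:-1, 2:] - 2 * c[1:-1, 1:-1] + c[1:-1, :-2]) / dy**2
    cx = (c[2:, 1:-1] - c[:-2, 1:-1]) / (2 * dx)
    cy = (c[1:-1, 2:] - c[1:-1, :-2]) / (2 * dy)
    cn[1:-1, 1:-1] = c[1:-1, 1:-1] + dt * (D * (cxx + cyy) - u * cx - v * cy)
    return cn

# Demo: pure diffusion of a unit point release on a 21x21 grid
c = np.zeros((21, 21))
c[10, 10] = 1.0
for _ in range(8):
    c = ftcs_step(c, u=0.0, v=0.0, D=1.0, dx=1.0, dy=1.0, dt=0.2)
```

With dt = 0.2 the scheme stays within the stability limit, so the pulse spreads while total mass is conserved and the concentration remains nonnegative.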
- Probability Aspects of Entrance Exams at University
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2 Jindrich Klufa The entrance examination tests at the Faculty of International Relations at the University of Economics in Prague were shortened from 50 questions to 40 questions for time reasons. These are multiple choice question tests, which are suitable for entrance examinations at the University of Economics in Prague: the tests are objective, and the results can be evaluated quite easily and quickly for a large number of students. On the other hand, a student can obtain a certain number of points in the test purely by guessing the right answers. The shortening of the tests from 50 questions to 40 questions has a negative influence on the probability distribution of the number of points in the test (under the assumption of random choice of answers). Therefore, this paper suggests a solution to this problem. A comparison of three ways of accepting applicants to study at the Faculty of International Relations is performed from the probability point of view. The results show that there has been a significant improvement in the probability distributions of the number of points in the tests. The obtained conclusions can be used in the admission process at the Faculty of International Relations in coming years.
PubDate: Mar 2020
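The guessing effect discussed in this abstract is a binomial tail computation. The sketch below assumes five answer options per question, so a 0.2 success probability per pure guess (the actual number of options on the Prague tests is not stated in the abstract), and compares the chance of scoring at least half marks by guessing alone on a 40-question versus a 50-question test.

```python
from math import comb

def p_at_least(n, m, p=0.2):
    # P(at least m correct out of n questions when every answer is a pure
    # guess); p = 1/number_of_choices (5 options assumed, an illustrative
    # choice rather than the actual test format).
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

# Chance of reaching a 50% pass mark by guessing on each test length
p40 = p_at_least(40, 20)
p50 = p_at_least(50, 25)
```

Both probabilities are tiny, but the 40-question test gives the guesser a strictly larger chance than the 50-question test, which is the direction of the negative effect of shortening described in the abstract.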
- Sufficient Conditions for Univalence Obtained by Using Briot-Bouquet
Differential Subordination
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2 Georgia Irina Oros and Alina Alb Lupas In this paper, we define the differential-integral operator Im, where Sm is the Sălăgean differential operator and Lm is the Libera integral operator. Using the operator Im, a class of univalent functions is defined and several differential subordinations are studied. Even if the use of linear operators and the introduction of new classes of functions in which subordinations are studied is a well-known process, the results are new and could be of interest to young researchers because of the new approach derived from mixing a differential operator with an integral one. Using this differential-integral operator, we obtain new sufficient conditions for the functions in certain classes to be univalent. For the newly introduced class of functions, we show that it is a class of convex functions, and we prove some inclusion relations depending on the parameters of the class. We also show that this class contains as a subclass the class of functions with bounded rotation, studied earlier by many authors cited in the paper. Using the method of subordination chains, some differential subordinations in their special Briot-Bouquet form are obtained for the differential-integral operator introduced in the paper, and the best dominant of the Briot-Bouquet differential subordination is given. As a consequence, sufficient conditions for univalence are stated in two criteria. An example is also given, showing how the operator is used to obtain Briot-Bouquet differential subordinations and the best dominant.
PubDate: Mar 2020
- Geometric Topics on Elementary Amenable Groups
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2 Mostafa Ftouhi Mohammed Barmaki and Driss Gretete The class of amenable groups plays an important role in many areas of mathematics such as ergodic theory, harmonic analysis, representation theory, dynamical systems, geometric group theory, probability theory and statistics. The class of amenable groups contains in particular all finite groups, all abelian groups and, more generally, all solvable groups. It is closed under the operations of taking subgroups, quotients, extensions, and inductive limits. In 1959, Harry Kesten proved that there is a relation between amenability and estimates of the symmetric random walk on finitely generated groups. In this article we study the classification of compactly generated, locally compact groups according to the return probability to the origin, which is the central tool of our comparison. We introduce several classes of groups in order to characterize the geometry of compactly generated, locally compact groups, and we compare these classes in order to better understand the geometry of such groups through the behavior of random walks on them. As results, we establish inclusion relationships between the defined classes and give counterexamples to the reciprocal inclusions.
PubDate: Mar 2020
- Semi Bounded Solution of Hypersingular Integral Equations of the First
Kind on the Rectangle
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2 Zainidin Eshkuvatov Massamdi Kommuji Rakhmatullo Aloev Nik Mohd Asri Nik Long and Mirzoali Khudoyberganov Hypersingular integral equations (HSIEs) of the first kind on the interval [-1, 1], with the assumption that the kernel of the hypersingular integral is constant on the diagonal of the domain, are considered. Truncated series of Chebyshev polynomials of the third and fourth kinds are used to find semi-bounded (unbounded on the left and bounded on the right, and vice versa) solutions of HSIEs of the first kind. Exact evaluation of the singular and hypersingular integrals of Chebyshev polynomials of the third and fourth kinds with the corresponding weights allows us to obtain highly accurate approximate solutions. The Gauss-Chebyshev quadrature formula is extended to integrals with regular kernels. Three examples are provided to verify the validity and accuracy of the proposed method. The numerical examples reveal that the approximate solutions are exact whenever the solution of the HSIE is of polynomial form with the corresponding weight.
PubDate: Mar 2020
- Comparison Analysis: Large Data Classification Using PLS-DA and Decision
Trees
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2 Nurazlina Abdul Rashid Norashikin Nasaruddin Kartini Kassim and Amirah Hazwani Abdul Rahim Classification studies are widely applied in many areas of research. In this study, we use classification analysis to explore approaches for tackling the classification problem for a large number of measures using partial least squares discriminant analysis (PLS-DA) and decision trees (DT). The performance of both methods was compared using a sample of breast tissue data from the University of Wisconsin Hospital, where PLS-DA and DT predict the diagnosis of breast tissues (M = malignant, B = benign). A total of 699 patient diagnoses (458 benign and 241 malignant) are used in this study. The performance of PLS-DA and DT has been evaluated based on the misclassification error and accuracy rate. The results show that PLS-DA can be considered a good and reliable technique for the classification task when dealing with a large dataset, with good prediction accuracy.
PubDate: Mar 2020
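The two evaluation criteria named in the abstract — accuracy rate and misclassification error — can be sketched as follows (the tiny label vectors below are invented for illustration, not the Wisconsin data):

```python
def accuracy_rate(y_true, y_pred):
    """Fraction of diagnoses predicted correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def misclassification_error(y_true, y_pred):
    """Complement of the accuracy rate."""
    return 1.0 - accuracy_rate(y_true, y_pred)

# Hypothetical diagnoses: M = malignant, B = benign.
y_true = ["M", "B", "B", "M", "B"]
y_pred = ["M", "B", "M", "M", "B"]
acc = accuracy_rate(y_true, y_pred)          # 4 of 5 correct
err = misclassification_error(y_true, y_pred)
```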
- On Degenerations and Invariants of Low-Dimensional Complex Nilpotent
Leibniz Algebras
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2 Nurul Shazwani Mohamed Sharifah Kartini Said Husain and Faridah Yunos Given two algebras, if the second lies in the Zariski closure of the orbit of the first, we say that the second is a degeneration of the first. Degenerations (or contractions) have been widely applied from both physical and mathematical points of view. The most well-known application-oriented example of degeneration is the limiting process from quantum mechanics to classical mechanics, which corresponds to the contraction of the Heisenberg algebras to the abelian ones of the same dimension. Research on degenerations of Lie, Leibniz and other classes of algebras is very active. Throughout the paper we deal with the mathematical background of these abstract algebraic structures. The present paper is devoted to the degenerations of low-dimensional nilpotent Leibniz algebras over the field of complex numbers. In particular, we focus on the classification of three-dimensional nilpotent Leibniz algebras. A list of invariance arguments is provided and their dimensions are calculated in order to find the possible degenerations between each pair of algebras. We show that for each possible degeneration there exists a construction of a parametrized basis. We prove the non-degeneration cases for the mentioned classes of algebras by providing reasons to reject the degenerations. As a result, we give a complete list of degenerations and non-degenerations of low-dimensional complex nilpotent Leibniz algebras. In future research, this result can be used to find the rigidity and irreducible components.
PubDate: Mar 2020
- The Semi Analytics Iterative Method for Solving Newell-Whitehead-Segel
Equation
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2 Busyra Latif Mat Salim Selamat Ainnur Nasreen Rosli Alifah Ilyana Yusoff and Nur Munirah Hasan The Newell-Whitehead-Segel (NWS) equation is a nonlinear partial differential equation used in modeling various phenomena arising in fluid mechanics. In recent years, various methods have been used to solve the NWS equation, such as the Adomian Decomposition method (ADM), the Homotopy Perturbation method (HPM), the New Iterative method (NIM), the Laplace Adomian Decomposition method (LADM) and the Reduced Differential Transform method (RDTM). In this study, the NWS equation is solved approximately using the Semi Analytical Iterative method (SAIM) to determine the accuracy and effectiveness of this method. Comparisons of the results obtained by SAIM with the exact solution and with existing results obtained by other methods such as ADM, LADM, NIM and RDTM reveal the accuracy and effectiveness of the method. The solution obtained by SAIM is close to the exact solution, and its error function is closer to zero than those of the other methods mentioned above. The results were computed using Maple 17. SAIM is accurate, reliable, and easier to apply to nonlinear problems, since it is simple, straightforward, and derivative-free; it does not require calculating multiple integrals and demands less computational work.
PubDate: Mar 2020
- From Exploratory Data Analysis to Exploratory Spatial Data Analysis
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2 Patricia Abelairas-Etxebarria and Inma Astorkiza The Exploratory Data Analysis introduced by Tukey [19] has been used in much research in many areas, especially the social sciences. This technique searches for behavioral patterns of the variables of the study, establishing hypotheses with the least possible structure. In recent times, however, the inclusion of the spatial perspective in this type of analysis has been revealed as essential because, in many analyses, the observations are spatially autocorrelated and/or present spatial heterogeneity. The presence of these spatial effects makes it necessary to include spatial statistics and spatial tools in the Exploratory Data Analysis. Exploratory Spatial Data Analysis comprises a set of techniques that describe and visualize these spatial effects: spatial dependence and spatial heterogeneity. It describes and visualizes spatial distributions, identifies outliers, finds distribution patterns, clusters and hot spots, and suggests spatial regimes or other forms of spatial heterogeneity, and it is being increasingly used. With the objective of reviewing the latest applications of this technique, this paper first presents the tools used in Exploratory Spatial Data Analysis and then reviews its latest applications, focused particularly on different areas of the social sciences. In conclusion, the growing interest in using this spatial technique to analyze different aspects of the social sciences, including the spatial dimension, should be noted.
PubDate: Mar 2020
- A New Method to Estimate Parameters in the Simple Regression Linear
Equation
Abstract: Publication date: Mar 2020
Source:Mathematics and Statistics Volume 8 Number 2 Agung Prabowo Agus Sugandha Agustini Tripena Mustafa Mamat Sukono and Ruly Budiono Linear regression is widely used in various fields. Research on linear regression uses the OLS and ML methods to estimate its parameters. The OLS and ML methods require many assumptions to be fulfilled, and it is frequently found that some assumption does not hold, so that the two methods cannot be used successfully. This paper proposes a new method that does not require such assumptions, called SAM (Simple Averaging Method), to estimate the parameters of the simple linear regression model. The method may be used without fulfilling the assumptions of the regression model. Three new theorems are formulated to simplify the estimation of the parameters of the simple linear regression model with SAM. Using the same data, the parameters of the simple linear regression model are estimated with SAM; the result shows that the obtained regression parameters are not very different. To measure the accuracy of both methods, the errors made by each method are compared using the Root Mean Square Error (RMSE) and the Mean Absolute Error (MAE). Comparing the values of RMSE and MAE for both methods shows that the SAM method may be used to estimate the parameters of the regression equation. The advantage of SAM is that it is free from all the assumptions required by regression, such as the assumption that the errors follow a normal distribution.
PubDate: Mar 2020
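The SAM estimators themselves are given by the paper's three theorems, which the abstract does not reproduce. As a hedged baseline sketch, the familiar closed-form least-squares estimates and the two comparison metrics (RMSE, MAE) for a simple linear regression look like this (the data points are invented for illustration):

```python
def ols_fit(x, y):
    """Closed-form least-squares intercept b0 and slope b1 for y = b0 + b1*x."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
          / sum((xi - xbar) ** 2 for xi in x))
    return ybar - b1 * xbar, b1

def rmse(y, yhat):
    return (sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)) ** 0.5

def mae(y, yhat):
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

# Invented sample data, roughly y = 2x.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
b0, b1 = ols_fit(x, y)
fitted = [b0 + b1 * xi for xi in x]
```

Comparing two estimators then amounts to comparing their RMSE and MAE values on the same data, as the abstract describes.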
- Penalized Maximum Likelihood Estimation of Semiparametric Generalized
Linear Models with Application to Climate Temperature Data
Abstract: Publication date: Jul 2020
Source:Mathematics and Statistics Volume 8 Number 4 Azumah Karim Ananda Omutokoh Kube and Bashiru Imoro Ibn Saeed Global temperature change is an important indicator of climate change. Climate time series data are characterized by trend, seasonal/cyclical and irregular components, and the importance of adequately modeling these components cannot be overemphasized. In this paper, we propose an approach to modeling temperature data using a semiparametric additive generalized linear model. We derive a penalized maximum likelihood estimator of the additive components of the semiparametric generalized linear model, that is, of the regression coefficients and smooth functions. Statistical modeling with a real time series data set was conducted on temperature data. The study provides indications of the gain from semiparametric modeling in situations where a signal component can be additively decomposed into trend, cyclical and irregular components. We therefore recommend semiparametric additive penalized models as an option for fitting time series data sets, modeling the different components with different functions to adequately explain the relations inherent in the data.
PubDate: Jul 2020
- A Modification of Differential Transform Method for Solving Systems of
Second Order Ordinary Differential Equations
Abstract: Publication date: Jul 2020
Source:Mathematics and Statistics Volume 8 Number 4 S. Al-Ahmad I. M. Sulaiman M. Mamat and L. G. Puspa The differential transform method (DTM) is among the well-known mathematical approaches for obtaining solutions of differential equations, owing to its simplicity and efficient numerical performance. However, the major drawback of the DTM is that it yields a truncated series solution, which is often a good approximation to the true solution only in a specified region. In this study, a modification of the DTM scheme, known as the MDTM, is proposed for obtaining an accurate approximation of second-order ordinary differential equations. The scheme, whose procedure combines the DTM, the Laplace transform and finally Padé approximation, gives a good approximation to the true solution of the equations in a large region. The proposed approach overcomes the difficulty encountered using the classical DTM and can thus serve as an alternative approach for obtaining the solutions of these problems. Preliminary results are presented based on examples that illustrate the strength and application of the defined scheme. All the obtained results correspond to the exact solutions.
PubDate: Jul 2020
- Comparative Study on Fuzzy Models for Crop Production Forecasting
Abstract: Publication date: Jul 2020
Source:Mathematics and Statistics Volume 8 Number 4 Amit Kumar Rana Fuzzy set theory is a very useful technique to increase the effectiveness and efficiency of forecasting. Conventional time series methods are not applicable when the variables of the time series are word variables, i.e., variables with linguistic terms. As India and most Asian countries have agriculture-based economies with much smaller farmer land holdings than their American, Australian and European counterparts, it is all the more important for these countries to have an approximate idea of future crop production. It will not only help in planning future policies but will also be of great help to farmers and agro-based companies in their future management. For small-area production, soft computing is an important and effective tool for predicting production, as agricultural production involves a high degree of uncertainty in many parameters. In the present study, 21 years of agricultural crop yield data are used and a comparative analysis of forecasts is carried out with three fuzzy models. The robustness of the models is tested on real farm production data for the wheat crop of G.B. Pant University of Agriculture and Technology, Pantnagar, India. As soft computing techniques involve the uncertainty of the system under study, it becomes ever more important for forecasting models to be accurate in their predictions. The efficiency of the three models is examined on the basis of statistical errors; the models are judged by the mean square error and the average percentage error. The results of the study concern small-area production prediction and should encourage its extension to large-scale production forecasting.
PubDate: Jul 2020
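The abstract judges the three fuzzy models by mean square error and average percentage error. A minimal sketch of those two criteria (with invented yield figures, not the Pantnagar data):

```python
def mean_square_error(actual, forecast):
    """Mean of the squared forecast errors."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def average_percentage_error(actual, forecast):
    """Mean of |actual - forecast| / actual, expressed as a percentage."""
    return 100.0 * sum(abs(a - f) / a
                       for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical wheat yields vs. one model's forecasts.
actual = [100.0, 120.0, 110.0]
forecast = [90.0, 126.0, 110.0]
mse = mean_square_error(actual, forecast)
ape = average_percentage_error(actual, forecast)
```

The model with the smallest values of both criteria is the one judged most efficient.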
- Triple Laplace Transform in Bicomplex Space with Application
Abstract: Publication date: Jul 2020
Source:Mathematics and Statistics Volume 8 Number 4 Mahesh Puri Goswami and Naveen Jha In this article, we investigate the bicomplex triple Laplace transform in the framework of a bicomplexified frequency domain with a region of convergence (ROC), which is a generalization of the complex triple Laplace transform. Bicomplex numbers are pairs of complex numbers forming a commutative ring with unity and zero-divisors; they describe physical interpretations in four-dimensional space and provide a large class of frequency domains. We also derive some basic properties and an inversion theorem for the triple Laplace transform in bicomplex space. In this technique, we use the idempotent representation of bicomplex numbers, which plays a vital role in proving our results. Consequently, the obtained results are highly applicable in the fields of quantum mechanics, signal processing, electric circuit theory, control engineering, and the solution of differential equations. An application of the bicomplex triple Laplace transform to finding the solution of a third-order partial differential equation of a bicomplex-valued function is discussed.
PubDate: Jul 2020
- The Implementation of Nonlinear Principal Component Analysis to Acquire
the Demography of Latent Variable Data (A Study Case on Brawijaya
University Students)
Abstract: Publication date: Jul 2020
Source:Mathematics and Statistics Volume 8 Number 4 Solimun Adji Achmad Rinaldo Fernandes and Retno Ayu Cahyoningtyas Nonlinear principal component analysis is used for data that have mixed scales. This study uses a formative measurement model combining metric and nonmetric data scales. The variable used in this study is the demographic variable. This study aims to obtain the principal components of the latent demographic variable and to identify the strongest indicators forming it, with mixed scales, using samples of students of Brawijaya University based on predetermined indicators. The data used in this study are primary data collected through questionnaires distributed to the research respondents, who are active students of Brawijaya University Malang. The method used is nonlinear principal component analysis. Nine indicators are specified in this study, namely gender, regional origin, father's occupation, mother's occupation, type of place of residence, father's last education, mother's last education, parents' monthly income, and students' monthly allowance. The results show that the latent demographic variable for the sample of Brawijaya University students can be obtained by calculating its component scores. The nine indicators forming PC1 (X1) capture 19.49% of the variability, while the remaining 80.51% of the information is not captured by this PC. Among these indicators, the strongest in forming the latent demographic variable are regional origin (I2) and type of residence (I5).
PubDate: Jul 2020
- Logistic Map on the Ring of Multisets and Its Application in Economic
Models
Abstract: Publication date: Jul 2020
Source:Mathematics and Statistics Volume 8 Number 4 Iryna Halushchak Zoriana Novosad Yurii Tsizhma and Andriy Zagorodnyuk In this paper, we extend complex polynomial dynamics to a set of multisets endowed with ring operations (the metric ring of multisets associated with supersymmetric polynomials of infinitely many variables). Some new properties of the ring of multisets are established and a homomorphism to a function ring is constructed. Using complex homomorphisms on the ring of multisets, we propose a method for investigating polynomial dynamics over this ring by reducing them to a finite number of scalar-valued polynomial dynamics, and we establish an estimate of the number of such scalar-valued dynamics. As an important example, we consider an analogue of the logistic map defined on a subring of multisets consisting of positive numbers in the interval [0, 1]. A possible application to studying the natural market development process in a competitive environment is proposed. In particular, it is shown that using the multiset approach we can build a model that takes into account credit debt and reinvestment. Numerical examples of logistic maps for different growth-rate multisets [r] are considered. Note that the growth rate [r] may contain both "positive" and "negative" components, and the examples demonstrate the influence of these components on the dynamics.
PubDate: Jul 2020
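The scalar building block behind the multiset logistic map is the classical recurrence x → r·x·(1 − x); per the reduction described in the abstract, a dynamic over a growth-rate multiset [r] decomposes into finitely many such scalar-valued dynamics, sketched here componentwise (the rates are made up for illustration, not the paper's examples):

```python
def logistic_step(r, x):
    """One step of the classical logistic map x -> r*x*(1 - x)."""
    return r * x * (1 - x)

def iterate(r, x0, steps):
    """Iterate the logistic map `steps` times from x0."""
    x = x0
    for _ in range(steps):
        x = logistic_step(r, x)
    return x

# A growth-rate multiset [r] reduces to componentwise scalar dynamics.
rates = [2.5, 3.2]                        # hypothetical components of [r]
ends = [iterate(r, 0.4, 200) for r in rates]
```

For r = 2.5 the orbit settles at the fixed point 1 − 1/r = 0.6, while r = 3.2 gives an oscillating orbit — the kind of qualitative difference between components the abstract's examples explore.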
- 3-Vertex Friendly Index Set of Graphs
Abstract: Publication date: Jul 2020
Source:Mathematics and Statistics Volume 8 Number 4 Girija K. P. Devadas Nayak C Sabitha D’Souza and Pradeep G. Bhat Graph labeling is an assignment of integers to the vertices or the edges, or both, subject to certain conditions. In the literature we find several labelings, such as graceful, harmonious, binary, friendly, cordial, ternary and many more. A friendly labeling is a binary vertex labeling f in which the numbers of vertices labeled 1 and 0 differ by at most one. If each edge is assigned the label induced by the labels of its end vertices, then f is a cordial labeling of G if the numbers of edges labeled 1 and 0 also differ by at most one. The friendly index set of a graph G, denoted FI(G), is the set of absolute differences between the numbers of edges labeled 1 and 0, taken over all friendly labelings f of G. A mapping that assigns one of three labels to each vertex is called a ternary vertex labeling. In this article, we extend the concept of ternary vertex labeling to 3-vertex friendly labeling and define the 3-vertex friendly index set of graphs, taken over all 3-vertex friendly labelings f of G. To achieve this, the vertex set is partitioned into three classes whose sizes differ pairwise by at most one, and each edge is labeled according to the labels of its end vertices. In this paper, we study the 3-vertex friendly index sets of some standard graphs such as the complete graph Kn, the path Pn, the wheel graph Wn, the complete bipartite graph Km,n and the cycle with parallel chords PCn.
PubDate: Jul 2020
- The Parabolic Transform and Some Singular Integral Evolution Equations
Abstract: Publication date: Jul 2020
Source:Mathematics and Statistics Volume 8 Number 4 Mahmoud M. El-Borai and Khairia El-Said El-Nadi Some singular integral evolution equations with a wide class of closed operators are studied in Banach space. The considered integral equations are investigated without assuming the existence of the resolvent of the closed operators. Some nonlinear singular evolution equations are also studied. An abstract parabolic transform is constructed to study the solutions of the considered ill-posed problems. Applications to fractional evolution equations and Hilfer fractional evolution equations are given. All the results can be applied to general singular integro-differential equations. The Fourier transform plays an important role in constructing solutions of the Cauchy problems for parabolic and hyperbolic partial differential equations, but it is suitable only under conditions on the characteristic forms of the partial differential operators. Likewise, the Laplace transform plays an important role in studying the Cauchy problem for abstract differential equations in Banach space, but in that case the existence of the resolvent of the considered abstract operators is needed. This note is devoted to exploring the Cauchy problem for general singular integro-partial differential equations without conditions on the characteristic forms, and also to studying general singular integral evolution equations. Our approach is based on applying the new parabolic transform, which generalizes the methods developed within the regularization theory of ill-posed problems.
PubDate: Jul 2020
- Global Existence and Nonexistence of Solutions to a Cross Diffusion System
with Nonlocal Boundary Conditions
Abstract: Publication date: Jul 2020
Source:Mathematics and Statistics Volume 8 Number 4 Z. R. Rakhmonov A. Khaydarov and J. E. Urunbaev Mathematical models of nonlinear cross diffusion are described by systems of nonlinear parabolic partial differential equations coupled with nonlinear boundary conditions. Explicit analytical solutions of such nonlinearly coupled systems of partial differential equations rarely exist, and thus several numerical methods have been applied to obtain approximate solutions. In this paper, based on a self-similar analysis and the method of standard equations, the qualitative properties of a nonlinear cross-diffusion system with nonlocal boundary conditions are studied. We construct various self-similar solutions of the cross-diffusion problem for the case of slow diffusion. It is proved that, for certain values of the numerical parameters of the nonlinear cross-diffusion system of parabolic equations coupled via nonlinear boundary conditions, global solutions in time may fail to exist. Based on a self-similar analysis and the comparison principle, the critical exponent of Fujita type and the critical exponent of global solvability are established. Using the comparison theorem, upper bounds for global solutions and lower bounds for blow-up solutions are obtained.
PubDate: Jul 2020
- Construction of Triangles with the Algebraic Geometry Method
Abstract: Publication date: Jul 2020
Source:Mathematics and Statistics Volume 8 Number 4 Viliam Ďuriš and Timotej Šumný The accuracy of geometric construction is one of the important characteristics of mathematics and mathematical skill. In geometric constructions, however, there is often a problem of accuracy. What often appears instead is so-called 'optical accuracy', meaning that the construction is accurate with respect to the drawing pad used. Such "optically accurate" constructions are called approximate constructions because they do not achieve exact accuracy but realize the best possible approximation. Geometric problems correspond to algebraic equations in two ways. The first method is based on the construction of algebraic expressions, which are transformed into an equation. The second method is based on the methods of analytic geometry, where geometric objects and points are expressed directly by equations that describe their properties in a coordinate system. In either case, we obtain an equation whose solution in the algebraic sense corresponds to the geometric solution. The paper provides a methodology for solving some specific geometric tasks by means of algebraic geometry related to cubic and biquadratic equations. It thus focuses on approximate geometric constructions, which have had a significant historical impact on the development of mathematics precisely because these tasks are not solvable using compass and ruler. This type of geometric problem has a strong position and practical justification in the area of technology. The contribution of our work thus lies in approaching solutions of geometric problems that lead to algebraic equations of higher degree, whose importance for the development of mathematics is undeniable. Since approximate constructions and the solution methods resulting from them are not common, the content of the paper is significant.
PubDate: Jul 2020
- Exploring Metallic Ratios
Abstract: Publication date: Jul 2020
Source:Mathematics and Statistics Volume 8 Number 4 R. Sivaraman A huge amount of literature has been written and published about the Golden Ratio, but not many have heard about its generalized versions, called Metallic Ratios, which are introduced in this paper; the methods of deriving them are also discussed in detail. This will help further exploration of the universe of real numbers. In mathematics, sequences play a vital role in understanding the complexities of any given problem that contains some pattern. For example, population growth, the radioactive decay of a substance, and the lifetime of an object all follow a sequence called a geometric progression; in fact, the spread of the recent novel coronavirus (COVID-19) is said to follow a geometric progression with common ratio approximately between 2 and 3. Almost all branches of science use sequences: for instance, genetic engineers use DNA sequences, electrical engineers use the Morse-Thue sequence, and the list goes on. Among the vast number of sequences used for scientific investigation, one of the most famous and familiar is the Fibonacci sequence, named after the Italian mathematician Leonardo Fibonacci through his book "Liber Abaci" published in 1202. In this paper, I introduce sequences resembling the Fibonacci sequence and generalize them to identify a general class of numbers called "Metallic Ratios".
PubDate: Jul 2020
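The n-th metallic ratio is the positive root of x² = nx + 1 (n = 1 gives the golden ratio, n = 2 the silver ratio, n = 3 the bronze ratio), and it arises as the limit of ratios of consecutive terms of the Fibonacci-like recurrence x_{k+1} = n·x_k + x_{k-1}. A short sketch of both facts:

```python
from math import sqrt

def metallic_ratio(n):
    """Positive root of x**2 = n*x + 1."""
    return (n + sqrt(n * n + 4)) / 2

def consecutive_term_ratio(n, terms=60):
    """Ratio of consecutive terms of x_{k+1} = n*x_k + x_{k-1},
    which converges to the n-th metallic ratio."""
    a, b = 1, 1
    for _ in range(terms):
        a, b = b, n * b + a
    return b / a

golden = metallic_ratio(1)   # (1 + sqrt(5)) / 2, the classical golden ratio
silver = metallic_ratio(2)   # 1 + sqrt(2)
```

For n = 1 the recurrence is exactly the Fibonacci sequence, which is why the metallic ratios generalize the golden ratio.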
- Common Coupled Fixed Point Theorems for Weakly F-contractive Mappings in
Topological Spaces
Abstract: Publication date: Jul 2020
Source:Mathematics and Statistics Volume 8 Number 4 Savita Rathee and Priyanka Gupta In the late sixties, Furi and Vignoli proved fixed point results for α-condensing mappings on bounded complete metric spaces. Bugajewski generalized the results to "weakly F-contractive mappings" on topological spaces (TS). Bugajewski and Kasprzak proved several fixed point results for weakly F-contractive mappings using the approach of lower (upper) semi-continuous functions. After that, by modifying the concept of weakly F-contractive mappings, coupled fixed point results were proved by Cho, Shah and Hussain on topological spaces. On various spaces, common coupled fixed point results were discussed by Liu, Zhou and Damjanovic, by Nashine and Shatanawi, and by many other authors. In this work, we prove common coupled fixed point theorems by adopting the modified definition of a weakly F-contractive mapping r : T→T, where T is a topological space. We then extend the result of Cho, Shah and Hussain for Banach spaces to common coupled quasi-solutions enriched with a relevant transitive binary relation. We also give an example in support of the proved result. Our results extend and generalize several existing results in the literature.
PubDate: Jul 2020
- A Two-dimensional Mathematical Model for Long-term Contaminated
Groundwater Pollution Measurement around a Land Fill
Abstract: Publication date: Jan 2020
Source:Mathematics and Statistics Volume 8 Number 1 Jirapud Limthanakul and Nopparat Pochai A source of contaminated groundwater is governed by the disposal of waste material in a landfill. For many people in rural areas, the primary source of drinking water is well water, which may be contaminated by groundwater from landfills. In this research, a two-dimensional mathematical model for long-term contaminated groundwater pollution measurement around a landfill is proposed. The model combines two submodels. The first is a transient two-dimensional groundwater flow model that provides the hydraulic head of the groundwater. The second is a transient two-dimensional advection-diffusion equation that provides the groundwater pollutant concentration. Explicit finite difference techniques are proposed to approximate the hydraulic head and the groundwater pollutant concentration. The simulations can be used to indicate when each simulated zone becomes a hazardous zone or a protection zone.
PubDate: Jan 2020
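The abstract's explicit finite-difference treatment can be illustrated, in much simplified form, by one forward-time centered-space (FTCS) step of pure 2-D diffusion on a uniform grid — the actual model adds advection, the hydraulic-head coupling, and the boundary handling described in the paper:

```python
def ftcs_step(c, d, dt, dx):
    """One explicit FTCS time step for c_t = d*(c_xx + c_yy) on a uniform
    grid, with boundary values held fixed.  Stable when d*dt/dx**2 <= 0.25."""
    lam = d * dt / dx ** 2
    new = [row[:] for row in c]
    for i in range(1, len(c) - 1):
        for j in range(1, len(c[0]) - 1):
            new[i][j] = c[i][j] + lam * (c[i + 1][j] + c[i - 1][j]
                                         + c[i][j + 1] + c[i][j - 1]
                                         - 4 * c[i][j])
    return new

# A unit pulse of pollutant in the middle of a 5x5 grid spreads outward.
grid = [[0.0] * 5 for _ in range(5)]
grid[2][2] = 1.0
after = ftcs_step(grid, d=1.0, dt=0.2, dx=1.0)
```

Stepping such a scheme forward in time is what lets the simulation flag when a zone's concentration crosses a hazard threshold.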
- Multiplicity of Approach and Method in Augmentation of Simplex Method: A
Review
Abstract: Publication date: Jan 2020
Source:Mathematics and Statistics Volume 8 Number 1 Nor Asmaa Alyaa Nor Azlan Effendi Mohamad Mohd Rizal Salleh Oyong Novareza Dani Yuniawan Muhamad Arfauz A Rahman Adi Saptari and Mohd Amri Sulaiman The purpose of this review paper is to classify augmentation approaches and map the distribution of augmentation work on the Simplex method. The augmentation approaches are classified into three forms: addition, substitution and integration. The diversity study shows that the substitution approach has the highest usage frequency, at 45.2% of the total; it is followed by the addition approach at 32.3% and the integration approach at 22.6%, the smallest share. Since integration has the lowest usage percentage, the paper then proposes a future study of the integration approach based on the observed distribution of augmentation works across the Simplex method's computation stages. A theme screening with a set of criteria and themes is conducted to produce a proposal for a new integration approach to augmentation of the Simplex method.
PubDate: Jan 2020
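For readers unfamiliar with the computation stages that the reviewed augmentation works modify, a minimal dense-tableau Simplex sketch is shown below. It is illustrative only (the paper is a review and defines no particular variant): it solves max c·x subject to Ax ≤ b, x ≥ 0 with b ≥ 0, and assumes the problem is bounded and non-degenerate.

```python
# Minimal tableau Simplex for  max c.x  s.t.  Ax <= b, x >= 0  (b >= 0),
# showing the three computation stages: entering-variable choice,
# minimum-ratio test, and pivoting. Illustrative sketch only.

def simplex(c, A, b):
    m, n = len(A), len(c)
    # Build the tableau with slack variables; the last row is -c (objective).
    T = [A[i] + [1.0 if k == i else 0.0 for k in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-ci for ci in c] + [0.0] * (m + 1))
    while True:
        # Stage 1: entering variable (most negative reduced cost).
        p = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][p] >= -1e-9:
            return T[-1][-1]                      # optimal objective value
        # Stage 2: leaving variable (minimum ratio test; assumes boundedness).
        rows = [i for i in range(m) if T[i][p] > 1e-9]
        r = min(rows, key=lambda i: T[i][-1] / T[i][p])
        # Stage 3: pivot on (r, p).
        piv = T[r][p]
        T[r] = [x / piv for x in T[r]]
        for i in range(m + 1):
            if i != r:
                factor = T[i][p]
                T[i] = [a - factor * x for a, x in zip(T[i], T[r])]

# Textbook example: max 3x + 5y  s.t.  x <= 4, 2y <= 12, 3x + 2y <= 18
val = simplex([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18])  # -> 36.0
```

Each of the three stages above is a natural target for the addition, substitution, or integration augmentations surveyed in the paper.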
- Gaussian Distribution on Validity Testing to Analyze the Acceptance
Tolerance and Significance Level
Abstract: Publication date: Jan 2020
Source:Mathematics and Statistics Volume 8 Number 1 Arif Rahman Oke Oktavianty Ratih Ardia Sari Wifqi Azlia and Lavestya Dina Anggreni Some research requires data homogeneity: wide dispersion of the data can push the research in an absurd direction, and outliers make the apparent homogeneity unrealistic. Such research may reject extreme data as outliers and estimate a trimmed arithmetic mean, but when the data dispersion is too wide, the outliers cannot be identified. This study evaluates the confidence interval and compares it with the acceptance tolerance. Three types of invalidity in data gathering are distinguished: outliers, too wide a dispersion, and a distorted central tendency.
PubDate: Jan 2020
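The screening idea described in the abstract above can be sketched as follows. This is a hedged illustration, not the paper's procedure: the data, the tolerance limits, the 95% normal-theory interval, and the 2-sigma trimming rule are all made-up choices for the example.

```python
# Compare a normal-theory confidence interval for the mean against an
# acceptance tolerance, and flag extreme observations for trimming.
# Data, tolerances, and the 2-sigma rule are illustrative assumptions.
import math
import statistics

def screen(data, tol_lo, tol_hi, z=1.96):
    mean = statistics.mean(data)
    s = statistics.stdev(data)                  # sample standard deviation
    half = z * s / math.sqrt(len(data))
    ci = (mean - half, mean + half)             # ~95% CI for the mean
    outliers = [x for x in data if abs(x - mean) > 2 * s]   # 2-sigma trim
    too_wide = ci[0] < tol_lo or ci[1] > tol_hi # CI exceeds tolerance?
    return ci, outliers, too_wide

data = [9.9, 10.1, 10.0, 9.8, 10.2, 10.0, 15.0]   # one extreme value
ci, outliers, too_wide = screen(data, tol_lo=9.5, tol_hi=10.5)
```

Note how the single extreme value both gets flagged and inflates the standard deviation enough that the confidence interval spills outside the tolerance, which is exactly the interaction of outliers, dispersion, and central tendency the abstract describes.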
- Fuzzy Parameterized Dual Hesitant Fuzzy Soft Sets and Its Application in
TOPSIS
Abstract: Publication date: Jan 2020
Source:Mathematics and Statistics Volume 8 Number 1 Zahari Md Rodzi and Abd Ghafur Ahmad The purpose of this work is to present a new theory, namely fuzzy parameterized dual hesitant fuzzy soft sets (FPDHFSSs). This theory extends the existing dual hesitant fuzzy soft set by assigning a respective weight to each element of the parameter set. Basic operations on FPDHFSSs, including intersection, union, addition and product, are also introduced. Score functions for FPDHFSSs are then proposed, based on the arithmetic mean, the geometric mean and a fractional score. These score functions are separated into membership and non-membership components, from which a distance measure on FPDHFSSs is introduced. The proposed distance is applied within TOPSIS, enabling decision problems in a fuzzy parameterized dual hesitant fuzzy soft set environment to be solved.
PubDate: Jan 2020
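To show where a distance measure plugs into TOPSIS, here is a minimal crisp TOPSIS sketch. The paper works with FPDHFSS distances; in this simplification, an ordinary Euclidean distance, made-up decision scores, and purely benefit-type criteria stand in for the fuzzy machinery.

```python
# Crisp TOPSIS: normalise, weight, then rank alternatives by closeness
# to the ideal solution. Scores and weights are illustrative.
import math

def topsis(matrix, weights):
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(x * x for x in col)) for col in cols]
    # weighted normalised decision matrix
    V = [[w * x / n for x, n, w in zip(row, norms, weights)]
         for row in matrix]
    ideal = [max(col) for col in zip(*V)]    # benefit criteria assumed
    anti = [min(col) for col in zip(*V)]
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # closeness coefficient: nearer to ideal, farther from anti-ideal
    return [dist(v, anti) / (dist(v, anti) + dist(v, ideal)) for v in V]

scores = topsis([[9, 9, 9], [5, 5, 5], [7, 8, 6]], [0.4, 0.3, 0.3])
best = max(range(len(scores)), key=lambda i: scores[i])
```

In the paper's setting, the Euclidean `dist` above is replaced by the proposed FPDHFSS distance over membership and non-membership components, while the surrounding ranking logic stays the same.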
- Usefulness of Mathematics Subjects in the Accounting Courses in
Baccalaureate Education
Abstract: Publication date: Jan 2020
Source:Mathematics and Statistics Volume 8 Number 1 Alec John Villamar Marionne Gayagoy Flerida Matalang and Karen Joy Catacutan This study aimed to determine the usefulness of Mathematics subjects in the accounting courses of the Bachelor of Science in Accountancy program. The Mathematics subjects, which include College Algebra, Mathematics of Investment, Business Calculus and Quantitative Techniques, were evaluated through their Course Learning Objectives, while their usefulness for the accounting courses, which include Financial Accounting, Advanced Accounting, Cost Accounting, Management Advisory Services, Auditing and Taxation, was evaluated by the students. Descriptive research was employed among all 5th-year BS Accountancy students who had completed all the accounting subjects in the Accountancy Program and had passed the Mathematics subjects prerequisite to their courses. A survey questionnaire was used to gather the data. Descriptive statistics showed that Mathematics of Investment is the most useful subject across the accounting courses, particularly in Financial Accounting, Advanced Accounting and Auditing. Further, mean scores showed that several skills acquired in the Mathematics subjects are useful in the accounting courses, with the use of the fundamental operations being the most useful skill in all accounting subjects.
PubDate: Jan 2020
- On A 3-Points Inflated Power Series Distributions Characterizations
Abstract: Publication date: Jan 2020
Source:Mathematics and Statistics Volume 8 Number 1 Rafid S. A. Alshkaki Differential equations are used in modelling across many disciplines, including engineering, chemistry, physics, biology, economics and other fields of science, and hence can be used to understand and determine the underlying probabilistic behavior of phenomena through their probability distributions. This paper uses a simple form of differential equation, namely the linear form, to characterize some of the most important and popular members of the subclass of discrete distributions used in real life: the Poisson, binomial, negative binomial, and logarithmic series distributions. A class of power series distributions inflated at a finite number of points, which contains these four distributions among its members, is defined, and some of its characteristic properties are given, along with characterizations of the 3-points inflated versions of these four distributions through a linear differential equation for their probability generating functions. Further, some previously known results are shown to be special cases of our results.
PubDate: Jan 2020
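For the Poisson member of this class, the probability-generating-function characterization can be sketched as follows (a sketch in our own notation, not necessarily the paper's: $a_1 < a_2 < a_3$ are the inflated points with weights $\pi_1, \pi_2, \pi_3$):

```latex
% pgf of a 3-points inflated Poisson(\lambda) distribution
G(s) = \sum_{j=1}^{3} \pi_j\, s^{a_j}
       + \Bigl(1 - \sum_{j=1}^{3} \pi_j\Bigr) e^{\lambda(s-1)},
\qquad \pi_j \ge 0,\quad \sum_{j=1}^{3} \pi_j \le 1 .

% Differentiating and eliminating the exponential term gives the
% linear first-order differential equation
G'(s) - \lambda\, G(s)
   = \sum_{j=1}^{3} \pi_j \bigl( a_j\, s^{a_j - 1} - \lambda\, s^{a_j} \bigr).
```

The polynomial right-hand side encodes the inflated points $a_j$ and weights $\pi_j$; with all $\pi_j = 0$ the equation collapses to the plain Poisson relation $G'(s) = \lambda G(s)$.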
- A Comparative Study of Space and Time Fractional KdV Equation through
Analytical Approach with Nonlinear Auxiliary Equation
Abstract: Publication date: Jan 2020
Source:Mathematics and Statistics Volume 8 Number 1 Hasibun Naher Humayra Shafia Md. Emran Ali and Gour Chandra Paul In this article, a nonlinear partial fractional differential equation, namely the KdV equation, is revisited with the help of the modified Riemann–Liouville fractional derivative. The equation is transformed into a nonlinear ordinary differential equation by using the fractional complex transformation. The goal of this paper is to construct new analytical solutions of the space and time fractional nonlinear KdV equation through the extended (G'/G)-expansion method. The work produces abundant exact solutions in hyperbolic, trigonometric, rational, exponential, and complex forms, which are new and more general than existing results in the literature. The newly generated solutions show that the executed method is a well-organized and competent mathematical tool for investigating a class of nonlinear evolution equations of fractional order.
PubDate: Jan 2020
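As an illustration of the reduction step (a sketch with our own symbols and a time-fractional case only; the paper treats both space- and time-fractional forms), the fractional complex transformation turns the fractional KdV equation into an ordinary differential equation:

```latex
% time-fractional KdV, 0 < \alpha \le 1, modified Riemann--Liouville sense
D_t^{\alpha} u + 6\, u\, u_x + u_{xxx} = 0 .

% fractional complex transformation
u(x,t) = U(\xi), \qquad
\xi = x - \frac{c\, t^{\alpha}}{\Gamma(1+\alpha)} ,

% reduced nonlinear ODE, amenable to the auxiliary-equation expansion
-\,c\, U' + 6\, U U' + U''' = 0 .
```

Here $D_t^{\alpha} t^{\alpha} = \Gamma(1+\alpha)$ under the modified Riemann–Liouville derivative, which is what makes $D_t^{\alpha} u = -c\, U'(\xi)$ and removes the fractional order from the reduced equation.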