Mathematics and Statistics
Number of Followers: 3 | ISSN (Print) 2332-2071 | ISSN (Online) 2332-2144 | Published by Horizon Research Publishing
- The Locating and Local Locating Domination of Prism Family Graphs
Abstract: Publication date: May 2024
Source:Mathematics and Statistics Volume 12 Number 3 Jebisha Esther S and Veninstine Vivik J In the fields of combinatorics and graph theory, prism graphs are very important. They provide insights into the structural features of many real-world networks and act as a model for them. In graph theory, the study of dominating sets is essential for a variety of applications, including social network research and network design. A dominating set in a graph G is a subset D of the vertex set V with the property that each vertex w in V − D is adjacent to at least one vertex of D. Determining the minimum cardinality of dominating sets, locating dominating sets, and local locating dominating sets is of critical importance in fields such as network design and social network analysis. In this paper, we determine these minimum cardinality bounds for families of prism graphs. The study adds to the basic understanding of graph theory by methodically disentangling the intricate relationships between dominating sets in prism graphs. The exploration of the lowest cardinality of locating dominating sets yields solutions to optimisation issues in network design. In this work, we determine the upper bounds of locating domination and local locating domination for the prism, antiprism, crossed prism and circulant ladder prism graphs.
PubDate: May 2024
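The dominating-set definition in the abstract above can be made concrete by brute force on a small prism graph. This is an illustrative sketch only (exhaustive search, not the paper's bounds); the prism graph is built as a circular ladder: two n-cycles joined by rungs.

```python
from itertools import combinations

def prism_graph(n):
    """Adjacency sets of the prism (circular ladder) graph on 2n vertices.
    Vertices 0..n-1 form the outer cycle, n..2n-1 the inner cycle."""
    adj = {v: set() for v in range(2 * n)}
    for i in range(n):
        j = (i + 1) % n
        adj[i].add(j); adj[j].add(i)                   # outer cycle edge
        adj[n + i].add(n + j); adj[n + j].add(n + i)   # inner cycle edge
        adj[i].add(n + i); adj[n + i].add(i)           # rung
    return adj

def is_dominating(adj, D):
    """Every vertex outside D must have a neighbour in D."""
    return all(v in D or adj[v] & D for v in adj)

def domination_number(adj):
    """Smallest size of a dominating set, by exhaustive search."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for cand in combinations(vertices, k):
            if is_dominating(adj, set(cand)):
                return k

print(domination_number(prism_graph(3)))  # triangular prism: 2
```

For the triangular prism no single closed neighbourhood covers all six vertices, so the domination number is 2, e.g. one outer and one non-adjacent inner vertex.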
- On Use of Entropy Function for Validating Differential Calculus Results
Abstract: Publication date: May 2024
Source:Mathematics and Statistics Volume 12 Number 3 Omdutt Sharma, Surender Kumar, Naveen Kumar and Pratiksha Tiwari Rolle's Theorem (RT) and Lagrange's Mean-Value Theorem (LMVT) are significant for pure and applied mathematics, with applications in various other fields such as management and physics. RT is useful, for example, in finding the maximum height of a projectile trajectory, while in information theory the entropy function (measure) is used to quantify the uncertainty of information. RT can also be used to analyze graphs of annual performance in any field. Since information is necessary to analyze any performance, and since the entropy measure is a significant tool for quantifying uncertainty, applying the concepts of RT and LMVT in information theory allows uncertainty, vagueness or noise to be minimized or maximized. In this manuscript, the concepts of differential calculus, i.e., RT and LMVT, are used to validate the entropy function. Characteristics of differential calculus for the information entropy function are discussed, and it is shown that the entropy function satisfies RT and LMVT. The paper also describes the conditions under which Rolle's Theorem becomes a necessary and sufficient condition for the entropy function. Theorems related to differential calculus in information theory are proved, showing that new entropies can be derived from existing entropy functions.
PubDate: May 2024
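The claim that the entropy function satisfies Rolle's theorem can be illustrated numerically with the binary Shannon entropy H(p) = −p ln p − (1−p) ln(1−p): H vanishes at both endpoints of [0, 1] (as a limit), so its derivative must vanish at some interior point, here p = 1/2. A small sketch, not the paper's proof:

```python
import math

def H(p):
    """Binary Shannon entropy in nats; endpoint values taken as limits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def dH(p):
    """Derivative of H on (0, 1): ln((1 - p) / p)."""
    return math.log((1 - p) / p)

# Rolle: H(0) == H(1) == 0, so H' must vanish somewhere in (0, 1).
print(H(0.0), H(1.0))   # 0.0 0.0
print(dH(0.5))          # 0.0 -- the interior critical point
print(H(0.5))           # ln 2, the maximum value of H
```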
- A Numerical Study of Newell-Whitehead-Segel Type Equations Using Fourth
Order Cubic B-spline Collocation Method
Abstract: Publication date: May 2024
Source:Mathematics and Statistics Volume 12 Number 3 Maheshwar Pathak, Rachna Bhatia, Pratibha Joshi and Ramesh Chand Mittal Newell-Whitehead-Segel (NWS) type equations arise in solid-state physics, optics, dispersion, convection systems, mathematical biology, quantum mechanics, plasma physics and oil pollution in the ocean environment. The extensive applications of such equations draw the attention of scientists toward their numerical solutions. In this work, we propose a fourth order numerical method based on cubic B-spline functions for the numerical solution of nonlinear NWS type equations. The Crank-Nicolson finite difference scheme is used to discretize the equation and quasi-linearization is used to linearize the nonlinear term. As a result, we get a system of linear equations, which we solve using the Gauss elimination method. Stability analysis has been carried out by a thorough Fourier series analysis and stability conditions have been obtained. The scheme has been applied to five numerical problems having quadratic, cubic and fourth order nonlinear terms. The effectiveness and robustness of the proposed technique have been demonstrated by comparing the obtained numerical results with the exact solutions and with numerical results obtained by other existing methods. A comparison of the numerical results obtained using the proposed technique with exact solutions shows excellent agreement. Graphs of the numerical solutions have been drawn at different times and compared with the graphs of the exact solutions. The comparative analysis shows that the proposed scheme outperforms other methods in terms of accuracy and produces good results.
PubDate: May 2024
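For orientation, an NWS-type equation with a quadratic nonlinearity, u_t = u_xx + 2u − 3u², can be integrated with a plain explicit finite-difference scheme. This toy sketch is far cruder than the fourth-order Crank-Nicolson B-spline collocation scheme of the paper; it only shows the qualitative behaviour, relaxation toward the stable steady state u = 2/3.

```python
import numpy as np

# Toy explicit scheme for u_t = u_xx + 2u - 3u^2 on a periodic domain.
dx, dt = 0.05, 0.001                     # dt < dx^2 / 2 keeps diffusion stable
x = np.arange(0, 1, dx)
u = 0.1 + 0.01 * np.sin(2 * np.pi * x)   # small positive initial state

for _ in range(5000):                    # integrate to t = 5
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    u = u + dt * (lap + 2 * u - 3 * u**2)

print(u.min(), u.max())                  # both close to the steady state 2/3
```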
- Confidence Intervals for the Parameter of the Juchez Distribution and
Their Applications
Abstract: Publication date: May 2024
Source:Mathematics and Statistics Volume 12 Number 3 Patchanok Srisuradetchai and Wararit Panichkitkosolkul This paper presents four types of confidence intervals (CIs) for parameter estimation of the Juchez distribution, a robust model in the domain of lifetime data analysis. The likelihood-based, Wald-type, bootstrap-t, and bias-corrected and accelerated (BCa) bootstrap confidence intervals are proposed and evaluated through simulation studies and application to real datasets. The effectiveness of these methods is assessed in terms of the empirical coverage probability (CP) and average length (AL) of the confidence intervals, providing an understanding of their performance under various conditions. Additionally, we derive the Wald-type CI formula in explicit form, making it readily calculable. The results show that when the sample size is small, such as 10, 20, or 30, the bootstrap-t and BCa bootstrap methods produce CPs less than 0.95. However, as sample sizes increase, the CPs of all methods tend to converge towards the nominal level of 0.95. The parameter values also affect the CP. At low values of the parameter, the CPs are quite close to the ideal, with both the Wald-type and likelihood-based methods achieving a CP of approximately 0.95. However, at higher parameter values with small sample sizes, the bootstrap-t and BCa bootstrap methods tend to have lower coverage.
PubDate: May 2024
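The bootstrap-t construction evaluated above can be sketched generically. The Juchez distribution itself is not implemented here; as a stand-in the sketch builds a bootstrap-t interval for the mean of exponential data, which shows the mechanics (studentized resampled statistics, then inverted quantiles) without claiming the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(scale=1.0, size=50)    # stand-in data, true mean 1.0
n, B = len(x), 2000
xbar, se = x.mean(), x.std(ddof=1) / np.sqrt(n)

# Studentized statistics from bootstrap resamples
t_stars = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=n, replace=True)
    t_stars[b] = (xb.mean() - xbar) / (xb.std(ddof=1) / np.sqrt(n))

lo_q, hi_q = np.quantile(t_stars, [0.025, 0.975])
ci = (xbar - hi_q * se, xbar - lo_q * se)  # note the reversed quantiles
print(ci)
```

The quantile reversal is the defining feature of the bootstrap-t interval: the upper quantile of the studentized statistic bounds the interval from below.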
- Partial Product-Exponential Method of Estimation
Abstract: Publication date: May 2024
Source:Mathematics and Statistics Volume 12 Number 3 Gagandeep Kaur and Sarbjit Singh Brar This research introduces the Partial Product-Exponential Method of Estimation, focusing on utilizing partial auxiliary information for estimating the population mean in simple random sampling without replacement. The method proposes novel estimators tailored for situations where only partial auxiliary information is available, particularly when it demonstrates a negative correlation with the study variable within sub-populations. The paper evaluates the performance of the suggested method in two cases: when sub-population weights are known and when they are unknown. Approximate expressions for bias and variance, up to the first order, are derived for the suggested estimators. A comprehensive comparative analysis concludes that the proposed estimators are more efficient than existing estimators, such as the mean per unit estimator, the partial product estimator, and the weighted post-stratified estimator, under specific conditions. In particular, the proposed estimators outperform the corresponding existing methods when certain conditions hold, demonstrating superiority in both the known and unknown weight cases. Furthermore, a simulation study using R software validates the theoretical findings for normal and non-normal populations. The study showcases the practical utility of the proposed estimators, emphasizing their superiority over existing counterparts in real-world applications. In particular, the proposed estimators offer increased accuracy and efficiency in estimating the population mean, enhancing the reliability of sample survey results. In summary, the Partial Product-Exponential Method of Estimation is a valuable addition to sample survey methodology, addressing the challenge of partial auxiliary information. The suggested methods demonstrate advantages in efficiency and accuracy, highlighting their potential for enhanced estimation accuracy in a variety of sample surveys.
PubDate: May 2024
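The idea that a product-type estimator gains efficiency when the auxiliary variable is negatively correlated with the study variable can be checked by simulation. This sketch uses the classical (non-partial) product estimator ȳ·(x̄/X̄) with invented data; the paper's partial product-exponential estimators are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
N_rep, n = 2000, 30
X_mean = 10.0                  # known population mean of the auxiliary variable

mse_plain = mse_prod = 0.0
for _ in range(N_rep):
    x = rng.normal(10.0, 1.0, size=n)
    y = 20.0 - x + rng.normal(0.0, 0.1, size=n)  # strong negative correlation
    ybar, xbar = y.mean(), x.mean()
    y_prod = ybar * (xbar / X_mean)              # product estimator
    mse_plain += (ybar - 10.0) ** 2              # true mean of y is 10
    mse_prod += (y_prod - 10.0) ** 2

print(mse_prod / mse_plain)    # well below 1: the product estimator wins here
```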
- Coneighbor Graphs and Related Topologies
Abstract: Publication date: May 2024
Source:Mathematics and Statistics Volume 12 Number 3 Nechirvan B. Ibrahim and Alias B. Khalaf The primary aim of this paper is to establish and analyze certain topological structures linked with a specified graph. In a graph G, a vertex u is considered a neighbor of another vertex v if there exists an edge uv in G. Furthermore, we define two vertices (or edges) in G as coneighbors if they share identical sets of neighboring vertices (or edges). The topology under consideration arises from the collections of coneighbor vertices and the collections of coneighbor edges within the graph. It is proved that the coneighbor topology of every non-coneighbor graph is homeomorphic to the included point topology, while this space is quasi-discrete if and only if the graph contains at least one coneighbor set of vertices; examples of coneighbor topologies of special graphs (paths, cycles and bipartite graphs) that are quasi-discrete spaces are presented. Moreover, several topological properties of the coneighbor space are presented. We prove that the coneighbor topological space associated with a graph always has dimension one and satisfies the T1/2 axiom. The family of θ-open sets in this space is also determined, and it is proved that the space is almost compact whenever the family of coneighbor sets is finite. Finally, we look at some graphs in which the coneighbor space fulfills other topological concepts such as connectedness, compactness and countable compactness.
PubDate: May 2024
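Coneighbor classes of vertices, as defined in the abstract above (vertices sharing identical neighbor sets), are easy to compute. An illustrative sketch grouping the vertices of the 4-cycle, where opposite vertices are coneighbors:

```python
from collections import defaultdict

def coneighbor_classes(adj):
    """Group vertices that have identical (open) neighbour sets."""
    groups = defaultdict(list)
    for v, nbrs in adj.items():
        groups[frozenset(nbrs)].append(v)
    return [set(g) for g in groups.values()]

# 4-cycle: 0-1-2-3-0; each vertex's neighbours are the two opposite-parity vertices
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(coneighbor_classes(c4))   # [{0, 2}, {1, 3}]
```

In a path on four vertices, by contrast, all four neighbor sets differ, so every coneighbor class is a singleton.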
- Sharp Bounds on Vertex N-magic Total Labeling Graphs
Abstract: Publication date: May 2024
Source:Mathematics and Statistics Volume 12 Number 3 R. Nishanthini and R. Jeyabalan A vertex N-magic total labeling is a bijective function that maps the vertices and edges of a graph G onto the successive integers from 1 to p + q. The labeling exhibits two distinct properties: first, the count of distinct magic constants ki for i belonging to the set {1, 2, ..., N} is equal to the cardinality of N; secondly, the magic constants ki must be arranged in strictly ascending order. In the present context, the constant N is employed to represent different degrees of vertices. The term “magic constant values ki” for i ∈ {1, 2, ..., N} refers to specific numbers that exhibit unique and interesting properties and are employed in the context of this investigation. By adding up the weights at each vertex in V(G), we obtain a magic constant ki for i ∈ {1, 2, ..., N}. Within the scope of this study, we discuss the sharp bounds of vertex N-magic total labeling graphs. In terms of the magic constants ki for i ∈ {1, 2, ..., N}, we also establish the requirement for vertex N-magic total labeling of trees, and we investigate the potential for vertex N-magic total labeling in graphs with varying vertex degrees.
PubDate: May 2024
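The definition can be made concrete by brute-forcing a vertex magic total labeling (the N = 1 case, since the triangle is regular) of C3: the labels 1..6 are distributed over 3 vertices and 3 edges, and the weight of a vertex is its own label plus the labels of its two incident edges. This exhaustive search is an illustration only, not the paper's constructions:

```python
from itertools import permutations

# C3: vertices 0, 1, 2; edge i joins the two vertices other than i.
edges = [(1, 2), (0, 2), (0, 1)]

constants = set()
for lab in permutations(range(1, 7)):   # first 3 entries label vertices, last 3 edges
    vlab, elab = lab[:3], lab[3:]
    weights = {v: vlab[v] + sum(elab[i] for i, e in enumerate(edges) if v in e)
               for v in range(3)}
    if len(set(weights.values())) == 1:  # all vertex weights equal: magic
        constants.add(weights[0])

print(sorted(constants))   # achievable magic constants for C3: [9, 10, 11, 12]
```

A counting argument confirms the range: the three weights sum to the vertex labels plus twice the edge labels, so 3k = 21 + (sum of edge labels), forcing k between 9 and 12.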
- On Questions Concerning Finite Prime Distance Graphs
Abstract: Publication date: May 2024
Source:Mathematics and Statistics Volume 12 Number 3 Ram Dayal, A. Parthiban and P. Selvaraju Graph labeling is an allocation of labels (mostly integers) to the nodes/lines or both of a graph Gα subject to a few conditions. The field of graph theory, specifically graph labeling, plays a vital role in various fields. To name a few, graph labeling is utilized in coding, x-ray crystallography, radar, astronomy, circuit design, communication network addressing, and database management. It can also be applied to network security, network addressing, the channel assignment process, and social networks. A graph Gβ is a prime distance graph (PDG) if its nodes can be assigned distinct integers such that for any two adjacent nodes, the positive difference of their labels is a prime number. A complete characterization of prime distance graphs is an open problem of high interest. This paper contributes partially towards the same. More specifically, Laison et al. raised the following questions: (1) Is there a family of graphs which are PDGs if and only if Goldbach's Conjecture is true? (2) What other families of graphs are PDGs? In this paper, these questions are answered partially, and certain families of graphs are shown to admit a prime distance labeling (PDL) if and only if the Twin Prime Conjecture holds, besides establishing PDL of some special graphs.
PubDate: May 2024
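A prime distance labeling is straightforward to verify: labels are distinct integers and every edge's endpoint difference is prime. An illustrative checker, not tied to the specific families in the paper:

```python
def is_prime(n):
    """Trial-division primality test, adequate for small labels."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def is_prime_distance_labeling(edges, label):
    """Distinct integer labels; |label(u) - label(v)| prime for every edge uv."""
    if len(set(label.values())) != len(label):
        return False
    return all(is_prime(abs(label[u] - label[v])) for u, v in edges)

# Path on 4 vertices labelled 0, 2, 5, 7: differences 2, 3, 2 are all prime.
path = [(0, 1), (1, 2), (2, 3)]
print(is_prime_distance_labeling(path, {0: 0, 1: 2, 2: 5, 3: 7}))  # True
print(is_prime_distance_labeling(path, {0: 0, 1: 4, 2: 5, 3: 7}))  # False: 4 is not prime
```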
- Computational Solution and Analysis of Fuzzy Nonlinear HIV Infection Model
via New Multistage Fuzzy Variational Iteration Method
Abstract: Publication date: May 2024
Source:Mathematics and Statistics Volume 12 Number 3 Hafed H. Saleh, Amirah Azmi and Ali F. Jameel In order to obtain sufficient solutions for fuzzy differential equations (FDEs), reliable and efficient approximation methods are necessary. Approximate numerical methods cannot directly solve fuzzy HIV models. Meanwhile, approximate analytical methods can potentially provide more straightforward solutions without the need for extensive numerical computations or linearization and discretization techniques, which may be challenging to apply to fuzzy models. One significant advantage of approximate analytical methods is their ability to provide insights into solution accuracy without requiring an exact solution for comparison, which may not be readily available. In this work, the fuzzy nonlinear HIV infection model is analyzed and solved using a new fuzzy form of an approximate analytical method. Fuzzy set theory combined with the properties of the standard fuzzy variational iteration method (FVIM) is utilized to produce a new formulation, denoted the multistage fuzzy variational iteration method (MFVIM), to process and solve a fuzzy nonlinear HIV infection model. MFVIM offers an effective method for attaining convergence of the series solution presented as a polynomial function. This approach enables efficient solutions to diverse mathematical challenges. The solution methodology relies on converting the fuzzy differential equations into systems of ordinary differential equations, utilizing the parametric form in terms of r-level representations, and considering the approximate solution of the system in a sequence of intervals. Subsequently, the equivalent classical systems are solved by applying FVIM algorithms in each subinterval. The existence and uniqueness of the solution of the proposed problem are also described, along with a fuzzy optimal control analysis. A tabular and graphical representation of the MFVIM results for the proposed models is presented and analyzed in comparison with a numerical method and FVIM. The new method produces better solutions than a numerical method, with a simple implementation, for solving the fuzzy nonlinear HIV infection model associated with fuzzy initial value problems (FIVPs). The ability to better comprehend the behavior of the system under investigation can enable researchers and scientists to work on models incorporating systems with long memories and ill-defined notions to make more effective design and decision-making choices.
PubDate: May 2024
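The parametric r-level representation used above to convert the fuzzy model into crisp systems can be illustrated for a triangular fuzzy number (a, b, c): its r-cut is the interval [a + r(b − a), c − r(c − b)] for r ∈ [0, 1]. This sketch shows only that conversion step (an assumed standard form), not the HIV model itself:

```python
def r_cut(a, b, c, r):
    """r-level interval of the triangular fuzzy number (a, b, c), 0 <= r <= 1."""
    return (a + r * (b - a), c - r * (c - b))

print(r_cut(0.0, 1.0, 2.0, 0.0))   # (0.0, 2.0) -- the support
print(r_cut(0.0, 1.0, 2.0, 1.0))   # (1.0, 1.0) -- the core collapses to b
print(r_cut(0.0, 1.0, 2.0, 0.5))   # (0.5, 1.5)
```

Solving a fuzzy ODE then amounts to solving a pair of crisp ODEs for the lower and upper r-level endpoints, for each r of interest.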
- Integral Graph Spectrum and Energy of Interconnected Balanced Multi-star
Graphs
Abstract: Publication date: Mar 2024
Source:Mathematics and Statistics Volume 12 Number 2 B. I. Andrew and A. Anuradha A balanced multi-star graph is a specialized type of graph formed by connecting the apex vertices of star graphs to create a cohesive structure known as a clique. These graphs comprise r star graphs, where each star graph has an apex vertex connected to n pendant vertices. Balanced multi-star graphs offer benefits in scenarios requiring equal distances between peripheral nodes, such as sensor networks, distributed computing, traffic engineering, telecommunications, supply chain management, and power distribution. The integral graph spectrum derived from the adjacency matrix of balanced multi-star graphs holds significance across various domains. It aids in network analysis to understand connectivity patterns, facilitates efficient computation of structural properties through graph algorithms, and enables graph partitioning and community detection. Spectral graph theory assists in identifying connectivity patterns in network visualization, supports modeling biological networks in biomedical research, aids in generating personalized recommendations in recommendation systems, and contributes to graph-based segmentation and scene analysis tasks in image processing. This paper aims to characterize the integral graph spectrum of balanced multi-star graphs by focusing on the spectral parameters of double-star graphs (r = 2), triple-star graphs (r = 3), and quadruple-star graphs (r = 4). This spectrum serves as an important tool across disciplines, providing insights into graph structure and facilitating tasks ranging from network analysis to computational biology and image processing.
PubDate: Mar 2024
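The integral-spectrum idea can be checked numerically for the smallest balanced double-star (r = 2 apexes joined by an edge, n = 2 pendants each): its adjacency eigenvalues come out as the integers ±2, ±1 and 0 (twice). A quick sketch for this one instance, not the paper's general characterization:

```python
import numpy as np

# Vertices: 0, 1 = apexes (joined); 2, 3 pendants of 0; 4, 5 pendants of 1.
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (0, 3), (1, 4), (1, 5)]:
    A[u, v] = A[v, u] = 1

eig = np.sort(np.linalg.eigvalsh(A))
print(np.round(eig, 6))   # [-2. -1.  0.  0.  1.  2.] -- an integral spectrum
```

The two zero eigenvalues come from pendant twins (differences of coneighbor pendants), and the remaining four from symmetric/antisymmetric quotients across the apex edge.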
- Variations of Rigidity for Abelian Groups
Abstract: Publication date: Mar 2024
Source:Mathematics and Statistics Volume 12 Number 2 Inessa I. Pavlyuk and Sergey V. Sudoplatov A series of basic characteristics of structures and of elementary theories reflects their complexity and richness. Among these characteristics, four kinds of degrees of rigidity and the index of rigidity are considered as measures of how far a given structure is from a rigid one, both with respect to the automorphism group and to the definable closure, for some or any subset of the universe of a given finite cardinality. Thus, a natural question arises on the classification of model-theoretic objects with respect to rigidity characteristics. We apply a general approach for studying rigidity values and the related classification to abelian groups and their theories. We describe the possible degrees and indexes of rigidity for finite abelian groups and for standard infinite abelian groups. This description is based on general considerations of rigidity, on their application to finite structures, and on their specifics for abelian groups, including Szmielew invariants, combinatorial formulas for cardinalities of orbits, and links with dimensions, as well as combinations of these. It shows how the rigidity characteristics of infinite abelian groups relate to those of finite ones. Some applications to non-standard abelian groups are discussed.
PubDate: Mar 2024
- Emerging Frameworks: 2-Multiplicative Metric and Normed Linear Spaces
Abstract: Publication date: Mar 2024
Source:Mathematics and Statistics Volume 12 Number 2 B. Surender Reddy, S. Vijayabalaji, N. Thillaigovindan and K. Punniyamoorthy This new study helps us understand 2-multiplicative (or product) metric spaces and normed linear spaces (NDLS) better than before, going beyond what is already known. Seeing a gap in existing research, our main aim is to thoroughly explore the natural properties of 2-multiplicative NDLS. Using a careful approach that looks at continuity, compactness, and convergence properties, our research establishes results that point out the special features of these spaces and show the connections between their algebraic and topological sides. The importance of our findings goes beyond theory, affecting practical uses and encouraging collaboration across different fields. Our research builds a strong base in mathematical analysis, giving useful insights for making nuanced decisions. Acknowledging some limitations of our study opens the door for future improvements, creating promising paths for further exploration. In practical terms, what we learn from this thorough study not only informs but also changes how decisions are made in mathematical analysis. In the research community, our work deepens appreciation of the connection between algebraic and topological spaces, sparking curiosity and inspiring future research. In essence, this research acts as a guiding light, showcasing the unique features of 2-multiplicative NDLS and paving the way for a deeper understanding of mathematical structures and their flexible uses in both theory and practice. Furthermore, our exploration motivates future researchers to dive into the details of 2-multiplicative NDLS, expanding their knowledge and looking into broader implications in the field of mathematical analysis.
PubDate: Mar 2024
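A standard example of a multiplicative (product) metric on the positive reals is d(x, y) = max(x/y, y/x), which satisfies the multiplicative triangle inequality d(x, z) ≤ d(x, y) · d(y, z) because d(x, y) = exp(|ln x − ln y|). This generic sketch illustrates that axiom; the paper's 2-multiplicative spaces generalize the idea to three arguments.

```python
import random

def d(x, y):
    """Multiplicative metric on (0, inf): equals exp(|ln x - ln y|)."""
    return max(x / y, y / x)

random.seed(1)
for _ in range(10_000):
    x, y, z = (random.uniform(0.1, 10.0) for _ in range(3))
    # multiplicative triangle inequality (small tolerance for float rounding)
    assert d(x, z) <= d(x, y) * d(y, z) * (1 + 1e-12)

print("multiplicative triangle inequality held on all samples")
```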
- A Class of Efficient Shrinkage Estimators for Modelling the Reliability of
Burr XII Distribution
Abstract: Publication date: Mar 2024
Source:Mathematics and Statistics Volume 12 Number 2 Zuhair A. Al-Hemyari, Alaa Khlaif Jiheel and Iman Jalil Atewi For the purpose of modelling the reliability of the Burr XII distribution, a family of shrinkage estimators is proposed for any parameter of any distribution when a prior guess value of the parameter is available from the past. In addition, two sub-models of shrinkage-type estimators for estimating the reliability and parameters of the Burr XII distribution, using two types of shrinkage weight functions with a preliminary test of the null hypothesis against the alternative, have been proposed and studied. The criteria for studying the properties of the two sub-models of reliability estimators, namely the bias, bias ratio, mean squared error and relative efficiency, were derived and computed numerically for each sub-model, because they are complicated and contain many complex functions. The numerical results show the usefulness of the proposed two sub-models of reliability estimators of the Burr XII distribution relative to the classical estimators, for both shrinkage functions, when the a priori guess value is close to the true value of the parameter. In addition, the comparison between the proposed two sub-models of the shrinkage estimators is discussed.
PubDate: Mar 2024
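The basic shrinkage idea evaluated above can be sketched generically: pull a classical estimator toward a prior guess, θ̂_sh = φθ̂ + (1 − φ)θ₀, which reduces mean squared error when the guess is accurate. This toy uses the sample mean of exponential data, not the Burr XII reliability estimators of the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
theta, theta0, phi, n, reps = 2.0, 2.0, 0.5, 20, 5000  # guess theta0 is exact here

se_classical = se_shrink = 0.0
for _ in range(reps):
    x = rng.exponential(scale=theta, size=n)
    est = x.mean()                           # classical estimator of the mean
    shr = phi * est + (1 - phi) * theta0     # shrink toward the prior guess
    se_classical += (est - theta) ** 2
    se_shrink += (shr - theta) ** 2

print(se_shrink / se_classical)   # about phi**2 = 0.25 when the guess is exact
```

When the guess θ₀ is far from the truth, shrinkage introduces bias, which is exactly why the paper pairs it with a preliminary test of the guess.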
- Product Signed Domination in Probabilistic Neural Networks
Abstract: Publication date: Mar 2024
Source:Mathematics and Statistics Volume 12 Number 2 T. M. Velammal, A. Nagarajan and K. Palani Domination plays a very important role in graph theory. It has many applications in various fields like communication, social science, engineering, etc. Let G = (V, E) be a simple graph. A function f : V → {−1, 1} is said to be a product signed dominating function if each vertex v in V satisfies the condition that the product of f(u) over all u in N[v] equals 1, where N[v] denotes the closed neighborhood of v. The weight f(V) of a function f is defined as the sum of f(v) over all vertices v. The product signed domination number of a graph G is the minimum positive weight of a product signed dominating function. A product signed dominating function assigns 1 or −1 to the nodes of the graph. This variation of dominating function has applications in social networks of people or organizations. The Probabilistic Neural Network (PNN) was first proposed by Specht. This is a classifier that maps input patterns to a number of class levels and estimates the probability of a sample being part of a learned class. This paper studies the existence of product signed dominating functions in probabilistic neural networks and calculates the exact values of the product signed domination numbers of three-layered and four-layered probabilistic neural networks.
PubDate: Mar 2024
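The product signed domination number can be computed by brute force on small graphs: enumerate all ±1 assignments, keep those whose product over every closed neighborhood is 1, and take the minimum positive weight. A sketch for the 4-cycle (the paper's PNN graphs are larger, but the check is the same):

```python
from itertools import product as cartesian
from math import prod

def product_signed_domination_number(adj):
    """Minimum positive weight of f: V -> {-1, 1} with prod over N[v] equal to 1."""
    vertices = list(adj)
    best = None
    for values in cartesian([-1, 1], repeat=len(vertices)):
        f = dict(zip(vertices, values))
        if all(prod(f[u] for u in adj[v] | {v}) == 1 for v in vertices):
            w = sum(values)
            if w > 0 and (best is None or w < best):
                best = w
    return best

c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(product_signed_domination_number(c4))  # 4: only the all-ones function works here
```

For C4 each closed neighborhood has three vertices, and any −1 forces a neighborhood product of −1 somewhere, so the all-ones function is the only valid one.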
- Forecasts with SPR Model Using Bootstrap-Reversible Jump MCMC
Abstract: Publication date: Mar 2024
Source:Mathematics and Statistics Volume 12 Number 2 Suparman, Eviana Hikamudin, Hery Suharna, Aryanti In Hi Abdullah and Rina Heryani Polynomial regression (PR) is a stochastic model that has been widely used in forecasting in various fields. Stationary stochastic models play a very important role in forecasting, yet PR model parameter estimation methods have generally been developed for non-stationary PR models. This article aims to develop an algorithm to estimate the parameters of a stationary polynomial regression (SPR) model. The SPR model parameters are estimated using the Bayesian method. The Bayes estimator cannot be determined analytically because the posterior distribution for the SPR model parameters has a complex structure. The complexity of the posterior distribution is caused by the SPR model parameters having a variable-dimensional space. Therefore, this article uses the reversible jump MCMC algorithm, which is suitable for estimating the parameters of variable-dimensional models. Applying the reversible jump MCMC algorithm to big data requires many iterations. To reduce the number of iterations, the reversible jump MCMC algorithm is combined with the Bootstrap algorithm via the resampling method. The performance of the Bootstrap-reversible jump MCMC algorithm is validated using 2 simulated data sets. These findings show that the Bootstrap-reversible jump MCMC algorithm can estimate the SPR model parameters well. These findings contribute to the development of SPR models and SPR model parameter estimation methods, as well as to big data modeling. Further research can be done by replacing the Gaussian noise in SPR with non-Gaussian noise.
PubDate: Mar 2024
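The bootstrap ingredient of the algorithm above, resampling the data before each model-fitting pass, can be sketched with an ordinary polynomial least-squares fit standing in for the reversible jump MCMC step (which is far more involved and also selects the polynomial order):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.uniform(-1, 1, size=n)
y = 1 + 2 * x + 3 * x**2 + rng.normal(0, 0.1, size=n)  # known quadratic signal

B = 200
coefs = np.zeros((B, 3))
for b in range(B):
    idx = rng.integers(0, n, size=n)        # bootstrap resample of the data
    coefs[b] = np.polyfit(x[idx], y[idx], deg=2)

print(coefs.mean(axis=0))   # close to [3, 2, 1]; np.polyfit lists highest degree first
```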
- Decision Making with Parametric Reduction and Graphical Representation of
Neutrosophic Soft Set
Abstract: Publication date: Mar 2024
Source:Mathematics and Statistics Volume 12 Number 2 Sonali Priyadarsini, Ajay Vikram Singh and Said Broumi The neutrosophic soft set is one of the most significant mathematical approaches to describing uncertainty, and it has a multitude of practical applications in the realm of decision making. On the other hand, the decision-making process is often made more difficult and complex because these situations contain criteria that are less significant and more redundant. In neutrosophic soft set-based decision-making problems, parameter reduction is an efficient method for cutting down on redundant and superfluous factors, and it does so without damaging the decision-makers' ability to make decisions. In this work, a parametric reduction strategy is proposed. This approach lessens the difficulties associated with decision making while maintaining the existing order of available options. Because the decision sequence is maintained while the process of reduction is streamlined, utilizing this tactic results in an experience that is both less difficult and more convenient. This article demonstrates the applicability of this method by outlining a real-world decision-making dilemma and providing a solution for it. The article also discusses a novel method for dealing with neutrosophic soft graphs by merging graph theory with neutrosophic soft set theory. A graphical depiction of a neutrosophic soft set is provided, alongside an explanation of neutrosophic graphs and neutrosophic soft set graphs.
PubDate: Mar 2024
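The reduction idea, deleting a parameter without disturbing the decision ordering, can be sketched with a toy neutrosophic soft set. Each parameter assigns a (truth, indeterminacy, falsity) triple to each object; objects are ranked here by an assumed score T − F (one common choice, not necessarily the paper's), and a parameter whose removal leaves the ranking unchanged is dispensable. All names and numbers below are invented for illustration:

```python
# parameter -> {object: (T, I, F)}
nss = {
    "price":   {"A": (0.9, 0.2, 0.1), "B": (0.6, 0.3, 0.3), "C": (0.4, 0.1, 0.5)},
    "quality": {"A": (0.8, 0.1, 0.1), "B": (0.7, 0.2, 0.2), "C": (0.3, 0.2, 0.6)},
    "brand":   {"A": (0.5, 0.5, 0.5), "B": (0.5, 0.5, 0.5), "C": (0.5, 0.5, 0.5)},
}

def ranking(nss):
    """Rank objects by total score T - F summed over all parameters."""
    objs = next(iter(nss.values())).keys()
    score = {o: sum(t - f for (t, _, f) in (nss[p][o] for p in nss)) for o in objs}
    return sorted(objs, key=score.get, reverse=True)

full = ranking(nss)
reduced = ranking({p: v for p, v in nss.items() if p != "brand"})
print(full, reduced, full == reduced)   # 'brand' is dispensable: ranking preserved
```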
- Recursive Estimation of the Multidimensional Distribution Function Using
Bernstein Polynomial
Abstract: Publication date: Mar 2024
Source:Mathematics and Statistics Volume 12 Number 2 D. A. N. Njamen, B. Baldagaï, G. T. Nguefack and A. Y. Nana The recursive method known as the stochastic approximation method can be used, among other things, for constructing recursive nonparametric estimators. Its aim is to ease the updating of the estimator when moving from a sample of size n to n + 1. Some authors have used it to estimate density and distribution functions, as well as univariate regression, using Bernstein polynomials. In this paper, we propose a nonparametric approach to multidimensional recursive estimation of the distribution function using Bernstein polynomials and the stochastic approximation method. We determine an asymptotic expression for the first two moments of our estimator of the distribution function, and then give some of its properties, such as the first- and second-order moments, the bias, the mean square error (MSE), and the integrated mean square error (IMSE). We also determine the optimal choice of parameters for which the MSE is minimal. Numerical simulations are carried out and show that, under certain conditions, the estimator obtained converges to the usual laws and is faster than other methods in the case of the distribution function. However, much work remains on this topic, including studies of the convergence properties of the proposed estimator, estimation of the recursive regression function, the development of a new estimator of a regression function based on Bernstein polynomials using the semi-recursive estimation method, and new recursive estimators of the distribution, density and regression functions when the variables are dependent.
PubDate: Mar 2024
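The building block behind the recursive estimator, the (non-recursive) Bernstein polynomial smoother of an empirical distribution function, can be sketched in a few lines: F̂_m(x) = Σ_k F_n(k/m)·C(m,k)·x^k·(1−x)^{m−k} for data supported on [0, 1]. This sketch omits the stochastic-approximation recursion itself:

```python
import math
import numpy as np

rng = np.random.default_rng(5)
data = rng.uniform(0, 1, size=2000)

def ecdf(t):
    """Empirical distribution function of the sample."""
    return np.mean(data <= t)

def bernstein_cdf(x, m=30):
    """Bernstein polynomial smoothing of the empirical CDF on [0, 1]."""
    return sum(ecdf(k / m) * math.comb(m, k) * x**k * (1 - x)**(m - k)
               for k in range(m + 1))

print(bernstein_cdf(0.5))   # near 0.5 for Uniform(0, 1) data
```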
- Applications of Onto Functions in Cryptography
Abstract: Publication date: Mar 2024
Source:Mathematics and Statistics Volume 12 Number 2 K Krishna Sowmya and V Srinivas The concept of onto functions plays a very important role in the theory of analysis and has rich applications in many engineering and scientific techniques. In this paper, we propose a new application in the field of cryptography by using onto functions on algebraic structures like rings and fields to obtain a strong encryption technique. A new symmetric cryptographic system based on Hill ciphers is developed using onto functions with two keys (primary and secondary) to enhance security. This is the first algorithm in cryptography developed using onto functions, and it ensures strong security for the system while maintaining the simplicity of the existing Hill cipher. The concept of using two keys is also novel in symmetric key cryptography. The usage of onto functions in the encryption technique gives the algorithm a high level of security, which is discussed through different examples. The original Hill cipher is obsolete in present-day technology and now serves a pedagogical purpose, whereas the newly proposed algorithm can be safely used with present-day technology. Vulnerability to different types of attacks and the cardinality of the key spaces are also discussed.
PubDate: Mar 2024
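The classical Hill cipher that the paper builds on is matrix multiplication modulo 26; the two-key onto-function enhancement itself is not reproduced here. A minimal single-key sketch with the textbook key [[3, 3], [2, 5]] (invertible since its determinant 9 is coprime to 26):

```python
import numpy as np

M = 26
KEY = np.array([[3, 3], [2, 5]])   # det = 9, invertible mod 26

def inv_key(K):
    """Inverse of a 2x2 key matrix modulo 26 via the adjugate."""
    det = int(round(np.linalg.det(K))) % M
    det_inv = pow(det, -1, M)      # modular inverse of the determinant
    adj = np.array([[K[1, 1], -K[0, 1]], [-K[1, 0], K[0, 0]]])
    return (det_inv * adj) % M

def crypt(text, K):
    """Encrypt/decrypt an even-length uppercase A-Z string in blocks of 2."""
    nums = [ord(c) - ord('A') for c in text]
    out = []
    for i in range(0, len(nums), 2):
        block = np.array(nums[i:i + 2])
        out.extend((K @ block) % M)
    return ''.join(chr(int(v) + ord('A')) for v in out)

cipher = crypt("HELP", KEY)
print(cipher)                        # HIAT
print(crypt(cipher, inv_key(KEY)))   # HELP
```

Decryption is the same routine with the inverse key, which is what makes the scheme symmetric.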
- A Pivotal Operation on Triangular Fuzzy Number for Solving Fuzzy Nonlinear
Programming Problems
Abstract: Publication date: Mar 2024
Source:Mathematics and Statistics Volume 12 Number 2 D. Bharathi and A. Saraswathi Fuzzy nonlinear programming plays a vital role in decision-making where uncertainties and nonlinearity significantly impact outcomes. Real-world situations often involve imprecise or vague information. Fuzzy nonlinear programming allows for the representation of uncertainty through fuzzy sets, enabling more accurate modeling of real-world complexities. Many optimization problems exhibit nonlinear relationships among variables. Fuzzy nonlinear programming addresses these complex relationships, providing solutions that linear programming methods cannot accommodate. This research article proposes a method for Fuzzy Non-Linear Programming Problems (FNLPP) in the environment of triangular fuzzy numbers, based on a pivotal operation combined with Wolfe's technique. Fuzzy nonlinear programming is an area of study that deals with optimization problems in which the objective function and constraints involve fuzzy numbers, which represent uncertainty or vagueness in real-world data. We claim that the proposed method is easier to understand and apply than existing methods for solving similar problems that arise in real-life situations. To demonstrate the effectiveness of the method, we solve a numerical example and provide illustrations in the paper. The proposed method aims to address such complexities and find solutions to these problems more efficiently.
PubDate: Mar 2024
- Convergence of Spectral-Grid Method for Burgers Equation with
Initial-Boundary Conditions
Abstract: Publication date: Mar 2024
Source:Mathematics and Statistics Volume 12 Number 2 Chori Normurodov Akbar Toyirov Shakhnoza Ziyakulova and K. K. Viswanathan In this study, the initial-boundary value problem for the Burgers equation is solved using a theoretical substantiation of the spectral-grid method. Using the theory of Green's functions, an operator equation of the second kind is obtained with the corresponding initial-boundary conditions for the continuous problem. To solve the differential problem approximately, the spectral-grid method is used: a grid is introduced on the integration interval, and approximate solutions of the differential problem on each of the grid elements are represented as finite series in Chebyshev polynomials of the first kind. At the internal nodes of the grid, continuity of the approximate solution and its first derivative is required. The corresponding boundary conditions are satisfied at the boundary nodes. A discrete analogue of the operator equation of the second kind is obtained using the spectral-grid method. Convergence theorems for the spectral-grid method are proven and estimates of the method's convergence rate are obtained. To discretize the Burgers equation in time on the interval [0, T], a grid with a uniform step τ is introduced, i.e., t_j = jτ for j = 0, 1, ..., M, where M is a given number. Numerical calculations have been carried out at sufficiently low values of viscosity, which cannot be handled by other numerical methods. The high accuracy and efficiency of the spectral-grid method in solving the initial-boundary value problem for the Burgers equation are shown.
PubDate: Mar 2024
- Hyers-Ulam Stability of the Hexic-Quadratic-Additive Mixed-Type Functional
Equation in Non-Archimedean Normed Spaces
Abstract: Publication date: Jul 2024
Source:Mathematics and Statistics Volume 12 Number 4 Koushika Dhevi S and Sangeetha S Functional equations are important and exciting concepts in mathematics. They make it possible to investigate fundamental algebraic operations and create fascinating solutions. The concept of functional equations develops further creative methods and techniques for resolving issues in information theory, finance, geometry, wireless sensor networks, and other domains, including algebra and analysis. In recent decades, several authors in many domains have covered the study of various types of stability. Many authors have studied the stability of various functional equations in great detail, with the traditional (Archimedean) case revealing more fascinating results. Recently, researchers have used non-Archimedean normed spaces (NANS) to study the corresponding stability problems for various functional equations. In this research, we examine the Hyers-Ulam stability of the hexic-quadratic-additive mixed-type functional equation, for a fixed parameter satisfying the stated conditions, in NANS, and also provide some suitable counterexamples.
PubDate: Jul 2024
- Homogeneous Spaces and Induced Transformation Groups of S-Topological
Transformation Group
Abstract: Publication date: Jul 2024
Source:Mathematics and Statistics Volume 12 Number 4 C. Rajapandiyan and V. Visalakshi This paper explores the homogeneous spaces and induced transformation groups of the S-topological transformation group. An S-topological transformation group is a structure constructed by concatenating a topological group with a topological space through a semi totally continuous action. It is shown that any map from a topological group to the quotient group of a finite Hausdorff topological group by the isotropy group is surjective, continuous and open, and it has been proven that any map from the quotient group of a finite Hausdorff topological group by the isotropy group to the homogeneous space is both an H-isomorphism and semi totally continuous. Furthermore, an equivariant map between homogeneous spaces has been established, and the partial order relation on the family of all Hausdorff homogeneous spaces for a compact Hausdorff topological group is discussed. Subsequently, an induced S-topological transformation group is constructed by an induced H-action. For any compact subgroup K of a topological group H, it is verified that any map from the topological space Y to the orbit space of the K-action is continuous and a K-map. For any H-space, K-map and induced S-topological transformation group, it is proved that there is a unique semi totally continuous H-map. Additionally, it is shown that for a topological group, a subgroup K of the topological group and a K-space, there is a unique H-space and a unique injective K-map, and it is also established that for an H-space and a semi totally continuous K-map, there exists a unique semi totally continuous H-map. Finally, it is demonstrated that for a finite Hausdorff topological group, a finite Frechet space and an M-space, any map from the orbit space of the M-action to that of the N-action is semi totally continuous, for subgroups M and N of the topological group.
PubDate: Jul 2024
- Incident Vertex Pi Coloring of Graph Families: Fan, Book, Gear, Windmill,
Dutch Windmill and Crown Graph
Abstract: Publication date: Jul 2024
Source:Mathematics and Statistics Volume 12 Number 4 Sunil B. Thakare Archana Bhange and H. R. Bhapkar In graph theory, the notion of graph coloring plays an important role and has several applications in the fields of science and engineering. Since the concept of map coloring was first proposed, many researchers have invented a wide range of graph coloring techniques, among which are vertex coloring, edge coloring, total coloring, perfect coloring, list coloring, acyclic coloring, strong coloring, radio coloring, and rank coloring; these are some of the important methods that color a graph's vertices, edges, and regions under certain conditions. One of these coloring methods is Incident Vertex PI coloring: a coloring function from the set of pairs of incident vertices of every edge of a graph to the power set of colors. This method ensures that all vertices are properly colored, with the additional condition that the ordered pairs of vertices of all edges receive distinct colors. Many types of graphs are defined in graph theory. In this paper, we discuss the Incident Vertex PI Coloring numbers for a class of graph families: the Fan graph, Book graph, Gear graph, Windmill graph, Dutch Windmill graph and Crown graph.
PubDate: Jul 2024
- An Extension of The Hesitant Fuzzy Weight Averaging Operator-VIKOR Method
under Hesitant Fuzzy Sets
Abstract: Publication date: Jul 2024
Source:Mathematics and Statistics Volume 12 Number 4 Rafi Raza Ahmad Termimi Ab Ghani and Lazim Abdullah The hesitant fuzzy set (HFS) is an innovative approach to decision-making under uncertainty. This study addresses the aggregated operation of the HFS decision matrix. The introduction of induced VIKOR procedures, various extensions of HFS aggregation operators, and essential approaches for multi-criteria decision-making (MCDM) are presented. This technique uses the hesitant fuzzy weight averaging (HFWA) aggregation operator to rank alternatives and identify the compromise solution that comes closest to the ideal solution. We developed the hesitant fuzzy weight averaging VIKOR (HFWA-VIKOR) model as a novel technique to achieve this. By combining the hesitant fuzzy elements, the HFWA aggregation operator creates aggregated values that are expressed as a single value. The primary advantage of the HFWA-VIKOR model lies in its initial step of aggregating the hesitant fuzzy elements. This results in an initial hesitant fuzzy decision matrix, which provides much more detailed information for decision-making and, through the use of the inducing HFWA operator, represents the complex attitudinal nature of the decision-makers. The multi-criteria location selection problem is then solved using the combined HFWA-VIKOR technique, and the outcomes are presented in an easy-to-understand way owing to the aggregation operators. A numerical example is also solved with the new method, which yields the best alternative. Within the scope of this work, MCDM under hesitant fuzzy sets with the HFWA-VIKOR method has been used, and the results reveal the best alternative. These results indicate good potential for the stated objectives. This technique may also be used in other studies or applications, and further research in this area may provide a more developed technique for this application.
PubDate: Jul 2024
- On the Jaulent-Miodek System for Fluid Mechanics Using Combination of
Adomian Decomposition and Padé Techniques
Abstract: Publication date: Jul 2024
Source:Mathematics and Statistics Volume 12 Number 4 Ahmed J. Sabali Saad A. Manaa and Fadhil H. Easif Solving nonlinear partial differential equations (PDEs) is crucial in various scientific and engineering domains. The Adomian Decomposition Method (ADM) has emerged as a promising technique for tackling such problems. However, its effectiveness diminishes over extended time intervals due to divergence issues. This limitation hampers its practical applicability in real-world scenarios where stable and accurate numerical solutions are essential. To address the divergence problem associated with ADM, this research explores the combination of the Adomian Decomposition Method (ADM) with the Padé technique – a method known for its accuracy and efficiency. This combination's purpose is to mitigate ADM's shortcomings, particularly when dealing with extended time intervals. Experimental analysis was conducted across varying time intervals to compare the performance of the combined technique with traditional ADM. Mathematica software was used to obtain all calculations, including the creation of tables and figures. Results from the experiments demonstrate the superiority of the combined technique in producing accurate results regardless of the time interval used. Furthermore, the combined method improves accuracy and ensures result stability over long time intervals, creating new possibilities for its use in scientific and engineering fields. This research contributes to the field by offering a solution to the divergence issue associated with ADM, thereby enhancing its applicability in solving nonlinear PDEs. While acknowledging limitations such as reliance on numerical simulations, the study highlights the practical implications of its findings, including more accurate predictions and modeling in complex systems, with potential social implications in decision-making and problem-solving contexts.
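The ADM-Padé idea rests on converting a truncated power series into a rational [L/M] approximant, which typically stays accurate far beyond the series' radius of usefulness. A minimal sketch, using the exponential series as a stand-in for an ADM partial solution (the actual Jaulent-Miodek computation is not reproduced here):

```python
# [L/M] Pade approximant from truncated series coefficients c[0..L+M],
# the post-processing step used to stabilize ADM partial sums.
import numpy as np

def pade(c, L, M):
    """Return numerator a (len L+1) and denominator b (len M+1, b[0] = 1)."""
    c = np.asarray(c, dtype=float)
    # Denominator: solve sum_{j=1..M} b_j c_{L+k-j} = -c_{L+k}, k = 1..M
    A = np.array([[c[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)] for k in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])))
    # Numerator: a_k = sum_{j=0..min(k,M)} b_j c_{k-j}
    a = np.array([sum(b[j] * c[k - j] for j in range(min(k, M) + 1))
                  for k in range(L + 1)])
    return a, b

c = [1, 1, 1/2, 1/6, 1/24]           # truncated series of exp(x)
a, b = pade(c, 2, 2)
approx = np.polyval(a[::-1], 1.0) / np.polyval(b[::-1], 1.0)
print(approx)                         # ~2.7143, vs e ~ 2.71828
```

The [2/2] approximant of exp(x) at x = 1 already recovers e to within about 0.15%, while the raw degree-4 partial sum is noticeably worse; this is the kind of gain the combined method exploits over long time intervals.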
PubDate: Jul 2024
- Viscosity Approximation Methods for Generalized Modification of the System
of Equilibrium Problem and Fixed Point Problems of an Infinite Family of
Nonexpansive Mappings
Abstract: Publication date: Jul 2024
Source:Mathematics and Statistics Volume 12 Number 4 Prashant Patel and Rahul Shukla Fixed points (FP) of infinite families of nonexpansive mappings find diverse applications across various disciplines. In economics, they help to find stable prices and quantities in markets. In game theory, fixed points help to find Nash equilibria. In computer science, fixed points are used to understand program meanings and help in making better algorithms for tasks like data analysis, checking models, and improving compilers. Solutions to equilibrium problems have practical uses in various areas. For instance, in physics, these solutions assist in analyzing systems at rest or in motion. In engineering, they aid in designing structures that can withstand forces without collapsing, ensuring safety and stability in construction projects. The main aim of the article is to present the concept of generalized modification of the system of equilibrium problems (GMSEP) for an infinite family of nonexpansive mappings. In this paper, we study viscosity approximation methods and present a new algorithm to find a common element of the fixed point of an infinite family of nonexpansive mappings and the set of solutions of generalized modification of the system of equilibrium problem in the setting of Hilbert spaces. Under some conditions, we prove that the sequence generated by the algorithm converges strongly to this common solution.
PubDate: Jul 2024
- On Zagreb Energy of Certain Classes of Graphs
Abstract: Publication date: Jul 2024
Source:Mathematics and Statistics Volume 12 Number 4 S. Sripriya and A. Anuradha The energy of a graph G is the sum of the absolute values of the eigenvalues of its adjacency matrix. Given a simple connected graph G, its first (second) Zagreb matrix is constructed by taking the sum (product) of the degrees of each pair of adjacent vertices of G. Computing the sum of the absolute eigenvalues of these matrices yields the corresponding Zagreb energies. In this paper, the first and second Zagreb energies of certain families of graphs have been computed and a criterion to discern the nature of a graph G based on these energies is obtained. The paper focuses on a comparative analysis of the first and second Zagreb energies for regular graphs such as cycle graphs, bipartite and tripartite graphs. Our findings reveal that the second Zagreb energy is always greater than the first Zagreb energy for all complete bipartite graphs of even order greater than or equal to 4, and we have established the same for complete tripartite graphs. Furthermore, we illustrate that the two Zagreb energies coincide for the complete bipartite graph with equal partite sets if and only if the graph is of order 2. Additionally, we provide a criterion leading to an infinite set of non-isomorphic Zagreb equi-energetic graphs for all r>1 within partite graphs. The computation of the two Zagreb energies for graph operations such as the t-splitting graph and t-shadow graph is also illustrated. The first and second Zagreb energies for some specific graphs, along with bounds on the Zagreb energies of wheel graphs, are also discussed.
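Following the definitions in the abstract, the two Zagreb energies are straightforward to compute numerically. The sketch below uses the complete bipartite graph K_{2,3} purely as an illustration (not necessarily one of the paper's worked cases):

```python
# First and second Zagreb energies per the abstract's definitions:
# Z1[i,j] = d_i + d_j and Z2[i,j] = d_i * d_j on edges, zero elsewhere;
# energy = sum of absolute eigenvalues.
import numpy as np

def zagreb_energies(A):
    d = A.sum(axis=1)
    Z1 = (d[:, None] + d[None, :]) * A    # mask keeps only adjacent pairs
    Z2 = (d[:, None] * d[None, :]) * A
    e1 = np.abs(np.linalg.eigvalsh(Z1)).sum()
    e2 = np.abs(np.linalg.eigvalsh(Z2)).sum()
    return e1, e2

# adjacency matrix of the complete bipartite graph K_{2,3}
A = np.zeros((5, 5))
A[:2, 2:] = 1
A += A.T
e1, e2 = zagreb_energies(A)
print(e1, e2)   # 10*sqrt(6) ~ 24.49 and 12*sqrt(6) ~ 29.39
```

Here every edge joins a degree-3 vertex to a degree-2 vertex, so Z1 = 5A and Z2 = 6A, and the second Zagreb energy exceeds the first, consistent with the comparisons the paper makes for bipartite families.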
PubDate: Jul 2024
- Some New Oscillation Criteria for Euler-Bernoulli Beam Equations with
Damping Term
Abstract: Publication date: Jul 2024
Source:Mathematics and Statistics Volume 12 Number 4 S. Priyadharshini V. Sadhasivam and K. K. Viswanathan The main objective of this study is to investigate some new oscillation criteria for Euler-Bernoulli beam equations with a damping term by using the integral average method and the Riccati technique. The class of integral operators introduced by Philos is the main tool in this paper. Our plan of action is to reduce the multidimensional problems to ordinary differential problems by using Jensen's inequality, the stated assumptions, and integration by parts with boundary conditions. With hinged, sliding and hinged-sliding end boundary conditions, several new sufficient conditions are established. The results improve and generalize those given in some previous papers, as shown by the examples at the end of this paper. The majority of engineering constructions, ships, support buildings, airplanes, and rotor blades use beams as structural elements. It is presumed that these elements are only subjected to static loads; yet dynamic loads induce vibrations, which affect the stress and strain values. These mechanical phenomena also result in noise, instability, and the potential for resonance, which enhances deflections and failure. We analyze the spatial force load in the equations of a damped Euler-Bernoulli beam, derived from the equation for the measured velocity or final-time displacement. Usually, internal damping determines the nature of this term.
PubDate: Jul 2024
- Spatial Autoregressive Model with Mixture of Gaussian Distribution for the
Random Effect: Formulation, Estimation and Application
Abstract: Publication date: Jul 2024
Source:Mathematics and Statistics Volume 12 Number 4 Prem Antony J and Edwin Prabakaran T Spatial econometrics is pivotal in understanding spatial dependencies across diverse fields like urban economics, environmental economics, and disease spread. This study highlights the significance of spatial grouping for data management and pattern detection, particularly in epidemiological analysis and policy planning. The Spatial Autoregressive random effect (SAR-RE) model is a classical model for analysing datasets with repeated observations across units over time, particularly when these units are situated in a spatial context. Mixture effect models account for the presence of different sub-groups within the overall population, each of which has a unique response pattern. In this paper, the proposed methodology integrates the SAR-RE model into a mixture framework, allowing for the consideration of diverse spatial patterns and class-specific coefficients. By incorporating class-specific coefficients, the model accommodates heterogeneous spatial structures within the data, providing a more nuanced understanding of spatial dependencies. The spatial autoregressive model, with the assumption that the random effect follows a mixture of Gaussian distributions, is developed to analyse panel data with spatial dependency and unobserved heterogeneity. The parameters of the model are estimated using an EM algorithm based on the Limited-Memory BFGS (L-BFGS) quasi-Newton method to ensure good convergence of the estimates. The classification of subjects into different latent classes is carried out based on their posterior probabilities. The model is applied to state-wise COVID-19 confirmed rates, revealing insightful patterns. The analysis employs the estimated model for the interpretation and comprehensive understanding of spatially dependent panel data with unobserved heterogeneity.
The results of the empirical study show that the proposed model outperforms the existing model based on performance metrics criteria.
PubDate: Jul 2024
- Construction of Bivariate Transmuted Frechet Distribution with its
Properties
Abstract: Publication date: Jul 2024
Source:Mathematics and Statistics Volume 12 Number 4 Hayfa Abdul Jawad Saieed Mhasen Saleh Altalib Safwan Nathem Rashed and Manaf Hazim Ahmed In multivariate data modeling, the statistical analyst may wish to construct a multivariate distribution with correlated variables. For this reason, there is a need to generalize univariate distributions, but this generalization is not easy. Many methods have been presented for the construction of continuous multivariate families from univariate distributions. Some of these methods are based on a single baseline, while others are based on more than one baseline, so that their variables are dependent. Some authors have been interested in expanding a univariate transmuted family to the multivariate case. Some suggestions were made about extending the univariate quadratic transmuted (QT) family to bivariate versions, and another modification was made to this family by replacing the c.d.f. with an exponentiated c.d.f. Another construction of a bivariate family is based on the probability distribution of paired order statistics for a sample of size two drawn from a quadratic ranked transmuted (QRT) margin; this bivariate family allows for positive and negative dependence between variables. Another family was proposed as an extension of a univariate mixture of standard continuous uniform distributions, with decreasing densities, to the bivariate case. Our proposed bivariate cubic transmuted (CT2) family reduces to a bivariate quadratic transmuted (QT2) family if the cubic transmutation parameters equal zero. The CT2 family can be used for modeling positively and negatively correlated variables. Some statistical properties of the CT2 family have been studied, comprising joint, marginal and conditional c.d.f. and p.d.f.; joint, marginal and conditional moments; data generation; and dependence coefficients. It is seen that the joint, marginal and conditional moments depend on the raw moments of the baseline variables and of the largest order statistics of samples of sizes 2 and 3. 
The Egyptian bivariate economic data are fitted by the CT2Fr, FGMFr, T2Fr and DSASFr distributions. The CT2Fr provides the best fit, having the smallest AIC and BIC values.
PubDate: Jul 2024
- Mixture of Ailamujia and Size Biased Ailamujia Distributions: Estimation
and Application
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 Bader Alruwaili In this article, we introduce a new model: a mixture of the Ailamujia and size-biased Ailamujia distributions. We present and discuss some statistical properties of this mixture, such as moments, skewness, and kurtosis. We also provide graphical and numerical results to illustrate the behavior of the proposed mixture and its properties, along with some reliability analysis results. The parameters of the mixture are estimated by using the maximum likelihood method. The usefulness of the proposed combination is illustrated with a real-life dataset: we fit the Ailamujia distribution, the size-biased Ailamujia distribution, and the proposed mixture to the data and compare them under different criteria. The results show that the proposed mixture fits the dataset better than either the Ailamujia distribution or the size-biased Ailamujia distribution alone.
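As a rough sketch of the construction, assuming the common parameterization f(x) = 4t²x·exp(−2tx) of the Ailamujia density with mean 1/t (an assumption taken from the wider literature, not from this paper), the size-biased density is x·f(x)/E[X] and the mixture is a convex combination of the two:

```python
# Mixture of an assumed Ailamujia density and its size-biased version.
# Both parameterizations are assumptions reconstructed from the standard
# Ailamujia literature, not taken from the paper itself.
import numpy as np
from scipy.integrate import quad

def ailamujia(x, t):
    return 4 * t**2 * x * np.exp(-2 * t * x)      # assumed baseline pdf

def size_biased(x, t):
    return 4 * t**3 * x**2 * np.exp(-2 * t * x)   # x * f(x) * t, since E[X] = 1/t

def mixture(x, t, p):
    return p * ailamujia(x, t) + (1 - p) * size_biased(x, t)

t, p = 1.5, 0.4
total, _ = quad(mixture, 0, np.inf, args=(t, p))
mean, _ = quad(lambda x: x * mixture(x, t, p), 0, np.inf)
print(total, mean)   # ~1.0 and p/t + (1-p)*3/(2t) ~ 0.8667
```

The numerical check confirms the mixture integrates to one and that its mean is the weighted average of the component means, 1/t and 3/(2t).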
PubDate: Jan 2024
- The ARCH Model for Analyzing and Forecasting Temperature Data
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 Ali Sadig Mohommed Bager The chaotic nature of the earth's atmosphere and the significant impact of weather on various fields necessitate accurate weather forecasting. Time series analysis plays a crucial role in predicting future values based on past data. The Autoregressive Conditional Heteroscedasticity (ARCH) model is widely used for forecasting, especially in the field of temperature analysis. This study focuses on the ARCH model for analyzing and forecasting temperature changes. The ARCH model is selected based on its ability to capture the regular variations in the predictability of meteorological variables. The methodology section explains the ARCH model and various statistical tests used, such as the heteroscedasticity test (ARCH test), Jarque-Bera test, and Augmented Dickey-Fuller test (ADF). A sample study is conducted on monthly average temperature data from Athenry, Ireland, over a period of four years. The study utilizes the ARCH model to calculate temperature series volatility and assesses the model's performance using goodness-of-fit measures and predictive accuracy. The results show that the ARCH model successfully predicts temperature changes for three years, as indicated by the forecasted temperature series. The statistical performance of the ARCH model is evaluated using in-sample and out-of-sample analyses, demonstrating its effectiveness in capturing temperature variations. The study highlights the importance of time series forecasting and the significant impact of the ARCH model in temperature analysis.
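The conditional-variance recursion underlying an ARCH model can be illustrated with a short simulation; the ARCH(1) parameters below are made up for illustration and are not the fitted values from the Athenry data:

```python
# Minimal ARCH(1) illustration: y_t = sigma_t * e_t with
# sigma_t^2 = a0 + a1 * y_{t-1}^2, the conditional-variance recursion
# on which the abstract's forecasting model is built.
import numpy as np

rng = np.random.default_rng(42)
a0, a1, n = 0.5, 0.3, 200_000          # illustrative parameters only

y = np.zeros(n)
for t in range(1, n):
    sig2 = a0 + a1 * y[t - 1] ** 2     # conditional variance update
    y[t] = np.sqrt(sig2) * rng.standard_normal()

print(y.var(), a0 / (1 - a1))          # sample vs. unconditional variance
```

The long-run sample variance matches the theoretical unconditional variance a0/(1 − a1), even though the conditional variance changes at every step; it is exactly this clustering of volatility that the ARCH test in the methodology section detects.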
PubDate: Jan 2024
- Moments of Gaussian Distributions for Small and Large Sample Sizes
Revisited
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 Florian Heiser and E W Knapp Central moments of statistical samples provide coarse-grained information on the width, symmetry and shape of the underlying probability distribution. They need appropriate corrections to fulfil two conditions: (1) yielding correct limiting values for large samples; (2) yielding these values also when averaged over many samples of the same size. We provide correct expressions for unbiased central moments up to fourth order and provide an unbiased expression for the kurtosis, which is generally available only in a biased form. We have verified the derived general expressions by applying them to the Gaussian probability distribution (GPD), and we show how unbiased central moments and kurtosis behave for finite samples. For this purpose, we evaluated precise distributions of all four moments for finite samples of the GPD. These distributions are based on up to 3.2×10^8 randomly generated samples of specific sizes. For large samples, these moment distributions become Gaussians whose second moments decay with the inverse sample size. We parameterized the corresponding decay laws. Based on these moment distributions, we demonstrate how p-values can be computed to compare the values of mean and variance evaluated from a sample with the corresponding expected values. We also show how one can use p-values for the third moment to investigate the symmetry and for the fourth moment to investigate the shape of the underlying probability distribution, certifying or ruling out a Gaussian distribution. All of this extends the practical use of statistical moments. Finally, we apply the evaluation of p-values to a dataset of the percentage of people aged 65 and above in the 50 states of the USA.
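For orientation, the standard bias-adjusted sample skewness G1 and excess kurtosis G2 can be sketched as follows; these are the textbook finite-sample formulas, not the authors' full derivation of unbiased central moments:

```python
# Bias-corrected sample skewness G1 and excess kurtosis G2 -- the
# standard finite-sample corrections this line of work refines.
import numpy as np
from scipy import stats

def corrected_skew_kurt(x):
    n = len(x)
    m = x.mean()
    m2 = ((x - m) ** 2).mean()                 # biased central moments
    m3 = ((x - m) ** 3).mean()
    m4 = ((x - m) ** 4).mean()
    g1, g2 = m3 / m2 ** 1.5, m4 / m2 ** 2 - 3  # biased skew / excess kurtosis
    G1 = g1 * np.sqrt(n * (n - 1)) / (n - 2)
    G2 = (n - 1) / ((n - 2) * (n - 3)) * ((n + 1) * g2 + 6)
    return G1, G2

rng = np.random.default_rng(0)
x = rng.normal(size=50)
G1, G2 = corrected_skew_kurt(x)
print(G1, G2)   # agree with scipy.stats.skew/kurtosis with bias=False
```

Both corrected statistics have expectation zero under a Gaussian, which is what makes them usable for the p-value tests of symmetry and shape described in the abstract.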
PubDate: Jan 2024
- Functional Continuum Regression Approach to Wavelet Transformation Data in
a Non-Invasive Glucose Measurement Calibration Model
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 Ismah Ismah Erfiani Aji Hamim Wigena and Bagus Sartono Functional data has a large-dimensional structure and is a broad source of information, but problems can arise in analyzing it. Functional continuum regression is an alternative method that can be used for calibration modeling with functional data. This study aimed to determine the robustness of functional continuum regression in overcoming multicollinearity, or cases where the number of independent variables exceeds the number of observations, with functional data. The research method is the application of functional continuum regression to the wavelet transform of non-invasive blood glucose measurements in a calibration model, compared with the non-functional methods principal component regression, partial least squares regression and least squares regression, and with the functional method of functional regression. For all five methods, the root mean square error of prediction (RMSEP), the correlation between the observed data and the estimated observations, and the mean absolute error (MAE) were obtained. The analysis shows that reduction methods such as functional continuum regression, principal component regression, and partial least squares regression are superior when multicollinearity occurs or the number of independent variables exceeds the number of observations. For functional data analysis, functional continuum regression is preferable because it does not eliminate data patterns. Thus, functional continuum regression is an effective approach for analyzing calibration models, which generally involve functional data together with multicollinearity or more independent variables than observations.
PubDate: Jan 2024
- Derivation and Evaluation of Monte Carlo Estimators of the Scattering
Equation Using the Ward BRDF and Different Sample Allocation Strategies
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 Carlos Lopez Garces and Nayeong Kong This paper investigates three distinct Monte Carlo estimators derived from the research of Sbert et al. These estimators are specifically tailored to the scattering equation using the Ward Bidirectional Reflectance Distribution Function (BRDF) integrated with a designed cosine-weighted environment map. In this paper, we have two goals: first, to bridge the gap between theoretical foundations and practical applicability by showing how these estimators can be seamlessly integrated as extensions to the acclaimed PBRT renderer; second, to measure their real-world performance. We aim to validate our methodology by comparing rendered images, with their varying convergence rates and deviations, to the results of Sbert et al. This validation ensures the robustness and reliability of our approaches. We analyze the analytical structure of these estimators to derive their precise form. We then implement the three estimators as extensions to the PBRT renderer, subjecting them to numerical evaluation. We further evaluate the estimator set and sampling strategy by utilizing another pair of incident radiance functions and BRDFs. The final step is to generate rendered images from the implementation to verify the results observed by Sbert et al. and extend them with this new pair of functions.
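The effect of matching the sampling density to the integrand can be sketched on a toy hemispherical integral; this omits the Ward BRDF entirely and only contrasts a uniform estimator with a zero-variance cosine-weighted one, in the spirit of the estimator comparisons above:

```python
# Two Monte Carlo estimators of I = integral of cos(theta) d(omega)
# over the hemisphere, whose exact value is pi. The cosine-weighted
# estimator has a constant integrand/pdf ratio, hence zero variance.
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# uniform hemisphere sampling: pdf = 1/(2*pi), cos(theta) ~ U(0, 1)
cos_t = rng.random(N)
I_uniform = (cos_t / (1.0 / (2 * np.pi))).mean()

# cosine-weighted sampling: pdf = cos(theta)/pi, so every sample
# contributes exactly pi
I_cosine = np.full(N, np.pi).mean()

print(I_uniform, I_cosine)   # both ~ pi; the second exactly
```

This is the same variance-reduction principle behind pairing the Ward BRDF with a cosine-weighted environment map: the closer the sampling pdf tracks the integrand, the lower the estimator variance.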
PubDate: Jan 2024
- On Intuitionistic Hesitancy Fuzzy Graphs
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 Sunil M.P. and J. Suresh Kumar A graph is a basic representation of relationships between vertices and edges. This can be used when the relationships are normal and straightforward, but most real-life situations are rather complex, which calls for further developments in graph theory. The concept of a fuzzy graph addresses uncertainty to a certain extent, but situations arise in which we have to address complex hesitant situations, such as taking major decisions regarding the merging of companies. The Intuitionistic fuzzy graph (IFG) and Hesitancy fuzzy graph (HFG) were developed to resolve this uncertainty, but they also fell short in resolving problems related to hesitant situations. In this paper, we present the concepts of IFG and HFG, which serve as the foundation for introducing, defining and analysing the Intuitionistic hesitancy fuzzy graph (IHFG). We explore concepts such as λ-strong, δ-strong and ρ-strong IHFGs. We also make a detailed comparative study of the cartesian product and composition of HFGs and IHFGs, establishing essential theorems related to the properties of such products. We prove that the cartesian product and composition of two strong HFGs need not be a strong HFG, but the cartesian product and composition of two strong IHFGs is a strong IHFG. We also prove that if the cartesian product of two IHFGs is strong, then at least one of the IHFGs is strong, and that if the composition of two IHFGs is strong, then at least one of the IHFGs is strong. IHFG models provide exact and accurate results for taking apt decisions in problems involving hesitant situations.
PubDate: Jan 2024
- Complex Neutrosophic Fuzzy Set
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 V. Kalaiyarasan and K. Muthunagai The complex number system is an extension of the real number system which came into existence during attempts to find solutions of cubic equations. A set characterized by a membership (characteristic) function which assigns to each object a grade of membership ranging between zero and one is called a fuzzy set. A recent development of the fuzzy system is the complex fuzzy system, in which the membership function is complex-valued and its range is represented by the unit disk. The fuzzy similarity measure helps us find the closeness among fuzzy sets. Due to its wide range of applications to various fields, Fuzzy Multi Criteria Decision Making (FMCDM) has gained importance in fuzzy set theory. A combination of the complex fuzzy set, the fuzzy similarity measure and Fuzzy Multi Criteria Decision Making has resulted in this research contribution. In this article, we introduce and investigate the complex neutrosophic fuzzy set, which involves a complex-valued neutrosophic component. We discuss two real-life examples: one on selecting the variety of seed that gives the maximum yield and profit in a short period of time, and another on choosing the best company in which to invest. A similarity measure between complex neutrosophic fuzzy sets has been used to take the decision.
PubDate: Jan 2024
- A New Robust Interval Estimation for the Median of An Exponential
Population When Some of the Observations are Extreme Values
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 Faris Muslim Al-Athari Obtaining accurate interval estimates for the median of an exponential population when some of the observations are extreme values is an important problem for researchers in reliability applications and survival analysis. In this paper, a new method is proposed for obtaining a robust confidence interval that substitutes for the ordinary (classical) confidence interval when the sample contains extreme values. The proposed method simply replaces the sample mean by a constant multiple of the sample median and adjusts the upper percentile point of the chi-square distribution in the ordinary confidence-interval formula. The performance of the proposed method is evaluated against the ordinary one by Monte Carlo simulation based on 100,000 trials for each sample size, with 5% and 10% extreme values. Under the contaminated exponential distribution, the proposed method always outperforms the ordinary method, in the sense of having a simulated confidence probability quite close to the nominal confidence level together with a shorter width and a smaller standard error. An application of the proposed method to real-life data is presented and compared with the simulation results.
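The ordinary (classical) interval that the paper takes as its baseline can be sketched as follows. For an Exp(θ) sample of size n, 2nX̄/θ follows a chi-square distribution with 2n degrees of freedom, and the median is θ ln 2; inverting the pivot gives the classical interval. The robust median-based adjustment proposed by the author is not reproduced here, and the Wilson-Hilferty approximation stands in for exact chi-square quantiles to keep the sketch standard-library only:

```python
# Sketch of the ordinary confidence interval for the exponential median,
# checked by a small Monte Carlo run (far fewer trials than the paper's
# 100,000). Chi-square quantiles use the Wilson-Hilferty approximation.
import math
import random
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-square p-quantile."""
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * math.sqrt(2 / (9 * df))) ** 3

def median_ci(sample, level=0.95):
    """Classical CI for the exponential median m = theta * ln(2),
    from the pivot 2*n*mean/theta ~ chi-square(2n)."""
    n = len(sample)
    s = 2 * n * (sum(sample) / n) * math.log(2)
    a = (1 - level) / 2
    return s / chi2_quantile(1 - a, 2 * n), s / chi2_quantile(a, 2 * n)

random.seed(1)
theta, n, trials = 1.0, 20, 2000
true_median = theta * math.log(2)
cover = 0
for _ in range(trials):
    lo, hi = median_ci([random.expovariate(1 / theta) for _ in range(n)])
    cover += lo <= true_median <= hi
print(cover / trials)  # close to the nominal 0.95 for clean (uncontaminated) data
```

Contaminating the samples with extreme values degrades this classical interval, which is the failure mode the paper's robust method is designed to repair.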
PubDate: Jan 2024
- Communications to the Pseudo-Additive Probability Measure and the Induced
Probability Measure Realized by [·]
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 Dhurata Valera, Bederiana Shyti and Silvana Paralloj The Theory of Pseudo-Additive Measures has been studied by analyzing and evaluating significant results. The system of pseudo-arithmetic operations (SPAO), as a system generated by the generator [·], is shown directly by taking the results of Rybárik and Pap, but [·] is a further development of [·]. Using the meaning of entropy as a logarithmic measure in information theory, we present through examples the relation between the [·] and the entropy, realized by the [·], i.e. a [·]. The paper studies the construction of relationships between entropy and [·] supported by [·], and the connection with Shannon Entropy. For the pseudo-additive probability measure [·], using [·] as well as in the system [·] generated by [·], the problem of modification of this measure by [·] is addressed. The modifications of the Pseudo-Additive Probability Measure [·] and the Induced Probability Measure [·] supported by [·] are presented, showing the relationships between the two modifications of the Pseudo-Additive Probability Measure (PAPM) [·] and the Induced Probability Measure (IPM) [·]. Further, the Bi-Pseudo-Integral for [·] and the Lebesgue Integral are placed in relation to one another. (Mathematical symbols lost in extraction are marked [·].)
PubDate: Jan 2024
- Other New Versions of Generalized Neutrosophic Connectedness and
Compactness and Their Applications
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 Alaa. M. F. AL. Jumaili The concepts of neutrosophic connectedness and compactness between neutrosophic sets find extensive applications in various fields, including sensor networks, physics, mechanical engineering, robotics and data analysis involving numerous variables. Neutrosophic set theory also plays a pivotal role in addressing complex problems in engineering, environmental science, economics, and advanced mathematical disciplines. Hence, this paper extends the classical definitions of neutrosophic connectedness and compactness within neutrosophic topological spaces. We introduce new classes of neutrosophic connectedness and compactness, specifically neutrosophic δ-β-connectedness and neutrosophic δ-β-compactness, defined using a generalized neutrosophic open set known as the "neutrosophic δ-β-open set". We explore several essential properties and characterizations of these spaces and introduce new notions of neutrosophic covers, which lead to the concept of neutrosophic compact spaces. Additionally, we present characterizations related to neutrosophic δ-β-separated sets. A noteworthy feature of these concepts is their ability to model intricate connectedness networks and facilitate optimal solutions to problems involving a multitude of variables, each with degrees of acceptance, rejection, and indeterminacy. We provide relevant examples to illustrate our main findings.
PubDate: Jan 2024
- Some New Kind of Contra Continuous Functions in Nano Ideal Topological
Spaces
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 S. Manicka Vinayagam, L. Meenakshi Sundaram and C. Devamanoharan The main objective of this paper is to introduce a new type of contra continuous function, namely [·], based on the concepts of the [·] set and the [·] function in Nano Ideal Topological Spaces. Contra continuity is an alteration of continuity that requires inverse images of open sets to be closed rather than open. We compare the [·] function with the [·] function and establish the independence of the [·] and [·] functions by providing suitable counterexamples. Fundamental properties of [·] with [·] and [·] are investigated, and we study the behaviour of [·] with [·]. We define the [·] space and describe its relation to the [·] space and the [·] space. Characterizations of [·] based on the [·] space, the [·] space and a graph function, namely [·], are explored. Like continuity, [·] preserves the property of mapping [·] and [·] sets to sets of the same type in the co-domain. We define the [·] space and describe its behaviour over [·]. We also introduce [·] functions with an example, discuss their relation with [·], and analyse their basic properties. Compositions of functions under [·], [·] and [·] are examined. (Mathematical symbols lost in extraction are marked [·].)
PubDate: Jan 2024
- Homomorphism of [·] Neutrosophic Fuzzy [·] Subgroup over a Finite Group
Abstract: Publication date: Jan 2024
Source:Mathematics and Statistics Volume 12 Number 1 V Dhanya, M Selvarathi and M Ambika Neutrosophic fuzzy sets are an extension of fuzzy sets. Fuzzy sets can handle only vague information; they cannot deal with incomplete and inconsistent information. Neutrosophic fuzzy sets and their combinations, however, are one technique for handling incomplete and inconsistent information. Neutrosophic fuzzy set theory provides the groundwork for a whole family of new mathematical theories and subsumes both the classical and fuzzy counterparts. The area of neutrosophic fuzzy sets is accordingly being developed intensively, with the goal of strengthening the foundations of the theory, creating new applications, and enhancing its practicality in a range of real-life scenarios. A neutrosophic fuzzy set is characterized by three components: truth, indeterminacy and falsity. In this paper, we examine the idea of homomorphisms of implication-based neutrosophic fuzzy subgroups over a finite group. Implication-based neutrosophic fuzzy subgroups over a finite group and implication-based neutrosophic fuzzy normal subgroups over a finite group are defined. Finally, we demonstrate some basic properties of homomorphisms of implication-based neutrosophic fuzzy subgroups over a finite group. (Mathematical symbols lost in extraction are marked [·].)
PubDate: Jan 2024