Abstract: In recent years, many software design concerns have been grouped under the umbrella of code smells. Android application development experiences security issues related to code smells that lead to software vulnerabilities. This research focuses on vulnerability detection in Android applications that contain code smells. A multi-layer perceptron-based ANN model is generated for the detection of software vulnerabilities and achieves a precision of 74.7% and an accuracy of 79.6% with 2 hidden layers. The study covers 1390 Android classes and involves association mining of software vulnerabilities with Android code smells using the APRIORI algorithm. The findings show that the Member Ignoring Method (MIM) code smell is associated with the Bean Member Serialization (BMS) vulnerability, with an 86% confidence level and a 0.48 support value. An algorithm has also been proposed that would help developers detect software vulnerabilities in the smelly source code of Android applications at early stages of development. PubDate: Wed, 23 Feb 2022 00:00:00 GMT
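The support and confidence figures behind APRIORI-style association mining can be illustrated with a minimal, self-contained sketch; the transactions below are hypothetical toy data, not the study's 1390 classes.

```python
# Toy illustration of the support/confidence computation that underlies
# APRIORI-style association mining of code smells and vulnerabilities.
# Each "transaction" lists the smells/vulnerabilities detected in one class.
transactions = [
    {"MIM", "BMS"},
    {"MIM", "BMS"},
    {"MIM"},
    {"BMS"},
    {"MIM", "BMS"},
]

def support(itemset, transactions):
    """Fraction of transactions that contain every item in `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(antecedent, consequent, transactions):
    """support(antecedent | consequent) divided by support(antecedent)."""
    return (support(antecedent | consequent, transactions)
            / support(antecedent, transactions))

print(support({"MIM", "BMS"}, transactions))       # 0.6
print(confidence({"MIM"}, {"BMS"}, transactions))  # 0.75
```

On this toy data the rule MIM → BMS has support 0.6 and confidence 0.75; the paper reports 0.48 and 86% on its real dataset.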

Abstract: Glossary-of-terms extraction from textual requirements is an important step in ontology engineering methodologies. Although it was initially intended to be performed manually, recent years have shown that some degree of automation is possible. Building on these promising approaches, we introduce a novel, human-interpretable, rule-based method named ReqTagger, which can automatically extract candidates for ontology entities (classes or instances) and relations (data or object properties) from textual requirements. We compare ReqTagger to existing automatic methods on an evaluation benchmark consisting of over 550 requirements tagged with over 1700 entities and relations expected to be extracted. We discuss the quality of ReqTagger and provide details showing why it outperforms other methods. We also publish both the evaluation dataset and the implementation of ReqTagger. PubDate: Wed, 23 Feb 2022 00:00:00 GMT
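The rule-based idea can be sketched with a deliberately crude, hypothetical stand-in: mark determiner-preceded words as entity candidates and a small verb list as relation candidates. ReqTagger's actual rules are richer (POS- and pattern-based); this only shows the shape of such a tagger.

```python
import re

# Hypothetical miniature rule tagger, NOT the real ReqTagger rules:
# a word after a determiner is an entity candidate; a few verbs are
# relation candidates.
RELATION_VERBS = {"has", "have", "contains", "stores", "manages"}

def tag_requirement(text):
    tokens = re.findall(r"[A-Za-z]+", text)
    entities, relations = [], []
    for i, tok in enumerate(tokens):
        low = tok.lower()
        if low in RELATION_VERBS:
            relations.append(low)
        elif i > 0 and tokens[i - 1].lower() in {"a", "an", "the"}:
            entities.append(low)
    return entities, relations

ents, rels = tag_requirement("The system stores a record for the user")
print(ents)  # ['system', 'record', 'user']
print(rels)  # ['stores']
```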

Abstract: The bat algorithm is an effective swarm intelligence optimization algorithm widely used to solve continuous optimization problems. However, it still has some limitations in the search process and cannot solve discrete optimization problems directly. Therefore, this paper introduces an unordered pair and proposes an unordered pair bat algorithm (UPBA) to make it more suitable for solving symmetric discrete traveling salesman problems. To verify the effectiveness of this method, the algorithm has been tested on 23 symmetric benchmarks and its performance compared with that of other algorithms. The results show that the proposed UPBA significantly outperforms all the other alternatives in most cases. PubDate: Wed, 23 Feb 2022 00:00:00 GMT
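A much-simplified sketch of a discrete bat-style search for the symmetric TSP follows; each "bat" is a tour and movement is approximated by random position swaps, loosely echoing the unordered-pair idea. The actual UPBA update rules (loudness, pulse rate, etc.) are more involved, and the instance below is a hypothetical 4-city example.

```python
import random

def tour_length(tour, dist):
    """Length of a closed tour under a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def swap_search(dist, n_bats=5, iters=200, seed=0):
    """Population of tours; each move swaps an unordered pair of positions."""
    rng = random.Random(seed)
    n = len(dist)
    bats = [rng.sample(range(n), n) for _ in range(n_bats)]
    best = min(bats, key=lambda t: tour_length(t, dist))
    for _ in range(iters):
        for b in range(n_bats):
            cand = bats[b][:]
            i, j = rng.sample(range(n), 2)   # an unordered pair of positions
            cand[i], cand[j] = cand[j], cand[i]
            if tour_length(cand, dist) <= tour_length(bats[b], dist):
                bats[b] = cand
                if tour_length(cand, dist) < tour_length(best, dist):
                    best = cand
    return best, tour_length(best, dist)

# 4 cities on a unit square: the optimal tour has length 4.
dist = [[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]]
best, length = swap_search(dist)
print(length)
```

On this tiny instance the search reliably reaches the optimal length of 4.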

Abstract: A method of solving a three-person game defined on a product of staircase-function strategy spaces is presented. The spaces can be finite or continuous. The method is based on stacking equilibria of “short” three-person games, each defined on an interval where the pure strategy value is constant. In the case of finite three-person games, which are in fact trimatrix games, the equilibria are considered in general terms, so they can be in mixed strategies as well. The stack is any interval-wise combination (succession) of the respective equilibria of the “short” trimatrix games. Apart from the stack, there are no other equilibria in this “long” trimatrix game. An example is presented to show how the stacking is fulfilled for a case in which every “short” trimatrix game has a pure-strategy equilibrium. The presented method, which further “breaks” the initial “long” game defined on a product of staircase-function finite spaces, is far more tractable than a straightforward approach of solving the “long” trimatrix game directly would be. PubDate: Wed, 23 Feb 2022 00:00:00 GMT
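Each "short" trimatrix game can be checked for pure-strategy equilibria by brute force, as in this minimal sketch; the stacking method then combines one such equilibrium per interval of constancy. The 2x2x2 payoffs below are hypothetical: each player's payoff equals its own strategy index, so strategy 1 is dominant and the game has a unique pure equilibrium.

```python
from itertools import product

def pure_equilibria(payoffs, shape):
    """payoffs[(i, j, k)] = (u1, u2, u3); shape = strategy counts per player."""
    eqs = []
    for i, j, k in product(*(range(s) for s in shape)):
        u = payoffs[(i, j, k)]
        # A profile is an equilibrium iff no player can gain by deviating alone.
        best1 = all(payoffs[(a, j, k)][0] <= u[0] for a in range(shape[0]))
        best2 = all(payoffs[(i, b, k)][1] <= u[1] for b in range(shape[1]))
        best3 = all(payoffs[(i, j, c)][2] <= u[2] for c in range(shape[2]))
        if best1 and best2 and best3:
            eqs.append((i, j, k))
    return eqs

# Hypothetical 2x2x2 game: every player's payoff is its own strategy index.
payoffs = {(i, j, k): (i, j, k) for i, j, k in product(range(2), repeat=3)}
print(pure_equilibria(payoffs, (2, 2, 2)))  # [(1, 1, 1)]
```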

Abstract: Implementing a large genomic project is a demanding task, also from the computer science point of view. Besides collecting many genome samples and sequencing them, a huge amount of data must be processed at every stage of production and analysis. Efficient transfer and storage of the data is also an important issue. During the execution of such a project, there is a need to maintain work standards and control the quality of the results, which can be difficult if part of the work is carried out externally. Here, we describe our experience with such data quality analysis on a number of levels - from an obvious check of the quality of the results obtained, to examining the consistency of the data at various stages of their processing, to verifying, as far as possible, their compatibility with the data describing the sample. PubDate: Fri, 17 Dec 2021 00:00:00 GMT

Abstract: In this paper, we consider the problem of allocating resources among Decision Making Units (DMUs). Regarding the concept of overall (cost) efficiency, we consider three different scenarios and formulate three corresponding Resource Allocation (RA) models. In the first scenario, we assume that the overall efficiency of each unit remains unchanged. The second scenario is related to the case where none of the overall efficiency scores deteriorates. In the last scenario, we improve the overall efficiencies by a pre-determined percentage. We formulate Linear Programming problems to allocate resources in all scenarios. All three scenarios are illustrated through numerical and empirical examples. PubDate: Fri, 17 Dec 2021 00:00:00 GMT
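As a rough illustration of the overall (cost) efficiency concept, the sketch below uses an FDH-style simplification that compares each DMU only against observed peers rather than against the convex combinations used in the paper's LP formulations; the DMU data are hypothetical.

```python
# Hypothetical single-input/single-output DMUs.
dmus = {
    "A": {"cost": 10.0, "output": 5.0},
    "B": {"cost": 8.0,  "output": 5.0},
    "C": {"cost": 12.0, "output": 7.0},
}

def cost_efficiency(name, dmus):
    """Cheapest observed peer producing at least as much, over own cost."""
    d = dmus[name]
    min_cost = min(p["cost"] for p in dmus.values() if p["output"] >= d["output"])
    return min_cost / d["cost"]

print(cost_efficiency("A", dmus))  # 0.8 (B produces as much for cost 8)
print(cost_efficiency("B", dmus))  # 1.0
```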

Abstract: Conflict is recognized as a major barrier to socio-economic development. In conflict situations, most sectors, such as health, food, shelter and education, are adversely affected. The provision of education services to conflict-affected children saves them from becoming a lost generation and contributes to community building. Thus, we conducted this research to investigate the potential of a GIS (Geographic Information Systems) approach and risk-assessment-based multi-criteria decision making (MCDM) for the allocation of displaced dropped-out children to the most appropriate educational centres, taking into account multiple goals related to cost, distance, risk, etc. A two-stage approach was adopted, combining a risk assessment approach and a location-allocation approach. The risk assessment was carried out using GIS and F-AHP (Fuzzy Analytic Hierarchy Process) to determine the risk value of each candidate educational centre in the conflict area. In the location-allocation stage, a mathematical model was developed to allocate all demands to the chosen centres. All presented methods were applied to real case data provided by direct beneficiaries and stakeholders in the 26 sub-districts of the Idleb governorate, Syria. The computational results demonstrate that the proposed approaches have both practical and theoretical impact. PubDate: Fri, 17 Dec 2021 00:00:00 GMT
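The F-AHP weighting step can be sketched with Buckley's geometric-mean method on triangular fuzzy numbers (l, m, u); the two criteria and the pairwise judgment below are hypothetical, and the paper's full model further feeds such weights into the location-allocation stage.

```python
import math

# Pairwise comparison of 2 hypothetical criteria ("risk" vs "distance") as
# triangular fuzzy numbers: risk is judged roughly 3x as important.
M = [
    [(1, 1, 1), (2, 3, 4)],
    [(1 / 4, 1 / 3, 1 / 2), (1, 1, 1)],
]

def fuzzy_weights(M):
    n = len(M)
    # Component-wise fuzzy geometric mean of each row.
    g = [tuple(math.prod(M[i][j][k] for j in range(n)) ** (1 / n)
               for k in range(3)) for i in range(n)]
    # Normalize against the reversed component sums, then defuzzify by centroid.
    total = [sum(gi[k] for gi in g) for k in range(3)]
    w = [((gi[0] / total[2]) + (gi[1] / total[1]) + (gi[2] / total[0])) / 3
         for gi in g]
    s = sum(w)
    return [wi / s for wi in w]

w = fuzzy_weights(M)
print([round(x, 3) for x in w])  # roughly [0.742, 0.258]
```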

Abstract: A recommender system (RS) filters important information out of a large pool of dynamically generated information and produces recommendations according to the user’s past behavior, preferences, and interests. A recommender system is a subclass of information filtering systems that can anticipate the user’s needs before the user recognizes them. However, evaluation of a recommender system is an important factor, as it involves the user’s trust in the system. Various incompatible assessment methods are used for the evaluation of recommender systems, but the proper evaluation of a recommender system needs a particular objective set by the recommender system. This paper surveys and organizes the concepts and definitions of various metrics used to assess recommender systems. This survey also tries to uncover the relationship between the assessment methods and their categorization by type. PubDate: Fri, 17 Dec 2021 00:00:00 GMT
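Two of the most common metrics in such surveys, precision@k and recall@k, can be sketched in a few lines; the recommendation lists below are hypothetical.

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    top_k = recommended[:k]
    return len(set(top_k) & set(relevant)) / k

def recall_at_k(recommended, relevant, k):
    """Fraction of all relevant items that appear in the top-k."""
    top_k = recommended[:k]
    return len(set(top_k) & set(relevant)) / len(relevant)

recommended = ["i1", "i2", "i3", "i4", "i5"]
relevant = {"i2", "i5", "i9"}
print(precision_at_k(recommended, relevant, 3))  # 1/3
print(recall_at_k(recommended, relevant, 3))     # 1/3
```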

Abstract: Rough set theory is a mathematical approach to imperfect knowledge. The near set approach leads to partitions of ensembles of sample objects with measurable information content and to an approach to feature selection. In this paper, we apply the previous results of Bagirmaz [Appl. Algebra Engrg. Comm. Comput., 30(4) (2019) 285-29] and [Davvaz et al., Near approximations in rings. AAECC (2020). https://doi.org/10.1007/s00200-020-00421-3] to module theory. We introduce the notion of near approximations in a module over a ring, which extends the notion of rough approximations in a module presented in [B. Davvaz and M. Mahdavipour, Roughness in modules, Information Sciences, 176 (2006) 3658-3674]. Then we define the lower and upper near submodules and investigate their properties. PubDate: Fri, 17 Dec 2021 00:00:00 GMT
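The classical rough lower and upper approximations that these near-approximation notions generalize can be sketched in plain Python; the universe, partition and target set below are hypothetical toy data (plain sets rather than submodules).

```python
# A universe partitioned by an equivalence relation.
universe = {1, 2, 3, 4, 5, 6}
partition = [{1, 2}, {3, 4}, {5, 6}]   # equivalence classes
X = {1, 2, 3}                          # the set to approximate

# Lower approximation: union of classes fully contained in X.
lower = {x for c in partition if c <= X for x in c}
# Upper approximation: union of classes that intersect X.
upper = {x for c in partition if c & X for x in c}

print(sorted(lower))  # [1, 2]
print(sorted(upper))  # [1, 2, 3, 4]
```

The boundary region `upper - lower` ({3, 4} here) captures the imperfection of the knowledge encoded by the partition.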

Abstract: In this study, unit-speed Legendre curves are studied in the Sasakian 3-manifold. Firstly, differential equations characterizing the Legendre curves are obtained and the method used for the approximate solution is explained. Then, the approximate solution is found for one of the characterizations of the Legendre curve with the Legendre matrix collocation method. In addition, a sample application is given to make the method more understandable. Finally, with the help of these equations and the approximate solution, the geometric properties of this curve type are examined. PubDate: Fri, 17 Sep 2021 00:00:00 GMT

Abstract: In this study, the economic growth of the Organization for Economic Cooperation and Development (OECD) countries and of the countries in different income groups in the World Data Bank is investigated by using causality analyses and Generalized Estimating Equations (GEEs), which are an extension of Generalized Linear Models (GLMs). Eight different macro-economic, energy and environmental variables have been used: gross domestic product (GDP) (current US$), CO2 emission (metric tons per capita), electric power consumption (kWh per capita), energy use (kg of oil equivalent per capita), imports of goods and services (% of GDP), exports of goods and services (% of GDP), foreign direct investment (FDI) and population growth rate (annual %). The countries have been categorized according to their OECD memberships and income groups. The causes of the economic growth of the countries in each OECD-membership and income group have been determined by using the Toda-Yamamoto causality test. Furthermore, various GEE models have been established for the economic growth of these groups of countries in terms of the above variables. These GEE models have been compared in order to examine the contribution of the causality analyses to the statistical model establishment. The main finding of this study is that using causally-related variables in causality-based GEE models is much more appropriate than non-causality-based GEE models for determining the economic growth profiles of these countries. PubDate: Fri, 17 Sep 2021 00:00:00 GMT

Abstract: In this work, a matrix method based on Laguerre series is introduced to solve singularly perturbed second-order delay parabolic convection-diffusion and reaction-diffusion type problems involving boundary and initial conditions. The approximate solution of the problem is obtained by a truncated Laguerre series. Moreover, convergence analysis is presented and stability is explained. Besides, a test case is given and the error analysis is carried out in different norms in order to show the applicability of the method. PubDate: Fri, 17 Sep 2021 00:00:00 GMT
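The basic building block of such a method, evaluating a truncated Laguerre series via the three-term recurrence, can be sketched as follows; the coefficients here are hypothetical, whereas the paper determines them from the collocation conditions.

```python
def laguerre(n, x):
    """Laguerre polynomial L_n(x) by the standard three-term recurrence:
    (k+1) L_{k+1} = (2k+1-x) L_k - k L_{k-1}."""
    if n == 0:
        return 1.0
    L0, L1 = 1.0, 1.0 - x
    for k in range(1, n):
        L0, L1 = L1, ((2 * k + 1 - x) * L1 - k * L0) / (k + 1)
    return L1

def truncated_series(coeffs, x):
    """Evaluate sum_{n=0}^{N} a_n L_n(x) for coefficients [a_0, ..., a_N]."""
    return sum(a * laguerre(n, x) for n, a in enumerate(coeffs))

# L_2(x) = (x^2 - 4x + 2)/2, so L_2(1) = -0.5.
print(laguerre(2, 1.0))                 # -0.5
print(truncated_series([1.0, 0.5], 2.0))
```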

Abstract: The special issue “Numerical Techniques Meet with OR” of the Foundations of Computing and Decision Sciences consists of two parts which share the main theme of numerical techniques and their applications in multi-disciplinary areas. The first part of this special issue was already collected in FCDS Vol. 46, issue 1. In this second part of our special issue, the editorial presents numerical methods which can be used as alternative techniques for scientific computing and which lead to operational research applications in many fields for further investigation. PubDate: Fri, 17 Sep 2021 00:00:00 GMT

Abstract: In today’s society, decision making is becoming more important and complicated with increasing and complex data. Herein, for decision making using soft set theory, we first report the comparison of soft intervals (SIs) as the generalization of interval soft sets (ISSs). The results show that SIs are more effective and more general than ISSs for solving decision making problems, as they allow the ranking of parameters. The tabular form of SIs was used to construct a mathematical algorithm for making decisions in problems that involve uncertainties. Since these kinds of problems involve huge data, constructing new and effective methods for solving them, and transforming them into machine learning methods, is very important. An important advantage of the presented method is that it is more general than decision-making methods based on special situations of soft set theory: the presented method can be used for all of them, while the others only work in special cases. The structures obtained from the results of soft intervals were tested with examples. The designed algorithm was written in the widely used programming language C# and applied to problems that have been published in earlier studies. This is a pioneering study, in which this type of mathematical algorithm was converted into code and applied successfully. PubDate: Fri, 17 Sep 2021 00:00:00 GMT
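The classical tabular soft-set decision step that the soft-interval method generalizes can be sketched briefly: compute each object's choice value (its row sum in the 0/1 table) and pick the maximum. The objects and parameters below are hypothetical.

```python
objects = ["h1", "h2", "h3"]
parameters = ["cheap", "modern", "large"]
# A soft set (F, E): each parameter maps to the set of objects satisfying it.
F = {"cheap": {"h1", "h3"}, "modern": {"h2", "h3"}, "large": {"h3"}}

# Choice value of each object = number of parameters it satisfies.
choice = {o: sum(1 for p in parameters if o in F[p]) for o in objects}
best = max(choice, key=choice.get)

print(choice)  # {'h1': 1, 'h2': 1, 'h3': 3}
print(best)    # h3
```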

Abstract: The Laplace operator and harmonic curves have very important uses in various engineering sciences such as quantum mechanics, wave propagation, the diffusion equation for heat, and fluid flow. Additionally, the differential equation characterizations of harmonic curves play an important role in estimating the geometric properties of these curves. Hence, this paper proposes to compute some new differential equation characterizations of harmonic curves in Euclidean 3-space by using an alternative frame named the N-Bishop frame. Firstly, we investigate some new differential equation characterizations of space curves with respect to the N-Bishop frame. Secondly, we introduce some new space curves which have harmonic and harmonic 1-type vectors with respect to the alternative N-Bishop frame. Finally, we compute new differential equation characterizations using the N-Bishop Darboux and normal Darboux vectors. Using these differential equation characterizations, we prove under which conditions the curve is a helix. PubDate: Fri, 17 Sep 2021 00:00:00 GMT

Abstract: In this article, we present an efficient method for solving Abel’s integral equations. This important equation models many problems in the literature. Our proposed method is based on first taking the truncated Taylor expansions of the solution function and the fractional derivatives, then substituting their matrix forms into the equation. The main idea behind this technique is that it reduces such problems to solving a system of algebraic equations, thus greatly simplifying the problem. Numerical examples are used to illustrate the precision and effectiveness of the proposed method. Figures and tables are presented to illustrate the solutions. Also, all numerical examples are solved with the aid of Maple. PubDate: Fri, 17 Sep 2021 00:00:00 GMT

Abstract: We consider a multicriteria decision analysis (MCDA) problem in which the importance of the criteria, and the evaluations of the alternatives with respect to the criteria, are expressed on a qualitative ordinal scale. Using the extreme-point principle of Data Envelopment Analysis (DEA), we develop a two-parameter method for obtaining overall ratings of the alternatives when preferences and evaluations are made on an ordinal scale. We assume no parametric setup other than the two parameters that reflect minimum intensities of discriminating among rank positions: one parameter for the alternatives’ ranking and one for the criteria ranking. These parameters are bounded by the ordinal input data, and they imply a universal tie among the alternatives when both parameters are selected to be zero. We describe the model, discuss its theoretical underpinning, and demonstrate its application. PubDate: Thu, 17 Jun 2021 00:00:00 GMT

Abstract: A problem of solving a continuous noncooperative game is considered, where the players’ pure strategies are sinusoidal functions of time. In order to reduce issues of practical computability, certainty, and realizability, a method of solving the game approximately is presented. The method is based on mapping the product of the functional spaces into a hyperparallelepiped of the players’ phase lags. The hyperparallelepiped is then substituted with a hypercubic grid by uniform sampling. Thus, the initial game is mapped into a finite one, in which the players’ payoff matrices are hypercubic. The approximation is an iterative procedure. The number of intervals along the player’s phase lag is gradually increased, and the respective finite games are solved until an acceptable solution of the finite game becomes sufficiently close to the same-type solutions at the preceding iterations. The sufficient closeness implies that the players’ strategies at the succeeding iterations should be no farther from each other than at the preceding iterations. In a more feasible form, it implies that the respective distance polylines are required to be decreasing on average once they are smoothed with respective polynomials of degree 2, where the parabolas must have positive coefficients of the squared variable. PubDate: Thu, 17 Jun 2021 00:00:00 GMT

Abstract: The Depth of Inheritance Tree (DIT) metric, along with other metrics, is used for estimating some quality indicators of software systems, including open-source applications (apps). In cases involving multiple inheritance, at the class level, the DIT metric is the maximum length from the node to the root of the tree. At the application (app) level, this metric is defined as the corresponding average length per class. It is known that, at the class level, a DIT value between 2 and 5 is good. At the app level, similar recommended values for the DIT metric are not known. To find the recommended values for the DIT mean of an app, we propose to use confidence and prediction intervals. A DIT mean value of an app within the confidence interval is good, since this interval indicates how reliable the estimate is for the DIT mean values of all apps used for estimating the interval. A DIT mean value higher than the upper bound of the prediction interval may indicate that some classes have a large number of inheritance levels from the top of the object hierarchy, which constitutes greater app design complexity as more classes are involved. We have estimated the confidence and prediction intervals of the DIT mean using normalizing transformations for a data sample from 101 open-source apps developed in Java and hosted on GitHub, for the 0.05 significance level. PubDate: Thu, 17 Jun 2021 00:00:00 GMT
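The confidence/prediction-interval idea can be sketched with a normal approximation; note the paper applies normalizing transformations first and uses its real 101-app sample, whereas the per-app DIT means below are hypothetical.

```python
import math
import statistics

# Hypothetical per-app DIT means (NOT the paper's 101-app sample).
dit_means = [1.8, 2.1, 2.4, 1.9, 2.6, 2.2, 2.0, 2.3, 2.5, 1.7]
n = len(dit_means)
m = statistics.mean(dit_means)
s = statistics.stdev(dit_means)
z = 1.96  # ~0.05 significance, normal approximation

# Confidence interval: where the mean DIT over all such apps likely lies.
conf = (m - z * s / math.sqrt(n), m + z * s / math.sqrt(n))
# Prediction interval: where a single new app's DIT mean likely lies (wider).
pred = (m - z * s * math.sqrt(1 + 1 / n), m + z * s * math.sqrt(1 + 1 / n))

print(conf)
print(pred)
```

An app whose DIT mean exceeds `pred[1]` would be the flagged case the abstract describes.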

Abstract: In software, code is the only part that remains up to date, which shows how important code is. Code readability is the property of code that makes it readable and understandable for professionals. The readability of code has been a great concern for programmers and other technical people in a development team because it can have a great influence on software maintenance. A lot of research has been done to measure the influence of program constructs on code readability, but none has placed the highly influential constructs together to predict the readability of a code snippet. In this article, we propose a novel framework using statistical modeling that extracts important features from the code that can help in estimating its readability. Besides that, using multiple correlation analysis, our proposed approach can measure dependencies among different program constructs. In addition, a multiple regression equation is proposed to predict code readability. We have automated the proposals in a tool that can perform the aforementioned estimations on the input code. Using this tool, we have conducted various experiments. The results show that the calculated estimations match the original values, which demonstrates the effectiveness of our proposed work. Finally, the results of the experiments are analyzed through statistical analysis in the SPSS tool to show their significance. PubDate: Thu, 17 Jun 2021 00:00:00 GMT
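Fitting a multiple regression equation of the kind proposed can be sketched via the normal equations; the predictors (lines of code, nesting depth), the data, and the coefficients below are hypothetical, constructed to be exactly linear so the fit recovers them.

```python
def ols(X, y):
    """Least squares for y ~ b0 + b1*x1 + ... via the normal equations,
    solved by Gaussian elimination with partial pivoting."""
    rows = [[1.0] + list(r) for r in X]   # prepend intercept column
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for i in reversed(range(k)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef

# (lines of code, nesting depth) -> readability score, exactly linear by design.
X = [(10, 1), (20, 2), (30, 1), (40, 3), (50, 2)]
y = [5 - 0.05 * l - 0.5 * d for l, d in X]
b0, b1, b2 = ols(X, y)
print(round(b0, 3), round(b1, 3), round(b2, 3))  # 5.0 -0.05 -0.5
```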