Abstract: All classification problems in artificial intelligence can be summed up as the problem of making implicit factors explicit, and this problem should be solved by linear programming. The simplex algorithm for linear programming is simple and fast, but it is not a polynomial algorithm. Whether it can be improved into a “strongly polynomial algorithm”, that is, an algorithm whose number of operations is in every case a polynomial function of the number of equations and variables, is a trans-century international mathematical problem that has remained unsolved for decades. This question, which concerns the mathematical boundaries of AI development, is crucial. The approach of solving programming problems from the demands and strengths of artificial intelligence is called factor programming. This paper introduces the basic ideas of explicit-implicit factor programming and factor programming, provides programs for some of the algorithms, and proves a theorem on triangular matrix optimization. PubDate: 2022-05-13
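
As a minimal illustration of the kind of linear program the simplex method solves (not the paper's factor programming, and using SciPy's `linprog`, which by default runs the HiGHS solvers rather than the classical simplex tableau), consider maximizing x + 2y subject to two linear constraints:

```python
from scipy.optimize import linprog

# Maximize x + 2*y subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
# linprog minimizes, so the objective is negated.
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0],
        [1.0, 0.0]]
b_ub = [4.0, 2.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimum at (0, 4) with objective value 8
```

The open question in the abstract is precisely whether such problems admit an algorithm whose operation count is polynomial in the number of constraints and variables in every case, not merely on average.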

Abstract: Artificial intelligence technology has made important progress in machine learning and in problem-solving with relatively well-determined boundary conditions. However, the open problems with uncertain boundary conditions that are more common in management practice still depend on the experience mastered by individuals. The combination of Extenics, factor space, and knowledge management can potentially solve this kind of problem intelligently to a certain extent. Based on Extenics and factor space theory, this paper studies the Extension model of open problems, explores the intelligent expansion mechanism of factor knowledge in the big data environment, and constructs a double integration of the multi-granularity factor knowledge space and expert experiential knowledge. We try to make Extenics and factor space theory complement each other in the field of problem solving, reveal the knowledge expansion mechanism of open problem solving in the big data environment, and provide a novel theoretical perspective and methodological basis for knowledge-based intelligent services built on factor mining. This paper also provides theoretical research directions for building a new generation of problem-oriented factor knowledge bases and promotes the deep integration of knowledge management and artificial intelligence, leading to a new direction of knowledge engineering based on factor space and Extenics. PubDate: 2022-05-11

Abstract: In this paper, I develop a multi-item production inventory model for non-deteriorating items with a constant demand rate under a limitation on set-up cost. Production cost and set-up cost are among the most vital problems in inventory systems in the international marketplace. Here the production cost depends on demand as well as on population, and the set-up cost depends on the average inventory level. Holding cost is one of the most challenging issues in the business world; in order to reduce it, the holding cost function is taken to depend on the number of people. Due to uncertainty, all the cost parameters are taken as generalized triangular fuzzy numbers. The multi-objective fuzzy inventory model is solved by several techniques: the fuzzy programming technique with a hyperbolic membership function, the fuzzy non-linear programming technique, and the fuzzy additive goal programming technique. A numerical example is given to illustrate the inventory model, and sensitivity analysis and graphical representations are shown to illustrate its realism. PubDate: 2022-05-10
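
A common form of the hyperbolic membership function used in fuzzy programming (the first technique named above) maps a minimization objective's value to a satisfaction degree in (0, 1). The sketch below assumes the widely used parameterisation with steepness 6/(Z_worst − Z_best); the paper's exact form may differ:

```python
import numpy as np

def hyperbolic_membership(z, z_best, z_worst):
    """Hyperbolic membership for a minimization objective:
    near 1 at the best (lowest) cost z_best, near 0 at the worst
    cost z_worst, and exactly 0.5 at their midpoint."""
    alpha = 6.0 / (z_worst - z_best)        # common steepness choice
    mid = 0.5 * (z_best + z_worst)
    return 0.5 + 0.5 * np.tanh(alpha * (mid - z))

mid_mu = hyperbolic_membership(50.0, 0.0, 100.0)
print(mid_mu)  # 0.5 at the midpoint of the aspiration range
```

In a fuzzy programming formulation, each objective gets such a membership and the solver maximizes the minimum (or a weighted sum) of the memberships.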

Abstract: In the contemporary era, collaborative computing is the widely used model for exploiting geographically distributed, heterogeneous computing resources. Mobile Cloud Computing (MCC) offers an infrastructure that helps in offloading storage and computation to a public cloud, which has several advantages. However, in the context of modern Internet of Things applications, it is essential to exploit the idle resources of mobile devices as well. This is a challenging problem, since mobile devices are resource-constrained and mobile. Many existing MCC solutions concentrate on offloading tasks to external mobile devices. In this paper, we investigate the possibility of using idle resources in mobile devices besides offloading tasks to the cloud. We propose a novel algorithm, Delay-aware Energy-Efficient Task Scheduling, which analyses locally available idle resources and schedules tasks over heterogeneous cores in mobile devices as well as in the cloud. In the process, it meets the strict deadlines associated with tasks and promotes energy conservation. A prototype application was built to simulate and evaluate the proposed algorithm, and the experimental results reveal that it outperforms the existing baseline algorithms. PubDate: 2022-05-08
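
This is not the paper's algorithm, but the general idea of delay-aware, energy-efficient placement can be sketched as a greedy earliest-deadline-first scheduler that, for each task, picks the cheapest-energy resource that still meets the deadline. All speed and power numbers below are made up for illustration:

```python
# Toy sketch: schedule tasks (name, work units, deadline) across heterogeneous
# resources in earliest-deadline-first order, picking the lowest-energy
# resource that still meets the deadline. Speeds/powers are invented.
resources = {
    "little_core": {"speed": 1.0, "power": 1.0, "free_at": 0.0},
    "big_core":    {"speed": 3.0, "power": 6.0, "free_at": 0.0},
    "cloud":       {"speed": 8.0, "power": 10.0, "free_at": 0.0},  # power folds in radio cost
}

def schedule(tasks):
    plan = []
    for name, work, deadline in sorted(tasks, key=lambda t: t[2]):  # EDF order
        feasible = []
        for rname, r in resources.items():
            finish = r["free_at"] + work / r["speed"]
            if finish <= deadline:
                energy = r["power"] * work / r["speed"]
                feasible.append((energy, finish, rname))
        if not feasible:
            plan.append((name, None))            # deadline cannot be met
            continue
        energy, finish, rname = min(feasible)    # cheapest feasible resource
        resources[rname]["free_at"] = finish
        plan.append((name, rname))
    return plan

plan = schedule([("t1", 2.0, 3.0), ("t2", 6.0, 2.0), ("t3", 1.0, 10.0)])
print(plan)  # [('t2', 'cloud'), ('t1', 'little_core'), ('t3', 'little_core')]
```

The urgent heavy task is offloaded to the cloud, while the laxer tasks stay on the low-power local core, which mirrors the trade-off the abstract describes.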

Abstract: Geographical information systems are widely used for analyzing different types of data. Notions and results from topology have been applied in this connection, known as spatial topological relations. In this article, we study the different layers of geographical data and their intersection property, separation axioms on spatial topological spaces, and spatial analysis. PubDate: 2022-05-07

Abstract: The incursion of COVID-19 into the global space has constituted both a public health emergency and an economic crisis, so there is a need to investigate how the uncertainty inherent in the pandemic was transmitted to stock markets. On this basis, this study investigates the dynamic interaction of COVID-19 incidence and stock market performance in Nigeria. The study uses daily time series data between 2/4/2020 and 8/8/2020 on the All Share Index (ASI), confirmed COVID-19 cases, the Nigerian borrowing rate, and the exchange rate. Following careful econometric investigation of the data, a vector autoregressive (VAR) model was adopted for estimation, given the dynamic nature of the study. The estimation results show that the lagged value of COVID-19 infections exerts a negative impact on the ASI; specifically, a unit increase in COVID-19 infections causes the ASI to fall by 0.066%. Similarly, the lagged value of the ASI exerts a negative impact on COVID-19 cases: a unit increase in the ASI causes COVID-19 cases to fall by 0.02%, though this effect is not statistically significant. The study concludes that COVID-19 has a negative effect on Nigerian stock market performance; therefore, beyond small and medium enterprises, the government may need to extend the stimulus package to publicly quoted firms as part of the efforts to bring the economy back on track. PubDate: 2022-05-05
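
The core of a VAR estimation is an ordinary-least-squares regression of each series on the lagged values of all series. A minimal VAR(1) sketch on synthetic data (not the Nigerian series used in the paper) looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t (toy data).
A_true = np.array([[0.5, 0.1],
                   [0.2, 0.4]])
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + 0.1 * rng.standard_normal(2)

# OLS estimate of the coefficient matrix: regress y_t on y_{t-1}.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.solve(X.T @ X, X.T @ Y).T
print(np.round(A_hat, 2))  # close to A_true
```

Impulse responses and the impact coefficients reported in the abstract are then derived from the estimated coefficient matrix and the residual covariance.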

Abstract: The worldwide spread of the novel coronavirus originating from Wuhan, China led to the ongoing pandemic known as COVID-19. Being a contagion, the disease transmitted rapidly in India through people with travel histories to the affected countries and through their contacts who tested positive. Millions of people across all states and union territories (UTs) were affected, leading to serious respiratory illness and deaths. In the present study, two unsupervised clustering algorithms, k-means clustering and hierarchical agglomerative clustering, are applied to a COVID-19 dataset in order to group the Indian states/UTs based on the pandemic's effect and the vaccination program over the period from March 2020 to early June 2021. The aim of the study is to observe the plight of each state and UT of India combating the novel coronavirus infection and to monitor their vaccination status. The study will be helpful to the government and to the frontline workers striving to restrict the transmission of the virus in India, and its results will provide a source of information for future research on the COVID-19 pandemic in India. PubDate: 2022-05-04
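
The first of the two algorithms, k-means (Lloyd's iteration), can be sketched in a few lines of NumPy on toy two-feature "state profiles" (the real study uses actual case and vaccination data; these numbers are synthetic):

```python
import numpy as np

rng = np.random.default_rng(42)

# Two well-separated synthetic groups of "state profiles"
# (e.g. cases per million vs. vaccination rate, invented units).
X = np.vstack([rng.normal([1.0, 1.0], 0.1, size=(20, 2)),
               rng.normal([5.0, 5.0], 0.1, size=(20, 2))])

def kmeans(X, k, iters=50):
    centers = X[[0, 20]].copy()   # deterministic init: one seed per region
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)  # point-center distances
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, 2)
print(labels)  # first 20 points in one cluster, last 20 in the other
```

Hierarchical agglomerative clustering would instead repeatedly merge the two closest clusters, yielding a dendrogram rather than a flat partition.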

Abstract: Power load forecasting plays an important role in the economical and safe operation of modern power systems. However, characteristics of the power load such as non-stationarity, nonlinearity, and multiple quasi-periodicities make power load forecasting a challenging task. The present work develops a multi-model ensemble forecasting strategy using prediction phase space construction, a similar-scenario-improved support vector machine, and a variable-weighted ensemble method, based on Factor Space Theory. Firstly, the concept of a “Prediction Scenario” is proposed to describe the “internal historical facts in time series form” and the “external space–time environment composed of external influence factors” of power load forecasting. Next, the candidate input features are selected based on correlation analysis between the power load to be predicted, its historical load, and external influence factors. Then, based on Factor Space Theory, the feature description of the Prediction Scenarios is studied and a series of prediction phase spaces are constructed by randomly selecting some strongly correlated features. An improved support vector machine based on similar historical scenario screening is proposed to set up the unit prediction sub-models in each corresponding prediction phase space. The performance of these models is tested by simulation experiments, and the variable weight of each model is designed based on the results. Finally, the power loads are forecasted by a variable-weighted ensemble of multiple models in different prediction phase spaces. The results of a mid-Atlantic region load forecasting analysis suggest that the proposed method performs better in almost all cases than Support Vector Machine, Recurrent Neural Network, Self-partitioning Local Neuro-Fuzzy, Random Forest, Ensemble Neuro-Fuzzy, and other state-of-the-art forecasting methods. PubDate: 2022-04-30
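
One simple way to realize a variable-weighted ensemble (the paper's weighting scheme may differ; inverse-validation-error weights are a common baseline, and the numbers below are invented) is to weight each sub-model inversely to its validation error:

```python
import numpy as np

# Hypothetical validation MSEs of three sub-models and their predictions
# for the next load value (made-up numbers).
val_mse = np.array([4.0, 1.0, 2.0])
preds   = np.array([102.0, 98.0, 100.0])

# Variable weights: inversely proportional to validation error, normalized.
w = (1.0 / val_mse) / np.sum(1.0 / val_mse)
ensemble = np.dot(w, preds)
print(np.round(w, 3), ensemble)  # best-performing model gets the largest weight
```

Because the weights are recomputed from recent performance, the ensemble adapts as the relative accuracy of the sub-models changes across prediction scenarios.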

Abstract: A constant-stress partially accelerated life test (CSPALT) is the most widespread type, in which each test unit is subjected to only one chosen stress level until its failure or the termination of the experiment, whichever occurs first. This paper presents the CSPALT with Type-I and Type-II censoring schemes in the presence of competing failure causes when the lifetime of the test units follows the two-parameter Fréchet distribution. The maximum likelihood method is used to estimate the parameters of the failure distribution, and the Fisher information matrix and variance–covariance matrix are also constructed. Furthermore, a simulation study is conducted to investigate the performance of the theoretical estimators of the parameters. PubDate: 2022-04-27
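
The building block of such a study, maximum likelihood for the two-parameter Fréchet distribution, can be sketched on complete (uncensored) simulated data; the paper's full likelihood additionally handles censoring and competing risks:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Simulate two-parameter Frechet(shape alpha=2, scale s=1) lifetimes by
# inverse CDF: F(x) = exp(-(x/s)**(-alpha))  =>  x = s*(-log U)**(-1/alpha).
alpha_true, s_true = 2.0, 1.0
u = rng.uniform(size=2000)
x = s_true * (-np.log(u)) ** (-1.0 / alpha_true)

def neg_loglik(theta):
    a, s = np.exp(theta)            # optimize over logs to keep a, s > 0
    z = x / s
    return -np.sum(np.log(a) - np.log(s) - (a + 1) * np.log(z) - z ** (-a))

res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
alpha_hat, s_hat = np.exp(res.x)
print(alpha_hat, s_hat)  # close to (2, 1)
```

The observed information (negative Hessian of the log-likelihood at the MLE) then yields the variance–covariance matrix used for interval estimates.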

Abstract: The geometric process (GP) is used to conduct a statistical analysis of constant-stress accelerated life testing with type-I censored data under the Generalized Exponential failure distribution. The lifespans of the test items form a GP as the stress level increases. The maximum likelihood technique is used to estimate the parameters. To determine the asymptotic variance of the maximum likelihood estimators, the Fisher information matrix is constructed; this asymptotic variance is then used to provide asymptotic interval estimates for the distribution parameters. Finally, a simulation approach is used to demonstrate the statistical properties of the parameters and their confidence ranges. PubDate: 2022-04-20
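
Under the standard definition, {X_n} is a geometric process with ratio a > 0 if {a^(n-1) X_n} are i.i.d., so mean lifetimes shrink geometrically as the stress level rises. A quick simulation illustrates this (plain exponential draws stand in for the Generalized Exponential lifetimes; the scale 10 and ratio 1.5 are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Geometric process with ratio a > 1: X_n = Z_n / a**(n-1), Z_n i.i.d.
a = 1.5
means = []
for n in range(1, 4):                        # stress levels 1..3
    z = rng.exponential(scale=10.0, size=200000)
    means.append(float(np.mean(z / a ** (n - 1))))
print([round(m, 2) for m in means])          # roughly 10, 10/1.5, 10/1.5**2
```

Estimating the ratio a alongside the distribution parameters is exactly what the maximum likelihood step in the paper does.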

Abstract: The outbreak of the novel coronavirus caused everyone, globally, to face various setbacks. One sector that suffered several shocks worldwide was the financial sector, namely the stock and commodity markets. The two markets had different and unprecedented reactions in different corners of the world, owing to several factors such as government intervention, welfare policies, and investor behaviour. This paper discusses the topic in further detail, with examples and studies from all around the planet. The main objective is to expand the pre-existing knowledge on how different regions reacted differently to the pandemic and the policies it brought along. The stock market, in general, faced an adverse shock that led to low investment and cautious foreign investment. The commodity market saw the prices of all commodities on an upward trend, except for gold, which followed a downward trend. Moreover, this paper also discusses the future scope and the challenges that the markets might face further down the line. PubDate: 2022-04-19

Abstract: Data intelligence is the core task of the information revolution as it enters the Internet era. It brings opportunities, but it also exposes human civilization to risks. Data drowns out ideas, and data is treated as supreme: people regard producing data as the goal of the digital economy, hoard data, and turn data into an immortal holy object, which is very harmful. This paper insists on guiding data with thinking and puts forward a blueprint for constructing a huge knowledge base with a factor pedigree and factor encoding. The factor pedigree is an embedded high-level knowledge graph, and factor encoding is a scheme for organizing concepts according to their connotation. Together they can not only prevent the proliferation of data but are also of great significance for natural language understanding. PubDate: 2022-04-19

Abstract: In this study, a one-parameter discrete probability distribution is proposed and studied, named the “Poisson Moment Exponential distribution”. Mathematical properties of the proposed distribution are derived and discussed. For parameter estimation, seven different methods are used: maximum likelihood, maximum product spacing, Anderson–Darling, Cramér–von Mises, least squares, weighted least squares, and right-tailed Anderson–Darling. The behavior of these estimators is assessed using a Monte Carlo simulation study. Four real datasets from different fields (failure times, slow-paced students' marks, epileptic seizure counts, and European corn borer counts) are used to show the flexibility of the proposed distribution, which analyzes these datasets efficiently. PubDate: 2022-04-19
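
Of the seven estimation methods listed, maximum product spacing (MPS) is the least familiar: it maximizes the product of the CDF spacings between consecutive order statistics. A sketch for the exponential distribution (not the proposed distribution, whose pmf is not reproduced in the abstract) shows the mechanics:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
x = np.sort(rng.exponential(scale=2.0, size=1000))   # true rate = 0.5

def neg_log_spacing(rate):
    # Spacings D_i = F(x_(i)) - F(x_(i-1)) with F(x) = 1 - exp(-rate*x),
    # padded with F = 0 below the sample and F = 1 above it;
    # MPS maximizes sum(log D_i).
    cdf = np.concatenate(([0.0], 1.0 - np.exp(-rate * x), [1.0]))
    d = np.clip(np.diff(cdf), 1e-300, None)          # guard against zero spacings
    return -np.sum(np.log(d))

res = minimize_scalar(neg_log_spacing, bounds=(1e-6, 10.0), method="bounded")
print(res.x)   # close to the true rate 0.5
```

The distance-based methods in the list (Anderson–Darling, Cramér–von Mises) similarly minimize a discrepancy between the empirical and fitted CDFs.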

Abstract: In recent years, deep neural networks (DNNs) have attracted extensive attention due to their excellent performance in many fields of vision and speech recognition. With the increasing scale of the tasks to be solved, the networks used are becoming wider and deeper, requiring millions or even billions of parameters. Deep and wide networks with many parameters bring problems of memory requirements, computing overhead, and overfitting, which seriously hinder the application of DNNs in practice. A natural idea, therefore, is to train sparse networks with fewer parameters and floating-point operations while maintaining considerable performance. In the past few years, much research has been done in the field of neural network compression, including sparsity-inducing methods, quantization, knowledge distillation, and so on; the sparsity-inducing methods can be roughly divided into pruning, dropout, and sparse-regularization-based optimization. In this paper, we briefly review and analyze the sparse regularization optimization methods. For the models and optimization methods of sparse-regularization-based compression, we discuss their respective advantages and disadvantages. Finally, we provide some insights and discussion on how to make sparse regularization fit within the compression framework. PubDate: 2022-04-16
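
The canonical sparse-regularization optimizer is ISTA: a gradient step on the smooth loss followed by soft-thresholding, the proximal operator of the l1 penalty, which drives small weights to exactly zero. A minimal sketch on a sparse linear model (a linear stand-in for a network layer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse linear model: only 3 of 10 coefficients are nonzero.
n, p = 200, 10
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[[0, 3, 7]] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(n)

# ISTA: gradient step on the squared loss, then soft-thresholding.
lam = 0.05
step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # 1/L, L = Lipschitz const of grad
w = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ w - y) / n
    z = w - step * grad
    w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
print(np.round(w, 2))  # zeros recovered exactly where w_true is zero
```

In network compression the same idea is applied to weight tensors (often in group form, so whole channels or filters are zeroed and can be pruned away).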

Abstract: Accelerated life testing has become the primary method for rapidly assessing product reliability, and designing efficient test plans is a vital step in ensuring that accelerated life tests can assess product reliability properly, quickly, and economically. These tests subject the sample to high stress levels; then, based on the stress-life relationship, the product life at the normal stress level can be calculated by extrapolating the life information from the high-stress levels. The purpose of this study is to investigate the estimation of failure time data for step-stress partially accelerated life testing using multiply censored data. The lifetime distribution of the test components is assumed to follow the Fréchet distribution. The distribution parameters and the tampering coefficient are estimated using maximum likelihood point and interval estimation. A Monte Carlo simulation study is used to evaluate and compare the performance of the model parameter estimators under multiply censored data in terms of biases and root mean squared errors. PubDate: 2022-04-12
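
Step-stress PALT is commonly formulated through the tampered-random-variable model: a unit lives its use-stress life until the stress-change time, after which the remaining life is compressed by the tampering coefficient. The sketch below uses exponential lifetimes and invented values of tau and beta purely to illustrate the transformation (the paper assumes Fréchet lifetimes):

```python
import numpy as np

rng = np.random.default_rng(9)

# Tampered-random-variable model for step-stress PALT:
# if the unit survives the stress-change time tau, its remaining
# life is divided by the tampering coefficient beta > 1.
tau, beta = 1.0, 2.0
T = rng.exponential(scale=2.0, size=100000)       # illustrative use-stress lives
X = np.where(T <= tau, T, tau + (T - tau) / beta) # observed accelerated lives
print(X.mean())   # shorter on average than the use-stress mean of 2
```

The likelihood in the paper is built from exactly this mapping, with the censoring scheme determining which X values are observed.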

Abstract: The determining-degree-based classification methods, new types of classification methods built in the framework of factor space theory, mainly include factorial analysis, improved factorial analysis, and set subtraction and rotation calculation (S&R). This paper first compares the three methods to present a comprehensive understanding of them, and argues that whether dominant factors are reused and whether synthetic partitioning is used are the main differences between factorial analysis and S&R. Furthermore, the paper introduces S&R definitively and concisely through an example. Based on this investigation, we propose a novel method for classification problems with interval-valued attributes that uses a determining degree to discretize interval values and takes S&R as one of its steps. Experimental results show that the method is effective and reasonable. PubDate: 2022-04-09

Abstract: This study identifies some of the problems of collecting ground truth data for supervised classification and the presence of mixed pixels. The mixed pixel problem is one of the main factors affecting classification precision for remotely sensed images, and mixed pixels are usually the biggest reason for degraded success in image classification and object recognition. In this study, a fuzzy supervised classification method in which geographical information is represented as fuzzy sets is used to overcome the mixed pixel problem. Partial membership of the mixed pixels allows component cover classes to be identified and more accurate statistical parameters to be generated. As a result, the error rates are reduced compared with conventional classification methods such as the linear discriminant function (LDF) and quadratic discriminant function (QDF). The study used real satellite image data of terrain in western Uttar Pradesh, India. PubDate: 2022-04-06
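
Partial membership for mixed pixels is the hallmark of fuzzy clustering; fuzzy c-means (shown below on synthetic two-band data, not the paper's supervised method or its satellite imagery) makes the idea concrete: a pixel lying between two spectral classes receives roughly half membership in each:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two synthetic "land cover" classes in a 2-band feature space, plus one
# pixel lying between them (a mixed pixel).
X = np.vstack([rng.normal([0, 0], 0.2, (30, 2)),
               rng.normal([4, 4], 0.2, (30, 2)),
               [[2.0, 2.0]]])                    # the mixed pixel

def fuzzy_cmeans(X, c=2, m=2.0, iters=100):
    centers = X[[0, 30]].copy()   # init one center in each region (illustration)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))  -- fuzzy memberships
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # membership-weighted means
    return U, centers

U, centers = fuzzy_cmeans(X)
print(np.round(U[-1], 2))   # mixed pixel: partial membership in both classes
```

Pure pixels get near-crisp memberships, while the mixed pixel's split membership is exactly the information a hard classifier like LDF or QDF throws away.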

Abstract: In large-scale observational data with a hierarchical structure, both clusters and interventions often have more than two levels, and popular methods in the binary treatment literature do not naturally extend to the hierarchical multilevel treatment case. For example, most K-12 schools and universities moved to an unprecedented hybrid learning model during the COVID-19 pandemic, with learning modes including hybrid and fully remote learning, while students were clustered within classes and school regions. It is challenging to evaluate the effectiveness of multilevel treatments on learning outcomes in such hierarchically structured data. In this paper, we study a covariates matching method and develop a generalized propensity score matching method to reduce the bias in estimating the intervention effect. We also propose simple algorithms to assess the covariate balance for each approach. We examine the finite sample performance of the methods via simulation studies and apply them to analyze the effectiveness of learning modes during the COVID-19 pandemic. PubDate: 2022-04-04
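
To see the basic mechanics that the paper generalizes, here is a binary-treatment simplification (not the paper's multilevel, hierarchical method): fit a logistic propensity model by gradient descent, then match each treated unit to the control with the closest estimated score. Data and coefficients are synthetic:

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic observational data: treatment probability depends on covariates.
n = 400
x = rng.standard_normal((n, 2))
logits = 0.8 * x[:, 0] - 0.5 * x[:, 1]
t = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(float)

# Logistic propensity model fitted by gradient descent.
Xd = np.column_stack([np.ones(n), x])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xd @ w))
    w -= 0.1 * Xd.T @ (p - t) / n            # gradient of the logistic loss

score = 1 / (1 + np.exp(-Xd @ w))
treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
# Nearest-neighbor matching on the propensity score.
matches = control[np.abs(score[treated][:, None] - score[control][None, :]).argmin(axis=1)]
print(score[treated[:3]], score[matches[:3]])  # matched pairs have similar scores
```

With more than two treatment levels, the logistic model becomes multinomial and matching is done on the vector of generalized propensity scores, which is the setting the paper addresses.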

Abstract: This paper considers statistical inference in competing risk models with Akshaya sub-distributions based on the type-II censoring scheme, where there are assumed to be k causes of failure. Maximum likelihood and Bayesian procedures are applied for point and interval estimation of all model parameters. The Gibbs within Metropolis–Hastings sampler is applied using the Markov chain Monte Carlo (MCMC) technique to obtain the Bayes estimates of the unknown parameters, their credible intervals (CRIs), and the relative risks. Furthermore, the survivor functions for the subsystems and the overall system are evaluated. Finally, a real-life data set, representing the times (in years) from HIV infection to AIDS and death in 329 men who had sex with men (MSM), is considered as an application of the proposed methods. PubDate: 2022-04-02
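
The Metropolis–Hastings step at the core of such a sampler can be illustrated on a toy conjugate problem (an exponential rate with a gamma prior, not the Akshaya model), where the exact posterior is known and the chain can be checked against it:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy Metropolis-Hastings: posterior of an exponential rate with a
# Gamma(a0, b0) prior, so the exact posterior Gamma(a0 + n, b0 + sum(x))
# is available as a check.
a0, b0 = 2.0, 1.0
x = rng.exponential(scale=1 / 1.5, size=100)     # true rate 1.5
n, s = len(x), x.sum()

def log_post(lam):
    if lam <= 0:
        return -np.inf                            # prior support is lam > 0
    return (a0 + n - 1) * np.log(lam) - (b0 + s) * lam

lam, chain = 1.0, []
for _ in range(30000):
    prop = lam + 0.3 * rng.standard_normal()      # symmetric random walk
    if np.log(rng.uniform()) < log_post(prop) - log_post(lam):
        lam = prop                                # accept
    chain.append(lam)
post_mean = np.mean(chain[5000:])                 # discard burn-in
print(post_mean, (a0 + n) / (b0 + s))             # MCMC vs exact posterior mean
```

In the paper's setting, a step like this updates each non-conjugate parameter inside an outer Gibbs sweep over the k cause-specific parameter blocks.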

Abstract: As one of the important research topics in machine learning, the loss function plays an important role in the construction of machine learning algorithms and the improvement of their performance, and it has been studied and explored by many researchers. However, a systematic summary, analysis, and comparison of the classical loss functions is still lacking. Therefore, this paper summarizes and analyzes 31 classical loss functions in machine learning. Specifically, we describe the loss functions from the perspectives of traditional machine learning and deep learning respectively. The former is divided into classification problems, regression problems, and unsupervised learning according to the task type; the latter is subdivided according to the application scenario, where we mainly select object detection and face recognition and introduce their loss functions. In each task or application, in addition to analyzing each loss function in terms of its formula, meaning, graph, and algorithm, the loss functions under the same task or application are summarized and compared to deepen understanding and to help with the selection and improvement of loss functions. PubDate: 2022-04-01
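
Three of the classical losses in such a taxonomy, written out in plain NumPy (a representative sample, not the paper's full list of 31):

```python
import numpy as np

def mse(y, y_hat):                     # regression: mean squared error
    return np.mean((y - y_hat) ** 2)

def cross_entropy(y, p, eps=1e-12):    # binary classification, y in {0, 1}
    p = np.clip(p, eps, 1 - eps)       # avoid log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def hinge(y, score):                   # SVM-style margin loss, y in {-1, +1}
    return np.mean(np.maximum(0.0, 1.0 - y * score))

y = np.array([1.0, -1.0])
print(hinge(y, np.array([2.0, -3.0])))   # both margins exceed 1, so loss is 0
```

The comparison the paper draws is visible even here: hinge ignores confidently correct examples entirely, while cross-entropy keeps rewarding ever-higher confidence.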