Abstract: The exponential growth of technology has made images and videos popular digital objects. With the increase in visual imagery, crimes such as identity theft, privacy invasion, and fake news have also increased. The paper proposes a simple, easy-to-train, fully convolutional neural network, named MiniNet, to detect forged images with high accuracy. The model is evaluated on existing image forgery datasets consisting of authentic and tampered images. The proposed model achieved an accuracy of more than \(95\%\) on the 140K Real and Fake Faces dataset and \(93\%\) on the CASIA dataset. Multiple ablation studies are conducted on various state-of-the-art (SOTA) CNN models to check their performance on the given dataset. The objective is to assess the ability of CNNs to detect image tampering. The experiments cover different aspects such as self-attention, positional encoding, and depth of the model. The minimal architecture used for image forgery detection is presented along with the performance achieved on different well-known datasets. PubDate: 2022-06-24
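The exact MiniNet architecture is not given in the abstract; the following is a minimal sketch of a small, easy-to-train, fully convolutional binary classifier in Keras, with hypothetical layer widths and input size, only to illustrate the kind of model described.

```python
# Minimal sketch of a small fully convolutional forgery classifier
# (hypothetical layer sizes; not the authors' exact MiniNet architecture).
import tensorflow as tf
from tensorflow.keras import layers

def build_mininet_sketch(input_shape=(128, 128, 3)):
    return tf.keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.GlobalAveragePooling2D(),        # no large dense stack: keeps the model small
        layers.Dense(1, activation="sigmoid"),  # authentic vs. tampered
    ])

model = build_mininet_sketch()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```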
Abstract: In this paper, transfer learning is used to adapt pre-trained deep convolutional neural networks (DCNNs) and find the most appropriate method for the classification of obstructive sleep apnea (OSA) using electrocardiogram (ECG) signals. The PhysioNet Apnea-ECG dataset has been used for the evaluation of the proposed method. In deep learning algorithms, especially in image classification, more data leads to better performance. For this reason, we propose the following technique. First, the ECG signal is divided into 2-s segments and filtered; then the recurrence plot (RP) algorithm is used to convert these segments into two-dimensional images. The RP is a tool that indicates how similarities between particular states vary over time, and such plots are generally used for the qualitative evaluation of time series in dynamic systems. Finally, in the classification stage, five DCNN models pre-trained on the ImageNet dataset, namely EfficientNet-B0, EfficientNet-B1, EfficientNet-B2, Inception-v3, and Xception, are considered for the final decision. Using these models, classification accuracies of 88.67%, 90.59%, 90.52%, 93.33%, and 93.19% were obtained, respectively. Analyzing the performance of the models used, we observe that as the input image size, the number of network parameters, and the depth of the network increase, the classification performance improves. The Inception-v3 model, which has the largest input image size and number of parameters, achieves 93.33% accuracy and outperforms the other models for OSA detection. PubDate: 2022-06-22
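The recurrence-plot step can be sketched as follows: a 2-s segment is turned into a binary similarity image by thresholding pairwise distances between delay-embedded samples. The segment, embedding dimension, delay, and threshold below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def recurrence_plot(x, eps=0.1, dim=3, tau=1):
    """Binary recurrence plot of a 1-D signal via delay embedding (illustrative parameters)."""
    n = len(x) - (dim - 1) * tau
    # delay-embedded state vectors
    states = np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)
    # pairwise Euclidean distances between states
    d = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    return (d <= eps).astype(np.uint8)  # 2-D image to feed the pre-trained DCNNs

segment = np.sin(np.linspace(0, 8 * np.pi, 200))   # stand-in for a filtered 2-s ECG segment
rp_image = recurrence_plot(segment, eps=0.2)
```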
Abstract: Attention is one of the most important cognitive functions, since it allows us to filter out irrelevant stimuli when performing an activity. The presence of an attention deficit significantly affects a person’s performance, which is one of the reasons why it is of utmost importance to assess the state of attention mechanisms. A tool that determines the level of attention could be of great help in the diagnosis of syndromes or disorders, as well as in the rehabilitation and treatment of people suffering from attention deficits. In this work, a methodology based on a Random Forest algorithm optimized with PSO (Particle Swarm Optimization) is proposed for the classification of attention levels, divided into three main categories: High Attention, Normal Attention, and Low Attention. The proposed approach reached an accuracy of up to 96%. Finally, the approach was compared with the state of the art, showing that it is a feasible methodology for this application. PubDate: 2022-06-11
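A minimal sketch of the RF-plus-PSO idea, assuming a generic particle swarm over two Random Forest hyperparameters (number of trees and maximum depth) scored by cross-validation; the actual features, search space, and PSO settings of the paper are not reproduced here, and the data are synthetic stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = rng.normal(size=(300, 10)), rng.integers(0, 3, size=300)  # stand-in for attention-level data

def fitness(p):                      # p = (n_estimators, max_depth)
    clf = RandomForestClassifier(n_estimators=int(p[0]), max_depth=int(p[1]), random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

# Very small PSO over the two hyperparameters (illustrative swarm settings).
lo, hi = np.array([10, 2]), np.array([100, 15])
pos = rng.uniform(lo, hi, size=(6, 2)); vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()]
for _ in range(5):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()]
print("best (n_estimators, max_depth):", gbest.astype(int))
```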
Abstract: This article investigates cybersecurity (and risk) in the context of ‘technological singularity’ from artificial intelligence. The investigation constructs multiple risk forecasts that are synthesised into a new framework for counteracting risks from artificial intelligence (AI) itself. In other words, the research in this article is not just concerned with securing a system, but also with analysing how the system responds when (internal and external) failures and compromises occur. This is an important methodological principle because not all systems can be secured, and totally securing a system is not feasible; thus, we need to construct algorithms that enable systems to continue operating even when parts of the system have been compromised. Furthermore, the article forecasts emerging cyber-risks from the integration of AI in cybersecurity. Based on the forecasts, the article concentrates on creating synergies between the existing literature, the data sources identified in the survey, and the forecasts. The forecasts are used to increase the feasibility of the overall research and to enable the development of novel methodologies that use AI to defend against cyber risks. The methodology focuses on addressing the risk of AI attacks, as well as forecasting the value of AI in defence and in the prevention of rogue AI devices acting independently. PubDate: 2022-06-04
Abstract: This paper introduces a method for hidden community detection using a genetic algorithm, with a view to network immunization against malware propagation. A dynamic spreading model, the susceptible–infected–recovered–susceptible model with vaccination and quarantine states (SIRS-QV), is proposed to control the speed of malware propagation in communities. The vital nodes in communities are vaccinated to improve the immunization of social networks. Moreover, the genetic algorithm discovers hidden network communities based on a modularity criterion that measures the strength of a set of communities partitioning the network. A hiddenness value is calculated to select communities with higher hiddenness, and the nodes in these communities are vaccinated to reduce the rapid spread of malware and, after a short time, halt the malware in the network. PubDate: 2022-06-03
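A discrete-time sketch of an SIRS-type spreading model with vaccination (V) and quarantine (Q) compartments, to show the kind of dynamics SIRS-QV describes; the transition rates and compartment structure below are illustrative assumptions, not the paper's calibrated model.

```python
def sirs_qv_step(state, beta=0.3, gamma=0.1, delta=0.05, nu=0.02, q=0.04, rho=0.03):
    """One discrete step of a simple SIRS-QV-style model (illustrative rates).
    state = (S, I, R, Q, V) as population fractions."""
    S, I, R, Q, V = state
    new_inf = beta * S * I          # susceptible -> infected
    rec     = gamma * I             # infected -> recovered
    quar    = q * I                 # infected -> quarantined
    rel     = rho * Q               # quarantined -> recovered
    waning  = delta * R             # recovered -> susceptible (SIRS loop)
    vacc    = nu * S                # susceptible -> vaccinated (vital nodes)
    S += waning - new_inf - vacc
    I += new_inf - rec - quar
    R += rec + rel - waning
    Q += quar - rel
    V += vacc
    return S, I, R, Q, V

state = (0.95, 0.05, 0.0, 0.0, 0.0)
for _ in range(100):
    state = sirs_qv_step(state)
print("final infected fraction:", round(state[1], 4))
```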
Abstract: Penetration of Distributed Generation (DG) is growing due to the increase in load demand. This paper applies the ant colony optimization (ACO) technique for the optimal integration of DG in a distribution system, minimizing a techno-economic objective function (TEOF) composed of a power loss index, a voltage deviation index and an operating cost index. The various costs, such as purchasing active power from the grid, DG installation, and DG operation and maintenance, are evaluated under normal and heavy load scenarios. The proposed methodology is examined on the IEEE 33-bus (Sahoo and Prasad, Energy Convers Manag 47:3288-3306) and 85-bus Indian utility radial networks (Shuaib et al., Int J Electr Power Energy Syst 64:384-397, 2015) to carry out technical and economic analysis for solar- and wind-based DG. The obtained results are compared with those reported by other researchers and confirm that appropriate allocation of DG gives a remarkable reduction in power losses of around 50–90%, an improvement of the minimum voltage magnitude of around 6–13% and a net saving of 14–39% with two DG placements. PubDate: 2022-06-01
Abstract: This paper develops a novel hybrid metaheuristic algorithm to achieve an optimum design for an automobile cruise control (ACC) system using a proportional-integral-derivative (PID) controller based on Bode’s ideal transfer function. The developed algorithm (AOA-NM) adopts a recently published metaheuristic, the arithmetic optimization algorithm (AOA), to perform the explorative task, whereas the well-known Nelder–Mead (NM) simplex search performs the exploitative task. The hybrid algorithm was initially tested on well-known benchmark functions by comparing the results only with its original version, since AOA has already been shown to outperform other state-of-the-art algorithms; the statistical results on the benchmark functions demonstrate the better capability of AOA-NM. Furthermore, a PID controller based on Bode’s ideal transfer function was adopted to regulate an ACC system optimally. Statistical, convergence rate, time domain and frequency domain analyses were performed by comparing the performance of AOA-NM with AOA, and each analysis showed the better capability of the proposed hybrid algorithm. Moreover, the proposed AOA-NM based PID control scheme was compared with other available approaches in the literature using time domain analysis, which again confirmed the enhanced capability of the proposed approach for regulating an ACC system and further verified the ability of the AOA-NM algorithm. Lastly, other recently reported and effective metaheuristic algorithms were also used to assess the performance of the proposed approach. The comparative results further confirmed AOA-NM to be a better tool for achieving successful results on the ACC system. PubDate: 2022-06-01
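The exploration-then-exploitation pattern of such a hybrid can be sketched generically: a coarse global-search phase seeds SciPy's Nelder-Mead simplex, which polishes the best candidate. The random-sampling stage below merely stands in for the AOA step, and the Rosenbrock function stands in for the actual PID/Bode tuning cost.

```python
import numpy as np
from scipy.optimize import minimize

def cost(x):                          # stand-in for the PID tuning objective
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2   # Rosenbrock benchmark

rng = np.random.default_rng(1)

# Exploration phase (random sampling as a stand-in for AOA).
candidates = rng.uniform(-2, 2, size=(200, 2))
best = min(candidates, key=cost)

# Exploitation phase: Nelder-Mead simplex refinement of the best candidate.
result = minimize(cost, best, method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8})
print("refined optimum:", result.x, "cost:", result.fun)
```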
Abstract: The paper introduces an evolving hyperbox granulation and functional fuzzy rule-based modeling approach within the framework of min–max learning. Evolving hyperbox fuzzy modeling is a per-sample, one-pass recursive learning mechanism suitable for on-line and real-time adaptive modeling of streaming data. Granulation of the data space is done as data are input, and it undergoes continuous adaptation using expansion, contraction, and redundancy-avoidance operations to find the number of hyperboxes that best matches the data, adjusting the granular structure of the data space whenever necessary. A functional fuzzy rule with a Gaussian membership function in the rule antecedent and an affine function in the rule consequent is assigned to each hyperbox. The granular rule-based model developed during learning is transparent, understandable and easily interpretable. Hyperbox fuzzy modeling scales up well for data-intensive applications because the models it develops are parsimonious, and min–max learning operates primarily with additions and comparisons. The use of the evolving hyperbox fuzzy modeling approach to forecast a stock market index using actual time series data, to identify a synthetic high-dimensional nonlinear system, and to predict a chaotic time series shows that it outperforms several state-of-the-art evolving modeling counterparts. PubDate: 2022-06-01
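The core min-max operations can be sketched with simple element-wise comparisons: a hyperbox is kept as min and max corner vectors, membership decays with distance outside the box, and expansion only adjusts the corners when a size constraint permits, otherwise a new hyperbox (rule) would be created. The membership form and the size bound θ below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def membership(x, vmin, vmax, gamma=4.0):
    """Fuzzy membership of x in hyperbox [vmin, vmax] (1 inside, decaying outside)."""
    below = np.maximum(0.0, vmin - x)
    above = np.maximum(0.0, x - vmax)
    return float(np.exp(-gamma * np.sum(below + above)))

def try_expand(x, vmin, vmax, theta=0.3):
    """Expand the hyperbox to include x only if every side stays within size theta."""
    new_min, new_max = np.minimum(vmin, x), np.maximum(vmax, x)
    if np.all(new_max - new_min <= theta):
        return new_min, new_max, True     # expansion accepted
    return vmin, vmax, False              # otherwise create a new hyperbox/rule

vmin, vmax = np.array([0.2, 0.2]), np.array([0.4, 0.4])
x = np.array([0.45, 0.35])
print("membership:", membership(x, vmin, vmax))
vmin, vmax, expanded = try_expand(x, vmin, vmax)
```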
Abstract: In recent years social media has become an important part of everyday life for many people. A big challenge of social media is finding posts that are interesting for the user. Many social networks like Twitter handle this problem with so-called hashtags: a user can label their own tweet (post) with a hashtag, while other users can search for posts containing a specified hashtag. But what about finding posts which are not labeled by the creator? We provide a way of completing hashtags for unlabeled posts using classification on a novel real-world Twitter data stream. New posts are created every second, so this context fits perfectly for non-stationary data analysis. Our goal is to show how labels (hashtags) of social media posts can be predicted by stream classifiers. In particular, we employ random projection (RP) as a preprocessing step in calculating streaming models. We also provide a novel real-world data set for streaming analysis called NSDQ with a comprehensive data description, and show that this dataset is a real challenge for state-of-the-art stream classifiers. While RP has been widely used and evaluated in stationary data analysis scenarios, non-stationary environments are not well analyzed. In this paper, we provide a use case of RP on real-world streaming data, especially on the NSDQ dataset. We discuss why RP can be used in this scenario and how it can handle stream-specific situations like concept drift. We also provide experiments with RP on streaming data, using state-of-the-art stream classifiers like adaptive random forest and concept drift detectors. Additionally, we experimentally evaluate an online principal component analysis (PCA) approach in the same fashion as we do for RP. To obtain higher-dimensional synthetic streams, we use random Fourier features (RFF) in an online manner, which allows us to increase the number of dimensions of low-dimensional streams. PubDate: 2022-06-01
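A small sketch of using random projection as a preprocessing step in a streaming pipeline: the projection is fixed once and reused, while an incremental classifier is updated chunk by chunk with partial_fit. The data here are synthetic stand-ins for the tweet feature stream, and the classifier is a generic scikit-learn model rather than the adaptive random forest used in the paper.

```python
import numpy as np
from sklearn.random_projection import SparseRandomProjection
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
n_features, n_components, classes = 5000, 200, np.array([0, 1])

# Fit the random projection once on a first chunk; reuse it for the whole stream.
rp = SparseRandomProjection(n_components=n_components, random_state=0)
clf = SGDClassifier()

first_chunk = rng.random((100, n_features))
rp.fit(first_chunk)

for t in range(20):                                   # simulated stream of chunks
    X_chunk = rng.random((100, n_features))
    y_chunk = rng.integers(0, 2, size=100)            # stand-in hashtag labels
    clf.partial_fit(rp.transform(X_chunk), y_chunk, classes=classes)
```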
Abstract: The progress of the Internet of Medical Things (IoMT) and mobile technologies is a crucial driver for the evolution of healthcare systems toward prevention, early diagnosis and care, consequently unleashing the full potential of medical devices. Especially in intensive care, several vital signs can be monitored to provide an Early Warning Score (EWS) useful for detecting the onset of pathological events or severe conditions. However, under these conditions, it would be beneficial to design a system that can provide a risk assessment even in the presence of a reduced number of vital signs. In this work, we propose an on-edge system, connected to one or more wearable medical devices, that is able to collect, analyze and interpret real-time clinical parameters and to provide an EWS-like clinical risk measurement. The system shows evolutionary behavior by dividing the learning problem into two simpler ones in order to correctly distinguish between low-urgency and emergency scenarios, with the possibility of selecting the most convenient configuration and the most appropriate classifier even when the feature set does not allow a robust model selection. In particular, we focus on a comparative analysis of machine learning (ML) methods under different conditions of available vital parameter sets, evolving therefore toward an adaptive ML approach. Moreover, since emerging ethical issues about lack of transparency may arise from the integration of artificial intelligence tools and the IoMT, a semantic-based explanation is associated with the predictions, enriching them along with the health data by means of Semantic Web technologies. PubDate: 2022-06-01
Abstract: This paper presents a novel system for person authentication based on score-level fusion of Minor and Major dorsal finger knuckle patterns. In the proposed method, the adaptive single-scale retinex method is used to extract the reflectance and the illumination of the Major and Minor traits respectively, and the binarized statistical image features method is used to extract normalized histogram features. The cosine Mahalanobis distance is used in the matching stage. Moreover, a multi-biometric system based on score-level fusion has been proposed; in an attempt to enhance recognition performance, symmetric sum-based rules built on triangular norms are applied. The system is evaluated on the publicly available Minor/Major knuckle database, and the experiments conducted on this database achieved good results. The proposed system also outperforms the previous methods reported in the state of the art. PubDate: 2022-06-01
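The symmetric-sum fusion can be illustrated with a standard construction: given two normalized match scores a and b and a triangular norm T, the fused score is T(a,b) / (T(a,b) + T(1−a, 1−b)). The product t-norm below is one common choice, used here purely as an assumption; the paper may use other t-norms.

```python
def product_tnorm(a, b):
    return a * b

def symmetric_sum(a, b, tnorm=product_tnorm):
    """Symmetric sum built from a t-norm; a, b are normalized match scores in [0, 1]."""
    num = tnorm(a, b)
    den = num + tnorm(1.0 - a, 1.0 - b)
    return num / den if den > 0 else 0.5

# Fuse Major- and Minor-knuckle match scores (illustrative values).
major_score, minor_score = 0.82, 0.67
print(symmetric_sum(major_score, minor_score))
```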
Abstract: Multilevel image thresholding and image clustering, two extensively used image processing techniques, have sparked renewed interest in recent years due to their wide range of applications. The approach of yielding multiple threshold values for each color channel to generate clustered and segmented images is quite efficient and provides significant performance, although it is computationally heavy. To ease this complicated process, nature-inspired optimization algorithms are handy tools. In this paper, the performance of the Chimp Optimization Algorithm (ChOA) in image clustering and segmentation is analyzed, based on multilevel thresholding for each color channel. To evaluate the performance of ChOA in this regard, several performance metrics have been used, namely segment evaluation function, peak signal-to-noise ratio, variation of information, probability Rand index, global consistency error, Feature Similarity Index, Structural Similarity Index, Blind/Referenceless Image Spatial Quality Evaluator, Perception-based Image Quality Evaluator, and Naturalness Image Quality Evaluator. This performance has been compared with eight other well-known metaheuristic algorithms, namely the Particle Swarm Optimization Algorithm, Whale Optimization Algorithm, Salp Swarm Algorithm, Harris Hawks Optimization Algorithm, Moth Flame Optimization Algorithm, Grey Wolf Optimization Algorithm, Archimedes Optimization Algorithm and African Vulture Optimization Algorithm, using two popular thresholding techniques: Kapur’s entropy method and Otsu’s class variance method. The results demonstrate the effectiveness and competitive performance of the Chimp Optimization Algorithm. PubDate: 2022-05-27
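The fitness that such metaheuristics maximize can be illustrated with Otsu's between-class variance for a candidate set of thresholds on one color channel; ChOA (or any of the listed optimizers) then searches over threshold vectors. The sketch below uses a plain random search purely as a stand-in for the metaheuristic, and a synthetic channel instead of a real image.

```python
import numpy as np

def otsu_between_class_variance(hist, thresholds):
    """Between-class variance for multilevel thresholds on a 256-bin histogram."""
    p = hist / hist.sum()
    levels = np.arange(256)
    mu_total = (p * levels).sum()
    edges = [0] + sorted(int(t) for t in thresholds) + [256]
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

rng = np.random.default_rng(0)
channel = rng.integers(0, 256, size=(64, 64))          # stand-in for one color channel
hist = np.bincount(channel.ravel(), minlength=256)

# Random search stands in for ChOA/PSO/WOA etc.: pick the best 3-threshold vector.
best = max((tuple(sorted(rng.integers(1, 255, size=3))) for _ in range(500)),
           key=lambda t: otsu_between_class_variance(hist, t))
print("thresholds:", best)
```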
Abstract: Random Forest is one of the most popular supervised machine learning algorithms; it is an ensemble of decision trees combined to discover more rules accurately and ensure diversity. Constructing a large number of trees can lead to redundant ones, which may adversely affect storage memory, computation time, attainable performance and interpretability. Plenty of methods have been proposed in the literature for selecting a sub-forest while maintaining or even increasing overall performance. In this paper, a new sub-forest selection method is proposed. It pursues two goals: first, selecting the minimal number of trees possible, and second, maintaining or even improving the original ensemble’s performance. A noisy-variable technique is introduced as an indicator of underperforming trees: the generated variable is injected into the feature space at each node during a tree’s construction, and as a result the noisy trees are eliminated from the final sub-forest. To test the validity of the proposed method, we have employed real and artificial benchmarking datasets. The obtained results confirm that the generated sub-forest is of small size and high performance compared to state-of-the-art algorithms. PubDate: 2022-05-27
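The noisy-variable idea can be sketched with scikit-learn internals: append a pure-noise column to the data, grow a forest, and discard any tree that ever splits on that column, on the assumption that splitting on noise flags an underperforming tree. This is a simplification of the paper's per-node injection scheme; the dataset and forest size are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
rng = np.random.default_rng(0)

# Inject one pure-noise column; its index is the last column.
X_noisy = np.hstack([X, rng.normal(size=(X.shape[0], 1))])
noise_idx = X_noisy.shape[1] - 1

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_noisy, y)

# Keep only trees that never split on the noise feature (leaf nodes are coded as -2).
sub_forest = [t for t in forest.estimators_
              if noise_idx not in t.tree_.feature[t.tree_.feature >= 0]]
print(f"kept {len(sub_forest)} of {len(forest.estimators_)} trees")
```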
Abstract: This paper addresses the nonlinear system identification problem based on Volterra-type nonlinear systems. An efficient arithmetic optimization algorithm (AOA) along with the Kalman filter (KF) is used for the estimation/identification task. The KF is a proven state estimator in estimation theory, used to estimate unknown variables from measurements observed over time; however, its performance degrades when dealing with real-time state estimation problems. To overcome this, two steps are followed for nonlinear system identification. The first involves evaluating the KF parameters using the AOA algorithm with a suitable fitness function. The second estimates the parameters of the Volterra model using the KF with the optimal KF parameters obtained in the first step. To prove its efficiency, the proposed AOA-assisted KF algorithm is further tested on various benchmark unknown Volterra models. Simulated results are reported in terms of mean square error (MSE), mean square deviation (MSD), Volterra coefficient estimation error, and fitness percentage. The results are compared with similar algorithms such as the sine cosine algorithm assisted KF (SCA-KF), cuckoo search algorithm assisted KF (CSA-KF), particle swarm optimization assisted KF (PSO-KF) and genetic algorithm assisted KF (GA-KF). The reported results reveal that the AOA-KF algorithm is the right choice for the nonlinear system identification problem compared to SCA-KF, CSA-KF, PSO-KF and GA-KF. PubDate: 2022-05-25
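For reference, the truncated second-order Volterra model whose kernels are estimated can be written out directly as y[n] = Σₖ h₁[k] x[n−k] + Σₖ₁Σₖ₂ h₂[k₁,k₂] x[n−k₁] x[n−k₂]. The memory length and kernel values below are arbitrary illustrative numbers, not the paper's benchmark models.

```python
import numpy as np

def volterra2_output(x, h1, h2):
    """Output of a truncated second-order Volterra model with memory M = len(h1)."""
    M = len(h1)
    y = np.zeros(len(x))
    for n in range(M - 1, len(x)):
        window = x[n - M + 1 : n + 1][::-1]      # x[n], x[n-1], ..., x[n-M+1]
        y[n] = h1 @ window + window @ h2 @ window
    return y

rng = np.random.default_rng(0)
x = rng.normal(size=200)                          # excitation signal
h1 = np.array([1.0, 0.5, -0.2])                   # illustrative linear kernel
h2 = 0.1 * np.eye(3)                              # illustrative quadratic kernel
y = volterra2_output(x, h1, h2)                   # "measurements" used for identification
```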
Abstract: This paper focuses on a new model for classifying the condition of wagon bogie springs from images acquired by wayside equipment. We discuss the application of a semi-supervised learning approach based on a deep rules-based (DRB) classifier to achieve high classification accuracy and to check whether a bogie has spring problems or not. We use a pre-trained VGG19 deep convolutional neural network to extract the attributes from images, which are used as input to the classifiers. The performance is calculated on a data set of images provided by a Brazilian railway company covering the two spring conditions: normal condition (no elastic reserve problems) and bad condition (with elastic reserve problems). Additive Gaussian noise is also applied to the images at several levels to challenge the proposed model. Finally, we discuss the performance analysis of the semi-supervised DRB (SSDRB) classifier and its distinctive characteristics compared with other classifiers. The reported results demonstrate the relevant performance of the SSDRB classifier on the questions raised. PubDate: 2022-05-24
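The feature-extraction step can be sketched with the standard pre-trained VGG19 from Keras applications, used without its classification head and with global average pooling, so each image maps to a fixed-length vector fed to the downstream (SS)DRB classifier. The input size and the random stand-in images are assumptions.

```python
import numpy as np
from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input

# Pre-trained VGG19 without the top classifier; global average pooling gives a 512-d vector.
extractor = VGG19(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (N, 224, 224, 3) with values in [0, 255]."""
    return extractor.predict(preprocess_input(images), verbose=0)

batch = np.random.uniform(0, 255, size=(4, 224, 224, 3)).astype("float32")  # stand-in bogie images
features = extract_features(batch)          # shape (4, 512): input to the SSDRB classifier
```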
Abstract: The importance of neural networks in control systems has grown in recent years as a result of their learning and universal approximation capabilities. When the plant dynamics are complex, system identification and controller design become particularly difficult. In this paper, we propose a technique for identifying the system dynamics and designing a neural network based Fractional Order Proportional Integral Derivative (FOPID) controller for a separately excited DC motor. A category of Recurrent Neural Networks (RNNs) called Nonlinear Auto-Regressive with eXogenous input networks (NARXnets) is used to identify the plant dynamics. To verify the proposed method, a separately excited DC motor is considered as the plant and a Harris Hawks Optimization (HHO) tuned FOPID controller as the model controller; the motor and controller dynamics are identified using NARXnets. The simulation results demonstrate that the proposed controller performs better than conventional FOPID/PID controllers, and the step and load response analysis shows the stable and robust performance of the neural network based FOPID controller. In addition, the proposed method can also be used as an alternative technique for approximating FOPID controllers with neural networks. PubDate: 2022-04-27
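A series-parallel NARX-style identification can be sketched by regressing the next output on lagged inputs and outputs; a feed-forward network stands in here for the recurrent NARXnet, and the toy first-order plant is only an assumption used to generate data, not the DC motor model of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy plant (stand-in for the DC motor): y[k+1] = 0.9 y[k] + 0.1 u[k] + noise
u = rng.uniform(-1, 1, size=1000)
y = np.zeros_like(u)
for k in range(len(u) - 1):
    y[k + 1] = 0.9 * y[k] + 0.1 * u[k] + 0.01 * rng.normal()

# Series-parallel NARX regressors: [y[k], y[k-1], u[k], u[k-1]] -> y[k+1]
X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
target = y[2:]

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X, target)
print("one-step-ahead fit R^2:", round(model.score(X, target), 3))
```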
Abstract: In this paper, a new evolutionary method is used to design and improve the reliability of the Supervisory Control and Data Acquisition (SCADA) system of reservoir stations in a water transfer network. The proposed mathematical model uses a Reliability Block Diagram (RBD) and redundancy policies, and a bi-objective non-linear redundancy allocation problem (RAP) model considering cost and reliability optimizes the number of redundant components in each subsystem. A customized hybrid dynamic NSGA-II mixed with the MOPSO algorithm solves the proposed RAP. The customized algorithm uses a dynamic repository to save the elites of each generation, which form the final solutions, and its parameters change dynamically as the algorithm progresses. This approach was compared with mathematical and meta-heuristic methods and showed better performance. Finally, the mathematical relations of control centers and stations are used to calculate the total reliability of the SCADA system, considering k-out-of-n systems with the minimum number of stations required for acceptable system performance. PubDate: 2022-04-27
Abstract: Precise localization of sensor nodes with minimum energy consumption in green wireless sensor networks (GWSNs) is always required to give meaning to their field observations. Localization through regression, rather than iterative classification, is the preferred energy-efficient machine learning approach in GWSNs. However, regression models drain network energy in hop-size approximation and communication too heavily to be sustainable in outdoor range-free obstructed fields with high path-loss exponents. Considering these constraints, we propose a model, KGR, for the localization of sensor nodes that eliminates the two dominant energy consumption cycles: hop-size computation and hop-size communication. The proposed KGR is an energy-efficient hop-based range-free model that works in three steps: Training Data Segregation, Regression Model Design, and Best Fit Estimation. In Training Data Segregation, we obtain a training data set of anchor nodes that behave similarly to the intended unknown node in terms of hop counts, using the discrete Fréchet distance and Lagrange polynomials. Thereafter, in Regression Model Design, local linear Gaussian-kernel regression (LLGKR) is defined, with the bandwidth taken from the training data set through the k-NN algorithm; the unknown node can thus learn a relationship between hop counts and its distance from the respective anchor nodes. Further, in Best Fit Estimation, a possible location of the unknown node is obtained using the distances produced by the regression model. At every stage of KGR, the steps are simplified to reduce the computational effort. Simulation of KGR validates its localization accuracy and energy efficiency in comparison to other leading range-free regression algorithms. PubDate: 2022-04-13 DOI: 10.1007/s12530-022-09436-2
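The regression step can be illustrated with a plain Nadaraya-Watson Gaussian-kernel estimator mapping hop counts to distance, trained on anchor-to-anchor pairs. The local linear variant, the Fréchet-distance segregation, and the k-NN bandwidth choice of the paper are omitted, and the data below are synthetic assumptions.

```python
import numpy as np

def gaussian_kernel_regression(h_query, hops, dists, bandwidth=1.0):
    """Nadaraya-Watson estimate of distance from hop count (Gaussian kernel)."""
    w = np.exp(-0.5 * ((hops - h_query) / bandwidth) ** 2)
    return float(np.sum(w * dists) / np.sum(w))

rng = np.random.default_rng(0)
hops = rng.integers(1, 10, size=50).astype(float)        # anchor-to-anchor hop counts
dists = 25.0 * hops + rng.normal(0, 5, size=50)          # corresponding distances (metres)

# Unknown node observed 4 hops from an anchor -> estimated distance, later used for position fitting.
print("estimated distance:", round(gaussian_kernel_regression(4, hops, dists), 1))
```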
Abstract: Due to the increasing volume of big data, the high volume of information in social networks prevents users from acquiring useful information intelligently, so many recommendation systems have emerged. Multi-agent deep learning is gaining rapid attention, and the latest accomplishments address problems of real-world complexity, yet precise recommendation with big data remains an open problem. The proposed work presents Deep Recurrent Gaussian Nesterov’s Optimal Gradient (DR-GNOG), which combines deep learning with a multi-agent scenario for optimal and precise recommendation. The DR-GNOG comprises an input layer, two hidden layers and an output layer. The tweets obtained from the users are provided to the input layer by the Tweet Accumulator Agent. Then, in the first hidden layer, the Tweet Classifier Agent performs optimized and relevant tweet classification by means of the Gaussian Nesterov’s Optimal Gradient model. In the second hidden layer, a Deep Recurrent Predictive Recommendation model is designed to address the vanishing gradient issue arising from updated tweets obtained from the same user at different time instances. Finally, with the aid of a hyperbolic activation function in the output layer, the building block of the predictive recommendation is obtained. In the experimental study, the proposed method is found to be 13–21% better than the existing GANCF and Bootstrapping methods in recommendation accuracy, 22–32% better in recommendation time and 15–22% better in recall rate. PubDate: 2022-04-09 DOI: 10.1007/s12530-022-09435-3
Abstract: The past decade has witnessed the progress of the cross-coupled LC voltage-controlled oscillator (VCO) in both academic and industrial communities. In this work, a new multi-objective optimization methodology is proposed to obtain an optimal design of a complementary cross-coupled LC-VCO. The design objective is to minimize the phase noise and power consumption of the oscillator at an oscillation frequency of 2.5 GHz and a 1.5 V supply voltage. The important characteristics of the complementary LC-VCO, which is one of the more popular cross-coupled configurations, are described in sufficient detail. In addition, the confirmation theorems of the proposed method are proven to show that the new version of the Multi-Objective Gravitational Search Algorithm (MOGSA) can control the exploitation and exploration abilities of the algorithm; hence the improved version of MOGSA performs better than other popular multi-objective methods. The simulation results obtained from the circuit optimization are summarized to confirm the robustness of the proposed method. PubDate: 2022-04-05 DOI: 10.1007/s12530-022-09433-5