Algorithms
Journal Prestige (SJR): 0.217 · Citation Impact (CiteScore): 1 · Number of Followers: 15 · Open Access journal · ISSN (Print): 1999-4893 · Published by MDPI
- Algorithms, Vol. 17, Pages 375: Synthetic Face Discrimination via Learned
Image Compression
Authors: Sofia Iliopoulou, Panagiotis Tsinganos, Dimitris Ampeliotis, Athanassios Skodras
First page: 375
Abstract: The emergence of deep learning has sparked notable strides in the quality of synthetic media. Yet, as photorealism reaches new heights, the line between generated and authentic images blurs, raising concerns about the dissemination of counterfeit or manipulated content online. Consequently, there is a pressing need to develop automated tools capable of effectively distinguishing synthetic images, especially those portraying faces, one of the most commonly encountered cases. In this work, we propose a novel approach to synthetic face discrimination that leverages deep learning-based image compression and predominantly utilizes the quality metrics of an image to determine its authenticity.
Citation: Algorithms
PubDate: 2024-08-23
DOI: 10.3390/a17090375
Issue No: Vol. 17, No. 9 (2024)
- Algorithms, Vol. 17, Pages 376: Integrating IoMT and AI for Proactive
Healthcare: Predictive Models and Emotion Detection in Neurodegenerative
Diseases
Authors: Virginia Sandulescu, Marilena Ianculescu, Liudmila Valeanu, Adriana Alexandru
First page: 376
Abstract: Neurodegenerative diseases, such as Parkinson’s and Alzheimer’s, present considerable challenges in their early detection, monitoring, and management. The paper presents NeuroPredict, a healthcare platform that integrates a series of Internet of Medical Things (IoMT) devices and artificial intelligence (AI) algorithms to address these challenges and proactively improve the lives of patients with or at risk of neurodegenerative diseases. Sensor data and data obtained through standardized and non-standardized forms are used to construct detailed models of monitored patients’ lifestyles and mental and physical health status. The platform offers personalized healthcare management by integrating AI-driven predictive models that detect early symptoms and track disease progression. The paper focuses on the NeuroPredict platform and the integrated emotion detection algorithm based on voice features. The rationale for integrating emotion detection is based on two fundamental observations: a) there is a strong correlation between physical and mental health, and b) frequent negative mental states affect quality of life and signal potential future health declines, necessitating timely interventions. Voice was selected as the primary signal for mood detection due to its ease of acquisition without requiring complex or dedicated hardware. Additionally, voice features have proven valuable in further mental health assessments, including the diagnosis of Alzheimer’s and Parkinson’s diseases.
Citation: Algorithms
PubDate: 2024-08-23
DOI: 10.3390/a17090376
Issue No: Vol. 17, No. 9 (2024)
- Algorithms, Vol. 17, Pages 377: Star Bicolouring of Bipartite Graphs
Authors: Daya Gaur, Shahadat Hossain, Rishi Ranjan Singh
First page: 377
Abstract: We give an integer linear program formulation for the star bicolouring of bipartite graphs. We develop a column generation method to solve the linear programming relaxation to obtain a lower bound for the minimum number of colours needed. We determine the star bicolouring using the iterative rounding method. We give computational results on arrowhead matrices, sparse random matrices, complete bipartite graphs, and matrices from the Harwell–Boeing collection. The findings demonstrate that the proposed method effectively establishes lower and upper bounds for the minimum number of colours needed for a star bicolouring of bipartite graphs, particularly for sparse bipartite graphs.
Citation: Algorithms
PubDate: 2024-08-24
DOI: 10.3390/a17090377
Issue No: Vol. 17, No. 9 (2024)
- Algorithms, Vol. 17, Pages 378: Parallel PSO for Efficient Neural Network
Training Using GPGPU and Apache Spark in Edge Computing Sets
Authors: Manuel I. Capel, Alberto Salguero-Hidalgo, Juan A. Holgado-Terriza
First page: 378
Abstract: The training phase of a deep learning neural network (DLNN) is a computationally demanding process, particularly for models comprising multiple layers of intermediate neurons. This paper presents a novel approach to accelerating DLNN training using the particle swarm optimisation (PSO) algorithm, which exploits the GPGPU architecture and the Apache Spark analytics engine for large-scale data processing tasks. PSO is a bio-inspired stochastic optimisation method whose objective is to iteratively enhance the solution to a (usually complex) problem by approximating a given objective. The expensive fitness evaluation and updating of particle positions can be supported more effectively by parallel processing. Nevertheless, the parallelisation of an efficient PSO is not a simple process due to the complexity of the computations performed on the swarm of particles and the iterative execution of the algorithm until a solution close to the objective with minimal error is achieved. In this study, two forms of parallelisation have been developed for the PSO algorithm, both of which are designed for execution in a distributed environment. The synchronous parallel PSO implementation guarantees consistency but may result in idle time due to global synchronisation. In contrast, the asynchronous parallel PSO approach reduces the necessity for global synchronisation, thereby improving execution time and making it more appropriate for large datasets and distributed environments such as Apache Spark. The two variants of PSO have been implemented with the objective of distributing the computational load supported by the algorithm across the different executor nodes of the Spark cluster to effectively achieve coarse-grained parallelism. The result is a significant performance improvement over current sequential variants of PSO. (A minimal local sketch of the synchronous variant follows this entry.)
Citation: Algorithms
PubDate: 2024-08-26
DOI: 10.3390/a17090378
Issue No: Vol. 17, No. 9 (2024)
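To make the synchronous variant above concrete, here is a toy sketch in which Python's multiprocessing pool stands in for the Spark executors: all fitness evaluations of one iteration run in parallel, followed by a global barrier. The sphere fitness function, swarm size, and PSO coefficients are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from multiprocessing import Pool

def fitness(x):                      # stand-in for an expensive DLNN loss
    return float(np.sum(x ** 2))

def pso_sync(n_particles=32, dim=10, iters=50, w=0.7, c1=1.4, c2=1.4, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.full(n_particles, np.inf)
    with Pool() as pool:             # workers stand in for Spark executors
        for _ in range(iters):
            f = np.array(pool.map(fitness, list(x)))   # parallel evaluations,
            better = f < pbest_f                       # then a global barrier
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[pbest_f.argmin()]                # global best particle
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
    return pbest[pbest_f.argmin()], float(pbest_f.min())

if __name__ == "__main__":
    best, best_f = pso_sync()
    print(best_f)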
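```

The per-iteration pool.map is exactly the global synchronisation point the abstract mentions; an asynchronous variant would instead update the personal and global bests as individual evaluations complete, avoiding the idle time at the barrier.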
- Algorithms, Vol. 17, Pages 379: Hybrid Particle Swarm Optimization-Jaya
Algorithm for Team Formation
Authors: Sandip Shingade, Rajdeep Niyogi, Mayuri Pichare
First page: 379
Abstract: Collaboration in a network is crucial for effective team formation. This paper addresses challenges in collaboration networks by identifying the skills required for effective team formation. The communication cost is low when agents with the same skills are connected. Our main objective is to minimize team communication costs by selecting agents with the required skills. However, finding an optimal team is a computationally hard problem. This study introduces a novel hybrid approach called I-PSO-Jaya (improved PSO-Jaya), which combines PSO (Particle Swarm Optimization) and the Jaya algorithm with the Modified Swap Operator to form efficient teams. A potential application scenario of the algorithm is building a team of engineers for an IT project. The implementation results show that our approach gives an improvement of 73% on the Academia dataset and 92% on the ACM dataset compared to existing methods.
Citation: Algorithms
PubDate: 2024-08-26
DOI: 10.3390/a17090379
Issue No: Vol. 17, No. 9 (2024)
- Algorithms, Vol. 17, Pages 380: An Improved Negotiation-Based Approach for
Collecting and Sorting Operations in Waste Management and Recycling
Authors: Massimiliano Caramia, Giuseppe Stecca
First page: 380
Abstract: This paper addresses the problem of optimal planning for collection, sorting, and recycling operations. The problem arises in industrial waste management, where distinct actors manage the collection and the sorting operations. Over a weekly or monthly planning horizon, they usually interact to find a suitable schedule for servicing customers, but without a well-defined scheme. We propose an improved negotiation-based approach using an auction mechanism for optimizing these operations. Two interdependent models are presented: one for waste collection by a logistics operator and the other for sorting operations at a recycling plant. These models are formulated as mixed-integer linear programs that minimize the costs associated with collection and sorting, respectively. We describe the negotiation-based approach involving an auction in which the logistics operator bids for collection time slots, and the recycling plant selects the optimal bid based on the integration of sorting and collection costs. This approach aims to achieve an optimization of the entire waste management process. Computational experiments are presented.
Citation: Algorithms
PubDate: 2024-08-27
DOI: 10.3390/a17090380
Issue No: Vol. 17, No. 9 (2024)
- Algorithms, Vol. 17, Pages 381: Multithreading-Based Algorithm for
High-Performance Tchebichef Polynomials with Higher Orders
Authors: Ahlam Hanoon Al-sudani, Basheera M. Mahmmod, Firas A. Sabir, Sadiq H. Abdulhussain, Muntadher Alsabah, Wameedh Nazar Flayyih
First page: 381
Abstract: Tchebichef polynomials (TPs) play a crucial role in various fields of mathematics and applied sciences, including numerical analysis, image and signal processing, and computer vision. This is due to the unique properties of the TPs and their remarkable performance. Nowadays, the demand for high-quality images (2D signals) is increasing and is expected to continue growing. The processing of these signals requires the generation of accurate and fast polynomials. The existing algorithms generate the TPs sequentially, which is computationally costly for high-order and larger-sized polynomials. To this end, we present a new efficient solution to overcome the limitation of sequential algorithms. The presented algorithm uses the parallel processing paradigm to reduce the computation cost. This is performed by utilizing the multicore and multithreading features of a CPU. The implementation of multithreaded algorithms for computing TP coefficients segments the computations into sub-tasks. These sub-tasks are executed concurrently on several threads across the available cores. The performance of the multithreaded algorithm is evaluated on various TP sizes, which demonstrates a significant improvement in computation time. Furthermore, a method for selecting the appropriate number of threads for the proposed algorithm is introduced. The results reveal that the proposed algorithm enhances the computation performance to provide a quick, steady, and accurate computation of the TP coefficients, making it a practical solution for different applications. (A toy parallel-segmentation sketch follows this entry.)
Citation: Algorithms
PubDate: 2024-08-27
DOI: 10.3390/a17090381
Issue No: Vol. 17, No. 9 (2024)
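To make the segmentation idea above concrete, here is a toy sketch: each worker evaluates the classical (unnormalized) discrete Chebyshev three-term recurrence over the order index for its own slice of the x-domain, since the columns are mutually independent. The sizes, worker count, and use of the unnormalized recurrence are assumptions for illustration; practical implementations (including the paper's) use normalized recurrences to keep values numerically bounded at high orders.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

N, ORDER = 256, 32        # signal length and highest polynomial order (toy sizes)

def tp_segment(xs):
    """Evaluate unnormalized discrete Chebyshev polynomials t_0..t_{ORDER-1}
    on a segment of x positions via the classical three-term recurrence
      (n+1) t_{n+1}(x) = (2n+1)(2x+1-N) t_n(x) - n(N^2-n^2) t_{n-1}(x).
    Columns (x positions) are independent, so segments run concurrently."""
    xs = np.asarray(xs, dtype=float)
    T = np.empty((ORDER, xs.size))
    T[0] = 1.0
    T[1] = 2 * xs + 1 - N
    for n in range(1, ORDER - 1):
        T[n + 1] = ((2 * n + 1) * (2 * xs + 1 - N) * T[n]
                    - n * (N**2 - n**2) * T[n - 1]) / (n + 1)
    return T

if __name__ == "__main__":
    segments = np.array_split(np.arange(N), 8)        # one sub-task per worker
    with ProcessPoolExecutor() as ex:
        T = np.hstack(list(ex.map(tp_segment, segments)))
    print(T.shape)                                    # (ORDER, N)
```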
- Algorithms, Vol. 17, Pages 382: Review: Quantum Circuit Synthesis for
Grover’s Algorithm Oracle
Authors: Miguel A. Naranjo, Luis A. Fletscher
First page: 382
Abstract: Searching for information in a system has been a long-standing problem in computing, and it has given rise to a family of classical search algorithms. Search systems can be categorized by the type of information being searched, the number of solutions to find, and the terms used for searching. With the emergence of quantum computing, new algorithms have been developed for this type of process. An example is Grover’s algorithm, which offers a theoretical quadratic speedup over classical search. This is why there has been research on optimizing it, applying it to new fields, and making it more accessible to industry users. Even though the algorithm is a promising alternative, one of its disadvantages is the use of an oracle function that must be generated for every set of search data. This review describes three sets of methodologies for generating quantum circuits that can be applied to constructing this oracle quantum circuit. (A minimal oracle-plus-diffuser sketch follows this entry.)
Citation: Algorithms
PubDate: 2024-08-28
DOI: 10.3390/a17090382
Issue No: Vol. 17, No. 9 (2024)
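For readers new to the oracle construction the review surveys, here is a minimal two-qubit Grover circuit in Qiskit (an assumed dependency): the oracle phase-flips the marked state |11⟩ with a single CZ gate, and one Grover iteration then amplifies it. Real oracles for arbitrary search data require the synthesis methodologies the review describes.

```python
from qiskit import QuantumCircuit

n = 2                                   # search space of 4 items, marked state |11>
grover = QuantumCircuit(n)
grover.h(range(n))                      # uniform superposition

# Oracle: phase-flip the marked state |11> (a single CZ suffices here)
grover.cz(0, 1)

# Diffuser: inversion about the mean
grover.h(range(n)); grover.x(range(n))
grover.cz(0, 1)
grover.x(range(n)); grover.h(range(n))

grover.measure_all()
print(grover.draw())                    # one iteration finds |11> with certainty
```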
- Algorithms, Vol. 17, Pages 320: Comparison of Reinforcement Learning
Algorithms for Edge Computing Applications Deployed by Serverless
Technologies
Authors: Mauro Femminella, Gianluca Reali
First page: 320
Abstract: Edge computing is one of the technological areas currently considered among the most promising for the implementation of many types of applications. In particular, IoT-type applications can benefit from reduced latency and better data protection. However, these benefits typically come at the price of a reduced amount of computing resources compared to the traditional cloud environment; indeed, it may happen that only one computing node is available. In these situations, it is essential to introduce computing and memory resource management techniques that allow resources to be optimized while still guaranteeing acceptable performance, in terms of latency and probability of rejection. For this reason, the use of serverless technologies, managed by reinforcement learning algorithms, is an active area of research. In this paper, we explore and compare the performance of some machine learning algorithms for managing horizontal function autoscaling in a serverless edge computing system. In particular, we make use of open serverless technologies, deployed in a Kubernetes cluster, to experimentally fine-tune the performance of the algorithms. The results obtained clarify some of the basic mechanisms of edge computing systems and related technologies that determine system performance, and they can guide configuration choices for systems in operation. (A toy tabular Q-learning autoscaler follows this entry.)
Citation: Algorithms
PubDate: 2024-07-23
DOI: 10.3390/a17080320
Issue No: Vol. 17, No. 8 (2024)
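As a toy illustration of RL-managed horizontal autoscaling (not the paper's serverless testbed), the sketch below learns a tabular Q-policy that trades a latency proxy against replica cost; the state discretization, reward weights, and load dynamics are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
MAX_REPLICAS, LOAD_LEVELS = 5, 4
ACTIONS = (-1, 0, 1)                        # scale in / hold / scale out
Q = np.zeros((LOAD_LEVELS, MAX_REPLICAS + 1, len(ACTIONS)))

def step(load, replicas, a):
    """Toy environment: the latency proxy rises when load exceeds replicas,
    and every replica has a running cost (both weights are invented)."""
    replicas = int(np.clip(replicas + ACTIONS[a], 0, MAX_REPLICAS))
    reward = -2.0 * max(0, load - replicas) - 0.5 * replicas
    load = int(np.clip(load + rng.integers(-1, 2), 0, LOAD_LEVELS - 1))
    return load, replicas, reward

eps, alpha, gamma = 0.1, 0.2, 0.95
load, replicas = 0, 1
for _ in range(20000):                      # epsilon-greedy Q-learning loop
    a = rng.integers(len(ACTIONS)) if rng.random() < eps \
        else int(Q[load, replicas].argmax())
    nload, nreps, r = step(load, replicas, a)
    Q[load, replicas, a] += alpha * (r + gamma * Q[nload, nreps].max()
                                     - Q[load, replicas, a])
    load, replicas = nload, nreps

print(Q[LOAD_LEVELS - 1].argmax(axis=1))    # greedy action per replica count at peak load
```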
- Algorithms, Vol. 17, Pages 321: Multi-Head Self-Attention-Based Fully
Convolutional Network for RUL Prediction of Turbofan Engines
Authors: Zhaofeng Liu, Xiaoqing Zheng, Anke Xue, Ming Ge, Aipeng Jiang
First page: 321
Abstract: Remaining useful life (RUL) prediction is widely applied in the prognostics and health management (PHM) of turbofan engines. Although some existing deep learning-based models for RUL prediction of turbofan engines have achieved satisfactory results, challenges remain. For example, the spatial features and importance differences hidden in the raw monitoring data are not sufficiently addressed or highlighted. In this paper, a novel multi-head self-attention fully convolutional network (MSA-FCN) is proposed for predicting the RUL of turbofan engines. MSA-FCN combines a fully convolutional network with a multi-head structure, focusing on the degradation correlation among various components of the engine and extracting spatially characteristic degradation representations. Furthermore, by introducing dual multi-head self-attention modules, MSA-FCN can capture the differential contributions of sensor data and extracted degradation representations to RUL prediction, emphasizing key data and representations. The experimental results on the C-MAPSS dataset demonstrate that, under various operating conditions and failure modes, MSA-FCN can effectively predict the RUL of turbofan engines. Compared with 11 mainstream deep neural networks, MSA-FCN achieves competitive advantages in terms of both accuracy and timeliness for RUL prediction, delivering more accurate and reliable forecasts.
Citation: Algorithms
PubDate: 2024-07-23
DOI: 10.3390/a17080321
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 322: Energy Consumption Outlier Detection with
AI Models in Modern Cities: A Case Study from North-Eastern Mexico
Authors: José-Alberto Solís-Villarreal, Valeria Soto-Mendoza, Jesús Alejandro Navarro-Acosta, Efraín Ruiz-y-Ruiz
First page: 322
Abstract: The development of smart cities will require the construction of smart buildings, which will demand the incorporation of elements for efficient monitoring and control of electrical consumption. The development of efficient AI algorithms is needed to generate more accurate electricity consumption predictions; therefore, anomaly detection in electricity consumption predictions has become an important research topic. This work focuses on the detection of anomalies in domestic electrical consumption in Mexico. A predictive machine learning model of future electricity consumption was generated to evaluate various anomaly-detection techniques. Their effectiveness in identifying outliers was determined, and their performance was documented. A 30-day forecast of electrical consumption and an anomaly-detection model based on isolation forest were developed; the isolation forest successfully captured up to 75% of the anomalies. Finally, Shapley values were used to generate an explanation of the results of a model capable of detecting anomalous data in the Mexican context. (A minimal isolation-forest sketch follows this entry.)
Citation: Algorithms
PubDate: 2024-07-24
DOI: 10.3390/a17080322
Issue No: Vol. 17, No. 8 (2024)
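A minimal sketch of isolation-forest anomaly flagging with scikit-learn, applied here to synthetic forecast residuals; the real pipeline would use the model's consumption-prediction errors, and the contamination rate is an assumption.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 1.0, size=(720, 1))   # hourly forecast-vs-actual errors (hypothetical)
residuals[::97] += 6.0                            # inject a few consumption spikes

iso = IsolationForest(contamination=0.01, random_state=0).fit(residuals)
flags = iso.predict(residuals)                    # -1 = anomaly, +1 = normal
print(f"{int((flags == -1).sum())} of {len(flags)} points flagged")
```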
- Algorithms, Vol. 17, Pages 323: Computational Test for Conditional
Independence
Authors: Christian B. H. Thorjussen, Kristian Hovde Liland, Ingrid Måge, Lars Erik Solberg
First page: 323
Abstract: Conditional independence (CI) testing is fundamental in statistical analysis. For example, CI testing helps validate causal graphs and supports the analysis of longitudinal data with repeated measures in causal inference. CI testing is difficult, especially when it involves categorical variables conditioned on a mixture of continuous and categorical variables. Current parametric and non-parametric testing methods are designed for continuous variables and can quickly fall short in the categorical case. This paper presents a computational approach to CI testing suited for categorical data types, which we call computational conditional independence (CCI) testing. The test procedure is based on permutation and combines machine learning prediction algorithms with Monte Carlo cross-validation. We evaluated the approach through simulation studies and assessed its performance against alternative methods: the generalized covariance measure test, the kernel conditional independence test, and testing with multinomial regression. We find that the computational approach has utility over the alternative methods, achieving better control over type I error rates. We hope this work can expand the toolkit for CI testing for practitioners and researchers. (A plausible permutation-test sketch follows this entry.)
Citation: Algorithms
PubDate: 2024-07-24
DOI: 10.3390/a17080323
Issue No: Vol. 17, No. 8 (2024)
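The abstract describes a permutation test built from ML predictions and Monte Carlo cross-validation. Below is one plausible instantiation, not the authors' exact procedure: X is permuted within each stratum of the categorical conditioning variable Z, which preserves the X–Z relationship while breaking any residual X–Y dependence, and the p-value compares the observed Monte Carlo CV accuracy against the permutation null.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
Z = rng.integers(0, 3, n)              # categorical conditioning variable
X = (Z + rng.integers(0, 2, n)) % 3    # depends on Z only
Y = (Z + rng.integers(0, 2, n)) % 3    # depends on Z only, so X is CI of Y given Z

def mc_cv_accuracy(x, y, z, splits=5):
    """Monte Carlo cross-validation accuracy of predicting Y from (X, Z)."""
    feats = np.column_stack([x, z])
    accs = []
    for s in range(splits):
        Xtr, Xte, ytr, yte = train_test_split(feats, y, test_size=0.3, random_state=s)
        accs.append(RandomForestClassifier(50, random_state=s).fit(Xtr, ytr).score(Xte, yte))
    return float(np.mean(accs))

obs = mc_cv_accuracy(X, Y, Z)
null = []
for _ in range(30):                    # permutation null: shuffle X within each Z stratum
    Xp = X.copy()
    for z in np.unique(Z):
        idx = np.flatnonzero(Z == z)
        Xp[idx] = rng.permutation(Xp[idx])
    null.append(mc_cv_accuracy(Xp, Y, Z))

pval = (1 + sum(b >= obs for b in null)) / (1 + len(null))
print(f"observed accuracy {obs:.3f}, p-value {pval:.3f}")  # large p: fail to reject CI
```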
- Algorithms, Vol. 17, Pages 324: Trajectory Classification and Recognition
of Planar Mechanisms Based on ResNet18 Network
Authors: Jianping Wang, Youchao Wang, Boyan Chen, Xiaoyue Jia, Dexi Pu
First page: 324
Abstract: This study utilizes the ResNet18 network to classify and recognize trajectories of planar mechanisms. This research begins by deriving formulas for trajectory points in various typical planar mechanisms, and the resulting trajectory images are employed as samples for training and testing the network. The classification of trajectory images for both upright and inverted configurations of a planar four-bar linkage is investigated. Compared with AlexNet and VGG16, the ResNet18 model demonstrates superior classification accuracy during testing, coupled with reduced training time and memory consumption. Furthermore, the ResNet18 model is applied to classify trajectory images for six different planar mechanisms in both upright and inverted configurations as well as to identify whether the trajectory images belong to the upright or inverted configuration for each mechanism. The test results affirm the feasibility and effectiveness of the ResNet18 network in the classification and recognition of planar mechanism trajectories.
Citation: Algorithms
PubDate: 2024-07-25
DOI: 10.3390/a17080324
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 325: Label-Setting Algorithm for
Multi-Destination K Simple Shortest Paths Problem and Application
Authors: Sethu Vinayagam Udhayasekar, Karthik K. Srinivasan, Pramesh Kumar, Bhargava Rama Chilukuri
First page: 325
Abstract: The k shortest paths problem finds applications in multiple fields. Of particular interest in the transportation field is the variant of finding k simple shortest paths (KSSP), which has a higher complexity. This research presents a novel label-setting algorithm for the multi-destination KSSP problem in directed networks that obviates repeated applications of the algorithm to each destination (necessary in existing deviation-based algorithms), resulting in a significant computational speedup. It is shown that the proposed algorithm is exact and flexible enough to handle several variants of the problem by appropriately modifying the termination condition. Theoretically, it is also shown to be faster than state-of-the-art algorithms in sparse and dense networks whenever the number of labels created is sub-polynomial in network size. A heuristic method and optimized data structures are proposed to improve the algorithm’s scalability and worst-case performance. The computational results show that the proposed heuristic provides two to three orders of magnitude computational time speedups (29–1416 times across different networks) with negligible loss in solution quality (maximum average deviation of 0.167% from the optimal solution). Finally, a practical application of the proposed method is illustrated to determine the gravity of an edge (relative structural importance) in a network. (A single-destination baseline illustration follows this entry.)
Citation: Algorithms
PubDate: 2024-07-25
DOI: 10.3390/a17080325
Issue No: Vol. 17, No. 8 (2024)
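The paper's contribution is a multi-destination label-setting algorithm; as a baseline for comparison, here is the standard single-destination enumeration of k simple shortest paths (Yen-style) that NetworkX ships, which existing deviation-based approaches would re-run once per destination. The toy graph is an assumption.

```python
from itertools import islice
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("s", "a", 1), ("s", "b", 2), ("a", "b", 1),
    ("a", "t", 4), ("b", "t", 2), ("b", "c", 1), ("c", "t", 1),
])

# Yen-style generator: simple s-t paths in order of increasing total weight
for path in islice(nx.shortest_simple_paths(G, "s", "t", weight="weight"), 3):
    print(nx.path_weight(G, path, weight="weight"), path)
```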
- Algorithms, Vol. 17, Pages 326: Enhancing Indoor Positioning Accuracy with
WLAN and WSN: A QPSO Hybrid Algorithm with Surface Tessellation
Authors: Edgar Scavino, Mohd Amiruddin Abd Rahman, Zahid Farid, Sadique Ahmad, Muhammad Asim
First page: 326
Abstract: In large indoor environments, accurate positioning and tracking of people and autonomous equipment have become essential requirements. The application of increasingly automated moving transportation units in large indoor spaces demands precise knowledge of their positions, for both efficiency and safety reasons. Moreover, satellite-based Global Positioning System (GPS) signals are likely to be unusable in deep indoor spaces, and technologies like WiFi and Bluetooth are susceptible to signal noise and fading effects. For these reasons, a hybrid approach that employs at least two different signal typologies has proved more effective, resilient, robust, and accurate in determining localization in indoor environments. This paper proposes an improved hybrid technique that implements fingerprinting-based indoor positioning using Received Signal Strength (RSS) information from available Wireless Local Area Network (WLAN) access points and Wireless Sensor Network (WSN) technology. Six signals were recorded on a regular grid of anchor points covering the research surface. For optimization purposes, appropriate raw-signal weighting was applied in accordance with previous research on the same data. The novel approach in this work consists of performing a virtual tessellation of the considered indoor surface with a regular set of tiles encompassing the whole area. The optimization process focused on varying the size of the tiles as well as their position relative to the signal acquisition grid, with the goal of minimizing the average distance error based on tile identification accuracy. The optimization was conducted using a standard Quantum Particle Swarm Optimization (QPSO), while the position error estimate for each tile configuration was computed using a 3-layer Multilayer Perceptron (MLP) neural network. The experimental results showed a 16% reduction in the positioning error when a suitable tile configuration was calculated in the optimization process. Our final value of 0.611 m of location uncertainty shows a sensible improvement compared to our previous results.
Citation: Algorithms
PubDate: 2024-07-25
DOI: 10.3390/a17080326
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 327: A Quantum Approach for Exploring the
Numerical Results of the Heat Equation
Authors: Beimbet Daribayev, Aksultan Mukhanbet, Nurtugan Azatbekuly, Timur Imankulov
First page: 327
Abstract: This paper presents a quantum algorithm for solving the one-dimensional heat equation with Dirichlet boundary conditions. The algorithm utilizes discretization techniques and employs quantum gates to emulate the heat propagation operator. Central to the algorithm is the Trotter–Suzuki decomposition, enabling the simulation of the time evolution of the temperature distribution. The initial temperature distribution is encoded into quantum states, and the evolution of these states is driven by quantum gates tailored to mimic the heat propagation process. As per the literature, quantum algorithms exhibit an exponential computational speedup with increasing qubit counts, albeit facing challenges such as exponential growth in relative error and cost functions. This study addresses these challenges by assessing the potential impact of quantum simulations on heat conduction modeling. Simulation outcomes across various quantum devices, including simulators and real quantum computers, demonstrate a decrease in the relative error with an increasing number of qubits. Notably, simulators like the simulator_statevector exhibit lower relative errors compared to the ibmq_qasm_simulator and ibm_osaka. The proposed approach underscores the broader applicability of quantum computing in physical systems modeling, particularly in advancing heat conductivity analysis methods. Through its innovative approach, this study contributes to enhancing modeling accuracy and efficiency in heat conduction simulations across diverse domains.
Citation: Algorithms
PubDate: 2024-07-25
DOI: 10.3390/a17080327
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 328: High-Fidelity Steganography: A Covert
Parity Bit Model-Based Approach
Authors: Tamer Rabie, Mohammed Baziyad, Ibrahim Kamel
First page: 328
Abstract: The Discrete Cosine Transform (DCT) is fundamental to high-capacity data hiding schemes due to its ability to condense signals into a few significant coefficients while leaving many high-frequency coefficients relatively insignificant. These high-frequency coefficients are often replaced with secret data, allowing for the embedding of many secret bits while maintaining acceptable stego signal quality. However, because high-frequency components still affect the stego signal’s quality, preserving their structure is beneficial. This work introduces a method that maintains the structure of high-frequency DCT components during embedding through polynomial modeling. A scaled-down version of the secret signal is added to or subtracted from the polynomial-generated signal to minimize the error between the cover signal and the polynomial-generated signal. As a result, the stego image retains a structure similar to the original cover image. Experimental results demonstrate that this scheme improves the quality and security of the stego image compared to current methods. Notably, the technique’s robustness is confirmed by its resistance to detection by deep learning methods, as a Convolutional Neural Network (CNN) could not distinguish between the cover and stego images. (A toy polynomial-modeling sketch follows this entry.)
Citation: Algorithms
PubDate: 2024-07-27
DOI: 10.3390/a17080328
Issue No: Vol. 17, No. 8 (2024)
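A toy numerical sketch of the idea above, assuming SciPy is available: model the high-frequency tail of a DCT block with a low-order polynomial, then nudge the model by a scaled-down secret so the tail keeps its overall shape. The block content, coefficient split, polynomial degree, and scaling factor are illustrative assumptions, and the paper's parity-bit logic is not reproduced.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.random((8, 8)) * 255              # stand-in 8x8 cover block
C = dctn(block, norm="ortho").flatten()

hf = C[32:]                                   # treat the tail as "high frequency"
idx = np.arange(hf.size)
model = np.polyval(np.polyfit(idx, hf, deg=3), idx)   # polynomial structure model

bits = rng.integers(0, 2, hf.size)            # secret payload
alpha = 0.05 * (np.abs(hf).mean() + 1e-9)     # scaled-down embedding strength
C[32:] = model + alpha * (2 * bits - 1)       # embed around the modeled shape

stego = idctn(C.reshape(8, 8), norm="ortho")
print(f"mean absolute distortion: {np.abs(stego - block).mean():.3f}")
```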
- Algorithms, Vol. 17, Pages 329: Convolutional Neural Network-Based Digital
Diagnostic Tool for the Identification of Psychosomatic Illnesses
Authors: Marta Narigina, Andrejs Romanovs, Yuri Merkuryev
First page: 329
Abstract: This paper appraises the capabilities of convolutional neural network (CNN) models in detecting emotions from facial expressions, seeking to aid the diagnosis of psychosomatic illnesses, which is typically made in clinical setups. Using the FER-2013 dataset, two CNN models were designed to detect six emotions with 64% accuracy, although performance was not evenly distributed; the models were more effective at identifying “happy” and “surprise.” The assessment was performed using several performance metrics (accuracy, precision, recall, and F1-scores) and further validated in a simulated clinical environment as a practicality check. Despite showing promise for future use, this investigation highlights the need for extensive validation studies in clinical settings. This research underscores AI’s potential value as an adjunct to traditional diagnostic approaches, and it recommends that forthcoming studies consider broader datasets and multimodal integration. The study also underscores the importance of CNN models in developing psychosomatic diagnostics and promoting future development grounded in ethics and patient care.
Citation: Algorithms
PubDate: 2024-07-30
DOI: 10.3390/a17080329
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 330: Lester: Rotoscope Animation through Video
Object Segmentation and Tracking
Authors: Ruben Tous
First page: 330
Abstract: This article introduces Lester, a novel method to automatically synthesize retro-style 2D animations from videos. The method approaches the challenge mainly as an object segmentation and tracking problem. Video frames are processed with the Segment Anything Model (SAM), and the resulting masks are tracked through subsequent frames with DeAOT, a method of hierarchical propagation for semi-supervised video object segmentation. The geometry of the masks’ contours is simplified with the Douglas–Peucker algorithm (a minimal contour-simplification sketch follows this entry). Finally, facial traits, pixelation, and a basic rim-light effect can optionally be added. The results show that the method exhibits excellent temporal consistency and can correctly process videos with different poses and appearances, dynamic shots, partial shots, and diverse backgrounds. The proposed method provides a simpler and more deterministic approach than diffusion-model-based video-to-video translation pipelines, which suffer from temporal consistency problems and do not cope well with pixelated and schematic outputs. The method is also more feasible than techniques based on 3D human pose estimation, which require custom handcrafted 3D models and are very limited with respect to the type of scenes they can process.
Citation: Algorithms
PubDate: 2024-07-30
DOI: 10.3390/a17080330
Issue No: Vol. 17, No. 8 (2024)
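OpenCV's approxPolyDP implements the Douglas–Peucker simplification used for the mask contours; here is a self-contained sketch on a synthetic circular mask, where the epsilon fraction is an assumed tuning knob.

```python
import cv2
import numpy as np

mask = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(mask, (128, 128), 80, 255, -1)             # stand-in segmentation mask

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
eps = 0.01 * cv2.arcLength(contours[0], True)         # tolerance: 1% of perimeter
approx = cv2.approxPolyDP(contours[0], eps, True)     # Douglas-Peucker simplification
print(len(contours[0]), "contour points ->", len(approx), "after simplification")
```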
- Algorithms, Vol. 17, Pages 331: A Swarm Intelligence Solution for the
Multi-Vehicle Profitable Pickup and Delivery Problem
Authors: Abeer I. Alhujaylan, Manar I. Hosny
First page: 331
Abstract: Delivery apps are experiencing significant growth, requiring efficient algorithms to coordinate transportation and generate profits. One problem that captures the goals of delivery apps is the multi-vehicle profitable pickup and delivery problem (MVPPDP). In this paper, we propose eight new metaheuristics that improve the initial solutions for the MVPPDP, based on the well-known swarm intelligence algorithm Artificial Bee Colony (ABC): K-means-GRASP-ABC(C)S1, K-means-GRASP-ABC(C)S2, Modified K-means-GRASP-ABC(C)S1, Modified K-means-GRASP-ABC(C)S2, ACO-GRASP-ABC(C)S1, ACO-GRASP-ABC(C)S2, ABC(S1), and ABC(S2). All methods achieved superior performance in most instances in terms of processing time. For example, for 250 customers, the average times of the algorithms were 75.9, 72.86, 79.17, 73.85, 76.60, 66.29, 177.07, and 196.09 s, faster than the state-of-the-art methods, which took 300 s. Moreover, all proposed algorithms performed well on small-sized instances in terms of profit, achieving thirteen new best solutions and five solutions equal to the best-known solutions. However, the algorithms lag slightly behind on medium- and large-sized instances due to the greedy randomised strategy and the GRASP used in the scout bee phase. Our algorithms prioritise small population sizes and few iterations for rapid processing in daily m-commerce apps, although reducing iteration counts and population sizes lowers the likelihood of obtaining high solution quality.
Citation: Algorithms
PubDate: 2024-07-31
DOI: 10.3390/a17080331
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 332: Preptimize: Automation of Time Series Data
Preprocessing and Forecasting
Authors: Mehak Usmani, Zulfiqar Ali Memon, Adil Zulfiqar, Rizwan Qureshi
First page: 332
Abstract: Time series analysis is pivotal for business and financial decision making, especially with the increasing integration of the Internet of Things (IoT). However, leveraging time series data for forecasting requires extensive preprocessing to address challenges such as missing values, heteroscedasticity, seasonality, outliers, and noise. Different approaches are necessary for univariate and multivariate time series, Gaussian and non-Gaussian time series, and stationary versus non-stationary time series. Handling missing data alone is complex, demanding unique solutions for each type. Extracting statistical features, identifying data quality issues, and selecting appropriate cleaning and forecasting techniques require significant effort, time, and expertise. To streamline this process, we propose an automated strategy called Preptimize, which integrates statistical and machine learning techniques and recommends prediction model blueprints, suggesting the most suitable approaches for a given dataset as an initial step towards further analysis. Preptimize reads a sample from a large dataset and recommends the blueprint model based on optimization, making it easy to use even for non-experts. The results of various experiments indicated that Preptimize either outperformed or had comparable performance to benchmark models across multiple sectors, including stock prices, cryptocurrency, and power consumption prediction. This demonstrates the framework’s effectiveness in recommending suitable prediction models for various time series datasets, highlighting its broad applicability across different domains in time series forecasting.
Citation: Algorithms
PubDate: 2024-08-01
DOI: 10.3390/a17080332
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 333: Deep Learning-Based Boolean, Time Series,
Error Detection, and Predictive Analysis in Container Crane Operations
Authors: Amruta Awasthi, Lenka Krpalkova, Joseph Walsh
First page: 333
Abstract: Deep learning is crucial in marine logistics and in container crane error detection, diagnosis, and prediction. A novel deep learning technique using Long Short-Term Memory (LSTM) detected and anticipated errors in a system with imbalanced data. The LSTM model was trained on real operational error data from container cranes. The custom algorithm employs the Synthetic Minority Oversampling Technique (SMOTE) to balance the imbalanced operational error data (i.e., too few minority-class samples). The system was implemented in Python. Pearson, Spearman, and Kendall correlation matrices and covariance matrices are presented. The model’s training and validation loss is shown, and the remaining data are predicted. The test set (30% of the actual data) and the forecasted data had RMSEs of 0.065. A heatmap of a confusion matrix was created using Matplotlib and Seaborn. Additionally, the error outputs for the time series were projected for the next n seconds, with n input by the user. The evaluation yielded an accuracy of 0.996, a precision of 1.00, a recall of 0.500, and an F1 score of 0.667. Experiments demonstrated that the technique is capable of identifying critical elements, so future work will improve the model’s structure to forecast industrial big data errors. A key advantage is that the approach can handle imbalanced data, which is what most industries typically have, and the model can be further improved with additional data. (A minimal SMOTE-plus-LSTM sketch follows this entry.)
Citation: Algorithms
PubDate: 2024-08-01
DOI: 10.3390/a17080333
Issue No: Vol. 17, No. 8 (2024)
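A minimal sketch of the SMOTE-plus-LSTM combination on invented sensor windows (imbalanced-learn and TensorFlow/Keras are assumed dependencies; the window shape, layer sizes, and 5% error rate are placeholders). Note that SMOTE operates on 2D tables, so windows are flattened for oversampling and reshaped back for the LSTM.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
X = rng.random((1000, 20, 4)).astype("float32")   # 20 timesteps x 4 crane sensors
y = (rng.random(1000) < 0.05).astype(int)         # rare operational-error class

# SMOTE works on 2D tables: flatten windows, oversample, reshape back
Xb, yb = SMOTE(random_state=0).fit_resample(X.reshape(len(X), -1), y)
Xb = Xb.reshape(-1, 20, 4)

model = models.Sequential([
    layers.Input((20, 4)),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(Xb, yb, epochs=2, batch_size=64, validation_split=0.2, verbose=0)
print(model.evaluate(Xb, yb, verbose=0))          # [loss, accuracy] on balanced data
```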
- Algorithms, Vol. 17, Pages 334: An Efficient Optimization of the Monte
Carlo Tree Search Algorithm for Amazons
Authors: Lijun Zhang, Han Zou, Yungang Zhu
First page: 334
Abstract: Amazons is a computerized board game with complex positions that are highly challenging for humans. In this paper, we propose an efficient optimization of the Monte Carlo tree search (MCTS) algorithm for Amazons, fusing the ‘Move Groups’ strategy with the ‘Parallel Evaluation’ optimization strategy (MG-PEO). Specifically, we explain the high efficiency of the Move Groups strategy by defining a new criterion: the winning convergence distance. We also highlight the strategy’s potential issue of falling into a local optimum and propose that the Parallel Evaluation mechanism can compensate for this shortcoming. Moreover, we conducted rigorous performance analysis and experiments. The performance analysis indicates that the MCTS algorithm with the Move Groups strategy can improve the playing ability of the Amazons program by 20–30 times compared to the traditional MCTS algorithm, and the Parallel Evaluation optimization further enhances playing ability by 2–3 times. Experimental results show that the MCTS algorithm with the MG-PEO strategy achieves a 23% higher game-winning rate on average compared to the traditional MCTS algorithm. Additionally, the MG-PEO Amazons program proposed in this paper won first prize in the Amazons Competition at the 2023 China Collegiate Computer Games Championship & National Computer Games Tournament.
Citation: Algorithms
PubDate: 2024-08-01
DOI: 10.3390/a17080334
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 335: Color Standardization of Chemical Solution
Images Using Template-Based Histogram Matching in Deep Learning Regression
Authors: Patrycja Kwiek, Małgorzata Jakubowska
First page: 335
Abstract: Color distortion in an image presents a challenge for machine learning classification and regression when the input data consist of pictures. We therefore propose a new algorithm for the color standardization of photos, which forms the foundation for a deep neural network regression model. This approach utilizes a self-designed color template that was developed based on an initial series of studies and digital imaging. Using the equalized histograms of the R, G, B channels of the digital template and its photo, a color mapping strategy was computed. By applying this approach, the histograms were adjusted and the colors of photos taken with a smartphone were standardized (a minimal histogram-matching sketch follows this entry). The proposed algorithm was developed for a series of images in which the entire surface roughly maintains a uniform color and the differences in color between the photographs of individual objects are minor. The optimized approach was validated in a colorimetric procedure for the determination of vitamin C. The dataset for the deep neural network in the regression variant was formed from photos of samples under two separate lighting conditions. For the vitamin C concentration range from 0 to 87.72 µg·mL⁻¹, the RMSE for the test set ranged between 0.75 and 1.95 µg·mL⁻¹, compared to the non-standardized variant, where this indicator was at the level of 1.48–2.29 µg·mL⁻¹. The consistency of the predicted concentrations with the actual data, expressed as R², ranged between 0.9956 and 0.9999 for each of the standardized variants. This approach allows for the removal of light reflections on the shiny surfaces of solutions, a common problem in liquid samples. The color-matching algorithm is general-purpose, and its scope of application is not limited to this use case.
Citation: Algorithms
PubDate: 2024-08-01
DOI: 10.3390/a17080335
Issue No: Vol. 17, No. 8 (2024)
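The per-channel histogram-matching step can be sketched with scikit-image's match_histograms, used here as an assumed stand-in; the authors compute their own mapping from the equalized histograms of the template and its photo.

```python
import numpy as np
from skimage import exposure

rng = np.random.default_rng(0)
template = rng.integers(0, 256, (128, 128, 3), dtype=np.uint8)    # digital color template
photo = np.clip(template.astype(int) + rng.integers(-40, 10, template.shape),
                0, 255).astype(np.uint8)                          # photo with a color cast

# Pull the photo's per-channel R, G, B distributions onto the template's
matched = exposure.match_histograms(photo, template, channel_axis=-1)
print(matched.shape, float(np.abs(matched - template).mean()))
```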
- Algorithms, Vol. 17, Pages 336: Pitfalls in Metaheuristics Solving
Stoichiometric-Based Optimization Models for Metabolic Networks
Authors: Mónica Fabiola Briones-Báez, Luciano Aguilera-Vázquez, Nelson Rangel-Valdez, Cristal Zuñiga, Ana Lidia Martínez-Salazar, Claudia Gomez-Santillan
First page: 336
Abstract: Flux Balance Analysis (FBA) is a constraint-based method that is commonly used to guide metabolites through restricting pathways that often involve conditions such as anaplerotic cycles like the Calvin cycle, reversible or irreversible reactions, and nodes where metabolic pathways branch. The method can identify the best conditions for one course but fails when dealing with the pathways of multiple metabolites of interest. Recent studies on metabolism consider it more natural to optimize several metabolites simultaneously rather than just one; moreover, they point to metaheuristics as an attractive alternative that extends FBA to tackle multiple objectives. However, the literature also warns that the use of such techniques must not be indiscriminate; instead, it must be subject to careful fine-tuning and selection processes to achieve the desired results. This work analyses the impact of the NSGA-II and MOEA/D algorithms and several novel optimization models on the quality of the pathways built. It examines two case studies, pigment biosynthesis and a node in the glutamate metabolism of the microalga Chlorella vulgaris, under three culture conditions (autotrophic, heterotrophic, and mixotrophic), while optimizing three metabolic intermediaries as independent objective functions simultaneously. The results show varying performance between NSGA-II and MOEA/D, demonstrating that the selection of an optimization model can greatly affect predicted phenotypes.
Citation: Algorithms
PubDate: 2024-08-01
DOI: 10.3390/a17080336
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 337: Hyperspectral Python: HypPy
Authors: Wim Bakker, Frank van Ruitenbeek, Harald van der Werff, Christoph Hecker, Arjan Dijkstra, Freek van der Meer
First page: 337
Abstract: This paper describes the design, implementation, and usage of a Python package called Hyperspectral Python (HypPy). Proprietary software for processing hyperspectral images is expensive, and tools developed with these packages cannot be freely distributed. The idea behind HypPy is to make it possible to process hyperspectral images using free and open-source software. HypPy was developed in Python and relies on the array-processing capabilities of packages like NumPy and SciPy. HypPy was designed with practical imaging spectrometry in mind and implements a number of novel ideas. To name a few, HypPy has BandMath and SpectralMath tools for processing images and spectra using Python statements, can process spectral libraries as if they were images, and can address bands by wavelength rather than band number. We expect HypPy to be beneficial for research, education, and projects using hyperspectral data because it is flexible and versatile.
Citation: Algorithms
PubDate: 2024-08-01
DOI: 10.3390/a17080337
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 338: Design of Multichannel Spectrum
Intelligence Systems Using Approximate Discrete Fourier Transform
Algorithm for Antenna Array-Based Spectrum Perception Applications
Authors: Arjuna Madanayake, Keththura Lawrance, Bopage Umesha Kumarasiri, Sivakumar Sivasankar, Thushara Gunaratne, Chamira U. S. Edussooriya, Renato J. Cintra
First page: 338
Abstract: The radio spectrum is a scarce and extremely valuable resource that demands careful real-time monitoring and dynamic resource allocation. Dynamic spectrum access (DSA) is a new paradigm for managing the radio spectrum, which requires AI/ML-driven algorithms for optimum performance under rapidly changing channel conditions and possible cyber-attacks in the electromagnetic domain. Fast sensing across multiple directions using array processors, with subsequent AI/ML-based algorithms for the sensing and perception of waveforms measured from the environment, is critical for providing decision support in DSA. As part of directional and wideband spectrum perception, the ability to finely channelize wideband inputs using efficient Fourier analysis is much needed. However, a fine-grain fast Fourier transform (FFT) across a large number of directions is computationally intensive and leads to a high chip area and power consumption. We address this issue by exploiting the recently proposed approximate discrete Fourier transform (ADFT), which has its own sparse factorization for real-time implementation at low complexity and power consumption. The ADFT is used to create a wideband multibeam RF digital beamformer and a temporal spectrum-based attention unit that monitors 32 discrete directions across 32 sub-bands in real time using a multiplierless algorithm of low computational complexity. The output of this spectral attention unit is applied as a decision variable to an intelligent receiver that adapts its center frequency and frequency resolution via FFT channelizers custom-built for real-time monitoring at high resolution. This two-step process allows the fine-grain FFT to be applied only to the directions and bands of interest, as determined by the ADFT-based low-complexity 2D spacetime attention unit. The fine-grain FFT provides a spectral signature that can find future use in neural network engines for modulation recognition, IoT device identification, and RFI identification. Beamforming and spectral channelization algorithms, a digital computer architecture, and early prototypes using a 32-element fully digital multichannel receiver and a field-programmable gate array (FPGA)-based high-speed software-defined radio (SDR) are presented. (A minimal exact-DFT beamforming sketch follows this entry.)
Citation: Algorithms
PubDate: 2024-08-01
DOI: 10.3390/a17080338
Issue No: Vol. 17, No. 8 (2024)
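The multibeam-via-Fourier-transform idea can be seen in miniature with an exact DFT: one FFT across a 32-element uniform linear array forms 32 simultaneous orthogonal beams, and a plane wave lands in the beam bin nearest M·(d/λ)·sin θ. The paper replaces this exact DFT with the multiplierless ADFT; the array geometry below is an assumption.

```python
import numpy as np

M = 32                                    # array elements -> 32 beams
n = np.arange(M)
d_over_lambda = 0.5                       # half-wavelength element spacing
theta = np.deg2rad(20.0)                  # plane-wave arrival angle

snapshot = np.exp(2j * np.pi * d_over_lambda * n * np.sin(theta))
beams = np.fft.fft(snapshot) / M          # spatial DFT = multibeam beamformer
print("strongest beam bin:", int(np.abs(beams).argmax()))
# expected near M * d_over_lambda * sin(theta), i.e. about 5.5 -> bin 5 or 6
```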
- Algorithms, Vol. 17, Pages 339: Complete Subhedge Projection for Stepwise
Hedge Automata
Authors: Antonio Al Serhali, Joachim Niehren
First page: 339
Abstract: We demonstrate how to evaluate stepwise hedge automata (SHAs) with subhedge projection while completely projecting irrelevant subhedges. Since this requires passing finite state information top-down, we introduce the notion of downward stepwise hedge automata. We use them to define in-memory and streaming evaluators with complete subhedge projection for SHAs. We then tune the evaluators so that they can decide on membership at the earliest time point. We apply our algorithms to the problem of answering regular XPath queries on XML streams. Our experiments show that complete subhedge projection of SHAs can indeed speed up earliest query answering on XML streams so that it becomes competitive with the best existing streaming tools for XPath queries.
Citation: Algorithms
PubDate: 2024-08-02
DOI: 10.3390/a17080339
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 340: Hierarchical Optimization Framework for
Layout Design of Star–Tree Gas-Gathering Pipeline Network in
Discrete Spaces
Authors: Yu Lin, Yanhua Qiu, Hao Chen, Jun Zhou, Jiayi He, Penghua Du, Dafan Liu
First page: 340
Abstract: The gas-gathering pipeline network is a critical infrastructure for collecting and conveying natural gas from the extraction site to the processing facility. This paper introduces a design optimization model for a star–tree gas-gathering pipeline network within a discrete space, aimed at determining the optimal configuration of this infrastructure. The objective is to reduce the investment required to build the network. Key decision variables include the locations of stations, the plant location, the connections between wells and stations, and the interconnections between stations. Several equality and inequality constraints are formulated, primarily addressing the affiliation between wells and stations, the transmission radius, and the capacity of the stations. The design of a star–tree pipeline network is a complex, NP-hard combinatorial optimization problem. To tackle this challenge, a hierarchical optimization framework coupled with an improved genetic algorithm (IGA) is proposed. The efficacy of the genetic algorithm is validated through testing and comparison with other traditional algorithms. Subsequently, the optimization model and solution methodology are applied to the layout design of a pipeline network. The findings reveal that the optimized network configuration reduces investment costs by 16% compared to the original design. Furthermore, comparing optimal layouts across topologies shows that the investment needed for the star–star topology is 4% higher than that needed for the star–tree topology.
Citation: Algorithms
PubDate: 2024-08-05
DOI: 10.3390/a17080340
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 341: Fast Minimum Error Entropy for Linear
Regression
Authors: Qiang Li, Xiao Liao, Wei Cui, Ying Wang, Hui Cao, Qingshu Guan
First page: 341
Abstract: The minimum error entropy (MEE) criterion finds extensive utility across diverse applications, particularly in contexts characterized by non-Gaussian noise. However, its computational demands are notable, and are primarily attributable to the double summation operation involved in calculating the probability density function (PDF) of the error. To address this, our study introduces a novel approach, termed the fast minimum error entropy (FMEE) algorithm, aimed at mitigating computational complexity through the utilization of polynomial expansions of the error PDF. Initially, the PDF approximation of a random variable is derived via the Gram–Charlier expansion. Subsequently, we proceed to ascertain and streamline the entropy of the random variable. Following this, the error entropy inherent to the linear regression model is delineated and expressed as a function of the regression coefficient vector. Lastly, leveraging the gradient descent algorithm, we compute the regression coefficient vector corresponding to the minimum error entropy. Theoretical scrutiny reveals that the time complexity of FMEE stands at O(n), in stark contrast to the O(n²) complexity associated with MEE. Experimentally, our findings underscore the remarkable efficiency gains afforded by FMEE, with time consumption registering less than 1‰ of that observed with MEE. Encouragingly, this efficiency leap is achieved without compromising accuracy, as evidenced by the negligible differences observed between the accuracies of FMEE and MEE. Furthermore, comprehensive regression experiments on real-world electric datasets in northwest China demonstrate that our FMEE outperforms baseline methods by a clear margin. (A baseline O(n²) MEE sketch follows this entry.)
Citation: Algorithms
PubDate: 2024-08-06
DOI: 10.3390/a17080341
Issue No: Vol. 17, No. 8 (2024)
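For context on the O(n²) bottleneck FMEE removes, here is the standard baseline MEE criterion for linear regression written as gradient ascent on the quadratic information potential, whose pairwise kernel matrix is the double summation in question. This is the baseline, not the paper's FMEE; the kernel width and learning rate are assumptions that may need tuning, and MEE fixes the error only up to a shift, so an intercept must be handled separately.

```python
import numpy as np

def mee_fit(X, y, sigma=1.0, lr=2.0, iters=500):
    """Linear regression under the MEE criterion via gradient ascent on the
    information potential V(e) = (1/n^2) sum_ij G_sigma(e_i - e_j).
    Building the pairwise matrix makes every step O(n^2)."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        e = y - X @ w
        d = e[:, None] - e[None, :]              # all error pairs
        k = np.exp(-d**2 / (2 * sigma**2))       # Gaussian kernel on pairs
        grad_V = 2.0 / (n**2 * sigma**2) * ((k * d).sum(axis=1) @ X)
        w += lr * grad_V                         # ascend V = descend error entropy
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_t(df=2, size=200)  # heavy-tailed noise
print(mee_fit(X, y))
```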
- Algorithms, Vol. 17, Pages 342: Optimization of Gene Selection for Cancer
Classification in High-Dimensional Data Using an Improved African Vultures
Algorithm
Authors: Mona G. Gafar, Amr A. Abohany, Ahmed E. Elkhouli, Amr A. Abd El-Mageed
First page: 342
Abstract: This study presents a novel method, termed RBAVO-DE (Relief Binary African Vultures Optimization based on Differential Evolution), aimed at addressing the Gene Selection (GS) challenge in high-dimensional RNA-Seq data, specifically the rnaseqv2 IlluminaHiSeq rnaseqv2 unc edu Level 3 RSEM genes normalized dataset, which contains over 20,000 genes. RNA Sequencing (RNA-Seq) is a transformative approach that enables the comprehensive quantification and characterization of gene expression, surpassing the capabilities of microarray technologies by offering a more detailed view of RNA-Seq gene expression data. Quantitative gene expression analysis can be pivotal in identifying genes that differentiate normal from malignant tissues. However, managing these high-dimensional dense matrix data presents significant challenges. The RBAVO-DE algorithm is designed to meticulously select the most informative genes from a dataset comprising more than 20,000 genes and to assess their relevance across twenty-two cancer datasets. To determine the effectiveness of the selected genes, this study employs the Support Vector Machine (SVM) and k-Nearest Neighbor (k-NN) classifiers. Compared to binary versions of widely recognized meta-heuristic algorithms, RBAVO-DE demonstrates superior performance. According to Wilcoxon’s rank-sum test, with a 5% significance level, RBAVO-DE achieves up to 100% classification accuracy and reduces the feature size by up to 98% in most of the twenty-two cancer datasets examined. This advancement underscores the potential of RBAVO-DE to enhance the precision of gene selection for cancer research, thereby facilitating more accurate and efficient identification of key genetic markers.
Citation: Algorithms
PubDate: 2024-08-06
DOI: 10.3390/a17080342
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 343: A Review on Reinforcement Learning in
Production Scheduling: An Inferential Perspective
Authors: Vladimir Modrak, Ranjitharamasamy Sudhakarapandian, Arunmozhi Balamurugan, Zuzana Soltysova
First page: 343
Abstract: In this study, a systematic review of production scheduling based on reinforcement learning (RL) techniques, relying especially on bibliometric analysis, has been carried out. The aim of this work is, among other things, to point out the growing interest in this domain and to outline the influence of RL, as a type of machine learning, on production scheduling. To achieve this, the paper explores production scheduling using RL by investigating the descriptive metadata of pertinent publications contained in the Scopus, ScienceDirect, and Google Scholar databases. The study covers a wide spectrum of publications spanning the years 1996 to 2024. The findings of this study can serve as new insights for future research endeavors in the realm of production scheduling using RL techniques.
Citation: Algorithms
PubDate: 2024-08-07
DOI: 10.3390/a17080343
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 344: Inversion-Based Deblending in Common
Midpoint Domain Using Time Domain High-Resolution Radon
Authors: Kai Zhuang, Daniel Trad, Amr Ibrahim
First page: 344
Abstract: We implement an inversion-based deblending method in the common midpoint (CMP) domain as an alternative to standard common receiver gather (CRG) domain methods. The primary advantage of deblending in the CMP domain is that reflections from dipping layers are centred around zero offset. As a result, CMP gathers exhibit a simpler structure than CRG gathers, where these reflections are apex-shifted. Consequently, we can employ a zero-offset hyperbolic Radon operator to process CMP gathers; this operator is a computationally more efficient alternative to the apex-shifted hyperbolic Radon required for processing CRG gathers. Sparse transforms, such as the Radon transform, can stack reflections and produce sparse models capable of separating blended sources. We utilize the Radon operator to develop an inversion-based deblending framework that incorporates a sparse model constraint. The inclusion of a sparsity constraint in the inversion process enhances the focusing of the transform and improves data recovery. Inversion-based deblending enables us to account for all observed data by incorporating the blending operator into the cost function. Our synthetic and field data examples demonstrate that inversion-based deblending in the CMP domain can effectively separate blended sources. (A minimal zero-offset hyperbolic Radon stacking sketch follows this entry.)
Citation: Algorithms
PubDate: 2024-08-07
DOI: 10.3390/a17080344
Issue No: Vol. 17, No. 8 (2024)
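A minimal sketch of the zero-offset hyperbolic Radon idea, assuming the adjoint (stacking) form only: for each (tau, v) pair, the data are summed along the hyperbola t(x) = sqrt(tau² + (x/v)²), so a hyperbolic event focuses near its true intercept time and velocity. The grid sizes and the single synthetic event are assumptions, and the paper's sparse inversion on top of this operator is not reproduced.

```python
import numpy as np

def hyperbolic_radon_adjoint(d, t, x, taus, vels):
    """Adjoint zero-offset hyperbolic Radon: stack data d(t, x) along
    hyperbolae t(x) = sqrt(tau^2 + (x/v)^2) for each (tau, v) pair."""
    nt, dt = len(t), t[1] - t[0]
    m = np.zeros((len(taus), len(vels)))
    for i, tau in enumerate(taus):
        for j, v in enumerate(vels):
            tt = np.sqrt(tau**2 + (x / v) ** 2)
            k = np.round((tt - t[0]) / dt).astype(int)
            ok = (k >= 0) & (k < nt)
            m[i, j] = d[k[ok], np.flatnonzero(ok)].sum()
    return m

# Tiny demo: one hyperbolic event focuses near its true (tau, v) in the model
t = np.arange(0, 1.0, 0.004)
x = np.linspace(0, 1000, 48)
d = np.zeros((len(t), len(x)))
tt = np.sqrt(0.4**2 + (x / 2000.0) ** 2)
d[np.round(tt / 0.004).astype(int), np.arange(len(x))] = 1.0

m = hyperbolic_radon_adjoint(d, t, x,
                             np.arange(0.2, 0.6, 0.004), np.arange(1500, 2500, 50.0))
print(np.unravel_index(m.argmax(), m.shape))   # near (50, 10) -> tau~0.4 s, v~2000
```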
- Algorithms, Vol. 17, Pages 345: Precedence Table Construction Algorithm
for CFGs Regardless of Being OPGs
Authors: Leonardo Lizcano, Eduardo Angulo, José Márquez
First page: 345
Abstract: Operator precedence grammars (OPGs) are context-free grammars (CFGs) characterized by the absence of two adjacent non-terminal symbols in the body (right-hand side) of each production. Operator precedence languages (OPLs) are deterministic and context-free. Three possible precedence relations between pairs of terminal symbols are established for these languages. Many CFGs are not OPGs, because operator precedence cannot be applied to grammars that do not comply with the basic rule. To solve this problem, we have conducted a thorough redefinition of the Left and Right sets of terminals that are the basis for calculating the precedence relations, and we have defined a new Leftmost set. The algorithms for calculating them are also described in detail. Our work’s most significant contribution is that we establish precedence relationships between terminals by overcoming the basic rule of not having two consecutive non-terminals, using an algorithm that allows building the operator precedence table for a CFG regardless of whether it is an OPG. The paper shows the complexities of the proposed algorithms and possible exceptions to the proposed rules. We present examples using an OPG and two non-OPGs to illustrate the operation of the proposed algorithms. With these, the operator precedence table is built, and bottom-up parsing is carried out correctly. (A sketch of the classic Left/Right set computation follows this entry.)
Citation: Algorithms
PubDate: 2024-08-07
DOI: 10.3390/a17080345
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 346: EEG Channel Selection for Stroke Patient
Rehabilitation Using BAT Optimizer
Authors: Mohammed Azmi Al-Betar, Zaid Abdi Alkareem Alyasseri, Noor Kamal Al-Qazzaz, Sharif Naser Makhadmeh, Nabeel Salih Ali, Christoph Guger
First page: 346
Abstract: Stroke, a major cause of mortality worldwide, disrupts cerebral blood flow, leading to severe brain damage. Hemiplegia, a common consequence, results in the loss of motor function on one side of the body. Many stroke survivors face long-term motor impairments and require extensive rehabilitation. Electroencephalograms (EEGs) provide a non-invasive method to monitor brain activity and have been used in brain–computer interfaces (BCIs) to assist in rehabilitation. Motor imagery (MI) tasks, detected through EEG, are pivotal for developing BCIs that help patients regain motor function. However, interpreting EEG signals for MI tasks remains challenging due to their complexity and low signal-to-noise ratio. The main aim of this study is to optimize channel selection in EEG-based BCIs specifically for stroke rehabilitation, since determining the most informative EEG channels is crucial for capturing the neural signals related to motor impairments in stroke patients. In this paper, a binary bat algorithm (BA)-based optimization method is proposed to select the channels most relevant to the unique neurophysiological changes in stroke patients. This approach enhances BCI performance by improving classification accuracy and reducing data dimensionality. We use time–entropy–frequency (TEF) attributes, processed through automated independent component analysis with wavelet transform (AICA-WT) denoising, to enhance signal clarity. The selected channels and features are validated with a k-nearest neighbor (KNN) classifier on public BCI datasets, demonstrating improved classification of MI tasks and the potential for better rehabilitation outcomes.
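A drastically simplified sketch of what binary bat-style channel selection with a KNN fitness looks like, on synthetic data and without the loudness/pulse-rate schedule of the full algorithm (all parameters are illustrative, not the paper’s tuned settings):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=200, n_features=22, n_informative=6,
                           random_state=0)          # 22 synthetic "channels"

def fitness(mask):
    """KNN classification accuracy on the selected channel subset."""
    if mask.sum() == 0:
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X[:, mask.astype(bool)], y, cv=3).mean()

n_bats, n_feat, n_iter = 10, X.shape[1], 30
pos = rng.integers(0, 2, (n_bats, n_feat))          # binary channel masks
vel = np.zeros((n_bats, n_feat))
best = pos[np.argmax([fitness(p) for p in pos])].copy()

for _ in range(n_iter):
    freq = rng.uniform(0, 2, (n_bats, 1))           # pulse frequencies
    vel += (pos - best) * freq
    prob = 1.0 / (1.0 + np.exp(-vel))               # sigmoid transfer function
    pos = (rng.random((n_bats, n_feat)) < prob).astype(int)
    fit = np.array([fitness(p) for p in pos])
    if fit.max() > fitness(best):
        best = pos[fit.argmax()].copy()

print("selected channels:", np.flatnonzero(best),
      "accuracy:", round(fitness(best), 3))
```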
Citation: Algorithms
PubDate: 2024-08-08
DOI: 10.3390/a17080346
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 347: Classification and Regression of Pinhole
Corrosions on Pipelines Based on Magnetic Flux Leakage Signals Using
Convolutional Neural Networks
Authors: Yufei Shen, Wenxing Zhou
First page: 347
Abstract: Pinhole corrosions on oil and gas pipelines are difficult to detect and size and, therefore, pose a significant challenge to pipeline integrity management practice. This study develops two convolutional neural network (CNN) models to identify pinholes and to predict the sizes and locations of pinhole corrosions from the magnetic flux leakage signals generated using magneto-static finite element analysis. Extensive three-dimensional parametric finite element analysis cases are generated to train and validate the two CNN models. Additionally, a comprehensive algorithm analysis evaluates the model performance, providing insights into the practical application of CNN models in pipeline integrity management. The proposed classification CNN model is shown to be highly accurate in classifying pinhole and pinhole-in-general-corrosion defects. The proposed regression CNN model is highly accurate in predicting the location of a pinhole and achieves reasonably high accuracy in estimating its depth and diameter, even in the presence of measurement noise. This study indicates the effectiveness of employing deep learning algorithms to enhance the integrity management practice of corroded pipelines.
Citation: Algorithms
PubDate: 2024-08-08
DOI: 10.3390/a17080347
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 348: The Parallel Machine Scheduling Problem
with Different Speeds and Release Times in the Ore Hauling Operation
Authors: Luis Tarazona-Torres, Ciro Amaya, Alvaro Paipilla, Camilo Gomez, David Alvarez-Martinez
First page: 348
Abstract: Ore hauling operations are crucial within the mining industry as they supply essential minerals to production plants. Conducted with sophisticated and high-cost operational equipment, these operations demand meticulous planning to ensure that production targets are met while optimizing equipment utilization. In this study, we present an algorithm to determine the minimum amount of hauling equipment required to meet the ore transport target. To achieve this, a mathematical model has been developed that treats the operation as a parallel machine scheduling problem with different speeds and release times, minimizing both the completion time and the costs associated with equipment use. Additionally, another algorithm was developed to allow the tactical evaluation of these two variables. These procedures and the model contribute significantly to decision-making by providing a systematic approach to resource allocation, ensuring that loading and hauling equipment are utilized to their fullest potential while adhering to budgetary constraints and operational schedules. This approach optimizes resource usage and improves operational efficiency, facilitating continuous improvement in mining operations.
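As a simple baseline for the scheduling model described above, jobs released over time can be greedily assigned to the machine (here, a hauling unit) that finishes them earliest given machine-specific speeds; a heuristic illustration only, not the paper’s exact formulation:

```python
def greedy_schedule(jobs, speeds):
    """Earliest-completion-time list scheduling on uniform parallel machines.

    jobs:   list of (release_time, work) tuples
    speeds: machine speeds (work units per hour); heuristic baseline only.
    """
    free_at = [0.0] * len(speeds)        # time at which each machine is idle
    plan = [[] for _ in speeds]
    for release, work in sorted(jobs):   # dispatch jobs by release time
        finish = [max(free_at[m], release) + work / s
                  for m, s in enumerate(speeds)]
        m = min(range(len(speeds)), key=finish.__getitem__)
        free_at[m] = finish[m]
        plan[m].append((release, work))
    return max(free_at), plan            # makespan and per-machine assignment

makespan, plan = greedy_schedule(jobs=[(0, 10), (2, 4), (2, 8), (5, 6)],
                                 speeds=[1.0, 1.5])
print(makespan, plan)
```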
Citation: Algorithms
PubDate: 2024-08-08
DOI: 10.3390/a17080348
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 349: Editorial for the Special Issue on
“Recent Advances in Nonsmooth Optimization and Analysis”
Authors: Sorin-Mihai Grad
First page: 349
Abstract: In recent years, nonsmooth optimization and analysis have seen remarkable advancements, significantly impacting various scientific and engineering disciplines [...]
Citation: Algorithms
PubDate: 2024-08-09
DOI: 10.3390/a17080349
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 350: A Non-Smooth Numerical Optimization
Approach to the Three-Point Dubins Problem (3PDP)
Authors: Mattia Piazza, Enrico Bertolazzi, Marco Frego
First page: 350
Abstract: This paper introduces a novel non-smooth numerical optimization approach for solving the Three-Point Dubins Problem (3PDP). The 3PDP requires determining the shortest path of bounded curvature that connects given initial and final positions and orientations while traversing a specified waypoint. The inherent discontinuity of this problem precludes the use of conventional optimization algorithms. We propose two innovative methods specifically designed to address this challenge. These methods not only effectively solve the 3PDP but also offer significant computational efficiency improvements over existing state-of-the-art techniques. Our contributions include the formulation of these new algorithms, a detailed analysis of their theoretical foundations, and their implementation. Additionally, we provide a thorough comparison with current leading approaches, demonstrating the superior performance of our methods in terms of accuracy and computational speed. This work advances the field of path planning in robotics, providing practical solutions for applications requiring efficient and precise motion planning.
Citation: Algorithms
PubDate: 2024-08-10
DOI: 10.3390/a17080350
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 351: Multi-Objective Resource-Constrained
Scheduling in Large and Repetitive Construction Projects
Authors: Vasiliki Lazari, Athanasios Chassiakos, Stylianos Karatzas
First page: 351
Abstract: Effective resource management constitutes a cornerstone of construction project success. It is a challenging combinatorial optimization problem with multiple, contradictory objectives, whose complexity rises disproportionately with project size and special characteristics (e.g., repetitive projects). While relevant work exists, there is still a need for thorough modeling of the practical implications of non-optimal decisions. This study proposes a multi-objective model that realistically represents the actual loss from not meeting the resource utilization priorities and constraints of a given project, including parameters that assess the cost of exceeding the daily resource availability, the cost of moving resources in and out of the worksite, and the cost of delaying project completion. Optimization is performed using Genetic Algorithms, with problem setups organized in a spreadsheet format for enhanced readability, and the solving is conducted via commercial software. A case study consisting of 16 repetitive projects, totaling 160 activities, tested under different objective and constraint scenarios, is used to evaluate the algorithm’s effectiveness under different project management priorities. The main conclusions emphasize the importance of conducting multiple analyses for effective decision-making, the increasing necessity for formal optimization as a project’s size and complexity grow, and the significant support that formal optimization provides in customizing resource allocation decisions in construction projects.
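The three penalty terms named in the abstract can be scalarized into a single cost, which is roughly what a GA fitness function for such a model evaluates; a sketch with purely illustrative coefficients:

```python
def schedule_cost(daily_usage, availability, project_end, deadline,
                  c_exceed=100.0, c_move=50.0, c_delay=500.0):
    """Aggregate penalty cost of a candidate schedule: a sketch of the kind
    of loss the paper describes (all coefficients are illustrative).

    daily_usage:  resource units used on each day
    availability: daily resource availability limit
    """
    # Cost of exceeding the daily resource availability.
    exceed = sum(max(u - availability, 0) for u in daily_usage)
    # Cost of moving resources in/out: paid whenever usage changes day to day.
    moves = sum(abs(a - b) for a, b in zip(daily_usage, daily_usage[1:]))
    # Cost of delaying project completion past the deadline.
    delay = max(project_end - deadline, 0)
    return c_exceed * exceed + c_move * moves + c_delay * delay

# Such a scalarized cost can serve directly as the (negated) GA fitness.
print(schedule_cost([3, 5, 6, 4], availability=5, project_end=11, deadline=10))
```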
Citation: Algorithms
PubDate: 2024-08-10
DOI: 10.3390/a17080351
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 352: In-Depth Analysis of GAF-Net: Comparative
Fusion Approaches in Video-Based Person Re-Identification
Authors: Moncef Boujou, Rabah Iguernaissi, Lionel Nicod, Djamal Merad, Séverine Dubuisson
First page: 352
Abstract: This study provides an in-depth analysis of GAF-Net, a novel model for video-based person re-identification (Re-ID) that matches individuals across different video sequences. GAF-Net combines appearance-based features with gait-based features derived from skeletal data, offering a new approach that diverges from traditional silhouette-based methods. We thoroughly examine each module of GAF-Net and explore various fusion methods at both the score and feature levels, extending beyond simple concatenation. Comprehensive evaluations on the iLIDS-VID and MARS datasets demonstrate GAF-Net’s effectiveness across scenarios. GAF-Net achieves a state-of-the-art 93.2% rank-1 accuracy on iLIDS-VID’s long sequences, while the MARS results (86.09% mAP, 89.78% rank-1) reveal challenges with shorter, more variable sequences in complex real-world settings. We demonstrate that integrating skeleton-based gait features consistently improves Re-ID performance, particularly with longer, more informative sequences. This research provides crucial insights into multi-modal feature integration in Re-ID tasks, laying a foundation for the advancement of multi-modal biometric systems for diverse computer vision applications.
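To illustrate the two fusion levels compared in the study, a minimal sketch (the weights and normalization choices are illustrative, not GAF-Net’s actual configuration):

```python
import numpy as np

def minmax(s):
    """Rescale matching scores to [0, 1] so modalities are comparable."""
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def score_fusion(appearance_scores, gait_scores, w=0.7):
    """Weighted-sum score-level fusion of two modalities (w is illustrative)."""
    return w * minmax(appearance_scores) + (1 - w) * minmax(gait_scores)

def feature_fusion(appearance_feat, gait_feat):
    """Feature-level fusion: concatenate the embedding vectors instead."""
    return np.concatenate([appearance_feat, gait_feat], axis=-1)

fused = score_fusion([0.2, 0.9, 0.4], [0.5, 0.8, 0.1])
print("fused gallery ranking:", np.argsort(-fused))
```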
Citation: Algorithms
PubDate: 2024-08-11
DOI: 10.3390/a17080352
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 353: An Efficient AdaBoost Algorithm for
Enhancing Skin Cancer Detection and Classification
Authors: Seham Gamil, Feng Zeng, Moath Alrifaey, Muhammad Asim, Naveed Ahmad
First page: 353
Abstract: Skin cancer is a prevalent and perilous disease whose diagnosis is costly, time-consuming, and dependent on scarce medical expertise. To tackle these issues, researchers have explored artificial intelligence (AI) tools, particularly shallow and deep machine learning techniques, which employ computer algorithms and deep neural networks to identify and categorize skin cancer. However, accurately distinguishing skin cancer from benign tumors remains challenging and requires the extraction of pertinent features from image data. This study addresses that challenge by employing Principal Component Analysis (PCA), a dimensionality-reduction approach, to extract relevant features from skin images. To improve classification accuracy, the AdaBoost algorithm is utilized, which amalgamates weak classification models into a robust classifier. The novelty of this research lies in integrating PCA, AdaBoost, and EfficientNet B0 into a robust and accurate system for skin cancer classification, with the advantages of significantly reducing costs, minimizing reliance on medical experts, and expediting the diagnostic process. The developed model achieved an accuracy of 93.00% on the DermIS dataset with excellent precision, recall, and F1-score values, confirming its ability to correctly classify skin lesions as malignant or benign. It also achieved an accuracy of 91.00% on the ISIC dataset, which is widely recognized for its comprehensive collection of annotated dermoscopic images and provides a robust foundation for training and validation. These advancements have the potential to significantly enhance the efficiency and accuracy of skin cancer diagnosis and classification, ultimately reducing costs and improving outcomes for both patients and healthcare providers.
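A compact sketch of the PCA-plus-AdaBoost stage using scikit-learn on a stand-in image dataset (the paper additionally uses EfficientNet B0 features, which this sketch omits):

```python
from sklearn.datasets import load_digits   # stand-in for a skin-image dataset
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Flattened image vectors -> PCA features -> AdaBoost over weak learners.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(PCA(n_components=30),
                      AdaBoostClassifier(n_estimators=100))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```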
Citation: Algorithms
PubDate: 2024-08-12
DOI: 10.3390/a17080353
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 354: Machine Learning Analysis Using the Black
Oil Model and Parallel Algorithms in Oil Recovery Forecasting
Authors: Bazargul Matkerim, Aksultan Mukhanbet, Nurislam Kassymbek, Beimbet Daribayev, Maksat Mustafin, Timur Imankulov
First page: 354
Abstract: The accurate forecasting of oil recovery factors is crucial for the effective management and optimization of oil production processes. This study explores the application of machine learning methods, focusing specifically on parallel algorithms, to enhance traditional reservoir simulation frameworks based on black oil models. The research involves four main steps: collecting a synthetic dataset, preprocessing it, modeling and predicting the oil recovery factors with various machine learning techniques, and evaluating model performance. The analysis was carried out on a synthetic dataset containing parameters such as porosity, pressure, and the viscosity of oil and gas. By utilizing parallel computing, particularly GPUs, this study demonstrates significant improvements in processing efficiency and prediction accuracy. While maintaining an R2 of approximately 0.97, data parallelism sped up the learning process by up to 10.54 times, and neural network training was accelerated almost 8-fold on a GPU. These findings underscore the potential of parallel machine learning algorithms to revolutionize decision-making in reservoir management, offering faster and more precise predictive tools. This work not only contributes to computational science and reservoir engineering but also opens new avenues for integrating advanced machine learning and parallel computing methods into optimizing oil recovery.
Citation: Algorithms
PubDate: 2024-08-14
DOI: 10.3390/a17080354
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 355: Multi-Objective Unsupervised Feature
Selection and Cluster Based on Symbiotic Organism Search
Authors: Abbas Fadhil Jasim AL-Gburi, Mohd Zakree Ahmad Nazri, Mohd Ridzwan Bin Yaakub, Zaid Abdi Alkareem Alyasseri
First page: 355
Abstract: Unsupervised learning is a type of machine learning that learns from unlabeled data without human supervision. Unsupervised feature selection (UFS) is crucial in data analytics, playing a vital role in enhancing the quality of results and reducing computational complexity in huge feature spaces. The UFS problem has been addressed in several research efforts, and recent studies have witnessed a surge in innovative techniques such as nature-inspired algorithms for clustering and UFS. However, very few studies treat UFS as a multi-objective problem that seeks the optimal trade-off between the number of selected features and model accuracy. This paper proposes a multi-objective symbiotic organism search algorithm for unsupervised feature selection (SOSUFS) and a symbiotic organism search-based clustering (SOSC) algorithm to generate the optimal feature subset for more accurate clustering. The efficiency and robustness of the proposed algorithm are investigated on benchmark datasets. The SOSUFS method combined with SOSC was identified as the top-performing clustering approach, demonstrating the highest f-measure, whereas the KHCluster method resulted in the lowest; SOSUFS also effectively reduced the number of features by more than half. In summary, this empirical study indicates that the proposed algorithm significantly surpasses state-of-the-art algorithms in both efficiency and effectiveness.
Citation: Algorithms
PubDate: 2024-08-14
DOI: 10.3390/a17080355
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 356: A Virtual Machine Platform Providing
Machine Learning as a Programmable and Distributed Service for IoT and
Edge On-Device Computing: Architecture, Transformation, and Evaluation of
Integer Discretization
Authors: Stefan Bosse
First page: 356
Abstract: Data-driven models used for predictive classification and regression tasks are commonly computed using floating-point arithmetic on powerful computers. We address constraints in distributed sensor networks such as the IoT, edge, and material-integrated computing, which provide only low-resource embedded computers, with sensor data acquired and processed locally. Such sensor networks are characterized by strongly heterogeneous systems. This work introduces and evaluates a virtual machine architecture that provides ML as a service layer (MLaaS) at the node level and targets very low-resource distributed embedded computers (with less than 20 kB of RAM). The VM provides a unified ML instruction set architecture that can be programmed to implement decision tree, ANN, and CNN model architectures using scaled integer arithmetic only. Models are trained offline using floating-point arithmetic and finally converted by an iterative scaling and transformation process, demonstrated in this work by two tests based on simulated and synthetic data. This paper is an extended version of the FedCSIS 2023 conference paper, providing new algorithms and ML applications, including ANN/CNN-based regression and classification tasks that study the effects of discretization on classification and regression accuracy.
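The offline float-to-scaled-integer conversion can be illustrated for a single dense layer: quantize weights and inputs to integers, accumulate with integer multiply-adds, and rescale once at the output (a minimal sketch; the paper’s iterative scaling process for whole models is more involved):

```python
import numpy as np

def quantize(w, n_bits=8):
    """Map float values to scaled integers: w is approximately q / scale."""
    scale = (2 ** (n_bits - 1) - 1) / np.max(np.abs(w))
    return np.round(w * scale).astype(np.int32), scale

# A float-trained dense layer...
rng = np.random.default_rng(1)
W = rng.standard_normal((4, 8)) * 0.5
x = rng.standard_normal(8)

# ...converted to scaled-integer form for the on-device VM.
Wq, sw = quantize(W)
xq, sx = quantize(x)
y_int = Wq @ xq                      # pure integer MAC operations
y_approx = y_int / (sw * sx)         # rescale once at the output
print("max abs error vs. float:", np.max(np.abs(W @ x - y_approx)))
```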
Citation: Algorithms
PubDate: 2024-08-15
DOI: 10.3390/a17080356
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 357: A System Design Perspective for Business
Growth in a Crowdsourced Data Labeling Practice
Authors: Vahid Hajipour, Sajjad Jalali, Francisco Javier Santos-Arteaga, Samira Vazifeh Noshafagh, Debora Di Caprio
First page: 357
Abstract: Data labeling systems are designed to facilitate the training and validation of machine learning algorithms under the umbrella of crowdsourcing practices. The current paper presents a novel approach for designing a customized data labeling system, emphasizing two key aspects: an innovative payment mechanism for users and an efficient configuration of output results. The main problem addressed is the labeling of datasets where golden items are utilized to verify user performance and assure the quality of the annotated outputs. Our proposed payment mechanism is enhanced through a modified skip-based golden-oriented function that balances user penalties and prevents spam activities. Additionally, we introduce a comprehensive reporting framework to measure aggregated results and accuracy levels, ensuring the reliability of the labeling output. Our findings indicate that the proposed solutions are pivotal in incentivizing user participation, thereby reinforcing the applicability and profitability of newly launched labeling systems.
Citation: Algorithms
PubDate: 2024-08-15
DOI: 10.3390/a17080357
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 358: Directed Clustering of Multivariate Data
Based on Linear or Quadratic Latent Variable Models
Authors: Yingjuan Zhang, Jochen Einbeck
First page: 358
Abstract: We consider situations in which the clustering of some multivariate data is desired, which establishes an ordering of the clusters with respect to an underlying latent variable. As our motivating example for a situation where such a technique is desirable, we consider scatterplots of traffic flow and speed, where a pattern of consecutive clusters can be thought to be linked by a latent variable, which is interpretable as traffic density. We focus on latent structures of linear or quadratic shapes, and present an estimation methodology based on expectation–maximization, which estimates both the latent subspace and the clusters along it. The directed clustering approach is summarized in two algorithms and applied to the traffic example outlined. Connections to related methodology, including principal curves, are briefly drawn.
Citation: Algorithms
PubDate: 2024-08-16
DOI: 10.3390/a17080358
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 359: Exploring Clique Transversal Variants on
Distance-Hereditary Graphs: Computational Insights and Algorithmic
Approaches
Authors: Chuan-Min Lee
First page: 359
Abstract: The clique transversal problem is a central concept in graph theory, focused on identifying a minimum subset of vertices that intersects all maximal cliques in a graph. This problem and its variations—such as the k-fold clique, {k}-clique, minus clique, and signed clique transversal problems—have received significant interest due to their theoretical importance and practical applications. This paper examines the k-fold clique, {k}-clique, minus clique, and signed clique transversal problems on distance-hereditary graphs. Known for their distinctive structural properties, distance-hereditary graphs provide an ideal framework for studying these problem variants. By exploring these problems in the context of distance-hereditary graphs, this research enhances the understanding of the computational challenges involved and of the potential for developing efficient algorithms to address them.
Citation: Algorithms
PubDate: 2024-08-16
DOI: 10.3390/a17080359
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 360: Detection of Subtle ECG Changes Despite
Superimposed Artifacts by Different Machine Learning Algorithms
Authors: Matthias Noitz, Christoph Mörtl, Carl Böck, Christoph Mahringer, Ulrich Bodenhofer, Martin W. Dünser, Jens Meier
First page: 360
Abstract: Analyzing electrocardiographic (ECG) signals is crucial for evaluating heart function and diagnosing cardiac pathology. Traditional methods for detecting ECG changes often rely on offline analysis or subjective visual inspection, which may overlook subtle variations, particularly in the presence of artifacts. In this theoretical, proof-of-concept study, we investigated the potential of five different machine learning algorithms [random forests (RFs), gradient boosting methods (GBMs), deep neural networks (DNNs), an ensemble learning technique, and logistic regression] to detect subtle changes in the morphology of synthetically generated ECG beats despite superimposed artifacts. After generating a synthetic ECG beat using the standardized McSharry algorithm, the baseline ECG signal was modified by changing the amplitudes of different ECG components by 0.01–0.06 mV, and a Gaussian jitter of 0.1–0.3 mV was overlaid to simulate artifacts. The five machine learning algorithms were then applied to detect differences between the modified ECG beats. The highest discriminatory potency, assessed by classification accuracy, was achieved by RFs and GBMs (accuracy of up to 1.0), whereas the least accurate results were obtained with logistic regression (accuracy approximately 10% lower). In a second step, a feature importance algorithm (Boruta) was used to determine which signal parts were responsible for the detected differences. For all comparisons, only signal components that had been modified in advance were used for discrimination, demonstrating that the RF model focused on the appropriate signal elements. Our findings highlight the potential of RFs and GBMs as valuable tools for detecting subtle ECG changes despite artifacts, with implications for enhancing clinical diagnosis and monitoring. Further studies are needed to validate our findings with clinical data.
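A toy reconstruction of the experimental pipeline: synthetic beats (here built from Gaussian waves as a crude stand-in for the McSharry model), a small amplitude modification, overlaid Gaussian jitter, and an RF classifier (all magnitudes illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)

def beat(r_amp):
    """Synthetic beat as a sum of Gaussian P/Q/R/S/T waves: a crude stand-in
    for the McSharry dynamical model used in the paper."""
    waves = [(0.10, 0.20, 0.020), (-0.15, 0.45, 0.010), (r_amp, 0.50, 0.015),
             (-0.20, 0.55, 0.010), (0.30, 0.80, 0.040)]
    return sum(a * np.exp(-((t - mu) ** 2) / (2 * s ** 2)) for a, mu, s in waves)

# Class 0: baseline beats; class 1: R amplitude raised by 0.05 (arbitrary
# units); both buried in Gaussian jitter with standard deviation 0.1.
n = 400
X = np.array([beat(1.0 + 0.05 * (i % 2)) + 0.1 * rng.standard_normal(t.size)
              for i in range(n)])
y = np.arange(n) % 2

rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cv accuracy:", cross_val_score(rf, X, y, cv=5).mean())
```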
Citation: Algorithms
PubDate: 2024-08-16
DOI: 10.3390/a17080360
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 361: Frequency-Domain and Spatial-Domain
MLMVN-Based Convolutional Neural Networks
Authors: Igor Aizenberg, Alexander Vasko
First page: 361
Abstract: This paper presents a detailed analysis of a convolutional neural network based on multi-valued neurons (CNNMVN) and a fully connected multilayer neural network based on multi-valued neurons (MLMVN), employed here as a convolutional neural network in the frequency domain. We begin by providing an overview of the fundamental concepts underlying CNNMVN, focusing on the organization of convolutional layers and the CNNMVN learning algorithm. The error backpropagation rule for this network is justified and presented in detail. Subsequently, we consider how MLMVN can be used as a convolutional neural network in the frequency domain. It is shown that each neuron in the first hidden layer of MLMVN may work as a frequency-domain convolutional kernel, utilizing the Convolution Theorem. Essentially, these neurons create Fourier transforms of the feature maps that would have resulted from the convolutions in the spatial domain performed in regular convolutional neural networks. Furthermore, we discuss optimization techniques for both networks and compare the resulting convolutions to explore which features they extract from images. Finally, we present experimental results showing that both approaches can achieve high accuracy in image recognition.
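The Convolution Theorem that the frequency-domain MLMVN neurons rely on is easy to verify numerically: pointwise multiplication of spectra equals circular convolution in the signal domain (a generic numpy check, unrelated to the MVN learning rule itself):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)   # signal (e.g., a flattened feature-map row)
k = rng.standard_normal(N)   # kernel, zero-padded to the same length

# Frequency domain: multiply the spectra, then transform back.
via_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

# Spatial domain: direct circular convolution.
direct = np.array([sum(x[m] * k[(n - m) % N] for m in range(N))
                   for n in range(N)])

print(np.allclose(via_fft, direct))   # True
```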
Citation: Algorithms
PubDate: 2024-08-17
DOI: 10.3390/a17080361
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 362: Utilization of Machine Learning Algorithms
for the Strengthening of HIV Testing: A Systematic Review
Authors: Musa Jaiteh, Edith Phalane, Yegnanew A. Shiferaw, Karen Alida Voet, Refilwe Nancy Phaswana-Mafuya
First page: 362
Abstract: Several machine learning (ML) techniques have demonstrated efficacy in precisely forecasting HIV risk and identifying the most eligible individuals for HIV testing in various countries. Nevertheless, there is a data gap on the utility of ML algorithms in strengthening HIV testing worldwide. This systematic review aimed to evaluate how effectively ML algorithms can enhance the efficiency and accuracy of HIV testing interventions and to identify key outcomes, successes, gaps, opportunities, and limitations in their implementation. This review was guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines. A comprehensive literature search was conducted via PubMed, Google Scholar, Web of Science, Science Direct, Scopus, and Gale OneFile databases. Out of the 845 identified articles, 51 studies were eligible. More than 75% of the articles included in this review were conducted in the Americas and various parts of Sub-Saharan Africa, and a few were from Europe, Asia, and Australia. The most common algorithms applied were logistic regression, deep learning, support vector machine, random forest, extreme gradient booster, decision tree, and the least absolute shrinkage selection operator model. The findings demonstrate that ML techniques exhibit higher accuracy in predicting HIV risk/testing compared to traditional approaches. Machine learning models enhance early prediction of HIV transmission, facilitate viable testing strategies to improve the efficiency of testing services, and optimize resource allocation, ultimately leading to improved HIV testing. This review points to the positive impact of ML in enhancing early prediction of HIV spread, optimizing HIV testing approaches, improving efficiency, and eventually enhancing the accuracy of HIV diagnosis. We strongly recommend the integration of ML into HIV testing programs for efficient and accurate HIV testing.
Citation: Algorithms
PubDate: 2024-08-17
DOI: 10.3390/a17080362
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 363: Computer Vision Algorithms on a Raspberry
Pi 4 for Automated Depalletizing
Authors: Danilo Greco, Majid Fasihiany, Ali Varasteh Ranjbar, Francesco Masulli, Stefano Rovetta, Alberto Cabri
First page: 363
Abstract: The primary objective of a depalletizing system is to automate the process of detecting and locating specific variable-shaped objects on a pallet, allowing a robotic system to accurately unstack them. Although many solutions exist for the problem in industrial and manufacturing settings, its application to small-scale scenarios such as retail vending machines and small warehouses has received little attention so far. This paper presents a comparative analysis of four different computer vision algorithms for the depalletizing task, implemented on a Raspberry Pi 4, a very popular single-board computer with low computational power suitable for IoT and edge computing. The algorithms evaluated are pattern matching, the scale-invariant feature transform (SIFT), Oriented FAST and Rotated BRIEF (ORB), and the Haar cascade classifier. Each technique is described, its implementation is outlined, and its suitability for a depalletizing system is assessed on the task of box detection and localization in test images. The performance of the algorithms is given in terms of accuracy, robustness to variability, computational speed, detection sensitivity, and resource consumption. The results reveal the strengths and limitations of each algorithm, providing valuable insights for selecting the most appropriate technique based on the specific requirements of a depalletizing system.
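As an example of one of the four techniques, ORB (well suited to the Pi’s limited compute because of its binary descriptors) can be run with OpenCV in a few lines; the image here is a synthetic stand-in for a pallet photo:

```python
import cv2
import numpy as np

# Synthetic textured scene with a brighter "box" region as a stand-in image.
rng = np.random.default_rng(0)
img = rng.integers(30, 60, (240, 320)).astype(np.uint8)
cv2.rectangle(img, (60, 50), (180, 150), 200, -1)

orb = cv2.ORB_create(nfeatures=500)           # binary features, cheap to match
keypoints, descriptors = orb.detectAndCompute(img, None)

# Matching against a box template would use Hamming-distance brute force:
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
print(f"{len(keypoints)} keypoints detected")
```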
Citation: Algorithms
PubDate: 2024-08-18
DOI: 10.3390/a17080363
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 364: HRIDM: Hybrid Residual/Inception-Based
Deeper Model for Arrhythmia Detection from Large Sets of 12-Lead ECG
Recordings
Authors: Syed Atif Moqurrab, Hari Mohan Rai, Joon Yoo
First page: 364
Abstract: Heart diseases such as cardiovascular disease and myocardial infarction are among the foremost causes of death in the world. The timely, accurate, and effective prediction of heart diseases is therefore crucial for saving lives. Electrocardiography (ECG) is a primary non-invasive method for identifying cardiac abnormalities; however, manual interpretation of ECG recordings for heart disease diagnosis is a time-consuming and error-prone process. For the accurate and efficient detection of heart diseases from 12-lead ECG data, we propose a hybrid residual/inception-based deeper model (HRIDM). In this study, we utilized multi-institutional large ECG datasets from various sources, and the proposed model was trained on 12-lead ECG data from over 10,000 patients. We compared the proposed model with several state-of-the-art (SOTA) models, such as LeNet-5, AlexNet, VGG-16, ResNet-50, Inception, and LSTM, on the same training and test datasets. To demonstrate the computational efficiency of the proposed model, we trained it for only 20 epochs without GPU support and achieved an accuracy of 50.87% on the test dataset for 27 categories of heart abnormalities. The proposed model outperformed the previous studies that participated in the official PhysioNet/CinC Challenge 2020, placing fourth when compared with the 41 officially ranked teams. These results indicate that the proposed model is a promising new method for predicting heart diseases using 12-lead ECGs.
Citation: Algorithms
PubDate: 2024-08-19
DOI: 10.3390/a17080364
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 365: An Image Processing-Based Correlation
Method for Improving the Characteristics of Brillouin Frequency Shift
Extraction in Distributed Fiber Optic Sensors
Authors: Yuri Konstantinov, Anton Krivosheev, Fedor Barkov
First page: 365
Abstract: This paper demonstrates how the processing of Brillouin gain spectra (BGS) by two-dimensional correlation methods improves the accuracy of Brillouin frequency shift (BFS) extraction in distributed fiber optic sensor systems based on the BOTDA/BOTDR (Brillouin optical time domain analysis/reflectometry) principles. First, the spectra corresponding to different spatial coordinates of the fiber sensor are resampled. Subsequently, the resampled spectra are aligned by the position of the maximum by shifting in frequency relative to each other. The spectra aligned by the position of the maximum are then averaged, which effectively increases the signal-to-noise ratio (SNR). Finally, the Lorentzian curve fitting (LCF) method is applied to the spectrum with improved characteristics, including a reduced scanning step and an increased SNR. Simulations and experiments have demonstrated that the method is particularly efficacious when the signal-to-noise ratio does not exceed 8 dB and the frequency scanning step is coarser than 4 MHz. This is particularly relevant when designing high-speed sensors, as well as when using non-standard laser sources, such as a self-scanning frequency laser, for distributed fiber-optic sensing.
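The align-average-fit pipeline can be sketched in a few lines of numpy/scipy (the resampling step is omitted here; frequencies, linewidths, and noise levels are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f_b, w, a):
    """Lorentzian gain profile centred at the Brillouin frequency shift f_b."""
    return a / (1.0 + ((f - f_b) / (w / 2.0)) ** 2)

rng = np.random.default_rng(0)
f = np.linspace(10.60, 11.00, 100)            # coarse frequency scan [GHz]
true_bfs = 10.80                              # BFS of this fiber section [GHz]
spectra = [lorentzian(f, true_bfs, 0.06, 1.0)
           + 0.2 * rng.standard_normal(f.size)
           for _ in range(50)]                # 50 noisy single-shot spectra

# Align every spectrum on its maximum, then average to raise the SNR.
ref = int(np.argmax(spectra[0]))
aligned = [np.roll(s, ref - int(np.argmax(s))) for s in spectra]
avg = np.mean(aligned, axis=0)

# Lorentzian curve fitting (LCF) on the cleaned-up averaged spectrum.
popt, _ = curve_fit(lorentzian, f, avg, p0=[f[np.argmax(avg)], 0.06, 1.0])
print("estimated BFS: %.4f GHz" % popt[0])
```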
Citation: Algorithms
PubDate: 2024-08-20
DOI: 10.3390/a17080365
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 366: Determining Thresholds for Optimal
Adaptive Discrete Cosine Transformation
Authors: Alexander Khanov, Anastasija Shulzhenko, Anzhelika Voroshilova, Alexander Zubarev, Timur Karimov, Shakeeb Fahmi
First page: 366
Abstract: The discrete cosine transform (DCT) is widely used for image and video compression; lossy algorithms such as JPEG, WebP, BPG, and many others are based on it. Multiple modifications of the DCT have been developed to improve its performance. One of them is the adaptive DCT (ADCT), designed to deal with heterogeneous image structure; it may be found, for example, in the HEVC video codec. Adaptivity means that the image is divided into an uneven grid of squares: smaller squares retain information about details better, while larger squares are efficient for homogeneous backgrounds. The practical use of adaptive DCT algorithms is complicated by the lack of optimal threshold-search algorithms for the image partitioning procedure. In this paper, we propose a novel method for optimal threshold search in the ADCT using a metric based on tonal distribution. We define two thresholds: pm, which triggers solid mean coloring, and ps, which triggers quadtree fragment splitting. In our algorithm, the values of these thresholds are calculated via polynomial functions of the tonal distribution of a particular image or fragment, with the polynomial coefficients determined by a dedicated optimization procedure on a dataset of images from the target domain, urban road scenes in our case. In the experimental part of the study, we show that the ADCT allows a higher compression ratio than non-adaptive DCT at the same level of quality loss, up to 66% higher for acceptable quality. The proposed algorithm may be used directly for image compression, or as the core of a video compression framework in traffic-demanding applications, such as urban video surveillance systems.
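A sketch of the quadtree partitioning step driven by the two thresholds; here pm and ps are fixed constants and the tonal measure is a crude max-min spread, whereas in the paper both thresholds come from fitted polynomials of the tonal distribution:

```python
import numpy as np

def build_quadtree(img, p_m, p_s, x=0, y=0, size=None, out=None):
    """Uneven-grid partition for ADCT: fragments whose tonal spread exceeds
    p_s are split further; near-flat fragments (spread below p_m) are stored
    as a single mean color; the rest are kept for DCT coding."""
    if out is None:
        size, out = img.shape[0], []
    block = img[y:y + size, x:x + size]
    spread = float(block.max()) - float(block.min())   # crude tonal measure
    if spread <= p_m or size <= 4:                     # solid mean coloring
        out.append((x, y, size, "mean", block.mean()))
    elif spread > p_s:                                 # split into quadrants
        h = size // 2
        for dx, dy in [(0, 0), (h, 0), (0, h), (h, h)]:
            build_quadtree(img, p_m, p_s, x + dx, y + dy, h, out)
    else:                                              # DCT-code this fragment
        out.append((x, y, size, "dct", None))
    return out

img = np.zeros((64, 64))
img[16:40, 16:40] = 200                                # toy heterogeneous image
print(len(build_quadtree(img, p_m=5.0, p_s=60.0)), "fragments")
```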
Citation: Algorithms
PubDate: 2024-08-21
DOI: 10.3390/a17080366
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 367: Augmented Dataset for Vision-Based
Analysis of Railroad Ballast via Multi-Dimensional Data Synthesis
Authors: Kelin Ding, Jiayi Luo, Haohang Huang, John M. Hart, Issam I. A. Qamhia, Erol Tutumluer
First page: 367
Abstract: Ballast serves a vital structural function in supporting railroad tracks under continuous loading. The degradation of ballast can result in issues such as inadequate drainage, lateral instability, excessive settlement, and potential service disruptions, necessitating efficient evaluation methods to ensure safe and reliable railroad operations. The incorporation of computer vision techniques into ballast inspection processes has proven effective in enhancing accuracy and robustness. Given the data-driven nature of deep learning approaches, the efficacy of these models is intrinsically linked to the quality of the training datasets, thereby emphasizing the need for a comprehensive and meticulously annotated ballast aggregate dataset. This paper presents the development of a multi-dimensional ballast aggregate dataset, constructed using empirical data collected from field and laboratory environments, supplemented with synthetic data generated by a proprietary ballast particle generator. The dataset comprises both two-dimensional (2D) data, consisting of ballast images annotated with 2D masks for particle localization, and three-dimensional (3D) data, including heightmaps, point clouds, and 3D annotations for particle localization. The data collection process encompassed various environmental lighting conditions and degradation states, ensuring extensive coverage and diversity within the training dataset. A previously developed 2D ballast particle segmentation model was trained on this augmented dataset, demonstrating high accuracy in field ballast inspections. This comprehensive database will be utilized in subsequent research to advance 3D ballast particle segmentation and shape completion, thereby facilitating enhanced inspection protocols and the development of effective ballast maintenance methodologies.
Citation: Algorithms
PubDate: 2024-08-21
DOI: 10.3390/a17080367
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 368: Identification of Crude Distillation Unit:
A Comparison between Neural Network and Koopman Operator
Authors: Abdulrazaq Nafiu Abubakar, Mustapha Kamel Khaldi, Mujahed Aldhaifallah, Rohit Patwardhan, Hussain Salloum
First page: 368
Abstract: In this paper, we aimed to identify the dynamics of a crude distillation unit (CDU) from closed-loop data using NARX-NN and the Koopman operator in both linear (KL) and bilinear (KB) forms. A comparative analysis was conducted to assess the performance of each method under different experimental conditions, such as gain, delay, and time-constant mismatch, tight constraints, nonlinearities, and poor tuning. Although the NARX-NN showed good training performance with the lowest mean squared error (MSE), the KB demonstrated better generalization and robustness, outperforming the other methods. The KL suffered a significant decline in performance in the presence of input nonlinearities, yet remained competitive with the KB under other circumstances. The use of the bilinear form proved crucial, as it offered a more accurate representation of the CDU dynamics and thus enhanced performance.
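The linear-Koopman identification step can be sketched with extended dynamic mode decomposition (EDMD): lift the measured states with a dictionary of observables and fit the Koopman matrix by least squares (toy system; the paper’s bilinear form additionally includes input-state product terms):

```python
import numpy as np

def lift(x):
    """Dictionary of observables: the state, its squares, and a constant."""
    x = np.atleast_1d(x)
    return np.concatenate([x, x ** 2, [1.0]])

# Snapshot pairs (x_k, x_{k+1}) from a toy nonlinear system.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (500, 2))
Y = np.column_stack([0.9 * X[:, 0],
                     0.5 * X[:, 1] + 0.2 * X[:, 0] ** 2])

# EDMD: least-squares fit of a linear operator K on the lifted states.
PX = np.array([lift(x) for x in X])
PY = np.array([lift(y) for y in Y])
K, *_ = np.linalg.lstsq(PX, PY, rcond=None)

# One-step prediction: lift, propagate linearly, read off the state part.
x0 = np.array([0.5, -0.3])
print("predicted:", (lift(x0) @ K)[:2], "true:", [0.45, -0.10])
```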
Citation: Algorithms
PubDate: 2024-08-21
DOI: 10.3390/a17080368
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 369: On the Complexity of the Bipartite
Polarization Problem: From Neutral to Highly Polarized Discussions
Authors: Teresa Alsinet, Josep Argelich, Ramón Béjar, Santi Martínez
First page: 369
Abstract: The bipartite polarization problem is an optimization problem where the goal is to find the highest polarized bipartition on a weighted and labeled graph that represents a debate developed through some social network, where nodes represent user’s opinions and edges agreement or disagreement between users. This problem can be seen as a generalization of the maxcut problem, and in previous work, approximate solutions and exact solutions have been obtained for real instances obtained from Reddit discussions, showing that such real instances seem to be very easy to solve. In this paper, we further investigate the complexity of this problem by introducing an instance generation model where a single parameter controls the polarization of the instances in such a way that this correlates with the average complexity to solve those instances. The average complexity results we obtain are consistent with our hypothesis: the higher the polarization of the instance, the easier is to find the corresponding polarized bipartition. In view of the experimental results, it is computationally feasible to implement transparent mechanisms to monitor polarization on online discussions and to inform about solutions for creating healthier social media environments.
Citation: Algorithms
PubDate: 2024-08-21
DOI: 10.3390/a17080369
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 370: Joint Optimization of Service Migration
and Resource Allocation in Mobile Edge–Cloud Computing
Authors: Zhenli He, Liheng Li, Ziqi Lin, Yunyun Dong, Jianglong Qin, Keqin Li
First page: 370
Abstract: In the rapidly evolving domain of mobile edge–cloud computing (MECC), the proliferation of Internet of Things (IoT) devices and mobile applications poses significant challenges, particularly in dynamically managing computational demands and user mobility. Current research has partially addressed aspects of service migration and resource allocation, yet it often falls short in thoroughly examining the nuanced interdependencies between migration strategies and resource allocation, the consequential impacts of migration delays, and the intricacies of handling incomplete tasks during migration. This study advances the discourse by introducing a sophisticated framework optimized through a deep reinforcement learning (DRL) strategy, underpinned by a Markov decision process (MDP) that dynamically adapts service migration and resource allocation strategies. This refined approach facilitates continuous system monitoring, adept decision making, and iterative policy refinement, significantly enhancing operational efficiency and reducing response times in MECC environments. By meticulously addressing these previously overlooked complexities, our research not only fills critical gaps in the literature but also enhances the practical deployment of edge computing technologies, contributing profoundly to both theoretical insights and practical implementations in contemporary digital ecosystems.
Citation: Algorithms
PubDate: 2024-08-21
DOI: 10.3390/a17080370
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 371: Bi-Objective, Dynamic, Multiprocessor
Open-Shop Scheduling: A Hybrid Scatter Search–Tabu Search Approach
Authors: Tamer F. Abdelmaguid
First page: 371
Abstract: This paper presents a novel, multi-objective scatter search algorithm (MOSS) for a bi-objective, dynamic, multiprocessor open-shop scheduling problem (Bi-DMOSP). The considered objectives are the minimization of the maximum completion time (makespan) and the minimization of the mean weighted flow time. Both are particularly important for improving machines’ utilization and customer satisfaction level in maintenance and healthcare diagnostic systems, in which the studied Bi-DMOSP is mostly encountered. Since the studied problem is NP-hard for both objectives, fast algorithms are needed to fulfill the requirements of real-life circumstances. Previous attempts have included the development of an exact algorithm and two metaheuristic approaches based on the non-dominated sorting genetic algorithm (NSGA-II) and the multi-objective gray wolf optimizer (MOGWO). The exact algorithm is limited to small-sized instances; meanwhile, NSGA-II was found to produce better results compared to MOGWO in both small- and large-sized test instances. The proposed MOSS in this paper attempts to provide more efficient non-dominated solutions for the studied Bi-DMOSP. This is achievable via its hybridization with a novel, bi-objective tabu search approach that utilizes a set of efficient neighborhood search functions. Parameter tuning experiments are conducted first using a subset of small-sized benchmark instances for which the optimal Pareto front solutions are known. Then, detailed computational experiments on small- and large-sized instances are conducted. Comparisons with the previously developed NSGA-II metaheuristic demonstrate the superiority of the proposed MOSS approach for small-sized instances. For large-sized instances, it proves its capability of producing competitive results for instances with low and medium density.
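For reference, the non-dominated filtering at the heart of any bi-objective search of this kind, with both objectives (makespan, mean weighted flow time) minimized, takes only a few lines (a generic sketch, not the MOSS internals):

```python
def pareto_front(solutions):
    """Keep non-dominated (makespan, mean weighted flow time) pairs.

    A solution dominates another if it is no worse in both objectives and
    strictly better in at least one (both objectives are minimized).
    """
    front = []
    for s in solutions:
        if any(o[0] <= s[0] and o[1] <= s[1] and o != s for o in solutions):
            continue                       # s is dominated by some o
        front.append(s)
    return front

print(pareto_front([(10, 7.2), (12, 6.1), (11, 7.5), (10, 6.9)]))
# -> [(12, 6.1), (10, 6.9)]
```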
Citation: Algorithms
PubDate: 2024-08-21
DOI: 10.3390/a17080371
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 372: Correlation Analysis of Railway Track
Alignment and Ballast Stiffness: Comparing Frequency-Based and Machine
Learning Algorithms
Authors: Saeed Mohammadzadeh, Hamidreza Heydari, Mahdi Karimi, Araliya Mosleh
First page: 372
Abstract: One of the primary challenges in the railway industry revolves around achieving a comprehensive and insightful understanding of track conditions. The geometric parameters and stiffness of railway tracks play a crucial role in condition monitoring as well as maintenance work. Hence, this study investigated the relationship between vertical ballast stiffness and the track longitudinal level. Initially, ballast stiffness and longitudinal level data were acquired through a series of experimental measurements on a reference test track along the Tehran–Mashhad railway line, utilizing recording cars for track geometry and stiffness measurements. Subsequently, the correlation between the track longitudinal level and ballast stiffness was surveyed using both frequency-based techniques and machine learning (ML) algorithms: power spectral density (PSD) was employed as the frequency-based technique, alongside linear regression, decision tree, and random forest algorithms for the correlation analyses. The results showed a robust and statistically significant relationship between the vertical ballast stiffness and the longitudinal level of railway tracks. Specifically, the PSD data exhibited a considerable correlation, especially within the 1–4 rad/m wavenumber range. Furthermore, the ML analyses yielded root mean square error (RMSE) values of about 0.05, 0.07, and 0.06 for the linear regression, decision tree, and random forest algorithms, respectively, demonstrating the adequate accuracy of ML-based approaches.
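A sketch of the PSD side of such an analysis with scipy’s Welch estimator on synthetic track signals, including the 1–4 rad/m wavenumber window (the signal models and numbers are purely illustrative):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
dx = 0.25                                  # sampling interval along track [m]
x = np.arange(0, 500, dx)
level = np.cumsum(rng.standard_normal(x.size)) * 0.01    # longitudinal level
stiffness = (80 + 5 * np.convolve(level, np.ones(9) / 9, "same")
             + 0.5 * rng.standard_normal(x.size))        # correlated stiffness

# Welch PSDs in the wavenumber domain (fs = 1/dx gives cycles per metre;
# multiply by 2*pi to get rad/m, the units used in the paper).
k, p_level = welch(level, fs=1 / dx, nperseg=512)
_, p_stiff = welch(stiffness, fs=1 / dx, nperseg=512)

band = (2 * np.pi * k >= 1) & (2 * np.pi * k <= 4)       # 1-4 rad/m window
r = np.corrcoef(np.log(p_level[band]), np.log(p_stiff[band]))[0, 1]
print("PSD correlation in the 1-4 rad/m band:", round(r, 2))
```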
Citation: Algorithms
PubDate: 2024-08-22
DOI: 10.3390/a17080372
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 373: Algorithms for Various Trigonometric Power
Sums
Authors: Victor Kowalenko
First page: 373
Abstract: In this paper, algorithms for different types of trigonometric power sums are developed and presented. Although interesting in their own right, these trigonometric power sums arise during the creation of an algorithm for the four types of twisted trigonometric power sums defined in the introduction. The primary aim in evaluating these sums is to obtain exact results in rational form, as opposed to standard or direct evaluation, which often yields machine-dependent decimal values that can be affected by round-off error. Moreover, since the variable m, which appears in the denominators of the arguments of the trigonometric functions in these sums, can remain algebraic in the algorithms/codes, one can also obtain polynomial solutions in powers of m and of the variable r that appears in the cosine factor accompanying the trigonometric power. The degrees of these polynomials depend on v, the value of the trigonometric power in the sum, which must always be specified.
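For a concrete (and much narrower) illustration of the "exact rational result" goal, sympy can evaluate such sums symbolically for fixed m and even v; this does not reproduce the paper’s algorithms, which keep m algebraic:

```python
import sympy as sp

def cos_power_sum(v, m):
    """Exact value of sum_{k=1}^{m-1} cos^v(k*pi/m), simplified symbolically."""
    total = sum(sp.cos(sp.pi * k / m) ** v for k in range(1, m))
    return sp.nsimplify(sp.simplify(total))

# For even powers the sum is rational, e.g. sum_{k=1}^{4} cos^2(k*pi/5) = 3/2
# and sum_{k=1}^{5} cos^4(k*pi/6) = 5/4.
print(cos_power_sum(2, 5), cos_power_sum(4, 6))
```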
Citation: Algorithms
PubDate: 2024-08-22
DOI: 10.3390/a17080373
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 374: Extended General Malfatti’s
Problem
Authors: Ching-Shoei Chiang
First page: 374
Abstract: Malfatti’s problem involves three circles (called Malfatti circles) that are tangent to each other and to two sides of a triangle. In this study, our objective is to extend the problem to find 6, 10, …, ∑_{i=1}^{n} i (n > 2) circles inside the triangle so that the three corner circles are tangent to two sides of the triangle, the boundary circles are tangent to one side of the triangle and to four other circles (at least two of them being boundary or corner circles), and the inner circles are tangent to six other circles. We call this problem the extended general Malfatti’s problem, or the Tri(Tn) problem, where Tri means that the boundary of these circles is a triangle, and Tn is the number of circles inside the triangle. In this paper, we propose an algorithm to solve the Tri(Tn) problem.
Citation: Algorithms
PubDate: 2024-08-22
DOI: 10.3390/a17080374
Issue No: Vol. 17, No. 8 (2024)
- Algorithms, Vol. 17, Pages 283: Maximizing the Average Environmental
Benefit of a Fleet of Drones under a Periodic Schedule of Tasks
Authors: Vladimir Kats, Eugene Levner
First page: 283
Abstract: Unmanned aerial vehicles (UAVs, drones) are not just a technological achievement based on modern ideas of artificial intelligence; they also provide a sustainable solution for green technologies in logistics, transport, and material handling. In particular, using battery-powered UAVs to transport products can significantly decrease energy and fuel expenses, reduce environmental pollution, and improve the efficiency of clean technologies through improved energy-saving efficiency. We consider the problem of maximizing the average environmental benefit of a fleet of drones given a periodic schedule of tasks performed by the fleet of vehicles. To solve the problem efficiently, we formulate it as an optimization problem on an infinite periodic graph and reduce it to a special type of parametric assignment problem. We solve the problem under consideration exactly in O(n³) time, where n is the number of flights performed by UAVs.
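The reduction’s workhorse, the assignment problem, is readily available in scipy; a toy benefit matrix illustrates the kind of subproblem solved inside such a parametric scheme (the numbers are invented):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# benefit[i][j] = environmental benefit if drone i performs flight j
# (illustrative values; the paper embeds this in a parametric, periodic setting)
benefit = np.array([[4.0, 2.5, 3.0],
                    [3.5, 4.2, 1.0],
                    [2.0, 3.8, 4.1]])

rows, cols = linear_sum_assignment(benefit, maximize=True)
print("assignment:", list(zip(rows, cols)),
      "average benefit:", benefit[rows, cols].mean())
```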
Citation: Algorithms
PubDate: 2024-06-28
DOI: 10.3390/a17070283
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 284: Ensemble Learning with Pre-Trained
Transformers for Crash Severity Classification: A Deep N.L.P. Approach
Authors: Shadi Jaradat, Richi Nayak, Alexander Paz, Mohammed Elhenawy
First page: 284
Abstract: Transfer learning has gained significant traction in natural language processing due to the emergence of state-of-the-art pre-trained language models (P.L.M.s). Unlike traditional word embedding methods such as TF-IDF and Word2Vec, P.L.M.s are context-dependent and outperform conventional techniques when fine-tuned for specific tasks. This paper proposes an innovative hard voting classifier to enhance crash severity classification by combining machine learning and deep learning models with various word embedding techniques, including BERT, RoBERTa, Word2Vec, and TF-IDF. Our study involves two comprehensive experiments using motorists’ crash data from the Missouri State Highway Patrol. The first experiment evaluates the performance of three machine learning models—XGBoost (X.G.B.), random forest (R.F.), and naive Bayes (N.B.)—paired with TF-IDF, Word2Vec, and BERT feature extraction techniques. Additionally, BERT and RoBERTa are fine-tuned with a Bidirectional Long Short-Term Memory (Bi-LSTM) classification model. All models are initially evaluated on the original dataset. The second experiment repeats the evaluation using an augmented dataset to address the severe data imbalance. The results from the original dataset show strong performance for all models in the “Fatal” and “Personal Injury” classes but a poor classification of the minority “Property Damage” class. In the augmented dataset, while the models continued to excel with the majority classes, only XGB/TFIDF and BERT-LSTM showed improved performance for the minority class. The ensemble model outperformed individual models in both datasets, achieving an F1 score of 99% for “Fatal” and “Personal Injury” and 62% for “Property Damage” on the augmented dataset. These findings suggest that ensemble models, combined with data augmentation, are highly effective for crash severity classification and potentially other textual classification tasks.
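The hard-voting construction itself is straightforward in scikit-learn; a toy sketch with TF-IDF features (GradientBoosting stands in for XGBoost to avoid an extra dependency, and the texts are invented, not the Missouri crash narratives):

```python
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["driver struck guardrail at high speed", "minor rear end collision",
         "vehicle overturned fatal injuries", "parked car scraped no injuries"] * 10
labels = ["Personal Injury", "Property Damage", "Fatal", "Property Damage"] * 10

# Hard voting: each TF-IDF-based model casts one vote; the majority label wins.
ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier([("xgb", GradientBoostingClassifier()),
                      ("rf", RandomForestClassifier()),
                      ("nb", MultinomialNB())], voting="hard"),
)
ensemble.fit(texts, labels)
print(ensemble.predict(["head on collision with fatal injuries"]))
```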
Citation: Algorithms
PubDate: 2024-06-30
DOI: 10.3390/a17070284
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 285: The Novel EfficientNet Architecture-Based
System and Algorithm to Predict Complex Human Emotions
Authors: Mavlonbek Khomidov, Jong-Ha Lee
First page: 285
Abstract: Facial expressions are often considered the primary indicators of emotions. However, it is challenging to detect genuine emotions because they can be controlled. Many studies on emotion recognition have been conducted actively in recent years. In this study, we designed a convolutional neural network (CNN) model and proposed an algorithm that combines the analysis of bio-signals with facial expression templates to effectively predict emotional states. We utilized the EfficientNet-B0 architecture for network design and validation, known for achieving maximum performance with minimal parameters. The accuracy for emotion recognition using facial expression images alone was 74%, while the accuracy for emotion recognition combining biological signals reached 88.2%. These results demonstrate that integrating these two types of data leads to significantly improved accuracy. By combining the image and bio-signals captured in facial expressions, our model offers a more comprehensive and accurate understanding of emotional states.
Citation: Algorithms
PubDate: 2024-07-01
DOI: 10.3390/a17070285
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 286: Enhancing Video Anomaly Detection Using a
Transformer Spatiotemporal Attention Unsupervised Framework for Large
Datasets
Authors: Mohamed H. Habeb, May Salama, Lamiaa A. Elrefaei
First page: 286
Abstract: This work introduces an unsupervised framework for video anomaly detection, leveraging a hybrid deep learning model that combines a vision transformer (ViT) with a convolutional spatiotemporal relationship (STR) attention block. The proposed model addresses the challenges of anomaly detection in video surveillance by capturing both local and global relationships within video frames, a task that traditional convolutional neural networks (CNNs) often struggle with due to their localized field of view. We utilize a pre-trained ViT as an encoder for feature extraction, which is then processed by the STR attention block to enhance the detection of spatiotemporal relationships among objects in videos. The novelty of this work lies in using the ViT with STR attention to detect video anomalies effectively in large and heterogeneous datasets, which is important given the diverse environments and scenarios encountered in real-world surveillance. The framework was evaluated on three benchmark datasets, i.e., UCSD Ped2, CUHK Avenue, and ShanghaiTech, achieving area under the receiver operating characteristic curve (AUC ROC) values of 95.6, 86.8, and 82.1, respectively, and demonstrating superior performance in detecting anomalies compared to state-of-the-art methods; this showcases its potential to significantly enhance automated video surveillance systems. To show the effectiveness of the proposed framework on extra-large datasets, we also trained the model on a subset of the huge contemporary CHAD dataset containing over 1 million frames, achieving AUC ROC values of 71.8 and 64.2 for CHAD-Cam 1 and CHAD-Cam 2, respectively, which outperforms state-of-the-art techniques.
Citation: Algorithms
PubDate: 2024-07-01
DOI: 10.3390/a17070286
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 287: Enhancing Program Synthesis with Large
Language Models Using Many-Objective Grammar-Guided Genetic Programming
Authors: Ning Tao, Anthony Ventresque, Vivek Nallur, Takfarinas Saber
First page: 287
Abstract: The ability to automatically generate code, i.e., program synthesis, is one of the most important applications of artificial intelligence (AI). Currently, two AI techniques are leading the way: large language models (LLMs) and genetic programming (GP) methods—each with its strengths and weaknesses. While LLMs have shown success in program synthesis from a task description, they often struggle to generate the correct code due to ambiguity in task specifications, complex programming syntax, and lack of reliability in the generated code. Furthermore, their generative nature limits their ability to fix erroneous code with iterative LLM prompting. Grammar-guided genetic programming (G3P, i.e., one of the top GP methods) has been shown capable of evolving programs that fit a defined Backus–Naur-form (BNF) grammar based on a set of input/output tests that help guide the search process while ensuring that the generated code does not include calls to untrustworthy libraries or poorly structured snippets. However, G3P still faces issues generating code for complex tasks. A recent study attempting to combine both approaches (G3P and LLMs) by seeding an LLM-generated program into the initial population of the G3P has shown promising results. However, the approach rapidly loses the seeded information over the evolutionary process, which hinders its performance. In this work, we propose combining an LLM (specifically ChatGPT) with a many-objective G3P (MaOG3P) framework in two parts: (i) provide the LLM-generated code as a seed to the evolutionary process following a grammar-mapping phase that creates an avenue for program evolution and error correction; and (ii) leverage many-objective similarity measures towards the LLM-generated code to guide the search process throughout the evolution. The idea behind using the similarity measures is that the LLM-generated code is likely to be close to the correct fitting code. Our approach compels any generated program to adhere to the BNF grammar, ultimately mitigating security risks and improving code quality. Experiments on a well-known and widely used program synthesis dataset show that our approach successfully improves the synthesis of grammar-fitting code for several tasks.
Citation: Algorithms
PubDate: 2024-07-01
DOI: 10.3390/a17070287
Issue No: Vol. 17, No. 7 (2024)
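A concrete piece of part (ii) above is measuring how close a candidate program is to the LLM-generated seed. A minimal sketch of one plausible measure (normalized Levenshtein similarity; the paper's actual many-objective measures may differ) follows:

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def seed_similarity(candidate: str, llm_seed: str) -> float:
    """Similarity in [0, 1]: 1.0 means identical to the LLM seed."""
    longest = max(len(candidate), len(llm_seed), 1)
    return 1.0 - edit_distance(candidate, llm_seed) / longest

# In a many-objective GP setting, this score would be one objective
# alongside input/output test fitness.
print(seed_similarity("def add(a,b): return a+b", "def add(x,y): return x+y"))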
- Algorithms, Vol. 17, Pages 288: Optimal Design of I-PD and PI-D Industrial
Controllers Based on Artificial Intelligence Algorithm
Authors: Olga Shiryayeva, Batyrbek Suleimenov, Yelena Kulakova
First page: 288
Abstract: This research aims to apply Artificial Intelligence (AI) methods, specifically Artificial Immune Systems (AIS), to design an optimal control strategy for a multivariable control plant. Two specific industrial control approaches are investigated: I-PD (Integral-Proportional Derivative) and PI-D (Proportional-Integral Derivative) control. The motivation for using these variations of PID controllers is that they are functionally implemented in modern industrial controllers, where they provide precise process control. The research results in a novel solution to the control synthesis problem for the industrial system. In particular, the research deals with the synthesis of I-P control for a two-loop system in the technological process of a distillation column. This synthesis is carried out using the AIS algorithm, which is the first application of this technique in this specific context. Methodological approaches are proposed to improve the performance of industrial multivariable control systems by effectively using optimization algorithms and establishing modified quality criteria. The numerical performance index ISE justifies the effectiveness of the AIS-based controllers in comparison with conventional PID controllers (ISE1 = 1.865, ISE2 = 1.56). The problem of synthesis of the multi-input multi-output (MIMO) control system is solved, considering the interconnections due to the decoupling procedure.
Citation: Algorithms
PubDate: 2024-07-01
DOI: 10.3390/a17070288
Issue No: Vol. 17, No. 7 (2024)
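For readers unfamiliar with the variants above: in I-PD the proportional and derivative actions act on the measurement only, avoiding setpoint kick, while the integral acts on the error; PI-D moves only the derivative term onto the measurement. A minimal discrete-time sketch of the textbook I-PD law on a toy first-order plant (the gains and plant are illustrative assumptions, not the paper's tuned design):

def ipd_step(r, y, y_prev, integ, Kp, Ki, Kd, dt):
    """One discrete I-PD update: u = Ki*integral(r - y) - Kp*y - Kd*dy/dt.

    P and D act on the measurement y only, so setpoint steps do not
    produce proportional/derivative 'kick'."""
    integ += (r - y) * dt                    # integral of the error
    u = Ki * integ - Kp * y - Kd * (y - y_prev) / dt
    return u, integ

# Toy closed loop on a first-order plant  dy/dt = (-y + u) / tau.
tau, dt, y, y_prev, integ = 2.0, 0.01, 0.0, 0.0, 0.0
for _ in range(2000):
    u, integ = ipd_step(1.0, y, y_prev, integ, Kp=2.0, Ki=1.5, Kd=0.1, dt=dt)
    y_prev, y = y, y + dt * (-y + u) / tau
print(round(y, 3))    # settles near the setpoint 1.0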
- Algorithms, Vol. 17, Pages 289: Fuzzy Fractional Brownian Motion: Review
and Extension
Authors: Urumov, Chountas, Chaussalet
First page: 289
Abstract: In traditional finance, option prices are typically calculated using crisp sets of variables. However, as reported in the literature, these parameters possess a degree of fuzziness or uncertainty, which allows participants to estimate option prices based on their risk preferences and beliefs, considering a range of possible values for the parameters. This paper presents a comprehensive review of existing work on fuzzy fractional Brownian motion and proposes an extension in the context of financial option pricing. We define a unified framework combining fractional Brownian motion with fuzzy processes, creating a joint product measure space that captures both randomness and fuzziness. The approach allows for the consideration of individual risk preferences and beliefs about parameter uncertainties. By extending Merton’s jump-diffusion model to include fuzzy fractional Brownian motion, this paper addresses the modelling needs of hybrid systems with uncertain variables. The proposed model, which includes fuzzy Poisson processes and fuzzy volatility, demonstrates advantageous properties such as long-range dependence and self-similarity, providing a robust tool for modelling financial markets. By incorporating fuzzy numbers and belief degrees, this approach provides a more flexible framework for practitioners to make their investment decisions.
Citation: Algorithms
PubDate: 2024-07-01
DOI: 10.3390/a17070289
Issue No: Vol. 17, No. 7 (2024)
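For reference, the crisp fractional Brownian motion underlying the fuzzy extension above is the centred Gaussian process B_H with Hurst index H in (0,1) and covariance

\mathbb{E}\left[ B_H(t)\, B_H(s) \right] \;=\; \tfrac{1}{2}\left( |t|^{2H} + |s|^{2H} - |t-s|^{2H} \right),

which yields the self-similarity B_H(at) \overset{d}{=} a^{H} B_H(t) and, for H > 1/2, the long-range dependence the abstract cites; the fuzzy extension replaces crisp model parameters with fuzzy numbers on the joint product measure space.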
- Algorithms, Vol. 17, Pages 290: Federated Learning-Based Security Attack
Detection for Multi-Controller Software-Defined Networks
Authors: Abrar Alkhamisi, Iyad Katib, Seyed M. Buhari
First page: 290
Abstract: Multi-controller Software-Defined Networking (MC-SDN) is a promising architecture for managing the evolving, complex, and expansive large-scale modern network environment. Despite the rich operational flexibility of MC-SDN, it is imperative to protect the network deployment against potential vulnerabilities that lead to misuse and malicious activities on data planes. Security holes in the MC-SDN significantly impact network survivability, leaving the data plane vulnerable to potential security threats and unintended consequences. Accordingly, this work designs a Federated learning-based Security (FedSec) strategy that detects attacks on the MC-SDN. FedSec ensures packet routing services among the nodes by maintaining a flow table that is frequently updated according to the global model knowledge. By executing the FedSec algorithm only on network-centric nodes selected based on importance measurements, FedSec reduces system complexity and enhances attack detection and classification accuracy. Finally, the experimental results illustrate the significance of the proposed FedSec strategy across various metrics.
Citation: Algorithms
PubDate: 2024-07-02
DOI: 10.3390/a17070290
Issue No: Vol. 17, No. 7 (2024)
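The federated core of such a strategy is aggregating locally trained models without centralising data. A minimal FedAvg-style sketch (plain weighted averaging; FedSec's importance-based node selection and flow-table updates are not reproduced here):

import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model weights, weighted by local dataset size."""
    total = sum(client_sizes)
    layers = len(client_weights[0])
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(layers)
    ]

# Three SDN controllers, each holding a tiny two-layer model.
clients = [[np.random.randn(4, 2), np.random.randn(2)] for _ in range(3)]
sizes = [1200, 400, 2400]                  # local flow-record counts (made up)
global_model = fedavg(clients, sizes)
print([p.shape for p in global_model])     # [(4, 2), (2,)]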
- Algorithms, Vol. 17, Pages 291: Prime Time Tactics—Sieve Tweaks
and Boosters
Authors: Mircea Ghidarcea, Decebal Popescu
First page: 291
Abstract: In a landscape where interest in prime sieving has waned and practitioners are few, we are still hoping for a domain renaissance, fueled by a resurgence of interest and a fresh wave of innovation. Building upon years of extensive research and experimentation, this article aims to contribute by presenting a heterogeneous compilation of generic tweaks and boosters aimed at revitalizing prime sieving methodologies. Drawing from a wealth of resurfaced knowledge and refined sieving algorithms, techniques, and optimizations, we unveil a diverse array of strategies designed to elevate the efficiency, accuracy, and scalability of prime sieving algorithms; these tweaks and boosters represent a synthesis of old wisdom and new discoveries, offering practical guidance for researchers and practitioners alike.
Citation: Algorithms
PubDate: 2024-07-03
DOI: 10.3390/a17070291
Issue No: Vol. 17, No. 7 (2024)
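In the spirit of the generic tweaks compiled above, a classic example is the odd-only sieve of Eratosthenes, which halves the memory and work of the naive sieve by never storing even numbers; a short sketch:

def odd_sieve(limit: int) -> list[int]:
    """Primes up to limit, storing flags for odd numbers only."""
    if limit < 2:
        return []
    # is_prime[i] represents the odd number 2*i + 1 (index 0 -> 1).
    size = (limit + 1) // 2
    is_prime = bytearray([1]) * size
    is_prime[0] = 0                          # 1 is not prime
    i = 1
    while (2 * i + 1) ** 2 <= limit:
        if is_prime[i]:
            p = 2 * i + 1
            # Start crossing off at p*p; a step of 2p in value is a
            # step of p in odd-index space.
            for j in range((p * p) // 2, size, p):
                is_prime[j] = 0
        i += 1
    return [2] + [2 * i + 1 for i in range(1, size) if is_prime[i]]

print(odd_sieve(50))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]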
- Algorithms, Vol. 17, Pages 292: Central Kurdish Text-to-Speech Synthesis
with Novel End-to-End Transformer Training
Authors: Hawraz A. Ahmad, Tarik A. Rashid
First page: 292
Abstract: Recent advancements in text-to-speech (TTS) models have aimed to streamline the two-stage process into a single-stage training approach. However, many single-stage models still lag behind in audio quality, particularly when handling Kurdish text and speech. There is a critical need to enhance text-to-speech conversion for the Kurdish language, particularly for the Sorani dialect, which has been relatively neglected and is underrepresented in recent text-to-speech advancements. This study introduces an end-to-end TTS model for efficiently generating high-quality Kurdish audio. The proposed method leverages a variational autoencoder (VAE) that is pre-trained for audio waveform reconstruction and is augmented by adversarial training. This involves aligning the prior distribution established by the pre-trained encoder with the posterior distribution of the text encoder within latent variables. Additionally, a stochastic duration predictor is incorporated to imbue synthesized Kurdish speech with diverse rhythms. By aligning latent distributions and integrating the stochastic duration predictor, the proposed method facilitates the real-time generation of natural Kurdish speech audio, offering flexibility in pitch and rhythm. Empirical evaluation via the mean opinion score (MOS) on a custom dataset confirms the superior performance of our approach (MOS of 3.94) compared with that of a one-stage system and other two-stage systems, as assessed through a subjective human evaluation.
Citation: Algorithms
PubDate: 2024-07-03
DOI: 10.3390/a17070292
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 293: A Histogram Publishing Method under Differential Privacy
Authors: Jianzhang Chen, Shuo Zhou, Jie Qiu, Yixin Xu, Bozhe Zeng, Wanchuan Fang, Xiangying Chen, Yipeng Huang, Zhengquan Xu, Youqin Chen
First page: 293
Abstract: Differential privacy, a cornerstone of privacy-preserving techniques, plays an indispensable role in ensuring the secure handling and sharing of sensitive data analysis across domains such as in census, healthcare, and social networks. Histograms, serving as a visually compelling tool for presenting analytical outcomes, are widely employed in these sectors. Currently, numerous algorithms for publishing histograms under differential privacy have been developed, striving to balance privacy protection with the provision of useful data. Nonetheless, the pivotal challenge concerning the effective enhancement of precision for small bins (those intervals that are narrowly defined or contain a relatively small number of data points) within histograms has yet to receive adequate attention and in-depth investigation from experts. In standard DP histogram publishing, adding noise without regard for bin size can result in small data bins being disproportionately influenced by noise, potentially severely impairing the overall accuracy of the histogram. In response to this challenge, this paper introduces the SReB_GCA sanitization algorithm designed to enhance the accuracy of small bins in DP histograms. The SReB_GCA approach involves sorting the bins from smallest to largest and applying a greedy grouping strategy, with a predefined lower bound on the mean relative error required for a bin to be included in a group. Our theoretical analysis reveals that sorting bins in ascending order prior to grouping effectively prioritizes the accuracy of smaller bins. SReB_GCA ensures strict ϵ-DP compliance and strikes a careful balance between reconstruction error and noise error, thereby not only initially improving the accuracy of small bins but also approximately optimizing the mean relative error of the entire histogram. To validate the efficiency of our proposed SReB_GCA method, we conducted extensive experiments using four diverse datasets, including two real-life datasets and two synthetic ones. The experimental results, quantified by the Kullback–Leibler Divergence (KLD), show that the SReB_GCA algorithm achieves substantial performance enhancement compared to the baseline method (DP_BASE) and several other established approaches for differential privacy histogram publication.
Citation: Algorithms
PubDate: 2024-07-04
DOI: 10.3390/a17070293
Issue No: Vol. 17, No. 7 (2024)
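The grouping idea above can be illustrated in a few lines: sort bins ascending, greedily merge small neighbours, add Laplace noise to each group total, and share the noisy average among group members. The sketch below uses a simple size-based grouping criterion as a stand-in for the paper's mean-relative-error bound, so it is not SReB_GCA itself:

import numpy as np

def grouped_dp_histogram(counts, epsilon, group_target=50.0):
    """Toy DP histogram with small-bin grouping (epsilon-DP overall).

    Small bins are merged so Laplace noise (scale 1/epsilon for unit
    sensitivity) is shared across a group, shrinking per-bin noise at
    the cost of some reconstruction error."""
    rng = np.random.default_rng(0)
    order = np.argsort(counts)                # ascending: small bins first
    noisy = np.zeros(len(counts), dtype=float)
    group = []
    for idx in order:
        group.append(idx)
        # Close the group once it is "large enough" (toy criterion).
        if sum(counts[i] for i in group) >= group_target or idx == order[-1]:
            total = sum(counts[i] for i in group)
            total += rng.laplace(scale=1.0 / epsilon)
            noisy[group] = max(total, 0.0) / len(group)
            group = []
    return noisy

counts = np.array([2, 3, 1, 4, 120, 90, 2, 300])
print(grouped_dp_histogram(counts, epsilon=0.5).round(1))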
- Algorithms, Vol. 17, Pages 294: Logical Execution Time and Time-Division
Multiple Access in Multicore Embedded Systems: A Case Study
Authors: Carlos-Antonio Mosqueda-Arvizu, Julio-Alejandro Romero-González, Diana-Margarita Córdova-Esparza, Juan Terven, Ricardo Chaparro-Sánchez, Juvenal Rodríguez-Reséndiz
First page: 294
Abstract: The automotive industry has recently adopted multicore processors and microcontrollers to meet the requirements of new features, such as autonomous driving, and comply with the latest safety standards. However, inter-core communication poses a challenge in ensuring real-time requirements such as time determinism and low latencies. Concurrent access to shared buffers makes predicting the flow of data difficult, leading to decreased algorithm performance. This study explores the integration of Logical Execution Time (LET) and Time-Division Multiple Access (TDMA) models in multicore embedded systems to address the challenges in inter-core communication by synchronizing read/write operations across different cores, significantly reducing latency variability and improving system predictability and consistency. Experimental results demonstrate that this integrated approach eliminates data loss and maintains fixed operation rates, achieving a consistent latency of 11 ms. The LET-TDMA method reduces latency variability to approximately 1 ms, maintaining a maximum delay of 1.002 ms and a minimum delay of 1.001 ms, compared to the variability in the LET-only method, which ranged from 3.2846 ms to 8.9257 ms for different configurations.
Citation: Algorithms
PubDate: 2024-07-05
DOI: 10.3390/a17070294
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 295: VMP-ER: An Efficient Virtual Machine
Placement Algorithm for Energy and Resources Optimization in Cloud Data
Center
Authors: Hasanein D. Rjeib, Gabor Kecskemeti
First page: 295
Abstract: Cloud service providers deliver computing services on demand using the Infrastructure as a Service (IaaS) model. In a cloud data center, several virtual machines (VMs) can be hosted on a single physical machine (PM) with the help of virtualization. Virtual machine placement (VMP) involves assigning VMs across various physical machines, a crucial process impacting energy consumption and resource usage in the cloud data center. Nonetheless, finding an effective placement is challenging owing to factors like hardware heterogeneity and the scalability of cloud data centers. This paper proposes an efficient algorithm named VMP-ER aimed at optimizing power consumption and reducing resource wastage. Our algorithm achieves this by decreasing the number of running physical machines, and it gives priority to energy-efficient servers. Additionally, it improves resource utilization across physical machines, thus minimizing wastage and ensuring balanced resource allocation.
Citation: Algorithms
PubDate: 2024-07-05
DOI: 10.3390/a17070295
Issue No: Vol. 17, No. 7 (2024)
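A simple baseline capturing the two priorities described above (fewer active hosts; energy-efficient hosts first) is first-fit decreasing over CPU demand. The sketch below is such a baseline, with the caveat that VMP-ER's actual heuristics are more elaborate:

def place_vms(vms, pms):
    """First-fit decreasing: biggest VMs first, most efficient PMs first.

    vms: list of (vm_id, cpu_demand)
    pms: list of dicts with 'id', 'cpu' capacity, and 'watts_per_cpu'.
    Returns {vm_id: pm_id}; activates a PM only when needed."""
    pms = sorted(pms, key=lambda p: p["watts_per_cpu"])   # efficient first
    free = {p["id"]: p["cpu"] for p in pms}
    active, placement = set(), {}
    for vm_id, demand in sorted(vms, key=lambda v: -v[1]):
        # Prefer already-active hosts to avoid powering on new ones.
        candidates = [p for p in pms if p["id"] in active] + \
                     [p for p in pms if p["id"] not in active]
        for p in candidates:
            if free[p["id"]] >= demand:
                free[p["id"]] -= demand
                active.add(p["id"])
                placement[vm_id] = p["id"]
                break
    return placement

vms = [("vm1", 8), ("vm2", 4), ("vm3", 4), ("vm4", 2)]
pms = [{"id": "pm1", "cpu": 16, "watts_per_cpu": 12.0},
       {"id": "pm2", "cpu": 16, "watts_per_cpu": 9.0}]
print(place_vms(vms, pms))   # vm1..vm3 fill efficient pm2; vm4 spills to pm1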
- Algorithms, Vol. 17, Pages 296: Hardness and Approximability of Dimension
Reduction on the Probability Simplex
Authors: Roberto Bruno
First page: 296
Abstract: Dimension reduction is a technique used to transform data from a high-dimensional space into a lower-dimensional space, aiming to retain as much of the original information as possible. This approach is crucial in many disciplines like engineering, biology, astronomy, and economics. In this paper, we consider the following dimensionality reduction instance: Given an n-dimensional probability distribution p and an integer m<n, we aim to find the m-dimensional probability distribution q that is the closest to p, using the Kullback–Leibler divergence as the measure of closeness. We prove that the problem is strongly NP-hard, and we present an approximation algorithm for it.
Citation: Algorithms
PubDate: 2024-07-06
DOI: 10.3390/a17070296
Issue No: Vol. 17, No. 7 (2024)
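In the notation above, the closeness measure is the Kullback–Leibler divergence, which for two distributions u and v on a common support is

D_{\mathrm{KL}}(u \,\|\, v) \;=\; \sum_{i} u_i \log \frac{u_i}{v_i}.

In the paper's instance, the m-dimensional q is compared with the n-dimensional p through an aggregation of p's coordinates (the exact pairing is defined in the paper), and minimising this divergence is what is shown to be strongly NP-hard.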
- Algorithms, Vol. 17, Pages 297: Crystal Symmetry-Inspired Algorithm for
Optimal Design of Contemporary Mono Passivated Emitter and Rear Cell Solar
Photovoltaic Modules
Authors: Ram Ishwar Vais, Kuldeep Sahay, Tirumalasetty Chiranjeevi, Ramesh Devarapalli, Łukasz Knypiński
First page: 297
Abstract: A metaheuristic algorithm named the Crystal Structure Algorithm (CrSA), inspired by the symmetric arrangement of atoms, molecules, or ions in crystalline minerals, has been used for the accurate modeling of Mono Passivated Emitter and Rear Cell (PERC) WSMD-545 and CS7L-590 MS solar photovoltaic (PV) modules. The suggested algorithm is a concise and parameter-free approach that does not require the identification of any intrinsic parameter during the optimization stage. It is based on crystal structure generation by combining basis and lattice points. The proposed algorithm is adopted to minimize the sum of the squares of the errors at the maximum power point, as well as the short circuit and open circuit points. Several runs are carried out to examine the V-I characteristics of the PV panels under consideration and the nature of the derived parameters. The parameters generated by the proposed technique offer the lowest error over several executions, indicating the suitability of the technique for this application. To validate the performance of the proposed approach, convergence curves of the Mono PERC WSMD-545 and CS7L-590 MS PV modules obtained using the CrSA are compared with those obtained using recent optimization algorithms (OAs) in the literature. The proposed approach exhibited the fastest rate of convergence on each of the PV panels.
Citation: Algorithms
PubDate: 2024-07-06
DOI: 10.3390/a17070297
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 298: A Sparsity-Invariant Model via Unifying
Depth Prediction and Completion
Authors: Shuling Wang, Fengze Jiang, Xiaojin Gong
First page: 298
Abstract: The development of a sparsity-invariant depth completion model capable of handling varying levels of input depth sparsity is highly desirable in real-world applications. However, existing sparsity-invariant models tend to degrade when the input depth points are extremely sparse. In this paper, we propose a new model that combines the advantageous designs of depth completion and monocular depth estimation tasks to achieve sparsity invariance. Specifically, we construct a dual-branch architecture with one branch dedicated to depth prediction and the other to depth completion. Additionally, we integrate the multi-scale local planar module in the decoders of both branches. Experimental results on the NYU Depth V2 benchmark and the OPPO prototype dataset equipped with the Spot-iToF316 sensor demonstrate that our model achieves reliable results even in cases with irregularly distributed, limited, or absent depth information.
Citation: Algorithms
PubDate: 2024-07-06
DOI: 10.3390/a17070298
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 299: Mixed Graph Colouring as Scheduling a
Partially Ordered Set of Interruptible Multi-Processor Tasks with Integer
Due Dates
Authors: Evangelina I. Mihova, Yuri N. Sotskov
First page: 299
Abstract: We investigate relationships between scheduling problems with the bottleneck objective functions (minimising makespan or maximal lateness) and problems of optimal colourings of the mixed graphs. The investigated scheduling problems have integer durations of the multi-processor tasks (operations), integer release dates and integer due dates of the given jobs. In the studied scheduling problems, it is required to find an optimal schedule for processing the partially ordered operations, given that operation interruptions are allowed and indicated subsets of the unit-time operations must be processed simultaneously. First, we show that the input data for any considered scheduling problem can be completely determined by the corresponding mixed graph. Second, we prove that solvable scheduling problems can be reduced to problems of finding optimal colourings of corresponding mixed graphs. Third, finding an optimal colouring of the mixed graph is equivalent to the considered scheduling problem determined by the same mixed graph. Finally, due to the proven equivalence of the considered optimisation problems, most of the results that were proven for the optimal colourings of mixed graphs generate similar results for considered scheduling problems, and vice versa.
Citation: Algorithms
PubDate: 2024-07-06
DOI: 10.3390/a17070299
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 300: Continuous Recognition of Teachers’
Hand Signals for Students with Attention Deficits
Authors: Ivane Delos Santos Chen, Chieh-Ming Yang, Shang-Shu Wu, Chih-Kang Yang, Mei-Juan Chen, Chia-Hung Yeh, Yuan-Hong Lin
First page: 300
Abstract: In the era of inclusive education, students with attention deficits are integrated into the general classroom. To ensure a seamless transition of students’ focus towards the teacher’s instruction throughout the course and to align with the teaching pace, this paper proposes a continuous recognition algorithm for capturing teachers’ dynamic gesture signals. This algorithm aims to offer instructional attention cues for students with attention deficits. Using the body landmarks of the teacher’s skeleton extracted by the vision- and machine learning-based MediaPipe BlazePose, the proposed method applies simple rules to dynamically detect the teacher’s hand signals and provides three kinds of attention cues (Pointing to left, Pointing to right, and Non-pointing) during the class. Experimental results show that the average accuracy, sensitivity, specificity, precision, and F1 score reached 88.31%, 91.03%, 93.99%, 86.32%, and 88.03%, respectively. By analyzing non-verbal behavior, our competently performing method can replace verbal reminders from the teacher and help students with attention deficits in inclusive education.
Citation: Algorithms
PubDate: 2024-07-07
DOI: 10.3390/a17070300
Issue No: Vol. 17, No. 7 (2024)
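A minimal version of the kind of landmark rule described above can be written directly against MediaPipe's pose solution. The margin value and the wrist-versus-shoulder-midline comparison are illustrative assumptions, not the paper's calibrated rules:

import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def classify_hand_signal(bgr_frame, margin=0.10):
    """Return 'Pointing to left', 'Pointing to right', or 'Non-pointing'.

    Compares wrist x-positions against the shoulder midline in
    normalized image coordinates (image left/right, not the teacher's)."""
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB))
    if not results.pose_landmarks:
        return "Non-pointing"
    lm = results.pose_landmarks.landmark
    mid_x = (lm[mp_pose.PoseLandmark.LEFT_SHOULDER].x +
             lm[mp_pose.PoseLandmark.RIGHT_SHOULDER].x) / 2
    for wrist in (mp_pose.PoseLandmark.LEFT_WRIST,
                  mp_pose.PoseLandmark.RIGHT_WRIST):
        if lm[wrist].x < mid_x - margin:
            return "Pointing to left"
        if lm[wrist].x > mid_x + margin:
            return "Pointing to right"
    return "Non-pointing"

# Usage: label = classify_hand_signal(cv2.imread("teacher.jpg"))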
- Algorithms, Vol. 17, Pages 301: To Cache or Not to Cache
Authors: Steven Lyons, Rangaswami
First page: 301
Abstract: Unlike conventional CPU caches, non-datapath caches, such as host-side flash caches which are extensively used as storage caches, have distinct requirements. While every cache miss results in a cache update in a conventional cache, non-datapath caches allow for the flexibility of selective caching, i.e., the option of not having to update the cache on each miss. We propose a new, generalized, bimodal caching algorithm, Fear Of Missing Out (FOMO), for managing non-datapath caches. Being generalized has the benefit of allowing any datapath cache replacement policy, such as LRU, ARC, or LIRS, to be augmented by FOMO to make these datapath caching algorithms better suited for non-datapath caches. Operating in two states, FOMO is selective—it selectively disables cache insertion and replacement depending on the learned behavior of the workload. FOMO is lightweight and tracks inexpensive metrics in order to identify these workload behaviors effectively. FOMO is evaluated using three different cache replacement policies against the current state-of-the-art non-datapath caching algorithms, using five different storage system workload repositories (totaling 176 workloads) for six different cache size configurations, each sized as a percentage of each workload’s footprint. Our extensive experimental analysis reveals that FOMO can improve upon other non-datapath caching algorithms across a range of production storage workloads, while also reducing the write rate.
Citation: Algorithms
PubDate: 2024-07-07
DOI: 10.3390/a17070301
Issue No: Vol. 17, No. 7 (2024)
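The bimodal idea above (wrap an existing replacement policy and gate insertions on learned workload behavior) can be shown with a toy LRU wrapper. The gating signal here, a recent-miss reuse ratio, is a simplified stand-in for FOMO's actual states and metrics:

from collections import OrderedDict, deque

class SelectiveLRU:
    """LRU whose insertions can be disabled, in the spirit of bimodal
    non-datapath caching: on a miss we may choose NOT to cache."""

    def __init__(self, capacity, window=64, threshold=0.1):
        self.cache = OrderedDict()
        self.capacity = capacity
        self.recent_misses = deque(maxlen=window)   # recently missed keys
        self.threshold = threshold

    def _insertion_enabled(self):
        # Enable insertion only if missed keys tend to be re-requested.
        if not self.recent_misses:
            return True
        reuse = 1 - len(set(self.recent_misses)) / len(self.recent_misses)
        return reuse >= self.threshold

    def access(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)
            return True                      # hit
        insert = self._insertion_enabled()
        self.recent_misses.append(key)
        if insert:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)    # evict LRU victim
            self.cache[key] = None
        return False                         # miss (possibly not cached)

c = SelectiveLRU(capacity=2)
print(sum(c.access(k) for k in [1, 2, 1, 3, 1, 2, 1]))   # hit count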
- Algorithms, Vol. 17, Pages 302: SCMs: Systematic Conglomerated Models for
Audio Cough Signal Classification
Authors: Sunil Kumar Prabhakar, Dong-Ok Won
First page: 302
Abstract: The cough is a common and natural physiological response of the human body that expels air and other waste from the airways. Coughs occur due to environmental factors, allergic responses, pollution, or disease, and can be either dry or wet depending on the amount of mucus produced. A characteristic feature of the cough is its sound, mostly a quacking-like sound. Human cough sounds can be monitored continuously, so cough sound classification has attracted considerable interest in the research community over the last decade. In this research, three systematic conglomerated models (SCMs) are proposed for audio cough signal classification. The first conglomerated technique utilizes the concept of robust models like the Cross-Correlation Function (CCF) and Partial Cross-Correlation Function (PCCF) model, the Least Absolute Shrinkage and Selection Operator (LASSO) model, and the elastic net regularization model with Gabor dictionary analysis, together with efficient ensemble machine learning techniques; the second technique utilizes the concept of stacked conditional autoencoders (SAEs); and the third technique utilizes efficient feature extraction schemes like the Tunable Q Wavelet Transform (TQWT), sparse TQWT, the Maximal Information Coefficient (MIC), and the Distance Correlation Coefficient (DCC), together with feature selection techniques like the Binary Tunicate Swarm Algorithm (BTSA), aggregation functions (AFs), factor analysis (FA), and explanatory factor analysis (EFA), classified with machine learning classifiers such as the kernel extreme learning machine (KELM), arc-cosine ELM, and Rat Swarm Optimization (RSO)-based KELM. The techniques are evaluated on publicly available datasets, and the results show that the highest classification accuracy of 98.99% was obtained when sparse TQWT with AF was implemented with an arc-cosine ELM classifier.
Citation: Algorithms
PubDate: 2024-07-08
DOI: 10.3390/a17070302
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 303: On Implementing a Two-Step Interior Point
Method for Solving Linear Programs
Authors: Sajad Fathi Hafshejani, Daya Gaur, Robert Benkoczi
First page: 303
Abstract: A new two-step interior point method for solving linear programs is presented. The technique uses a convex combination of the auxiliary and central points to compute the search direction. To update the central point, we find the best value for the step size such that the feasibility condition holds. Since we use the information from the previous iteration to find the search direction, the inverse of the system is evaluated only once per iteration. A detailed empirical evaluation is performed on NETLIB instances, comparing two variants of the approach to the primal-dual log barrier interior point method. Results show that the proposed method is faster, reducing the number of iterations and CPU time by 27% and 18%, respectively, on the NETLIB instances tested, compared to the classical interior point algorithm.
Citation: Algorithms
PubDate: 2024-07-08
DOI: 10.3390/a17070303
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 304: Sequential Convex Programming for
Nonlinear Optimal Control in UAV Trajectory Planning
Authors: Yong Li, Qidan Zhu, Ahsan Elahi
First page: 304
Abstract: In this paper, an algorithm is proposed to solve the non-convex optimization problem using sequential convex programming. An approximation method is used to handle the collision avoidance constraint: an iterative approach estimates the non-convex constraints, replacing them with their linear approximations. Simulations show that this method allows quadcopters to take off from a given initial position and fly to the desired final position within a specified flight time, while guaranteeing that the quadcopters will not collide with each other in different scenarios.
Citation: Algorithms
PubDate: 2024-07-08
DOI: 10.3390/a17070304
Issue No: Vol. 17, No. 7 (2024)
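A standard convexification of the kind used above linearises the non-convex pairwise collision constraint \|p_i - p_j\|_2 \ge R about the previous iterate (\bar p_i, \bar p_j):

\|\bar p_i - \bar p_j\|_2 \;+\; \frac{(\bar p_i - \bar p_j)^{\top}\big[ (p_i - p_j) - (\bar p_i - \bar p_j) \big]}{\|\bar p_i - \bar p_j\|_2} \;\ge\; R.

The left-hand side is affine in the decision variables p_i, p_j and, since the norm is convex, under-estimates \|p_i - p_j\|_2, so the linearised constraint is conservative and each iteration solves a convex program. This is the textbook sequential-convex-programming treatment; whether the paper uses exactly this form is an assumption.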
- Algorithms, Vol. 17, Pages 305: Equity in Transportation Asset Management:
A Proposed Framework
Authors: Sara Arezoumand, Omar Smadi
First page: 305
Abstract: Transportation asset management has historically overlooked equity considerations. However, recently, there has been a significant increase in concerns about this issue, leading to a range of research and practices aimed at achieving more equitable outcomes. Yet, addressing equity is challenging and time-consuming, given its complexity and multifaceted nature. Several factors can significantly impact the outcome of an analysis, including the definition of equity, the evaluation and quantification of its impacts, and the community classification. As a result, there can be a wide range of interpretations of what constitutes equity. Therefore, there is no single correct or incorrect approach for equity evaluation, and different perspectives, impacts, and analysis methods could be considered for this purpose. This study reviews previous research on how transportation agencies are integrating equity into transportation asset management, particularly pavement management systems. The primary objective is to investigate important equity factors for pavement management and propose a prototype framework that integrates economic, environmental, and social equity considerations into the decision-making process for pavement maintenance, rehabilitation, and reconstruction projects. The proposed framework consists of two main steps: (1) defining objectives based on the three equity dimensions, and (2) analyzing key factors for identifying underserved areas through a case study approach. The case study analyzed pavement condition and sociodemographic data for California’s Bay Area. Statistical analysis and a machine learning method revealed that areas with higher poverty rates and worse air quality tend to have poorer pavement conditions, highlighting the need to consider these factors when defining underserved areas in the Bay Area and promoting equity in pavement management decision-making. The proposed framework incorporates an optimization problem to simultaneously minimize disparities in pavement conditions between underserved and other areas, reduce greenhouse gas emissions from construction and traffic disruptions, and maximize overall network pavement condition subject to budget constraints. By incorporating all three equity aspects into a quantitative decision-support framework with specific objectives, this study proposes a novel approach for transportation agencies to promote sustainable and equitable asset management practices.
Citation: Algorithms
PubDate: 2024-07-09
DOI: 10.3390/a17070305
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 306: Performance Evaluation of Fractional
Proportional–Integral–Derivative Controllers Tuned by
Heuristic Algorithms for Nonlinear Interconnected Tanks
Authors: Raúl Pazmiño, Wilson Pavon, Matthew Armstrong, Silvio Simani
First page: 306
Abstract: This article presents an in-depth analysis of three advanced strategies for tuning fractional PID (FOPID) controllers for a nonlinear system of interconnected tanks, simulated using MATLAB. The study focuses on evaluating the performance characteristics of system responses controlled by FOPID controllers tuned through three heuristic algorithms: Ant Colony Optimization (ACO), Grey Wolf Optimizer (GWO), and Flower Pollination Algorithm (FPA). Each algorithm aims to minimize its respective cost function using various performance metrics. The nonlinear model was linearized around an equilibrium point using Taylor series expansion and Laplace transforms to facilitate control. The FPA performed best, with the lowest Integral Square Error (ISE) criterion value (297.83) and faster convergence of the controller constants and fractional orders. This comprehensive evaluation underscores the importance of selecting the appropriate tuning strategy and performance index, demonstrating that the FPA provides the most efficient and robust tuning for FOPID controllers in nonlinear systems. The results highlight the efficacy of metaheuristic algorithms in optimizing complex control systems, providing valuable insights for future research and practical applications, thereby contributing to the advancement of control systems engineering.
Citation: Algorithms
PubDate: 2024-07-10
DOI: 10.3390/a17070306
Issue No: Vol. 17, No. 7 (2024)
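For context, the fractional PID (FOPID, or PI^λD^μ) controller tuned above generalises the classical PID law with two fractional orders:

C(s) \;=\; K_p \;+\; \frac{K_i}{s^{\lambda}} \;+\; K_d\, s^{\mu}, \qquad \lambda, \mu > 0 \ (\text{typically } 0 < \lambda, \mu < 2),

so each heuristic searches over five parameters (K_p, K_i, K_d, λ, μ) rather than the usual three, which is precisely what makes metaheuristic tuning attractive here.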
- Algorithms, Vol. 17, Pages 307: Evaluating the Expressive Range of Super
Mario Bros Level Generators
Authors: Hans Schaa, Nicolas A. Barriga
First page: 307
Abstract: Procedural Content Generation for video games (PCG) is widely used by today’s video game industry to create huge open worlds or enhance replayability. However, there is little scientific evidence that these systems produce high-quality content. In this document, we evaluate three open-source automated level generators for Super Mario Bros in addition to the original levels used for training. These are based on Genetic Algorithms, Generative Adversarial Networks, and Markov Chains. The evaluation was performed through an Expressive Range Analysis (ERA) on 200 levels with nine metrics. The results show how analyzing the algorithms’ expressive range can help us evaluate the generators as a preliminary measure to study whether they respond to users’ needs. This method allows us to recognize potential problems early in the content generation process, in addition to taking action to guarantee quality content when a generator is used.
Citation: Algorithms
PubDate: 2024-07-11
DOI: 10.3390/a17070307
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 308: Automatic Vertical Parking Reference
Trajectory Based on Improved Immune Shark Smell Optimization
Authors: Yan Chen, Gang Liu, Longda Wang, Bing Xia
First page: 308
Abstract: Parking path optimization is the principal problem of automatic vertical parking (AVP); however, it is difficult to determine a collision-avoiding, smooth, and accurate optimized parking path using traditional parking reference trajectory optimization methods. In order to implement high-performance automatic parking reference trajectory optimization, we establish an automatic parking reference trajectory optimization model using cubic spline interpolation, and we propose an improved immune shark smell optimization (IISSO) algorithm to solve it. Firstly, we take the length of the parking reference trajectory as the optimization objective and introduce an intelligent automatic parking path optimization model using cubic spline interpolation. Secondly, the improved immune shark smell optimization algorithm combines the immune, refraction, and Gaussian variation mechanisms, effectively improving its global optimization ability. The simulation results for the parking path optimization experiments indicate that the proposed IISSO has higher optimization accuracy and a faster calculation speed; hence, it can obtain a parking path with higher optimization performance.
Citation: Algorithms
PubDate: 2024-07-11
DOI: 10.3390/a17070308
Issue No: Vol. 17, No. 7 (2024)
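The optimization objective above (the length of a cubic-spline reference trajectory) is easy to state concretely. A small sketch using SciPy with made-up waypoints follows; the IISSO would perturb interior waypoints to minimise this length subject to collision constraints, which are not modelled here:

import numpy as np
from scipy.interpolate import CubicSpline

def spline_path_length(waypoints, samples=2000):
    """Arc length of a cubic-spline path through 2-D waypoints."""
    pts = np.asarray(waypoints, dtype=float)
    t = np.linspace(0.0, 1.0, len(pts))       # chordal param would also work
    sx, sy = CubicSpline(t, pts[:, 0]), CubicSpline(t, pts[:, 1])
    tt = np.linspace(0.0, 1.0, samples)
    dx, dy = sx(tt, 1), sy(tt, 1)             # first derivatives
    return np.trapz(np.hypot(dx, dy), tt)

# Made-up vertical-parking waypoints: approach, turn-in, slot.
waypoints = [(0, 0), (4, 0.5), (6, 2), (7, 5)]
print(round(spline_path_length(waypoints), 3))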
- Algorithms, Vol. 17, Pages 309: Real-Time Tracking and Detection of
Cervical Cancer Precursor Cells: Leveraging SIFT Descriptors in Mobile
Video Sequences for Enhanced Early Diagnosis
Authors: Jesus Eduardo Alcaraz-Chavez, Adriana del Carmen Téllez-Anguiano, Juan Carlos Olivares-Rojas, Ricardo Martínez-Parrales
First page: 309
Abstract: Cervical cancer ranks among the leading causes of mortality in women worldwide, underscoring the critical need for early detection to ensure patient survival. While the Pap smear test is widely used, its effectiveness is hampered by the inherent subjectivity of cytological analysis, impacting its sensitivity and specificity. This study introduces an innovative methodology for detecting and tracking precursor cervical cancer cells using SIFT descriptors in video sequences captured with mobile devices. More than one hundred digital images were analyzed from Papanicolaou smears provided by the State Public Health Laboratory of Michoacán, Mexico, along with over 1800 unique examples of cervical cancer precursor cells. SIFT descriptors enabled real-time correspondence of precursor cells, yielding results demonstrating 98.34% accuracy, 98.3% precision, 98.2% recall, and an F-measure of 98.05%. These methods were meticulously optimized for real-time analysis, showcasing significant potential to enhance the accuracy and efficiency of the Pap smear test in early cervical cancer detection.
Citation: Algorithms
PubDate: 2024-07-12
DOI: 10.3390/a17070309
Issue No: Vol. 17, No. 7 (2024)
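The correspondence step above can be reproduced with OpenCV's SIFT implementation plus Lowe's ratio test. This sketch matches keypoints between two consecutive frames; the file names and the 0.75 ratio are illustrative:

import cv2

def match_sift(frame_a, frame_b, ratio=0.75):
    """SIFT keypoint correspondences between two grayscale frames."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(frame_a, None)
    kp_b, des_b = sift.detectAndCompute(frame_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe's ratio test: keep matches clearly better than the runner-up.
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < ratio * n.distance]
    return kp_a, kp_b, good

a = cv2.imread("smear_frame_001.png", cv2.IMREAD_GRAYSCALE)
b = cv2.imread("smear_frame_002.png", cv2.IMREAD_GRAYSCALE)
kp_a, kp_b, good = match_sift(a, b)
print(f"{len(good)} stable correspondences")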
- Algorithms, Vol. 17, Pages 310: Messy Broadcasting in Grid
Authors: Aria Adibi, Hovhannes A. Harutyunyan
First page: 310
Abstract: In classical broadcast models, information is disseminated in synchronous rounds under the constant communication time model, wherein a node may only inform one of its neighbors in each time-unit—also known as the processor-bound model. These models assume either a coordinating leader or that each node has a set of coordinated actions optimized for each originator, which may require nodes to have sufficient storage, processing power, and the ability to determine the originator. This assumption is not always ideal, and a broadcast model based on the node’s local knowledge can sometimes be more effective. Messy models address these issues by eliminating the need for a leader, knowledge of the starting time, and the identity of the originator, relying solely on local knowledge available to each node. This paper investigates the messy broadcast time and optimal scheme in a grid graph, a structure widely used in computer networking systems, particularly in parallel computing, due to its robustness, scalability, fault tolerance, and simplicity. The focus is on scenarios where the originator is located at one of the corner vertices, aiming to understand the efficiency and performance of messy models in such grid structures.
Citation: Algorithms
PubDate: 2024-07-12
DOI: 10.3390/a17070310
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 311: Generating m-Ary Gray Codes and Related
Algorithms
Authors: Stefka Bouyuklieva, Iliya Bouyukliev, Valentin Bakoev, Maria Pashinska-Gadzheva
First page: 311
Abstract: In this work, we systematize several implementations of the Gray code over an alphabet with m≥2 elements, which we present in C code so that they can be used directly after copying from the text. We consider two variants—reflected and modular (or shifted) m-ary Gray codes. For both variants, we present the ranking and unranking functions, as well as algorithms for generating only a part of the code, more precisely the codewords between two given vectors. Finally, we give algorithms that generate a maximal set of non-proportional vectors of length n over the given alphabet in a Gray code.
Citation: Algorithms
PubDate: 2024-07-13
DOI: 10.3390/a17070311
Issue No: Vol. 17, No. 7 (2024)
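A compact generator for the reflected variant described above (here in Python rather than the paper's C) produces all m-ary codewords of length n such that consecutive words differ in exactly one position:

def reflected_gray(m: int, n: int) -> list[tuple[int, ...]]:
    """Reflected m-ary Gray code: successive words differ in one digit."""
    code = [()]
    for _ in range(n):
        nxt = []
        for d in range(m):
            # Reverse the previous block on odd digits ("reflection").
            block = code if d % 2 == 0 else list(reversed(code))
            nxt.extend((d,) + w for w in block)
        code = nxt
    return code

for word in reflected_gray(3, 2):
    print(word)
# (0,0) (0,1) (0,2) (1,2) (1,1) (1,0) (2,0) (2,1) (2,2)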
- Algorithms, Vol. 17, Pages 312: Normalization of Web of Science
Institution Names Based on Deep Learning
Authors: Zijie Jia, Zhijian Fang, Huaxiong Zhang
First page: 312
Abstract: Academic evaluation is a process of assessing and measuring researchers, institutions, or disciplinary fields. Its goal is to evaluate their contributions and impact in the academic community, as well as to determine their reputation and status within specific disciplinary domains. Web of Science (WOS), being the most renowned global academic citation database, provides crucial data for academic evaluation. However, due to factors such as institutional changes, translation discrepancies, transcription errors in databases, and authors’ individual writing habits, there exist ambiguities in the institution names recorded in the WOS literature, which in turn affect the scientific evaluation of researchers and institutions. To address the issue of data reliability in academic evaluation, this paper proposes a WOS institution name synonym recognition framework that integrates multi-granular embeddings and multi-contextual information.
Citation: Algorithms
PubDate: 2024-07-14
DOI: 10.3390/a17070312
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 313: A Novel Hybrid Crow Search Arithmetic
Optimization Algorithm for Solving Weighted Combined Economic Emission
Dispatch with Load-Shifting Practice
Authors: Bishwajit Dey, Gulshan Sharma, Pitshou N. Bokoro
First page: 313
Abstract: The crow search arithmetic optimization algorithm (CSAOA) is introduced in this article as a novel hybrid optimization technique. The proposed strategy is a population-based metaheuristic inspired by crows’ food-hiding techniques, merged with the recently created, simple yet robust arithmetic optimization algorithm (AOA). The proposed method’s performance and superiority over other existing methods are evaluated using six benchmark functions, unimodal and multimodal in nature, and real-time optimization problems related to power systems, such as the weighted dynamic economic emission dispatch (DEED) problem. A load-shifting mechanism is also implemented, which further reduces the system’s generation cost. An extensive technical study compares the weighted DEED to the penalty factor-based DEED and arrives at a superior compromise solution. The effects of CO2, SO2, and NOx are studied independently to determine their impact on system emissions. In addition, the weights are varied from 0.1 to 0.9, and the effects on generation cost and emission are investigated. Nonparametric statistical analysis confirms that the proposed CSAOA is superior and robust.
Citation: Algorithms
PubDate: 2024-07-16
DOI: 10.3390/a17070313
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 314: A Reliability Quantification Method for
Deep Reinforcement Learning-Based Control
Authors: Hitoshi Yoshioka, Hirotada Hashimoto
First page: 314
Abstract: Reliability quantification of deep reinforcement learning (DRL)-based control is a significant challenge for the practical application of artificial intelligence (AI) in safety-critical systems. This study proposes a method for quantifying the reliability of DRL-based control. First, an existing method, random network distillation, was applied to the reliability evaluation to clarify the issues to be solved. Second, a novel method for reliability quantification was proposed to solve these issues. The reliability is quantified using two neural networks: a reference and an evaluator. The two networks share the same structure and the same initial parameters, so their outputs are identical before training. During training, the evaluator network parameters are updated to maximize the difference between the reference and evaluator networks for trained data. Thus, the reliability of the DRL-based control for a state can be evaluated based on the difference in output between the two networks. The proposed method was applied to DRL-based control of a simple task as an example, and its effectiveness was demonstrated. Finally, the proposed method was applied to the problem of switching trained models depending on the state. Consequently, the performance of the DRL-based control was improved by switching the trained models according to their reliability.
Citation: Algorithms
PubDate: 2024-07-18
DOI: 10.3390/a17070314
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 315: Artificial Intelligence-Based System for
Retinal Disease Diagnosis
Authors: Ekaterina V. Orlova
First page: 315
Abstract: The growth in the number of people suffering from eye diseases underscores the relevance of research into diagnosing retinal pathologies. Artificial intelligence models and algorithms based on measurements obtained via electrophysiological methods can significantly improve and speed up the analysis of results and diagnostics. We propose an approach to designing an artificial intelligence-based diagnosis system (AI diagnosis system) which includes an electrophysiological complex to collect objective information and an intelligent decision support system to justify the diagnosis. The task of diagnosing retinal diseases based on a set of heterogeneous data is considered as a multi-class classification on unbalanced data. The decision support system includes two classifiers—one based on a fuzzy model and a fuzzy rule base (RB-classifier) and one using the stochastic gradient boosting algorithm (SGB-classifier). The efficiency of the algorithms in multi-class classification on unbalanced data is assessed using two indicators—MAUC (multi-class area under curve) and MMCC (multi-class Matthews correlation coefficient). Combining the two algorithms in a decision support system provides more accurate and reliable pathology identification. The accuracy of diagnostics using the proposed AI diagnosis system is 5–8% higher than that of a system using only diagnostics based on electrophysiological indicators. The AI diagnosis system differs from other systems of this class in that it is based on the processing of objective electrophysiological data and socio-demographic data about patients, as well as subjective information from the anamnesis, which ensures increased efficiency of medical decision-making. The system was tested using actual data about retinal diseases from the Russian Institute of Eye Diseases, and its high efficiency was proven. Simulation experiments conducted under various scenario conditions with different combinations of factors enabled the identification of the main determinants (markers) for each diagnosis of retinal pathology.
Citation: Algorithms
PubDate: 2024-07-18
DOI: 10.3390/a17070315
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 316: Threshold Active Learning Approach for
Physical Violence Detection on Images Obtained from Video (Frame-Level)
Using Pre-Trained Deep Learning Neural Network Models
Authors: Itzel M. Abundez, Roberto Alejo, Francisco Primero Primero, Everardo E. Granda-Gutiérrez, Otniel Portillo-Rodríguez, Juan Alberto Antonio Velázquez
First page: 316
Abstract: Public authorities and private companies have used video cameras as part of surveillance systems, and one of their objectives is the rapid detection of physically violent actions. This task is usually performed by human visual inspection, which is labor-intensive. For this reason, different deep learning models have been implemented to remove the human eye from this task, yielding positive results. One of the main problems in detecting physical violence in videos is the variety of scenarios that can exist, which leads to models being trained on particular datasets and thus detecting physical violence in only one or a few types of videos. In this work, we present an approach for physical violence detection on images obtained from video based on threshold active learning, which increases the classifier’s robustness in environments where it was not trained. The proposed approach consists of two stages: in the first stage, pre-trained neural network models are trained on initial datasets, and we use a threshold (μ) to identify those images that the classifier considers ambiguous or hard to classify. These images are then included in the training dataset, and the model is retrained to improve its classification performance. In the second stage, we test the model with video images from other environments, and we again employ μ to detect ambiguous images, which a human expert analyzes to determine their real class or resolve their ambiguity. After that, the ambiguous images are added to the original training set and the classifier is retrained; this process is repeated while ambiguous images exist. The model is a hybrid neural network that uses transfer learning and the threshold μ to successfully detect physical violence in images obtained from video files. In this active learning process, the classifier can detect physical violence in different environments; the main contribution is the method used to obtain the threshold μ (based on the neural network output), which allows human experts to contribute to the classification process to obtain more robust neural networks and high-quality datasets. The experimental results show the proposed approach’s effectiveness in detecting physical violence, where the model is trained using an initial dataset and new images are added to improve its robustness in diverse environments.
Citation: Algorithms
PubDate: 2024-07-18
DOI: 10.3390/a17070316
Issue No: Vol. 17, No. 7 (2024)
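The selection rule at the heart of the approach above (flag outputs falling inside a threshold band as ambiguous and route them to a human expert) can be sketched as follows. The band definition used here is a simplified assumption, since the paper derives μ from the neural network output:

import numpy as np

def split_by_threshold(probs, mu=0.15):
    """Split binary-violence probabilities into confident and ambiguous sets.

    probs: array of P(violence) per frame, from the pre-trained model.
    Frames with |p - 0.5| < mu are considered ambiguous and sent to a
    human expert; the rest are auto-labeled by thresholding at 0.5."""
    probs = np.asarray(probs)
    ambiguous = np.abs(probs - 0.5) < mu
    auto_labels = (probs >= 0.5).astype(int)
    return np.where(ambiguous)[0], auto_labels, ~ambiguous

frame_probs = [0.97, 0.42, 0.55, 0.08, 0.71, 0.50]
amb_idx, labels, confident = split_by_threshold(frame_probs)
print(amb_idx)    # frames for expert review: [1 2 5]
# Once the expert labels the ambiguous frames, they join the training
# set and the classifier is retrained; repeat while ambiguity remains.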
- Algorithms, Vol. 17, Pages 317: Algorithm for Assessment of the Switching
Angles in the Unipolar SPWM Technique for Single-Phase Inverters
Authors: Mario Ponce-Silva, Óscar Sánchez-Vargas, Claudia Cortés-García, Jesús Aguayo-Alquicira, Susana Estefany De León-Aldaco
First page: 317
Abstract: The main contribution of this paper is a simple algorithm that theoretically and numerically assesses the switching angles of an inverter operated with the SPWM technique. This technique is the most widely used for eliminating harmonics in DC-AC converters powering motors, renewable energy applications, household appliances, etc. Unlike conventional implementations of the SPWM technique, which are based on the analog or digital comparison of a sinusoidal signal with a triangular signal, this paper performs the comparison mathematically and proposes a simple way to numerically solve the transcendental equations arising from the analysis. The technique is validated by calculating the total harmonic distortion (THD) of the generated signal theoretically and numerically, and the results indicate that the calculated angles produce the same harmonic distribution whether computed analytically or numerically. The algorithm is limited to single-phase inverters with unipolar SPWM.
Citation: Algorithms
PubDate: 2024-07-19
DOI: 10.3390/a17070317
Issue No: Vol. 17, No. 7 (2024)
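The comparison that the abstract performs mathematically (finding the intersections of a sinusoidal reference with a triangular carrier) can be approximated numerically by bracketing sign changes of their difference and refining each root. In this sketch the modulation index and carrier ratio are arbitrary choices, not values from the paper:

import numpy as np
from scipy.optimize import brentq
from scipy.signal import sawtooth

M, MF = 0.8, 15    # modulation index, carrier-to-fundamental frequency ratio

def diff(t):
    """Sine reference minus triangular carrier; t in [0, 1) is a
    fraction of the fundamental period."""
    return M * np.sin(2 * np.pi * t) - sawtooth(2 * np.pi * MF * t, width=0.5)

# Bracket sign changes on a fine grid, then refine with Brent's method.
grid = np.linspace(0, 1, 20001)
vals = diff(grid)
angles = [brentq(diff, grid[i], grid[i + 1])
          for i in range(len(grid) - 1)
          if np.sign(vals[i]) != np.sign(vals[i + 1])]

print(len(angles), "switching instants; first few (deg):",
      [round(a * 360, 2) for a in angles[:4]])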
- Algorithms, Vol. 17, Pages 318: Enabling Decision Making with the Modified
Causal Forest: Policy Trees for Treatment Assignment
Authors: Hugo Bodory, Federica Mascolo, Michael Lechner
First page: 318
Abstract: Decision making plays a pivotal role in shaping outcomes across various disciplines, such as medicine, economics, and business. This paper provides practitioners with guidance on implementing a decision tree designed to optimise treatment assignment policies through an interpretable and non-parametric algorithm. Building upon the method proposed by Zhou, Athey, and Wager (2023), our policy tree introduces three key innovations: a different approach to policy score calculation, the incorporation of constraints, and enhanced handling of categorical and continuous variables. These innovations enable the evaluation of a broader class of policy rules, all of which can be easily obtained using a single module. We showcase the effectiveness of our policy tree in managing multiple, discrete treatments using datasets from diverse fields. Additionally, the policy tree is implemented in the open-source Python package mcf (modified causal forest), facilitating its application in both randomised and observational research settings.
Citation: Algorithms
PubDate: 2024-07-19
DOI: 10.3390/a17070318
Issue No: Vol. 17, No. 7 (2024)
- Algorithms, Vol. 17, Pages 319: Adaptive Sliding-Mode Controller for a
Zeta Converter to Provide High-Frequency Transients in Battery
Applications
Authors: Andrés Tobón, Carlos Andrés Ramos-Paja, Martha Lucía Orozco-Gutíerrez, Andrés Julián Saavedra-Montes, Sergio Ignacio Serna-Garcés
First page: 319
Abstract: Hybrid energy storage systems significantly impact the renewable energy sector due to their role in enhancing grid stability and managing its variability. However, implementing these systems requires advanced control strategies to ensure correct operation. This paper presents an algorithm for designing the power and control stages of a hybrid energy storage system formed by a battery, a supercapacitor, and a bidirectional Zeta converter. The control stage involves an adaptive sliding-mode controller co-designed with the power circuit parameters. The design algorithm ensures battery protection against high-frequency transients that reduce lifespan, and provides compatibility with low-cost microcontrollers. Moreover, the continuous output current of the Zeta converter does not introduce current harmonics to the battery, the microgrid, or the load. The proposed solution is validated through an application example using PSIM electrical simulation software (version 2024.0), demonstrating superior performance in comparison with a classical cascade PI structure.
Citation: Algorithms
PubDate: 2024-07-21
DOI: 10.3390/a17070319
Issue No: Vol. 17, No. 7 (2024)