Subjects -> COMMUNICATIONS (Total: 518 journals)
    - COMMUNICATIONS (446 journals)
    - DIGITAL AND WIRELESS COMMUNICATION (31 journals)
    - HUMAN COMMUNICATION (19 journals)
    - MEETINGS AND CONGRESSES (7 journals)
    - RADIO, TELEVISION AND CABLE (15 journals)

DIGITAL AND WIRELESS COMMUNICATION (31 journals)

Showing 1 - 31 of 31 Journals sorted alphabetically
Ada : A Journal of Gender, New Media, and Technology     Open Access   (Followers: 22)
Advances in Image and Video Processing     Open Access   (Followers: 27)
Communications and Network     Open Access   (Followers: 13)
E-Health Telecommunication Systems and Networks     Open Access   (Followers: 3)
EURASIP Journal on Wireless Communications and Networking     Open Access   (Followers: 14)
Future Internet     Open Access   (Followers: 88)
Granular Computing     Hybrid Journal  
IEEE Transactions on Wireless Communications     Hybrid Journal   (Followers: 26)
IEEE Wireless Communications Letters     Hybrid Journal   (Followers: 42)
IET Wireless Sensor Systems     Open Access   (Followers: 18)
International Journal of Communications, Network and System Sciences     Open Access   (Followers: 9)
International Journal of Digital Earth     Hybrid Journal   (Followers: 15)
International Journal of Embedded and Real-Time Communication Systems     Full-text available via subscription   (Followers: 6)
International Journal of Interactive Communication Systems and Technologies     Full-text available via subscription   (Followers: 2)
International Journal of Machine Intelligence and Sensory Signal Processing     Hybrid Journal   (Followers: 4)
International Journal of Mobile Computing and Multimedia Communications     Full-text available via subscription   (Followers: 2)
International Journal of Satellite Communications and Networking     Hybrid Journal   (Followers: 39)
International Journal of Wireless and Mobile Computing     Hybrid Journal   (Followers: 8)
International Journal of Wireless Networks and Broadband Technologies     Full-text available via subscription   (Followers: 2)
International Journals Digital Communication and Analog Signals     Full-text available via subscription   (Followers: 2)
Journal of Digital Information     Open Access   (Followers: 206)
Journal of Interconnection Networks     Hybrid Journal   (Followers: 1)
Journal of the Southern Association for Information Systems     Open Access   (Followers: 2)
Mobile Media & Communication     Hybrid Journal   (Followers: 10)
Nano Communication Networks     Hybrid Journal   (Followers: 5)
Psychology of Popular Media Culture     Full-text available via subscription   (Followers: 1)
Signal, Image and Video Processing     Hybrid Journal   (Followers: 11)
Ukrainian Information Space     Open Access  
Vehicular Communications     Full-text available via subscription   (Followers: 4)
Vista     Open Access   (Followers: 4)
Wireless Personal Communications     Hybrid Journal   (Followers: 6)
Future Internet
Journal Prestige (SJR): 0.219
Citation Impact (citeScore): 1
Number of Followers: 88  

  This is an Open Access journal
ISSN (Print) 1999-5903
Published by MDPI  [258 journals]
  • Future Internet, Vol. 16, Pages 179: Object and Event Detection Pipeline
           for Rink Hockey Games

    • Authors: Jorge Miguel Lopes, Luis Paulo Mota, Samuel Marques Mota, José Manuel Torres, Rui Silva Moreira, Christophe Soares, Ivo Pereira, Feliz Ribeiro Gouveia, Pedro Sobral
      First page: 179
      Abstract: All types of sports are potential application scenarios for automatic and real-time visual object and event detection. In rink hockey, the popular roller skate variant of team hockey, it is of great interest to automatically track player movements, positions, and sticks, and also to make other judgments, such as locating the ball. In this work, we present a real-time pipeline consisting of an object detection model specifically designed for rink hockey games, followed by a knowledge-based event detection module. Even in the presence of occlusions and fast movements, our deep learning object detection model effectively identifies and tracks important visual elements in real time, such as the ball, players, sticks, referees, crowd, goalkeeper, and goal. Using a curated dataset consisting of a collection of rink hockey videos containing 2525 annotated frames, we trained and evaluated the algorithm’s performance and compared it to state-of-the-art object detection techniques. Our object detection model, based on YOLOv7, achieves a global accuracy of 80% and, according to our results, offers a good balance of accuracy and speed, making it a good choice for rink hockey applications. In our initial tests, the event detection module successfully detected an important event type in rink hockey games, namely, the occurrence of penalties.
      Citation: Future Internet
      PubDate: 2024-05-21
      DOI: 10.3390/fi16060179
      Issue No: Vol. 16, No. 6 (2024)
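
      Sketch: A minimal Python illustration of the kind of pipeline described above: a pretrained detector returns per-frame classes and boxes, and a simple knowledge-based rule flags a candidate event. The ultralytics package is used only as a stand-in for the authors' YOLOv7 model, and the weights file, class map, and penalty rule are hypothetical assumptions, not taken from the paper.

          # Illustrative object + event detection sketch (not the authors' code).
          # Assumes the "ultralytics" package and a hypothetical fine-tuned weights file.
          from ultralytics import YOLO

          CLASS_NAMES = {0: "ball", 1: "player", 2: "stick", 3: "referee",
                         4: "crowd", 5: "goalkeeper", 6: "goal"}   # assumed label map

          model = YOLO("rink_hockey.pt")   # hypothetical weights trained on annotated frames

          def detect_frame(frame):
              """Run the detector on one frame and return (class_name, xyxy box) pairs."""
              result = model(frame, verbose=False)[0]
              return [(CLASS_NAMES.get(int(c), "unknown"), box.tolist())
                      for c, box in zip(result.boxes.cls, result.boxes.xyxy)]

          def penalty_candidate(detections, penalty_area):
              """Toy knowledge-based rule: ball and a referee both inside the penalty area."""
              x1, y1, x2, y2 = penalty_area
              def inside(box):
                  cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
                  return x1 <= cx <= x2 and y1 <= cy <= y2
              labels = {name for name, box in detections if inside(box)}
              return {"ball", "referee"} <= labels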
       
  • Future Internet, Vol. 16, Pages 180: Using ChatGPT in Software
           Requirements Engineering: A Comprehensive Review

    • Authors: Nuno Marques, Rodrigo Rocha Silva, Jorge Bernardino
      First page: 180
      Abstract: Large language models (LLMs) have had a significant impact on several domains, including software engineering. However, a comprehensive understanding of LLMs’ use, impact, and potential limitations in software engineering is still in its early stages. This paper analyzes the role of LLMs, such as ChatGPT-3.5, in software requirements engineering, a critical area in software engineering experiencing rapid advances due to artificial intelligence (AI). By analyzing several studies, we systematically evaluate the integration of ChatGPT into software requirements engineering, focusing on its benefits, challenges, and ethical considerations. This evaluation is based on a comparative analysis that highlights ChatGPT’s efficiency in eliciting requirements, accuracy in capturing user needs, potential to improve communication among stakeholders, and impact on the responsibilities of requirements engineers. The selected studies were analyzed for their insights into the effectiveness of ChatGPT, the importance of human feedback, prompt engineering techniques, technological limitations, and future research directions in using LLMs in software requirements engineering. This comprehensive analysis aims to provide a differentiated perspective on how ChatGPT can reshape software requirements engineering practices and to provide strategic recommendations for leveraging ChatGPT to effectively improve the software requirements engineering process.
      Citation: Future Internet
      PubDate: 2024-05-21
      DOI: 10.3390/fi16060180
      Issue No: Vol. 16, No. 6 (2024)
       
  • Future Internet, Vol. 16, Pages 181: MADDPG-Based Offloading Strategy for
           Timing-Dependent Tasks in Edge Computing

    • Authors: Yuchen Wang, Zishan Huang, Zhongcheng Wei, Jijun Zhao
      First page: 181
      Abstract: With the increasing popularity of the Internet of Things (IoT), the proliferation of computation-intensive and timing-dependent applications has brought serious load pressure on terrestrial networks. In order to solve the problem of computing resource conflicts and long response delays caused by concurrent service requests from multiple users, this paper proposes an improved edge computing task-offloading scheme for timing-dependent tasks based on Multi-Agent Deep Deterministic Policy Gradient (MADDPG), which aims to shorten the offloading delay and improve resource utilization by means of resource prediction and collaboration among multiple agents. First, to coordinate global computing resources, a gated recurrent unit is utilized, which predicts the next computing resource requirements of the timing-dependent tasks according to historical information. Second, the predicted information, the historical offloading decisions and the current state are used as inputs, and the training process of the reinforcement learning algorithm is improved to propose a task-offloading algorithm based on MADDPG. The simulation results show that the algorithm reduces the response latency by 6.7% and improves the resource utilization by 30.6% compared with the suboptimal benchmark algorithm, and it requires nearly 500 fewer training rounds during the learning process, which effectively improves the timeliness of the offloading strategy.
      Citation: Future Internet
      PubDate: 2024-05-21
      DOI: 10.3390/fi16060181
      Issue No: Vol. 16, No. 6 (2024)
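
      Sketch: The resource-prediction step can be illustrated with a small GRU regressor in PyTorch; the feature dimensions, horizon, and training step below are assumptions for illustration, not the configuration from the paper, and the MADDPG offloading agents themselves are not shown.

          # Toy GRU predictor for the next computing-resource demand of timing-dependent tasks.
          import torch
          import torch.nn as nn

          class ResourcePredictor(nn.Module):
              def __init__(self, n_features=4, hidden=32):
                  super().__init__()
                  self.gru = nn.GRU(n_features, hidden, batch_first=True)
                  self.head = nn.Linear(hidden, n_features)    # next-step resource demand

              def forward(self, history):            # history: (batch, time, n_features)
                  _, h_last = self.gru(history)      # h_last: (1, batch, hidden)
                  return self.head(h_last[-1])       # (batch, n_features)

          model = ResourcePredictor()
          optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
          loss_fn = nn.MSELoss()

          history = torch.randn(8, 20, 4)            # 8 task chains, 20 past steps, 4 resource features
          target = torch.randn(8, 4)                 # observed next-step demand
          loss = loss_fn(model(history), target)
          loss.backward()
          optimizer.step()                           # the prediction feeds the MADDPG agents' state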
       
  • Future Internet, Vol. 16, Pages 182: Urban Green Spaces and Mental
           Well-Being: A Systematic Review of Studies Comparing Virtual Reality
           versus Real Nature

    • Authors: Liyuan Liang, Like Gobeawan, Siu-Kit Lau, Ervine Shengwei Lin, Kai Keng Ang
      First page: 182
      Abstract: Increasingly, urban planners are adopting virtual reality (VR) in designing urban green spaces (UGS) to visualize landscape designs in immersive 3D. However, the psychological effect of green spaces from the experience in VR may differ from the actual experience in the real world. In this paper, we systematically reviewed studies in the literature that conducted experiments to investigate the psychological benefits of nature in both VR and the real world to study nature in VR anchored to nature in the real world. We separated these studies based on the type of VR setup used, specifically, 360-degree video or 3D virtual environment, and established a framework of commonly used standard questionnaires used to measure the perceived mental states. The most common questionnaires include Positive and Negative Affect Schedule (PANAS), Perceived Restorativeness Scale (PRS), and Restoration Outcome Scale (ROS). Although the results from studies that used 360-degree video were less clear, results from studies that used 3D virtual environments provided evidence that virtual nature is comparable to real-world nature and thus showed promise that UGS designs in VR can transfer into real-world designs to yield similar physiological effects.
      Citation: Future Internet
      PubDate: 2024-05-21
      DOI: 10.3390/fi16060182
      Issue No: Vol. 16, No. 6 (2024)
       
  • Future Internet, Vol. 16, Pages 183: Cross-Layer Optimization for Enhanced
           IoT Connectivity: A Novel Routing Protocol for Opportunistic Networks

    • Authors: Ayman Khalil, Besma Zeddini
      First page: 183
      Abstract: Opportunistic networks, an evolution of mobile Ad Hoc networks (MANETs), offer decentralized communication without relying on preinstalled infrastructure, enabling nodes to route packets through different mobile nodes dynamically. However, due to the absence of complete paths and rapidly changing connectivity, routing in opportunistic networks presents unique challenges. This paper proposes a novel probabilistic routing model for opportunistic networks, leveraging nodes’ meeting probabilities to route packets towards their destinations. This model dynamically builds routes based on the likelihood of encountering the destination node, considering factors such as the last meeting time and acknowledgment tables to manage network overload. Additionally, an efficient message detection scheme is introduced to alleviate high overhead by selectively deleting messages from buffers during congestion. Furthermore, the proposed model incorporates cross-layer optimization techniques, integrating optimization strategies across multiple protocol layers to maximize energy efficiency, adaptability, and message delivery reliability. Through extensive simulations, the effectiveness of the proposed model is demonstrated, showing improved message delivery probability while maintaining reasonable overhead and latency. This research contributes to the advancement of opportunistic networks, particularly in enhancing connectivity and efficiency for Internet of Things (IoT) applications deployed in challenging environments.
      Citation: Future Internet
      PubDate: 2024-05-22
      DOI: 10.3390/fi16060183
      Issue No: Vol. 16, No. 6 (2024)
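
      Sketch: The meeting-probability idea can be illustrated with a PRoPHET-style delivery-predictability table (direct-encounter update, aging, and transitivity); the constants and the forwarding rule below are illustrative defaults, not values taken from the proposed protocol.

          # Toy delivery-predictability table in the spirit of probabilistic opportunistic routing.
          P_INIT, GAMMA, BETA = 0.75, 0.98, 0.25     # illustrative constants

          def on_encounter(P, a, b):
              """Node a meets node b: increase a's predictability of reaching b."""
              old = P.get((a, b), 0.0)
              P[(a, b)] = old + (1.0 - old) * P_INIT

          def age(P, elapsed_units):
              """Decay all predictabilities as time passes without encounters."""
              for key in P:
                  P[key] *= GAMMA ** elapsed_units

          def transitivity(P, a, b, c):
              """If a often meets b and b often meets c, a gains predictability for c."""
              old = P.get((a, c), 0.0)
              P[(a, c)] = old + (1.0 - old) * P.get((a, b), 0.0) * P.get((b, c), 0.0) * BETA

          def forward_decision(P, carrier, neighbor, dest):
              """Hand the packet over only if the neighbor is more likely to meet the destination."""
              return P.get((neighbor, dest), 0.0) > P.get((carrier, dest), 0.0)

          P = {}
          on_encounter(P, "A", "B"); on_encounter(P, "B", "D")
          transitivity(P, "A", "B", "D")
          print(forward_decision(P, carrier="A", neighbor="B", dest="D"))   # True: B is the better carrier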
       
  • Future Internet, Vol. 16, Pages 184: Exploiting Autoencoder-Based Anomaly
           Detection to Enhance Cybersecurity in Power Grids

    • Authors: Fouzi Harrou, Benamar Bouyeddou, Abdelkader Dairi, Ying Sun
      First page: 184
      Abstract: The evolution of smart grids has led to technological advances and a demand for more efficient and sustainable energy systems. However, the deployment of communication systems in smart grids has increased the threat of cyberattacks, which can result in power outages and disruptions. This paper presents a semi-supervised hybrid deep learning model that combines a Gated Recurrent Unit (GRU)-based Stacked Autoencoder (AE-GRU) with anomaly detection algorithms, including Isolation Forest, Local Outlier Factor, One-Class SVM, and Elliptical Envelope. Using GRU units in both the encoder and decoder sides of the stacked autoencoder enables the effective capture of temporal patterns and dependencies, facilitating dimensionality reduction, feature extraction, and accurate reconstruction for enhanced anomaly detection in smart grids. The proposed approach utilizes unlabeled data to monitor network traffic and identify suspicious data flows. Specifically, the AE-GRU is used for data reduction and extraction of relevant features, and then the anomaly algorithms are applied to reveal potential cyberattacks. The proposed framework is evaluated using the widely adopted IEC 60870-5-104 traffic dataset. The experimental results demonstrate that the proposed approach outperforms standalone algorithms, with the AE-GRU-based LOF method achieving the highest detection rate. Thus, the proposed approach can potentially enhance cybersecurity in smart grids by accurately detecting and preventing cyberattacks.
      Citation: Future Internet
      PubDate: 2024-05-22
      DOI: 10.3390/fi16060184
      Issue No: Vol. 16, No. 6 (2024)
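
      Sketch: A minimal illustration of the second stage: once the stacked AE-GRU has reduced traffic to latent features, standard scikit-learn detectors can flag suspicious flows. The random arrays stand in for the encoder output, and the contamination settings are illustrative, not the paper's.

          # Fit the four detectors named in the abstract on (stand-in) AE-GRU latent features.
          import numpy as np
          from sklearn.ensemble import IsolationForest
          from sklearn.neighbors import LocalOutlierFactor
          from sklearn.svm import OneClassSVM
          from sklearn.covariance import EllipticEnvelope

          rng = np.random.default_rng(0)
          latent_train = rng.normal(size=(1000, 16))   # stand-in for encoded normal traffic
          latent_test = rng.normal(size=(200, 16))

          detectors = {
              "IsolationForest": IsolationForest(contamination=0.01, random_state=0),
              "LocalOutlierFactor": LocalOutlierFactor(novelty=True, contamination=0.01),
              "OneClassSVM": OneClassSVM(nu=0.01, gamma="scale"),
              "EllipticEnvelope": EllipticEnvelope(contamination=0.01),
          }

          for name, det in detectors.items():
              det.fit(latent_train)               # unsupervised: trained on unlabeled traffic
              flags = det.predict(latent_test)    # +1 = normal, -1 = suspected attack
              print(name, "flagged", int((flags == -1).sum()), "of", len(flags), "flows")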
       
  • Future Internet, Vol. 16, Pages 185: HP-LSTM: Hawkes
           Process–LSTM-Based Detection of DDoS Attack for In-Vehicle Network

    • Authors: Xingyu Li, Ruifeng Li, Yanchen Liu
      First page: 185
      Abstract: Connected and autonomous vehicles (CAVs) are advancing rapidly alongside the automotive industry, which opens up new possibilities for attacks. A Distributed Denial-of-Service (DDoS) attacker floods the in-vehicle network with fake messages, resulting in the failure of driving assistance systems and impairment of vehicle control functionalities, seriously disrupting the normal operation of the vehicle. In this paper, we propose a novel DDoS attack detection method for in-vehicle Ethernet Scalable service-Oriented Middleware over IP (SOME/IP), which integrates the Hawkes process with Long Short-Term Memory networks (LSTMs) to capture the dynamic behavioral features of the attacker. Specifically, we employ the Hawkes process to capture features of the DDoS attack, with its parameters reflecting the dynamism and self-exciting properties of the attack events. Subsequently, we propose a novel deep learning network structure, an HP-LSTM block, inspired by the Hawkes process, while employing a residual attention block to enhance the model’s detection efficiency and accuracy. Additionally, due to the scarcity of publicly available datasets for SOME/IP, we employed a mature SOME/IP generator to create a dataset for evaluating the validity of the proposed detection model. Finally, extensive experiments were conducted to demonstrate the effectiveness of the proposed DDoS attack detection method.
      Citation: Future Internet
      PubDate: 2024-05-23
      DOI: 10.3390/fi16060185
      Issue No: Vol. 16, No. 6 (2024)
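
      Sketch: The self-exciting behaviour referred to above can be summarised by the standard univariate Hawkes intensity, lambda(t) = mu + sum over past events t_i < t of alpha * exp(-beta * (t - t_i)); the snippet below computes it as a per-window feature with illustrative parameters (the HP-LSTM block itself is not reproduced here).

          # Hawkes-process intensity as a per-window feature for message arrivals (illustrative).
          import numpy as np

          def hawkes_intensity(t, event_times, mu=0.2, alpha=0.8, beta=1.5):
              """lambda(t) = mu + sum over past events t_i < t of alpha * exp(-beta * (t - t_i))."""
              past = event_times[event_times < t]
              return mu + np.sum(alpha * np.exp(-beta * (t - past)))

          arrivals = np.sort(np.random.default_rng(1).uniform(0.0, 10.0, size=200))   # message timestamps
          window_ends = np.arange(1.0, 10.1, 1.0)
          features = np.array([hawkes_intensity(t, arrivals) for t in window_ends])
          print(features)   # a flood of arrivals (e.g., a DDoS burst) drives the intensity up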
       
  • Future Internet, Vol. 16, Pages 186: Data Collection in Areas without
           Infrastructure Using LoRa Technology and a Quadrotor

    • Authors: Josué I. Rojo-García, Sergio A. Vera-Chavarría, Yair Lozano-Hernández, Victor G. Sánchez-Meza, Jaime González-Sierra, Luz N. Oliva-Moreno
      First page: 186
      Abstract: The use of sensor networks in monitoring applications has increased; they are useful in security, environmental, and health applications, among others. These networks usually transmit data through short-range stations, which makes them attractive for incorporation into applications and devices for use in places without access to satellite or mobile signals, for example, forests, seas, and jungles. To this end, unmanned aerial vehicles (UAVs) have attractive characteristics for data collection and transmission in remote areas without infrastructure. Integrating systems based on wireless sensors and UAVs seems to be an economical and easy-to-use solution. However, the main difficulty is the amount of data sent, which affects the communication time and even the flight status of the UAV. Additionally, factors such as the UAV model and the hardware used for these tasks must be considered. Given these difficulties, this paper proposes a system based on long-range (LoRa) technology. We present a low-cost wireless sensor network that is flexible, easy to deploy, and capable of collecting/sending data via LoRa transceivers. The readings obtained are packaged and sent to a UAV. The UAV performs predefined flights at a constant height of 30 m and with a direct line-of-sight (LoS) to the stations, during which it collects information from two data stations. The results show that correct data transmission is possible at a flight speed of 10 m/s and a transmission radius of 690 m for a group of three packages confirmed by 20 messages each. Thus, it is possible to collect data from routes of up to 8 km for each battery charge, considering the return of the UAV.
      Citation: Future Internet
      PubDate: 2024-05-24
      DOI: 10.3390/fi16060186
      Issue No: Vol. 16, No. 6 (2024)
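
      Sketch: A back-of-the-envelope check of the reported figures in time terms, using the stated 10 m/s flight speed and 690 m transmission radius together with an assumed LoRa time-on-air of about 0.4 s per packet (the time-on-air value is an assumption for illustration, not a figure from the paper).

          # Rough contact-time budget for one pass over a ground station (illustrative numbers).
          speed = 10.0            # m/s, UAV flight speed from the abstract
          radius = 690.0          # m, reported transmission radius
          time_on_air = 0.4       # s per LoRa packet -- assumed for illustration only
          messages_per_group = 20
          groups = 3

          contact_time = 2 * radius / speed                              # ~138 s crossing the coverage circle
          airtime_needed = groups * messages_per_group * time_on_air     # ~24 s of transmission
          print(f"contact window ~{contact_time:.0f} s, airtime needed ~{airtime_needed:.0f} s")
          # The contact window comfortably exceeds the required airtime, consistent with the
          # successful transfers reported in the abstract.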
       
  • Future Internet, Vol. 16, Pages 187: Enhanced Beacons Dynamic Transmission
           over TSCH

    • Authors: Erik Ortiz Guerra, Mario Martínez Morfa, Carlos Manuel García Algora, Hector Cruz-Enriquez, Kris Steenhaut, Samuel Montejo-Sánchez
      First page: 187
      Abstract: Time slotted channel hopping (TSCH) has become the standard multichannel MAC protocol for low-power lossy networks. The procedure for associating nodes in a TSCH-based network is not included in the standard and has been defined in the minimal 6TiSCH configuration. Faster network formation ensures that data packet transmission can start sooner. This paper proposes a dynamic beacon transmission schedule over the TSCH mechanism that achieves a shorter network formation time than the default minimum 6TiSCH static schedule. A theoretical model is derived for the proposed mechanism to estimate the expected time for a node to get associated with the network. Simulation results obtained with different network topologies and channel conditions show that the proposed mechanism reduces the average association time and average power consumption during network formation compared to the default minimal 6TiSCH configuration.
      Citation: Future Internet
      PubDate: 2024-05-24
      DOI: 10.3390/fi16060187
      Issue No: Vol. 16, No. 6 (2024)
       
  • Future Internet, Vol. 16, Pages 188: Studying the Quality of Source Code
           Generated by Different AI Generative Engines: An Empirical Evaluation

    • Authors: Davide Tosi
      First page: 188
      Abstract: The advent of Generative Artificial Intelligence is opening essential questions about whether and when AI will replace human abilities in accomplishing everyday tasks. This question is particularly pressing in the domain of software development, where generative AI seems to have strong skills in solving coding problems and generating software source code. In this paper, an empirical evaluation of AI-generated source code is performed: three complex coding problems (selected from the exams for the Java Programming course at the University of Insubria) are prompted to three different Large Language Model (LLM) Engines, and the generated code is evaluated for correctness and quality by means of human-implemented test suites and quality metrics. The experimentation shows that the three evaluated LLM engines are able to solve the three exams, but only with the constant supervision of software experts. Currently, LLM engines need human-expert support to produce running code that is of good quality.
      Citation: Future Internet
      PubDate: 2024-05-24
      DOI: 10.3390/fi16060188
      Issue No: Vol. 16, No. 6 (2024)
       
  • Future Internet, Vol. 16, Pages 189: Dynamic Spatial–Temporal
           Self-Attention Network for Traffic Flow Prediction

    • Authors: Dong Wang, Hongji Yang, Hua Zhou
      First page: 189
      Abstract: Traffic flow prediction is considered to be one of the fundamental technologies in intelligent transportation systems (ITSs) with a tremendous application prospect. Unlike traditional time series analysis tasks, the key challenge in traffic flow prediction lies in effectively modelling the highly complex and dynamic spatiotemporal dependencies within the traffic data. In recent years, researchers have proposed various methods to enhance the accuracy of traffic flow prediction, but certain issues still persist. For instance, some methods rely on specific static assumptions, failing to adequately simulate the dynamic changes in the data, thus limiting their modelling capacity. On the other hand, some approaches inadequately capture the spatiotemporal dependencies, resulting in the omission of crucial information and leading to unsatisfactory prediction outcomes. To address these challenges, this paper proposes a model called the Dynamic Spatial–Temporal Self-Attention Network (DSTSAN). Firstly, this research enhances the interaction between different dimension features in the traffic data through a feature augmentation module, thereby improving the model’s representational capacity. Subsequently, the current investigation introduces two masking matrices: one captures local spatial dependencies and the other captures global spatial dependencies, based on the spatial self-attention module. Finally, the methodology employs a temporal self-attention module to capture and integrate the dynamic temporal dependencies of traffic data. We designed experiments that use historical data from the previous hour to predict traffic flow conditions in the hour ahead, and we extensively compared the DSTSAN model with 11 baseline methods on four real-world datasets. The results demonstrate the effectiveness and superiority of the proposed approach.
      Citation: Future Internet
      PubDate: 2024-05-25
      DOI: 10.3390/fi16060189
      Issue No: Vol. 16, No. 6 (2024)
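
      Sketch: The masked spatial self-attention described above reduces to scaled dot-product attention with an additive mask; a minimal PyTorch illustration follows, where the shapes and the toy local mask are assumptions rather than the paper's configuration.

          # Scaled dot-product self-attention with an additive spatial mask (illustrative).
          import torch
          import torch.nn.functional as F

          def masked_self_attention(x, w_q, w_k, w_v, mask):
              """x: (batch, nodes, d); mask: (nodes, nodes) with 0 = keep, -inf = block."""
              q, k, v = x @ w_q, x @ w_k, x @ w_v
              scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
              weights = F.softmax(scores + mask, dim=-1)    # the mask selects local or global links
              return weights @ v

          batch, nodes, d = 2, 6, 16
          x = torch.randn(batch, nodes, d)
          w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
          local_mask = torch.full((nodes, nodes), float("-inf"))
          for i in range(nodes):                            # toy "local" neighbourhood: i-1, i, i+1
              for j in range(max(0, i - 1), min(nodes, i + 2)):
                  local_mask[i, j] = 0.0
          print(masked_self_attention(x, w_q, w_k, w_v, local_mask).shape)   # torch.Size([2, 6, 16])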
       
  • Future Internet, Vol. 16, Pages 190: Tracing Student Activity Patterns in
           E-Learning Environments: Insights into Academic Performance

    • Authors: Evgenia Paxinou, Georgios Feretzakis, Rozita Tsoni, Dimitrios Karapiperis, Dimitrios Kalles, Vassilios S. Verykios
      First page: 190
      Abstract: In distance learning educational environments like Moodle, students interact with their tutors, their peers, and the provided educational material through various means. Due to advancements in learning analytics, students’ transitions within Moodle generate digital trace data that outline learners’ self-directed learning paths and reveal information about their academic behavior within a course. These learning paths can be depicted as sequences of transitions between various states, such as completing quizzes, submitting assignments, downloading files, and participating in forum discussions, among others. Considering that a specific learning path summarizes the students’ trajectory in a course during an academic year, we analyzed data on students’ actions extracted from Moodle logs to investigate how the distribution of user actions within different Moodle resources can impact academic achievements. Our analysis was conducted using a Markov Chain Model, whereby transition matrices were constructed to identify steady states, and eigenvectors were calculated. Correlations were explored between specific states in users’ eigenvectors and their final grades, which were used as a proxy of academic performance. Our findings offer valuable insights into the relationship between student actions, link weight vectors, and academic performance, in an attempt to optimize students’ learning paths, tutors’ guidance, and course structures in the Moodle environment.
      Citation: Future Internet
      PubDate: 2024-05-29
      DOI: 10.3390/fi16060190
      Issue No: Vol. 16, No. 6 (2024)
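
      Sketch: The core computation, building a transition matrix from logged action sequences and reading the steady state off the eigenvector with eigenvalue 1, can be illustrated in a few lines of numpy; the states and sequences below are invented for illustration, not taken from the Moodle logs.

          # Steady state of a Markov chain estimated from student action sequences (illustrative).
          import numpy as np

          states = ["quiz", "assignment", "file", "forum"]
          idx = {s: i for i, s in enumerate(states)}
          sequences = [["file", "quiz", "forum", "quiz"],
                       ["assignment", "file", "file", "quiz"],
                       ["forum", "quiz", "assignment", "file"]]

          counts = np.zeros((len(states), len(states)))
          for seq in sequences:
              for a, b in zip(seq, seq[1:]):
                  counts[idx[a], idx[b]] += 1
          P = counts / counts.sum(axis=1, keepdims=True)     # row-stochastic transition matrix

          eigvals, eigvecs = np.linalg.eig(P.T)              # left eigenvector for eigenvalue 1
          stationary = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
          stationary /= stationary.sum()
          print(dict(zip(states, stationary.round(3))))      # long-run share of each activity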
       
  • Future Internet, Vol. 16, Pages 191: Harnessing the Cloud: A Novel
           Approach to Smart Solar Plant Monitoring

    • Authors: Mohammad Imran Ali, Shahi Dost, Khurram Shehzad Khattak, Muhammad Imran Khan, Riaz Muhammad
      First page: 191
      Abstract: Renewable Energy Sources (RESs) such as hydro, wind, and solar are emerging as preferred alternatives to fossil fuels. Among these RESs, solar energy is the most promising option and is gaining extensive interest around the globe. However, due to solar energy’s intermittent nature and sensitivity to environmental parameters (e.g., irradiance, dust, temperature, aging and humidity), real-time solar plant monitoring is imperative. This paper’s contribution is to compare and analyze current IoT trends and propose future research directions. As a result, this will be instrumental in the development of low-cost, real-time, scalable, reliable, and power-optimized solar plant monitoring systems. In this work, a comparative analysis has been performed on proposed solutions using the existing literature. This comparative analysis has been conducted considering five aspects: computer boards, sensors, communication, servers, and architectural paradigms. IoT architectural paradigms employed have been summarized and discussed with respect to communication, application layers, and storage capabilities. To facilitate enhanced IoT-based solar monitoring, an edge computing paradigm has been proposed. Suggestions are presented for the fabrication of edge devices and nodes using optimum compute boards, sensors, and communication modules. Different cloud platforms have been explored, and it was concluded that the public cloud platform Amazon Web Services is the ideal solution. Artificial intelligence-based techniques, methods, and outcomes are presented, which can help in the monitoring, analysis, and management of solar PV systems. As an outcome, this paper can be used to help researchers and academics develop low-cost, real-time, effective, scalable, and reliable solar monitoring systems.
      Citation: Future Internet
      PubDate: 2024-05-29
      DOI: 10.3390/fi16060191
      Issue No: Vol. 16, No. 6 (2024)
       
  • Future Internet, Vol. 16, Pages 192:
           Prophet–CEEMDAN–ARBiLSTM-Based Model for Short-Term Load
           Forecasting

    • Authors: Jindong Yang, Xiran Zhang, Wenhao Chen, Fei Rong
      First page: 192
      Abstract: Accurate short-term load forecasting (STLF) plays an essential role in sustainable energy development. Specifically, energy companies can efficiently plan and manage their generation capacity, lessening resource wastage and promoting the overall efficiency of power resource utilization. However, existing models cannot accurately capture the nonlinear features of electricity data, leading to a decline in the forecasting performance. To relieve this issue, this paper designs an innovative load forecasting method, named Prophet–CEEMDAN–ARBiLSTM, which consists of Prophet, Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN), and the residual Bidirectional Long Short-Term Memory (BiLSTM) network. Specifically, this paper firstly employs the Prophet method to learn cyclic and trend features from input data, aiming to discern the influence of these features on the short-term electricity load. Then, the paper adopts CEEMDAN to decompose the residual series and yield components with distinct modalities. In the end, this paper designs the advanced residual BiLSTM (ARBiLSTM) block as the input of the above extracted features to obtain the forecasting results. By conducting multiple experiments on the New England public dataset, it demonstrates that the Prophet–CEEMDAN–ARBiLSTM method can achieve better performance compared with the existing Prophet-based ones.
      Citation: Future Internet
      PubDate: 2024-05-31
      DOI: 10.3390/fi16060192
      Issue No: Vol. 16, No. 6 (2024)
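
      Sketch: The first two stages of the pipeline can be illustrated with the prophet and PyEMD packages, assuming those libraries and an hourly load series are available: Prophet captures trend and seasonality, and CEEMDAN decomposes the residual into modal components; the synthetic data and column names are assumptions, and the ARBiLSTM stage is only indicated in a comment.

          # Prophet trend/seasonality fit plus CEEMDAN decomposition of the residual (illustrative).
          import numpy as np
          import pandas as pd
          from prophet import Prophet
          from PyEMD import CEEMDAN

          # A synthetic hourly load series stands in for the real data ("ds" = timestamp, "y" = load).
          hours = pd.date_range("2024-01-01", periods=24 * 30, freq="h")
          load = 100 + 10 * np.sin(np.arange(len(hours)) * 2 * np.pi / 24)
          df = pd.DataFrame({"ds": hours, "y": load + np.random.default_rng(0).normal(0, 2, len(hours))})

          m = Prophet()
          m.fit(df)
          fitted = m.predict(df[["ds"]])["yhat"].to_numpy()
          residual = df["y"].to_numpy() - fitted             # what Prophet could not explain

          imfs = CEEMDAN()(residual)                         # intrinsic mode functions of the residual
          print("number of IMFs:", imfs.shape[0])
          # Each component (plus the Prophet forecast) would then be modelled by the residual
          # BiLSTM ("ARBiLSTM") block and the component forecasts summed.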
       
  • Future Internet, Vol. 16, Pages 193: Enhancing Sensor Data Imputation:
           OWA-Based Model Aggregation for Missing Values

    • Authors: Muthana Al-Amidie, Laith Alzubaidi, Muhammad Aminul Islam, Derek T. Anderson
      First page: 193
      Abstract: Due to some limitations in the data collection process caused either by human-related errors or by collection electronics, sensors, and network connectivity-related errors, the important values at some points could be lost. However, a complete dataset is required for the desired performance of the subsequent applications in various fields like engineering, data science, statistics, etc. An efficient data imputation technique is desired to fill in the missing data values to achieve completeness within the dataset. The fuzzy integral is considered one of the most powerful techniques for multi-source information fusion. It has a wide range of applications in many real-world decision-making problems that often require decisions to be made with partially observable/available information. To address this problem, algorithms impute missing data with a representative sample or by predicting the most likely value given the observed data. In this article, we take a completely different approach to the information fusion task in the ordered weighted averaging (OWA) context. In particular, we empirically explore for different distributions how the weights/importance of the missing sources are distributed across the observed inputs/sources. The experimental results on the synthetic and real-world datasets demonstrate the applicability of the proposed methods.
      Citation: Future Internet
      PubDate: 2024-05-31
      DOI: 10.3390/fi16060193
      Issue No: Vol. 16, No. 6 (2024)
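
      Sketch: The ordered weighted averaging operator at the heart of the approach sorts the observed inputs in descending order and takes a weighted sum with a fixed weight vector; a minimal illustration with an invented weight vector follows.

          # Ordered weighted averaging (OWA) over the observed (non-missing) sources.
          import numpy as np

          def owa(values, weights):
              """OWA(x) = sum_i w_i * x_(i), where x_(i) is the i-th largest input."""
              ordered = np.sort(np.asarray(values, dtype=float))[::-1]
              w = np.asarray(weights, dtype=float)
              assert len(w) == len(ordered) and np.isclose(w.sum(), 1.0)
              return float(ordered @ w)

          sources = [0.9, np.nan, 0.4, 0.7]     # one sensor reading is missing
          observed = [v for v in sources if not np.isnan(v)]
          weights = [0.5, 0.3, 0.2]             # illustrative weights redistributed over the observed inputs
          print(owa(observed, weights))         # ~0.74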
       
  • Future Internet, Vol. 16, Pages 194: Research on Multi-Modal Pedestrian
           Detection and Tracking Algorithm Based on Deep Learning

    • Authors: Rui Zhao, Jutao Hao, Huan Huo
      First page: 194
      Abstract: In the realm of intelligent transportation, pedestrian detection has witnessed significant advancements. However, it continues to grapple with challenging issues, notably the detection of pedestrians in complex lighting scenarios. Conventional visible light mode imaging is profoundly affected by varying lighting conditions. Under optimal daytime lighting, visibility is enhanced, leading to superior pedestrian detection outcomes. Conversely, under low-light conditions, visible light mode imaging falters due to the inadequate provision of pedestrian target information, resulting in a marked decline in detection efficacy. In this context, infrared light mode imaging emerges as a valuable supplement, bolstering pedestrian information provision. This paper delves into pedestrian detection and tracking algorithms within a multi-modal image framework grounded in deep learning methodologies. Leveraging the YOLOv4 algorithm as a foundation, augmented by a channel stack fusion module, a novel multi-modal pedestrian detection algorithm tailored for intelligent transportation is proposed. This algorithm capitalizes on the fusion of visible and infrared light mode image features to enhance pedestrian detection performance amidst complex road environments. Experimental findings demonstrate that compared to the Visible-YOLOv4 algorithm, renowned for its high performance, the proposed Double-YOLOv4-CSE algorithm exhibits a notable improvement, boasting a 5.0% accuracy rate enhancement and a 6.9% reduction in logarithmic average missing rate. This research’s goal is to ensure that the algorithm can run smoothly even on a low configuration 1080 Ti GPU and to improve the algorithm’s coverage at the application layer, making it affordable and practical for both urban and rural areas. This addresses the broader research problem within the scope of smart cities and remote ends with limited computational power.
      Citation: Future Internet
      PubDate: 2024-05-31
      DOI: 10.3390/fi16060194
      Issue No: Vol. 16, No. 6 (2024)
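
      Sketch: The channel-stack fusion idea, concatenating visible and infrared inputs along the channel dimension before further convolution, can be illustrated in PyTorch as below; the channel counts and the single fusion convolution are assumptions, not the Double-YOLOv4-CSE architecture itself.

          # Channel-stack fusion of visible (RGB) and infrared (IR) inputs (illustrative).
          import torch
          import torch.nn as nn

          class ChannelStackFusion(nn.Module):
              def __init__(self, rgb_ch=3, ir_ch=1, out_ch=32):
                  super().__init__()
                  self.fuse = nn.Sequential(
                      nn.Conv2d(rgb_ch + ir_ch, out_ch, kernel_size=3, padding=1),
                      nn.BatchNorm2d(out_ch),
                      nn.ReLU(inplace=True),
                  )

              def forward(self, rgb, ir):
                  stacked = torch.cat([rgb, ir], dim=1)   # stack the modalities along channels
                  return self.fuse(stacked)               # fused features feed the detector backbone

          rgb = torch.randn(2, 3, 416, 416)
          ir = torch.randn(2, 1, 416, 416)
          print(ChannelStackFusion()(rgb, ir).shape)      # torch.Size([2, 32, 416, 416])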
       
  • Future Internet, Vol. 16, Pages 195: GreenLab, an IoT-Based Small-Scale
           Smart Greenhouse

    • Authors: Cristian Volosciuc, Răzvan Bogdan, Bianca Blajovan, Cristina Stângaciu, Marius Marcu
      First page: 195
      Abstract: In an era of connectivity, the Internet of Things introduces smart solutions for smart and sustainable agriculture, bringing alternatives to overcome the food crisis. Among these solutions, smart greenhouses support crop and vegetable agriculture regardless of season and cultivated area by carefully controlling and managing parameters like temperature, air and soil humidity, and light. Smart technologies have proven to be successful tools for increasing agricultural production at both the macro and micro levels, which is an important step in streamlining small-scale agriculture. This paper presents an experimental Internet of Things-based small-scale greenhouse prototype as a proof of concept for the benefits of merging smart sensing, connectivity, IoT, and mobile-based applications, for growing cultures. Our proposed solution is cost-friendly and includes a photovoltaic panel and a buffer battery, which reduce energy consumption costs while assuring functionality during the night and in cloudy weather, as well as a mobile application for easy data visualization and monitoring of the greenhouse.
      Citation: Future Internet
      PubDate: 2024-05-31
      DOI: 10.3390/fi16060195
      Issue No: Vol. 16, No. 6 (2024)
       
  • Future Internet, Vol. 16, Pages 196: Efficiency of Federated Learning and
           Blockchain in Preserving Privacy and Enhancing the Performance of Credit
           Card Fraud Detection (CCFD) Systems

    • Authors: Tahani Baabdullah, Amani Alzahrani, Danda B. Rawat, Chunmei Liu
      First page: 196
      Abstract: Increasing global credit card usage has made it a preferred payment method for daily transactions, underscoring its significance in global financial cybersecurity. This paper introduces a credit card fraud detection (CCFD) system that integrates federated learning (FL) with blockchain technology. The experiment employs FL to establish a global learning model on the cloud server, which transmits initial parameters to individual local learning models on fog nodes. With three banks (fog nodes) involved, each bank trains its learning model locally, ensuring data privacy, and subsequently sends back updated parameters to the global learning model. Through the integration of FL and blockchain, our system ensures privacy preservation and data protection. We utilize three machine learning and deep learning algorithms, RF, CNN, and LSTM, alongside optimization techniques such as ADAM, SGD, and MSGD. The SMOTE oversampling technique is also employed to balance the dataset before model training. Our proposed framework has demonstrated efficiency and effectiveness in enhancing classification performance and prediction accuracy.
      Citation: Future Internet
      PubDate: 2024-06-02
      DOI: 10.3390/fi16060196
      Issue No: Vol. 16, No. 6 (2024)
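
      Sketch: The federated step, each bank training locally and the cloud aggregating parameters, reduces to federated averaging of the local weights; a minimal illustration over plain numpy parameter dictionaries follows (the blockchain layer and the actual RF/CNN/LSTM models are not reproduced, and the sample counts are invented).

          # Federated averaging of locally trained parameters from three banks (illustrative).
          import numpy as np

          def federated_average(local_params, sample_counts):
              """Weight each bank's parameters by the number of samples it trained on."""
              total = sum(sample_counts)
              return {k: sum(p[k] * (n / total) for p, n in zip(local_params, sample_counts))
                      for k in local_params[0]}

          rng = np.random.default_rng(0)
          banks = [{"w": rng.normal(size=(4, 2)), "b": rng.normal(size=2)} for _ in range(3)]
          counts = [12000, 8000, 20000]                        # transactions held by each bank

          global_params = federated_average(banks, counts)     # broadcast back to the fog nodes
          print(global_params["w"].shape, global_params["b"].shape)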
       
  • Future Internet, Vol. 16, Pages 197: In-Home Evaluation of the NeoCare
           Artificial Intelligence Sound-Based Fall Detection System

    • Authors: Carol Maher, Kylie A. Dankiw, Ben Singh, Svetlana Bogomolova, Rachel G. Curtis
      First page: 197
      Abstract: The NeoCare home monitoring system aims to detect falls and other events using artificial intelligence. This study evaluated NeoCare’s accuracy and explored user perceptions through a 12-week in-home trial with 18 households of adults aged 65+ years at risk of falls (mean age: 75.3 years; 67% female). Participants logged events that were cross-referenced with NeoCare logs to calculate sensitivity and specificity for fall detection and response. Qualitative interviews gathered in-depth user feedback. During the trial, 28 falls/events were documented, with 12 eligible for analysis as others occurred outside the home or when devices were offline. NeoCare was activated 4939 times—4930 by everyday household sounds and 9 by actual falls. Fall detection sensitivity was 75.00% and specificity 6.80%. For responding to falls, sensitivity was 62.50% and specificity 17.28%. Users felt more secure with NeoCare but identified needs for further calibration to improve accuracy. Advantages included avoiding wearables, while key challenges were misinterpreting noises and occasional technical issues like going offline. Suggested improvements were visual indicators, trigger words, and outdoor capability. The study demonstrated NeoCare’s potential with modifications. Users found it beneficial, but highlighted areas for improvement. Real-world evaluations and user-centered design are crucial for healthcare technology development.
      Citation: Future Internet
      PubDate: 2024-06-02
      DOI: 10.3390/fi16060197
      Issue No: Vol. 16, No. 6 (2024)
       
  • Future Internet, Vol. 16, Pages 198: The Use of Artificial Intelligence in
           eParticipation: Mapping Current Research

    • Authors: Zisis Vasilakopoulos, Theocharis Tavantzis, Rafail Promikyridis, Efthimios Tambouris
      First page: 198
      Abstract: Electronic Participation (eParticipation) enables citizens to engage in political and decision-making processes using information and communication technologies. As in many other fields, Artificial Intelligence (AI) has recently started to dictate some of the realities of eParticipation. As a result, an increasing number of studies are investigating the use of AI in eParticipation. The aim of this paper is to map current research on the use of AI in eParticipation. Following PRISMA methodology, the authors identified 235 relevant papers in Web of Science and Scopus and selected 46 studies for review. For analysis purposes, an analysis framework was constructed that combined eParticipation elements (namely actors, activities, effects, contextual factors, and evaluation) with AI elements (namely areas, algorithms, and algorithm evaluation). The results suggest that certain eParticipation actors and activities, as well as AI areas and algorithms, have attracted significant attention from researchers. However, many more remain largely unexplored. The findings can be of value to both academics looking for unexplored research fields and practitioners looking for empirical evidence on what works and what does not.
      Citation: Future Internet
      PubDate: 2024-06-03
      DOI: 10.3390/fi16060198
      Issue No: Vol. 16, No. 6 (2024)
       
  • Future Internet, Vol. 16, Pages 199: Metaverse and Fashion: An Analysis of
           Consumer Online Interest

    • Authors: Carmen Ruiz Viñals, Marta Gil Ibáñez, José Luis Del Olmo Arriaga
      First page: 199
      Abstract: Recent studies have demonstrated the value that the Internet and web applications bring to businesses. Among other tools are those that enable the analysis and monitoring of searches, such as Google Trends, which is currently used by the fashion industry to guide experiential practices in a context of augmented reality and/or virtual reality, and even to predict purchasing behaviours through the metaverse. Data from this tool provide insight into fashion consumer search patterns. Understanding and managing this digital tool is an essential factor in rethinking businesses’ marketing strategies. The aim of this study is to analyse online user search behaviour by analysing and monitoring the terms “metaverse” and “fashion” on Google Trends. A quantitative descriptive cross-sectional method was employed. The results show that there is growing consumer interest in both concepts on the Internet, despite the lack of homogeneity in the behaviour of the five Google search tools.
      Citation: Future Internet
      PubDate: 2024-06-04
      DOI: 10.3390/fi16060199
      Issue No: Vol. 16, No. 6 (2024)
       
  • Future Internet, Vol. 16, Pages 200: Implementation of Lightweight Machine
           Learning-Based Intrusion Detection System on IoT Devices of Smart Homes

    • Authors: Abbas Javed, Amna Ehtsham, Muhammad Jawad, Muhammad Naeem Awais, Ayyaz-ul-Haq Qureshi, Hadi Larijani
      First page: 200
      Abstract: Smart home devices, also known as IoT devices, provide significant convenience; however, they also present opportunities for attackers to jeopardize homeowners’ security and privacy. Securing these IoT devices is a formidable challenge because of their limited computational resources. Machine learning-based intrusion detection systems (IDSs) have been implemented on the edge and the cloud; however, IDSs have not been embedded in IoT devices. To address this, we propose a novel machine learning-based two-layered IDS for smart home IoT devices, enhancing accuracy and computational efficiency. The first layer of the proposed IDS is deployed on a microcontroller-based smart thermostat, which uploads the data to a website hosted on a cloud server. The second layer of the IDS is deployed on the cloud side for classification of attacks. The proposed IDS can detect the threats with an accuracy of 99.50% at cloud level (multiclassification). For real-time testing, we implemented the Raspberry Pi 4-based adversary to generate a dataset for man-in-the-middle (MITM) and denial of service (DoS) attacks on smart thermostats. The results show that the XGBoost-based IDS detects MITM and DoS attacks in 3.51 ms on a smart thermostat with an accuracy of 97.59%.
      Citation: Future Internet
      PubDate: 2024-06-05
      DOI: 10.3390/fi16060200
      Issue No: Vol. 16, No. 6 (2024)
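
      Sketch: The cloud-side multiclass stage can be illustrated with an XGBoost classifier; the synthetic features, labels, and hyperparameters below are placeholders, not the dataset or settings used in the paper.

          # Multiclass attack classification with XGBoost on placeholder features (illustrative).
          import numpy as np
          from xgboost import XGBClassifier
          from sklearn.model_selection import train_test_split
          from sklearn.metrics import accuracy_score

          rng = np.random.default_rng(0)
          X = rng.normal(size=(5000, 20))          # stand-in traffic features
          y = rng.integers(0, 4, size=5000)        # e.g., 0 = benign, 1 = DoS, 2 = MITM, 3 = other

          X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
          clf = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
          clf.fit(X_tr, y_tr)
          print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))   # ~0.25 on these random labels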
       
  • Future Internet, Vol. 16, Pages 201: Impact, Compliance, and
           Countermeasures in Relation to Data Breaches in Publicly Traded U.S.
           Companies

    • Authors: Gabriel Arquelau Pimenta Rodrigues, André Luiz Marques Serrano, Guilherme Fay Vergara, Robson de Oliveira Albuquerque, Georges Daniel Amvame Nze
      First page: 201
      Abstract: A data breach is the unauthorized disclosure of sensitive personal data, and it impacts millions of individuals annually in the United States, as reported by Privacy Rights Clearinghouse. These breaches jeopardize the physical safety of the individuals whose data are exposed and result in substantial economic losses for the affected companies. To diminish the frequency and severity of data breaches in the future, it is imperative to research their causes and explore preventive measures. In pursuit of this goal, this study considers a dataset of data breach incidents affecting companies listed on the New York Stock Exchange and NASDAQ. This dataset has been augmented with additional information regarding the targeted company. This paper employs statistical visualizations of the data to clarify these incidents and assess their consequences on the affected companies and individuals whose data were compromised. We then propose mitigation controls based on established frameworks such as the NIST Cybersecurity Framework. Additionally, this paper reviews the compliance scenario by examining the relevant laws and regulations applicable to each case, including SOX, HIPAA, GLBA, and PCI-DSS, and evaluates the impacts of data breaches on stock market prices. We also review guidelines for appropriately responding to data leaks in the U.S., for compliance achievement and cost reduction. By conducting this analysis, this work aims to contribute to a comprehensive understanding of data breaches and empower organizations to safeguard against them proactively, improving the technical quality of their basic services. To our knowledge, this is the first paper to address compliance with data protection regulations, security controls as countermeasures, financial impacts on stock prices, and incident response strategies. Although the discussion is focused on publicly traded companies in the United States, it may also apply to public and private companies worldwide.
      Citation: Future Internet
      PubDate: 2024-06-05
      DOI: 10.3390/fi16060201
      Issue No: Vol. 16, No. 6 (2024)
       
  • Future Internet, Vol. 16, Pages 202: Metric Space Indices for Dynamic
           Optimization in a Peer to Peer-Based Image Classification Crowdsourcing
           Platform

    • Authors: Fernando Loor, Veronica Gil-Costa, Mauricio Marin
      First page: 202
      Abstract: Large-scale computer platforms that process users’ online requests must be capable of handling unexpected spikes in arrival rates. These platforms, which are composed of distributed components, can be configured with parameters to ensure both the quality of the results obtained for each request and low response times. In this work, we propose a dynamic optimization engine based on metric space indexing to address this problem. The engine is integrated into the platform and periodically monitors performance metrics to determine whether new configuration parameter values need to be computed. Our case study focuses on a P2P platform designed for classifying crowdsourced images related to natural disasters. We evaluate our approach under scenarios with high and low workloads, comparing it against alternative methods based on deep reinforcement learning. The results show that our approach reduces processing time by an average of 40%.
      Citation: Future Internet
      PubDate: 2024-06-06
      DOI: 10.3390/fi16060202
      Issue No: Vol. 16, No. 6 (2024)
       
  • Future Internet, Vol. 16, Pages 203: Evaluation of Radio Access Protocols
           for V2X in 6G Scenario-Based Models

    • Authors: Héctor Orrillo, André Sabino, Mário Marques da Silva
      First page: 203
      Abstract: The expansion of mobile connectivity with the arrival of 6G paves the way for the new Internet of Verticals (6G-IoV), benefiting autonomous driving. This article highlights the importance of vehicle-to-everything (V2X) and vehicle-to-vehicle (V2V) communication in improving road safety. Current technologies such as IEEE 802.11p and LTE-V2X are being improved, while new radio access technologies promise more reliable, lower-latency communications. Moreover, 3GPP is developing NR-V2X to improve the performance of communications between vehicles, while IEEE proposes the 802.11bd protocol, aiming for the greater interoperability and detection of transmissions between vehicles. Both new protocols are being developed and improved to make autonomous driving more efficient. This study analyzes and compares the performance of the protocols mentioned, namely 802.11p, 802.11bd, LTE-V2X, and NR-V2X. The contribution of this study is to identify the most suitable protocol that meets the requirements of V2V communications in autonomous driving. The relevance of V2V communication has driven intense research in the scientific community. Among the various applications of V2V communication are Cooperative Awareness, V2V Unicast Exchange, and V2V Decentralized Environmental Notification, among others. To this end, the performance of the Link Layer of these protocols is evaluated and compared. Based on the analysis of the results, it can be concluded that NR-V2X outperforms IEEE 802.11bd in terms of transmission latency (L) and data rate (DR). In terms of the packet error rate (PER), it is shown that both LTE-V2X and NR-V2X exhibit a lower PER compared to IEEE protocols, especially as the distance between the vehicles increases. This advantage becomes even more significant in scenarios with greater congestion and network interference.
      Citation: Future Internet
      PubDate: 2024-06-06
      DOI: 10.3390/fi16060203
      Issue No: Vol. 16, No. 6 (2024)
       
  • Future Internet, Vol. 16, Pages 204: Usability Evaluation of Wearable
           

    • Authors: Majed A. Alshamari, Maha M. Althobaiti
      First page: 204
      Abstract: The mobile and wearable nature of smartwatches poses challenges in evaluating their usability. This paper presents a study employing a customized heuristic evaluation and the System Usability Scale (SUS) on four smartwatches, along with their mobile applications. A total of 11 heuristics were developed and validated by experts by combining Nielsen’s heuristics with Motti and Caines’ heuristics. In this study, 20 participants used the watches and participated in the SUS survey. A total of 307 usability issues were reported by the evaluators. The results of this study show that the Galaxy Watch 5 scored highest in terms of efficiency, ease of use, features, and battery life compared to the other three smartwatches and had fewer usability issues. The results indicate that ease of use, features, and flexibility are important usability attributes for future smartwatches. The Galaxy Watch 5 received the highest SUS score of 87.375. Both evaluation methods showed no significant differences in results, and customized heuristics were found to be useful for smartwatch evaluation.
      Citation: Future Internet
      PubDate: 2024-06-06
      DOI: 10.3390/fi16060204
      Issue No: Vol. 16, No. 6 (2024)
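
      Sketch: The SUS score reported above follows the standard scoring rule: for the ten items on a 1-5 scale, odd items contribute (response - 1), even items contribute (5 - response), and the sum is multiplied by 2.5 to give a 0-100 score; the response set below is invented for illustration.

          # Standard System Usability Scale (SUS) scoring (the responses are invented).
          def sus_score(responses):
              """responses: 10 item scores from 1 (strongly disagree) to 5 (strongly agree)."""
              assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
              contributions = [(r - 1) if i % 2 == 0 else (5 - r)    # items 1,3,5,7,9 are positive
                               for i, r in enumerate(responses)]     # items 2,4,6,8,10 are negative
              return 2.5 * sum(contributions)

          print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 5, 2]))   # 92.5 for this invented respondent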
       
  • Future Internet, Vol. 16, Pages 205: Enhancing Efficiency and Security in
           Unbalanced PSI-CA Protocols through Cloud Computing and Homomorphic
           Encryption in Mobile Networks

    • Authors: Wuzheng Tan, Shenglong Du, Jian Weng
      First page: 205
      Abstract: Private Set Intersection Cardinality (PSI-CA) is a cryptographic method in secure multi-party computation that allows entities to identify the cardinality of the intersection without revealing their private data. Traditional approaches assume similar-sized datasets and equal computational power, overlooking practical imbalances. In real-world applications, dataset sizes and computational capacities often vary, particularly in Internet of Things and mobile scenarios where device limitations restrict computational types. Traditional PSI-CA protocols are inefficient here, as computational and communication complexities correlate with the size of larger datasets. Thus, adapting PSI-CA protocols to these imbalances is crucial. This paper explores unbalanced scenarios where one party (the receiver) has a relatively small dataset and limited computational power, while the other party (the sender) has a large amount of data and strong computational capabilities. This paper, based on the concept of commutative encryption, introduces the Cuckoo filter, cloud computing technology, and homomorphic encryption, among other technologies, to construct three novel solutions for unbalanced Private Set Intersection Cardinality (PSI-CA): an unbalanced PSI-CA protocol based on the Cuckoo filter, an unbalanced PSI-CA protocol based on single-cloud assistance, and an unbalanced PSI-CA protocol based on dual-cloud assistance. Depending on performance and security requirements, different protocols can be employed for various applications.
      Citation: Future Internet
      PubDate: 2024-06-07
      DOI: 10.3390/fi16060205
      Issue No: Vol. 16, No. 6 (2024)
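
      Sketch: The commutative-encryption idea underlying the three protocols can be illustrated with a toy Diffie-Hellman-style construction: each party blinds hashed elements with a secret exponent, the other party adds its own, and matching double-blinded values reveal only the intersection cardinality. The parameters below are deliberately small and insecure, purely for illustration; the Cuckoo-filter and cloud-assisted variants are not shown.

          # Toy commutative-encryption PSI-CA (insecure parameters, illustration only).
          import hashlib
          import secrets

          P = 2**127 - 1               # toy prime modulus -- NOT a secure choice
          def h(x):                    # hash an element into the multiplicative group mod P
              return int.from_bytes(hashlib.sha256(x.encode()).digest(), "big") % P or 1

          def blind(items, secret):
              return {pow(h(x), secret, P) for x in items}

          def reblind(blinded, secret):
              return {pow(v, secret, P) for v in blinded}

          receiver_set = {"alice@example.com", "bob@example.com"}
          sender_set = {"bob@example.com", "carol@example.com", "dave@example.com"}
          a = secrets.randbelow(P - 2) + 1       # receiver's secret exponent
          b = secrets.randbelow(P - 2) + 1       # sender's secret exponent

          # Receiver sends H(x)^a; sender returns (H(x)^a)^b and sends its own H(y)^b,
          # which the receiver raises to a. Matching H(.)^(ab) values count the intersection.
          double_from_receiver = reblind(blind(receiver_set, a), b)
          double_from_sender = reblind(blind(sender_set, b), a)
          print("intersection cardinality:", len(double_from_receiver & double_from_sender))   # 1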
       
  • Future Internet, Vol. 16, Pages 144: Anticipating Job Market
           Demands—A Deep Learning Approach to Determining the Future Readiness
           of Professional Skills

    • Authors: Albert Weichselbraun, Norman Süsstrunk, Roger Waldvogel, André Glatzl, Adrian M. P. Braşoveanu, Arno Scharl
      First page: 144
      Abstract: Anticipating the demand for professional job market skills needs to consider trends such as automation, offshoring, and the emerging Gig economy, as they significantly impact the future readiness of skills. This article draws on the scientific literature, expert assessments, and deep learning to estimate two indicators of high relevance for a skill’s future readiness: its automatability and offshorability. Based on gold standard data, we evaluate the performance of Support Vector Machines (SVMs), Transformers, Large Language Models (LLMs), and a deep learning ensemble classifier for propagating expert and literature assessments on these indicators of yet unseen skills. The presented approach uses short bipartite skill labels that contain a skill topic (e.g., “Java”) and a corresponding verb (e.g., “programming”) to describe the skill. Classifiers thus need to base their judgments solely on these two input terms. Comprehensive experiments on skewed and balanced datasets show that, in this low-token setting, classifiers benefit from pre-training and fine-tuning and that increased classifier complexity does not yield further improvements.
      Citation: Future Internet
      PubDate: 2024-04-23
      DOI: 10.3390/fi16050144
      Issue No: Vol. 16, No. 5 (2024)
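
      Sketch: A baseline version of the task, scoring a short "topic + verb" skill label against one indicator, can be illustrated with character n-gram TF-IDF features and a linear SVM; the tiny labelled set is invented and far simpler than the Transformer and LLM classifiers evaluated in the paper.

          # SVM baseline for scoring short "topic + verb" skill labels (invented data).
          from sklearn.pipeline import make_pipeline
          from sklearn.feature_extraction.text import TfidfVectorizer
          from sklearn.svm import LinearSVC

          skills = ["Java programming", "data entry typing", "welding operating",
                    "Python scripting", "invoice processing", "UX designing"]
          automatable = [0, 1, 1, 0, 1, 0]        # invented judgements (1 = highly automatable)

          clf = make_pipeline(
              TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),   # robust for two-token inputs
              LinearSVC(),
          )
          clf.fit(skills, automatable)
          print(clf.predict(["SQL querying", "document scanning"]))      # illustrative predictions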
       
  • Future Internet, Vol. 16, Pages 145: Design and Implementation of a
           Low-Cost, Linear Robotic Camera System, Targeting Greenhouse Plant Growth
           Monitoring

    • Authors: Kamarianakis, Perdikakis, Ioannis N. Daliakopoulos, Dimitrios M. Papadimitriou, Panagiotakis
      First page: 145
      Abstract: Automated greenhouse production systems frequently employ non-destructive techniques, such as computer vision-based methods, to accurately measure plant physiological properties and monitor crop growth. By utilizing an automated image acquisition and analysis system, it becomes possible to swiftly assess the growth and health of plants throughout their entire lifecycle. This valuable information can be utilized by growers, farmers, and crop researchers who are interested in self-cultivation procedures. At the same time, such a system can alleviate the burden of daily plant photography for human photographers and crop researchers, while facilitating automated plant image acquisition for crop status monitoring. Given these considerations, the aim of this study was to develop an experimental, low-cost, 1-DOF linear robotic camera system specifically designed for automated plant photography. As an initial evaluation of the proposed system, which targets future research endeavors of simplifying the process of plant growth monitoring in a small greenhouse, the experimental setup and precise plant identification and localization are demonstrated in this work through an application on lettuce plants, imaged mostly under laboratory conditions.
      Citation: Future Internet
      PubDate: 2024-04-23
      DOI: 10.3390/fi16050145
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 146: Median Absolute Deviation for BGP
           Anomaly Detection

    • Authors: Maria Andrea Romo-Chavero, Jose Antonio Cantoral-Ceballos, Jesus Arturo Pérez-Díaz, Carlos Martinez-Cagnazzo
      First page: 146
      Abstract: The stability and reliability of the global Internet infrastructure heavily rely on the Border Gateway Protocol (BGP), a crucial protocol that facilitates the exchange of routing information among various Autonomous Systems, ensuring seamless connectivity worldwide. However, BGP inherently possesses a susceptibility to abnormal routing behaviors, potentially leading to significant connectivity disruptions. Despite extensive efforts, accurately detecting and effectively mitigating such abnormalities persist as tough challenges. To tackle these, this article proposes a novel statistical approach employing the median absolute deviation under certain constraints to proactively detect anomalies in BGP. By applying advanced analysis techniques, this research offers a robust method for the early detection of anomalies, such as Internet worms, configuration errors, and link failures. This innovative approach has been empirically validated, achieving an accuracy rate of 90% and a precision of 95% in identifying these disruptions. This high level of precision and accuracy not only confirms the effectiveness of the statistical method employed but also marks a significant step forward for enhancing the stability and reliability of the global Internet infrastructure.
      Citation: Future Internet
      PubDate: 2024-04-25
      DOI: 10.3390/fi16050146
      Issue No: Vol. 16, No. 5 (2024)
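      The core statistic named in the entry above is the median absolute deviation; a generic sketch of MAD-based outlier flagging on a series of BGP update counts might look as follows (synthetic numbers and a textbook threshold, not the paper's data or its additional constraints).
```python
# Generic MAD-based anomaly flagging on a time series of BGP update counts.
# The series and threshold are illustrative; the paper applies further constraints.
import numpy as np

def mad_anomalies(series, threshold=3.5):
    x = np.asarray(series, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    if mad == 0:
        return np.zeros_like(x, dtype=bool)
    # 0.6745 makes the score comparable to a z-score under normality.
    modified_z = 0.6745 * (x - med) / mad
    return np.abs(modified_z) > threshold

updates_per_minute = [120, 130, 125, 118, 122, 900, 127, 119, 121, 1500]
print(mad_anomalies(updates_per_minute))
```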
       
  • Future Internet, Vol. 16, Pages 147: A New Generation of Collaborative
           Immersive Analytics on the Web: Open-Source Services to Capture, Process
           and Inspect Users’ Sessions in 3D Environments

    • Authors: Bruno Fanini, Giorgio Gosti
      First page: 147
      Abstract: Recording large amounts of users’ sessions performed through 3D applications may provide crucial insights into interaction patterns. Such data can be captured from interactive experiences in public exhibits, remote motion tracking equipment, immersive XR devices, lab installations or online web applications. Immersive analytics (IA) deals with the benefits and challenges of using immersive environments for data analysis and related design solutions to improve the quality and efficiency of the analysis process. Today, web technologies allow us to craft complex applications accessible through common browsers, and APIs like WebXR allow us to interact with and explore virtual 3D environments using immersive devices. These technologies can be used to access rich, immersive spaces but present new challenges related to performance, network bottlenecks and interface design. WebXR IA tools are still quite new in the literature: they present several challenges and leave quite unexplored the possibility of synchronous collaborative inspection. The opportunity to share the virtual space with remote analysts in fact improves sense-making tasks and offers new ways to discuss interaction patterns together, while inspecting captured records or data aggregates. Furthermore, with proper collaborative approaches, analysts are able to share machine learning (ML) pipelines and constructively discuss the outcomes and insights through tailored data visualization, directly inside immersive 3D spaces, using a web browser. Under the H2IOSC project, we present the first results of an open-source pipeline involving tools and services aimed at capturing, processing and inspecting interactive sessions collaboratively in WebXR with other analysts. The modular pipeline can be easily deployed in research infrastructures (RIs), remote dedicated hubs or local scenarios. The developed WebXR immersive analytics tool specifically offers advanced features for volumetric data inspection, query, annotation and discovery, alongside spatial interfaces. We assess the pipeline through users’ sessions captured during two remote public exhibits, by a WebXR application presenting generative AI content to visitors. We deployed the pipeline to assess the different services and to better understand how people interact with generative AI environments. The obtained results can be easily adopted for a multitude of case studies, interactive applications, remote equipment or online applications, to support or accelerate the detection of interaction patterns among remote analysts collaborating in the same 3D space.
      Citation: Future Internet
      PubDate: 2024-04-25
      DOI: 10.3390/fi16050147
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 148: HSM4SSL: Leveraging HSMs for Enhanced
           Intra-Domain Security

    • Authors: Yazan Aref, Abdelkader Ouda
      First page: 148
      Abstract: In a world where digitization is rapidly advancing, the security and privacy of intra-domain communication within organizations are of critical concern. The imperative to secure communication channels among physical systems has led to the deployment of various security approaches aimed at fortifying networking protocols. However, these approaches have typically been designed to secure protocols individually, lacking a holistic perspective on the broader challenge of intra-domain communication security. This omission raises fundamental concerns about the safety and integrity of intra-domain environments, where all communication occurs within a single domain. As a result, this paper introduces HSM4SSL, a comprehensive solution designed to address the evolving challenges of secure data transmission in intra-domain environments. By leveraging hardware security modules (HSMs), HSM4SSL aims to utilize the Secure Socket Layer (SSL) protocol within intra-domain environments to ensure data confidentiality, authentication, and integrity. In addition, solutions proposed by academic researchers and in the industry have not addressed the issue in a holistic and integrative manner, as they only apply to specific types of environments or servers and do not utilize all cryptographic operations for robust security. Thus, HSM4SSL bridges this gap by offering a unified and comprehensive solution that includes certificate management, key management practices, and various security services. HSM4SSL comprises three layers to provide a standardized interaction between software applications and HSMs. A performance evaluation was conducted comparing HSM4SSL with a benchmark tool for cryptographic operations. The results indicate that HSM4SSL achieved 33% higher requests per second (RPS) compared to OpenSSL, along with a 13% lower latency rate. Additionally, HSM4SSL efficiently utilizes CPU and network resources, outperforming OpenSSL in various aspects. These findings highlight the effectiveness and reliability of HSM4SSL in providing secure communication within intra-domain environments, thus addressing the pressing need for enhanced security mechanisms.
      Citation: Future Internet
      PubDate: 2024-04-26
      DOI: 10.3390/fi16050148
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 149: A Blockchain-Based Real-Time Power
           Balancing Service for Trustless Renewable Energy Grids

    • Authors: Andrea Calvagna, Giovanni Marotta, Giuseppe Pappalardo, Emiliano Tramontana
      First page: 149
      Abstract: We face a decentralized renewable energy production scenario, where a large number of small energy producers, i.e., prosumers, contribute to a common distributor entity, which resells energy directly to end-users. A major challenge for the distributor is to ensure power stability by constantly balancing produced versus consumed energy flows. In this context, being able to provide quick restorative actions in response to unpredictable unbalancing events is a must, as fluctuations are the norm for renewable energy sources. To this aim, high scalability and diversity of sources are crucial requirements for such balancing to be manageable. In this study, we explored the challenges and benefits of adopting a blockchain-based software architecture as a scalable, trustless interaction platform between prosumers’ smart energy meters and the distributor. Our prototype implements the energy load balancing service via smart contracts deployed in a real blockchain network with an increasing number of simulated prosumers. We show that the blockchain-based application reacted in a timely manner to energy imbalances for up to a few hundred prosumers.
      Citation: Future Internet
      PubDate: 2024-04-26
      DOI: 10.3390/fi16050149
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 150: Exploring Data Input Problems in
           Mixed Reality Environments: Proposal and Evaluation of Natural Interaction
           Techniques

    • Authors: Jingzhe Zhang, Tiange Chen, Wenjie Gong, Jiayue Liu, Jiangjie Chen
      First page: 150
      Abstract: Data input within mixed reality environments poses significant interaction challenges, notably in immersive visual analytics applications. This study assesses five numerical input techniques: three benchmark methods (Touch-Slider, Keyboard, Pinch-Slider) and two innovative multimodal techniques (Bimanual Scaling, Gesture and Voice). An experimental design was employed to compare these techniques’ input efficiency, accuracy, and user experience across varying precision and distance conditions. The findings reveal that multimodal techniques surpass slider methods in input efficiency yet are comparable to keyboards; the voice method excels in reducing cognitive load but falls short in accuracy; and the scaling method marginally leads in user satisfaction but imposes a higher physical load. Furthermore, this study outlines these techniques’ pros and cons and offers design guidelines and future research directions.
      Citation: Future Internet
      PubDate: 2024-04-27
      DOI: 10.3390/fi16050150
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 151: Effective Monoaural Speech Separation
           through Convolutional Top-Down Multi-View Network

    • Authors: Aye Nyein Aung, Che-Wei Liao, Jeih-Weih Hung
      First page: 151
      Abstract: Speech separation, sometimes known as the “cocktail party problem”, is the process of separating individual speech signals from an audio mixture that includes ambient noises and several speakers. The goal is to extract the target speech in this complicated sound scenario and either make it easier to understand or increase its quality so that it may be used in subsequent processing. Speech separation on overlapping audio data is important for many speech-processing tasks, including natural language processing, automatic speech recognition, and intelligent personal assistants. New speech separation algorithms are often built on a deep neural network (DNN) structure, which seeks to learn the complex relationship between the speech mixture and any specific speech source of interest. DNN-based speech separation algorithms outperform conventional statistics-based methods, although they typically need a lot of processing and/or a larger model size. This study presents a new end-to-end speech separation network called ESC-MASD-Net (effective speaker separation through convolutional multi-view attention and SuDoRM-RF network), which has relatively fewer model parameters compared with the state-of-the-art speech separation architectures. The network is partly inspired by the SuDoRM-RF++ network, which uses multiple time-resolution features with downsampling and resampling for effective speech separation. ESC-MASD-Net incorporates the multi-view attention and residual conformer modules into SuDoRM-RF++. Additionally, the U-Convolutional block in ESC-MASD-Net is refined with a conformer layer. Experiments conducted on the WHAM! dataset show that ESC-MASD-Net outperforms SuDoRM-RF++ significantly in the SI-SDRi metric. Furthermore, the use of the conformer layer has also improved the performance of ESC-MASD-Net.
      Citation: Future Internet
      PubDate: 2024-04-28
      DOI: 10.3390/fi16050151
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 152: A Hybrid Multi-Agent Reinforcement
           Learning Approach for Spectrum Sharing in Vehicular Networks

    • Authors: Mansoor Jamal, Zaib Ullah, Muddasar Naeem, Musarat Abbas, Antonio Coronato
      First page: 152
      Abstract: Efficient spectrum sharing is essential for maximizing data communication performance in Vehicular Networks (VNs). In this article, we propose a novel hybrid framework that leverages Multi-Agent Reinforcement Learning (MARL), thereby combining both centralized and decentralized learning approaches. This framework addresses scenarios where multiple vehicle-to-vehicle (V2V) links reuse the frequency spectrum already occupied by vehicle-to-infrastructure (V2I) links. We introduce the QMIX technique with the Deep Q Networks (DQNs) algorithm to facilitate collaborative learning and efficient spectrum management. The DQN technique uses a neural network to approximate the Q value function in high-dimensional state spaces, thus mapping input states to (action, Q value) tables that facilitate self-learning across diverse scenarios. Similarly, QMIX is a value-based technique for multi-agent environments. In the proposed model, each V2V agent, equipped with its own DQN, observes the environment, receives an observation, and obtains a common reward. The QMIX network receives Q values from all agents, considering both individual benefits and collective objectives. This mechanism leads to collective learning while V2V agents dynamically adapt to real-time conditions, thus improving VN performance. Our research findings highlight the potential of hybrid MARL models for dynamic spectrum sharing in VNs and pave the way for advanced cooperative learning strategies in vehicular communication environments. Furthermore, we conducted an in-depth exploration of the simulation environment and performance evaluation criteria, concluding with a comprehensive comparative analysis against cutting-edge solutions in the field. Simulation results show that the proposed framework performs efficiently against the benchmark architecture in terms of V2V transmission probability and V2I peak data transfer.
      Citation: Future Internet
      PubDate: 2024-04-28
      DOI: 10.3390/fi16050152
      Issue No: Vol. 16, No. 5 (2024)
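      To illustrate the QMIX component mentioned above, here is a heavily simplified PyTorch sketch of a monotonic mixing network that combines per-agent Q values using non-negative, state-conditioned weights; all dimensions and the toy forward pass are placeholders, not the authors' architecture.
```python
# Simplified QMIX-style mixing: per-agent Q values are combined with
# non-negative weights generated by hypernetworks conditioned on the global state.
import torch
import torch.nn as nn

class SimpleQMixer(nn.Module):
    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        self.n_agents = n_agents
        self.embed_dim = embed_dim
        # Hypernetworks producing mixing weights/biases from the global state.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Linear(state_dim, 1)

    def forward(self, agent_qs, state):
        # agent_qs: (batch, n_agents), state: (batch, state_dim)
        b = agent_qs.size(0)
        w1 = torch.abs(self.hyper_w1(state)).view(b, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(b, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(b, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(b, 1, 1)
        return (torch.bmm(hidden, w2) + b2).squeeze(-1).squeeze(-1)  # Q_total: (batch,)

mixer = SimpleQMixer(n_agents=4, state_dim=16)
q_tot = mixer(torch.randn(8, 4), torch.randn(8, 16))
print(q_tot.shape)  # torch.Size([8])
```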
       
  • Future Internet, Vol. 16, Pages 153: A Novel Traffic Classification
           Approach by Employing Deep Learning on Software-Defined Networking

    • Authors: Daniel Nuñez-Agurto, Walter Fuertes, Luis Marrone, Eduardo Benavides-Astudillo, Christian Coronel-Guerrero, Franklin Perez
      First page: 153
      Abstract: The ever-increasing diversity of Internet applications and the rapid evolution of network infrastructure due to emerging technologies have made network management more challenging. Effective traffic classification is critical for efficiently managing network resources and aligning with service quality and security demands. The centralized controller of software-defined networking provides a comprehensive network view, simplifying traffic analysis and offering direct programmability features. When combined with deep learning techniques, these characteristics enable the incorporation of intelligence into networks, leading to optimization and improved network management and maintenance. Therefore, this research aims to develop a model for traffic classification by application types and network attacks using deep learning techniques to enhance the quality of service and security in software-defined networking. The SEMMA method is employed to deploy the model, and the classifiers are trained with four algorithms, namely LSTM, BiLSTM, GRU, and BiGRU, using selected features from two public datasets. The results underscore the remarkable effectiveness of the GRU model in traffic classification. Hence, the outcomes achieved in this research surpass state-of-the-art methods and showcase the effectiveness of a deep learning model for traffic classification in an SDN environment.
      Citation: Future Internet
      PubDate: 2024-04-29
      DOI: 10.3390/fi16050153
      Issue No: Vol. 16, No. 5 (2024)
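      As a generic illustration of the GRU-based classifiers compared in the entry above, a minimal PyTorch sketch follows; the input size, hidden width, and class count are arbitrary placeholders rather than the paper's configuration.
```python
# Minimal GRU sequence classifier for flow-feature sequences (illustrative sizes only).
import torch
import torch.nn as nn

class GRUTrafficClassifier(nn.Module):
    def __init__(self, n_features=20, hidden=64, n_classes=8):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time_steps, n_features)
        _, h_n = self.gru(x)           # h_n: (1, batch, hidden)
        return self.head(h_n[-1])      # logits: (batch, n_classes)

model = GRUTrafficClassifier()
logits = model(torch.randn(16, 10, 20))
print(logits.shape)  # torch.Size([16, 8])
```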
       
  • Future Internet, Vol. 16, Pages 154: A Method for the Rapid Propagation of
           Emergency Event Notifications in a Long Vehicle Convoy

    • Authors: John David Sprunger, Alvin Lim, David M. Bevly
      First page: 154
      Abstract: Convoys composed of autonomous vehicles could improve the transportation and freight industries in several ways. One of the avenues of improvement is in fuel efficiency, where the vehicles maintain a close following distance to each other in order to reduce air resistance by way of the draft effect. While close following distances improve fuel efficiency, they also reduce both the margin of safety and the system’s tolerance to disturbances in relative position. The system’s tolerance to disturbances is known as string stability, where the error magnitude either grows or decays as it propagates rearward through the convoy. One of the major factors in a system’s string stability is its delay in sending state updates to other vehicles, the most pertinent being a hard braking maneuver. Both external sensors and vehicle-to-vehicle communication standards have relatively long delays between peer vehicle state changes and the information being actionable by the ego vehicle. The system presented here, called the Convoy Vehicular Ad Hoc Network (Convoy VANET), was designed to reliably propagate emergency event messages with low delay while maintaining reasonable channel efficiency. It accomplishes this using a combination of several techniques, notably relative position-based retransmission delays. Our results using Network Simulator 3 (ns3) show the system propagating messages down a 20-vehicle convoy in less than 100 ms even with more than a 35% message loss between vehicles that are not immediately adjacent. These simulation results show the potential for this kind of system in situations where emergency information must be disseminated quickly in low-reliability wireless environments.
      Citation: Future Internet
      PubDate: 2024-04-29
      DOI: 10.3390/fi16050154
      Issue No: Vol. 16, No. 5 (2024)
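      One common way to realize the relative position-based retransmission delays described above is to let vehicles farther from the sender rebroadcast sooner; the sketch below shows that generic idea with invented parameter values, not the Convoy VANET parameters.
```python
# Generic distance-weighted rebroadcast delay: vehicles farther from the sender
# wait less, so the message leapfrogs down the convoy quickly. Parameters are invented.
def retransmission_delay(distance_to_sender_m, comms_range_m=300.0, max_delay_ms=10.0):
    # Clamp the ratio to [0, 1]; farther vehicles get a smaller delay.
    ratio = min(max(distance_to_sender_m / comms_range_m, 0.0), 1.0)
    return max_delay_ms * (1.0 - ratio)

for d in (50, 150, 290):
    print(f"{d:>4} m -> wait {retransmission_delay(d):.1f} ms before rebroadcasting")
```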
       
  • Future Internet, Vol. 16, Pages 155: Novel Approach towards a Fully Deep
           Learning-Based IoT Receiver Architecture: From Estimation to Decoding

    • Authors: Matthew Boeding, Michael Hempel, Hamid Sharif
      First page: 155
      Abstract: As the Internet of Things (IoT) continues to expand, wireless communication is increasingly widespread across diverse industries and remote devices. This includes domains such as Operational Technology in the Smart Grid. Notably, there is a surge in resource-constrained devices leveraging wireless communication, especially with the advances of 5G/6G technology. Nevertheless, the transmission of wireless communications demands substantial power and computational resources, presenting a significant challenge to these devices and their operations. In this work, we propose the use of deep learning to improve the Bit Error Rate (BER) performance of Orthogonal Frequency Division Multiplexing (OFDM) wireless receivers. By improving the BER performance of these receivers, devices can transmit with less power, thereby improving IoT devices’ battery life. The architecture presented in this paper utilizes a depthwise Convolutional Neural Network (CNN) for channel estimation and demodulation, whereas a Graph Neural Network (GNN) is utilized for Low-Density Parity Check (LDPC) decoding, tested against a proposed (1998, 1512) LDPC code. Our results show higher performance than traditional receivers in both isolated tests for the CNN and GNN, and a combined end-to-end test with lower computational complexity than other proposed deep learning models. In terms of BER improvement, our proposed approach showed a 1 dB improvement in eliminating BER for QPSK models. Additionally, it improved 16-QAM Rician BER by five decades, 16-QAM LOS model BER by four decades, 64-QAM Rician BER by 2.5 decades, and 64-QAM LOS model BER by three decades.
      Citation: Future Internet
      PubDate: 2024-04-30
      DOI: 10.3390/fi16050155
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 156: A Fair Crowd-Sourced Automotive Data
           Monetization Approach Using Substrate Hybrid Consensus Blockchain

    • Authors: Cyril Naves Samuel, François Verdier, Severine Glock, Patricia Guitton-Ouhamou
      First page: 156
      Abstract: This work presents a private consortium blockchain-based automotive data monetization architecture implemented using the Substrate blockchain framework. The architecture is decentralized: crowd-sourced data from vehicles are collectively auctioned, ensuring data privacy and security. Smart contracts and off-chain worker interactions built alongside the blockchain make it interoperable with external systems to send or receive data. The work is deployed on a Kubernetes cloud platform and evaluated on parameters such as throughput, the hybrid consensus algorithms AuRa and BABE, and GRANDPA performance in terms of forks and scalability as the number of participating nodes increases. The hybrid consensus algorithms are studied in depth to understand their differences and performance in separating block creation (by AuRa or BABE) from chain finalization (through the GRANDPA protocol).
      Citation: Future Internet
      PubDate: 2024-04-30
      DOI: 10.3390/fi16050156
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 157: Realization of Authenticated One-Pass
           Key Establishment on RISC-V Micro-Controller for IoT Applications

    • Authors: Tuan-Kiet Dang, Khai-Duy Nguyen, Binh Kieu-Do-Nguyen, Trong-Thuc Hoang, Cong-Kha Pham
      First page: 157
      Abstract: Internet-of-things networks consist of multiple sensor devices spread over a wide area. In order to protect the data from unauthorized access and tampering, it is essential to ensure secure communication between the sensor devices and the central server. This security measure aims to guarantee authenticity, confidentiality, and data integrity. Unlike traditional computing systems, sensor node devices are often limited regarding memory and computing power. Lightweight communication protocols, such as LoRaWAN, were introduced to overcome these limitations. However, despite the lightweight feature, the protocol is vulnerable to different types of attacks. This proposal presents a highly secure key establishment protocol that combines two cryptography schemes: Elliptic Curve Qu–Vanstone and signcryption key encapsulation. The protocol provides a method to establish a secure channel that inherits the security properties of the two schemes. Also, it allows for fast rekeying with only one exchange message, significantly reducing the handshake complexity in low-bandwidth communication. In addition, the selected schemes complement each other and share the same mathematical operations in elliptic curve cryptography. Moreover, with the rise of a community-friendly platform like RISC-V, we implemented the protocol on a RISC-V system to evaluate its overheads regarding the cycle count and execution time.
      Citation: Future Internet
      PubDate: 2024-05-03
      DOI: 10.3390/fi16050157
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 158: Optimization of Wheelchair Control
           via Multi-Modal Integration: Combining Webcam and EEG

    • Authors: Lassaad Zaway, Nader Ben Amor, Jalel Ktari, Mohamed Jallouli, Larbi Chrifi Alaoui, Laurent Delahoche
      First page: 158
      Abstract: Even though Electric Powered Wheelchairs (EPWs) are a useful tool for meeting the needs of people with disabilities, some disabled people find it difficult to use regular EPWs that are joystick-controlled. Smart wheelchairs that use Brain–Computer Interface (BCI) technology present an efficient solution to this problem. This article presents a cutting-edge intelligent control wheelchair that is intended to improve user involvement and security. The suggested method combines facial expression analysis via a camera with EEG signal processing using the EMOTIV Insight EEG dataset. The system generates control commands by identifying specific EEG patterns linked to facial expressions such as eye blinking, winking left and right, and smiling. Simultaneously, the system uses computer vision algorithms and inertial measurements to analyze gaze direction in order to establish the user’s intended steering. The outcomes of the experiments prove that the proposed system is reliable and efficient in meeting the various requirements of people, presenting a positive development in the field of smart wheelchair technology.
      Citation: Future Internet
      PubDate: 2024-05-03
      DOI: 10.3390/fi16050158
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 159: Enhanced Multi-Task Traffic
           Forecasting in Beyond 5G Networks: Leveraging Transformer Technology and
           Multi-Source Data Fusion

    • Authors: Ibrahim Althamary, Rubbens Boisguene, Chih-Wei Huang
      First page: 159
      Abstract: Managing cellular networks in the Beyond 5G (B5G) era is a complex and challenging task requiring advanced deep learning approaches. Traditional models focusing on internet traffic (INT) analysis often fail to capture the rich temporal and spatial contexts essential for accurate INT predictions. Furthermore, these models do not account for the influence of external factors such as weather, news, and social trends. This study proposes a multi-source CNN-RNN (MSCR) model that leverages a rich dataset, including periodic, weather, news, and social data to address these limitations. This model enables the capture and fusion of diverse data sources for improved INT prediction accuracy. An advanced deep learning model, the transformer-enhanced CNN-RNN (TE-CNN-RNN), has been introduced. This model is specifically designed to predict INT data only. This model demonstrates the effectiveness of transformers in extracting detailed temporal-spatial features, outperforming conventional CNN-RNN models. The experimental results demonstrate that the proposed MSCR and TE-CNN-RNN models outperform existing state-of-the-art models for traffic forecasting. These findings underscore the transformative power of transformers for capturing intricate temporal-spatial features and the importance of multi-source data and deep learning techniques for optimizing cell site management in the B5G era.
      Citation: Future Internet
      PubDate: 2024-05-05
      DOI: 10.3390/fi16050159
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 160: Optimizing Requirements
           Prioritization for IoT Applications Using Extended Analytical Hierarchical
           Process and an Advanced Grouping Framework

    • Authors: Sarah Kaleem, Muhammad Asim, Mohammed El-Affendi, Muhammad Babar
      First page: 160
      Abstract: Effective requirement collection and prioritization are paramount given the inherently distributed nature of Internet of Things (IoT) applications. Current methods typically categorize IoT application requirements subjectively into inessential, desirable, and mandatory groups. This often leads to prioritization challenges, especially when dealing with requirements of equal importance and when the number of requirements grows. This increases the complexity of the Analytical Hierarchical Process (AHP) to O(n²). This research introduces a novel framework that integrates an enhanced AHP with an advanced grouping model to address these issues. This integrated approach mitigates the subjectivity found in traditional grouping methods and efficiently manages larger sets of requirements. The framework consists of two main modules: the Pre-processing Module and the Prioritization Module. The latter includes three units: the Grouping Processing Unit (GPU) for initial classification using a new grouping approach, the Review Processing Unit (RPU) for post-grouping assessment, and the AHP Processing Unit (APU) for final prioritization. This framework is evaluated through a detailed case study, demonstrating its ability to effectively streamline requirement prioritization in IoT applications, thereby enhancing design quality and operational efficiency.
      Citation: Future Internet
      PubDate: 2024-05-06
      DOI: 10.3390/fi16050160
      Issue No: Vol. 16, No. 5 (2024)
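      For readers unfamiliar with AHP, the sketch below derives a priority vector and consistency ratio from a small pairwise comparison matrix using the standard geometric-mean approximation; the matrix values are invented, and the paper's extended AHP and grouping logic are not reproduced.
```python
# Standard AHP weight derivation from a pairwise comparison matrix
# (geometric-mean approximation) plus a consistency check. Values are illustrative.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])          # 3 requirements compared pairwise

weights = np.prod(A, axis=1) ** (1 / A.shape[0])
weights /= weights.sum()                  # priority vector

lambda_max = float(np.mean((A @ weights) / weights))
n = A.shape[0]
ci = (lambda_max - n) / (n - 1)           # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # Saaty's random index
print("weights:", np.round(weights, 3), "CR:", round(ci / ri, 3))
```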
       
  • Future Internet, Vol. 16, Pages 161: AI-Empowered Multimodal Hierarchical
           Graph-Based Learning for Situation Awareness on Enhancing Disaster
           Responses

    • Authors: Jieli Chen, Kah Phooi Seng, Li Minn Ang, Jeremy Smith, Hanyue Xu
      First page: 161
      Abstract: Situational awareness (SA) is crucial in disaster response, enhancing the understanding of the environment. Social media, with its extensive user base, offers valuable real-time information for such scenarios. Although SA systems excel in extracting disaster-related details from user-generated content, a common limitation in prior approaches is their emphasis on single-modal extraction rather than embracing multi-modalities. This paper proposed a multimodal hierarchical graph-based situational awareness (MHGSA) system for comprehensive disaster event classification. Specifically, the proposed multimodal hierarchical graph contains nodes representing different disaster events and the features of the event nodes are extracted from the corresponding images and acoustic features. The proposed feature extraction modules with multi-branches for vision and audio features provide hierarchical node features for disaster events of different granularities, aiming to build a coarse-granularity classification task to constrain the model and enhance fine-granularity classification. The relationships between different disaster events in multi-modalities are learned by graph convolutional neural networks to enhance the system’s ability to recognize disaster events, thus enabling the system to fuse complex features of vision and audio. Experimental results illustrate the effectiveness of the proposed visual and audio feature extraction modules in single-modal scenarios. Furthermore, the MHGSA successfully fuses visual and audio features, yielding promising results in disaster event classification tasks.
      Citation: Future Internet
      PubDate: 2024-05-07
      DOI: 10.3390/fi16050161
      Issue No: Vol. 16, No. 5 (2024)
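      The graph convolution used in the entry above follows the standard normalized-adjacency propagation rule; a minimal NumPy sketch of one such layer on a toy event graph is shown below (random features and weights, not the MHGSA model).
```python
# One graph convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).
# Adjacency, features, and weights below are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # 4 event nodes
H = rng.normal(size=(4, 8))                  # node features (e.g., fused audio/visual)
W = rng.normal(size=(8, 16))                 # learnable weights

A_hat = A + np.eye(4)                        # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H_next = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
print(H_next.shape)  # (4, 16)
```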
       
  • Future Internet, Vol. 16, Pages 162: BPET: A Unified Blockchain-Based
           Framework for Peer-to-Peer Energy Trading

    • Authors: Caixiang Fan, Hamzeh Khazaei, Petr Musilek
      First page: 162
      Abstract: Recent years have witnessed a significant dispersion of renewable energy and the emergence of blockchain-enabled transactive energy systems. These systems facilitate direct energy trading among participants, cutting transmission losses, improving energy efficiency, and fostering renewable energy adoption. However, developing such a system is usually challenging and time-consuming due to the diversity of energy markets. The lack of a market-agnostic design hampers the widespread adoption of blockchain-based peer-to-peer energy trading globally. In this paper, we propose and develop a novel unified blockchain-based peer-to-peer energy trading framework, called BPET. This framework incorporates microservices and blockchain as the infrastructures and adopts a highly modular smart contract design so that developers can easily extend it by plugging in localized energy market rules and rapidly developing a customized blockchain-based peer-to-peer energy trading system. Additionally, we have developed the price formation mechanisms, e.g., the system marginal price calculation algorithm and the pool price calculation algorithm, to demonstrate the extensibility of the BPET framework. To validate the proposed solution, we have conducted a comprehensive case study using real trading data from the Alberta Electric System Operator. The experimental results confirm the system’s capability of processing energy trading transactions efficiently and effectively within the Alberta electricity wholesale market.
      Citation: Future Internet
      PubDate: 2024-05-07
      DOI: 10.3390/fi16050162
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 163: Blockchain-Based Zero-Trust Supply
           Chain Security Integrated with Deep Reinforcement Learning for Inventory
           Optimization

    • Authors: Zhe Ma, Xuhesheng Chen, Tiejiang Sun, Xukang Wang, Ying Cheng Wu, Mengjie Zhou
      First page: 163
      Abstract: Modern supply chain systems face significant challenges, including lack of transparency, inefficient inventory management, and vulnerability to disruptions and security threats. Traditional optimization methods often struggle to adapt to the complex and dynamic nature of these systems. This paper presents a novel blockchain-based zero-trust supply chain security framework integrated with deep reinforcement learning (SAC-rainbow) to address these challenges. The SAC-rainbow framework leverages the Soft Actor–Critic (SAC) algorithm with prioritized experience replay for inventory optimization and a blockchain-based zero-trust mechanism for secure supply chain management. The SAC-rainbow algorithm learns adaptive policies under demand uncertainty, while the blockchain architecture ensures secure, transparent, and traceable record-keeping and automated execution of supply chain transactions. An experiment using real-world supply chain data demonstrated the superior performance of the proposed framework in terms of reward maximization, inventory stability, and security metrics. The SAC-rainbow framework offers a promising solution for addressing the challenges of modern supply chains by leveraging blockchain, deep reinforcement learning, and zero-trust security principles. This research paves the way for developing secure, transparent, and efficient supply chain management systems in the face of growing complexity and security risks.
      Citation: Future Internet
      PubDate: 2024-05-10
      DOI: 10.3390/fi16050163
      Issue No: Vol. 16, No. 5 (2024)
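      The prioritized experience replay named in the entry above can be sketched as a buffer that samples transitions in proportion to their error-based priorities; this simplified version uses a plain array instead of the sum-tree typically used in practice, and all constants are illustrative.
```python
# Simplified proportional prioritized replay (no sum-tree, illustrative constants).
import random
import numpy as np

class PrioritizedReplay:
    def __init__(self, capacity=10000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error=1.0):
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0); self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size):
        probs = np.array(self.priorities) / sum(self.priorities)
        idx = np.random.choice(len(self.buffer), size=batch_size, p=probs)
        return [self.buffer[i] for i in idx], idx

buf = PrioritizedReplay()
for step in range(100):
    buf.add(("state", "action", random.random(), "next_state"), td_error=random.random())
batch, indices = buf.sample(8)
print(len(batch), indices[:3])
```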
       
  • Future Internet, Vol. 16, Pages 164: pFedBASC: Personalized Federated
           Learning with Blockchain-Assisted Semi-Centralized Framework

    • Authors: Yu Zhang, Xiaowei Peng, Hequn Xian
      First page: 164
      Abstract: As network technology advances, there is an increasing need for a trusted new-generation information management system. Blockchain technology provides a decentralized, transparent, and tamper-proof foundation. Meanwhile, data islands have become a significant obstacle for machine learning applications. Although federated learning (FL) ensures data privacy protection, server-side security concerns persist. Traditional methods have employed a blockchain system in FL frameworks to maintain a tamper-proof global model database. In this context, we propose a novel personalized federated learning (pFL) with blockchain-assisted semi-centralized framework, pFedBASC. This approach, tailored for the Internet of Things (IoT) scenarios, constructs a semi-centralized IoT structure and utilizes trusted network connections to support FL. We concentrate on designing the aggregation process and FL algorithm, as well as the block structure. To address data heterogeneity and communication costs, we propose a pFL method called FedHype. In this method, each client is assigned a compact hypernetwork (HN) alongside a normal target network (TN) whose parameters are generated by the HN. Clients pull together other clients’ HNs for local aggregation to personalize their TNs, reducing communication costs. Furthermore, FedHype can be integrated with other existing algorithms, enhancing its functionality. Experimental results reveal that pFedBASC effectively tackles data heterogeneity issues while maintaining positive accuracy, communication efficiency, and robustness.
      Citation: Future Internet
      PubDate: 2024-05-11
      DOI: 10.3390/fi16050164
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 165:
           Reconfigurable-Intelligent-Surface-Enhanced Dynamic Resource Allocation
           for the Social Internet of Electric Vehicle Charging Networks with
           Causal-Structure-Based Reinforcement Learning

    • Authors: Yuzhu Zhang, Hao Xu
      First page: 165
      Abstract: Charging stations and electric vehicle (EV) charging networks signify a significant advancement in technology as a frontier application of the Social Internet of Things (SIoT), presenting both challenges and opportunities for current 6G wireless networks. One primary challenge in this integration is limited wireless network resources, particularly when serving a large number of users within distributed EV charging networks in the SIoT. Factors such as congestion during EV travel, varying EV user preferences, and uncertainties in decision-making regarding charging station resources significantly impact system operation and network resource allocation. To address these challenges, this paper develops a novel framework harnessing the potential of emerging technologies, specifically reconfigurable intelligent surfaces (RISs) and causal-structure-enhanced asynchronous advantage actor–critic (A3C) reinforcement learning techniques. This framework aims to optimize resource allocation, thereby enhancing communication support within EV charging networks. Through the integration of RIS technology, which enables control over electromagnetic waves, and the application of causal reinforcement learning algorithms, the framework dynamically adjusts resource allocation strategies to accommodate evolving conditions in EV charging networks. An essential aspect of this framework is its ability to simultaneously meet real-world social requirements, such as ensuring efficient utilization of network resources. Numerical simulation results validate the effectiveness and adaptability of this approach in improving wireless network efficiency and enhancing user experience within the SIoT context. Through these simulations, it becomes evident that the developed framework offers promising solutions to the challenges posed by integrating the SIoT with EV charging networks.
      Citation: Future Internet
      PubDate: 2024-05-11
      DOI: 10.3390/fi16050165
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 166: Blockchain-Enabled Secure and
           Interoperable Authentication Scheme for Metaverse Environments

    • Authors: Sonali Patwe, Sunil B. Mane
      First page: 166
      Abstract: The metaverse, which amalgamates physical and virtual realms for diverse social activities, has been the focus of extensive application development by organizations, research institutes, and companies. However, these applications are often isolated, employing distinct authentication methods across platforms. Achieving interoperable authentication is crucial when avatars traverse different metaverses, in order to mitigate security concerns like impersonation, mutual authentication, replay, and server spoofing. To address these issues, we propose a blockchain-enabled secure and interoperable authentication scheme. This mechanism uniquely identifies users in the physical world as well as avatars, facilitating seamless navigation across verses. Our proposal is substantiated through informal security analyses, employing the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool, the real-or-random (ROR) model, and Burrows–Abadi–Needham (BAN) logic, showcasing effectiveness against a broad spectrum of security threats. Comparative assessments against similar schemes demonstrate our solution’s superiority in terms of communication costs, computation costs, and security features. Consequently, our blockchain-enabled, interoperable, and secure authentication scheme stands as a robust solution for ensuring security in metaverse environments.
      Citation: Future Internet
      PubDate: 2024-05-11
      DOI: 10.3390/fi16050166
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 167: A Hybrid Semi-Automated Workflow for
           Systematic and Literature Review Processes with Large Language Model
           Analysis

    • Authors: Anjia Ye, Ananda Maiti, Matthew Schmidt, Scott J. Pedersen
      First page: 167
      Abstract: Systematic reviews (SRs) are a rigorous method for synthesizing empirical evidence to answer specific research questions. However, they are labor-intensive because of their collaborative nature, strict protocols, and typically large number of documents. Large language models (LLMs) and their applications such as GPT-4/ChatGPT have the potential to reduce the human workload of the SR process while maintaining accuracy. We propose a new hybrid methodology that combines the strengths of LLMs and humans using the ability of LLMs to summarize large bodies of text autonomously and extract key information. This is then used by a researcher to make inclusion/exclusion decisions quickly. This process replaces the typical manually performed title/abstract screening, full-text screening, and data extraction steps in an SR while keeping a human in the loop for quality control. We developed a semi-automated LLM-assisted (Gemini-Pro) workflow with a novel prompt development strategy. This involves extracting three categories of information, namely identifier, verifier, and data field (IVD), from the formatted documents. We present a case study where our hybrid approach reduced errors compared with a human-only SR. The hybrid workflow improved the accuracy of the case study by identifying 6/390 (1.53%) articles that were misclassified by the human-only process. It also completely matched the human-only decisions for the remaining 384 articles. Given the rapid advances in LLM technology, these results will undoubtedly improve over time.
      Citation: Future Internet
      PubDate: 2024-05-12
      DOI: 10.3390/fi16050167
      Issue No: Vol. 16, No. 5 (2024)
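      The entry above describes prompting an LLM to extract identifier, verifier, and data-field (IVD) information; a generic prompt-construction sketch follows, in which the schema, wording, and truncation limit are hypothetical and not the authors' actual prompt.
```python
# Hypothetical prompt builder for identifier/verifier/data-field (IVD) extraction.
# The schema and wording are illustrative, not the authors' actual prompt.
import json

def build_ivd_prompt(article_text, inclusion_criteria):
    schema = {
        "identifier": "title, first author, year",
        "verifier": "one sentence showing the article meets (or fails) each criterion",
        "data_field": "population, intervention, outcomes reported",
    }
    return (
        "You assist with systematic review screening.\n"
        f"Inclusion criteria: {inclusion_criteria}\n"
        "Return JSON with exactly these keys:\n"
        f"{json.dumps(schema, indent=2)}\n\n"
        f"Article:\n{article_text[:4000]}"   # truncate long full texts
    )

print(build_ivd_prompt("Title: ... Abstract: ...", "RCTs on workplace ergonomics"))
```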
       
  • Future Internet, Vol. 16, Pages 168: Evaluating Realistic Adversarial
           Attacks against Machine Learning Models for Windows PE Malware Detection

    • Authors: Imran, Appice, Malerba
      First page: 168
      Abstract: During the last decade, the cybersecurity literature has conferred a high-level role on machine learning as a powerful security paradigm to recognise malicious software in modern anti-malware systems. However, a non-negligible limitation of machine learning methods used to train decision models is that adversarial attacks can easily fool them. Adversarial attacks are attack samples produced by carefully manipulating samples at test time to violate model integrity by causing detection mistakes. In this paper, we analyse the performance of five realistic target-based adversarial attacks, namely Extend, Full DOS, Shift, FGSM padding + slack and GAMMA, against two machine learning models, namely MalConv and LGBM, learned to recognise Windows Portable Executable (PE) malware files. Specifically, MalConv is a Convolutional Neural Network (CNN) model learned from the raw bytes of Windows PE files. LGBM is a Gradient-Boosted Decision Tree model that is learned from features extracted through the static analysis of Windows PE files. Notably, the attack methods and machine learning models considered in this study are state-of-the-art methods broadly used in the machine learning literature for Windows PE malware detection tasks. In addition, we explore the effect of accounting for adversarial attacks on securing machine learning models through the adversarial training strategy. Therefore, the main contributions of this article are as follows: (1) We extend existing machine learning studies that commonly consider small datasets to explore the evasion ability of state-of-the-art Windows PE attack methods by increasing the size of the evaluation dataset. (2) To the best of our knowledge, we are the first to carry out an exploratory study to explain how the considered adversarial attack methods change Windows PE malware to fool an effective decision model. (3) We explore the performance of the adversarial training strategy as a means to secure effective decision models against adversarial Windows PE malware files generated with the considered attack methods. Hence, the study explains how GAMMA can actually be considered the most effective evasion method in the performed comparative analysis. On the other hand, the study shows that the adversarial training strategy can actually help in recognising adversarial PE malware generated with GAMMA by also explaining how it changes model decisions.
      Citation: Future Internet
      PubDate: 2024-05-12
      DOI: 10.3390/fi16050168
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 169: Blockchain and Smart Contracts for
           Digital Copyright Protection

    • Authors: Franco Frattolillo
      First page: 169
      Abstract: In a global context characterized by a pressing need to find a solution to the problem of digital copyright protection, buyer-seller watermarking protocols based on asymmetric fingerprinting and adopting a “buyer-friendly” approach have proven effective in addressing such a problem. They can ensure high levels of usability and security. However, they usually resort to trusted third parties (TTPs) to guarantee the protection process, and this is often perceived as a relevant drawback since TTPs may cause conspiracy or collusion problems, besides the fact that they are generally considered as some sort of “big brother”. This paper presents a buyer-seller watermarking protocol that can achieve the right compromise between usability and security without employing a TTP. The protocol is built around previous experiences conducted in the field of protocols based on the buyer-friendly approach. Its peculiarity consists of exploiting smart contracts executed within a blockchain to implement preset and immutable rules that run automatically under specific conditions without control from some kind of central authority. The result is a simple, usable, and secure watermarking protocol able to do without TTPs.
      Citation: Future Internet
      PubDate: 2024-05-14
      DOI: 10.3390/fi16050169
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 170: Indoor Infrastructure Maintenance
           Framework Using Networked Sensors, Robots, and Augmented Reality Human
           Interface

    • Authors: Alireza Fath, Nicholas Hanna, Yi Liu, Scott Tanch, Tian Xia, Dryver Huston
      First page: 170
      Abstract: Sensing and cognition by homeowners and technicians for home maintenance are prime examples of human–building interaction. Damage, decay, and pest infestation present signals that humans interpret and then act upon to remedy and mitigate. The maintenance cognition process has direct effects on sustainability and economic vitality, as well as the health and well-being of building occupants. While home maintenance practices date back to antiquity, they readily submit to augmentation and improvement with modern technologies. This paper describes the use of networked smart technologies embedded with machine learning (ML) and presented in electronic formats to better inform homeowners and occupants about safety and maintenance issues, as well as recommend courses of remedial action. The demonstrated technologies include robotic sensing in confined areas, LiDAR scans of structural shape and deformation, moisture and gas sensing, water leak detection, network embedded ML, and augmented reality interfaces with multi-user teaming capabilities. The sensor information passes through a private local dynamic network to processors with neural network pattern recognition capabilities to abstract the information, which then feeds to humans through augmented reality and conventional smart device interfaces. This networked sensor system serves as a testbed and demonstrator for home maintenance technologies, for what can be termed Home Maintenance 4.0.
      Citation: Future Internet
      PubDate: 2024-05-15
      DOI: 10.3390/fi16050170
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 171: SmartDED: A Blockchain- and Smart
           Contract-Based Digital Electronic Detonator Safety Supervision System

    • Authors: Na Liu, Wei-Tek Tsai
      First page: 171
      Abstract: Digital electronic detonators, as civil explosives, are of prime importance to the safety of people’s lives and property during production and operation. Therefore, the Ministry of Industry and Information Technology and the Ministry of Public Security of the People’s Republic of China have extremely high requirements for their essential safety. Existing schemes are vulnerable to tampering and single points of failure, which makes it difficult to trace unqualified digital electronic detonators and to assign responsibility for detonator accidents. This paper presents a digital electronic detonator safety supervision system based on a consortium blockchain. To achieve dynamic supply chain supervision, we propose a novel digital electronic detonator supervision model together with a “three codes in one” approach. We also propose a blockchain-based system that employs smart contracts to achieve efficient traceability and ensure security. We implemented the proposed model using a consortium blockchain platform and report its cost. The evaluation results validate that the proposed system is efficient.
      Citation: Future Internet
      PubDate: 2024-05-16
      DOI: 10.3390/fi16050171
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 172: Using Optimization Techniques in
           Grammatical Evolution

    • Authors: Ioannis G. Tsoulos, Alexandros Tzallas, Evangelos Karvounis
      First page: 172
      Abstract: The Grammatical Evolution technique has been successfully applied to a wide range of problems in various scientific fields. However, in many cases, techniques that make use of Grammatical Evolution become trapped in local minima of the objective problem and fail to reach the optimal solution. One simple method to tackle such situations is the use of hybrid techniques, where local minimization algorithms are used in conjunction with the main algorithm. However, Grammatical Evolution is an integer optimization problem and, as a consequence, local optimization techniques must be formulated to be applicable to it. In the current work, a modified version of the Simulated Annealing algorithm is used as a local optimization procedure in Grammatical Evolution. This approach was tested on constructed neural networks, and a remarkable improvement in the experimental results was observed, both on classification data and in data fitting cases.
      Citation: Future Internet
      PubDate: 2024-05-16
      DOI: 10.3390/fi16050172
      Issue No: Vol. 16, No. 5 (2024)
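      To illustrate the kind of integer-space local search referred to above, here is a generic simulated annealing sketch over an integer chromosome; the toy objective and cooling schedule are placeholders, and the paper's modified algorithm and Grammatical Evolution mapping are not reproduced.
```python
# Generic simulated annealing over an integer chromosome (toy objective, illustrative schedule).
import math
import random

def toy_objective(genome):
    return sum((g - 7) ** 2 for g in genome)   # minimized when every gene equals 7

def anneal(genome, max_value=255, t0=10.0, cooling=0.995, steps=5000):
    best = current = list(genome)
    t = t0
    for _ in range(steps):
        neighbor = list(current)
        neighbor[random.randrange(len(neighbor))] = random.randint(0, max_value)
        delta = toy_objective(neighbor) - toy_objective(current)
        # Accept improvements always, worse moves with temperature-dependent probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = neighbor
            if toy_objective(current) < toy_objective(best):
                best = list(current)
        t *= cooling
    return best

print(anneal([random.randint(0, 255) for _ in range(8)]))
```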
       
  • Future Internet, Vol. 16, Pages 173: Machine Learning Strategies for
           Reconfigurable Intelligent Surface-Assisted Communication Systems—A
           Review

    • Authors: Roilhi F. Ibarra-Hernández, Francisco R. Castillo-Soria, Carlos A. Gutiérrez, Abel García-Barrientos, Luis Alberto Vásquez-Toledo, J. Alberto Del-Puerto-Flores
      First page: 173
      Abstract: Machine learning (ML) algorithms have been widely used to improve the performance of telecommunications systems, including reconfigurable intelligent surface (RIS)-assisted wireless communication systems. The RIS can be considered a key part of the backbone of sixth-generation (6G) communication, mainly due to its electromagnetic properties for controlling the propagation of signals in the wireless channel. ML-optimized RIS-assisted wireless communication systems can be an effective alternative to mitigate the degradation suffered by the signal in the wireless channel, providing significant advantages in system performance. However, the variety of approaches, system configurations, and channel conditions makes it difficult to determine the best technique or group of techniques for effectively implementing an optimal solution. This paper presents a comprehensive review of the reported frameworks in the literature that apply ML and RISs to improve the overall performance of the wireless communication system. This paper compares the ML strategies that can be used to address the RIS-assisted system design. The systems are classified according to the ML method, the databases used, the implementation complexity, and the reported performance gains. Finally, we shed light on the challenges and opportunities in designing and implementing future RIS-assisted wireless communication systems based on ML strategies.
      Citation: Future Internet
      PubDate: 2024-05-17
      DOI: 10.3390/fi16050173
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 174: TQU-SLAM Benchmark Dataset for
           Comparative Study to Build Visual Odometry Based on Extracted Features
           from Feature Descriptors and Deep Learning

    • Authors: Thi-Hao Nguyen, Van-Hung Le, Huu-Son Do, Trung-Hieu Te, Van-Nam Phan
      First page: 174
      Abstract: Data enrichment for training visual SLAM and VO construction models using deep learning (DL) is an urgent problem in computer vision today. DL requires a large amount of data to train a model, and more data covering many different contexts and conditions yields a more accurate visual SLAM and VO construction model. In this paper, we introduce the TQU-SLAM benchmark dataset, which includes 160,631 RGB-D frame pairs. It was collected from the corridors of three interconnected buildings with a total length of about 230 m. The ground-truth data of the TQU-SLAM benchmark dataset were prepared manually, including 6-DOF camera poses, 3D point cloud data, intrinsic parameters, and the transformation matrix between the camera coordinate system and the real world. We also tested the TQU-SLAM benchmark dataset using the PySLAM framework with traditional features such as SHI_TOMASI, SIFT, SURF, ORB, ORB2, AKAZE, KAZE, and BRISK and with features extracted by DL methods such as VGG, DPVO, and TartanVO. The camera pose estimation results are evaluated, and we show that the ORB2 features achieve the best results (Errd = 5.74 mm), while the SHI_TOMASI feature achieves the best ratio of frames with detected keypoints (rd=98.97%). At the same time, we also present and analyze the challenges of the TQU-SLAM benchmark dataset for building visual SLAM and VO systems.
      Citation: Future Internet
      PubDate: 2024-05-17
      DOI: 10.3390/fi16050174
      Issue No: Vol. 16, No. 5 (2024)
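      As a quick reminder of what ORB keypoint extraction (the best-performing feature in the entry above) looks like in practice, a minimal OpenCV sketch on a synthetic frame follows; the random image and nfeatures value stand in for a real RGB-D frame and the PySLAM configuration.
```python
# Minimal ORB keypoint/descriptor extraction with OpenCV on a synthetic grayscale frame.
# The random image and nfeatures value are placeholders for a real camera frame.
import cv2
import numpy as np

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)   # stand-in for a camera frame
orb = cv2.ORB_create(nfeatures=1000)
keypoints, descriptors = orb.detectAndCompute(frame, None)

print(f"detected {len(keypoints)} keypoints; "
      f"descriptor shape: {None if descriptors is None else descriptors.shape}")
```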
       
  • Future Internet, Vol. 16, Pages 175: Chatbots in Airport Customer
           Service—Exploring Use Cases and Technology Acceptance

    • Authors: Isabel Auer, Stephan Schlögl, Gundula Glowka
      First page: 175
      Abstract: Throughout the last decade, chatbots have gained widespread adoption across various industries, including healthcare, education, business, e-commerce, and entertainment. These types of artificial, usually cloud-based, agents have also been used in airport customer service, although there has been limited research concerning travelers’ perspectives on this rather techno-centric approach to handling inquiries. Consequently, the goal of the presented study was to tackle this research gap and explore potential use cases for chatbots at airports, as well as investigate travelers’ acceptance of said technology. We employed an extended version of the Technology Acceptance Model considering Perceived Usefulness, Perceived Ease of Use, Trust, and Perceived Enjoyment as predictors of Behavioral Intention, with Affinity for Technology as a potential moderator. A total of n=191 travelers completed our survey. The results show that Perceived Usefulness, Trust, Perceived Ease of Use, and Perceived Enjoyment positively correlate with the Behavioral Intention to use a chatbot for airport customer service inquiries, with Perceived Usefulness showing the highest impact. Travelers’ Affinity for Technology, on the other hand, does not seem to have any significant effect.
      Citation: Future Internet
      PubDate: 2024-05-17
      DOI: 10.3390/fi16050175
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 176: MetaSSI: A Framework for Personal
           Data Protection, Enhanced Cybersecurity and Privacy in Metaverse Virtual
           Reality Platforms

    • Authors: Faisal Fiaz, Syed Muhammad Sajjad, Zafar Iqbal, Muhammad Yousaf, Zia Muhammad
      First page: 176
      Abstract: The Metaverse brings together components of parallel processing computing platforms, the digital development of physical systems, cutting-edge machine learning, and virtual identity to uncover a fully digitalized environment with equal properties to the real world. It possesses more rigorous requirements for connection, including safe access and data privacy, which are necessary with the advent of Metaverse technology. Traditional, centralized, and network-centered solutions fail to provide a resilient identity management solution. There are multifaceted security and privacy issues that hinder the secure adoption of this game-changing technology in contemporary cyberspace. Moreover, there is a need to dedicate efforts towards a secure-by-design Metaverse that protects the confidentiality, integrity, and privacy of the personally identifiable information (PII) of users. In this research paper, we propose a logical substitute for established centralized identity management systems in compliance with the complexity of the Metaverse. This research proposes a sustainable Self-Sovereign Identity (SSI), a fully decentralized identity management system to mitigate PII leaks and corresponding cyber threats on all multiverse platforms. The principle of the proposed framework ensures that the users are the only custodians and proprietors of their own identities. In addition, this article provides a comprehensive approach to the implementation of the SSI principles to increase interoperability and trustworthiness in the Metaverse. Finally, the proposed framework is validated using mathematical modeling and proved to be stringent and resilient against modern-day cyber attacks targeting Metaverse platforms.
      Citation: Future Internet
      PubDate: 2024-05-18
      DOI: 10.3390/fi16050176
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 177: Teamwork Conflict Management Training
           and Conflict Resolution Practice via Large Language Models

    • Authors: Sakhi Aggrawal, Alejandra J. Magana
      First page: 177
      Abstract: This study implements a conflict management training approach guided by principles of transformative learning and conflict management practice simulated via an LLM. Transformative learning is more effective when learners are engaged mentally and behaviorally in learning experiences. Correspondingly, the conflict management training approach involved a three-step procedure consisting of a learning phase, a practice phase enabled by an LLM, and a reflection phase. Fifty-six students enrolled in a systems development course were exposed to the transformative learning approach to conflict management so they would be better prepared to address any potential conflicts within their teams as they approached a semester-long software development project. The study investigated the following: (1) How did the training and practice affect students’ level of confidence in addressing conflict? (2) Which conflict management styles did students use in the simulated practice? (3) Which strategies did students employ when engaging with the simulated conflict? The findings indicate that: (1) 65% of the students significantly increased in confidence in managing conflict by demonstrating collaborative, compromising, and accommodative approaches; (2) 26% of the students slightly increased in confidence by implementing collaborative and accommodative approaches; and (3) 9% of the students did not increase in confidence, as they were already confident in applying collaborative approaches. The three most frequently used strategies for managing conflict were identifying the root cause of the problem, actively listening, and being specific and objective in explaining their concerns.
      Citation: Future Internet
      PubDate: 2024-05-19
      DOI: 10.3390/fi16050177
      Issue No: Vol. 16, No. 5 (2024)
       
  • Future Internet, Vol. 16, Pages 178: Validation of Value-Driven Token
           Economy: Focus on Blockchain Content Platform

    • Authors: Young Sook Kim, Seng-Phil Hong, Marko Majer
      First page: 178
      Abstract: This study explores the architectural framework of a value-driven token economy on a blockchain content platform and critically evaluates the relationship between blockchain’s decentralization and sustainable economic practices. The existing literature often glorifies the rapid market expansion of cryptocurrencies but overlooks how underlying blockchain technology can fundamentally enhance content platforms through a more structured user engagement and equitable reward system. This study proposes a new token economy architecture by adopting the triple-bottom-line (TBL) framework and validates its practicality and effectiveness through an analytic-hierarchy-process (AHP) survey of industry experts. The study shows that the most influential factor in a successful token economy is not profit maximization but fostering a user-centric community where engagement and empowerment are prioritized. This shift can be expected to combine blockchain technology with meaningful economic innovation by challenging traditional profit-driven business models and refocusing on sustainability and user value.
      Citation: Future Internet
      PubDate: 2024-05-20
      DOI: 10.3390/fi16050178
      Issue No: Vol. 16, No. 5 (2024)
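
      As a rough illustration of the analytic-hierarchy-process step mentioned in the abstract above, the Python sketch below derives priority weights from a pairwise comparison matrix and checks the consistency ratio. The criteria names and comparison values are invented for illustration and are not taken from the study.

        import numpy as np

        # Hypothetical token-economy criteria (not the study's actual factors).
        criteria = ["user engagement", "profit", "sustainability"]

        # Pairwise comparisons A[i][j]: importance of criterion i over j (Saaty 1-9 scale).
        A = np.array([[1.0, 3.0, 2.0],
                      [1/3, 1.0, 0.5],
                      [0.5, 2.0, 1.0]])

        # Priority vector: principal eigenvector of A, normalized to sum to 1.
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()

        # Consistency ratio; values below 0.1 are conventionally acceptable (RI = 0.58 for n = 3).
        n = A.shape[0]
        cr = ((eigvals.real[k] - n) / (n - 1)) / 0.58

        for name, w in zip(criteria, weights):
            print(f"{name}: {w:.3f}")
        print(f"consistency ratio: {cr:.3f}")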
       
  • Future Internet, Vol. 16, Pages 107: Optimizing Hybrid V2X Communication:
           An Intelligent Technology Selection Algorithm Using 5G, C-V2X PC5 and DSRC
           

    • Authors: Ihtisham Khalid, Vasilis Maglogiannis, Dries Naudts, Adnan Shahid, Ingrid Moerman
      First page: 107
      Abstract: Cooperative communications advancements in Vehicular-to-Everything (V2X) are bolstering the autonomous driving paradigm. V2X nodes are connected through communication technology, such as a short-range communication mode (Dedicated Short Range Communication (DSRC) and Cellular-V2X) or a long-range communication mode (Uu). Conventional vehicular networks employ static wireless vehicular communication technology without considering the traffic load on any individual V2X communication technology and the traffic dynamics in the vicinity of the V2X node, and are hence inefficient. In this study, we investigate hybrid V2X communication and propose an autonomous and intelligent technology selection algorithm using a decision tree. The algorithm uses the information from the received Cooperative Intelligent Transport Systems (C-ITS) Cooperative Awareness Messages (CAMs) to collect statistics such as inter-vehicular distance, one-way end-to-end latency and CAM density. These statistics are then used as input for the decision tree for selecting the appropriate technology (DSRC, C-V2X PC5 or 5G) for the subsequent scheduled C-ITS message transmission. The assessment of the intelligent hybrid V2X algorithm’s performance in our V2X test setup demonstrates enhancements in one-way end-to-end latency, reliability, and packet delivery rate when contrasted with the conventional utilization of static technology.
      Citation: Future Internet
      PubDate: 2024-03-23
      DOI: 10.3390/fi16040107
      Issue No: Vol. 16, No. 4 (2024)
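
      To make the selection logic above concrete, here is a minimal, hand-written decision-tree sketch that picks a technology from CAM-derived statistics. The thresholds and function name are illustrative assumptions; the paper derives its decision tree from measured data rather than fixed rules.

        def select_technology(inter_vehicle_distance_m, latency_ms, cam_density):
            """Choose the radio technology for the next scheduled C-ITS message."""
            if inter_vehicle_distance_m > 300:
                # Beyond typical short-range coverage: fall back to the long-range Uu link.
                return "5G Uu"
            if cam_density > 50 and latency_ms > 20:
                # Short-range channel looks congested: offload to the cellular sidelink.
                return "C-V2X PC5"
            return "DSRC"

        print(select_technology(inter_vehicle_distance_m=120, latency_ms=8, cam_density=12))  # DSRC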
       
  • Future Internet, Vol. 16, Pages 108: A Method for Mapping V2X
           Communication Requirements to Highly Automated and Autonomous Vehicle
           Functions

    • Authors: Arpad Takacs, Tamas Haidegger
      First page: 108
      Abstract: The significance of V2X (Vehicle-to-Everything) technology in the context of highly automated and autonomous vehicles can hardly be overestimated. While V2X is not considered a standalone technology for achieving high automation, it is recognized as a safety-redundant component in automated driving systems. This article aims to systematically assess the requirements towards V2X input data to highly automated and autonomous systems that can individually, or in combination with other sensors, enable certain levels of autonomy. It addresses the assessment of V2X input data requirements for different levels of autonomy defined by SAE International, regulatory challenges, scalability issues in hybrid environments, and the potential impact of Internet of Things (IoT)-based information in non-automotive technical fields. A method is proposed for assessing the applicability of V2X at various levels of automation based on system complexity. The findings provide valuable insights for the development, deployment and regulation of V2X-enabled automated systems, ultimately contributing to enhanced road safety and efficient mobility.
      Citation: Future Internet
      PubDate: 2024-03-25
      DOI: 10.3390/fi16040108
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 109: MOLM: Alleviating Congestion through
           Multi-Objective Simulated Annealing-Based Load Balancing Routing in LEO
           Satellite Networks

    • Authors: Yihu Zhou, Haiming Chen, Zhibin Dou
      First page: 109
      Abstract: In satellite networks, existing congestion resolution methods do not consider the predictability and stability of paths, leading to frequent path switches and high maintenance costs. In this regard, we propose a novel congestion resolution approach, named MOLM, which introduces a continuous neighbor set during path updates. This set includes nodes capable of establishing sustainable connections with the predecessors and successors of congested nodes. Combined with a multi-objective simulated annealing framework, MOLM iteratively derives an optimal selection from this set to replace congested nodes. Additionally, we employ a Fast Reroute mechanism based on backup paths (FRR-BP) to address node failures. The simulation results indicate that the optimal node endows the new path with optimal path stability and path latency.
      Citation: Future Internet
      PubDate: 2024-03-25
      DOI: 10.3390/fi16040109
      Issue No: Vol. 16, No. 4 (2024)
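
      The following sketch, under simplifying assumptions, shows how a congested node could be replaced by iterating over a continuous neighbor set with simulated annealing and a weighted stability/latency objective. Candidate values, weights, and the cooling schedule are illustrative and not taken from the paper.

        import math, random

        # Hypothetical continuous-neighbor candidates: (name, path stability, path latency in ms).
        candidates = [("sat-a", 0.92, 38.0), ("sat-b", 0.85, 31.0),
                      ("sat-c", 0.97, 44.0), ("sat-d", 0.80, 29.0)]

        def cost(c, w_stab=0.5, w_lat=0.5):
            # Lower is better: penalize instability and (scaled) latency.
            return w_stab * (1.0 - c[1]) + w_lat * c[2] / 100.0

        def anneal(cands, t=1.0, cooling=0.95, steps=200):
            current = best = random.choice(cands)
            for _ in range(steps):
                nxt = random.choice(cands)
                delta = cost(nxt) - cost(current)
                if delta < 0 or random.random() < math.exp(-delta / t):
                    current = nxt
                if cost(current) < cost(best):
                    best = current
                t *= cooling
            return best

        print(anneal(candidates))  # replacement chosen for the congested node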
       
  • Future Internet, Vol. 16, Pages 110: Cloud Security Using Fine-Grained
           Efficient Information Flow Tracking

    • Authors: Fahad Alqahtani, Mohammed Almutairi, Frederick T. Sheldon
      First page: 110
      Abstract: This study provides a comprehensive review and comparative analysis of existing Information Flow Tracking (IFT) tools which underscores the imperative for mitigating data leakage in complex cloud systems. Traditional methods impose significant overhead on Cloud Service Providers (CSPs) and management activities, prompting the exploration of alternatives such as IFT. By augmenting consumer data subsets with security tags and deploying a network of monitors, IFT facilitates the detection and prevention of data leaks among cloud tenants. The research here has focused on preventing misuse, such as the exfiltration and/or extrusion of sensitive data in the cloud as well as the role of anonymization. The CloudMonitor framework was envisioned and developed to study and design mechanisms for transparent and efficient IFT (eIFT). The framework enables the experimentation, analysis, and validation of innovative methods for providing greater control to cloud service consumers (CSCs) over their data. Moreover, eIFT enables enhanced visibility to assess data conveyances by third-party services toward avoiding security risks (e.g., data exfiltration). Our implementation and validation of the framework uses both a centralized and dynamic IFT approach to achieve these goals. We measured the balance between dynamism and granularity of the data being tracked versus efficiency. To establish a security and performance baseline for better defense in depth, this work focuses primarily on unique Dynamic IFT tracking capabilities using e.g., Infrastructure as a Service (IaaS). Consumers and service providers can negotiate specific security enforcement standards using our framework. Thus, this study orchestrates and assesses, using a series of real-world experiments, how distinct monitoring capabilities combine to provide a comparatively higher level of security. Input/output performance was evaluated for execution time and resource utilization using several experiments. The results show that the performance is unaffected by the magnitude of the input/output data that is tracked. In other words, as the volume of data increases, we notice that the execution time grows linearly. However, this increase occurs at a rate that is notably slower than what would be anticipated in a strictly proportional relationship. The system achieves an average CPU and memory consumption overhead profile of 8% and 37% while completing less than one second for all of the validation test runs. The results establish a performance efficiency baseline for a better measure and understanding of the cost of preserving confidentiality, integrity, and availability (CIA) for cloud Consumers and Providers (C&P). Consumers can scrutinize the benefits (i.e., security) and tradeoffs (memory usage, bandwidth, CPU usage, and throughput) and the cost of ensuring CIA can be established, monitored, and controlled. This work provides the primary use-cases, formula for enforcing the rules of data isolation, data tracking policy framework, and the basis for managing confidential data flow and data leak prevention using the CloudMonitor framework.
      Citation: Future Internet
      PubDate: 2024-03-25
      DOI: 10.3390/fi16040110
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 111: A Novel Edge Platform Streamlining
           Connectivity between Modern Edge Devices and the Cloud

    • Authors: Anderson Carvalho, Daniel Riordan, Joseph Walsh
      First page: 111
      Abstract: This study presents a newly developed edge computing platform designed to enhance connectivity between edge devices and the cloud in the agricultural sector. Addressing the challenge of synchronizing a central database across 850 remote farm locations in various countries, the focus lies on maintaining data integrity and consistency for effective farm management. The incorporation of a new edge device into existing setups has significantly improved computational capabilities for tasks like data synchronization and machine learning. This research highlights the critical role of cloud computing in managing large data volumes, with Amazon Web Services hosting the databases. This paper showcases an integrated architecture combining edge devices, networks, and cloud computing, forming a seamless continuum of services from cloud to edge. This approach proves effective in managing the significant data volumes generated in remote agricultural areas. This paper also introduces the PAIR Mechanism, which is a solution developed in response to the unique challenges of agricultural data management, emphasizing resilience and simplicity in data synchronization between cloud and edge databases. The PAIR Mechanism’s potential for robust data management in IoT and cloud environments is explored, offering a novel perspective on synchronization challenges in edge computing.
      Citation: Future Internet
      PubDate: 2024-03-25
      DOI: 10.3390/fi16040111
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 112: Exploring Universal Filtered Multi
           Carrier Waveform for Last Meter Connectivity in 6G: A
           Street-Lighting-Driven Approach with Enhanced Simulator for IoT
           Application Dimensioning

    • Authors: Véronique Georlette, Anne-Carole Honfoga, Michel Dossou, Véronique Moeyaert
      First page: 112
      Abstract: In the dynamic landscape of 6G and smart cities, visible light communication (VLC) assumes critical significance for Internet of Things (IoT) applications spanning diverse sectors. The escalating demand for bandwidth and data underscores the need for innovative solutions, positioning VLC as a complementary technology within the electromagnetic spectrum. This paper focuses on the relevance of VLC in the 6G paradigm, shedding light on its applicability across smart cities and industries. The paper highlights the growing efficiency of lighting LEDs in infrastructure, facilitating the seamless integration of VLC. The study then emphasizes VLC’s robustness in outdoor settings, demonstrating effective communication up to 10 m. This resilience positions VLC as a key player in addressing the very last meter of wireless communication, offering a seamless solution for IoT connectivity. By introducing a freely available open-source simulator combined with an alternative waveform, UFMC, the study empowers researchers to dimension applications effectively, showcasing VLC’s potential to improve wireless communication in the evolving landscape of 6G and smart cities.
      Citation: Future Internet
      PubDate: 2024-03-26
      DOI: 10.3390/fi16040112
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 113: Research on Blockchain Transaction
           Privacy Protection Methods Based on Deep Learning

    • Authors: Jun Li, Chenyang Zhang, Jianyi Zhang, Yanhua Shao
      First page: 113
      Abstract: To address the challenge of balancing privacy protection with regulatory oversight in blockchain transactions, we propose a regulatable privacy protection scheme for blockchain transactions. Our scheme utilizes probabilistic public-key encryption to obscure the true identities of blockchain transaction participants. By integrating commitment schemes and zero-knowledge proof techniques with deep learning graph neural network technology, it provides privacy protection and regulatory analysis of blockchain transaction data. This approach not only prevents the leakage of sensitive transaction information, but also achieves regulatory capabilities at both macro and micro levels, ensuring the verification of the legality of transactions. By adopting an identity-based encryption system, regulatory bodies can conduct personalized supervision of blockchain transactions without storing users’ actual identities and key data, significantly reducing storage computation and key management burdens. Our scheme is independent of any particular consensus mechanism and can be applied to current blockchain technologies. Simulation experiments and complexity analysis demonstrate the practicality of the scheme.
      Citation: Future Internet
      PubDate: 2024-03-28
      DOI: 10.3390/fi16040113
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 114: NeXtFusion: Attention-Based
           Camera–Radar Fusion Network for Improved Three-Dimensional Object
           Detection and Tracking

    • Authors: Kalgaonkar, El-Sharkawy
      First page: 114
      Abstract: Accurate perception is crucial for autonomous vehicles (AVs) to navigate safely, especially in adverse weather and lighting conditions where single-sensor networks (e.g., cameras or radar) struggle with reduced maneuverability and unrecognizable targets. Deep camera–radar fusion neural networks offer a promising solution for reliable AV perception under any weather and lighting conditions. Cameras provide rich semantic information, while radars act like an X-ray vision, piercing through fog and darkness. This work proposes a novel, efficient camera–radar fusion network called NeXtFusion for robust AV perception with an improvement in object detection accuracy and tracking. Our proposed approach of utilizing an attention module enhances crucial feature representation for object detection while minimizing information loss from multi-modal data. Extensive experiments on the challenging nuScenes dataset demonstrate NeXtFusion’s superior performance in detecting small and distant objects compared to other methods. Notably, NeXtFusion achieves the highest mAP score (0.473) on the nuScenes validation set, outperforming competitors like OFT (35.1% improvement) and MonoDIS (9.5% improvement). Additionally, NeXtFusion demonstrates strong performance in other metrics like mATE (0.449) and mAOE (0.534), highlighting its overall effectiveness in 3D object detection. Furthermore, visualizations of nuScenes data processed by NeXtFusion further demonstrate its capability to handle diverse real-world scenarios. These results suggest that NeXtFusion is a promising deep fusion network for improving AV perception and safety for autonomous driving.
      Citation: Future Internet
      PubDate: 2024-03-28
      DOI: 10.3390/fi16040114
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 115: Implementing Federated Governance in
           Data Mesh Architecture

    • Authors: Anton Dolhopolov, Arnaud Castelltort, Anne Laurent
      First page: 115
      Abstract: Analytical data platforms have been used for decades to improve organizational performance. Starting from the data warehouses used primarily for structured data processing, through the data lakes oriented for raw data storage and post-hoc data analyses, to the data lakehouses—a combination of raw storage and business intelligence pre-processing for improving the platform’s efficacy. But in recent years, a new architecture called Data Mesh has emerged. The main promise of this architecture is to remove the barriers between operational and analytical teams in order to boost the overall value extraction from the big data. A number of attempts have been made to formalize and implement it in existing projects. Although being defined as a socio-technical paradigm, data mesh still lacks the technology support to enable its widespread adoption. To overcome this limitation, we propose a new view of the platform requirements alongside the formal governance definition that we believe can help in the successful adoption of the data mesh. It is based on fundamental aspects such as decentralized data domains and federated computational governance. In addition, we also present a blockchain-based implementation of a mesh platform as a practical validation of our theoretical proposal. Overall, this article demonstrates a novel research direction for information system decentralization technologies.
      Citation: Future Internet
      PubDate: 2024-03-29
      DOI: 10.3390/fi16040115
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 116: Performance Evaluation of Graph
           Neural Network-Based RouteNet Model with Attention Mechanism

    • Authors: Binita Kusum Dhamala, Babu R. Dawadi, Pietro Manzoni, Baikuntha Kumar Acharya
      First page: 116
      Abstract: Graph representation is recognized as an efficient method for modeling networks, precisely illustrating intricate, dynamic interactions within various entities of networks by representing entities as nodes and their relationships as edges. Leveraging the advantage of the network graph data along with deep learning technologies specialized for analyzing graph data, Graph Neural Networks (GNNs) have revolutionized the field of computer networking by effectively handling structured graph data and enabling precise predictions for various use cases such as performance modeling, routing optimization, and resource allocation. The RouteNet model, utilizing a GNN, has been effectively applied in determining Quality of Service (QoS) parameters for each source-to-destination pair in computer networks. However, a prevalent issue in current GNN models is their struggle with generalization and capturing the complex relationships and patterns within network data. This research aims to enhance the predictive power of GNN-based models by extending the original RouteNet model with an attention layer in its architecture. A comparative analysis is conducted to evaluate the performance of the Modified RouteNet model against the Original RouteNet model. The effectiveness of the added attention layer has been examined to determine its impact on the overall model performance. The outcomes of this research contribute to advancing GNN-based network performance prediction, addressing the limitations of existing models, and providing reliable frameworks for predicting network delay.
      Citation: Future Internet
      PubDate: 2024-03-29
      DOI: 10.3390/fi16040116
      Issue No: Vol. 16, No. 4 (2024)
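
      A minimal sketch of what an added attention layer can do during message aggregation is given below: link hidden states are combined into a single path message with softmax attention weights. The scoring function here is a plain scaled dot product; the modified RouteNet model would learn its projections during training.

        import numpy as np

        def attention_aggregate(link_states, path_state):
            """Attention-weighted aggregation of link states into one path message.

            link_states: (num_links, hidden) array; path_state: (hidden,) query vector.
            """
            scores = link_states @ path_state / np.sqrt(path_state.size)
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()                 # softmax over the links of the path
            return weights @ link_states             # (hidden,) aggregated message

        links = np.random.randn(4, 8)                # four links on a path, hidden size 8
        path = np.random.randn(8)
        print(attention_aggregate(links, path).shape)   # (8,)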
       
  • Future Internet, Vol. 16, Pages 117: Continuous Space Wireless
           Communication Tower Placement by Hybrid Simulated Annealing

    • Authors: Maolin Tang, Wei Li
      First page: 117
      Abstract: Wireless communication tower placement arises in many real-world applications. This paper investigates a new emerging wireless communication tower placement problem, namely, continuous space wireless communication tower placement. Unlike existing wireless communication tower placement problems, which are discrete computational problems, this new wireless communication tower placement problem is a continuous space computational problem. In this paper, we formulate the new wireless communication tower placement problem and propose a hybrid simulated annealing algorithm that can take advantage of the powerful exploration capacity of simulated annealing and the strong exploitation capacity of a local optimization procedure. We also demonstrate through experiments the effectiveness of this hybridization technique and the good performance and scalability of the proposed hybrid simulated annealing algorithm.
      Citation: Future Internet
      PubDate: 2024-03-29
      DOI: 10.3390/fi16040117
      Issue No: Vol. 16, No. 4 (2024)
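
      The sketch below illustrates the hybridization idea described above under toy assumptions: simulated annealing explores continuous tower coordinates, while a simple local step (moving each tower toward the centroid of nearby demand points) provides exploitation. The coverage radius, objective, and constants are invented for illustration.

        import math, random

        random.seed(1)
        demand = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(60)]
        RADIUS, N_TOWERS = 2.0, 3

        def uncovered(towers):
            return sum(1 for p in demand if all(math.dist(p, t) > RADIUS for t in towers))

        def local_refine(towers):
            # Exploitation: pull each tower toward the centroid of demand points near it.
            refined = []
            for t in towers:
                near = [p for p in demand if math.dist(p, t) <= 1.5 * RADIUS]
                if near:
                    refined.append((sum(p[0] for p in near) / len(near),
                                    sum(p[1] for p in near) / len(near)))
                else:
                    refined.append(t)
            return refined

        towers = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N_TOWERS)]
        best, temp = towers, 5.0
        for _ in range(500):
            cand = local_refine([(x + random.gauss(0, 0.5), y + random.gauss(0, 0.5))
                                 for x, y in towers])
            delta = uncovered(cand) - uncovered(towers)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                towers = cand
            if uncovered(towers) < uncovered(best):
                best = towers
            temp *= 0.99

        print("uncovered demand points:", uncovered(best))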
       
  • Future Internet, Vol. 16, Pages 118: Data Structure and Management
           Protocol to Enhance Name Resolving in Named Data Networking

    • Authors: Manar Aldaoud, Dawood Al-Abri, Medhat Awadalla, Firdous Kausar
      First page: 118
      Abstract: Named Data Networking (NDN) is a future Internet architecture that requires an Inter-Domain Routing (IDR) to route its traffic globally. Address resolution is a vital component of any IDR system that relies on a Domain Name System (DNS) resolver to translate domain names into their IP addresses in TCP/IP networks. This paper presents a novel two-element solution to enhance name-to-delivery location resolution in NDN networks, consisting of (1) a mapping table data structure and a searching mechanism and (2) a management protocol to automatically populate and modify the mapping table. The proposed solution is implemented and tested on the Peer Name Provider Server (PNPS) mapping table, and its performance is compared with two other algorithms: component and character tries. The findings show a notable enhancement in the operational speed of the mapping table when utilizing the proposed data structure. For instance, the insertion process is 37 times faster compared to previous algorithms.
      Citation: Future Internet
      PubDate: 2024-03-30
      DOI: 10.3390/fi16040118
      Issue No: Vol. 16, No. 4 (2024)
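
      Since the abstract compares the proposed structure against component and character tries, the sketch below shows a plain component trie for name-to-location mapping with longest-prefix lookup; the names and the gateway label are hypothetical, and the paper's own PNPS mapping-table layout is not reproduced here.

        class ComponentTrie:
            """Name-to-delivery-location mapping keyed by NDN name components."""

            def __init__(self):
                self.children, self.location = {}, None

            def insert(self, name, location):
                node = self
                for comp in name.strip("/").split("/"):
                    node = node.children.setdefault(comp, ComponentTrie())
                node.location = location

            def longest_prefix_lookup(self, name):
                node, best = self, None
                for comp in name.strip("/").split("/"):
                    if comp not in node.children:
                        break
                    node = node.children[comp]
                    if node.location is not None:
                        best = node.location
                return best

        table = ComponentTrie()
        table.insert("/edu/uni", "gateway-7")
        print(table.longest_prefix_lookup("/edu/uni/www/index"))   # -> gateway-7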
       
  • Future Internet, Vol. 16, Pages 119: Multi-Agent Deep Reinforcement
           Learning-Based Fine-Grained Traffic Scheduling in Data Center Networks

    • Authors: Huiting Wang, Yazhi Liu, Wei Li, Zhigang Yang
      First page: 119
      Abstract: In data center networks, when facing challenges such as traffic volatility, low resource utilization, and the difficulty of a single traffic scheduling strategy to meet demands, it is necessary to introduce intelligent traffic scheduling mechanisms to improve network resource utilization, optimize network performance, and adapt to the traffic scheduling requirements in a dynamic environment. This paper proposes a fine-grained traffic scheduling scheme based on multi-agent deep reinforcement learning (MAFS). This approach utilizes In-Band Network Telemetry to collect real-time network states on the programmable data plane, establishes the mapping relationship between real-time network state information and the forwarding efficiency on the control plane, and designs a multi-agent deep reinforcement learning algorithm to calculate the optimal routing strategy under the current network state. The experimental results demonstrate that compared to other traffic scheduling methods, MAFS can effectively enhance network throughput. It achieves a 1.2× better average throughput and achieves a 1.4–1.7× lower packet loss rate.
      Citation: Future Internet
      PubDate: 2024-03-31
      DOI: 10.3390/fi16040119
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 120: A Microservices-Based Control Plane
           for Time-Sensitive Networking

    • Authors: Anna Agustí-Torra, Marc Ferré-Mancebo, Gabriel David Orozco-Urrutia, David Rincón-Rivera, David Remondo
      First page: 120
      Abstract: Time-Sensitive Networking (TSN) aims to provide deterministic communications over Ethernet. The main characteristics of TSN are bounded latency and very high reliability, thus complying with the strict requirements of industrial communications or automotive applications, to name a couple of examples. In order to achieve this goal, TSN defines several scheduling algorithms, among them the Time-Aware Shaper (TAS), which is based on time slots and Gate Control Lists (GCLs). The configuration of network elements to allocate time slots, paths, and GCLs is laborious, and has to be updated promptly and in a dynamic way, as new data flows arrive or disappear. The IEEE 802.1Qcc standard provides the basis to design a TSN control plane to face these challenges, following the Software-Defined Networking (SDN) paradigm. However, most of the current SDN/TSN control plane solutions are monolithic applications designed to run on dedicated servers, and do not provide the required flexibility to escalate when facing increasing service requests. This work presents μTSN-CP, an SDN/TSN microservices-based control plane, based on the 802.1Qcc standard. Our architecture leverages the advantages of microservices, enabling the control plane to scale up or down in response to varying workloads dynamically. We achieve enhanced flexibility and resilience by breaking down the control plane into smaller, independent microservices. The performance of μTSN-CP is evaluated in a real environment with TSN switches, and various integer linear problem solvers, running over different computing platforms.
      Citation: Future Internet
      PubDate: 2024-04-01
      DOI: 10.3390/fi16040120
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 121: Research on Secure Community
           Opportunity Network Based on Trust Model

    • Authors: Bing Su, Jiwu Liang
      First page: 121
      Abstract: With the innovation of wireless communication technology and the surge of data in mobile networks, traditional routing strategies need to be improved. Given the shortcomings of existing opportunistic routing strategies in transmission performance and security, this paper proposes a community opportunistic routing decision-making method based on the trust model. The algorithm calculates a node’s trust value from its historical forwarding behavior, and dynamic security communities are then divided based on the trust model, trust thresholds, and trust attenuation. For message forwarding, nodes in the security community are prioritized as next-hop relay nodes, thus ensuring that message delivery is always in a safe and reliable environment. On this basis, better relay nodes are further selected for message forwarding based on the node centrality, remaining cache space, and remaining energy, effectively improving the message forwarding efficiency. Through node trust value and community cooperation, safe and efficient data transmission is achieved, thereby improving the transmission performance and security of the network. Simulation comparisons with existing opportunistic network routing algorithms show that, relative to traditional methods, this strategy achieves the highest transmission success rate of 81% and the lowest average transmission delay, with only slightly increased routing overhead.
      Citation: Future Internet
      PubDate: 2024-04-01
      DOI: 10.3390/fi16040121
      Issue No: Vol. 16, No. 4 (2024)
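
      A toy version of the trust bookkeeping described above is sketched here: historical trust is attenuated, blended with the recent forwarding ratio, and a threshold then decides membership in the security community. The constants and the blending rule are illustrative assumptions, not the paper's actual trust model.

        def update_trust(prev_trust, forwarded, total, decay=0.9, alpha=0.5):
            """Blend attenuated historical trust with the node's recent forwarding ratio."""
            recent = forwarded / total if total else 0.0
            return alpha * decay * prev_trust + (1 - alpha) * recent

        def security_community(trust_values, threshold=0.6):
            # Nodes above the trust threshold form the dynamic security community.
            return {node for node, t in trust_values.items() if t >= threshold}

        trust = {"n1": update_trust(0.8, 9, 10), "n2": update_trust(0.4, 2, 10)}
        print(security_community(trust))   # -> {'n1'}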
       
  • Future Internet, Vol. 16, Pages 122: SDN-Based Secure Common Emergency
           Service for Railway and Road Co-Existence Scenarios

    • Authors: Radheshyam Singh, Leo Mendiboure, José Soler, Michael Stübert Berger, Tidiane Sylla, Marion Berbineau, Lars Dittmann
      First page: 122
      Abstract: In the near future, there will be a greater emphasis on sharing network resources between roads and railways to improve transportation efficiency and reduce infrastructure costs. This could enable the development of global Cooperative Intelligent Transport Systems (C-ITSs). In this paper, a software-defined networking (SDN)-based common emergency service is developed and validated for a railway and road telecommunication shared infrastructure. Along with this, the developed application is capable of reducing the chances of distributed denial-of-service (DDoS) situations. A level-crossing scenario is considered to demonstrate the developed solution where railway tracks are perpendicular to the roads. Two cases are considered to validate and analyze the developed SDN application for common emergency scenarios. In case 1, no cross-communication is available between the road and railway domains. In this case, emergency message distribution is carried out by the assigned emergency servers with the help of the SDN controller. In case 2, nodes (cars and trains) are defined with two wireless interfaces, and one interface is reserved for emergency data communication. To add the DDoS resiliency to the developed system the messaging behavior of each node is observed and if an abnormality is detected, packets are dropped to avoid malicious activity.
      Citation: Future Internet
      PubDate: 2024-04-02
      DOI: 10.3390/fi16040122
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 123: Minimum-Cost-Based Neighbour Node
           Discovery Scheme for Fault Tolerance under IoT-Fog Networks

    • Authors: Premalatha Baskar, Prakasam Periasamy
      First page: 123
      Abstract: The exponential growth in data traffic in the real world has drawn attention to the emerging computing technique called Fog Computing (FC) for offloading tasks in fault-free environments. This is a promising computing standard that offers higher computing benefits with a reduced cost, higher flexibility, and increased availability. With the increased number of tasks, the occurrence of faults increases and affects the offloading of tasks. A suitable mechanism is essential to rectify the faults that occur in the Fog network. In this research, the fault-tolerance (FT) mechanism is proposed based on cost optimization and fault minimization. Initially, the faulty nodes are identified based on the remaining residual energy with the proposed Priority Task-based Fault-Tolerance (PTFT) mechanism. The Minimum-Cost Neighbour Candidate Node Discovery (MCNCND) algorithm is proposed to discover the neighbouring candidate Fog access node that can replace the faulty Fog node. The Replication and Pre-emptive Forwarding (RPF) algorithm is proposed to forward the task information to the new candidate Fog access node for reliable transmission. These proposed mechanisms are simulated, analysed, and compared with existing FT methods. It is observed that the proposed FT mechanism improves the utilization of an active number of Fog access nodes. It also saved a residual energy of 1.55 J without replicas, compared to the 0.85 J of energy that is used without the FT method.
      Citation: Future Internet
      PubDate: 2024-04-03
      DOI: 10.3390/fi16040123
      Issue No: Vol. 16, No. 4 (2024)
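
      The two steps named above (flagging faulty Fog nodes by residual energy, then discovering a minimum-cost neighbouring candidate) can be pictured with the toy sketch below; the energy threshold, link costs, and node names are illustrative assumptions.

        # Hypothetical Fog access nodes: residual energy (J) and per-neighbour link cost.
        nodes = {
            "fog1": {"energy": 0.3, "neighbours": {"fog2": 4.0, "fog3": 2.5}},
            "fog2": {"energy": 1.8, "neighbours": {"fog1": 4.0, "fog3": 1.0}},
            "fog3": {"energy": 1.2, "neighbours": {"fog1": 2.5, "fog2": 1.0}},
        }
        ENERGY_THRESHOLD = 0.5   # illustrative cut-off for declaring a node faulty

        def faulty_nodes():
            return [n for n, info in nodes.items() if info["energy"] < ENERGY_THRESHOLD]

        def min_cost_candidate(faulty):
            # Among healthy neighbours of the faulty node, pick the cheapest link.
            healthy = {n: c for n, c in nodes[faulty]["neighbours"].items()
                       if nodes[n]["energy"] >= ENERGY_THRESHOLD}
            return min(healthy, key=healthy.get) if healthy else None

        for f in faulty_nodes():
            print(f, "->", min_cost_candidate(f))   # fog1 -> fog3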
       
  • Future Internet, Vol. 16, Pages 124: Task Allocation of Heterogeneous
           Multi-Unmanned Systems Based on Improved Sheep Flock Optimization
           Algorithm

    • Authors: Haibo Liu, Yang Liao, Changting Shi, Jing Shen
      First page: 124
      Abstract: The objective of task allocation in unmanned systems is to complete tasks at minimal costs. However, the current algorithms employed for coordinating multiple unmanned systems in task allocation tasks frequently converge to local optima, thus impeding the identification of the best solutions. To address these challenges, this study builds upon the sheep flock optimization algorithm (SFOA) by preserving individuals eliminated during the iterative process within a prior knowledge set, which is continuously updated. During the reproduction phase of the algorithm, this prior knowledge is utilized to guide the generation of new individuals, preventing their rapid reconvergence to local optima. This approach aids in reducing the frequency at which the algorithm converges to local optima, continually steering the algorithm towards the global optimum and thereby enhancing the efficiency of task allocation. Finally, various task scenarios are presented to evaluate the performances of various algorithms. The results show that the algorithm proposed in this paper is more likely than other algorithms to escape from local optima and find the global optimum.
      Citation: Future Internet
      PubDate: 2024-04-07
      DOI: 10.3390/fi16040124
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 125: Perspectives of Young Digital Natives
           on Digital Marketing: Exploring Annoyance and Effectiveness with
           Eye-Tracking Analysis

    • Authors: Stefanos Balaskas, Georgia Kotsari, Maria Rigou
      First page: 125
      Abstract: Currently, there are a wide range of approaches to deploying digital ads, with advanced technologies now being harnessed to craft advertising that is engaging and even tailored to personal interests and preferences, yet potentially distracting and irritating. This research seeks to evaluate contemporary digital advertising methods by assessing how annoying they are to users, particularly when they distract users from intended tasks or cause delays in regular online activities. To pursue this, an eye-tracking study was conducted, with 51 participants navigating a specially designed website featuring seven distinct types of advertisements without a specific content to avoid the effect of ad content on the collected data. Participants were asked to execute specific information-seeking tasks during the experiment and afterwards to report if they recalled seeing each ad and the degree of annoyance by each ad type. Ad effectiveness is assessed by eye-tracking metrics (time to first fixation, average fixation duration, dwell time, fixation count, and revisit count) depicting how appealing an ad is as a marketing stimulus. Findings indicated that pop-ups, ads with content reorganization, and non-skippable videos ranked as the most annoying forms of advertising. Conversely, in-content ads without content reorganization, banners, and right rail ads were indicated as less intrusive options, seeming to strike a balance between effectiveness and user acceptance.
      Citation: Future Internet
      PubDate: 2024-04-08
      DOI: 10.3390/fi16040125
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 126: Metaverse Meets Smart
           Cities—Applications, Benefits, and Challenges

    • Authors: Florian Maier, Markus Weinberger
      First page: 126
      Abstract: The metaverse aims to merge the virtual and real worlds. The target is to generate a virtual community where social components play a crucial role and combine different areas such as entertainment, work, shopping, and services. This idea is explicitly appealing in the context of smart cities. The metaverse offers digitalization approaches and can strengthen citizens’ social community. While the existing literature covers the exemplary potential of smart city metaverse applications, this study aims to provide a comprehensive overview of the potential and already implemented metaverse applications in the context of cities and municipalities. In addition, challenges related to these applications are identified. The study combines literature reviews and expert interviews to ensure a broad overview. Forty-eight smart city metaverse applications from eleven areas were identified, and actual projects from eleven cities demonstrate the current state of development. Still, further research should evaluate the benefits of the various applications and find strategies to overcome the identified challenges.
      Citation: Future Internet
      PubDate: 2024-04-08
      DOI: 10.3390/fi16040126
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 127: Multi-WiIR: Multi-User Identity
           Legitimacy Authentication Based on WiFi Device

    • Authors: Zhongcheng Wei, Yanhu Dong
      First page: 127
      Abstract: With the proliferation of WiFi devices, WiFi-based identification technology has garnered attention in the security domain and has demonstrated initial success. Nonetheless, when untrained illegitimate users appear, the classifier tends to categorize them as if they were trained users. In response to this issue, researchers have proposed identity legitimacy authentication systems to identify illicit users, albeit only applicable to individual users. In this article, we propose a multi-user legitimacy authentication system based on WiFi, termed Multi-WiIR. Leveraging WiFi signals, the system captures users’ walking patterns to ascertain their legitimacy. The core concept entails training a multi-branch deep neural network, designated WiIR-Net, for feature extraction of individual users. Binary classifiers are then applied to each user, and legitimacy is established by comparing the model’s output to predefined thresholds, thus facilitating multi-user legitimacy authentication. Moreover, the study experimentally investigated the impact of the number of legitimate individuals on accuracy rates. The results demonstrated that the Multi-WiIR system showed commendable performance with low latency, being capable of conducting legitimacy recognition in scenarios involving up to four users, with an accuracy rate reaching 85.11%.
      Citation: Future Internet
      PubDate: 2024-04-08
      DOI: 10.3390/fi16040127
      Issue No: Vol. 16, No. 4 (2024)
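
      The per-user thresholding step described in the abstract can be pictured with the short sketch below; the confidence scores stand in for the outputs of the per-user binary classifiers, and the threshold value is an illustrative assumption.

        # Hypothetical per-user classifier outputs for one observed walking sample.
        scores = {"alice": 0.91, "bob": 0.23, "carol": 0.08, "dave": 0.12}
        THRESHOLD = 0.7

        def authenticate(scores, threshold=THRESHOLD):
            # Legitimate if at least one registered user's classifier clears its threshold;
            # otherwise the walker is treated as an untrained, illegitimate user.
            matches = [user for user, s in scores.items() if s >= threshold]
            return ("legitimate", matches) if matches else ("illegitimate", [])

        print(authenticate(scores))   # ('legitimate', ['alice'])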
       
  • Future Internet, Vol. 16, Pages 128: A Survey on Energy-Aware Security
           Mechanisms for the Internet of Things

    • Authors: Peixiong He, Yi Zhou, Xiao Qin
      First page: 128
      Abstract: The Internet of Things (IoT) employs sensors and the Internet for information exchange, enabling intelligent identification, monitoring, and management, which has deeply impacted various sectors such as power, medical care, and security, transforming social activities and lifestyles. Regrettably, IoT systems suffer from two main challenges, namely sustainability and security. Hence, pondering how to enhance sustainable and energy-efficient practices for IoT systems to mitigate risks becomes a worthwhile endeavor. To address this issue, we conduct a survey of energy-aware security mechanisms in the Internet of Things. Specifically, we examine the challenges that IoT is facing in terms of energy efficiency and security, and we inspect current energy-saving and privacy-preserving technologies for IoT systems. Moreover, we delineate a vision for the future of IoT, emphasizing energy-aware security mechanisms. Finally, we outline the challenges encountered in achieving energy-aware security mechanisms, as well as the direction of future research. Motivated by this study, we envision advancements in the IoT that not only harness the benefits of science and technology but also enhance the security and safety of our data.
      Citation: Future Internet
      PubDate: 2024-04-08
      DOI: 10.3390/fi16040128
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 129: All about Delay-Tolerant Networking
           (DTN) Contributions to Future Internet

    • Authors: Georgios Koukis, Konstantina Safouri, Vassilis Tsaoussidis
      First page: 129
      Abstract: Although several years have passed since its first introduction, the significance of Delay-Tolerant Networking (DTN) remains evident, particularly in challenging environments where traditional networks face operational limitations such as disrupted communication or high latency. This survey paper aims to explore the diverse array of applications where DTN technologies have proven successful, with a focus on emerging and novel application paradigms. In particular, we focus on the contributions of DTN in the Future Internet, including its contribution to space applications, smart cities and the Internet of Things, but also to underwater communications. We also discuss its potential to be used jointly with information-centric networks to change the internet communication paradigm in the future.
      Citation: Future Internet
      PubDate: 2024-04-09
      DOI: 10.3390/fi16040129
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 130: Polling Mechanisms for Industrial IoT
           Applications in Long-Range Wide-Area Networks

    • Authors: David Todoli-Ferrandis, Javier Silvestre-Blanes, Víctor Sempere-Payá, Salvador Santonja-Climent
      First page: 130
      Abstract: LoRaWAN is a low-power wide-area network (LPWAN) technology that is well suited for industrial IoT (IIoT) applications. One of the challenges of using LoRaWAN for IIoT is the need to collect data from a large number of devices. Polling is a common way to collect data from devices, but it can be inefficient for LoRaWANs, which are designed for low data rates and long battery life. LoRaWAN devices operating in Class B or Class C mode can receive messages from a gateway even when they are not sending data themselves. This allows the gateway to send commands to devices at any time, without having to wait for them to check for messages. This paper proposes various polling mechanisms for industrial IoT applications in LoRaWANs and presents specific considerations for designing efficient polling mechanisms in the context of industrial IoT applications leveraging LoRaWAN technology.
      Citation: Future Internet
      PubDate: 2024-04-12
      DOI: 10.3390/fi16040130
      Issue No: Vol. 16, No. 4 (2024)
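
      A minimal application-side polling loop consistent with the idea above might look like the sketch below, assuming Class C devices that can accept a downlink at any time; send_downlink is a placeholder for whatever network-server API queues the frame, and the opcode and timings are invented.

        import time

        devices = ["sensor-01", "sensor-02", "sensor-03"]   # hypothetical Class C end devices

        def send_downlink(dev_eui, payload):
            # Placeholder for the network-server call that queues a downlink frame.
            print(f"queue downlink to {dev_eui}: {payload!r}")

        def poll_round_robin(devices, cycle_s=30.0, rounds=1):
            # Spread the polls over the cycle to smooth gateway duty-cycle usage.
            for _ in range(rounds):
                for dev in devices:
                    send_downlink(dev, b"\x01")          # application-level "report now" opcode
                    time.sleep(cycle_s / len(devices))

        poll_round_robin(devices, cycle_s=3.0)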
       
  • Future Internet, Vol. 16, Pages 131: Congestion Control Mechanism Based on
           Backpressure Feedback in Data Center Networks

    • Authors: Wei Li, Mengzhen Ren, Yazhi Liu, Chenyu Li, Hui Qian, Zhenyou Zhang
      First page: 131
      Abstract: In order to solve the congestion problem caused by the dramatic growth of traffic in data centers, many end-to-end congestion controls have been proposed to respond to congestion in one round-trip time (RTT). In this paper, we propose a new congestion control mechanism based on backpressure feedback (BFCC), which is designed with the primary goal of switch-to-switch congestion control to resolve congestion in a one-hop RTT. This approach utilizes a programmable data plane to continuously monitor network congestion in real time and identify real-congested flows. In addition, it employs targeted flow control through backpressure feedback. We validate the feasibility of this mechanism on BMV2, a programmable virtual switch based on programming protocol-independent packet processors (P4). Simulation results demonstrate that BFCC greatly enhances flow completion times (FCTs) compared to other end-to-end congestion control mechanisms. It achieves 1.2–2× faster average completion times than other mechanisms.
      Citation: Future Internet
      PubDate: 2024-04-15
      DOI: 10.3390/fi16040131
      Issue No: Vol. 16, No. 4 (2024)
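
      The hop-by-hop feedback idea can be sketched as follows: the egress queue of a switch is inspected, the flows contributing most to the backlog are flagged as real-congested, and a pause message for only those flows is sent one hop upstream. The threshold, message format, and detection rule are illustrative; the actual mechanism runs in the P4 data plane.

        from collections import Counter

        QUEUE_THRESHOLD = 80   # illustrative queue-depth trigger, in packets

        def detect_congested_flows(queue_packets, top_k=1):
            """Flag the flows contributing most to a congested egress queue."""
            if len(queue_packets) <= QUEUE_THRESHOLD:
                return []
            counts = Counter(pkt["flow"] for pkt in queue_packets)
            return [flow for flow, _ in counts.most_common(top_k)]

        def backpressure_feedback(congested_flows, upstream_mailbox):
            # One-hop feedback: ask the previous switch to throttle only these flows.
            for flow in congested_flows:
                upstream_mailbox.append({"pause_flow": flow})

        queue = [{"flow": "f1"}] * 70 + [{"flow": "f2"}] * 30
        mailbox = []
        backpressure_feedback(detect_congested_flows(queue), mailbox)
        print(mailbox)   # [{'pause_flow': 'f1'}]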
       
  • Future Internet, Vol. 16, Pages 132: SeedChain: A Secure and Transparent
           Blockchain-Driven Framework to Revolutionize the Seed Supply Chain

    • Authors: Rohit Ahuja, Sahil Chugh, Raman Singh
      First page: 132
      Abstract: Farming is a major sector required for any nation to become self-sustainable. Quality seeds heavily influence the effectiveness of farming. Seeds cultivated by breeders pass through several entities in order to reach farmers. The existing seed supply chain is opaque and intractable, which not only hinders the growth of crops but also makes the life of a farmer miserable. Blockchain has been widely employed to enable fair and secure transactions between farmers and buyers, but concerns related to transparency and traceability in the seed supply chain, counterfeit seeds, middlemen involvement, and inefficient processes in the agricultural ecosystem have not received enough attention. To address these concerns, a blockchain-based solution is proposed that brings breeders, farmers, warehouse owners, transporters, and food corporations to a single platform to enhance transparency, traceability, and trust among trust-less parties. A smart contract updates the status of seeds from a breeder from submitted to approved. Then, a non-fungible token (NFT) corresponding to approved seeds is minted for the breeder, which records the date of cultivation and its owner (breeder). The NFT enables farmers to keep track of seeds right from the date of their cultivation and their owner, which helps them to make better decisions about picking seeds from the correct owner. Farmers directly interact with warehouses to purchase seeds, which removes the need for middlemen and improves the trust among trust-less entities. Furthermore, a tender for the transportation of seeds is auctioned on the basis of the priority location locp, Score, and bid_amount of every transporter, which provides a fair chance to every transporter to restrict the monopoly of a single transporter. The proposed system achieves immutability, decentralization, and efficiency inherently from the blockchain. We implemented the proposed scheme and deployed it on the Ethereum network. Smart contracts deployed over the Ethereum network interact with React-based web pages. The analysis and results of the proposed model indicate that it is viable and secure, as well as superior to the current seed supply chain system.
      Citation: Future Internet
      PubDate: 2024-04-15
      DOI: 10.3390/fi16040132
      Issue No: Vol. 16, No. 4 (2024)
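
      The transporter auction sketched in the abstract (ranking bids by priority location, Score, and bid_amount) could be scored roughly as below; the weights, scales, and bid values are illustrative assumptions, and the real logic lives in the smart contract.

        # Hypothetical bids: lower location-priority rank, higher reputation score,
        # and lower bid amount are all better.
        bids = [
            {"transporter": "T1", "loc_priority": 1, "score": 4.5, "bid_amount": 900},
            {"transporter": "T2", "loc_priority": 3, "score": 4.9, "bid_amount": 700},
            {"transporter": "T3", "loc_priority": 2, "score": 3.8, "bid_amount": 650},
        ]

        def rank(bid, w_loc=0.4, w_score=0.3, w_bid=0.3, max_bid=1000):
            return (w_loc / bid["loc_priority"]
                    + w_score * bid["score"] / 5.0
                    + w_bid * (1.0 - bid["bid_amount"] / max_bid))

        winner = max(bids, key=rank)
        print(winner["transporter"])   # T1 under these illustrative weights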
       
  • Future Internet, Vol. 16, Pages 133: Secure Data Sharing in Federated
           Learning through Blockchain-Based Aggregation

    • Authors: Bowen Liu, Qiang Tang
      First page: 133
      Abstract: In this paper, we explore the realm of federated learning (FL), a distributed machine learning (ML) paradigm, and propose a novel approach that leverages the robustness of blockchain technology. FL, a concept introduced by Google in 2016, allows multiple entities to collaboratively train an ML model without the need to expose their raw data. However, it faces several challenges, such as privacy concerns and malicious attacks (e.g., data poisoning attacks). Our paper examines the existing EIFFeL framework, a secure aggregation protocol for federated learning, and introduces an enhanced scheme that leverages the trustworthy nature of blockchain technology. Our scheme eliminates the need for a central server and any other third party, such as a public bulletin board, thereby mitigating the risks associated with the compromise of such third parties.
      Citation: Future Internet
      PubDate: 2024-04-15
      DOI: 10.3390/fi16040133
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 134: Leveraging Digital Twin Technology
           for Enhanced Cybersecurity in Cyber–Physical Production Systems

    • Authors: Yuning Jiang, Wei Wang, Jianguo Ding, Xin Lu, Yanguo Jing
      First page: 134
      Abstract: The convergence of cyber and physical systems through cyber–physical systems (CPSs) has been integrated into cyber–physical production systems (CPPSs), leading to a paradigm shift toward intelligent manufacturing. Despite the transformative benefits that CPPS provides, its increased connectivity exposes manufacturers to cyber-attacks through exploitable vulnerabilities. This paper presents a novel approach to CPPS security protection by leveraging digital twin (DT) technology to develop a comprehensive security model. This model enhances asset visibility and supports prioritization in mitigating vulnerable components through DT-based virtual tuning, providing quantitative assessment results for effective mitigation. Our proposed DT security model also serves as an advanced simulation environment, facilitating the evaluation of CPPS vulnerabilities across diverse attack scenarios without disrupting physical operations. The practicality and effectiveness of our approach are illustrated through its application in a human–robot collaborative assembly system, demonstrating the potential of DT technology.
      Citation: Future Internet
      PubDate: 2024-04-17
      DOI: 10.3390/fi16040134
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 135: Blockchain-Enabled Provenance
           Tracking for Sustainable Material Reuse in Construction Supply Chains

    • Authors: Stanly Wilson, Kwabena Adu-Duodu, Yinhao Li, Ringo Sham, Mohammed Almubarak, Yingli Wang, Ellis Solaiman, Charith Perera, Rajiv Ranjan, Omer Rana
      First page: 135
      Abstract: The growing complexity of construction supply chains and the significant impact of the construction industry on the environment demand an understanding of how to reuse and repurpose materials. In response to this critical challenge, research gaps that are significant in promoting material circularity are described. Despite its potential, the use of blockchain technology in construction faces challenges in verifiability, scalability, privacy, and interoperability. We propose a novel multilayer blockchain framework to enhance provenance tracking and data retrieval to enable a reliable audit trail. The framework utilises a privacy-centric solution that combines decentralised and centralised storage, security, and privacy. Furthermore, the framework implements access control to strengthen security and privacy, fostering transparency and information sharing among the stakeholders. These contributions collectively lead to trusted material circularity in a built environment. The implementation framework aims to create a prototype for blockchain applications in construction supply chains.
      Citation: Future Internet
      PubDate: 2024-04-17
      DOI: 10.3390/fi16040135
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 136: Computation Offloading Based on a
           Distributed Overlay Network Cache-Sharing Mechanism in Multi-Access Edge
           Computing

    • Authors: Yazhi Liu, Pengfei Zhong, Zhigang Yang, Wei Li, Siwei Li
      First page: 136
      Abstract: Multi-access edge computing (MEC) enhances service quality for users and reduces computational overhead by migrating workloads and application data to the network edge. However, current solutions for task offloading and cache replacement in edge scenarios are constrained by factors such as communication bandwidth, wireless network coverage, and limited storage capacity of edge devices, making it challenging to achieve high cache reuse and lower system energy consumption. To address these issues, a framework leveraging cooperative edge servers deployed in wireless access networks across different geographical regions is designed. Specifically, we propose the Distributed Edge Service Caching and Offloading (DESCO) network architecture and design a decentralized resource-sharing algorithm based on consistent hashing, named Cache Chord. Subsequently, based on DESCO and aiming to minimize overall user energy consumption while maintaining user latency constraints, we introduce the real-time computation offloading (RCO) problem and transform RCO into a multi-player static game, prove the existence of Nash equilibrium solutions, and solve it using a multi-dimensional particle swarm optimization algorithm. Finally, simulation results demonstrate that the proposed solution reduces the average energy consumption by over 27% in the DESCO network compared to existing algorithms.
      Citation: Future Internet
      PubDate: 2024-04-19
      DOI: 10.3390/fi16040136
      Issue No: Vol. 16, No. 4 (2024)
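
      Since Cache Chord is described as a consistent-hashing scheme, the sketch below shows the basic ring lookup that such a mechanism builds on: edge servers are hashed onto a ring and a cached item is owned by the first server clockwise from the item's hash. Server names and the hash width are illustrative.

        import bisect, hashlib

        def _h(key: str) -> int:
            return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** 32)

        class ConsistentHashRing:
            def __init__(self, servers):
                self._ring = sorted((_h(s), s) for s in servers)
                self._points = [p for p, _ in self._ring]

            def lookup(self, item_key):
                # First server clockwise from the item's position on the ring.
                i = bisect.bisect_right(self._points, _h(item_key)) % len(self._ring)
                return self._ring[i][1]

        ring = ConsistentHashRing(["edge-1", "edge-2", "edge-3"])
        print(ring.lookup("video/segment-42"))   # edge server holding this cache entry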
       
  • Future Internet, Vol. 16, Pages 137: From Seek-and-Destroy to
           Split-and-Destroy: Connection Partitioning as an Effective Tool against
           Low-Rate DoS Attacks

    • Authors: Vyron Kampourakis, Georgios Michail Makrakis, Constantinos Kolias
      First page: 137
      Abstract: Low-rate Denial of Service (LDoS) attacks are today considered one of the biggest threats against modern data centers and industrial infrastructures. Unlike traditional Distributed Denial of Service (DDoS) attacks that are mainly volumetric, LDoS attacks exhibit a very small network footprint, and therefore can easily elude standard detection and defense mechanisms. This work introduces a defense strategy that may prove particularly effective against attacks that are based on long-lived connections, an inherent trait of LDoS attacks. Our approach is based on iteratively partitioning the active connections of a victim server across a number of replica servers, and then re-evaluating the health status of each replica instance. At its core, this approach relies on live migration and containerization technologies. The main advantage of the proposed approach is that it can discover and isolate malicious connections with virtually no information about the type and characteristics of the performed attack. Additionally, while the defense takes place, there is little to no indication of the fact to the attacker. We assess various rudimentary schemes to quantify the scalability of our approach. The results from the simulations indicate that it is possible to save the vast majority of the benign connections (80%) in less than 5 min.
      Citation: Future Internet
      PubDate: 2024-04-19
      DOI: 10.3390/fi16040137
      Issue No: Vol. 16, No. 4 (2024)
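
      The split-and-destroy idea (iteratively partitioning a victim's connections across replicas and re-checking each replica's health) resembles a group-testing search, as the sketch below illustrates. The health check is a stand-in: a real deployment would observe resource exhaustion on the live-migrated replica rather than inspect connection objects.

        import random

        def replica_is_healthy(connections):
            # Placeholder health probe; real systems measure the replica's resource state.
            return not any(c["malicious"] for c in connections)

        def isolate(connections, min_group=2):
            """Recursively split the connection set across replicas; healthy groups are
            kept, suspicious groups are split again until the culprits are cornered."""
            if replica_is_healthy(connections):
                return list(connections), []
            if len(connections) <= min_group:
                return [], list(connections)            # drop the small suspicious group
            mid = len(connections) // 2
            good_a, bad_a = isolate(connections[:mid], min_group)
            good_b, bad_b = isolate(connections[mid:], min_group)
            return good_a + good_b, bad_a + bad_b

        conns = [{"id": i, "malicious": i in (7, 13)} for i in range(20)]
        random.shuffle(conns)
        benign, dropped = isolate(conns)
        print(len(benign), "benign connections kept,", len(dropped), "dropped")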
       
  • Future Internet, Vol. 16, Pages 138: SRv6-Based Edge Service Continuity in
           5G Mobile Networks

    • Authors: Laura Lemmi, Carlo Puliafito, Antonio Virdis, Enzo Mingozzi
      First page: 138
      Abstract: Ensuring compliance with the stringent latency requirements of edge services requires close cooperation between the network and computing components. Within mobile 5G networks, the nomadic behavior of users may impact the performance of edge services, prompting the need for workload migration techniques. These techniques allow services to follow users by moving between edge nodes. This paper introduces an innovative approach for edge service continuity by integrating Segment Routing over IPv6 (SRv6) into the 5G core data plane alongside the ETSI multi-access edge computing (MEC) architecture. Our approach maintains compatibility with non-SRv6 5G network components. We use SRv6 for packet steering and Software-Defined Networking (SDN) for dynamic network configuration. Leveraging the SRv6 Network Programming paradigm, we achieve lossless workload migration by implementing a packet buffer as a virtual network function. Our buffer may be dynamically allocated and configured within the network. We test our proposed solution on a small-scale testbed consisting of an Open Network Operating System (ONOS) SDN controller and a core network made of P4 BMv2 switches, emulated using Mininet. A comparison with a non-SRv6 alternative that uses IPv6 routing shows the higher scalability and flexibility of our approach in terms of the number of rules to be installed and time required for configuration.
      Citation: Future Internet
      PubDate: 2024-04-19
      DOI: 10.3390/fi16040138
      Issue No: Vol. 16, No. 4 (2024)
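      The forwarding behaviour the paper relies on can be illustrated with a toy model of SRv6 segment-list processing, in which the active segment (for instance, a buffer VNF followed by the migrated service) is selected by the Segments Left pointer. This is a didactic Python sketch, not the ONOS/P4 code used in the testbed, and the addresses are placeholders:

        # Toy model of SRv6 steering: the segment list is carried in reverse
        # order and the "End" behaviour decrements Segments Left to select the
        # next destination along the path.
        from dataclasses import dataclass, field

        @dataclass
        class SRv6Packet:
            segments: list          # segment list in reverse order, as on the wire
            segments_left: int = field(init=False)
            dst: str = field(init=False)

            def __post_init__(self):
                self.segments_left = len(self.segments) - 1
                self.dst = self.segments[self.segments_left]

            def end_behavior(self):
                # Decrement Segments Left and copy the next segment into dst.
                if self.segments_left > 0:
                    self.segments_left -= 1
                    self.dst = self.segments[self.segments_left]
                return self.dst

        pkt = SRv6Packet(["fc00::svc", "fc00::buffer-vnf", "fc00::ingress"])
        print(pkt.dst)              # fc00::ingress
        print(pkt.end_behavior())   # fc00::buffer-vnf
        print(pkt.end_behavior())   # fc00::svc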
       
  • Future Internet, Vol. 16, Pages 139: A Comprehensive Review of Machine
           Learning Approaches for Anomaly Detection in Smart Homes: Experimental
           Analysis and Future Directions

    • Authors: Md Motiur Rahman, Deepti Gupta, Smriti Bhatt, Shiva Shokouhmand, Miad Faezipour
      First page: 139
      Abstract: Detecting anomalies in human activities is increasingly crucial today, particularly in nuclear family settings, where there may not be constant monitoring of individuals’ health, especially the elderly, during critical periods. Early anomaly detection can prevent attack scenarios and life-threatening situations. This task becomes notably more complex when multiple ambient sensors are deployed in homes with multiple residents, as opposed to single-resident environments. Additionally, the availability of datasets containing anomalies representing the full spectrum of abnormalities is limited. In our experimental study, we employed eight widely used machine learning and two deep learning classifiers to identify anomalies in human activities. We meticulously generated anomalies, considering all conceivable scenarios. Our findings reveal that the Gated Recurrent Unit (GRU) excels in accurately classifying normal and anomalous activities, while the naïve Bayes classifier demonstrates relatively poor performance among the ten classifiers considered. We conducted various experiments to assess the impact of different training–test splitting ratios, along with a five-fold cross-validation technique, on the performance. Notably, the GRU model consistently outperformed all other classifiers under both conditions. Furthermore, we offer insights into the computational costs associated with these classifiers, encompassing training and prediction phases. Extensive ablation experiments conducted in this study underscore that all these classifiers can effectively be deployed for anomaly detection in two-resident homes.
      Citation: Future Internet
      PubDate: 2024-04-19
      DOI: 10.3390/fi16040139
      Issue No: Vol. 16, No. 4 (2024)
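      As a minimal sketch of the kind of GRU classifier reported above (layer sizes and feature dimensions are arbitrary placeholders, not the authors' configuration), a PyTorch model might look like this:

        # GRU over a window of ambient-sensor readings, followed by a linear
        # head that separates normal from anomalous activity.
        import torch
        import torch.nn as nn

        class GRUAnomalyClassifier(nn.Module):
            def __init__(self, n_features, hidden=64):
                super().__init__()
                self.gru = nn.GRU(n_features, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 2)   # normal vs. anomalous

            def forward(self, x):                  # x: (batch, time_steps, n_features)
                _, h_last = self.gru(x)            # h_last: (1, batch, hidden)
                return self.head(h_last.squeeze(0))

        model = GRUAnomalyClassifier(n_features=12)
        logits = model(torch.randn(8, 50, 12))     # 8 sensor sequences, 50 time steps
        print(logits.shape)                        # torch.Size([8, 2])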
       
  • Future Internet, Vol. 16, Pages 140: SUDC: Synchronous Update with the
           Division and Combination of SRv6 Policy

    • Authors: Yuze Liu, Weihong Wu, Ying Wang, Jiang Liu, Fan Yang
      First page: 140
      Abstract: With the expansion of network scale, new network services are emerging. Segment Routing over IPv6 (SRv6) can meet the diverse needs of more new services due to its excellent scalability and programmability. In intelligent sixth-generation (6G) scenarios, frequent SRv6 Traffic Engineering (TE) policy updates will result in the serious problem of unsynchronized updates across routers. Existing solutions suffer from issues such as long update cycles or large data overhead. To optimize the policy-update process, this paper proposes a scheme called Synchronous Update with the Division and Combination of SRv6 Policy (SUDC). Based on the characteristics of the SRv6 TE policy, SUDC divides the policies and introduces Bit Index Explicit Replication IPv6 Encapsulation (BIERv6) to multicast the policy blocks derived from policy dividing. The contribution of this paper is to propose the policy-dividing and combination mechanism and the policy-dividing algorithm. The simulation results demonstrate that compared with the existing schemes, the update overhead and update cycle of SUDC are reduced by 46.71% and 46.6%, respectively. The problem of unsynchronized updates across routers is thus further mitigated.
      Citation: Future Internet
      PubDate: 2024-04-22
      DOI: 10.3390/fi16040140
      Issue No: Vol. 16, No. 4 (2024)
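      The policy-dividing and combination mechanism can be illustrated by splitting a segment list into numbered blocks and reassembling the original policy at the receiver; the block size and field names below are assumptions for illustration, and the BIERv6 multicast step itself is omitted:

        # Divide a segment list into numbered blocks and recombine them,
        # tolerating out-of-order delivery of the blocks.
        def divide_policy(segment_list, block_size=3):
            return [
                {"seq": i, "segments": segment_list[i:i + block_size]}
                for i in range(0, len(segment_list), block_size)
            ]

        def combine_policy(blocks):
            ordered = sorted(blocks, key=lambda b: b["seq"])
            return [seg for b in ordered for seg in b["segments"]]

        policy = [f"fc00::{i}" for i in range(1, 8)]
        blocks = divide_policy(policy)
        assert combine_policy(blocks) == policy
        print(len(blocks), "blocks")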
       
  • Future Internet, Vol. 16, Pages 141: Correction: Li et al. A Learning
           Game-Based Approach to Task-Dependent Edge Resource Allocation. Future
           Internet 2023, 15, 395

    • Authors: Zuopeng Li, Hengshuai Ju, Zepeng Ren
      First page: 141
      Abstract: In the original publication [...]
      Citation: Future Internet
      PubDate: 2024-04-22
      DOI: 10.3390/fi16040141
      Issue No: Vol. 16, No. 4 (2024)
       
  • Future Internet, Vol. 16, Pages 142: Edge Federated Optimization for
           Heterogeneous Data

    • Authors: Hsin-Tung Lin, Chih-Yu Wen
      First page: 142
      Abstract: This study focuses on optimizing federated learning in heterogeneous data environments. We implement the FedProx and a baseline algorithm (i.e., the FedAvg) with advanced optimization strategies to tackle non-IID data issues in distributed learning. Model freezing and pruning techniques are explored to showcase the effective operations of deep learning models on resource-constrained edge devices. Experimental results show that at a pruning rate of 10%, the FedProx with structured pruning in the MIT-BIH and ST databases achieved the best F1 scores, reaching 96.01% and 77.81%, respectively, which achieves a good balance between system efficiency and model accuracy compared to those of the FedProx with the original configuration, reaching F1 scores of 66.12% and 89.90%, respectively. Similarly, with the layer-freezing technique, the unstructured pruning method, and a pruning rate of 20%, the FedAvg algorithm effectively balances classification performance and degradation of pruned model accuracy, achieving F1 scores of 88.75% and 72.75%, respectively, compared to those of the FedAvg with the original configuration, reaching 56.82% and 85.80%, respectively. By adopting these model optimization strategies, a practical solution is developed for deploying complex models in edge federated learning, which is vital for its efficient implementation.
      Citation: Future Internet
      PubDate: 2024-04-22
      DOI: 10.3390/fi16040142
      Issue No: Vol. 16, No. 4 (2024)
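      The FedProx objective referenced above augments each client's task loss with a proximal term, (mu/2)*||w - w_global||^2, that keeps local updates close to the current global model under non-IID data. A minimal PyTorch sketch follows; mu and the tiny model are illustrative placeholders, not the study's configuration:

        # Local FedProx loss: task loss plus a proximal penalty towards the
        # global model parameters received at the start of the round.
        import torch
        import torch.nn as nn

        def fedprox_loss(model, global_params, inputs, targets, mu=0.01):
            task_loss = nn.functional.cross_entropy(model(inputs), targets)
            prox = sum(((w - w_g) ** 2).sum()
                       for w, w_g in zip(model.parameters(), global_params))
            return task_loss + (mu / 2) * prox

        model = nn.Linear(10, 2)
        global_params = [p.detach().clone() for p in model.parameters()]
        x, y = torch.randn(4, 10), torch.randint(0, 2, (4,))
        loss = fedprox_loss(model, global_params, x, y)
        loss.backward()
        print(float(loss))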
       
  • Future Internet, Vol. 16, Pages 143: Multi-Constraint and Multi-Policy
           Path Hopping Active Defense Method Based on SDN

    • Authors: Bing Zhang, Hui Li, Shuai Zhang, Jing Sun, Ning Wei, Wenhong Xu, Huan Wang
      First page: 143
      Abstract: Path hopping serves as an active defense mechanism in network security, yet it encounters challenges like a restricted path switching space, the recurrent use of similar paths and vital nodes, a singular triggering mechanism for path switching, and fixed hopping intervals. This paper introduces an active defense method employing multiple constraints and strategies for path hopping. A depth-first search (DFS) traversal is utilized to compute all possible paths between nodes, thereby broadening the path switching space while simplifying path generation complexity. Subsequently, constraints are imposed on residual bandwidth, selection periods, path similarity, and critical nodes to reduce the likelihood of reusing similar paths and crucial nodes. Moreover, two path switching strategies are formulated based on the weights of residual bandwidth and critical nodes, along with the calculation of path switching periods. This facilitates adaptive switching of hopping paths and intervals, contingent on the network’s residual bandwidth threshold, in response to diverse attack scenarios. Simulation outcomes illustrate that this method, while maintaining normal communication performance, expands the path switching space effectively, safeguards against eavesdropping and link-flooding attacks, enhances path switching diversity and unpredictability, and fortifies the network’s resilience against malicious attacks.
      Citation: Future Internet
      PubDate: 2024-04-22
      DOI: 10.3390/fi16040143
      Issue No: Vol. 16, No. 4 (2024)
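      The first step described above, enumerating candidate paths with a depth-first search, can be sketched in a few lines of Python; the toy topology is illustrative, and the paper's bandwidth, similarity, and critical-node constraints would be applied as filters over the resulting path set:

        # Enumerate all simple paths (no repeated nodes) between two nodes of a
        # directed graph given as an adjacency dictionary.
        def all_simple_paths(graph, src, dst, path=None):
            path = (path or []) + [src]
            if src == dst:
                return [path]
            paths = []
            for nxt in graph.get(src, []):
                if nxt not in path:
                    paths += all_simple_paths(graph, nxt, dst, path)
            return paths

        topology = {"A": ["B", "C"], "B": ["C", "D"], "C": ["D"], "D": []}
        for p in all_simple_paths(topology, "A", "D"):
            print(" -> ".join(p))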
       
  • Future Internet, Vol. 16, Pages 106: Factors Affecting Trust and
           Acceptance for Blockchain Adoption in Digital Payment Systems: A
           Systematic Review

    • Authors: Tenzin Norbu, Joo Yeon Park, Kok Wai Wong, Hui Cui
      First page: 106
      Abstract: Blockchain technology has become significant for financial sectors, especially digital payment systems, offering enhanced security, transparency, and efficiency. However, there is limited research on the factors influencing user trust in and acceptance of blockchain adoption in digital payment systems. This systematic review provides insight into the key factors impacting consumers’ perceptions and behaviours towards embracing blockchain technology. A total of 1859 studies were collected, with 48 meeting the criteria for comprehensive analysis. The results showed that security, privacy, transparency, and regulation are the most significant factors influencing trust for blockchain adoption. The most influential factors identified in the Unified Theory of Acceptance and Use of Technology (UTAUT) model include performance expectancy, effort expectancy, social influence, and facilitating conditions. Incorporating a trust and acceptance model could be a viable approach to tackling obstacles and ensuring the successful integration of blockchain technology into digital payment systems. Understanding these factors and their intricate connections is crucial for creating a favourable atmosphere for adopting blockchain technology in digital payments. However, research on blockchain adoption in digital payment systems from the user’s perspective remains insufficient and requires further investigation. This review aims to shed light on the factors of trust in and acceptance of blockchain adoption in digital payment systems so that the full potential of blockchain technology can be realised.
      Citation: Future Internet
      PubDate: 2024-03-21
      DOI: 10.3390/fi16030106
      Issue No: Vol. 16, No. 3 (2024)
       
 