DIGITAL AND WIRELESS COMMUNICATION (31 journals)

Ada : A Journal of Gender, New Media, and Technology     Open Access   (Followers: 22)
Advances in Image and Video Processing     Open Access   (Followers: 24)
Communications and Network     Open Access   (Followers: 13)
E-Health Telecommunication Systems and Networks     Open Access   (Followers: 3)
EURASIP Journal on Wireless Communications and Networking     Open Access   (Followers: 14)
Future Internet     Open Access   (Followers: 87)
Granular Computing     Hybrid Journal  
IEEE Transactions on Wireless Communications     Hybrid Journal   (Followers: 26)
IEEE Wireless Communications Letters     Hybrid Journal   (Followers: 42)
IET Wireless Sensor Systems     Open Access   (Followers: 18)
International Journal of Communications, Network and System Sciences     Open Access   (Followers: 9)
International Journal of Digital Earth     Hybrid Journal   (Followers: 15)
International Journal of Embedded and Real-Time Communication Systems     Full-text available via subscription   (Followers: 6)
International Journal of Interactive Communication Systems and Technologies     Full-text available via subscription   (Followers: 2)
International Journal of Machine Intelligence and Sensory Signal Processing     Hybrid Journal   (Followers: 3)
International Journal of Mobile Computing and Multimedia Communications     Full-text available via subscription   (Followers: 2)
International Journal of Satellite Communications and Networking     Hybrid Journal   (Followers: 39)
International Journal of Wireless and Mobile Computing     Hybrid Journal   (Followers: 8)
International Journal of Wireless Networks and Broadband Technologies     Full-text available via subscription   (Followers: 2)
International Journals Digital Communication and Analog Signals     Full-text available via subscription   (Followers: 2)
Journal of Digital Information     Open Access   (Followers: 181)
Journal of Interconnection Networks     Hybrid Journal   (Followers: 1)
Journal of the Southern Association for Information Systems     Open Access   (Followers: 2)
Mobile Media & Communication     Hybrid Journal   (Followers: 10)
Nano Communication Networks     Hybrid Journal   (Followers: 5)
Psychology of Popular Media Culture     Full-text available via subscription   (Followers: 1)
Signal, Image and Video Processing     Hybrid Journal   (Followers: 11)
Ukrainian Information Space     Open Access  
Vehicular Communications     Full-text available via subscription   (Followers: 4)
Vista     Open Access   (Followers: 4)
Wireless Personal Communications     Hybrid Journal   (Followers: 6)
Future Internet
Journal Prestige (SJR): 0.219
Citation Impact (citeScore): 1
Number of Followers: 87  

  This is an Open Access journal
ISSN (Print) 1999-5903
Published by MDPI
  • Future Internet, Vol. 16, Pages 67: A Systematic Survey on 5G and 6G
           Security Considerations, Challenges, Trends, and Research Areas

    • Authors: Paul Scalise, Matthew Boeding, Michael Hempel, Hamid Sharif, Joseph Delloiacovo, John Reed
      First page: 67
      Abstract: With the rapid rollout and growing adoption of 3GPP 5th Generation (5G) cellular services, including in critical infrastructure sectors, it is important to review security mechanisms, risks, and potential vulnerabilities within this vital technology. Numerous security capabilities need to work together to ensure and maintain a sufficiently secure 5G environment that places user privacy and security at the forefront. Confidentiality, integrity, and availability are all pillars of a privacy and security framework that define major aspects of 5G operations. They are incorporated and considered in the design of the 5G standard by the 3rd Generation Partnership Project (3GPP) with the goal of providing a highly reliable network operation for all. Through a comprehensive review, we aim to analyze the ever-evolving landscape of 5G, including any potential attack vectors and proposed measures to mitigate or prevent these threats. This paper presents a comprehensive survey of the state-of-the-art research that has been conducted in recent years regarding 5G systems, focusing on the main components in a systematic approach: the Core Network (CN), Radio Access Network (RAN), and User Equipment (UE). Additionally, we investigate the utilization of 5G in time-dependent, ultra-confidential, and private communications built around a Zero Trust approach. In today’s world, where everything is more connected than ever, Zero Trust policies and architectures can be highly valuable in operations containing sensitive data. Realizing a Zero Trust Architecture entails continuous verification of all devices, users, and requests, regardless of their location within the network, and grants permission only to authorized entities.
Finally, developments and proposed methods of new 5G and future 6G security approaches, such as Blockchain technology, post-quantum cryptography (PQC), and Artificial Intelligence (AI) schemes, are also discussed to understand better the full landscape of current and future research within this telecommunications domain.
      Citation: Future Internet
      PubDate: 2024-02-20
      DOI: 10.3390/fi16030067
      Issue No: Vol. 16, No. 3 (2024)
       
  • Future Internet, Vol. 16, Pages 68: The Future of Healthcare with Industry
           5.0: Preliminary Interview-Based Qualitative Analysis

    • Authors: Juliana Basulo-Ribeiro, Leonor Teixeira
      First page: 68
      Abstract: With the advent of Industry 5.0 (I5.0), healthcare is undergoing a profound transformation, integrating human capabilities with advanced technologies to promote a patient-centered, efficient, and empathetic healthcare ecosystem. This study aims to examine the effects of Industry 5.0 on healthcare, emphasizing the synergy between human experience and technology. To this end, six specific objectives were defined and addressed in the results through an empirical study based on interviews with 11 healthcare professionals. This article thus outlines strategic and policy guidelines for the integration of I5.0 in healthcare, advocating policy-driven change, and contributes to the literature by offering a solid theoretical basis on I5.0 and its impact on the healthcare sector.
      Citation: Future Internet
      PubDate: 2024-02-22
      DOI: 10.3390/fi16030068
      Issue No: Vol. 16, No. 3 (2024)
       
  • Future Internet, Vol. 16, Pages 69: An Ontology-Based Cybersecurity
           Framework for AI-Enabled Systems and Applications

    • Authors: Davy Preuveneers, Wouter Joosen
      First page: 69
      Abstract: Ontologies have the potential to play an important role in the cybersecurity landscape as they are able to provide a structured and standardized way to semantically represent and organize knowledge about a domain of interest. They help in unambiguously modeling the complex relationships between various cybersecurity concepts and properties. Leveraging this knowledge, they provide a foundation for designing more intelligent and adaptive cybersecurity systems. In this work, we propose an ontology-based cybersecurity framework that extends well-known cybersecurity ontologies to specifically model and manage threats imposed on applications, systems, and services that rely on artificial intelligence (AI). More specifically, our efforts focus on documenting prevalent machine learning (ML) threats and countermeasures, including the mechanisms by which emerging attacks circumvent existing defenses as well as the arms race between them. In the ever-expanding AI threat landscape, the goal of this work is to systematically formalize a body of knowledge intended to complement existing taxonomies and threat-modeling approaches of applications empowered by AI and to facilitate their automated assessment by leveraging enhanced reasoning capabilities.
      Citation: Future Internet
      PubDate: 2024-02-22
      DOI: 10.3390/fi16030069
      Issue No: Vol. 16, No. 3 (2024)
       
  • Future Internet, Vol. 16, Pages 70: Micro-FL: A Fault-Tolerant Scalable
           Microservice-Based Platform for Federated Learning

    • Authors: Mikael Sabuhi, Petr Musilek, Cor-Paul Bezemer
      First page: 70
      Abstract: As the number of machine learning applications increases, growing concerns about data privacy expose the limitations of traditional cloud-based machine learning methods that rely on centralized data collection and processing. Federated learning emerges as a promising alternative, offering a novel approach to training machine learning models that safeguards data privacy. Federated learning facilitates collaborative model training across various entities. In this approach, each user trains models locally and shares only the local model parameters with a central server, which then generates a global model based on these individual updates. This approach ensures data privacy since the training data itself is never directly shared with a central entity. However, existing federated machine learning frameworks are not without challenges. In terms of server design, these frameworks exhibit limited scalability with an increasing number of clients and are highly vulnerable to system faults, particularly as the central server becomes a single point of failure. This paper introduces Micro-FL, a federated learning framework that uses a microservices architecture to implement the federated learning system. It demonstrates that the framework is fault-tolerant and scalable, showing its ability to handle an increasing number of clients. A comprehensive performance evaluation confirms that Micro-FL proficiently handles component faults, enabling a smooth and uninterrupted operation.
      Citation: Future Internet
      PubDate: 2024-02-22
      DOI: 10.3390/fi16030070
      Issue No: Vol. 16, No. 3 (2024)
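The federated learning loop the Micro-FL abstract describes (each client trains locally and shares only model parameters with a central server, which aggregates them into a global model) can be sketched as follows. This is a minimal FedAvg-style sketch on a toy linear model; the function names and the gradient-descent local solver are illustrative assumptions, not the Micro-FL implementation.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training step: plain gradient descent on
    least-squares linear regression (a stand-in for any local model)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Server step: average the locally trained weights, weighted by
    each client's sample count. Only parameters cross the boundary."""
    updates = [(local_update(global_w, X, y), len(y)) for X, y in clients]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):                      # four clients with private data
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(30):                     # thirty communication rounds
    w = federated_round(w, clients)
```

Because only `w` crosses the client/server boundary, the raw `(X, y)` data never leaves each client, which is the privacy property the abstract highlights.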
       
  • Future Internet, Vol. 16, Pages 71: Scheduling of Industrial Control
           Traffic for Dynamic RAN Slicing with Distributed Massive MIMO

    • Authors: Emma Fitzgerald, Michał Pióro
      First page: 71
      Abstract: Industry 4.0, with its focus on flexibility and customizability, is pushing in the direction of wireless communication in future smart factories, in particular, massive multiple-input-multiple-output (MIMO) and its future evolution of large intelligent surfaces (LIS), which provide more reliable channel quality than previous technologies. At the same time, network slicing in 5G and beyond systems provides easier management of different categories of users and traffic, and a better basis for providing quality of service, especially for demanding use cases such as industrial control. In previous works, we have presented solutions for scheduling industrial control traffic in LIS and massive MIMO systems. We now consider the case of dynamic slicing in the radio access network, where we need to not only meet the stringent latency and reliability requirements of industrial control traffic, but also minimize the radio resources occupied by the network slice serving the control traffic, ensuring resources are available for lower-priority traffic slices. In this paper, we provide mixed-integer programming optimization formulations for radio resource usage minimization for dynamic network slicing. We tested our formulations in numerical experiments with varying traffic profiles and numbers of nodes, up to a maximum of 32 nodes. For all problem instances tested, we were able to calculate an optimal schedule within 1 s, making our approach feasible for use in real deployment scenarios.
      Citation: Future Internet
      PubDate: 2024-02-23
      DOI: 10.3390/fi16030071
      Issue No: Vol. 16, No. 3 (2024)
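The flavor of mixed-integer formulation the abstract refers to can be illustrated with a deliberately simplified sketch; the variables, constraints, and symbols below are our illustrative assumptions, not the paper's actual model. The idea: reserve as few slots as possible for the control slice while every control flow is scheduled before its deadline.

```latex
\min_{y,\,z} \; \sum_{t=1}^{T} z_t
\quad \text{s.t.} \quad
\sum_{t \le d_i} y_{i,t} = 1 \;\; \forall i, \qquad
\sum_{i} y_{i,t} \le C\, z_t \;\; \forall t, \qquad
y_{i,t},\, z_t \in \{0,1\}
```

Here y_{i,t} = 1 schedules control flow i in slot t, d_i is its latency deadline, z_t marks slot t as reserved for the control slice, and C is the slot capacity; minimizing the sum of z_t frees the remaining slots for lower-priority traffic slices, as the abstract describes.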
       
  • Future Internet, Vol. 16, Pages 72: Analyzing GPU Performance in
           Virtualized Environments: A Case Study

    • Authors: Adel Belkhiri, Michel Dagenais
      First page: 72
      Abstract: The graphics processing unit (GPU) plays a crucial role in boosting application performance and enhancing computational tasks. Thanks to its parallel architecture and energy efficiency, the GPU has become essential in many computing scenarios. On the other hand, the advent of GPU virtualization has been a significant breakthrough, as it provides scalable and adaptable GPU resources for virtual machines. However, this technology faces challenges in debugging and analyzing the performance of GPU-accelerated applications. Most current performance tools do not support virtual GPUs (vGPUs), highlighting the need for more advanced tools. Thus, this article introduces a novel performance analysis tool that is designed for systems using vGPUs. Our tool is compatible with the Intel GVT-g virtualization solution, although its underlying principles can apply to many vGPU-based systems. Our tool uses software tracing techniques to gather detailed runtime data and generate relevant performance metrics. It also offers many synchronized graphical views, which gives practitioners deep insights into GVT-g operations and helps them identify potential performance bottlenecks in vGPU-enabled virtual machines.
      Citation: Future Internet
      PubDate: 2024-02-23
      DOI: 10.3390/fi16030072
      Issue No: Vol. 16, No. 3 (2024)
       
  • Future Internet, Vol. 16, Pages 73: Deep Learning for Intrusion Detection
           Systems (IDSs) in Time Series Data

    • Authors: Konstantinos Psychogyios, Andreas Papadakis, Stavroula Bourou, Nikolaos Nikolaou, Apostolos Maniatis, Theodore Zahariadis
      First page: 73
      Abstract: The advent of computer networks and the internet has drastically altered the means by which we share information and interact with each other. However, this technological advancement has also created opportunities for malevolent behavior, with individuals exploiting vulnerabilities to gain access to confidential data, obstruct activity, etc. To this end, intrusion detection systems (IDSs) are needed to filter malicious traffic and prevent common attacks. In the past, these systems relied on a fixed set of rules or comparisons with previous attacks. However, with the increased availability of computational power and data, machine learning has emerged as a promising solution for this task. While many systems now use this methodology in real-time for a reactive approach to mitigation, we explore the potential of configuring it as a proactive time series prediction. In this work, we delve into this possibility further. More specifically, we convert a classic IDS dataset to a time series format and use predictive models to forecast forthcoming malign packets. We propose a new architecture combining convolutional neural networks, long short-term memory networks, and attention. The findings indicate that our model performs strongly, exhibiting an F1 score and AUC that are within margins of 1% and 3%, respectively, when compared to conventional real-time detection. Also, our architecture achieves an ∼8% F1 score improvement compared to an LSTM (long short-term memory) model.
      Citation: Future Internet
      PubDate: 2024-02-23
      DOI: 10.3390/fi16030073
      Issue No: Vol. 16, No. 3 (2024)
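Converting a classic per-packet IDS dataset into the time series format the abstract mentions amounts to sliding a window over the packet stream and labeling each window with a future packet's class. A minimal sketch, where the window and horizon sizes are arbitrary illustrative choices:

```python
import numpy as np

def to_windows(features, labels, window=8, horizon=1):
    """Convert per-packet records into supervised sequences: each
    sample is `window` consecutive feature vectors, and its target is
    the label `horizon` steps ahead (is the upcoming packet malicious?)."""
    X, y = [], []
    for t in range(len(features) - window - horizon + 1):
        X.append(features[t:t + window])
        y.append(labels[t + window + horizon - 1])
    return np.array(X), np.array(y)

feats = np.arange(20).reshape(10, 2)            # 10 packets, 2 features each
labs = np.array([0, 0, 1, 0, 0, 1, 0, 0, 0, 1])  # 1 = malicious
X, y = to_windows(feats, labs, window=4, horizon=1)
```

The resulting `(samples, window, features)` tensor is the natural input shape for the convolutional/LSTM/attention architecture the abstract proposes.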
       
  • Future Internet, Vol. 16, Pages 74: The Electronic Medical Record—A
           New Look at the Challenges and Opportunities

    • Authors: Reeva Lederman, Esther Brainin, Ofir Ben-Assuli
      First page: 74
      Abstract: Electronic medical record (EMR) systems possess the potential to enable smart healthcare by serving as a hub for the transformation of medical data into meaningful information, knowledge, and wisdom in the health care sector [...]
      Citation: Future Internet
      PubDate: 2024-02-26
      DOI: 10.3390/fi16030074
      Issue No: Vol. 16, No. 3 (2024)
       
  • Future Internet, Vol. 16, Pages 75: A Lightweight Neural Network Model for
           Disease Risk Prediction in Edge Intelligent Computing Architecture

    • Authors: Feng Zhou, Shijing Hu, Xin Du, Xiaoli Wan, Jie Wu
      First page: 75
      Abstract: In the current field of disease risk prediction research, there are many methods that use servers for centralized computing to train and infer prediction models. However, this centralized computing method increases storage space, the load on network bandwidth, and the computing pressure on the central server. In this article, we design an image preprocessing method and propose a lightweight neural network model called Linge (Lightweight Neural Network Models for the Edge). We propose a distributed intelligent edge computing technology based on the federated learning algorithm for disease risk prediction. The intelligent edge computing method we proposed for disease risk prediction directly performs prediction model training and inference at the edge without increasing storage space. It also reduces the load on network bandwidth and reduces the computing pressure on the server. The lightweight neural network model we designed has only 7.63 MB of parameters and only takes up 155.28 MB of memory. In experiments comparing the Linge model with the EfficientNetV2 model, the accuracy and precision increased by 2%, the recall rate increased by 1%, the specificity increased by 4%, the F1 score increased by 3%, and the AUC (Area Under the Curve) value increased by 2%.
      Citation: Future Internet
      PubDate: 2024-02-26
      DOI: 10.3390/fi16030075
      Issue No: Vol. 16, No. 3 (2024)
       
  • Future Internet, Vol. 16, Pages 76: A Review on Decentralized Finance
           Ecosystems

    • Authors: Andry Alamsyah, Gede Natha Wijaya Kusuma, Dian Puteri Ramadhani
      First page: 76
      Abstract: The future of the internet is moving toward decentralization, with decentralized networks and blockchain technology playing essential roles in different sectors. Decentralized networks offer equality, accessibility, and security at a societal level, while blockchain technology guarantees security, authentication, and openness. Integrating blockchain technology with decentralized characteristics has become increasingly significant in finance; we call this “decentralized finance” (DeFi). As of January 2023, the DeFi crypto market was capitalized at USD 46.21 billion and served over 6.6 million users. As DeFi continues to outperform traditional finance (TradFi), it provides reduced fees, increased inclusivity, faster transactions, enhanced security, and improved accessibility, transparency, and programmability; it also eliminates intermediaries. For end users, DeFi presents asset custody options, peer-to-peer transactions, programmable control features, and innovative financial solutions. Despite its rapid growth in recent years, there is limited comprehensive research on mapping DeFi’s benefits and risks alongside its role as an enabling technology within the financial services sector. This research addresses these gaps by developing a DeFi classification system, organizing information, and clarifying connections among its various aspects. The research goal is to improve the understanding of DeFi in both academic and industrial circles to promote comprehension of the DeFi taxonomy. This well-organized DeFi taxonomy aids experts, regulators, and decision-makers in making informed and strategic decisions, thereby fostering responsible integration into TradFi for effective risk management. This study enhances DeFi security by providing users with clear guidance on existing mechanisms and risks in DeFi, reducing susceptibility to misinformation, and promoting secure participation. 
Additionally, it offers an overview of DeFi’s role in shaping the future of the internet.
      Citation: Future Internet
      PubDate: 2024-02-26
      DOI: 10.3390/fi16030076
      Issue No: Vol. 16, No. 3 (2024)
       
  • Future Internet, Vol. 16, Pages 77: Enabling Vehicle-to-Vehicle Trust in
           Rural Areas: An Evaluation of a Pre-Signature Scheme for
           Infrastructure-Limited Environments

    • Authors: Dimah Almani, Tim Muller, Xavier Carpent, Takahito Yoshizawa, Steven Furnell
      First page: 77
      Abstract: This research investigates the deployment and effectiveness of the novel Pre-Signature scheme, developed to make up-to-date reputation available in Vehicle-to-Vehicle (V2V) communications in rural landscapes, where the communications infrastructure is limited. We discuss how existing standards and specifications can be adjusted to incorporate the Pre-Signature scheme to disseminate reputation. Addressing the unique challenges posed by sparse or irregular Roadside Unit (RSU) coverage in these areas, the study investigates the implications of such environmental factors on the integrity and reliability of V2V communication networks. Using the widely used SUMO traffic simulation tool, we create and simulate real-world rural scenarios. We have conducted an in-depth performance evaluation of the Pre-Signature scheme under the typical infrastructural limitations encountered in rural scenarios. Our findings demonstrate the scheme’s usefulness in scenarios with variable or constrained RSU access. Furthermore, the relationships between three variables (communication range, number of RSUs, and degree of overnight home-to-vehicle connectivity) are studied, offering an exhaustive analysis of the determinants influencing V2V communication efficiency in rural contexts. The important findings are (1) that access to accurate Reputation Values increases with all three variables and (2) that the necessity of Pre-Signatures decreases as the number and range of RSUs increase. Together, these findings imply that areas with a low degree of RSU adoption (typically rural areas) benefit the most from our approach.
      Citation: Future Internet
      PubDate: 2024-02-26
      DOI: 10.3390/fi16030077
      Issue No: Vol. 16, No. 3 (2024)
       
  • Future Internet, Vol. 16, Pages 78: Deterministic K-Identification for
           Future Communication Networks: The Binary Symmetric Channel Results

    • Authors: Mohammad Javad Salariseddigh, Ons Dabbabi, Christian Deppe, Holger Boche
      First page: 78
      Abstract: Numerous applications of the Internet of Things (IoT) feature an event recognition behavior where the established Shannon capacity is not authorized to be the central performance measure. Instead, the identification capacity for such systems is considered to be an alternative metric, and has been developed in the literature. In this paper, we develop deterministic K-identification (DKI) for the binary symmetric channel (BSC) with and without a Hamming weight constraint imposed on the codewords. This channel may be of use for IoT in the context of smart system technologies, where sophisticated communication models can be reduced to a BSC for the aim of studying basic information theoretical properties. We derive inner and outer bounds on the DKI capacity of the BSC when the size of the goal message set K may grow in the codeword length n. As a major observation, we find that, for deterministic encoding, assuming that K grows exponentially in n, i.e., K = 2^{nκ}, where κ is the identification goal rate, then the number of messages that can be accurately identified grows exponentially in n, i.e., 2^{nR}, where R is the DKI coding rate. Furthermore, the established inner and outer bound regions reflect the impact of the input constraint (Hamming weight) and the channel statistics, i.e., the cross-over probability.
      Citation: Future Internet
      PubDate: 2024-02-26
      DOI: 10.3390/fi16030078
      Issue No: Vol. 16, No. 3 (2024)
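Written out explicitly, the exponential scaling relations quoted in the abstract are as follows (the symbol M(n) for the number of reliably identifiable messages is our notation, introduced here only for readability):

```latex
K = 2^{n\kappa}, \qquad \kappa \ \text{the identification goal rate}, \\
M(n) = 2^{nR}, \qquad R \ \text{the DKI coding rate}.
```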
       
  • Future Internet, Vol. 16, Pages 79: A Multi-Head LSTM Architecture for
           Bankruptcy Prediction with Time Series Accounting Data

    • Authors: Mattia Pellegrino, Gianfranco Lombardo, George Adosoglou, Stefano Cagnoni, Panos M. Pardalos, Agostino Poggi
      First page: 79
      Abstract: With the recent advances in machine learning (ML), several models have been successfully applied to financial and accounting data to predict the likelihood of companies’ bankruptcy. However, time series have received little attention in the literature, with a lack of studies on the application of deep learning sequence models such as Recurrent Neural Networks (RNNs) and the recent Attention-based models in general. In this research work, we investigated the application of Long Short-Term Memory (LSTM) networks to exploit time series of accounting data for bankruptcy prediction. The main contributions of our work are the following: (a) We proposed a multi-head LSTM that models each financial variable in a time window independently and compared it with a single-input LSTM and other traditional ML models. The multi-head LSTM outperformed all the other models. (b) We identified the optimal time series length for bankruptcy prediction to be equal to 4 years of accounting data. (c) We made public the dataset we used for the experiments which includes data from 8262 different public companies in the American stock market generated in the period between 1999 and 2018. Furthermore, we proved the efficacy of the multi-head LSTM model in terms of fewer false positives and the better division of the two classes.
      Citation: Future Internet
      PubDate: 2024-02-27
      DOI: 10.3390/fi16030079
      Issue No: Vol. 16, No. 3 (2024)
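The multi-head idea in contribution (a), one sequence encoder per financial variable with the head outputs concatenated before classification, can be illustrated structurally. In this sketch each LSTM head is replaced by a trivial summary function purely to show the data flow; this is not the paper's model.

```python
import numpy as np

def head_encode(series):
    """Stand-in for one LSTM head: summarize a single financial
    variable's time series as a fixed-size feature vector
    (last value, mean, population std)."""
    return np.array([series[-1], series.mean(), series.std()])

def multi_head_features(sample):
    """sample has shape (n_years, n_variables). Each variable gets its
    own head; the head outputs are concatenated into the feature vector
    that would feed the final classification layer."""
    return np.concatenate([head_encode(sample[:, v])
                           for v in range(sample.shape[1])])

sample = np.arange(12, dtype=float).reshape(4, 3)  # 4 years, 3 variables
feats = multi_head_features(sample)
```

The contrast with a single-input LSTM is that here each variable's temporal pattern is encoded independently before any cross-variable mixing, which is the design the paper reports as outperforming the alternatives.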
       
  • Future Internet, Vol. 16, Pages 36: Synchronization of Separate
           Sensors’ Data Transferred through a Local Wi-Fi Network: A Use Case
           of Human-Gait Monitoring

    • Authors: Viktor Masalskyi, Dominykas Čičiurėnas, Andrius Dzedzickis, Urtė Prentice, Gediminas Braziulis, Vytautas Bučinskas
      First page: 36
      Abstract: This paper addresses the challenge of synchronizing data acquisition from independent sensor systems in a local network. The network comprises microcontroller-based systems that collect data from physical sensors used for monitoring human gait. The synchronized data are transmitted to a PC or cloud storage through a central controller. This research proposes a solution for effectively synchronizing the data acquisition using two alternative data-synchronization approaches. Additionally, it explores techniques to handle varying amounts of data from different sensor types. The experimental research validates the proposed solution by providing trial results and stability evaluations and comparing them to the human-gait-monitoring system requirements. The alternative data-transmission method was used to compare the data-transmission quality and data-loss rate. The developed algorithm allows data acquisition from six pressure sensors and two accelerometer/gyroscope modules, ensuring a 24.6 Hz sampling rate and 1 ms synchronization accuracy. The obtained results prove the algorithm’s suitability for human-gait monitoring during regular activity. The paper concludes with discussions and key insights derived from the obtained results.
      Citation: Future Internet
      PubDate: 2024-01-23
      DOI: 10.3390/fi16020036
      Issue No: Vol. 16, No. 2 (2024)
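One simple way to picture the synchronization problem the abstract addresses is nearest-timestamp pairing of two independently sampled streams under a tolerance. This sketch is an illustrative baseline, not the paper's algorithm (which targets a 24.6 Hz sampling rate with 1 ms accuracy):

```python
def align(stream_a, stream_b, tolerance_ms=1.0):
    """Pair each sample in stream_a with the nearest-in-time sample in
    stream_b; keep only pairs within the tolerance. Streams are lists
    of (timestamp_ms, value), sorted by timestamp."""
    pairs, j = [], 0
    for t_a, v_a in stream_a:
        # advance while the next stream_b sample is at least as close
        while j + 1 < len(stream_b) and \
                abs(stream_b[j + 1][0] - t_a) <= abs(stream_b[j][0] - t_a):
            j += 1
        t_b, v_b = stream_b[j]
        if abs(t_b - t_a) <= tolerance_ms:
            pairs.append((t_a, v_a, v_b))
    return pairs

# Two streams at roughly 24.6 Hz (about 40.65 ms apart) with clock skew
accel = [(0.0, "a0"), (40.6, "a1"), (81.3, "a2")]
press = [(0.4, "p0"), (40.2, "p1"), (82.0, "p2")]
matched = align(accel, press, tolerance_ms=1.0)
```

Tightening the tolerance drops pairs whose clock skew exceeds it, which is the trade-off between synchronization accuracy and data-loss rate that the paper evaluates.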
       
  • Future Internet, Vol. 16, Pages 37: DDPG-MPCC: An Experience Driven
           Multipath Performance Oriented Congestion Control

    • Authors: Shiva Raj Pokhrel, Jonathan Kua, Deol Satish, Sebnem Ozer, Jeff Howe, Anwar Walid
      First page: 37
      Abstract: We introduce a novel multipath data transport approach at the transport layer referred to as ‘Deep Deterministic Policy Gradient for Multipath Performance-oriented Congestion Control’ (DDPG-MPCC), which leverages deep reinforcement learning to enhance congestion management in multipath networks. Our method combines DDPG with online convex optimization to optimize fairness and performance in simultaneously challenging multipath internet congestion control scenarios. Through experiments by developing kernel implementation, we show how DDPG-MPCC performs compared to the state-of-the-art solutions.
      Citation: Future Internet
      PubDate: 2024-01-23
      DOI: 10.3390/fi16020037
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 38: Refined Semi-Supervised Modulation
           Classification: Integrating Consistency Regularization and Pseudo-Labeling
           Techniques

    • Authors: Min Ma, Shanrong Liu, Shufei Wang, Shengnan Shi
      First page: 38
      Abstract: Automatic modulation classification (AMC) plays a crucial role in wireless communication by identifying the modulation scheme of received signals, bridging signal reception and demodulation. Its main challenge lies in performing accurate signal processing without prior information. While deep learning has been applied to AMC, its effectiveness largely depends on the availability of labeled samples. To address the scarcity of labeled data, we introduce a novel semi-supervised AMC approach combining consistency regularization and pseudo-labeling. This method capitalizes on the inherent data distribution of unlabeled data to supplement the limited labeled data. Our approach involves a dual-component objective function for model training: one part focuses on the loss from labeled data, while the other addresses the regularized loss for unlabeled data, enhanced through two distinct levels of data augmentation. These combined losses concurrently refine the model parameters. Our method demonstrates superior performance over established benchmark algorithms, such as decision trees (DTs), support vector machines (SVMs), pi-models, and virtual adversarial training (VAT). It exhibits a marked improvement in the recognition accuracy, particularly when the proportion of labeled samples is as low as 1–4%.
      Citation: Future Internet
      PubDate: 2024-01-23
      DOI: 10.3390/fi16020038
      Issue No: Vol. 16, No. 2 (2024)
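The dual-component objective the abstract describes can be made concrete: a supervised cross-entropy term on labeled samples plus a pseudo-label consistency term, in which a weakly augmented view supplies a hard label (kept only above a confidence threshold) that the strongly augmented view must match. The threshold and weight `lam` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def cross_entropy(p, onehot):
    """Per-sample cross-entropy between predicted probabilities and
    one-hot targets."""
    return -np.sum(onehot * np.log(p + 1e-12), axis=1)

def semi_supervised_loss(p_lab, y_lab, p_weak, p_strong,
                         threshold=0.95, lam=1.0):
    """Supervised term on labeled data plus a pseudo-label consistency
    term on unlabeled data, combined as loss = sup + lam * unsup."""
    n_classes = p_lab.shape[1]
    sup = cross_entropy(p_lab, np.eye(n_classes)[y_lab]).mean()

    conf = p_weak.max(axis=1)                          # confidence of weak view
    pseudo = np.eye(n_classes)[p_weak.argmax(axis=1)]  # hard pseudo-labels
    mask = conf > threshold                            # keep confident ones only
    unsup = (cross_entropy(p_strong, pseudo) * mask).mean()
    return sup + lam * unsup
```

When no unlabeled prediction clears the threshold, the objective reduces to the plain supervised loss, so unreliable pseudo-labels cannot pollute training.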
       
  • Future Internet, Vol. 16, Pages 39: Volumetric Techniques for Product
           Routing and Loading Optimisation in Industry 4.0: A Review

    • Authors: Ricardo Lopes, Marcello Trovati, Ella Pereira
      First page: 39
      Abstract: Industry 4.0 has become a crucial part of the majority of processes, components, and related modelling, as well as predictive tools that allow a more efficient, automated and sustainable approach to industry. The availability of large quantities of data, and the advances in IoT, AI, and data-driven frameworks, have led to enhanced data gathering, assessment, and extraction of actionable information, resulting in a better decision-making process. Product picking and its subsequent packing is an important area and has drawn increasing attention from the research community. However, depending on the context, some of the related approaches tend to be either highly mathematical or applied to a specific context. This article aims to provide a survey of the main methods, techniques, and frameworks relevant to product packing and to highlight the main properties and features that should be further investigated to ensure a more efficient and optimised approach.
      Citation: Future Internet
      PubDate: 2024-01-24
      DOI: 10.3390/fi16020039
      Issue No: Vol. 16, No. 2 (2024)
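Among the loading-optimisation baselines such a survey typically covers is the classic first-fit-decreasing heuristic. A minimal volumetric sketch follows, with a single scalar volume per product, which ignores the geometric constraints real loading systems must also handle:

```python
def first_fit_decreasing(volumes, capacity):
    """First-fit-decreasing bin packing: sort items by volume, place
    each into the first bin with enough remaining capacity, and open a
    new bin when none fits."""
    bins = []
    for v in sorted(volumes, reverse=True):
        for b in bins:
            if sum(b) + v <= capacity:
                b.append(v)
                break
        else:
            bins.append([v])       # no existing bin fits; open a new one
    return bins

volumes = [5, 4, 3, 3, 2, 2, 1]
bins = first_fit_decreasing(volumes, capacity=10)
```

Sorting in decreasing order first is what distinguishes FFD from plain first-fit and typically yields packings much closer to optimal, which is why it is a standard reference point for the data-driven methods the article surveys.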
       
  • Future Internet, Vol. 16, Pages 40: A Holistic Analysis of Internet of
           Things (IoT) Security: Principles, Practices, and New Perspectives

    • Authors: Mahmud Hossain, Golam Kayas, Ragib Hasan, Anthony Skjellum, Shahid Noor, S. M. Riazul Islam
      First page: 40
      Abstract: Driven by the rapid escalation of its utilization, as well as ramping commercialization, Internet of Things (IoT) devices increasingly face security threats. Apart from denial of service, privacy, and safety concerns, compromised devices can be used as enablers for committing a variety of crime and e-crime. Despite ongoing research and study, there remains a significant gap in the thorough analysis of security challenges, feasible solutions, and open security problems for IoT. To bridge this gap, we provide a comprehensive overview of the state of the art in IoT security with a critical investigation-based approach. This includes a detailed analysis of vulnerabilities in IoT-based systems and potential attacks. We present a holistic review of the security properties required to be adopted by IoT devices, applications, and services to mitigate IoT vulnerabilities and, thus, successful attacks. Moreover, we identify challenges to the design of security protocols for IoT systems in which constituent devices vary markedly in capability (such as storage, computation speed, hardware architecture, and communication interfaces). Next, we review existing research and feasible solutions for IoT security. We highlight a set of open problems not yet addressed among existing security solutions. We provide a set of new perspectives for future research on such issues including secure service discovery, on-device credential security, and network anomaly detection. We also provide directions for designing a forensic investigation framework for IoT infrastructures to inspect relevant criminal cases, execute a cyber forensic process, and determine the facts about a given incident. This framework offers a means to better capture information on successful attacks as part of a feedback mechanism to thwart future vulnerabilities and threats. This systematic holistic review will both inform on current challenges in IoT security and ideally motivate their future resolution.
      Citation: Future Internet
      PubDate: 2024-01-24
      DOI: 10.3390/fi16020040
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 41: Beyond Lexical Boundaries:
           LLM-Generated Text Detection for Romanian Digital Libraries

    • Authors: Melania Nitu, Mihai Dascalu
      First page: 41
      Abstract: Machine-generated content reshapes the landscape of digital information; hence, ensuring the authenticity of texts within digital libraries has become a paramount concern. This work introduces a corpus of approximately 60 k Romanian documents, including human-written samples as well as generated texts using six distinct Large Language Models (LLMs) and three different generation methods. Our robust experimental dataset covers five domains, namely books, news, legal, medical, and scientific publications. The exploratory text analysis revealed differences between human-authored and artificially generated texts, exposing the intricacies of lexical diversity and textual complexity. Since Romanian is a less-resourced language requiring dedicated detectors on which out-of-the-box solutions do not work, this paper introduces two techniques for discerning machine-generated texts. The first method leverages a Transformer-based model to categorize texts as human or machine-generated, while the second method extracts and examines linguistic features, such as identifying the top textual complexity indices via Kruskal–Wallis mean rank and computes burstiness, which are further fed into a machine-learning model leveraging an extreme gradient-boosting decision tree. The methods show competitive performance, with the first technique’s results outperforming the second one in two out of five domains, reaching an F1 score of 0.96. Our study also includes a text similarity analysis between human-authored and artificially generated texts, coupled with a SHAP analysis to understand which linguistic features contribute more to the classifier’s decision.
      Citation: Future Internet
      PubDate: 2024-01-25
      DOI: 10.3390/fi16020041
      Issue No: Vol. 16, No. 2 (2024)
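The burstiness feature mentioned in the abstract above can be illustrated with a minimal sketch, assuming the common (sigma - mu)/(sigma + mu) formulation computed over sentence lengths; the paper's exact feature definition is not given here, so this is an illustrative stand-in:

```python
import statistics

def burstiness(values):
    """Burstiness index B = (sigma - mu) / (sigma + mu), a common
    formulation over a sequence of counts (e.g., sentence lengths in
    tokens). B near -1 means very regular; higher B means more varied."""
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)
    if sigma + mu == 0:
        return 0.0
    return (sigma - mu) / (sigma + mu)

# Hypothetical sentence lengths: human writing tends to vary more,
# machine-generated text tends to be more uniform.
human_like = [12, 31, 8, 44, 15, 27, 9, 38]
machine_like = [20, 21, 19, 22, 20, 21, 20, 19]
print(burstiness(human_like) > burstiness(machine_like))  # True
```

A feature like this, alongside textual-complexity indices, would then be fed to a gradient-boosted classifier as the abstract describes.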
       
  • Future Internet, Vol. 16, Pages 42: TinyML Algorithms for Big Data
           Management in Large-Scale IoT Systems

    • Authors: Aristeidis Karras, Anastasios Giannaros, Christos Karras, Leonidas Theodorakopoulos, Constantinos S. Mammassis, George A. Krimpas, Spyros Sioutas
      First page: 42
      Abstract: In the context of the Internet of Things (IoT), Tiny Machine Learning (TinyML) and Big Data, enhanced by Edge Artificial Intelligence, are essential for effectively managing the extensive data produced by numerous connected devices. Our study introduces a set of TinyML algorithms designed and developed to improve Big Data management in large-scale IoT systems. These algorithms, named TinyCleanEDF, EdgeClusterML, CompressEdgeML, CacheEdgeML, and TinyHybridSenseQ, operate together to enhance data processing, storage, and quality control in IoT networks, utilizing the capabilities of Edge AI. In particular, TinyCleanEDF applies federated learning for Edge-based data cleaning and anomaly detection. EdgeClusterML combines reinforcement learning with self-organizing maps for effective data clustering. CompressEdgeML uses neural networks for adaptive data compression. CacheEdgeML employs predictive analytics for smart data caching, and TinyHybridSenseQ concentrates on data quality evaluation and hybrid storage strategies. Our experimental evaluation of the proposed techniques includes executing all the algorithms in various numbers of Raspberry Pi devices ranging from one to ten. The experimental results are promising as we outperform similar methods across various evaluation metrics. Ultimately, we anticipate that the proposed algorithms offer a comprehensive and efficient approach to managing the complexities of IoT, Big Data, and Edge AI.
      Citation: Future Internet
      PubDate: 2024-01-25
      DOI: 10.3390/fi16020042
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 43: A Spectral Gap-Based Topology Control
           Algorithm for Wireless Backhaul Networks

    • Authors: Sergio Jesús González-Ambriz, Rolando Menchaca-Méndez, Sergio Alejandro Pinacho-Castellanos, Mario Eduardo Rivero-Ángeles 
      First page: 43
      Abstract: This paper presents the spectral gap-based topology control algorithm (SGTC) for wireless backhaul networks, a novel approach that employs the Laplacian Spectral Gap (LSG) to find expander-like graphs that optimize the topology of the network in terms of robustness, diameter, energy cost, and network entropy. The latter measures the network’s ability to promote seamless traffic offloading from the Macro Base Stations to smaller cells by providing a high diversity of shortest paths connecting all the stations. Given the practical constraints imposed by cellular technologies, the proposed algorithm uses simulated annealing to search for feasible network topologies with a large LSG. Then, it computes the Pareto front of the set of feasible solutions found during the annealing process when considering robustness, diameter, and entropy as objective functions. The algorithm’s result is the Pareto efficient solution that minimizes energy cost. A set of experimental results shows that by optimizing the LSG, the proposed algorithm simultaneously optimizes the set of desirable topological properties mentioned above. The results also revealed that generating networks with good spectral expansion is possible even under the restrictions imposed by current wireless technologies. This is a desirable feature because these networks have strong connectivity properties even if they do not have a large number of links.
      Citation: Future Internet
      PubDate: 2024-01-26
      DOI: 10.3390/fi16020043
      Issue No: Vol. 16, No. 2 (2024)
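The Laplacian Spectral Gap (LSG) that SGTC maximizes can be computed directly from a topology's adjacency matrix; a minimal sketch (not the authors' implementation), using the second-smallest eigenvalue of L = D - A:

```python
import numpy as np

def laplacian_spectral_gap(adj):
    """Laplacian spectral gap: the second-smallest eigenvalue
    (algebraic connectivity) of L = D - A. Larger values indicate
    expander-like, robustly connected topologies."""
    A = np.asarray(adj, dtype=float)
    D = np.diag(A.sum(axis=1))
    eigvals = np.sort(np.linalg.eigvalsh(D - A))
    return eigvals[1]

# 4-node ring vs. fully connected 4-node graph
ring = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
full = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
print(laplacian_spectral_gap(ring))  # 2.0
print(laplacian_spectral_gap(full))  # 4.0
```

A search procedure such as the simulated annealing described above would score candidate backhaul topologies with a function like this.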
       
  • Future Internet, Vol. 16, Pages 44: Computer Vision and Machine
           Learning-Based Predictive Analysis for Urban Agricultural Systems

    • Authors: Arturs Kempelis, Inese Polaka, Andrejs Romanovs, Antons Patlins
      First page: 44
      Abstract: Urban agriculture presents unique challenges, particularly in the context of microclimate monitoring, which is increasingly important in food production. This paper explores the application of convolutional neural networks (CNNs) to forecast key sensor measurements from thermal images within this context. This research focuses on using thermal images to forecast sensor measurements of relative air humidity, soil moisture, and light intensity, which are integral to plant health and productivity in urban farming environments. The results indicate a higher accuracy in forecasting relative air humidity and soil moisture levels, with Mean Absolute Percentage Errors (MAPEs) within the range of 10–12%. These findings correlate with the strong dependency of these parameters on thermal patterns, which are effectively extracted by the CNNs. In contrast, the forecasting of light intensity proved to be more challenging, yielding lower accuracy. The reduced performance is likely due to the more complex and variable factors that affect light in urban environments. The insights gained from the higher predictive accuracy for relative air humidity and soil moisture may inform targeted interventions for urban farming practices, while the lower accuracy in light intensity forecasting highlights the need for further research into the integration of additional data sources or hybrid modeling approaches. The conclusion suggests that the integration of these technologies can significantly enhance the predictive maintenance of plant health, leading to more sustainable and efficient urban farming practices. However, the study also acknowledges the challenges in implementing these technologies in urban agricultural models.
      Citation: Future Internet
      PubDate: 2024-01-28
      DOI: 10.3390/fi16020044
      Issue No: Vol. 16, No. 2 (2024)
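The MAPE metric reported above (10-12% for relative air humidity and soil moisture) is straightforward to compute; a minimal sketch with illustrative numbers, not the study's data:

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error: mean of |actual - forecast| / |actual|,
    expressed as a percentage. Assumes no actual value is zero."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical relative-humidity readings (%) and CNN forecasts
humidity_actual = [55.0, 60.0, 52.0, 48.0]
humidity_forecast = [50.0, 66.0, 47.0, 53.0]
print(round(mape(humidity_actual, humidity_forecast), 1))  # 9.8
```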
       
  • Future Internet, Vol. 16, Pages 45: Non-Profiled Unsupervised Horizontal
           Iterative Attack against Hardware Elliptic Curve Scalar Multiplication
           Using Machine Learning

    • Authors: Marcin Aftowicz, Ievgen Kabin, Zoya Dyka, Peter Langendörfer
      First page: 45
      Abstract: While IoT technology makes industries, cities, and homes smarter, it also opens the door to security risks. With the right equipment and physical access to the devices, the attacker can leverage side-channel information, like timing, power consumption, or electromagnetic emanation, to compromise cryptographic operations and extract the secret key. This work presents a side channel analysis of a cryptographic hardware accelerator for the Elliptic Curve Scalar Multiplication operation, implemented in a Field-Programmable Gate Array and as an Application-Specific Integrated Circuit. The presented framework consists of initial key extraction using a state-of-the-art statistical horizontal attack and is followed by regularized Artificial Neural Networks, which take, as input, the partially incorrect key guesses from the horizontal attack and correct them iteratively. The initial correctness of the horizontal attack, measured as the fraction of correctly extracted bits of the secret key, was improved from 75% to 98% by applying the iterative learning.
      Citation: Future Internet
      PubDate: 2024-01-29
      DOI: 10.3390/fi16020045
      Issue No: Vol. 16, No. 2 (2024)
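The correctness measure used above (the fraction of correctly extracted bits of the secret key, improved from 75% to 98% by iterative learning) can be sketched as follows, with hypothetical bitstrings:

```python
def bit_correctness(true_key, guessed_key):
    """Fraction of key bits the attack recovered correctly,
    comparing the guess bit-by-bit against the true key."""
    matches = sum(t == g for t, g in zip(true_key, guessed_key))
    return matches / len(true_key)

true_key = "101100111000"   # hypothetical secret scalar bits
guess    = "101100101001"   # partially incorrect extraction
print(bit_correctness(true_key, guess))  # 10 of 12 bits correct
```

In the framework described above, guesses like this from the statistical horizontal attack are fed to regularized neural networks that iteratively correct the wrong bits.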
       
  • Future Internet, Vol. 16, Pages 46: Context-Aware Behavioral Tips to
           Improve Sleep Quality via Machine Learning and Large Language Models

    • Authors: Erica Corda, Silvia M. Massa, Daniele Riboni
      First page: 46
      Abstract: As several studies demonstrate, good sleep quality is essential for individuals’ well-being, as a lack of restoring sleep may disrupt different physical, mental, and social dimensions of health. For this reason, there is increasing interest in tools for the monitoring of sleep based on personal sensors. However, there are currently few context-aware methods to help individuals to improve their sleep quality through behavior change tips. In order to tackle this challenge, in this paper, we propose a system that couples machine learning algorithms and large language models to forecast the next night’s sleep quality, and to provide context-aware behavior change tips to improve sleep. In order to encourage adherence and to increase trust, our system includes the use of large language models to describe the conditions that the machine learning algorithm finds harmful to sleep health, and to explain why the behavior change tips are generated as a consequence. We develop a prototype of our system, including a smartphone application, and perform experiments with a set of users. Results show that our system’s forecast is correlated to the actual sleep quality. Moreover, a preliminary user study suggests that the use of large language models in our system is useful in increasing trust and engagement.
      Citation: Future Internet
      PubDate: 2024-01-30
      DOI: 10.3390/fi16020046
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 47: Enhancing Urban Resilience: Smart City
           Data Analyses, Forecasts, and Digital Twin Techniques at the Neighborhood
           Level

    • Authors: Andreas F. Gkontzis, Sotiris Kotsiantis, Georgios Feretzakis, Vassilios S. Verykios
      First page: 47
      Abstract: Smart cities, leveraging advanced data analytics, predictive models, and digital twin techniques, offer a transformative model for sustainable urban development. Predictive analytics is critical to proactive planning, enabling cities to adapt to evolving challenges. Concurrently, digital twin techniques provide a virtual replica of the urban environment, fostering real-time monitoring, simulation, and analysis of urban systems. This study underscores the significance of real-time monitoring, simulation, and analysis of urban systems to support test scenarios that identify bottlenecks and enhance smart city efficiency. This paper delves into the crucial roles of citizen report analytics, prediction, and digital twin technologies at the neighborhood level. The study integrates extract, transform, load (ETL) processes, artificial intelligence (AI) techniques, and a digital twin methodology to process and interpret urban data streams derived from citizen interactions with the city’s coordinate-based problem mapping platform. Using an interactive GeoDataFrame within the digital twin methodology, dynamic entities facilitate simulations based on various scenarios, allowing users to visualize, analyze, and predict the response of the urban system at the neighborhood level. This approach reveals antecedent and predictive patterns, trends, and correlations at the physical level of each city area, leading to improvements in urban functionality, resilience, and resident quality of life.
      Citation: Future Internet
      PubDate: 2024-01-30
      DOI: 10.3390/fi16020047
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 48: Efficient Privacy-Aware Forwarding for
           Enhanced Communication Privacy in Opportunistic Mobile Social Networks

    • Authors: Azizah Assiri, Hassen Sallay
      First page: 48
      Abstract: Opportunistic mobile social networks (OMSNs) have become increasingly popular in recent years due to the rise of social media and smartphones. However, message forwarding and sharing social information through intermediary nodes on OMSNs raises privacy concerns as personal data and activities become more exposed. Therefore, maintaining privacy without limiting efficient social interaction is a challenging task. This paper addresses this specific problem of safeguarding user privacy during message forwarding by integrating a privacy layer on the state-of-the-art OMSN routing decision models that empowers users to control their message dissemination. Mainly, we present three user-centric privacy-aware forwarding modes guiding the selection of the next hop in the forwarding path based on social metrics such as common friends and exchanged messages between OMSN nodes. More specifically, we define different social relationship strengths approximating real-world scenarios (familiar, weak tie, stranger) and trust thresholds to give users choices on trust levels for different social contexts and guide the routing decisions. We evaluate the privacy enhancement and network performance through extensive simulations using ONE simulator for several routing schemes (Epidemic, Prophet, and Spray and Wait) and different movement models (random way, bus, and working day). We demonstrate that our modes can enhance privacy by up to 45% in various network scenarios, as measured by the reduction in the likelihood of unintended message propagation, while keeping the message-delivery process effective and efficient.
      Citation: Future Internet
      PubDate: 2024-01-31
      DOI: 10.3390/fi16020048
      Issue No: Vol. 16, No. 2 (2024)
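The trust-threshold forwarding modes described above can be sketched as a simple next-hop filter. The relationship names follow the abstract, but the threshold values and the tie-strength scale are illustrative assumptions, not the paper's parameters:

```python
def select_next_hop(candidates, mode):
    """Keep only neighbour nodes whose social tie strength (derived from
    metrics like common friends and exchanged messages) meets the trust
    threshold of the chosen privacy mode. Thresholds are hypothetical."""
    thresholds = {"familiar": 0.6, "weak_tie": 0.3, "stranger": 0.0}
    t_min = thresholds[mode]
    return [node for node, tie in candidates if tie >= t_min]

# (node, tie strength in [0, 1]) for the current encounter
neighbours = [("A", 0.8), ("B", 0.4), ("C", 0.1)]
print(select_next_hop(neighbours, "familiar"))  # ['A']
print(select_next_hop(neighbours, "weak_tie"))  # ['A', 'B']
```

A filter of this kind would sit as the privacy layer on top of an underlying routing scheme such as Epidemic, Prophet, or Spray and Wait.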
       
  • Future Internet, Vol. 16, Pages 49: A New Dynamic Game-Based Pricing Model
           for Cloud Environment

    • Authors: Hamid Saadatfar, Hamid Gholampour Ahangar, Javad Hassannataj Joloudari
      First page: 49
      Abstract: Resource pricing in cloud computing has become one of the main challenges for cloud providers. The challenge is determining a fair and appropriate price to satisfy users and resource providers. To establish a justifiable price, it is imperative to take into account the circumstances and requirements of both the provider and the user. This research tries to provide a pricing mechanism for cloud computing based on game theory. The suggested approach considers three aspects: the likelihood of faults, the interplay among virtual machines, and the amount of energy used, in order to determine a justifiable price. In the game that is being proposed, the provider is responsible for determining the price of the virtual machine that can be made available to the user on each physical machine. The user, on the other hand, has the authority to choose between the virtual machines that are offered in order to run their application. The whole game is implemented as a function of the resource broker component. The proposed mechanism is simulated and evaluated using the CloudSim simulator. Its performance is compared with several previous recent mechanisms. The results indicate that the suggested mechanism has successfully identified a more rational price for both the user and the provider, consequently enhancing the overall profitability of the cloud system.
      Citation: Future Internet
      PubDate: 2024-01-31
      DOI: 10.3390/fi16020049
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 50: Enhancing Smart City Safety and
           Utilizing AI Expert Systems for Violence Detection

    • Authors: Pradeep Kumar, Guo-Liang Shih, Bo-Lin Guo, Siva Kumar Nagi, Yibeltal Chanie Manie, Cheng-Kai Yao, Michael Augustine Arockiyadoss, Peng-Chun Peng
      First page: 50
      Abstract: Violent attacks have been one of the hot issues in recent years. In the presence of closed-circuit televisions (CCTVs) in smart cities, there is an emerging challenge in apprehending criminals, leading to a need for innovative solutions. In this paper, we propose a model aimed at enhancing real-time emergency response capabilities and swiftly identifying criminals. This initiative aims to foster a safer environment and better manage criminal activity within smart cities. The proposed architecture combines an image-to-image stable diffusion model with violence detection and pose estimation approaches. The diffusion model generates synthetic data while the object detection approach uses YOLO v7 to identify violent objects like baseball bats, knives, and pistols, complemented by MediaPipe for action detection. Further, a long short-term memory (LSTM) network classifies the action attacks involving violent objects. Subsequently, an ensemble consisting of an edge device and the entire proposed model is deployed onto the edge device for real-time data testing using a dash camera. Thus, this study can handle violent attacks and send alerts in emergencies. As a result, our proposed YOLO model achieves a mean average precision (MAP) of 89.5% for violent attack detection, and the LSTM classifier model achieves an accuracy of 88.33% for violent action classification. The results highlight the model’s enhanced capability to accurately detect violent objects, particularly in effectively identifying violence through the implemented artificial intelligence system.
      Citation: Future Internet
      PubDate: 2024-01-31
      DOI: 10.3390/fi16020050
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 51: Optimizing Session-Aware Recommenders:
           A Deep Dive into GRU-Based Latent Interaction Integration

    • Authors: Ming-Yen Lin, Ping-Chun Wu, Sue-Chen Hsueh
      First page: 51
      Abstract: This study introduces session-aware recommendation models, leveraging GRU (Gated Recurrent Unit) and attention mechanisms for advanced latent interaction data integration. A primary advancement is enhancing latent context, a critical factor for boosting recommendation accuracy. We address the existing models’ rigidity by dynamically blending short-term (most recent) and long-term (historical) preferences, moving beyond static period definitions. Our approaches, pre-combination (LCII-Pre) and post-combination (LCII-Post), with fixed (Fix) and flexible learning (LP) weight configurations, are thoroughly evaluated. We conducted extensive experiments to assess our models’ performance on public datasets such as Amazon and MovieLens 1M. Notably, on the MovieLens 1M dataset, LCII-PreFix achieved a 1.85% and 2.54% higher Recall@20 than II-RNN and BERT4Rec+st+TSA, respectively. On the Steam dataset, LCII-PostLP outperformed these models by 18.66% and 5.5%. Furthermore, on the Amazon dataset, LCII showed a 2.59% and 1.89% improvement in Recall@20 over II-RNN and CAII. These results affirm the significant enhancement our models bring to session-aware recommendation systems, showcasing their potential for both academic and practical applications in the field.
      Citation: Future Internet
      PubDate: 2024-02-01
      DOI: 10.3390/fi16020051
      Issue No: Vol. 16, No. 2 (2024)
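The Recall@20 metric used in the comparisons above can be sketched as follows, with hypothetical ranked recommendations:

```python
def recall_at_k(recommended, relevant, k=20):
    """Fraction of the ground-truth relevant items that appear in the
    top-k ranked recommendations for a session."""
    top_k = set(recommended[:k])
    return len(top_k & set(relevant)) / len(relevant)

recs = list(range(100))         # item ids ranked by predicted score
relevant = [3, 17, 42, 55, 99]  # ground-truth items for this session
print(recall_at_k(recs, relevant, k=20))  # 0.4 (items 3 and 17 hit)
```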
       
  • Future Internet, Vol. 16, Pages 52: A QoS-Aware IoT Edge Network for
           Mobile Telemedicine Enabling In-Transit Monitoring of Emergency Patients

    • Authors: Adwitiya Mukhopadhyay, Aryadevi Remanidevi Devidas, Venkat P. Rangan, Maneesha Vinodini Ramesh
      First page: 52
      Abstract: Addressing the inadequacy of medical facilities in rural communities and the high number of patients affected by ailments that need to be treated immediately is of prime importance for all countries. The various recent healthcare emergency situations bring out the importance of telemedicine and demand rapid transportation of patients to nearby hospitals with available resources to provide the required medical care. Many current healthcare facilities and ambulances are not equipped to provide real-time risk assessment for each patient and dynamically provide the required medical interventions. This work proposes an IoT-based mobile medical edge (IM2E) node to be integrated with wearable and portable devices for the continuous monitoring of emergency patients transported via ambulances and it delves deeper into the existing challenges, such as (a) a lack of a simplified patient risk scoring system, (b) the need for architecture that enables seamless communication for dynamically varying QoS requirements, and (c) the need for context-aware knowledge regarding the effect of end-to-end delay and the packet loss ratio (PLR) on the real-time monitoring of health risks in emergency patients. The proposed work builds a data path selection model to identify the most effective path through which to route the data packets in an effective manner. The signal-to-noise interference ratio and the fading in the path are chosen to analyze the suitable path for data transmission.
      Citation: Future Internet
      PubDate: 2024-02-06
      DOI: 10.3390/fi16020052
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 53: Special Issue: State-of-the-Art Future
           Internet Technology in Italy 2022–2023

    • Authors: Massimo Cafaro, Italo Epicoco, Marco Pulimeno
      First page: 53
      Abstract: This Special Issue aims to provide a comprehensive overview of the current state of the art in Future Internet Technology in Italy [...]
      Citation: Future Internet
      PubDate: 2024-02-06
      DOI: 10.3390/fi16020053
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 54: Modern Trends in Multi-Agent Systems

    • Authors: Martin Kenyeres, Ivana Budinská, Ladislav Hluchý, Agostino Poggi
      First page: 54
      Abstract: The term “multi-agent system” is generally understood as an interconnected set of independent entities that can effectively solve complex and time-consuming problems exceeding the individual abilities of common problem solvers [...]
      Citation: Future Internet
      PubDate: 2024-02-08
      DOI: 10.3390/fi16020054
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 55: Automated Identification of Sensitive
           Financial Data Based on the Topic Analysis

    • Authors: Meng Li, Jiqiang Liu, Yeping Yang
      First page: 55
      Abstract: Data governance is an extremely important protection and management measure throughout the entire life cycle of data. However, there are still data governance issues, such as data security risks, data privacy breaches, and difficulties in data management and access control. These problems lead to a risk of data breaches and abuse. Therefore, the security classification and grading of data has become an important task to accurately identify sensitive data and adopt appropriate maintenance and management measures with different sensitivity levels. This work started from the problems existing in the current data security classification and grading work, such as inconsistent classification and grading standards, difficult data acquisition and sorting, and weak semantic information of data fields, to find the limitations of the current methods and the direction for improvement. The automatic identification method of sensitive financial data proposed in this paper is based on topic analysis and was constructed by incorporating Jieba word segmentation, word frequency statistics, the skip-gram model, K-means clustering, and other technologies. Expert assistance was sought to select appropriate keywords for enhanced accuracy. This work used the descriptive text library and real business data of a Chinese financial institution for training and testing to further demonstrate its effectiveness and usefulness. The evaluation indicators illustrated the effectiveness of this method in the classification of data security. The proposed method addressed the challenge of sensitivity level division in texts with limited semantic information, which overcame the limitations on model expansion across different domains and provided an optimized application model. All of the above pointed out the direction for the real-time updating of the method.
      Citation: Future Internet
      PubDate: 2024-02-08
      DOI: 10.3390/fi16020055
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 56: A Secure Opportunistic Network with
           Efficient Routing for Enhanced Efficiency and Sustainability

    • Authors: Ayman Khalil, Besma Zeddini
      First page: 56
      Abstract: The intersection of cybersecurity and opportunistic networks has ushered in a new era of innovation in the realm of wireless communications. In an increasingly interconnected world, where seamless data exchange is pivotal for both individual users and organizations, the need for efficient, reliable, and sustainable networking solutions has never been more pressing. Opportunistic networks, characterized by intermittent connectivity and dynamic network conditions, present unique challenges that necessitate innovative approaches for optimal performance and sustainability. This paper introduces a groundbreaking paradigm that integrates the principles of cybersecurity with opportunistic networks. At its core, this study presents a novel routing protocol meticulously designed to significantly outperform existing solutions concerning key metrics such as delivery probability, overhead ratio, and communication delay. Leveraging cybersecurity’s inherent strengths, our protocol not only fortifies the network’s security posture but also provides a foundation for enhancing efficiency and sustainability in opportunistic networks. The overarching goal of this paper is to address the inherent limitations of conventional opportunistic network protocols. By proposing an innovative routing protocol, we aim to optimize data delivery, minimize overhead, and reduce communication latency. These objectives are crucial for ensuring seamless and timely information exchange, especially in scenarios where traditional networking infrastructures fall short. Through large-scale simulations, the new model proves its effectiveness in different scenarios, especially in terms of message delivery probability, while ensuring reasonable overhead and latency.
      Citation: Future Internet
      PubDate: 2024-02-08
      DOI: 10.3390/fi16020056
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 57: Distributed Mobility Management
           Support for Low-Latency Data Delivery in Named Data Networking for UAVs

    • Authors: Mohammed Bellaj, Najib Naja, Abdellah Jamali
      First page: 57
      Abstract: Named Data Networking (NDN) has emerged as a promising architecture to overcome the limitations of the conventional Internet Protocol (IP) architecture, particularly in terms of mobility, security, and data availability. However, despite the advantages it offers, producer mobility management remains a significant challenge for NDN, especially for moving vehicles and emerging technologies such as Unmanned Aerial Vehicles (UAVs), known for their high-speed and unpredictable movements, which makes it difficult for NDN to maintain seamless communication. To solve this mobility problem, we propose a Distributed Mobility Management Scheme (DMMS) to support UAV mobility and ensure low-latency content delivery in NDN architecture. DMMS utilizes decentralized Anchors to forward proactively the consumer’s Interest packets toward the producer’s predicted location when handoff occurs. Moreover, it introduces a new forwarding approach that combines the standard and location-based forwarding strategy to improve forwarding efficiency under producer mobility without changing the network structure. Using a realistic scenario, DMMS is evaluated and compared against two well-known solutions, namely MAP-ME and Kite, using the ndnSIM simulations. We demonstrate that DMMS achieves better results compared to Kite and MAP-ME solutions in terms of network cost and consumer quality-of-service metrics.
      Citation: Future Internet
      PubDate: 2024-02-10
      DOI: 10.3390/fi16020057
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 58: CROWDMATCH: Optimizing Crowdsourcing
           Matching through the Integration of Matching Theory and Coalition Games

    • Authors: Adedamola Adesokan, Rowan Kinney, Eirini Eleni Tsiropoulou
      First page: 58
      Abstract: This paper tackles the challenges inherent in crowdsourcing dynamics by introducing the CROWDMATCH mechanism. Aimed at enabling crowdworkers to strategically select suitable crowdsourcers while contributing information to crowdsourcing tasks, CROWDMATCH considers incentives, information availability and cost, and the decisions of fellow crowdworkers to model the utility functions for both the crowdworkers and the crowdsourcers. Specifically, the paper presents an initial Approximate CROWDMATCH mechanism grounded in matching theory principles, eliminating externalities from crowdworkers’ decisions and enabling each entity to maximize its utility. Subsequently, the Accurate CROWDMATCH mechanism is introduced, which is initiated by the outcome of the Approximate CROWDMATCH mechanism, and coalition game-theoretic principles are employed to refine the matching process by accounting for externalities. The paper’s contributions include the introduction of the CROWDMATCH system model, the development of both Approximate and Accurate CROWDMATCH mechanisms, and a demonstration of their superior performance through comprehensive simulation results. The mechanisms’ scalability in large-scale crowdsourcing systems and operational advantages are highlighted, distinguishing them from existing methods and highlighting their efficacy in empowering crowdworkers in crowdsourcer selection.
      Citation: Future Internet
      PubDate: 2024-02-11
      DOI: 10.3390/fi16020058
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 59: Digital-Twin-Based Monitoring System
           for Slab Production Process

    • Authors: Tianjie Fu, Peiyu Li, Chenke Shi, Youzhu Liu
      First page: 59
      Abstract: The growing demand for high-quality steel across various industries has led to an increasing need for superior-grade steel. The quality of slab ingots is a pivotal factor influencing the final quality of steel production. However, the current level of intelligence in the steelmaking industry’s processes is relatively insufficient. Consequently, slab ingot quality inspection is characterized by high-temperature risks and imprecision. The positional accuracy of quality detection is inadequate, and the precise quantification of slab ingot production and quality remains challenging. This paper proposes a digital twin (DT)-based monitoring system for the slab ingot production process that integrates DT technology with slab ingot process detection. A neural network is introduced for defect identification to ensure precise defect localization and efficient recognition. Concurrently, environmental production factors are considered, leading to the introduction of a defect prediction module. The effectiveness of this system is validated through experimental verification.
      Citation: Future Internet
      PubDate: 2024-02-13
      DOI: 10.3390/fi16020059
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 60: The Microverse: A Task-Oriented
           Edge-Scale Metaverse

    • Authors: Qian Qu, Mohsen Hatami, Ronghua Xu, Deeraj Nagothu, Yu Chen, Xiaohua Li, Erik Blasch, Erika Ardiles-Cruz, Genshe Chen
      First page: 60
      Abstract: Over the past decade, there has been a remarkable acceleration in the evolution of smart cities and intelligent spaces, driven by breakthroughs in technologies such as the Internet of Things (IoT), edge–fog–cloud computing, and machine learning (ML)/artificial intelligence (AI). As society begins to harness the full potential of these smart environments, the horizon brightens with the promise of an immersive, interconnected 3D world. The forthcoming paradigm shift in how we live, work, and interact owes much to groundbreaking innovations in augmented reality (AR), virtual reality (VR), extended reality (XR), blockchain, and digital twins (DTs). However, realizing the expansive digital vista in our daily lives is challenging. Current limitations include an incomplete integration of pivotal techniques, daunting bandwidth requirements, and the critical need for near-instantaneous data transmission, all impeding the digital VR metaverse from fully manifesting as envisioned by its proponents. This paper seeks to delve deeply into the intricacies of the immersive, interconnected 3D realm, particularly in applications demanding high levels of intelligence. Specifically, this paper introduces the microverse, a task-oriented, edge-scale, pragmatic solution for smart cities. Unlike all-encompassing metaverses, each microverse instance serves a specific task as a manageable digital twin of an individual network slice. Each microverse enables on-site/near-site data processing, information fusion, and real-time decision-making within the edge–fog–cloud computing framework. The microverse concept is verified using smart public safety surveillance (SPSS) for smart communities as a case study, demonstrating its feasibility in practical smart city applications. The aim is to stimulate discussions and inspire fresh ideas in our community, guiding us as we navigate the evolving digital landscape of smart cities to embrace the potential of the metaverse.
      Citation: Future Internet
      PubDate: 2024-02-13
      DOI: 10.3390/fi16020060
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 61: Enhancing Energy Efficiency in IoT-NDN
           via Parameter Optimization

    • Authors: Dennis Papenfuß, Bennet Gerlach, Stefan Fischer, Mohamed Ahmed Hail
      First page: 61
      Abstract: The IoT encompasses objects, sensors, and everyday items not typically considered computers. IoT devices are subject to severe energy, memory, and computation power constraints. Employing NDN for the IoT is a recent approach to accommodate these issues. To gain deeper insight into how different network parameters affect energy consumption, this work analyzes a range of parameters using hyperparameter optimization. The experiments with this work’s ndnSIM-based hyperparameter setup indicate that the data packet size has the most significant impact on energy consumption, followed by the caching scheme, caching strategy, and finally, the forwarding strategy. The energy footprints of these parameters are orders of magnitude apart. Surprisingly, the packet request sequence influences the caching parameters’ energy footprint more than the graph size and topology do. Regarding energy consumption, the results indicate that data compression may be more relevant than expected, and caching may be more significant than the forwarding strategy. The framework for ndnSIM developed in this work can be used to simulate NDN networks more efficiently. Furthermore, the work presents a valuable basis for further research on the effects of specific parameter combinations not examined before.
      Citation: Future Internet
      PubDate: 2024-02-16
      DOI: 10.3390/fi16020061
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 62: Merging Ontologies and Data from
           Electronic Health Records

    • Authors: Salvatore Calcagno, Andrea Calvagna, Emiliano Tramontana, Gabriella Verga
      First page: 62
      Abstract: The Electronic Health Record (EHR) is a system for collecting and storing patient medical records as data that can be mechanically accessed, hence facilitating and assisting the medical decision-making process. EHRs exist in several formats, and each format lists thousands of keywords to classify patients’ data. The keywords are specific medical jargon; hence, data classification is very accurate. However, as the keywords constituting the formats of medical records express concepts by means of specific jargon without definitions or references, their proper use is left to clinicians and could be affected by their background; hence, the interpretation of data could become slower or less accurate than desired. This article presents an approach that accurately relates data in EHRs to ontologies in the medical realm. Thanks to ontologies, clinicians can be assisted when writing or analysing health records, e.g., our solution promptly suggests rigorous definitions for scientific terms and automatically connects data spread over several parts of EHRs. The first step of our approach consists of converting selected data and keywords from several EHR formats into a format that is easier to parse; the second step merges the extracted data with specialised medical ontologies. Finally, enriched versions of the medical data are made available to professionals. The proposed approach was validated on real-world samples of medical records and ontologies. The results show versatility in handling data, precision of query results, and appropriate suggestions for relations among medical records.
      Citation: Future Internet
      PubDate: 2024-02-17
      DOI: 10.3390/fi16020062
      Issue No: Vol. 16, No. 2 (2024)
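The merging step the abstract describes, attaching rigorous definitions from an ontology to the jargon keywords extracted from an EHR, can be sketched as a simple lookup-and-annotate pass. The two-term mini-ontology and the record below are invented examples, not taken from the paper or from any real medical vocabulary.

```python
# Hedged sketch: enrich extracted EHR keywords with ontology definitions,
# so clinicians see a definition next to each recognised term.
ontology = {  # hypothetical mini-ontology
    "hypertension": "Persistently elevated arterial blood pressure.",
    "tachycardia": "A resting heart rate above the normal range.",
}

def enrich(record):
    """Attach an ontology definition to each keyword in an EHR record."""
    return {kw: {"value": v, "definition": ontology.get(kw, "unknown term")}
            for kw, v in record.items()}

record = {"hypertension": "stage 1", "tachycardia": "mild"}
print(enrich(record))
```

A real pipeline would of course normalize term variants and resolve ambiguous keywords before the lookup; this sketch only shows the shape of the enriched output.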
       
  • Future Internet, Vol. 16, Pages 63: Speech Inpainting Based on Multi-Layer
           Long Short-Term Memory Networks

    • Authors: Haohan Shi, Xiyu Shi, Safak Dogan
      First page: 63
      Abstract: Audio inpainting plays an important role in addressing incomplete, damaged, or missing audio signals, contributing to improved quality of service and overall user experience in multimedia communications over the Internet and mobile networks. This paper presents an innovative solution for speech inpainting using Long Short-Term Memory (LSTM) networks, i.e., a restoration task where the missing parts of speech signals are recovered from the previous information in the time domain. The lost or corrupted speech signals are also referred to as gaps. We regard the speech inpainting task as a time-series prediction problem in this research work. To address this problem, we designed multi-layer LSTM networks and trained them on different speech datasets. Our study aims to investigate the inpainting performance of the proposed models on different datasets and with varying LSTM layers and to explore the effect of multi-layer LSTM networks on the prediction of speech samples in terms of perceived audio quality. The inpainted speech quality is evaluated through the Mean Opinion Score (MOS) and a frequency analysis of the spectrogram. Our proposed multi-layer LSTM models are able to restore up to 1 s of gaps with high perceptual audio quality using the features captured from the time domain only. Specifically, for gap lengths under 500 ms, the MOS can reach up to 3~4, and for gap lengths ranging between 500 ms and 1 s, the MOS can reach up to 2~3. In the time domain, the proposed models can proficiently restore the envelope and trend of lost speech signals. In the frequency domain, the proposed models can restore spectrogram blocks with higher similarity to the original signals at frequencies below 2.0 kHz and comparatively lower similarity at frequencies in the range of 2.0 kHz~8.0 kHz.
      Citation: Future Internet
      PubDate: 2024-02-17
      DOI: 10.3390/fi16020063
      Issue No: Vol. 16, No. 2 (2024)
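The time-domain formulation above, predicting each missing sample from the samples that precede it, can be illustrated with a toy autoregressive gap filler. The order-2 linear predictor here is a stand-in chosen for brevity; the paper's models are multi-layer LSTMs, and the ramp signal is an invented example.

```python
# Toy gap filler: each lost sample is predicted from the two previous
# samples, mirroring the "predict the gap from prior context" formulation.
def fill_gap(signal, gap_start, gap_len):
    out = list(signal)
    for i in range(gap_start, gap_start + gap_len):
        # linear extrapolation from the two preceding (possibly predicted)
        # samples; an LSTM would condition on a much longer context
        out[i] = 2 * out[i - 1] - out[i - 2]
    return out

# a linear ramp with a 3-sample gap (None marks the lost samples)
sig = [0.0, 1.0, 2.0, None, None, None, 6.0]
print(fill_gap(sig, 3, 3))  # -> [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```

Note that predictions feed back into later predictions, which is why errors compound and why the paper observes quality dropping as gaps grow from 500 ms toward 1 s.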
       
  • Future Internet, Vol. 16, Pages 64: Online Optimization of Pickup and
           Delivery Problem Considering Feasibility

    • Authors: Ryo Matsuoka, Koichi Kobayashi, Yuh Yamashita
      First page: 64
      Abstract: A pickup and delivery problem by multiple agents has many applications, such as food delivery services and disaster rescue. In this problem, there are cases where fuel must be considered (e.g., when drones are used as agents). In addition, there are cases where demand forecasting should be considered (e.g., when a large number of orders are carried by a small number of agents). In this paper, we consider an online pickup and delivery problem that accounts for fuel and demand forecasting. First, the pickup and delivery problem with fuel constraints is formulated. The information on demand forecasting is included in the cost function. Based on the orders, the agents’ paths (e.g., the paths from stores to customers) are calculated. We suppose that the target area is given by an undirected graph. Using this graph, several constraints, such as the moves and fuel levels of the agents, are introduced. The problem is reduced to a mixed integer linear programming (MILP) problem. Next, in online optimization, the MILP problem is solved whenever orders are accepted. Owing to new orders, the calculated future paths may change. Finally, using a numerical example, we demonstrate the effectiveness of the proposed method.
      Citation: Future Internet
      PubDate: 2024-02-17
      DOI: 10.3390/fi16020064
      Issue No: Vol. 16, No. 2 (2024)
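The feasibility question at the core of the formulation, whether an agent can reach a customer over the undirected graph without exhausting its fuel, can be shown with a brute-force sketch. The paper solves the full problem as a MILP; the tiny graph, fuel budget, and edge costs below are invented, and only simple paths are enumerated.

```python
# Brute-force illustration of the fuel-feasibility constraint: find the
# cheapest simple path from a store to a customer within the fuel budget.
import itertools

def cheapest_feasible_path(graph, start, goal, fuel):
    # graph[u][v] = fuel cost of traversing undirected edge (u, v)
    nodes = [n for n in graph if n not in (start, goal)]
    best = None
    for r in range(len(nodes) + 1):
        for mid in itertools.permutations(nodes, r):
            path = (start,) + mid + (goal,)
            # keep the path only if every consecutive pair is an edge
            if all(b in graph[a] for a, b in zip(path, path[1:])):
                cost = sum(graph[a][b] for a, b in zip(path, path[1:]))
                if cost <= fuel and (best is None or cost < best[0]):
                    best = (cost, path)
    return best  # None means no path fits the fuel budget

g = {"store": {"a": 2, "b": 5}, "a": {"store": 2, "customer": 2},
     "b": {"store": 5, "customer": 1}, "customer": {"a": 2, "b": 1}}
print(cheapest_feasible_path(g, "store", "customer", fuel=5))
```

The MILP encodes the same edge and fuel constraints with binary decision variables, which is what keeps the online re-solving tractable as new orders arrive.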
       
  • Future Internet, Vol. 16, Pages 65: IoTwins: Implementing Distributed and
           Hybrid Digital Twins in Industrial Manufacturing and Facility Management
           Settings

    • Authors: Paolo Bellavista, Giuseppe Di Modica
      First page: 65
      Abstract: A Digital Twin (DT) refers to a virtual representation or digital replica of a physical object, system, process, or entity. This concept involves creating a detailed, real-time digital counterpart that mimics the behavior, characteristics, and attributes of its physical counterpart. DTs have the potential to improve efficiency, reduce costs, and enhance decision-making by providing a detailed, real-time understanding of the physical systems they represent. While this technology is finding application in numerous fields, such as energy, healthcare, and transportation, it is also a key component of the digital transformation of industries fostered by the fourth industrial revolution (Industry 4.0). In this paper, we present the research results achieved by IoTwins, a European research project aimed at investigating the opportunities and issues of adopting DTs in the fields of industrial manufacturing and facility management. In particular, we discuss a DT model and a reference architecture that the research community can use to implement a platform for the development and deployment of industrial DTs in the cloud continuum. Guided by the principles of the devised architecture, we implemented an open platform and a development methodology to help companies build DT-based industrial applications and deploy them in the so-called Edge/Cloud continuum. To demonstrate the research value and usability of the implemented platform, we discuss a simple yet practical development use case.
      Citation: Future Internet
      PubDate: 2024-02-17
      DOI: 10.3390/fi16020065
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 66: Energy-Efficient De-Duplication
           Mechanism for Healthcare Data Aggregation in IoT

    • Authors: Muhammad Nafees Ulfat Khan, Weiping Cao, Zhiling Tang, Ata Ullah, Wanghua Pan
      First page: 66
      Abstract: The rapid development of the Internet of Things (IoT) has opened the way for transformative advances in numerous fields, including healthcare. IoT-based healthcare systems provide unprecedented opportunities to gather patients’ real-time data and make appropriate decisions at the right time. Yet, the deployed sensors generate normal readings most of the time, which are transmitted to Cluster Heads (CHs). Handling these voluminous duplicated data is quite challenging. The existing techniques have high energy consumption, storage costs, and communication costs. To overcome these problems, this paper presents an innovative Energy-Efficient Fuzzy Data Aggregation System (EE-FDAS). At the first level, it checks whether sensors generate normal or critical readings. In the first case, readings are converted to the Boolean digit 0; reduced to a single digit, each reading takes considerably less space, which greatly lowers energy consumption. In the second case, sensors generating irregular readings transmit them in their original 16- or 32-bit form. The data are then aggregated and transmitted to the respective CHs. Afterwards, the data are forwarded to Fog servers, where doctors can access them. Lastly, for later usage, the data are stored in the cloud server. To evaluate the performance of the proposed EE-FDAS scheme, extensive simulations were performed using NS-2.35. The results show that EE-FDAS performs well in terms of aggregation factor, energy consumption, packet drop rate, communication cost, and storage cost.
      Citation: Future Internet
      PubDate: 2024-02-19
      DOI: 10.3390/fi16020066
      Issue No: Vol. 16, No. 2 (2024)
       
  • Future Internet, Vol. 16, Pages 19: Proximal Policy Optimization for
           Efficient D2D-Assisted Computation Offloading and Resource Allocation in
           Multi-Access Edge Computing

    • Authors: Chen Zhang, Celimuge Wu, Min Lin, Yangfei Lin, William Liu
      First page: 19
      Abstract: In the advanced 5G and beyond networks, multi-access edge computing (MEC) is increasingly recognized as a promising technology, offering the dual advantages of reducing energy utilization in cloud data centers while catering to the demands for reliability and real-time responsiveness in end devices. However, the inherent complexity and variability of MEC networks pose significant challenges in computational offloading decisions. To tackle this problem, we propose a proximal policy optimization (PPO)-based Device-to-Device (D2D)-assisted computation offloading and resource allocation scheme. We construct a realistic MEC network environment and develop a Markov decision process (MDP) model that minimizes time loss and energy consumption. The integration of a D2D communication-based offloading framework allows for collaborative task offloading between end devices and MEC servers, enhancing both resource utilization and computational efficiency. The MDP model is solved using the PPO algorithm in deep reinforcement learning to derive an optimal policy for offloading and resource allocation. Extensive comparative analysis with three benchmarked approaches has confirmed our scheme’s superior performance in latency, energy consumption, and algorithmic convergence, demonstrating its potential to improve MEC network operations in the context of emerging 5G and beyond technologies.
      Citation: Future Internet
      PubDate: 2024-01-02
      DOI: 10.3390/fi16010019
      Issue No: Vol. 16, No. 1 (2024)
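The objective the MDP above penalizes, a weighted combination of task completion time and device energy, can be sketched as a per-task cost compared between local execution and offloading. The weights, the CMOS energy coefficient, and the omission of MEC server compute time are all simplifying assumptions and not the paper's exact model.

```python
# Sketch of a latency/energy cost for one task: execute locally or offload.
# A PPO agent would learn which choice minimizes this cost per state.
def task_cost(cycles, data_bits, f_local, p_tx, rate, offload,
              w_t=0.5, w_e=0.5):
    if offload:
        t = data_bits / rate          # time to transmit the task input
        e = p_tx * t                  # radio energy spent transmitting
        # MEC server compute time is neglected in this sketch
    else:
        t = cycles / f_local          # local computation time
        e = 1e-27 * f_local ** 2 * cycles   # common CPU energy model
    return w_t * t + w_e * e

# 1 Gcycle task, 1 Mbit input, 1 GHz CPU, 0.5 W radio, 10 Mbit/s link
local = task_cost(1e9, 1e6, 1e9, 0.5, 1e7, offload=False)
remote = task_cost(1e9, 1e6, 1e9, 0.5, 1e7, offload=True)
print(local, remote)   # offloading is cheaper for this configuration
```

The interesting decisions arise when link rates, server load, and D2D neighbor availability vary over time; that is the state space the reinforcement learner explores.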
       
  • Future Internet, Vol. 16, Pages 20: Joint Beam-Forming Optimization for
           Active-RIS-Assisted Internet-of-Things Networks with SWIPT

    • Authors: Lidong Liu, Shidang Li, Mingsheng Wei, Jinsong Xu, Bencheng Yu
      First page: 20
      Abstract: Network energy resources are limited in communication systems, which may cause energy shortages in mobile devices at the user end. Active Reconfigurable Intelligent Surfaces (A-RIS) not only have phase modulation properties but also enhance signal strength; thus, they are expected to solve the energy shortage problem experienced at the user end in 6G communications. In this paper, a resource allocation algorithm for maximizing the sum of harvested energy is proposed for an active-RIS-assisted Simultaneous Wireless Information and Power Transfer (SWIPT) system to address the low energy-harvesting performance experienced by users due to multiplicative fading. First, in the active-RIS-assisted SWIPT system, which uses a power-splitting architecture to achieve simultaneous information and energy transmission, the joint resource allocation problem is constructed with the objective of maximizing the sum of the collected energy of all users, under constraints on the signal-to-noise ratio, the active RIS and base station transmit powers, and the power-splitting factors. Second, the considered non-convex problem is transformed into a standard convex problem by using alternating optimization, semi-definite relaxation, successive convex approximation, and penalty functions, and an alternating iterative algorithm for harvesting energy is then proposed. The proposed algorithm splits the problem into two sub-problems, optimizes each iteratively, and then alternates between them to obtain the optimal solution. Simulation results show that the proposed algorithm improves performance by 45.2% and 103.7% compared to the passive RIS algorithm and the traditional without-RIS algorithm, respectively, at the maximum permissible transmit power of 45 dBm at the base station.
      Citation: Future Internet
      PubDate: 2024-01-06
      DOI: 10.3390/fi16010020
      Issue No: Vol. 16, No. 1 (2024)
       
  • Future Internet, Vol. 16, Pages 21: A Comprehensive Study and Analysis of
           the Third Generation Partnership Project’s 5G New Radio for
           Vehicle-to-Everything Communication

    • Authors: G. G. Md. Nawaz Ali, Mohammad Nazmus Sadat, Md Suruz Miah, Sameer Ahmed Sharief, Yun Wang
      First page: 21
      Abstract: Recently, the Third Generation Partnership Project (3GPP) introduced new radio (NR) technology for vehicle-to-everything (V2X) communication to enable delay-sensitive and bandwidth-hungry applications in vehicular communication. The NR system is strategically crafted to complement the existing long-term evolution (LTE) cellular-vehicle-to-everything (C-V2X) infrastructure, particularly to support advanced services such as the operation of automated vehicles. It is widely anticipated that the fifth-generation (5G) NR system will surpass LTE C-V2X by achieving superior performance in scenarios characterized by high throughput, low latency, and enhanced reliability, especially in the context of congested traffic conditions and a diverse range of vehicular applications. This article provides a comprehensive literature review on vehicular communications, from dedicated short-range communication (DSRC) to NR V2X. Subsequently, it delves into a detailed examination of the challenges and opportunities inherent in NR V2X technology. Finally, we elucidate the process of creating and analyzing an open-source 5G NR V2X module in network simulator 3 (ns-3) and then demonstrate NR V2X performance in terms of different key performance indicators implemented through diverse operational scenarios.
      Citation: Future Internet
      PubDate: 2024-01-06
      DOI: 10.3390/fi16010021
      Issue No: Vol. 16, No. 1 (2024)
       
  • Future Internet, Vol. 16, Pages 22: A Novel Semantic IoT Middleware for
           Secure Data Management: Blockchain and AI-Driven Context Awareness

    • Authors: Mahmoud Elkhodr, Samiya Khan, Ergun Gide
      First page: 22
      Abstract: In the modern digital landscape of the Internet of Things (IoT), data interoperability and heterogeneity present critical challenges, particularly with the increasing complexity of IoT systems and networks. Addressing these challenges, while ensuring data security and user trust, is pivotal. This paper proposes a novel Semantic IoT Middleware (SIM) for healthcare. The architecture of this middleware comprises the following main processes: data generation, semantic annotation, security encryption, and semantic operations. The data generation module facilitates seamless data and event sourcing, while the Semantic Annotation Component assigns structured vocabulary for uniformity. SIM adopts blockchain technology to provide enhanced data security, and its layered approach ensures robust interoperability and intuitive user-centric operations for IoT systems. The security encryption module offers data protection, and the semantic operations module underpins data processing and integration. A distinctive feature of this middleware is its proficiency in service integration, leveraging semantic descriptions augmented by user feedback. Additionally, SIM integrates artificial intelligence (AI) feedback mechanisms to continuously refine and optimise the middleware’s operational efficiency.
      Citation: Future Internet
      PubDate: 2024-01-07
      DOI: 10.3390/fi16010022
      Issue No: Vol. 16, No. 1 (2024)
       
  • Future Internet, Vol. 16, Pages 23: Future Sustainable Internet
           Energy-Defined Networking

    • Authors: Alex Galis
      First page: 23
      Abstract: This paper presents a comprehensive set of design methods for making future Internet networking fully energy-aware and for sustainably minimizing and managing the energy footprint. It includes (a) 41 energy-aware design methods, grouped into Service Operations Support, Management Operations Support, Compute Operations Support, Connectivity/Forwarding Operations Support, Traffic Engineering Methods, Architectural Support for Energy Instrumentation, and Network Configuration; and (b) energy consumption models and energy metrics, which are identified and specified. It also specifies the requirements for energy-defined network compliance, which include energy-measurable network devices with support for several control messages: registration, discovery, provisioning, discharge, monitoring, synchronization, flooding, performance, and pushback.
      Citation: Future Internet
      PubDate: 2024-01-09
      DOI: 10.3390/fi16010023
      Issue No: Vol. 16, No. 1 (2024)
       
  • Future Internet, Vol. 16, Pages 24: Blockchain-Based Implementation of
           National Census as a Supplementary Instrument for Enhanced Transparency,
           Accountability, Privacy, and Security

    • Authors: Sana Rasheed, Soulla Louca
      First page: 24
      Abstract: A national population census is instrumental in offering a holistic view of a country’s progress, directly influencing policy formulation and strategic planning. Potential flaws in the census system can have detrimental impacts on national development. Our prior research has pinpointed various deficiencies in current census methodologies, including inadequate population coverage, racial and ethnic discrimination, and challenges related to data privacy, security, and distribution. This study aims to address the “missing persons” challenge in the national census population and housing system. The integration of blockchain technology emerges as a promising solution for addressing these identified issues, enhancing the integrity and efficacy of census processes. Building upon our earlier research which examined the national census system of Pakistan, we propose an architecture design incorporating Hyperledger Fabric, performing system sizing for the entire nation count. The Blockchain-Based Implementation of National Census as a Supplementary Instrument for Enhanced Transparency, Accountability, Privacy, and Security (BINC-TAPS) seeks to provide a robust, transparent, scalable, immutable, and tamper-proof solution for conducting national population and housing censuses, while also fostering socio-economic advancements. This paper presents a comprehensive overview of our research, with a primary focus on the implementation of the blockchain-based proposed solution, including prototype testing and the resulting outcomes.
      Citation: Future Internet
      PubDate: 2024-01-11
      DOI: 10.3390/fi16010024
      Issue No: Vol. 16, No. 1 (2024)
       
  • Future Internet, Vol. 16, Pages 25: Classification Tendency Difference
           Index Model for Feature Selection and Extraction in Wireless Intrusion
           Detection

    • Authors: Chinyang Henry Tseng, Woei-Jiunn Tsaur, Yueh-Mao Shen
      First page: 25
      Abstract: In detecting large-scale attacks, deep neural networks (DNNs) are an effective approach that relies on high-quality training data samples. Feature selection and feature extraction are the primary approaches for data quality enhancement for high-accuracy intrusion detection. However, their enhancement root causes usually bear weak relationships to the differences between normal and attack behaviors in the data samples. Thus, we propose a Classification Tendency Difference Index (CTDI) model for feature selection and extraction in intrusion detection. The CTDI model consists of three indexes: Classification Tendency Frequency Difference (CTFD), Classification Tendency Membership Difference (CTMD), and Classification Tendency Distance Difference (CTDD). In the dataset, each feature has many feature values (FVs). In each FV, the normal and attack samples indicate the FV classification tendency, and CTDI shows the classification tendency differences between the normal and attack samples. CTFD is the frequency difference between the normal and attack samples. By employing fuzzy C-means (FCM) to establish the normal and attack clusters, CTMD is the membership difference between the clusters, and CTDD is the distance difference between the cluster centers. CTDI calculates the index score in each FV and summarizes the scores of all FVs in the feature as the feature score for each of the three indexes. CTDI adopts an autoencoder for feature extraction to generate new features from the dataset and calculates the three index scores for the new features. CTDI sorts the original and new features for each of the three indexes to select the best features. The selected CTDI features indicate the best classification tendency differences between normal and attack samples. The experimental results demonstrate that the CTDI features, classified by a DNN, achieve better detection accuracy on the Aegean WiFi Intrusion Dataset than related works, and the detection enhancements stem from the improved classification tendency differences in the CTDI features.
      Citation: Future Internet
      PubDate: 2024-01-12
      DOI: 10.3390/fi16010025
      Issue No: Vol. 16, No. 1 (2024)
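The simplest of the three indexes, CTFD, scores a feature by how differently its values occur in normal versus attack samples. A minimal sketch follows; the absence of normalization and the toy values are assumptions made for brevity, not the paper's exact scoring.

```python
# Minimal CTFD-style sketch: per feature value (FV), take the absolute
# frequency difference between normal and attack samples, then sum the
# per-FV scores into one feature score.
from collections import Counter

def ctfd_score(values, labels):
    """values: one feature's values; labels: 'normal'/'attack' per sample."""
    normal = Counter(v for v, l in zip(values, labels) if l == "normal")
    attack = Counter(v for v, l in zip(values, labels) if l == "attack")
    fvs = set(normal) | set(attack)
    # Counter returns 0 for absent FVs, so one-sided FVs score highest
    return sum(abs(normal[v] - attack[v]) for v in fvs)

vals   = [1, 1, 2, 2, 2, 3, 3, 3]
labels = ["normal", "normal", "normal", "attack", "attack",
          "attack", "attack", "attack"]
print(ctfd_score(vals, labels))  # FV 3 appears only in attacks, FV 1 only
                                 # in normals, so the feature scores high
```

CTMD and CTDD follow the same per-FV-then-sum pattern but replace raw frequencies with fuzzy C-means memberships and cluster-center distances, respectively.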
       
  • Future Internet, Vol. 16, Pages 26: Digital Communication and Social
           Organizations: An Evaluation of the Communication Strategies of the
           Most-Valued NGOs Worldwide

    • Authors: Andrea Moreno-Cabanillas, Elizabet Castillero-Ostio, Antonio Castillo-Esparcia
      First page: 26
      Abstract: The communication of organizations with their audiences has undergone changes thanks to the Internet. Non-Governmental Organizations (NGOs), as influential groups, are no exception, as much of their activism takes place through grassroots digital lobbying. The consolidation of Web 2.0 has not only provided social organizations with a new and powerful tool for disseminating information but also brought about significant changes in the relationship between nonprofit organizations and their diverse audiences, facilitating and improving interaction between them. The purpose of this article is to analyze the level of interactivity implemented on the websites of leading NGOs worldwide and their presence on social networks, with the aim of assessing whether these influential groups are moving towards more dialogic systems in relation to their audiences. The results reveal that NGOs have a high degree of interactivity in the tools used to present and disseminate information on their websites. However, not all maintain the same level of interactivity in the resources available for interaction with Internet users, as very few have high interactivity regarding bidirectional resources. It was concluded that international non-governmental organizations still suffer from certain shortcomings in the strategic management of digital communication on their web platforms, while a strong presence can be noted on the most popular social networks.
      Citation: Future Internet
      PubDate: 2024-01-13
      DOI: 10.3390/fi16010026
      Issue No: Vol. 16, No. 1 (2024)
       
  • Future Internet, Vol. 16, Pages 27: Service Function Chain Deployment
           Algorithm Based on Deep Reinforcement Learning in
           Space–Air–Ground Integrated Network

    • Authors: Xu Feng, Mengyang He, Lei Zhuang, Yanrui Song, Rumeng Peng
      First page: 27
      Abstract: SAGIN is formed by the fusion of ground networks and aircraft networks. It overcomes the coverage limitations of terrestrial communication networks, bringing new opportunities for network communication in remote areas. However, the many heterogeneous devices in SAGIN pose significant challenges for end-to-end resource management, and the limited regional heterogeneous resources also threaten the QoS for users. In this regard, this paper proposes a hierarchical resource management structure for SAGIN, named SAGIN-MEC, based on SDN, NFV, and MEC, aiming to facilitate the systematic management of heterogeneous network resources. Furthermore, to minimize operator deployment costs while ensuring QoS, this paper formulates a resource scheduling optimization model tailored to SAGIN scenarios to minimize energy consumption. Additionally, we propose a deployment algorithm, named DRL-G, based on heuristics and DRL, to allocate heterogeneous network resources within SAGIN effectively. Experimental results showed that SAGIN-MEC can reduce the end-to-end delay by 6–15 ms compared to the terrestrial edge network and that, compared to other algorithms, the DRL-G algorithm can improve the service request reception rate by up to 20%. In terms of energy consumption, it reduces the average energy consumption by 4.4% compared to the PG algorithm.
      Citation: Future Internet
      PubDate: 2024-01-16
      DOI: 10.3390/fi16010027
      Issue No: Vol. 16, No. 1 (2024)
       
  • Future Internet, Vol. 16, Pages 28: Clustering on the Chicago Array of
           Things: Spotting Anomalies in the Internet of Things Records

    • Authors: Kyle DeMedeiros, Chan Young Koh, Abdeltawab Hendawi
      First page: 28
      Abstract: The Chicago Array of Things (AoT) is a robust dataset collected from over 100 nodes over four years. Each node contains over a dozen sensors. The array comprises a series of Internet of Things (IoT) devices with multiple heterogeneous sensors connected to a processing and storage backbone to collect data from across Chicago, IL, USA. The data collected include meteorological measurements such as temperature, humidity, and heat, as well as chemical measurements like CO2 concentration, PM2.5, and light intensity. The AoT sensor network is one of the largest open IoT systems available to researchers. Anomaly detection (AD) in IoT and sensor networks is an important tool to ensure that the ever-growing IoT ecosystem is protected from faulty data and sensors, as well as from attacking threats. Interestingly, an in-depth analysis of the Chicago AoT for anomaly detection is rare. Here, we study the viability of the Chicago AoT dataset for anomaly detection by utilizing clustering techniques. We utilized K-Means, DBSCAN, and Hierarchical DBSCAN (H-DBSCAN) to determine the viability of labeling an unlabeled dataset at the sensor level. The results show that the clustering algorithm best suited for this task varies based on the density of the anomalous readings and the variability of the data points being clustered; however, at the sensor level, the K-Means algorithm, though simple, is better suited for determining specific, at-a-glance anomalies than the more complex DBSCAN and H-DBSCAN algorithms, albeit with drawbacks.
      Citation: Future Internet
      PubDate: 2024-01-16
      DOI: 10.3390/fi16010028
      Issue No: Vol. 16, No. 1 (2024)
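As a minimal sketch of the sensor-level clustering idea (not the authors' pipeline), a one-dimensional k-means can separate injected spikes from normal readings; the temperature stream, seed, and threshold below are invented for illustration.

```python
import random

def kmeans_1d(values, iters=20):
    """Minimal 1-D k-means with k=2; centroids seeded at min/max so the
    run is deterministic."""
    centroids = [min(values), max(values)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: nearest centroid
        labels = [min(range(2), key=lambda j: abs(v - centroids[j]))
                  for v in values]
        # Update step: mean of each cluster
        for j in range(2):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, labels

random.seed(1)
# Simulated temperature stream with three injected spikes at the end
readings = [random.gauss(20.0, 1.0) for _ in range(200)] + [45.0, -10.0, 60.0]
centroids, labels = kmeans_1d(readings)
# A reading is anomalous if it lies far from its assigned centroid
flagged = [abs(v - centroids[lab]) > 6.0 for v, lab in zip(readings, labels)]
```

The distance-to-centroid rule is the simplest possible labeling criterion; density-based methods such as DBSCAN replace the fixed threshold with a neighborhood-density test.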
       
  • Future Internet, Vol. 16, Pages 29: Cross-Layer Methods for Ad Hoc
           Networks—Review and Classification

    • Authors: Valeriy Ivanov, Maxim Tereshonok
      First page: 29
      Abstract: The OSI model was the common network model for years. In ad hoc networks with dynamic topology and difficult radio communication conditions, there has been a gradual departure from the classical OSI network model, with its clear delineation of layers (physical, data link, network, transport, application), toward the cross-layer approach, because in ad hoc networks the layers of the network model strongly influence each other. The cross-layer approach can thus improve the performance of an ad hoc network by jointly developing protocols through interaction and collaborative optimization of multiple layers. The existing classification of cross-layer methods is overly complicated because it is based on the whole manifold of network model layer combinations, regardless of their importance. In this work, we review ad hoc network cross-layer methods, propose a new, more useful classification of cross-layer methods, and outline future research directions in the development of ad hoc network cross-layer methods. The proposed classification can help to simplify goal-oriented cross-layer protocol development.
      Citation: Future Internet
      PubDate: 2024-01-16
      DOI: 10.3390/fi16010029
      Issue No: Vol. 16, No. 1 (2024)
       
  • Future Internet, Vol. 16, Pages 30: An Imbalanced Sequence Feature
           Extraction Approach for the Detection of LTE-R Cells with Degraded
           Communication Performance

    • Authors: Jiantao Qu, Chunyu Qi, He Meng
      First page: 30
      Abstract: Within the Shuo Huang Railway Company (Suning, China), the long-term evolution for railways (LTE-R) network carries core wireless communication services for trains, and the communication performance of LTE-R cells directly affects the operational safety of the trains. This paper therefore proposes a novel detection method for LTE-R cells with degraded communication performance. Considering that the numbers of LTE-R cells with degraded communication performance and of normal cells are extremely imbalanced, and that the communication performance indicator data for each cell are sequence data, we propose a feature extraction neural network structure for imbalanced sequences based on shapelet transformation and a convolutional neural network (CNN). To train the network, we set the optimization objective based on the Fisher criterion. Finally, using a two-stage training method, we obtain a neural network model that can distinguish LTE-R cells with degraded communication performance from normal cells at the feature level. Experiments on a real-world dataset show that the proposed method can accurately detect LTE-R cells with degraded communication performance and has high practical application value.
      Citation: Future Internet
      PubDate: 2024-01-16
      DOI: 10.3390/fi16010030
      Issue No: Vol. 16, No. 1 (2024)
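The shapelet transform at the heart of such a feature extractor maps a sequence to its minimum distance from a candidate subsequence. A minimal sketch of that core operation, with invented toy sequences rather than LTE-R indicator data:

```python
def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between `shapelet` and any equal-length
    subsequence of `series` -- the core operation of a shapelet transform."""
    m = len(shapelet)
    return min(
        sum((series[i + j] - shapelet[j]) ** 2 for j in range(m)) ** 0.5
        for i in range(len(series) - m + 1)
    )

# A sequence containing the shapelet exactly has distance 0; a degraded
# cell's indicator sequence would sit far from the "normal" shapelets.
normal_seq = [0, 0, 5, 5, 0, 0]
d_match = shapelet_distance(normal_seq, [5, 5])      # exact match
d_far = shapelet_distance(normal_seq, [9, 9])        # nearest window is (5, 5)
```

These per-shapelet distances then form the feature vector fed to the downstream classifier.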
       
  • Future Internet, Vol. 16, Pages 31: Investigation of Phishing
           Susceptibility with Explainable Artificial Intelligence

    • Authors: Zhengyang Fan, Wanru Li, Kathryn Blackmond Laskey, Kuo-Chu Chang
      First page: 31
      Abstract: Phishing attacks represent a significant and growing threat in the digital world, affecting individuals and organizations globally. Understanding the various factors that influence susceptibility to phishing is essential for developing more effective strategies to combat this pervasive cybersecurity challenge. Machine learning has become a prevalent method in the study of phishing susceptibility. Most studies in this area have taken one of two approaches: either they explore statistical associations between various factors and susceptibility, or they use complex models such as deep neural networks to predict phishing behavior. However, these approaches have limitations in terms of providing practical insights for individuals to avoid future phishing attacks and delivering personalized explanations regarding their susceptibility to phishing. In this paper, we propose a machine-learning approach that leverages explainable artificial intelligence techniques to examine the influence of human and demographic factors on susceptibility to phishing attacks. The machine learning model yielded an accuracy of 78%, with a recall of 71%, and a precision of 57%. Our analysis reveals that psychological factors such as impulsivity and conscientiousness, as well as appropriate online security habits, significantly affect an individual’s susceptibility to phishing attacks. Furthermore, our individualized case-by-case approach offers personalized recommendations on mitigating the risk of falling prey to phishing exploits, considering the specific circumstances of each individual.
      Citation: Future Internet
      PubDate: 2024-01-17
      DOI: 10.3390/fi16010031
      Issue No: Vol. 16, No. 1 (2024)
       
  • Future Internet, Vol. 16, Pages 32: A Holistic Review of Machine Learning
           Adversarial Attacks in IoT Networks

    • Authors: Hassan Khazane, Mohammed Ridouani, Fatima Salahdine, Naima Kaabouch
      First page: 32
      Abstract: With the rapid advancements and notable achievements across various application domains, Machine Learning (ML) has become a vital element within the Internet of Things (IoT) ecosystem. Among these use cases is IoT security, where numerous systems are deployed to identify or thwart attacks, including intrusion detection systems (IDSs), malware detection systems (MDSs), and device identification systems (DISs). Machine Learning-based (ML-based) IoT security systems can fulfill several security objectives, including detecting attacks, authenticating users before they gain access to the system, and categorizing suspicious activities. Nevertheless, ML faces numerous challenges, such as those resulting from the emergence of adversarial attacks crafted to mislead classifiers. This paper provides a comprehensive review of the body of knowledge about adversarial attacks and defense mechanisms, with a particular focus on three prominent IoT security systems: IDSs, MDSs, and DISs. The paper starts by establishing a taxonomy of adversarial attacks within the context of IoT. Then, various methodologies employed in the generation of adversarial attacks are described and classified within a two-dimensional framework. Additionally, we describe existing countermeasures for enhancing IoT security against adversarial attacks. Finally, we explore the most recent literature on the vulnerability of three ML-based IoT security systems to adversarial attacks.
      Citation: Future Internet
      PubDate: 2024-01-19
      DOI: 10.3390/fi16010032
      Issue No: Vol. 16, No. 1 (2024)
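One of the best-known generation methods covered by such taxonomies, the Fast Gradient Sign Method (FGSM), fits in a few lines. The feature vector and gradient below are invented for illustration; a real attack would take the gradient from the target classifier.

```python
def fgsm(x, grad, eps):
    """Fast Gradient Sign Method: x_adv = x + eps * sign(grad of loss wrt x).
    Each feature moves by eps in the direction that increases the loss."""
    sign = [1 if g > 0 else (-1 if g < 0 else 0) for g in grad]
    return [xi + eps * s for xi, s in zip(x, sign)]

# Crafted perturbation of a hypothetical traffic-feature vector
x_adv = fgsm([0.5, 0.2, 0.9], [0.3, -0.1, 0.0], eps=0.1)
```

The epsilon bound is what makes the perturbation hard to notice: every feature changes by at most eps.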
       
  • Future Internet, Vol. 16, Pages 33: Overview of Protocols and Standards
           for Wireless Sensor Networks in Critical Infrastructures

    • Authors: Spyridon Daousis, Nikolaos Peladarinos, Vasileios Cheimaras, Panagiotis Papageorgas, Dimitrios D. Piromalis, Radu Adrian Munteanu
      First page: 33
      Abstract: This paper highlights the crucial role of wireless sensor networks (WSNs) in the surveillance and administration of critical infrastructures (CIs), contributing to their reliability, security, and operational efficiency. It starts by detailing the international significance and structural aspects of these infrastructures, notes the market pressure in recent years toward the gradual adoption of wireless networks for industrial applications, and proceeds to categorize WSNs and examine the protocols and standards of WSNs in demanding environments like critical infrastructures, drawing on the recent literature. The review concentrates on the protocols and standards utilized in WSNs for critical infrastructures and concludes by identifying a notable gap in the literature concerning quality standards for the equipment used in such infrastructures.
      Citation: Future Internet
      PubDate: 2024-01-21
      DOI: 10.3390/fi16010033
      Issue No: Vol. 16, No. 1 (2024)
       
  • Future Internet, Vol. 16, Pages 34: An Innovative Information Hiding
           Scheme Based on Block-Wise Pixel Reordering

    • Authors: Jui-Chuan Liu, Heng-Xiao Chi, Ching-Chun Chang, Chin-Chen Chang
      First page: 34
      Abstract: Information has been uploaded and downloaded over the Internet day in and day out ever since we immersed ourselves in it. Data security has therefore become an area demanding high attention, and one of the most effective techniques for protecting data is data hiding. Recent studies have shown that the indices of a codebook can be reordered to hide secret bits. The hiding capacity of the codeword index reordering scheme increases with the size of the codebook, and since the codewords in the codebook are not modified, the visual quality of the compressed images is retained. We propose a novel scheme that applies the fundamental principle of codeword index reordering to hide secret data in encrypted images. Experimental results show that the achieved embedding capacity of 197,888 bits is larger than that of other state-of-the-art schemes. Secret data can be extracted by a receiver who owns the data hiding key, and the image can be recovered by a receiver who owns the encryption key.
      Citation: Future Internet
      PubDate: 2024-01-22
      DOI: 10.3390/fi16010034
      Issue No: Vol. 16, No. 1 (2024)
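The principle of hiding bits by reordering indices rather than modifying codewords can be sketched with a toy scheme that encodes one bit in the order of each adjacent index pair. This illustrates the general idea only, not the paper's scheme, and it assumes the two indices in a pair differ.

```python
def hide_bits(indices, bits):
    """Encode one secret bit per adjacent index pair by reordering it:
    ascending order encodes 0, descending encodes 1. The multiset of
    indices (hence the decoded codewords) is unchanged."""
    out = list(indices)
    for i, b in enumerate(bits):
        a, c = out[2 * i], out[2 * i + 1]
        lo, hi = min(a, c), max(a, c)
        out[2 * i], out[2 * i + 1] = (lo, hi) if b == 0 else (hi, lo)
    return out

def extract_bits(indices, n_bits):
    """Recover the hidden bits from the pair ordering."""
    return [0 if indices[2 * i] <= indices[2 * i + 1] else 1
            for i in range(n_bits)]

stego = hide_bits([5, 2, 7, 9, 1, 4, 8, 3], [1, 0, 1, 0])
```

Because only the order changes, decompression yields exactly the same codewords, which is why such schemes preserve visual quality.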
       
  • Future Internet, Vol. 16, Pages 35: A Bee Colony-Based Optimized Searching
           Mechanism in the Internet of Things

    • Authors: Muhammad Sher Ramzan, Anees Asghar, Ata Ullah, Fawaz Alsolami, Iftikhar Ahmad
      First page: 35
      Abstract: The Internet of Things (IoT) consists of complex and dynamically aggregated elements, or smart entities, that need decentralized supervision for exchanging data across different networks. The artificial bee colony (ABC) algorithm is utilized in optimization problems over the big data in IoT, cloud, and central repositories. Its main limitation during the searching mechanism is that every single food site is compared with every other food site to find the best solution in the neighboring regions; an extensive number of redundant comparisons are therefore required, resulting in a slower convergence rate, greater time consumption, and increased delays. This paper presents a solution that optimizes search operations with an enhanced ABC (E-ABC) approach. The proposed algorithm compares the best food sites with neighboring sites to exclude poor sources, achieving an efficient mechanism in which the number of redundant comparisons is decreased during the searching mechanism of the employed-bee and onlooker-bee phases. The proposed algorithm is implemented in a replication scenario to validate its performance in terms of the mean objective function values for different functions, as well as the probability of availability and the response time. The results prove the superiority of the E-ABC over its counterparts.
      Citation: Future Internet
      PubDate: 2024-01-22
      DOI: 10.3390/fi16010035
      Issue No: Vol. 16, No. 1 (2024)
       
  • Future Internet, Vol. 16, Pages 1: A Vulnerability Assessment of
           Open-Source Implementations of Fifth-Generation Core Network Functions

    • Authors: Filippo Dolente, Rosario Giuseppe Garroppo, Michele Pagano
      First page: 1
      Abstract: The paper presents an experimental security assessment of two widely used open-source 5G projects, namely Open5GS and OAI (OpenAirInterface). The examination concentrates on two network functions (NFs) that are externally exposed within the core network architecture, i.e., the Access and Mobility Management Function (AMF) and the Network Repository Function/Network Exposure Function (NRF/NEF) of the Service-Based Architecture (SBA). Focusing on the Service-Based Interface (SBI) of these exposed NFs, the analysis not only identifies potential security gaps but also underscores the crucial role of Mobile Network Operators (MNOs) in implementing robust security measures. Furthermore, given the shift towards Network Function Virtualization (NFV), the paper emphasizes the importance of secure development practices for enhancing the integrity of 5G network functions. In essence, the paper underscores the significance of scrutinizing security vulnerabilities in open-source 5G projects, particularly within the core network’s SBI and externally exposed NFs. The research outcomes provide valuable insights for MNOs, enabling them to establish effective security measures and promote secure development practices to safeguard the integrity of 5G network functions. Additionally, the empirical investigation aids in identifying potential vulnerabilities in open-source 5G projects, paving the way for future enhancements and standard releases.
      Citation: Future Internet
      PubDate: 2023-12-19
      DOI: 10.3390/fi16010001
      Issue No: Vol. 16, No. 1 (2023)
       
  • Future Internet, Vol. 16, Pages 2: Integrating Knowledge Graphs into
           Distribution Grid Decision Support Systems

    • Authors: Yashar Kor, Liang Tan, Petr Musilek, Marek Z. Reformat
      First page: 2
      Abstract: Distribution grids are complex networks containing multiple pieces of interconnected equipment, each described by various attributes. A knowledge graph is a data format that represents pieces of information as nodes and the relations between them as edges. In this paper, we describe the proposed vocabulary used to build a distribution system knowledge graph. We identify the concepts used in such graphs and a set of relations to represent links between concepts; together, they provide a semantically rich representation of a system. Additionally, we offer a few illustrative examples of how a distribution system knowledge graph can be utilized to gain more insight into the operations of the grid. We show a simplified analysis of how outages can influence customers based on their locations and how adding distributed energy resources (DERs) can change this. These demonstrative use cases show that the graph-based representation of a distribution grid allows information of different types to be integrated and that such a repository can be efficiently utilized. Based on the experiments with distribution system knowledge graphs presented in this article, we postulate that graph-based representation enables a novel way of storing information about power grids and facilitates interactive methods for their visualization and analysis.
      Citation: Future Internet
      PubDate: 2023-12-20
      DOI: 10.3390/fi16010002
      Issue No: Vol. 16, No. 1 (2023)
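The outage use case described in the abstract amounts to a reachability query over "supplies" edges. A minimal triple-store sketch, with grid elements invented for illustration (the paper's actual vocabulary is richer):

```python
# Toy distribution-grid knowledge graph as (subject, relation, object) triples
triples = [
    ("Feeder1", "supplies", "TransformerA"),
    ("TransformerA", "supplies", "CustomerX"),
    ("TransformerA", "supplies", "CustomerY"),
    ("Feeder1", "supplies", "TransformerB"),
    ("TransformerB", "supplies", "CustomerZ"),
]

def downstream(node, triples):
    """Everything reachable from `node` via 'supplies' edges, i.e. the
    equipment and customers affected if `node` suffers an outage."""
    affected, stack = set(), [node]
    while stack:
        current = stack.pop()
        for subj, rel, obj in triples:
            if subj == current and rel == "supplies" and obj not in affected:
                affected.add(obj)
                stack.append(obj)
    return affected
```

Adding a DER is then just adding triples, and the same query immediately reflects the new topology.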
       
  • Future Internet, Vol. 16, Pages 3: Investigating the Key Aspects of a
           Smart City through Topic Modeling and Thematic Analysis

    • Authors: Anestis Kousis, Christos Tjortjis
      First page: 3
      Abstract: In recent years, the emergence of the smart city concept has garnered attention as a promising innovation aimed at addressing the multifactorial challenges arising from the concurrent trends of urban population growth and the climate crisis. In this study, we delve into the multifaceted dimensions of the smart city paradigm to unveil its underlying structure, employing a combination of quantitative and qualitative techniques. To achieve this, we collected textual data from three sources: scientific publication abstracts, news blog posts, and social media entries. For the analysis of this textual data, we introduce an innovative semi-automated methodology that integrates topic modeling and thematic analysis. Our findings highlight the intricate nature of the smart city domain, which necessitates examination from three perspectives: applications, technology, and socio-economics. Through our analysis, we identified ten distinct aspects of the smart city paradigm, encompassing mobility, energy, infrastructure, environment, IoT, data, business, planning and administration, security, and people. When comparing the outcomes across the three diverse datasets, we noted a relative lack of attention within the scientific community towards certain aspects, notably the realm of business, as well as themes relevant to citizens’ everyday lives, such as food, shopping, and green spaces. This work reveals the underlying thematic structure of the smart city concept to help researchers, practitioners, and public administrators participate effectively in smart city transformation initiatives. Furthermore, it introduces a novel data-driven method for conducting thematic analysis on large text datasets.
      Citation: Future Internet
      PubDate: 2023-12-22
      DOI: 10.3390/fi16010003
      Issue No: Vol. 16, No. 1 (2023)
       
  • Future Internet, Vol. 16, Pages 4: Development of An Assessment Scale for
           Measurement of Usability and User Experience Characteristics of Bing Chat
           Conversational AI

    • Authors: Goran Bubaš, Antonela Čižmešija, Andreja Kovačić
      First page: 4
      Abstract: After the introduction of the ChatGPT conversational artificial intelligence (CAI) tool in November 2022, there has been rapidly growing interest in the use of such tools in higher education. While the educational uses of some other information technology (IT) tools (including collaboration and communication tools, learning management systems, chatbots, and videoconferencing tools) have frequently been evaluated for technology acceptance and usability, similar evaluations of CAI tools and services like ChatGPT, Bing Chat, and Bard have only recently started to appear in the scholarly literature. In our study, we present a newly developed set of assessment scales related to the usability and user experience of CAI tools when used by university students, together with the results of an evaluation of these scales specifically for the CAI Bing Chat tool (i.e., Microsoft Copilot). The following scales were developed and evaluated using a convenience sample (N = 126) of higher education students: Perceived Usefulness, General Usability, Learnability, System Reliability, Visual Design and Navigation, Information Quality, Information Display, Cognitive Involvement, Design Appeal, Trust, Personification, Risk Perception, and Intention to Use. For most of these scales, internal consistency (Cronbach alpha) ranged from satisfactory to good, implying their potential usefulness for further studies of related attributes of CAI tools. A stepwise linear regression revealed that the most influential predictors of Intention to Use Bing Chat (or ChatGPT) in the future were the usability variable Perceived Usefulness and two user experience variables, Trust and Design Appeal. Our study also revealed that students’ perceptions of various specific usability and user experience characteristics of Bing Chat were predominantly positive. The evaluated assessment scales could be beneficial in further research involving other CAI tools like ChatGPT/GPT-4 and Bard.
      Citation: Future Internet
      PubDate: 2023-12-23
      DOI: 10.3390/fi16010004
      Issue No: Vol. 16, No. 1 (2023)
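The internal-consistency statistic used to evaluate such scales, Cronbach's alpha, is straightforward to compute from item-score columns; the respondent scores below are invented for illustration.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for item-score columns (one list per scale item):
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k, n = len(items), len(items[0])

    def var(xs):
        # Sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[r] for col in items) for r in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Two perfectly correlated items answered by four hypothetical respondents
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 4, 6, 8]])
```

Values around 0.7 are conventionally read as satisfactory and around 0.8 or higher as good, matching the range reported in the abstract.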
       
  • Future Internet, Vol. 16, Pages 5: The Future of Flying Base Stations:
           Empirical and Numerical Investigations of mmWave-Enabled UAVs

    • Authors: Ryunosuke Masaoka, Gia Khanh Tran, Jin Nakazato, Kei Sakaguchi
      First page: 5
      Abstract: Nowadays, wireless communications are ubiquitously available. However, as pervasive as this technology is, there are distinct situations, such as substantial public events, catastrophic disasters, or unexpected malfunctions of base stations (BSs), where the reliability of these communications might be jeopardized. Such scenarios highlight the vulnerabilities inherent in our current infrastructure. As a result, there is growing interest in establishing temporary networks that offer high-capacity communications and can adaptively shift service locations. To address this gap, this paper investigates the promising avenue of merging two powerful technologies: Unmanned Aerial Vehicles (UAVs) and millimeter-wave (mmWave) transmissions. UAVs, which can be operated remotely and take flight without terrestrial constraints, present a compelling case for being the cellular BSs of the future, and integrating them with the high-speed data transfer capabilities of mmWave technology holds great potential. To provide a tangible foundation for this hypothesis, we carry out comprehensive experiments using an actual UAV equipped with an mmWave device, with the main objective of studying its radio wave propagation attributes while the UAV is in flight. We then contrast our experimental findings with a rigorous numerical analysis to refine our understanding. This comparative study aims to shed light on the intricacies of wave propagation behaviors within the atmosphere.
      Citation: Future Internet
      PubDate: 2023-12-25
      DOI: 10.3390/fi16010005
      Issue No: Vol. 16, No. 1 (2023)
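A first-order sense of why mmWave links from a UAV are challenging comes from the Friis free-space path loss, which grows with frequency. A small sketch of that formula only, ignoring the atmospheric and airframe effects the paper actually measures:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: FSPL = 20 * log10(4 * pi * d * f / c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# At 100 m, a 60 GHz mmWave link loses far more than a 2.4 GHz link
loss_mmwave = fspl_db(100, 60e9)
loss_wifi = fspl_db(100, 2.4e9)
```

The frequency term means the 60 GHz link pays a fixed 20*log10(60/2.4), roughly 28 dB, extra at any distance, which is why mmWave BSs rely on beamforming gain.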
       
  • Future Internet, Vol. 16, Pages 6: TDLearning: Trusted Distributed
           Collaborative Learning Based on Blockchain Smart Contracts

    • Authors: Jing Liu, Xuesong Hai, Keqin Li
      First page: 6
      Abstract: Massive amounts of data drive the performance of deep learning models, but in practice data resources are often highly dispersed and bound by data privacy and security concerns, making it difficult for multiple data sources to share their local data directly. Because data resources are difficult to aggregate effectively, model training lacks adequate support, and how data sources can collaborate to aggregate the value of their data is therefore an important research question. However, existing distributed-collaborative-learning architectures still face serious challenges in collaboration between nodes that lack mutual trust, with security and trust issues seriously affecting the confidence and willingness of data sources to participate. Blockchain technology provides trusted distributed storage and computing, and combining it with collaboration between data sources to build trusted distributed-collaborative-learning architectures is a highly valuable research direction. We propose a trusted distributed-collaborative-learning mechanism based on blockchain smart contracts. Firstly, the mechanism uses blockchain smart contracts to define and encapsulate the collaborative behaviours, relationships, and norms between distributed collaborative nodes. Secondly, we propose a model-fusion method based on feature fusion, which replaces the direct sharing of local data resources with distributed-model collaborative training and organises distributed data resources for distributed collaboration to improve model performance. Finally, to verify the trustworthiness and usability of the proposed mechanism, we formally model and verify the smart contract using Coloured Petri Nets and prove that the mechanism satisfies the expected trustworthiness properties. We also evaluate the feature-fusion-based model-fusion method on different datasets and collaboration scenarios and implement a typical collaborative-learning case for a comprehensive analysis and validation of the mechanism. The experimental results show that the proposed mechanism can provide a trusted and fair collaboration infrastructure for distributed-collaboration nodes that lack mutual trust and can organise decentralised data resources for collaborative model training to develop effective global models.
      Citation: Future Internet
      PubDate: 2023-12-25
      DOI: 10.3390/fi16010006
      Issue No: Vol. 16, No. 1 (2023)
       
  • Future Internet, Vol. 16, Pages 7: Design and Implementation of a Digital
           Twin System for Log Rotary Cutting Optimization

    • Authors: Yadi Zhao, Lei Yan, Jian Wu, Ximing Song
      First page: 7
      Abstract: To address the low level of intelligence and low log utilization of current rotary cutting equipment, this paper proposes a digital twin-based system for optimizing the rotary cutting of logs, built on the five-dimensional digital twin model. The system features a log perception platform that captures three-dimensional point cloud data outlining the logs’ contours. Using the Delaunay3D algorithm, the system performs a three-dimensional reconstruction of the log point cloud to construct a precise digital twin, and feature information is extracted from the point cloud with the least squares method. Processing parameters, determined through the kinematic model, are verified in rotary cutting simulations via Boolean operations. The system’s efficacy has been substantiated through experimental validation, demonstrating its capability to output specific processing schemes for irregular logs and to verify them through simulation. This approach notably improves log recovery rates, decreasing volume error from 12.8% to 2.7% and recovery rate error from 23.5% to 5.7%. The results validate the efficacy of the proposed digital twin system in optimizing the rotary cutting process, demonstrating its capability not only to enhance the utilization rate of log resources but also to improve the economic efficiency of the factory, thereby facilitating industrial development.
      Citation: Future Internet
      PubDate: 2023-12-25
      DOI: 10.3390/fi16010007
      Issue No: Vol. 16, No. 1 (2023)
       
  • Future Internet, Vol. 16, Pages 8: Latent Autoregressive Student-t Prior
           Process Models to Assess Impact of Interventions in Time Series

    • Authors: Patrick Toman, Nalini Ravishanker, Nathan Lally, Sanguthevar Rajasekaran
      First page: 8
      Abstract: With the advent of the “Internet of Things” (IoT), insurers are increasingly leveraging remote sensor technology in the development of novel insurance products and risk management programs. For example, Hartford Steam Boiler’s (HSB) IoT freeze loss program uses IoT temperature sensors to monitor indoor temperatures in locations at high risk of water-pipe burst (freeze loss), with the goal of reducing insurance losses via real-time monitoring of the temperature data streams. In the event these monitoring systems detect a potentially risky temperature environment, an alert is sent to the end-insured (business manager, tenant, maintenance staff, etc.), prompting them to take remedial action by raising temperatures. In the event that an alert is sent and freeze loss occurs, the firm is not liable for any damages incurred by the event. For the program to be effective, there must be a reliable method of verifying whether customers took appropriate corrective action after receiving an alert. Due to the program’s scale, direct follow-up via text or phone calls is not possible for every alert event, and direct feedback from customers is not necessarily reliable. In this paper, we propose the use of a non-linear, auto-regressive time series model, coupled with the time series intervention analysis method known as causal impact, to evaluate directly from the IoT temperature streams whether or not a customer took action. Our method offers several distinct advantages over other methods as it is (a) readily scalable with continued program growth, (b) entirely automated, and (c) inherently less biased than human labelers or direct customer response. We demonstrate the efficacy of our method using a sample of actual freeze alert events from the freeze loss program.
      Citation: Future Internet
      PubDate: 2023-12-28
      DOI: 10.3390/fi16010008
      Issue No: Vol. 16, No. 1 (2023)
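The core of the proposed check, forecasting a counterfactual from the pre-alert series and comparing it with the post-alert observations, can be sketched with a plain AR(1) model. The paper's model is non-linear with Student-t priors; everything here, including the temperature values, is a simplified illustration.

```python
def fit_ar1(series):
    """Least-squares AR(1) fit: x[t] = a + b * x[t-1]; returns (a, b)."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def forecast(a, b, x0, steps):
    """Iterate the fitted AR(1) to get the no-intervention counterfactual."""
    out, x = [], x0
    for _ in range(steps):
        x = a + b * x
        out.append(x)
    return out

# Pre-alert: indoor temperature decaying toward a risky 30 degrees F
pre = [50.0]
for _ in range(30):
    pre.append(0.9 * pre[-1] + 3.0)   # exact AR(1) with a = 3, b = 0.9
a, b = fit_ar1(pre)
counterfactual = forecast(a, b, pre[-1], 10)
# Post-alert: the customer raised the thermostat, so temperatures climb
observed = [pre[-1] + 2.0 * (i + 1) for i in range(10)]
causal_effect = sum(o - c for o, c in zip(observed, counterfactual)) / 10
```

A large positive gap between observed and counterfactual temperatures is the automated signal that the customer took corrective action.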
       
  • Future Internet, Vol. 16, Pages 9: Smart Grid Security: A PUF-Based
           Authentication and Key Agreement Protocol

    • Authors: Nasour Bagheri, Ygal Bendavid, Masoumeh Safkhani, Samad Rostampour
      First page: 9
      Abstract: A smart grid is an electricity network that uses advanced technologies to facilitate the exchange of information and electricity between utility companies and customers. Although most of the technologies involved in such grids have reached maturity, smart meters, as connected devices, introduce new security challenges. Safeguarding privacy has therefore emerged as a paramount concern in overcoming this significant obstacle to grid modernization. In this paper, we begin by evaluating the security levels of recently proposed authentication methods for smart meters. Subsequently, we introduce an enhanced protocol named PPSG, designed for smart grids, which incorporates physical unclonable functions (PUFs) and an elliptic curve cryptography (ECC) module to address the vulnerabilities identified in previous approaches. Our security analysis, utilizing a real-or-random (RoR) model, demonstrates that PPSG effectively mitigates the weaknesses found in prior methods. To assess the practicality of PPSG, we conduct simulations using an Arduino UNO board, measuring computation, communication, and energy costs. Our results, including a processing time of 153 ms, a communication cost of 1376 bits, and an energy consumption of 13.468 mJ, align with the requirements of resource-constrained devices within smart grids.
      Citation: Future Internet
      PubDate: 2023-12-28
      DOI: 10.3390/fi16010009
      Issue No: Vol. 16, No. 1 (2023)
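The PUF-based challenge-response idea can be simulated in software, with a per-device random secret standing in for the silicon variation of a real PUF. This is a conceptual sketch only, not the PPSG protocol, which additionally uses ECC and a formal RoR analysis.

```python
import hashlib
import hmac
import os

class SimulatedPUF:
    """Software stand-in for a PUF: a unique per-device secret plays the
    role of uncontrollable manufacturing variation, so each instance maps
    challenges to responses differently."""
    def __init__(self):
        self._secret = os.urandom(16)

    def response(self, challenge):
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

# Enrolment: the utility stores a challenge/response pair for this meter
meter = SimulatedPUF()
challenge = os.urandom(16)
stored = meter.response(challenge)

# Authentication: only the genuine meter reproduces the stored response;
# a cloned device with a different "PUF" fails
genuine_ok = hmac.compare_digest(meter.response(challenge), stored)
clone_ok = hmac.compare_digest(SimulatedPUF().response(challenge), stored)
```

Because the secret never leaves the device and each challenge is used once, replaying old responses does not help an attacker.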
       
  • Future Internet, Vol. 16, Pages 10: Automotive Cybersecurity Application
           Based on CARDIAN

    • Authors: Emanuele Santonicola, Ennio Andrea Adinolfi, Simone Coppola, Francesco Pascale
      First page: 10
      Abstract: Nowadays, a vehicle can contain from 20 to 100 electronic control units (ECUs), which are responsible for ordering, controlling, and monitoring all the components of the vehicle itself. Each of these units can also send and receive information from other units on the network or externally. For most vehicles, the controller area network (CAN) is the main communication protocol and system used to build their internal network. Technological development, the growing integration of devices, and numerous advances in the field of connectivity have allowed the vehicle to become connected, and the flow of information exchanged between the various ECUs becomes increasingly important and varied. Furthermore, the vehicle itself is capable of exchanging information with other vehicles, with the surrounding environment, and with the Internet. As shown by the CARDIAN project, this type of innovation allows the user an increasingly safe and varied driving experience, but at the same time it introduces a series of vulnerabilities and dangers due to the connection itself. Securing the vehicle therefore becomes critical. In recent years, it has been demonstrated in multiple ways how easy it is to compromise the safety of a vehicle and its passengers by injecting malicious messages into the CAN network inside the vehicle. The purpose of this article is the construction of a system that, integrated within the vehicle network, is able to effectively recognize any type of intrusion and tampering.
      Citation: Future Internet
      PubDate: 2023-12-28
      DOI: 10.3390/fi16010010
      Issue No: Vol. 16, No. 1 (2023)
       
  • Future Internet, Vol. 16, Pages 11: Utilizing User Bandwidth Resources in
           Information-Centric Networking through Blockchain-Based Incentive
           Mechanism

    • Authors: Qiang Liu, Rui Han, Yang Li
      First page: 11
      Abstract: Idle bandwidth resources are inefficiently distributed among different users. Currently, the utilization of user bandwidth resources mostly relies on traditional IP networks, implementing relevant techniques at the application layer, which creates scalability issues and brings additional system overheads. Information-Centric Networking (ICN), based on the idea of separating identifiers and locators, offers the potential to aggregate idle bandwidth resources from a network layer perspective. This paper proposes a method for utilizing user bandwidth resources in ICN; specifically, we treat the use of user bandwidth resources as a service and assign service IDs (identifiers), and when network congestion (the network nodes are overloaded) occurs, the traffic can be routed to the user side for forwarding through the ID/NA (Network Address) cooperative routing mechanism of ICN, thereby improving the scalability of ICN transmission and the utilization of underlying network resources. To enhance the willingness of users to contribute idle bandwidth resources, we establish a secure and trustworthy bandwidth trading market using blockchain technology. We also design an incentive mechanism based on the Proof-of-Network-Contribution (PoNC) consensus algorithm; users can “mine” by forwarding packets. The experimental results show that utilizing idle bandwidth can significantly improve the scalability of ICN transmission under experimental conditions, bringing a maximum throughput improvement of 19.4% and reducing the packet loss rate. Compared with existing methods, aggregating idle bandwidth for network transmission with ICN achieves more stable and lower latency, and it brings a maximum utilization improvement of 13.7%.
      Citation: Future Internet
      PubDate: 2023-12-28
      DOI: 10.3390/fi16010011
      Issue No: Vol. 16, No. 1 (2023)
       
  • Future Internet, Vol. 16, Pages 12: Evaluating Embeddings from Pre-Trained
           Language Models and Knowledge Graphs for Educational Content
           Recommendation

    • Authors: Xiu Li, Aron Henriksson, Martin Duneld, Jalal Nouri, Yongchao Wu
      First page: 12
      Abstract: Educational content recommendation is a cornerstone of AI-enhanced learning. In particular, to facilitate navigating the diverse learning resources available on learning platforms, methods are needed for automatically linking learning materials, e.g., in order to recommend textbook content based on exercises. Such methods are typically based on semantic textual similarity (STS) and the use of embeddings for text representation. However, it remains unclear what types of embeddings should be used for this task. In this study, we carry out an extensive empirical evaluation of embeddings derived from three different types of models: (i) static embeddings trained using a concept-based knowledge graph, (ii) contextual embeddings from a pre-trained language model, and (iii) contextual embeddings from a large language model (LLM). In addition to evaluating the models individually, various ensembles are explored based on different strategies for combining two models in an early vs. late fusion fashion. The evaluation is carried out using digital textbooks in Swedish for three different subjects and two types of exercises. The results show that using contextual embeddings from an LLM leads to superior performance compared to the other models, and that there is no significant improvement when combining these with static embeddings trained using a knowledge graph. When using embeddings derived from a smaller language model, however, it helps to combine them with knowledge graph embeddings. The performance of the best-performing model is high for both types of exercises, resulting in a mean Recall@3 of 0.96 and 0.95 and a mean MRR of 0.87 and 0.86 for quizzes and study questions, respectively, demonstrating the feasibility of using STS based on text embeddings for educational content recommendation. The ability to link digital learning materials in an unsupervised manner—relying only on readily available pre-trained models—facilitates the development of AI-enhanced learning.
      Citation: Future Internet
      PubDate: 2023-12-29
      DOI: 10.3390/fi16010012
      Issue No: Vol. 16, No. 1 (2023)
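The core STS pipeline the paper evaluates — embed texts, rank by cosine similarity, score with Recall@k — can be sketched as follows (toy 3-dimensional embeddings and item names are made up; real embeddings would come from an LLM, a smaller language model, or a knowledge graph model):

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def recall_at_k(query_vec, docs, relevant_ids, k=3):
    # Rank candidate content by similarity to the exercise embedding
    # and check how many relevant items appear in the top k.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    top_ids = {doc_id for doc_id, _ in ranked[:k]}
    return len(top_ids & relevant_ids) / len(relevant_ids)

# Toy embeddings for one exercise and three textbook sections.
exercise = [0.9, 0.1, 0.0]
sections = [("s1", [0.8, 0.2, 0.1]), ("s2", [0.0, 1.0, 0.0]), ("s3", [0.1, 0.0, 1.0])]
print(recall_at_k(exercise, sections, {"s1"}, k=1))  # → 1.0
```

The early vs. late fusion strategies mentioned above would, in this sketch, correspond to concatenating vectors before ranking vs. combining per-model similarity scores.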
       
  • Future Internet, Vol. 16, Pages 13: Vnode: Low-Overhead Transparent
           Tracing of Node.js-Based Microservice Architectures

    • Authors: Herve M. Kabamba, Matthew Khouzam, Michel R. Dagenais
      First page: 13
      Abstract: Tracing serves as a key method for evaluating the performance of microservices-based architectures, which are renowned for their scalability, resource efficiency, and high availability. Despite their advantages, these architectures often pose unique debugging challenges that necessitate trade-offs, including the burden of instrumentation overhead. With Node.js emerging as a leading development environment recognized for its rapidly growing ecosystem, there is a pressing need for innovative performance debugging approaches that reduce the telemetry data collection efforts and the overhead incurred by the environment’s instrumentation. In response, we introduce a new approach designed for transparent tracing and performance debugging of microservices in cloud settings. This approach is centered around our newly developed Internal Transparent Tracing and Context Reconstruction (ITTCR) technique. ITTCR is adept at correlating internal metrics from various distributed trace files to reconstruct the intricate execution contexts of microservices operating in a Node.js environment. Our method achieves transparency by directly instrumenting the Node.js virtual machine, enabling the collection and analysis of trace events in a transparent manner. This process facilitates the creation of visualization tools, enhancing the understanding and analysis of microservice performance in cloud environments. Compared to other methods, our approach incurs an overhead of approximately 5% on the system for the trace collection infrastructure while exhibiting minimal utilization of system resources during analysis execution. Experiments demonstrate that our technique scales well to very large trace files containing very large numbers of events and performs analyses within acceptable timeframes.
      Citation: Future Internet
      PubDate: 2023-12-29
      DOI: 10.3390/fi16010013
      Issue No: Vol. 16, No. 1 (2023)
       
  • Future Internet, Vol. 16, Pages 14: 1-D Convolutional Neural Network-Based
           Models for Cooperative Spectrum Sensing

    • Authors: Omar Serghini, Hayat Semlali, Asmaa Maali, Abdelilah Ghammaz, Salvatore Serrano
      First page: 14
      Abstract: Spectrum sensing is an essential function of cognitive radio technology that can enable the reuse of available radio resources by so-called secondary users without creating harmful interference with licensed users. The application of machine learning techniques to spectrum sensing has attracted considerable interest in the literature. In this contribution, we study cooperative spectrum sensing in a cognitive radio network where multiple secondary users cooperate to detect a primary user. We introduce multiple cooperative spectrum sensing schemes based on a deep neural network, which incorporate a one-dimensional convolutional neural network and a long short-term memory network. The primary objective of these schemes is to effectively learn the activity patterns of the primary user. The scenario of an imperfect transmission channel is considered for service messages to demonstrate the robustness of the proposed model. The performance of the proposed methods is evaluated with the receiver operating characteristic curve, the probability of detection for various SNR levels and the computational time. The simulation results confirm the effectiveness of the bidirectional long short-term memory-based method, surpassing the performance of the other proposed schemes and the current state-of-the-art methods in terms of detection probability, while ensuring a reasonable online detection time.
      Citation: Future Internet
      PubDate: 2023-12-29
      DOI: 10.3390/fi16010014
      Issue No: Vol. 16, No. 1 (2023)
       
  • Future Internet, Vol. 16, Pages 15: Resource Indexing and Querying in
           Large Connected Environments

    • Authors: Fouad Achkouty, Richard Chbeir, Laurent Gallon, Elio Mansour, Antonio Corral
      First page: 15
      Abstract: The proliferation of sensor and actuator devices in Internet of things (IoT) networks has garnered significant attention in recent years. However, the increasing number of IoT devices, and the corresponding resources, has introduced various challenges, particularly in indexing and querying. In essence, resource management has become more complex due to the non-uniform distribution of related devices and their limited capacity. Additionally, the diverse demands of users have further complicated resource indexing. This paper proposes a distributed resource indexing and querying algorithm for large connected environments, specifically designed to address the challenges posed by IoT networks. The algorithm considers both the limited device capacity and the non-uniform distribution of devices, acknowledging that devices cannot store information about the entire environment. Furthermore, it places special emphasis on uncovered zones, to reduce the response time of queries related to these areas. Moreover, the algorithm introduces different types of queries, to cater to various user needs, including fast queries and urgent queries suitable for different scenarios. The effectiveness of the proposed approach was evaluated through extensive experiments covering index creation, coverage, and query execution, yielding promising and insightful results.
      Citation: Future Internet
      PubDate: 2023-12-30
      DOI: 10.3390/fi16010015
      Issue No: Vol. 16, No. 1 (2023)
       
  • Future Internet, Vol. 16, Pages 16: An Analysis of Methods and Metrics for
           Task Scheduling in Fog Computing

    • Authors: Javid Misirli, Emiliano Casalicchio
      First page: 16
      Abstract: The Internet of Things (IoT) uptake brought a paradigm shift in application deployment. Indeed, IoT applications are not centralized in cloud data centers, but the computation and storage are moved close to the consumers, creating a computing continuum between the edge of the network and the cloud. This paradigm shift is called fog computing, a concept introduced by Cisco in 2012. Scheduling applications in this decentralized, heterogeneous, and resource-constrained environment is challenging. The task scheduling problem in fog computing has been widely explored and addressed using many approaches, from traditional operational research to heuristics and machine learning. This paper aims to analyze the literature on task scheduling in fog computing published in the last five years to classify the criteria used for decision-making and the technique used to solve the task scheduling problem. We propose a taxonomy of task scheduling algorithms, and we identify the research gaps and challenges.
      Citation: Future Internet
      PubDate: 2023-12-30
      DOI: 10.3390/fi16010016
      Issue No: Vol. 16, No. 1 (2023)
       
  • Future Internet, Vol. 16, Pages 17: Internet-of-Things Traffic Analysis
           and Device Identification Based on Two-Stage Clustering in Smart Home
           Environments

    • Authors: Mizuki Asano, Takumi Miyoshi, Taku Yamazaki
      First page: 17
      Abstract: Smart home environments, which consist of various Internet of Things (IoT) devices to support and improve our daily lives, are expected to be widely adopted in the near future. Owing to a lack of awareness regarding the risks associated with IoT devices and challenges in replacing or updating their firmware, adequate security measures have not been implemented. Instead, IoT device identification methods based on traffic analysis have been proposed. Since conventional methods process and analyze traffic data simultaneously, bias in the occurrence rate of traffic patterns has a negative impact on the analysis results. Therefore, this paper proposes an IoT traffic analysis and device identification method based on two-stage clustering in smart home environments. In the first step, traffic patterns are extracted by clustering IoT traffic at a local gateway located in each smart home and subsequently sent to a cloud server. In the second step, the cloud server extracts common traffic units to represent IoT traffic by clustering the patterns obtained in the first step. Two-stage clustering can reduce the impact of data bias, because each cluster extracted in the first clustering is summarized as one value and used as a single data point in the second clustering, regardless of the occurrence rate of traffic patterns. Through the proposed two-stage clustering method, IoT traffic is transformed into time series vector data that consist of common unit patterns and can be identified based on time series representations. Experiments using public IoT traffic datasets indicated that the proposed method could identify 21 IoT devices with an accuracy of 86.9%. Therefore, we can conclude that traffic analysis using two-stage clustering is effective for improving the clustering quality, device identification, and implementation in distributed environments.
      Citation: Future Internet
      PubDate: 2023-12-31
      DOI: 10.3390/fi16010017
      Issue No: Vol. 16, No. 1 (2023)
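The bias-reduction idea in the two-stage scheme above can be sketched with a toy 1-D k-means (all numbers and the `kmeans_1d` helper are illustrative; real traffic patterns would be multi-dimensional feature vectors and the clustering algorithm may differ):

```python
from statistics import mean

def kmeans_1d(points, k, iters=20):
    # Tiny 1-D k-means; initial centers spread across the sorted points.
    pts = sorted(points)
    centers = [pts[i * (len(pts) - 1) // max(k - 1, 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centers[c]))
            groups[nearest].append(p)
        centers = [mean(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers

# Stage 1: each home gateway summarizes its (biased) traffic into centroids.
home_a = [10, 11, 10, 10, 10, 200]   # mostly small keep-alive packets
home_b = [205, 198, 12]              # mostly bulk transfers
stage1 = kmeans_1d(home_a, 2) + kmeans_1d(home_b, 2)

# Stage 2: the cloud server clusters the centroids, one data point per
# pattern, so a pattern's occurrence rate no longer skews the result.
common_units = sorted(kmeans_1d(stage1, 2))
print(common_units)  # one small and one large common traffic unit
```

Although home_a is dominated by keep-alives, the bulk-transfer pattern still contributes exactly one point to the second stage, which is the bias-reduction property described in the abstract.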
       
  • Future Internet, Vol. 16, Pages 18: Controllable Queuing System with
           Elastic Traffic and Signals for Resource Capacity Planning in 5G Network
           Slicing

    • Authors: Irina Kochetkova, Kseniia Leonteva, Ibram Ghebrial, Anastasiya Vlaskina, Sofia Burtseva, Anna Kushchazli, Konstantin Samouylov
      First page: 18
      Abstract: Fifth-generation (5G) networks provide network slicing capabilities, enabling the deployment of multiple logically isolated network slices on a single infrastructure platform to meet specific requirements of users. This paper focuses on modeling and analyzing resource capacity planning and reallocation for network slicing, specifically between two providers transmitting elastic traffic, such as during web browsing. A controller determines the need for resource reallocation and plans new resource capacity accordingly. A Markov decision process is employed in a controllable queuing system to find the optimal resource capacity for each provider. The reward function incorporates three network slicing principles: maximum matching for equal resource partitioning, maximum share of signals resulting in resource reallocation, and maximum resource utilization. To efficiently compute the optimal resource capacity planning policy, we developed an iterative algorithm that begins with maximum resource utilization as the starting point. Through numerical demonstrations, we show the optimal policy and metrics of resource reallocation for two services: web browsing and bulk data transfer. The results highlight fast convergence within three iterations and the effectiveness of the balanced three-principle approach in resource capacity planning for 5G network slicing.
      Citation: Future Internet
      PubDate: 2023-12-31
      DOI: 10.3390/fi16010018
      Issue No: Vol. 16, No. 1 (2023)
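A minimal sketch of the Markov-decision-process machinery the paper builds on (a toy two-state capacity model solved by value iteration; the utilization and signalling numbers are illustrative, not the authors' actual reward function):

```python
# Toy controllable-queue MDP: states are the capacity level granted to a
# slice (low/high); "switch" triggers a reallocation signal with a cost.
states = ["low", "high"]
actions = ["keep", "switch"]
util = {"low": 0.4, "high": 0.9}   # utilization reward per capacity level
penalty = 0.3                      # cost of a reallocation signal
gamma = 0.9                        # discount factor

def step(state, action):
    # Deterministic toy dynamics: "switch" flips the capacity level.
    if action == "keep":
        return state
    return "high" if state == "low" else "low"

def reward(state, action):
    next_state = step(state, action)
    return util[next_state] - (penalty if action == "switch" else 0.0)

def value_iteration(eps=1e-8):
    v = {s: 0.0 for s in states}
    while True:
        nv = {s: max(reward(s, a) + gamma * v[step(s, a)] for a in actions)
              for s in states}
        if max(abs(nv[s] - v[s]) for s in states) < eps:
            return nv
        v = nv

v = value_iteration()
policy = {s: max(actions, key=lambda a: reward(s, a) + gamma * v[step(s, a)])
          for s in states}
print(policy)  # → {'low': 'switch', 'high': 'keep'}
```

Here the one-off signalling penalty is outweighed by the discounted utilization gain, so the optimal policy reallocates up and then stays; the paper's algorithm similarly iterates from a maximum-utilization starting point.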
       
  • Future Internet, Vol. 15, Pages 387: Secure Video Communication Using
           Multi-Equation Multi-Key Hybrid Cryptography

    • Authors: Youcef Fouzar, Ahmed Lakhssassi, Ramakrishna Mundugar
      First page: 387
      Abstract: The safeguarding of intellectual property and maintaining privacy for video content are closely linked to the effectiveness of security protocols employed in internet streaming platforms. The inadequate implementation of security measures by content providers has resulted in security breaches within entertainment applications, hence causing a reduction in the client base. This research aimed to enhance the security measures employed for video content by implementing a multi-key approach for encryption and decryption processes. The aforementioned objective was successfully accomplished through the use of hybrid methodologies, the production of dynamic keys, and the implementation of user-attribute-based techniques. The main aim of the study was to improve the security measures associated with the process of generating video material. The proposed methodology integrates a system of mathematical equations and a pseudorandom key within its execution. This novel approach significantly augments the degree of security the encryption mechanism provides. The proposed methodology utilises a set of mathematical equations that are randomly employed to achieve encryption. Using a random selection procedure contributes to the overall enhancement of the system’s security. The suggested methodology entails the division of the video into smaller entities known as chunks. Following this, every segment is subjected to encryption using unique keys that are produced dynamically in real-time. The proposed methodology is executed via Android platforms. The transmitter application is tasked with the responsibility of facilitating the streaming of the video content, whereas the receiver application serves the purpose of presenting the video to the user. A careful study was conducted to compare the suggested method with similar existing methods. The results of the study strongly support the security and dependability of the proposed procedure.
      Citation: Future Internet
      PubDate: 2023-11-29
      DOI: 10.3390/fi15120387
      Issue No: Vol. 15, No. 12 (2023)
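The chunk-wise, dynamically keyed encryption described above can be illustrated roughly as follows (an XOR-keystream sketch with SHA-256 key derivation as a stand-in; the paper's actual scheme uses its own set of mathematical equations and is not reproduced here, and this sketch is not production cryptography):

```python
import hashlib

def chunk_key(master: bytes, user_attr: bytes, index: int) -> bytes:
    # Derive a fresh key per chunk from a master secret, a user
    # attribute, and the chunk index (dynamic key generation sketch).
    return hashlib.sha256(master + user_attr + index.to_bytes(4, "big")).digest()

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Counter-hash the key into a keystream; XOR is symmetric, so the
    # same function both encrypts and decrypts.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(4, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def encrypt_video(video: bytes, master: bytes, user_attr: bytes, chunk_size: int = 16):
    # Split the video into chunks; each chunk gets its own dynamic key.
    chunks = [video[i:i + chunk_size] for i in range(0, len(video), chunk_size)]
    return [xor_stream(c, chunk_key(master, user_attr, i)) for i, c in enumerate(chunks)]

def decrypt_video(chunks, master: bytes, user_attr: bytes) -> bytes:
    return b"".join(xor_stream(c, chunk_key(master, user_attr, i))
                    for i, c in enumerate(chunks))

video = b"frame-data " * 5
enc = encrypt_video(video, b"master-secret", b"user42")
assert decrypt_video(enc, b"master-secret", b"user42") == video
```

Binding the per-chunk key to a user attribute means a key leaked for one user or one chunk does not expose the rest of the stream, which is the motivation for the multi-key approach.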
       
  • Future Internet, Vol. 15, Pages 388: Protecting Hybrid ITS Networks: A
           Comprehensive Security Approach

    • Authors: Ricardo Severino, José Simão, Nuno Datia, António Serrador
      First page: 388
      Abstract: Cooperative intelligent transport systems (C-ITS) continue to be developed to enhance transportation safety and sustainability. However, the communication of vehicle-to-everything (V2X) systems is inherently open, leading to vulnerabilities that attackers can exploit. This represents a threat to all road users, as security failures can lead to privacy violations or even fatalities. Moreover, a high fatality rate is correlated with soft-mobility road users. Therefore, when developing C-ITS systems, it is important to broaden the focus beyond connected vehicles to include soft-mobility users and legacy vehicles. This work presents a new approach developed in the context of emerging hybrid networks, combining intelligent transport systems operating in 5.9 GHz (ITS-G5) and radio-mobile cellular technologies. Two protocols were implemented and evaluated to introduce security guarantees (such as privacy and integrity) in communications within the developed C-ITS hybrid environment. As a result, this work securely integrates G5-connected ITS stations and soft-mobility users through a smartphone application via cellular networks. Commercial equipment was used for this goal, including on-board and roadside units. Computational, transmission and end-to-end latency were used to assess the system’s performance. Implemented protocols introduce an additional 11% end-to-end latency in hybrid communications. Moreover, workflows employing hybrid communications impose, on average, an extra 28.29 ms of end-to-end latency. The proposal shows promise, as it reaches end-to-end times below the latency requirements imposed in most C-ITS use cases.
      Citation: Future Internet
      PubDate: 2023-11-30
      DOI: 10.3390/fi15120388
      Issue No: Vol. 15, No. 12 (2023)
       
  • Future Internet, Vol. 15, Pages 389: Knowledge Distillation-Based GPS
           Spoofing Detection for Small UAV

    • Authors: Yingying Ren, Ryan D. Restivo, Wenkai Tan, Jian Wang, Yongxin Liu, Bin Jiang, Huihui Wang, Houbing Song
      First page: 389
      Abstract: As a core component of small unmanned aerial vehicles (UAVs), GPS plays a critical role in providing localization for UAV navigation. UAVs are an important factor in the large-scale deployment of the Internet of Things (IoT) and cyber–physical systems (CPS). However, GPS is vulnerable to spoofing attacks that can mislead a UAV to fly into a sensitive area and threaten public safety and private security. Conventional spoofing detection methods incur too much overhead to run efficiently on a computation-constrained UAV and to respond to attacks in time. In this paper, we propose a novel approach to obtain a lightweight detection model in the UAV system so that GPS spoofing attacks can be detected from a long distance. We propose a long short-term memory (LSTM)-based detection model on the ground control station and then use knowledge distillation to compress it into a compact model able to run in the control system of the UAV. The experimental results show that our lightweight detection algorithm runs reliably in UAV systems and achieves good performance in GPS spoofing detection.
      Citation: Future Internet
      PubDate: 2023-11-30
      DOI: 10.3390/fi15120389
      Issue No: Vol. 15, No. 12 (2023)
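The distillation step can be sketched with the usual temperature-softened objective (a generic knowledge-distillation loss, assumed here for illustration; the logit values are made up and the paper's training setup pairs this with a task loss):

```python
import math

def softmax(logits, t=1.0):
    # Temperature-scaled softmax: larger t softens the distribution.
    exps = [math.exp(x / t) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, t=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by t^2 as in standard knowledge distillation.
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * t * t

teacher = [4.0, 1.0]    # large LSTM model on the ground control station
aligned = [3.9, 1.1]    # compact student that mimics the teacher
diverged = [0.5, 2.0]   # student that disagrees with the teacher
print(kd_loss(aligned, teacher) < kd_loss(diverged, teacher))  # → True
```

Minimizing this loss pushes the compact onboard model toward the teacher's soft predictions, which is what lets the distilled detector run within the UAV's computation budget.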
       
  • Future Internet, Vol. 15, Pages 390: ICN-Based Enhanced Content Delivery
           for CDN

    • Authors: Lei Gao, Xiaoyong Zhu
      First page: 390
      Abstract: With the rapid growth of internet traffic, the traditional host-to-host TCP/IP architecture is subject to many service limitations faced with content-oriented applications. Various novel network architectures have been proposed to solve these limitations, among which Information-Centric Networking (ICN) is one of the most prominent. ICN features the decoupling of content (service) from the physical devices storing (providing) it through location-independent naming, and offers inherent enhancement to network performance, such as multicast and in-network caching. ICN in-network caching has been extensively studied, and we believe that it may also be the main incentive for ISPs to deploy ICN. A CDN (content delivery network) is a typical content-oriented network paradigm that aims to provide the fast delivery of content. In this paper, we leverage the advantages of the in-network caching of ICN to enhance the content delivery efficiency of CDN by integrating ICN as a service. First, we present our design of a content delivery network enhanced with ICN, called IECDN. Additionally, we formulate a mathematical model to optimize the performance of our proposed design and conduct a series of evaluations. The results indicate that our proposed design provides significant performance gains while reducing bandwidth consumption and shows better resilience to traffic surges.
      Citation: Future Internet
      PubDate: 2023-11-30
      DOI: 10.3390/fi15120390
      Issue No: Vol. 15, No. 12 (2023)
       
  • Future Internet, Vol. 15, Pages 391: A Comprehensive Survey Exploring the
           Multifaceted Interplay between Mobile Edge Computing and Vehicular
           Networks

    • Authors: Ali Pashazadeh, Giovanni Nardini, Giovanni Stea
      First page: 391
      Abstract: In recent years, the need for computation-intensive applications in mobile networks requiring more storage, powerful processors, and real-time responses has risen substantially. Vehicular networks play an important role in this ecosystem, as they must support multiple services, such as traffic monitoring or sharing of data involving different aspects of the vehicular traffic. Moreover, new resource-hungry applications have been envisaged, such as autonomous driving or in-cruise entertainment, hence making the demand for computation and storage resources one of the most important challenges in vehicular networks. In this context, Mobile Edge Computing (MEC) has become the key technology to handle these problems by providing cloud-like capabilities at the edge of mobile networks to support delay-sensitive and computation-intensive tasks. In the meantime, researchers have envisaged the use of onboard vehicle resources to extend the computing capabilities of MEC systems. This paper presents a comprehensive review of the most recent works related to MEC-assisted vehicular networks, as well as vehicle-assisted MEC systems. We illustrate the MEC system architecture and discuss its deployment in vehicular environments, as well as the key technologies to realize this integration. After that, we review the recent literature by identifying three different areas, i.e., (i) MEC providing additional resources to vehicles (e.g., for task offloading); (ii) MEC enabling innovative vehicular applications (e.g., platooning), and (iii) vehicular networks providing additional resources to MEC systems. Finally, we discuss open challenges and future research directions, addressing the possible interplays between MEC systems and vehicular networks.
      Citation: Future Internet
      PubDate: 2023-11-30
      DOI: 10.3390/fi15120391
      Issue No: Vol. 15, No. 12 (2023)
       
  • Future Internet, Vol. 15, Pages 392: Recent Advances in
           Information-Centric Networks (ICNs)

    • Authors: José Carlos López-Ardao, Miguel Rodríguez-Pérez, Sergio Herrería-Alonso
      First page: 392
      Abstract: The great success of the Internet has been essentially based on the simplicity and versatility of its TCP/IP architecture, which imposes almost no restrictions on either the underlying network technology or on the data being transmitted [...]
      Citation: Future Internet
      PubDate: 2023-12-01
      DOI: 10.3390/fi15120392
      Issue No: Vol. 15, No. 12 (2023)
       
  • Future Internet, Vol. 15, Pages 393: Information Security Applications in
           Smart Cities: A Bibliometric Analysis of Emerging Research

    • Authors: Thiago Poleto, Thyago Celso Cavalcante Nepomuceno, Victor Diogho Heuer de Carvalho, Ligiane Cristina Braga de Oliveira Friaes, Rodrigo Cleiton Paiva de Oliveira, Ciro José Jardim Figueiredo
      First page: 393
      Abstract: This paper aims to analyze the intellectual structure and research fronts of information security applications in smart cities to identify research boundaries, trends, and new opportunities in the area. It applies bibliometric analyses to identify the main authors and their influence on information security and the smart city area. Moreover, this analysis focuses on journals indexed in the Scopus database. The results indicate that there is an opportunity for further advances in the adoption of information security policies in government institutions. Moreover, the production indicators presented herein are useful for the planning and implementation of information security policies and for the scientific community's knowledge about smart cities. The bibliometric analysis supports the visualization of the leading technical research collaboration networks among authors, co-authors, countries, and research areas. The methodology offers a broader view of information security applications in smart cities and makes it possible to assist new research that may contribute to further advances. The smart cities topic has received much attention in recent years, but to the best of our knowledge, there is no research reporting new possibilities for advances. Therefore, this article may contribute to an emerging body of literature that explores the nature of information security applications and smart cities research productivity, helping researchers better understand the current state of this emerging area.
      Citation: Future Internet
      PubDate: 2023-12-01
      DOI: 10.3390/fi15120393
      Issue No: Vol. 15, No. 12 (2023)
       
  • Future Internet, Vol. 15, Pages 394: Envisioning Digital Practices in the
           Metaverse: A Methodological Perspective

    • Authors: Luca Sabatucci, Agnese Augello, Giuseppe Caggianese, Luigi Gallo
      First page: 394
      Abstract: Researchers are exploring methods that exploit digital twins as all-purpose abstractions for sophisticated modelling and simulation, bringing elements of the real world into the virtual realm. Digital twins are essential elements of the digital transformation of society, which mostly benefit manufacturing, smart cities, healthcare contexts, and in general systems that include humans in the loop. As the metaverse concept continues to evolve, the line separating the virtual and the real will progressively fade away. Considering the metaverse’s goal to emulate our social reality, it becomes essential to examine the aspects that characterise real-world interaction practices and explicitly model both physical and social contexts. While the unfolding metaverse may reshape these practices in distinct ways from their real-world counterparts, our position is that it is essential to incorporate social theories into the modelling processes of digital twins within the metaverse. In this work, we discuss our perspective by introducing a digital practice model inspired by the theory of social practice. We illustrate this model by exploiting the scenario of a virtual grocery shop designed to help older adults reduce their social isolation.
      Citation: Future Internet
      PubDate: 2023-12-06
      DOI: 10.3390/fi15120394
      Issue No: Vol. 15, No. 12 (2023)
       
  • Future Internet, Vol. 15, Pages 395: A Learning Game-Based Approach to
           Task-Dependent Edge Resource Allocation

    • Authors: Zuopeng Li, Hengshuai Ju, Zepeng Ren
      First page: 395
      Abstract: The existing research on dependent task offloading and resource allocation assumes that edge servers can provide computational and communication resources free of charge. This paper proposes a two-stage resource allocation method to address this issue. In the first stage, users incentivize edge servers to provide resources. We formulate the incentive problem in this stage as a multivariate Stackelberg game, which takes into account both computational and communication resources. In addition, we also analyze the uniqueness of the Stackelberg equilibrium under information sharing conditions. Considering the privacy issues of the participants, the research is extended to scenarios without information sharing, where the multivariable game problem is modeled as a partially observable Markov decision process (POMDP). In order to obtain the optimal incentive decision in this scenario, a reinforcement learning algorithm based on the learning game is designed. In the second stage, we propose a greedy-based deep reinforcement learning algorithm that is aimed at minimizing task execution time by optimizing resource and task allocation strategies. Finally, the simulation results demonstrate that the algorithm designed for non-information sharing scenarios can effectively approximate the theoretical Stackelberg equilibrium, and its performance is found to be better than that of the other three benchmark methods. After the allocation of resources and sub-tasks by the greedy-based deep reinforcement learning algorithm, the execution delay of the dependent task is significantly lower than that in local processing.
      Citation: Future Internet
      PubDate: 2023-12-07
      DOI: 10.3390/fi15120395
      Issue No: Vol. 15, No. 12 (2023)
       
  • Future Internet, Vol. 15, Pages 396: PROFEE: A Probabilistic-Feedback
           Based Speed Rate Adaption for IEEE 802.11bc

    • Authors: Javier Gomez, Jose Jaime Camacho-Escoto, Luis Orozco-Barbosa, Diego Rodriguez-Torres
      First page: 396
      Abstract: WiFi is a widely used wireless technology for data transmission. WiFi can also play a crucial role in simultaneously broadcasting content to multiple devices in multimedia transmission for venues such as classrooms, theaters, and stadiums. Broadcasting allows for the efficient dissemination of information to all devices connected to the network, and it becomes crucial to ensure that the WiFi network has sufficient capacity to transmit broadcast multimedia content without interruptions or delays. However, using WiFi for broadcasting presents challenges that can impact user experience, specifically the difficulty of obtaining real-time feedback from potentially hundreds or thousands of users due to potential collisions of feedback messages. This work focuses on providing accurate feedback to the Access Point about the percentage of users not receiving broadcast traffic correctly so it can adjust its Modulation and Coding Scheme (MCS) while transmitting broadcast multimedia content to many users. The proposed method comprises two sequential algorithms. In order to reduce the probability of a collision after transmitting each message, an algorithm searches for the best probability value for users to transmit ACK/NACK messages, depending on whether messages are received correctly or not. This feedback allows the Access Point to estimate the number of STAs correctly/incorrectly receiving the messages being transmitted. A second algorithm uses this estimation so the Access Point can select the best MCS while maintaining the percentage of users not receiving broadcast content correctly within acceptable margins, thus providing users with the best possible content quality. We implemented the proposed method in the ns-3 simulator, and the results show it yields quick, reliable feedback to the Access Point, enabling it to adjust to the best possible MCS within a few seconds, regardless of the user density and dimensions of the scenario.
      Citation: Future Internet
      PubDate: 2023-12-09
      DOI: 10.3390/fi15120396
      Issue No: Vol. 15, No. 12 (2023)
       
  • Future Internet, Vol. 15, Pages 397: Methodological Approach for
           Identifying Websites with Infringing Content via Text Transformers and
           Dense Neural Networks

    • Authors: Aldo Hernandez-Suarez, Gabriel Sanchez-Perez, Linda Karina Toscano-Medina, Hector Manuel Perez-Meana, Jose Portillo-Portillo, Jesus Olivares-Mercado
      First page: 397
      Abstract: The rapid evolution of the Internet of Everything (IoE) has significantly enhanced global connectivity and multimedia content sharing, simultaneously escalating the unauthorized distribution of multimedia content and posing risks to intellectual property rights. In 2022 alone, about 130 billion accesses to potentially non-compliant websites were recorded, underscoring the challenges for industries reliant on copyright-protected assets. Amidst prevailing uncertainties and the need for technical and AI-integrated solutions, this study introduces two pivotal contributions. First, it establishes a novel taxonomy aimed at safeguarding and identifying IoE-based content infringements. Second, it proposes an innovative architecture combining IoE components with automated sensors to compile a dataset reflective of potential copyright breaches. This dataset is analyzed using an advanced Natural Language Processing (NLP) algorithm based on Bidirectional Encoder Representations from Transformers (BERT), further fine-tuned by a dense neural network (DNN), achieving a remarkable 98.71% accuracy in pinpointing websites that violate copyright.
      Citation: Future Internet
      PubDate: 2023-12-09
      DOI: 10.3390/fi15120397
      Issue No: Vol. 15, No. 12 (2023)
       
  • Future Internet, Vol. 15, Pages 398: Enabling Technologies for
           Next-Generation Smart Cities: A Comprehensive Review and Research
           Directions

    • Authors: Shrouk A. Ali, Shaimaa Ahmed Elsaid, Abdelhamied A. Ateya, Mohammed ElAffendi, Ahmed A. Abd El-Latif
      First page: 398
      Abstract: The concept of smart cities, which aim to enhance the quality of urban life through innovative technologies and policies, has gained significant momentum in recent years. As we approach the era of next-generation smart cities, it becomes crucial to explore the key enabling technologies that will shape their development. This work reviews the leading technologies driving the future of smart cities. The work begins by introducing the main requirements of different smart city applications; then, the enabling technologies are presented. This work highlights the transformative potential of the Internet of things (IoT) to facilitate data collection and analysis to improve urban infrastructure and services. As a complementary technology, distributed edge computing brings computational power closer to devices, reducing the reliance on centralized data centers. Another key technology is virtualization, which optimizes resource utilization, enabling multiple virtual environments to run efficiently on shared hardware. Software-defined networking (SDN) emerges as a pivotal technology that brings flexibility and scalability to smart city networks, allowing for dynamic network management and resource allocation. Artificial intelligence (AI) is another approach for managing smart cities by enabling predictive analytics, automation, and smart decision making based on vast amounts of data. Lastly, the blockchain is introduced as a promising approach for smart cities to achieve the required security. The review concludes by identifying potential research directions to address the challenges and complexities brought about by integrating these key enabling technologies.
      Citation: Future Internet
      PubDate: 2023-12-09
      DOI: 10.3390/fi15120398
      Issue No: Vol. 15, No. 12 (2023)
       
  • Future Internet, Vol. 15, Pages 399: Addressing the Gaps of IoU Loss in 3D
           Object Detection with IIoU

    • Authors: Niranjan Ravi, Mohamed El-Sharkawy
      First page: 399
      Abstract: Three-dimensional object detection involves estimating the dimensions, orientations, and locations of 3D bounding boxes. Intersection over Union (IoU) loss measures the overlap between predicted and ground truth 3D bounding boxes. The localization task uses smooth-L1 loss with IoU to estimate the object’s location, and the classification task identifies the object/class category inside each 3D bounding box. Localization suffers a performance gap in cases where the predicted and ground truth boxes barely overlap or do not overlap at all, indicating the boxes are far apart, and in scenarios where one box fully encloses the other. Existing axis-aligned IoU losses also suffer a performance drop for rotated 3D bounding boxes. This research addresses these shortcomings in the bounding box regression problem of 3D object detection by introducing an Improved Intersection Over Union (IIoU) loss. The proposed loss function’s performance is evaluated on LiDAR-based and Camera-LiDAR-based fusion methods using the KITTI dataset.
      Citation: Future Internet
      PubDate: 2023-12-11
      DOI: 10.3390/fi15120399
      Issue No: Vol. 15, No. 12 (2023)
       
  • Future Internet, Vol. 15, Pages 400: A Survey on Blockchain-Based
           Federated Learning

    • Authors: Lang Wu, Weijian Ruan, Jinhui Hu, Yaobin He
      First page: 400
      Abstract: Federated learning (FL) and blockchains exhibit significant commonality, complementarity, and alignment in various aspects, such as application domains, architectural features, and privacy protection mechanisms. In recent years, there have been notable advancements in combining these two technologies, particularly in data privacy protection, data sharing incentives, and computational performance. Although there are some surveys on blockchain-based federated learning (BFL), these surveys predominantly focus on the BFL framework and its classifications, yet lack in-depth analyses of the pivotal issues addressed by BFL. This work aims to assist researchers in understanding the latest research achievements and development directions in the integration of FL with blockchains. Firstly, we introduce the relevant research in FL and blockchain technology and highlight the existing shortcomings of FL. Next, we conduct a comparative analysis of existing BFL frameworks, delving into the significant problems in the realm of FL that the combination of blockchain and FL addresses. Finally, we summarize the application prospects of BFL technology in various domains such as the Internet of Things, Industrial Internet of Things, Internet of Vehicles, and healthcare services, as well as the challenges that need to be addressed and future research directions.
      Citation: Future Internet
      PubDate: 2023-12-12
      DOI: 10.3390/fi15120400
      Issue No: Vol. 15, No. 12 (2023)
       
  • Future Internet, Vol. 15, Pages 401: Distributed Denial of Service
           Classification for Software-Defined Networking Using Grammatical Evolution
           

    • Authors: Evangelos D. Spyrou, Ioannis Tsoulos, Chrysostomos Stylios
      First page: 401
      Abstract: Software-Defined Networking (SDN) stands as a pivotal paradigm in network implementation, exerting a profound influence on the trajectory of technological advancement. The critical role of security within SDN cannot be overstated, with distributed denial of service (DDoS) emerging as a particularly disruptive threat, capable of causing large-scale disruptions. DDoS operates by generating malicious traffic that mimics normal network activity, leading to service disruptions. It becomes imperative to deploy mechanisms capable of distinguishing between benign and malicious traffic, serving as the initial line of defense against DDoS challenges. In addressing this concern, we propose the utilization of traffic classification as a foundational strategy for combatting DDoS. By categorizing traffic into malicious and normal streams, we establish a crucial first step in the development of effective DDoS mitigation strategies. The deleterious effects of DDoS extend to the point of potentially overwhelming networked servers, resulting in service failures and SDN server downtimes. To investigate and address this issue, our research employs a dataset encompassing both benign and malicious traffic within the SDN environment. A set of 23 features is harnessed for classification purposes, forming the basis for a comprehensive analysis and the development of robust defense mechanisms against DDoS in SDN. Initially, we compare GenClass, a grammatical-evolution-based classifier, with three common classification methods, namely the Bayes, K-Nearest Neighbours (KNN), and Random Forest methods. The proposed solution improves the average class error, demonstrating 6.58% error as opposed to the Bayes method error of 32.59%, KNN error of 18.45%, and Random Forest error of 30.70%. Moreover, we apply three classification methods based on grammatical evolution to the aforementioned data.
In particular, in terms of average class error, GenClass exhibits 6.58%, while NNC and FC2GEN exhibit average class errors of 12.51% and 15.86%, respectively.
      Citation: Future Internet
      PubDate: 2023-12-13
      DOI: 10.3390/fi15120401
      Issue No: Vol. 15, No. 12 (2023)
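The genotype-to-phenotype mapping at the heart of grammatical evolution, the technique behind GenClass, can be sketched with a toy grammar (the grammar and codon values below are ours, purely illustrative): integer codons pick productions from the grammar, modulo the number of alternatives, until only terminals remain.

```python
# Toy grammatical-evolution derivation (illustrative, not GenClass itself).

GRAMMAR = {
    "<expr>": [["<expr>", "+", "<expr>"], ["<var>"], ["<const>"]],
    "<var>": [["x0"], ["x1"]],
    "<const>": [["1"], ["2"]],
}

def derive(codons, start="<expr>", max_steps=50):
    seq = [start]
    used = 0
    for _ in range(max_steps):
        nonterms = [i for i, s in enumerate(seq) if s in GRAMMAR]
        if not nonterms:
            return "".join(seq)  # fully derived expression
        pos = nonterms[0]  # leftmost derivation
        rules = GRAMMAR[seq[pos]]
        choice = rules[codons[used % len(codons)] % len(rules)]
        used += 1
        seq[pos:pos + 1] = choice
    return "".join(seq)  # derivation truncated at max_steps

expr = derive([0, 1, 1, 2, 0])
```

An evolutionary loop would then score each derived expression as a classifier over the 23 traffic features and evolve the codon lists; only the mapping step is shown here.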
       
  • Future Internet, Vol. 15, Pages 402: Energy Consumption Reduction in
           Wireless Sensor Network-Based Water Pipeline Monitoring Systems via Energy
           Conservation Techniques

    • Authors: Valery Nkemeni, Fabien Mieyeville, Pierre Tsafack
      First page: 402
      Abstract: In wireless sensor network-based water pipeline monitoring (WWPM) systems, a vital requirement emerges: the achievement of low energy consumption. This primary goal arises from the fundamental necessity to ensure the sustained operability of sensor nodes over extended durations, all without the need for frequent battery replacement. Given that sensor nodes in such applications are typically battery-powered and often physically inaccessible, maximizing energy efficiency by minimizing unnecessary energy consumption is of vital importance. This paper presents an experimental study that investigates the impact of a hybrid technique, incorporating distributed computing, hierarchical sensing, and duty cycling, on the energy consumption of a sensor node in prolonging the lifespan of a WWPM system. A custom sensor node is designed using the ESP32 MCU and nRF24L01+ transceiver. Hierarchical sensing is implemented through the use of LSM9DS1 and ADXL344 accelerometers, distributed computing through the implementation of a distributed Kalman filter, and duty cycling through the implementation of interrupt-enabled sleep/wakeup functionality. The experimental results reveal that combining distributed computing, hierarchical sensing and duty cycling reduces energy consumption by a factor of eight compared to the lone implementation of distributed computing.
      Citation: Future Internet
      PubDate: 2023-12-14
      DOI: 10.3390/fi15120402
      Issue No: Vol. 15, No. 12 (2023)
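Why duty cycling dominates the energy budget can be seen with a back-of-envelope model. All current draws and the battery capacity below are hypothetical placeholders, not measurements from the paper: the average current is d * I_active + (1 - d) * I_sleep for duty cycle d.

```python
# Back-of-envelope duty-cycling battery-life model (illustrative numbers).

def avg_current_ma(duty_cycle, i_active_ma, i_sleep_ma):
    return duty_cycle * i_active_ma + (1 - duty_cycle) * i_sleep_ma

def battery_life_hours(capacity_mah, duty_cycle, i_active_ma=80.0, i_sleep_ma=0.05):
    return capacity_mah / avg_current_ma(duty_cycle, i_active_ma, i_sleep_ma)

always_on = battery_life_hours(2000, duty_cycle=1.0)   # node never sleeps
cycled = battery_life_hours(2000, duty_cycle=0.01)     # active 1% of the time
```

With these placeholder figures an always-on node lasts 25 hours while the 1% duty-cycled node lasts roughly 2350 hours, which is why interrupt-driven sleep/wakeup compounds so strongly with hierarchical sensing and distributed computing.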
       
  • Future Internet, Vol. 15, Pages 403: Federated Learning for Intrusion
           Detection Systems in Internet of Vehicles: A General Taxonomy,
           Applications, and Future Directions

    • Authors: Jadil Alsamiri, Khalid Alsubhi
      First page: 403
      Abstract: In recent years, the Internet of Vehicles (IoV) has garnered significant attention from researchers and automotive industry professionals due to its expanding range of applications and services aimed at enhancing road safety and driver/passenger comfort. However, the massive amount of data spread across this network makes securing it challenging. The IoV network generates, collects, and processes vast amounts of valuable and sensitive data that intruders can manipulate. An intrusion detection system (IDS) is the most typical method to protect such networks. An IDS monitors activity on the road to detect any sign of a security threat and generates an alert if a security anomaly is detected. Applying machine learning methods to large datasets helps detect anomalies, which can be utilized to discover potential intrusions. However, traditional centralized learning algorithms require gathering data from end devices and centralizing it for training on a single device. Vehicle makers and owners may not readily share the sensitive data necessary for training the models. Granting a single device access to enormous volumes of personal information raises significant privacy concerns, as any system-related problems could result in massive data leaks. To alleviate these problems, more secure options, such as Federated Learning (FL), must be explored. A decentralized machine learning technique, FL allows model training on client devices while maintaining user data privacy. Although FL for IDS has made significant progress, to our knowledge, there has been no comprehensive survey specifically dedicated to exploring the applications of FL for IDS in the IoV environment, similar to successful systems research in deep learning. To address this gap, we undertake a well-organized literature review on IDSs based on FL in an IoV environment. We introduce a general taxonomy to describe the FL systems to ensure a coherent structure and guide future research. 
Additionally, we identify the relevant state of the art in FL-based intrusion detection within the IoV domain, covering the years from FL’s inception in 2016 through 2023. Finally, we identify challenges and future research directions based on the existing literature.
      Citation: Future Internet
      PubDate: 2023-12-14
      DOI: 10.3390/fi15120403
      Issue No: Vol. 15, No. 12 (2023)
       
  • Future Internet, Vol. 15, Pages 404: Blockchain in Agriculture to Ensure
           Trust, Effectiveness, and Traceability from Farm Fields to Groceries

    • Authors: Arvind Panwar, Manju Khari, Sanjay Misra, Urvashi Sugandh
      First page: 404
      Abstract: Despite its status as one of the most ancient sectors worldwide, agriculture continues to be a fundamental cornerstone of the global economy. Nevertheless, it faces obstacles such as a lack of trust, difficulties in tracking, and inefficiencies in managing the supply chain. This article examines the potential of blockchain technology (BCT) to alter the agricultural industry by providing a decentralized, transparent, and unchangeable solution to meet the difficulties it faces. The initial discussion provides an overview of the challenges encountered by the agricultural industry, followed by a thorough analysis of BCT, highlighting its potential advantages. Following that, the article explores other agricultural uses for blockchain technology, such as managing supply chains, verifying products, and processing payments. In addition, this paper examines the constraints and challenges related to the use of blockchain technology in agriculture, including issues such as scalability, legal frameworks, and interoperability. This paper highlights the potential of BCT to transform the agricultural industry by offering a transparent and secure platform for managing the supply chain. Nevertheless, it emphasizes the need for involving stakeholders, having clear legislation, and possessing technical skills in order to achieve effective implementation. This work utilizes a systematic literature review using the PRISMA technique and applies meta-analysis as the research methodology, enabling a thorough investigation of the present information available. The results emphasize the significant and positive effect of BCT on agriculture, emphasizing the need for cooperative endeavors among governments, industry pioneers, and technology specialists to encourage its extensive implementation and contribute to the advancement of a sustainable and resilient food system.
      Citation: Future Internet
      PubDate: 2023-12-16
      DOI: 10.3390/fi15120404
      Issue No: Vol. 15, No. 12 (2023)
       
  • Future Internet, Vol. 15, Pages 405: Securing Network Traffic
           Classification Models against Adversarial Examples Using Derived Variables
           

    • Authors: James Msughter Adeke, Guangjie Liu, Junjie Zhao, Nannan Wu, Hafsat Muhammad Bashir
      First page: 405
      Abstract: Machine learning (ML) models are essential to securing communication networks. However, these models are vulnerable to adversarial examples (AEs), in which malicious inputs are modified by adversaries to produce the desired output. Adversarial training is an effective defense method against such attacks but relies on access to a substantial number of AEs, a prerequisite that entails significant computational resources and the inherent limitation of poor performance on clean data. To address these problems, this study proposes a novel approach to improve the robustness of ML-based network traffic classification models by integrating derived variables (DVars) into training. Unlike adversarial training, our approach focuses on enhancing training using DVars, introducing randomness into the input data. DVars are generated from the baseline dataset and significantly improve the resilience of the model to AEs. To evaluate the effectiveness of DVars, experiments were conducted using the CSE-CIC-IDS2018 dataset and three state-of-the-art ML-based models: decision tree (DT), random forest (RF), and k-nearest neighbors (KNN). The results show that DVars can improve the accuracy of KNN under attack from 0.45% to 0.84% for low-intensity attacks and from 0.32% to 0.66% for high-intensity attacks. Furthermore, both DT and RF achieve a significant increase in accuracy when subjected to attacks of different intensities. Moreover, DVars are computationally efficient, scalable, and do not require access to AEs.
      Citation: Future Internet
      PubDate: 2023-12-16
      DOI: 10.3390/fi15120405
      Issue No: Vol. 15, No. 12 (2023)
       
  • Future Internet, Vol. 15, Pages 406: Decentralized Storage with Access
           Control and Data Persistence for e-Book Stores

    • Authors: Keigo Ogata, Satoshi Fujita
      First page: 406
      Abstract: The e-book services we use today have a serious drawback in that we will no longer be able to read the books we have purchased when the service is terminated. One way to solve this problem is to build a decentralized system that does not depend on a specific company or organization by combining smart contracts running on the Ethereum blockchain and distributed storage such as IPFS. However, a simple combination of existing technologies does not make the stored e-book data persistent, so the risk of purchased e-books becoming unreadable remains. In this paper, we propose a decentralized distributed storage called d-book-repository, which provides both access management and data durability for purchased e-books. This system uses NFTs as access rights to realize strict access control by preventing clients who do not have NFTs from downloading e-book data. In addition, e-book data stored on storage nodes in the distributed storage is divided into shards using Reed–Solomon codes, and each storage node stores only a single shard, thereby preventing the creation of nodes that can restore the entire content from locally stored data. The storage of each shard is not handled by a single node but by a group of nodes, and the shard is propagated to all nodes in the group using the gossip protocol, where erasure codes are utilized to increase the resilience against node departure. Furthermore, an incentive mechanism to encourage participation as a storage node is implemented using smart contracts. We built a prototype of the proposed system on AWS and evaluated its performance. The results showed that both downloading and uploading 100 MB of e-book data (equivalent to one comic book) were completed within 10 s using an instance type of m5.xlarge. 
This value is only 1.3 s longer for downloading and 2.2 s longer for uploading than the time required for a simple download/upload without access control, confirming that the overhead associated with the proposed method is sufficiently small.
      Citation: Future Internet
      PubDate: 2023-12-18
      DOI: 10.3390/fi15120406
      Issue No: Vol. 15, No. 12 (2023)
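The paper shards e-book data with Reed–Solomon codes; a full RS codec needs a dedicated coding library, so this sketch shows the simplest erasure code instead: k data shards plus one XOR parity shard, from which any single lost shard can be rebuilt. The function names and parameters are ours, not the system's.

```python
# Single-parity erasure coding sketch (stand-in for Reed–Solomon sharding).

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_shards(data: bytes, k: int):
    size = -(-len(data) // k)  # ceiling division
    padded = data.ljust(size * k, b"\0")
    shards = [padded[i * size:(i + 1) * size] for i in range(k)]
    parity = shards[0]
    for shard in shards[1:]:
        parity = xor_bytes(parity, shard)
    return shards, parity

def recover(shards, parity):
    # Exactly one entry of `shards` is None; XOR-ing the parity with the
    # surviving shards reconstructs the missing one.
    acc = parity
    for shard in shards:
        if shard is not None:
            acc = xor_bytes(acc, shard)
    return acc

shards, parity = make_shards(b"a small e-book payload", k=4)
lost = list(shards)
lost[2] = None
rebuilt = recover(lost, parity)
```

Reed–Solomon generalizes this from one parity shard to m, tolerating any m lost shards out of k + m, which is what lets d-book-repository survive node departures without any node holding the whole book.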
       
  • Future Internet, Vol. 15, Pages 407: HeFUN: Homomorphic Encryption for
           Unconstrained Secure Neural Network Inference

    • Authors: Duy Tung Khanh Nguyen, Dung Hoang Duong, Willy Susilo, Yang-Wai Chow, The Anh Ta
      First page: 407
      Abstract: Homomorphic encryption (HE) has emerged as a pivotal technology for secure neural network inference (SNNI), offering privacy-preserving computations on encrypted data. Despite active developments in this field, HE-based SNNI frameworks are impeded by three inherent limitations. Firstly, they cannot evaluate non-linear functions such as ReLU, the most widely adopted activation function in neural networks. Secondly, the permitted number of homomorphic operations on ciphertexts is bounded, consequently limiting the depth of neural networks that can be evaluated. Thirdly, the computational overhead associated with HE is prohibitively high, particularly for deep neural networks. In this paper, we introduce a novel paradigm designed to address the three limitations of HE-based SNNI. Our approach, called iLHE, is an interactive approach based solely on HE. Utilizing the idea of iLHE, we present two protocols: HeReLU, which facilitates the direct evaluation of the ReLU function on encrypted data, tackling the first limitation, and HeRefresh, which extends the feasible depth of neural network computations and mitigates the computational overhead, thereby addressing the second and third limitations. Based on the HeReLU and HeRefresh protocols, we build a new framework for SNNI, named HeFUN. We prove that our protocols and the HeFUN framework are secure in the semi-honest security model. Empirical evaluations demonstrate that HeFUN surpasses current HE-based SNNI frameworks in multiple aspects, including security, accuracy, the number of communication rounds, and inference latency. Specifically, for a convolutional neural network with four layers on the MNIST dataset, HeFUN achieves 99.16% accuracy with an inference latency of 1.501 s, surpassing the popular HE-based framework CryptoNets proposed by Gilad-Bachrach et al., which achieves 98.52% accuracy with an inference latency of 3.479 s.
      Citation: Future Internet
      PubDate: 2023-12-18
      DOI: 10.3390/fi15120407
      Issue No: Vol. 15, No. 12 (2023)
       
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
 


JournalTOCs © 2009-