Abstract: Connected vehicles run software that must be updated to fix vulnerabilities or add new functionality. While over-the-air updates spare car owners the need to bring their vehicles to a service center, they introduce significant security risks. This paper proposes a vehicular over-the-air update architecture combining the two most widely adopted trusted execution environment (TEE) solutions: Intel SGX at the server and ARM TrustZone at the client. The main contribution is the protection of software updates from attackers who control the entire operating system at both ends, aiming to reverse engineer the software or introduce malicious behavior. The implementation uses a device with OP-TEE and a software repository implemented with CACIC-DevKit. The paper also extends our previous work by evaluating an alternative server implementation using Gramine-SGX. Our experiments reveal that the impact of the TEE is negligible, even for small software block transfers. Compared with CACIC-DevKit, Gramine-SGX doubles the latency, despite simplifying development. This indicates that CACIC-DevKit is better suited to high-mobility scenarios, such as vehicular networks, where the connection with the server may be short-lived. PubDate: 2025-05-06
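A minimal sketch of the generic integrity check that underlies any such update pipeline: the repository signs each software block and the client refuses unverified blocks. This uses Ed25519 via Python's cryptography package and a hypothetical block format; it illustrates signed updates in general, not the paper's SGX/TrustZone protocol.

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Repository side (in the paper's setting this would run inside the
    # SGX enclave): sign the hash of each software block before transfer.
    repo_key = Ed25519PrivateKey.generate()
    block = b"software-update-block-0"
    signature = repo_key.sign(hashlib.sha256(block).digest())

    # Vehicle side (would run in the TrustZone secure world via OP-TEE):
    # refuse to install any block whose signature does not verify.
    public_key = repo_key.public_key()
    try:
        public_key.verify(signature, hashlib.sha256(block).digest())
        print("block authentic: safe to install")
    except InvalidSignature:
        print("block rejected")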
Abstract: In the realm of federated learning, a collaborative yet decentralized approach to machine learning, communication efficiency is a critical concern, particularly under constraints of limited bandwidth and resources. This paper evaluates FedSNIP, a novel method that leverages the SNIP (Single-shot Network Pruning based on Connection Sensitivity) technique within this context. By utilizing SNIP, FedSNIP effectively prunes neural networks, converting numerous weights to zero and resulting in sparser weight representations. This substantial reduction in weight density significantly decreases the volume of parameters that need to be communicated to the server, thereby reducing the communication overhead. Our experiments on the CIFAR-10 and UCI-HAR datasets demonstrate that FedSNIP not only lowers the data transmission between clients and the server but also maintains competitive model accuracy, comparable to conventional federated learning models. Additionally, we analyze various compression algorithms applied after pruning, specifically evaluating the compressed sparse row, coordinate list, and compressed sparse column formats to identify the most efficient approach. Our results show that compressed sparse row compresses the data both faster and more effectively, achieving the highest reduction in data size and making it the most suitable format for enhancing the efficiency of federated learning, particularly in scenarios with restricted communication capabilities. PubDate: 2025-05-05
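A rough sketch of the format comparison described above, approximating SNIP's connection-sensitivity pruning with plain magnitude pruning (an assumption made for brevity) and measuring the storage footprint of one pruned layer in the three SciPy sparse formats:

    import numpy as np
    from scipy import sparse

    rng = np.random.default_rng(0)
    weights = rng.normal(size=(512, 512)).astype(np.float32)

    # Stand-in for SNIP: zero out the 90% of weights with smallest magnitude.
    threshold = np.quantile(np.abs(weights), 0.90)
    weights[np.abs(weights) < threshold] = 0.0

    print("dense bytes:", weights.nbytes)
    for name, fmt in [("CSR", sparse.csr_matrix),
                      ("COO", sparse.coo_matrix),
                      ("CSC", sparse.csc_matrix)]:
        m = fmt(weights)
        if name == "COO":
            size = m.data.nbytes + m.row.nbytes + m.col.nbytes
        else:
            size = m.data.nbytes + m.indices.nbytes + m.indptr.nbytes
        print(name, "bytes:", size)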
Abstract: Reconfigurable Intelligent Surfaces (RIS) enhance wireless communication by dynamically controlling electromagnetic waves. When combined with Non-Orthogonal Multiple Access (NOMA), RIS optimizes spectrum usage, allowing multiple users to share the same frequency via power domain multiplexing. This integration improves network capacity, spectral efficiency, and signal quality while reducing interference. Adding vibration-based energy harvesting to RIS with NOMA enables sustainable, autonomous operation in power-limited environments. This paper examines the potential of integrating RIS, NOMA, and vibration energy harvesting to advance sustainable, efficient wireless communication. PubDate: 2025-04-29
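The power-domain multiplexing at the heart of NOMA can be sketched for a two-user downlink: the far user receives the larger power share and decodes directly, while the near user applies successive interference cancellation (SIC). The noiseless BPSK toy below shows only the superposition and SIC steps; the RIS phase control and energy harvesting discussed in the paper are not modeled.

    import numpy as np

    p_far, p_near = 0.8, 0.2                   # assumed power split (sums to 1)
    x_far = np.random.choice([-1, 1], 1000)    # BPSK symbols, far user
    x_near = np.random.choice([-1, 1], 1000)   # BPSK symbols, near user
    s = np.sqrt(p_far) * x_far + np.sqrt(p_near) * x_near   # superposition

    # Far user: decode directly, treating the near user's signal as noise.
    x_far_hat = np.sign(s)
    # Near user (SIC): decode the far user's symbol, subtract it, then decode.
    residual = s - np.sqrt(p_far) * np.sign(s)
    x_near_hat = np.sign(residual)

    print("far-user errors:", int(np.sum(x_far_hat != x_far)))
    print("near-user errors:", int(np.sum(x_near_hat != x_near)))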
Abstract: Transferring massive datasets in data-intensive science (DIS) systems often relies on physical WAN infrastructure for network connectivity. This infrastructure is typically provided by various National Research and Education Networks (NRENs), including ESnet, GÉANT, Internet2, and RNP. Studying these systems presents a significant challenge due to their complexity, scale, and the numerous factors influencing data transport. Traditionally, network performance studies focus on a single bottleneck. In contrast, the Quantitative Theory of Bottleneck Structures (QTBS) provides a mathematical framework that analyzes performance through the network’s entire bottleneck structure, offering valuable insights for optimizing and understanding overall network performance. This paper tackles these challenges by employing QTBS and by deploying and evaluating a virtual infrastructure for data transport within a national-scale WAN. Our approach focuses on three key aspects: (i) assessing flow completion times related to bandwidth allocation for interdependent transfers within a network slice, (ii) evaluating the performance of TCP congestion control algorithms (BBR versus Cubic) for data transport, and (iii) conducting QTBS analysis to compute flow allocation shares, ultimately aiming for an optimal design. Results show that BBR outperforms Cubic in scenarios with a high number of threads and large data volumes, and that the number of threads strongly influences performance. PubDate: 2025-04-24
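A hedged way to see the bottleneck-structure idea behind QTBS is the classical progressive-filling algorithm for max-min fair allocation, which saturates bottleneck links one at a time. The sketch below is the textbook procedure on an invented two-link topology, not the paper's QTBS computation.

    def max_min_fair(links, flows):
        """links: {link: capacity}; flows: {flow: set of links it crosses}."""
        alloc = {f: 0.0 for f in flows}
        remaining = dict(links)
        active = set(flows)
        while active:
            # Fill rate of each unsaturated link if shared equally.
            fill = {l: remaining[l] / sum(1 for f in active if l in flows[f])
                    for l in remaining if any(l in flows[f] for f in active)}
            bottleneck = min(fill, key=fill.get)   # next link to saturate
            rate = fill[bottleneck]
            for f in active:
                alloc[f] += rate
                for l in flows[f]:
                    remaining[l] -= rate
            # Flows crossing the saturated bottleneck are frozen.
            active = {f for f in active if bottleneck not in flows[f]}
            del remaining[bottleneck]
        return alloc

    # Flow f2 crosses a tight link "B", so f1 inherits the slack on "A".
    print(max_min_fair({"A": 10.0, "B": 3.0},
                       {"f1": {"A"}, "f2": {"A", "B"}}))   # f1: 7.0, f2: 3.0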
Abstract: Hadoop Distributed File System (HDFS) is known for its specialized strategies and policies tailored to enhance replica placement. This capability is critical for ensuring efficient and reliable access to data replicas, particularly as HDFS operates best when data are evenly distributed within the cluster. In this paper, we build upon earlier practical evaluations and conduct a thorough analysis of the replica balancing process in HDFS, focusing on two critical performance metrics: stability and efficiency. We evaluate these aspects alongside balancing operational cost by contrasting them with conventional HDFS solutions and employing a novel dynamic architecture for data replica balancing. On top of that, we delve into the optimizations in data locality brought about by effective replica balancing and their benefits for data-intensive applications, including enhanced read performance. Our findings reveal the extent to which data imbalance in HDFS directly affects the file system and highlight the struggles of the default replica placement policy in maintaining cluster balance. We also examine the genuine but intricate and short-lived effectiveness of on-demand balancing, underscoring the importance of regular and adaptable balancing interventions. This reaffirms the significance of context-aware replica balancing, as provided by the proposed dynamic architecture, not only for maintaining data equilibrium but also for ensuring efficient system performance. PubDate: 2025-04-23
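For context on the baseline being contrasted, the stock HDFS Balancer groups DataNodes by how far their utilization deviates from the cluster average given a threshold (10% by default), then moves blocks from over- to under-utilized nodes. The sketch below reproduces only that classification step with invented numbers; the proposed dynamic architecture is not modeled.

    def classify_nodes(used, capacity, threshold=0.10):
        """Group DataNodes the way the stock HDFS Balancer does."""
        avg = sum(used.values()) / sum(capacity.values())
        groups = {"over": [], "above_avg": [], "below_avg": [], "under": []}
        for node in used:
            u = used[node] / capacity[node]
            if u > avg + threshold:
                groups["over"].append(node)
            elif u > avg:
                groups["above_avg"].append(node)
            elif u >= avg - threshold:
                groups["below_avg"].append(node)
            else:
                groups["under"].append(node)
        return groups

    print(classify_nodes(used={"dn1": 90, "dn2": 55, "dn3": 10},
                         capacity={"dn1": 100, "dn2": 100, "dn3": 100}))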
Abstract: This paper proposes an inter-core crosstalk (XT)-aware circuit reallocation algorithm for spatial division multiplexed elastic optical networks (SDM-EONs). Unlike previous studies that utilize reallocation primarily for spectral defragmentation, this work focuses on circuit reallocation to mitigate XT, thereby reducing or preventing network blocking. The algorithm is triggered whenever a request is blocked, making it a reactive approach. The push-pull and fast-switching techniques are employed for data traffic migration, ensuring a seamless transition without service interruption. Furthermore, the proposed method is evaluated against other algorithms designed to mitigate inter-core crosstalk, considering the NSFNET, EON, and JPN network topologies. In terms of bandwidth blocking probability, the results demonstrate a reduction of at least 65%, with a maximum of 0.25% of active circuits reallocated per process. PubDate: 2025-04-22
Abstract: The emerging field of the quantum internet offers multiple applications, enabling quantum communication across diverse networks. However, current entanglement networks exhibit complex processes, characterized by variable entanglement generation rates, limited quantum memory capacity, and susceptibility to decoherence. Addressing these issues, we propose a two-stage routing system that harnesses the power of reinforcement learning (RL). The first stage focuses on identifying the most efficient routes for quantum data transmission. The second stage concentrates on establishing these routes and deciding how and when to apply entanglement swapping and purification. Our extensive evaluations across various network sizes and configurations reveal that our method not only sustains superior end-to-end route fidelity but also achieves significantly higher request success rates compared to traditional methods. These findings highlight the efficacy of our approach in managing the complex dynamics of quantum networks, ensuring robust and scalable quantum communication. Our method’s adaptability to changing network conditions and its proactive management of quantum resources make an important contribution to quantum network efficiency. PubDate: 2025-04-22
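The first stage (route identification) is often approximated by noting that if end-to-end fidelity is modeled as the product of per-link fidelities, maximizing it becomes a shortest-path search under -log weights. The sketch below illustrates that common simplification with networkx on an invented topology; it is not the paper's RL policy.

    import math
    import networkx as nx

    G = nx.Graph()
    for u, v, fid in [("A", "B", 0.98), ("B", "C", 0.97),
                      ("A", "D", 0.90), ("D", "C", 0.99)]:
        # The product-of-fidelities objective becomes additive under -log.
        G.add_edge(u, v, fidelity=fid, cost=-math.log(fid))

    path = nx.shortest_path(G, "A", "C", weight="cost")
    fidelity = math.prod(G[u][v]["fidelity"] for u, v in zip(path, path[1:]))
    print(path, round(fidelity, 4))   # ['A', 'B', 'C'] 0.9506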
Abstract: Federated learning (FL) allows devices to train a machine learning model collaboratively without compromising data privacy. In wireless networks, FL presents challenges due to limited resources and the unstable nature of transmission channels, which can cause delays and errors that compromise the consistency of global model updates. Furthermore, efficient allocation of communication resources is crucial in Internet of Things (IoT) environments, where devices often have limited energy capacity. This work introduces a novel FL algorithm called DFed-w_Opt^DP, designed for wireless networks within the IoT framework. This algorithm incorporates a device selection mechanism that evaluates the quality of each device's data distribution and its connection quality with the aggregation server. By optimizing the power allocation for each device, DFed-w_Opt^DP minimizes overall energy consumption while enhancing the success rate of transmissions. The simulation results demonstrate that DFed-w_Opt^DP effectively operates with low transmission power while preserving the accuracy of the global model compared to other algorithms. PubDate: 2025-04-21
Abstract: Spatial division multiplexing elastic optical networks (SDM-EONs) based on multicore fibers (MCFs) are a technology that can handle the Internet’s growing traffic demand. However, SDM-EONs present implementation challenges, such as physical layer impairments (PLI) and spectrum fragmentation. This paper proposes the fragmentation-aware and PLI-aware algorithm (FXAA) to solve the core and spectrum assignment problem in MCF-based SDM-EONs. FXAA implements a low-cost PLI-aware mechanism to select lightpaths with low inter- and intra-core impairment incidence, ensuring the quality of transmission (QoT) of the network lightpaths. In addition, FXAA clusters lightpaths with the same number of frequency slots to reduce spectrum fragmentation. The numerical results show that, compared with nine other algorithms proposed in the literature, FXAA reduces the circuit blocking probability by at least 33.36% and the bandwidth blocking probability by at least 17.99%, and increases spectral utilization by at least 1.08%. PubDate: 2025-04-16
Abstract: Demand for services that require low latency is increasing, and edge computing, which processes workloads on servers located geographically close to the user, is being actively researched. To offload workloads from user devices to edge servers with low latency, packets must be forwarded with low latency on a general-purpose server, and performance-oriented methods that busy-poll for received packets, such as the Data Plane Development Kit (DPDK), are widely used at the expense of higher power consumption. However, in today’s large-scale services with many servers, even a slight increase in power consumption on each server wastes tremendous power overall. In this paper, we design and implement a packet-processing system on a general-purpose server that achieves power saving while maintaining low latency. To avoid the power wasted by busy polling, a receiving thread in the proposed system can sleep when no packet arrives and be woken up without delay by the hardware interrupt raised when a packet arrives. In addition, to enhance the power-saving effect of sleeping, we design and implement a CPU idle control method that enables CPU cores used by receiving threads to enter an appropriate C-state in accordance with traffic load. We evaluate the proposed system in an environment that simulates a virtualized Radio Access Network (vRAN) system, which has strict latency requirements for network processing on a general-purpose server. The evaluation results demonstrate that the proposed system reduces power consumption compared with a busy-polling system while degrading average latency by only a few microseconds. PubDate: 2025-04-03
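The core mechanism, a receive thread that sleeps until the hardware signals an arrival instead of spinning, has an everyday analogue in epoll-style event waiting. The sketch below contrasts the two styles with Python's selectors module on a UDP socket; the actual system works at the DPDK/driver level and additionally programs C-states, which is not modeled here.

    import selectors
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", 9000))
    sock.setblocking(False)

    # Busy polling (DPDK-style) would spin on recvfrom() and burn a core:
    #   while True:
    #       try:
    #           data, addr = sock.recvfrom(2048)
    #       except BlockingIOError:
    #           continue

    # Event-driven (proposed-system-style): the thread sleeps inside
    # select() and is woken by the kernel when a packet arrives.
    sel = selectors.DefaultSelector()
    sel.register(sock, selectors.EVENT_READ)
    while True:
        for key, _ in sel.select(timeout=None):
            data, addr = key.fileobj.recvfrom(2048)
            print("got", len(data), "bytes from", addr)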
Abstract: 5G vehicle-to-everything (V2X) connectivity plays a fundamental role in enabling advanced vehicular networks within intelligent transportation systems (ITS). However, challenges such as unreliable connections between vehicles and the substantial signaling overhead of centralized resource distribution methods impede the efficiency of V2X communication systems, especially in safety-critical applications. This study critically explores the limitations of centralized resource management in 5G-V2X, focusing on issues of resource scarcity and allocation inefficiencies. In response to these challenges, our approach focuses on optimizing resource utilization within the constraints of limited resources. The article introduces innovative strategies to enhance V2X service satisfaction, emphasizing the efficient allocation of resources for different service classes. Simulations showcase the impact of our tailored approach on resource utilization and satisfaction rates, shedding light on potential improvements in scenarios with constrained resources. PubDate: 2025-04-02
Abstract: Precision agriculture (PA) and plant disease detection (PDD) are essential for crop health and yield. Unfortunately, current PDD algorithms are trained and deployed with perfect plant images. This is impractical, since PA sensor networks (PANs) transfer imperfect data due to wireless communication imperfections, such as channel estimation errors and noise, as well as hardware imperfections. To capture the influence of channel imperfections and combat their effect, this work considers on-site and/or off-site PDD implementation using plant image data transferred over a multi-path imperfect PAN. Both traditional decode-and-forward (DF) data routing and a channel-effect-aware machine learning autoencoder for multi-path data routing are used for image data transmission. The multi-path DF data routing considers equal gain combining (EGC) and maximum ratio combining (MRC) techniques at the destination gateway for data decoding. In addition, a PDD deep learning algorithm is developed to predict whether or not a farm plant is diseased, using the noisy image data captured by the multi-path data routing PAN. In simulations of the integrated PAN-PDD system, the proposed ML multi-path PAN-PDD algorithms (i.e., EGC and MRC) are compared to the ML single-path PAN-PDD algorithm and the traditional single-path PAN-PDD system. The simulation results show that the multi-path approach outperforms the other DF PAN-PDD systems. Incorporating the channel effects in designing an intelligent wireless data transfer solution improves communication system performance in PDD implementation. PubDate: 2025-04-01
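The two combining rules compared at the destination gateway are standard: MRC weights each path by the conjugate of its channel gain, while EGC only co-phases the paths. A minimal numpy sketch under an assumed flat Rayleigh fading model (not the paper's full PAN simulation):

    import numpy as np

    rng = np.random.default_rng(1)
    n_paths, n_sym = 4, 1000
    s = rng.choice([-1.0, 1.0], n_sym)                     # BPSK symbols
    h = (rng.normal(size=(n_paths, 1)) +
         1j * rng.normal(size=(n_paths, 1))) / np.sqrt(2)  # Rayleigh gains
    noise = 0.3 * (rng.normal(size=(n_paths, n_sym)) +
                   1j * rng.normal(size=(n_paths, n_sym)))
    y = h * s + noise                                      # per-path receptions

    z_mrc = np.sum(np.conj(h) * y, axis=0)                 # maximum ratio combining
    z_egc = np.sum(np.exp(-1j * np.angle(h)) * y, axis=0)  # equal gain combining

    for name, z in [("MRC", z_mrc), ("EGC", z_egc)]:
        print(name, "bit errors:", int(np.sum(np.sign(z.real) != s)))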
Abstract: Emotion recognition (ER) is a pivotal discipline in the field of contemporary human–machine interaction. Its primary objective is to explore and advance theories, systems, and methodologies that can effectively recognize, comprehend, and interpret human emotions. This research investigates both unimodal and bimodal strategies for ER using advanced feature embeddings for audio and text data. We leverage pretrained models, ImageBind for speech and RoBERTa (alongside traditional TF-IDF embeddings) for text, to achieve accurate recognition of emotional states. A variety of machine learning (ML) and deep learning (DL) algorithms were implemented to evaluate their performance in speaker-dependent (SD) and speaker-independent (SI) scenarios. Additionally, three feature fusion methods, namely early fusion, majority voting fusion, and stacking ensemble fusion, were employed for the bimodal emotion recognition (BER) task. Extensive numerical simulations were conducted to systematically address the complexities and challenges associated with both unimodal and bimodal ER. Our most remarkable findings demonstrate an accuracy of 86.75% in the SD scenario and 64.04% in the SI scenario on the IEMOCAP database for the proposed BER system. PubDate: 2025-04-01
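Of the three fusion methods, majority voting is the simplest to make concrete: each modality votes with its predicted label, and a two-voter tie can be broken by the more confident model. The toy sketch below assumes per-class probabilities from an audio and a text classifier; it is illustrative and not tied to the paper's IEMOCAP pipeline.

    import numpy as np

    # Hypothetical per-utterance class probabilities (4 emotion classes).
    p_audio = np.array([[0.6, 0.2, 0.1, 0.1],
                        [0.1, 0.5, 0.3, 0.1]])
    p_text = np.array([[0.2, 0.5, 0.2, 0.1],
                       [0.2, 0.6, 0.1, 0.1]])

    votes_audio = p_audio.argmax(axis=1)
    votes_text = p_text.argmax(axis=1)

    # Agreement wins; on disagreement, trust the more confident modality.
    fused = np.where(
        votes_audio == votes_text,
        votes_audio,
        np.where(p_audio.max(axis=1) >= p_text.max(axis=1),
                 votes_audio, votes_text))
    print(fused)   # [0 1]: utterance 0 decided by confidence, 1 by agreement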
Abstract: This study presents the covert performance analysis of an intelligent omni-surface (IOS)-assisted non-orthogonal multiple access (NOMA) network, considering the effects of correlated channels and imperfections in successive interference cancellation (SIC). Recent advancements in device miniaturization have enabled the integration of a large number of IOS elements within a small area. As a result, the channels are no longer independent and exhibit some correlation. This introduces new challenges in channel modeling and analysis. By deriving the statistics of the correlated channel gains, we propose suitable distribution models for the warden (Willie) and the covert user (Bob) under different levels of correlation. The validity of these distribution models is confirmed using quantile-quantile (Q-Q) plots. We analytically derive Willie’s detection error probability (DEP), assuming the worst-case scenario where an optimal detection threshold is employed. Next, we calculate the optimal NOMA power allocation coefficients and phase shift matrices of the IOS to maximize the covert rate. Simulation results show that in the high-SNR regime, the covert rate of the proposed IOS-NOMA network increases by 4 bps/Hz each time the inter-element spacing is halved. Additionally, we derive a closed-form expression for the covert outage probability, validated through Monte Carlo simulations. PubDate: 2025-03-31
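Willie's detection error probability under an optimal threshold can be illustrated with a simple radiometer detector: estimate false-alarm and missed-detection rates over a grid of thresholds and take the minimum of their sum. The Monte Carlo sketch below assumes uncorrelated complex Gaussian noise and a fixed received covert power, not the correlated IOS channel statistics derived in the paper.

    import numpy as np

    rng = np.random.default_rng(2)
    trials, n = 20000, 64         # Monte Carlo trials, samples per observation
    sigma2, p_cov = 1.0, 0.05     # noise power and covert signal power (assumed)

    def avg_power(extra=0.0):
        y = (rng.normal(size=(trials, n)) +
             1j * rng.normal(size=(trials, n))) * np.sqrt(sigma2 / 2)
        return np.mean(np.abs(y) ** 2, axis=1) + extra

    T0 = avg_power()              # H0: no covert transmission
    T1 = avg_power(p_cov)         # H1: covert transmission adds average power
    thresholds = np.linspace(T0.min(), T1.max(), 500)
    dep = min((T0 > t).mean() + (T1 <= t).mean() for t in thresholds)
    print("DEP at Willie's optimal threshold (P_FA + P_MD):", round(dep, 3))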
Abstract: This paper, an extended version of a communication presented at the ISIVC’2024 conference, deals with security issues in software-defined networks (SDNs); it introduces a Distributed Denial of Service (DDoS) detection system leveraging deep learning (DL). The main objective is to enhance SDN security by accurately classifying DDoS attacks, improving efficiency, particularly for zero-day attack detection, and enabling targeted mitigation strategies. Our contribution focuses on refining a hybrid DL model with a novel architecture that applies its algorithms simultaneously to distinguish normal SDN traffic from five carefully selected attack classes covering various attack kinds, using an optimized CIC-DDoS2019 dataset for more efficient classification. Compared to the conference paper, the model has been reinforced by attention mechanisms and transformer architectures, in addition to layer adjustments and hyper-parameter re-tuning. Additionally, the previously used training and testing data have been combined and split into three sets: 70% for training, 15% for validation (continuous partial evaluation), and 15% for final testing. The resulting hybrid DNN-LSTM solution showed continuously improving validation accuracy during training, reaching nearly 99%, and achieved a final testing accuracy of 98.84%. The improved model is suitable for real-world SDN systems, and its deployment, potential challenges, and practical benefits are discussed. PubDate: 2025-03-29
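A hedged backbone for such a hybrid DNN-LSTM classifier (without the attention and transformer additions described above) might look like the Keras sketch below. The six-class softmax matches normal traffic plus five attack classes; the input shape, layer widths, and placeholder data are assumptions.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers

    # Assumed shape: sequences of 10 time steps x 20 flow features.
    inputs = layers.Input(shape=(10, 20))
    x = layers.LSTM(64)(inputs)                         # temporal branch
    x = layers.Dense(128, activation="relu")(x)         # DNN head
    x = layers.Dropout(0.3)(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(6, activation="softmax")(x)  # normal + 5 attacks

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # 70/15/15 split as in the paper; the data here is a random placeholder.
    X = np.random.rand(1000, 10, 20).astype("float32")
    y = np.random.randint(0, 6, size=1000)
    n_tr, n_val = 700, 150
    model.fit(X[:n_tr], y[:n_tr],
              validation_data=(X[n_tr:n_tr + n_val], y[n_tr:n_tr + n_val]),
              epochs=3, verbose=0)
    print(model.evaluate(X[n_tr + n_val:], y[n_tr + n_val:], verbose=0))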
Abstract: This research paper introduces a reconfigurable and compact microstrip patch antenna designed and optimized for sub-6 GHz frequency bands, in line with advancements toward millimeter-wave applications. The proposed antenna operates within the 2412–2484 MHz band for WiFi and the 3300–3800 MHz band for 5G mobile communications. The antenna features a circular patch structure with compact dimensions, facilitating integration into miniature components and devices for wireless applications. To achieve frequency reconfigurability, a PIN diode is used as the switching element. The antenna dimensions were optimized and simulated using CST and HFSS software, and the simulation results were validated through measurements of the manufactured antenna. The antenna was fabricated on an FR4 epoxy substrate with a relative permittivity of 4.4 and a thickness of 1.6 mm. PubDate: 2025-03-25
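Before CST/HFSS optimization, a first-cut radius for a circular patch can be estimated with the standard design equation from Balanis. The sketch below applies it to representative centers of the two stated bands on the stated substrate (relative permittivity 4.4, height 1.6 mm); the published antenna's optimized dimensions will differ.

    import math

    def circular_patch_radius(f_hz, eps_r, h_cm):
        """Balanis circular-patch design equation; returns radius in cm."""
        F = 8.791e9 / (f_hz * math.sqrt(eps_r))
        return F / math.sqrt(
            1 + (2 * h_cm / (math.pi * eps_r * F))
            * (math.log(math.pi * F / (2 * h_cm)) + 1.7726))

    for f in (2.45e9, 3.55e9):   # approximate WiFi and 5G band centers
        a = circular_patch_radius(f, eps_r=4.4, h_cm=0.16)
        print(f"{f / 1e9:.2f} GHz -> radius ~ {a * 10:.1f} mm")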
Abstract: This manuscript proposes a novel generalized Reed-Solomon coded cooperative differential spatial modulation (GRSCC-DSM) scheme for small-scale wireless communication over quasi-static Rayleigh fading channels. Specifically, two generalized Reed-Solomon (GRS) codes are distributed at the source and relay terminals, where the relay’s information depends on the message from the source. Moreover, the differential spatial modulation (DSM) technique is employed in the proposed scheme to obtain additional diversity gain. Compared to optimal maximum likelihood (ML) detection for the DSM, a reduced-complexity Euclidean distance superposition (EDS) detection algorithm is proposed to reduce system complexity with minimal effect on performance. Besides, based on the unique algebraic structure of the GRS code and the consecutive relay symbol selection, a joint decoding algorithm is developed at the destination to further improve the bit-error-rate (BER) performance. The Monte-Carlo simulation results indicate that, under identical conditions in high signal-to-noise-ratio (SNR) regions, the proposed GRSCC-DSM scheme with the novel joint decoding outperforms the corresponding non-cooperative counterpart by over 1.8 dB and existing comparable coded cooperative schemes by more than 1.6 dB. PubDate: 2025-03-25
Abstract: Accurate prediction of mobility coordinates (x and y) is essential for effective transportation planning, urban development, and mobile network optimization. This study presents Tri-Sequence Temporal Network (TriSeqNet), an innovative architecture that synergizes the capabilities of bidirectional long short-term memory (BiLSTM), residual gated recurrent units (Residual GRU), and temporal convolutional networks (TCN) to concurrently predict x and y coordinates. Our approach outperforms existing methods by leveraging the combined strengths of these advanced neural network models. The performance of TriSeqNet is evaluated using traditional metrics such as mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE), as well as the coefficient of determination (R²) and explained variance (EV). This comprehensive evaluation framework demonstrates the robustness and accuracy of the proposed model in various predictive scenarios. PubDate: 2025-03-22
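The evaluation protocol maps directly onto scikit-learn's metric functions; the sketch below computes all five scores for a pair of (x, y) coordinate series using placeholder values, not the paper's data.

    import numpy as np
    from sklearn.metrics import (explained_variance_score, mean_absolute_error,
                                 mean_absolute_percentage_error,
                                 mean_squared_error, r2_score)

    y_true = np.array([[10.0, 5.0], [12.0, 6.0], [13.5, 8.0]])  # (x, y) pairs
    y_pred = np.array([[10.5, 4.8], [11.7, 6.3], [13.9, 7.6]])

    print("MAE :", mean_absolute_error(y_true, y_pred))
    print("RMSE:", np.sqrt(mean_squared_error(y_true, y_pred)))
    print("MAPE:", mean_absolute_percentage_error(y_true, y_pred))
    print("R2  :", r2_score(y_true, y_pred))
    print("EV  :", explained_variance_score(y_true, y_pred))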
Abstract: Digital watermarking is a technique for protecting digital content from unauthorized copying and distribution by embedding a unique identifier into the content. In this research paper, we propose a method for embedding digital watermarks using the multi-resolution wavelet transform (MWT) and discrete cosine transform (DCT) for the detection of copyright infringement. The proposed method uses the MWT to decompose the original image into different sub-bands; the DCT is then applied to embed the watermark into the high-frequency sub-bands. The synergy of MWT and DCT presents a formidable defense against cropping attacks, making the method a vital tool for applications that require secure and resilient information embedding in images. The proposed method is evaluated on various images and compared with other watermarking techniques. Experimental results show that the proposed method is robust against extreme cropping attacks, even in the presence of noise. PubDate: 2025-03-13
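The embedding pipeline can be sketched with PyWavelets and SciPy: a one-level 2D DWT, a DCT on a high-frequency sub-band, additive embedding, then the inverse transforms. The wavelet family, embedding strength, and coefficient region below are assumptions for illustration, not the paper's exact parameters.

    import numpy as np
    import pywt
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(0)
    image = rng.uniform(0, 255, size=(256, 256))        # placeholder host image
    watermark = rng.choice([-1.0, 1.0], size=(32, 32))  # +/-1 watermark bits
    alpha = 5.0                                         # strength (assumed)

    # One-level DWT: LL approximation plus (LH, HL, HH) detail sub-bands.
    LL, (LH, HL, HH) = pywt.dwt2(image, "haar")

    # Embed in the DCT domain of the diagonal high-frequency sub-band.
    C = dctn(HH, norm="ortho")
    C[:32, :32] += alpha * watermark
    HH_marked = idctn(C, norm="ortho")

    watermarked = pywt.idwt2((LL, (LH, HL, HH_marked)), "haar")
    print("max pixel change:", np.abs(watermarked - image).max())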
Abstract: Mobile Cloud Computing (MCC) overcomes various challenges related to processing capacity, storage space, and battery life. However, accessing cloud services in MCC introduces additional complexities due to the involvement of third parties. Among the existing mechanisms, authentication is the most widely used method for securely accessing information. Single-factor authentication (for example, the use of passwords) and even two-factor authentication are not sufficient to ensure strong security. Multi-factor authentication therefore ensures a higher level of protection by extending single-factor or two-factor authentication. To address this, in this work we propose an architecture for multiple-factor mutual authentication applications. We introduce a new factor: a one-time password (OTP) calculated using an image chosen by the mobile user. Additionally, we propose a new multiple-factor mutual authentication scheme based on smart card technology. To prove the security and safety of the proposed protocol, we conducted a formal security analysis using the AVISPA (Automated Validation of Internet Security Protocols and Applications) tool. The performance analysis demonstrates the efficiency of our proposed scheme in terms of computational overhead compared to related existing schemes. Experimental results, developed in the MATLAB environment, further evaluate and validate the effectiveness of the proposed scheme. PubDate: 2025-03-11
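The abstract does not specify how the OTP is derived from the chosen image, so the sketch below shows one plausible construction: hash the image bytes into an HMAC key, then apply RFC 4226-style dynamic truncation over a time counter. This is an assumption-laden illustration, not the paper's protocol.

    import hashlib
    import hmac
    import struct
    import time

    def image_seeded_otp(image_bytes: bytes, period: int = 30,
                         digits: int = 6) -> str:
        """TOTP-like code keyed by a hash of the user's chosen image."""
        key = hashlib.sha256(image_bytes).digest()
        counter = struct.pack(">Q", int(time.time()) // period)
        digest = hmac.new(key, counter, hashlib.sha256).digest()
        offset = digest[-1] & 0x0F                      # RFC 4226 truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(image_seeded_otp(b"...bytes of the user's chosen image..."))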