Abstract: Drones can be used to create wireless communication networks in swarms using artificial intelligence (AI). Their mobility and line-of-sight capability have made them key solutions for civil and military applications. AI is developing rapidly and is being applied successfully thanks to the huge amounts of data now available, which has led to its integration into networks and its application to problems associated with drone swarms. Since AI systems have to process huge amounts of information in real time, data packet loss increases and communication with the control center may be lost. This article is devoted to calculating packet losses and the impact of traffic parameters on data exchange in swarms. Original swarm models were created with the MATLAB and NetCracker packages. Dependences of data packet losses on transaction size are calculated for different numbers of drones in a swarm using NetCracker. Data traffic with different parameters and statistical distribution laws was considered. The effect of different distances to drones on the base station workload was simulated. Data transmission in a swarm was studied in MATLAB as a function of the signal-to-noise ratio, base station amplifier nonlinearity levels, signal modulation types, base station antenna diameters, and signal phase offsets. The data obtained make it possible to predict the operation of drone communication channels in swarms. PubDate: 2024-08-22
Abstract: Cellular networks are moving towards increasing heterogeneity by deploying more small cells within macro base station (MBS) coverage to meet rapidly growing traffic demands. To leverage the advantages of these small cells, mobile users should be offloaded onto small base stations (BSs), which will typically be lightly populated and can give a higher data rate by presenting mobile users with many more channels than the MBS. Likewise, a more balanced cell association will lessen the pressure on the MBS, allowing it to serve its remaining users more effectively. This paper addresses the cell association challenge for Quality of Service (QoS) provisioning in terms of throughput and load balancing for 5G and future-generation networks. The problem is challenging because BSs have varying backhaul capacities and users have varying QoS needs. Most previous studies are based on reference signal received power (RSRP), signal-to-interference-and-noise ratio (SINR), or their variants, and, most importantly, the majority of them are not load-aware. Therefore, a modified load-aware biased cell association scheme based on distance is proposed to attain better QoS provisioning in terms of throughput and load balancing. Simulation results show that the proposed load-aware method outperforms conventional cell association schemes based on RSRP and its variants in terms of throughput and load balancing. Furthermore, the algorithm's complexity has been assessed through a comparison and analysis of computational time, demonstrating better performance than state-of-the-art techniques. PubDate: 2024-08-18
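The biased association rule at the heart of such schemes is compact enough to sketch. Below is a minimal, illustrative Python sketch (not the paper's algorithm): each user picks the BS maximizing a distance-based metric plus a small-cell bias, with an assumed per-user load penalty making the choice load-aware; the path-loss slope, bias values, and penalty weight are all assumptions.

```python
import numpy as np

# Minimal sketch of load-aware biased cell association (illustrative only;
# the paper's exact bias and load terms are not reproduced here).
rng = np.random.default_rng(0)

n_users, n_bs = 50, 4
user_pos = rng.uniform(0, 1000, (n_users, 2))   # user positions, metres
bs_pos = rng.uniform(0, 1000, (n_bs, 2))        # BS positions
bias_db = np.array([0.0, 6.0, 6.0, 6.0])        # small cells get an association bias
load = np.zeros(n_bs)                           # users currently served per BS

for u in user_pos:
    d = np.linalg.norm(bs_pos - u, axis=1)
    # Distance-based metric (higher is better) + bias - assumed load penalty.
    metric_db = -37.6 * np.log10(d) + bias_db - 3.0 * load
    b = int(np.argmax(metric_db))
    load[b] += 1

print("users per BS:", load)
```

Without the load term every user piles onto the nearest/strongest cell; the penalty is what spreads users across the lightly loaded small cells.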
Abstract: Due to the limited onboard resources of Micro Aerial Vehicles (MAVs), poor real-time performance has long been an urgent problem in practical applications of visual-inertial odometry (VIO). Therefore, a lightweight omnidirectional visual-inertial odometry (LOVIO) for MAVs based on improved keyframe tracking and marginalization is proposed. In LOVIO's front-end processing, wide field-of-view (FOV) images are captured by an omnidirectional camera, and frames are tracked by a semi-direct method that combines the speed of direct methods with the accuracy of feature-based methods. In the back-end optimization, the Hessian matrix of the error optimization equation is marginalized stepwise, so the high-dimensional matrix is decomposed and operating efficiency is improved. Experimental results on the TUM-VI dataset show that LOVIO significantly reduces running time without loss of precision or robustness, meaning LOVIO offers better real-time performance and practicality for MAVs. PubDate: 2024-08-12
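The back-end step described here, marginalizing blocks out of the Hessian, is a standard Schur-complement operation. A minimal numpy sketch under that interpretation (not LOVIO's actual implementation; the toy system and index split are assumptions):

```python
import numpy as np

def marginalize(H, b, keep, marg):
    """Schur-complement marginalization of the `marg` block out of H dx = b.
    Returns the reduced system on the `keep` variables."""
    Hkk = H[np.ix_(keep, keep)]
    Hkm = H[np.ix_(keep, marg)]
    Hmm = H[np.ix_(marg, marg)]
    Hmm_inv = np.linalg.inv(Hmm)            # small block, assumed well-conditioned
    H_red = Hkk - Hkm @ Hmm_inv @ Hkm.T     # Schur complement of Hmm
    b_red = b[keep] - Hkm @ Hmm_inv @ b[marg]
    return H_red, b_red

# Toy example: marginalize the last 2 of 5 states from an SPD Hessian.
A = np.random.rand(5, 5)
H = A @ A.T + 5 * np.eye(5)
b = np.random.rand(5)
H_red, b_red = marginalize(H, b, keep=[0, 1, 2], marg=[3, 4])
print(H_red.shape, b_red.shape)  # (3, 3) (3,)
```

Inverting only the small marginalized block, rather than the full high-dimensional Hessian, is what makes the stepwise scheme cheap.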
Abstract: Signal recognition is a key technology in wireless networks, with broad applications in both military and civilian fields. Accurately recognizing the modulation scheme of an incoming unknown signal can significantly enhance the performance of communication systems. As global digitization and intelligence advance, the rapid development of wireless communication imposes higher standards for signal recognition: (1) Accurate and efficient recognition of various modulation modes, and (2) Lightweight recognition compatible with intelligent hardware. To meet these demands, we have designed a hybrid signal recognition model based on a convolutional neural network and a gated recurrent unit (CnGr). By integrating spatial and temporal modules, we enhance the multi-dimensional extraction of the original signal, significantly improving recognition accuracy. Additionally, we propose a lightweight signal recognition method that combines pruning and depthwise separable convolution. This approach effectively reduces the network size while maintaining recognition accuracy, facilitating deployment and implementation on edge devices. Extensive experiments demonstrate that our proposed method significantly improves recognition accuracy and reduces the model size without compromising performance. PubDate: 2024-08-06
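A CNN-plus-GRU hybrid with a depthwise separable convolution, as described, can be sketched briefly in PyTorch; all layer sizes, the I/Q input shape, and the class count below are illustrative assumptions, not the paper's CnGr architecture:

```python
import torch
import torch.nn as nn

class CnGrSketch(nn.Module):
    """Illustrative CNN + GRU hybrid for I/Q modulation recognition.
    Input: (batch, 2, 128) raw I/Q samples; output: class logits."""
    def __init__(self, n_classes=11):
        super().__init__()
        # Spatial module: depthwise separable conv (depthwise + pointwise).
        self.conv = nn.Sequential(
            nn.Conv1d(2, 2, kernel_size=7, padding=3, groups=2),  # depthwise
            nn.Conv1d(2, 32, kernel_size=1),                      # pointwise
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Temporal module: GRU over the conv feature sequence.
        self.gru = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):
        f = self.conv(x)        # (batch, 32, 64)
        f = f.transpose(1, 2)   # (batch, 64, 32): sequence of feature vectors
        _, h = self.gru(f)      # final hidden state: (1, batch, 64)
        return self.fc(h.squeeze(0))

logits = CnGrSketch()(torch.randn(4, 2, 128))
print(logits.shape)  # torch.Size([4, 11])
```

The depthwise/pointwise split is the standard way separable convolutions shrink parameter count, which is the lightweight-deployment point the abstract makes.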
Abstract: The growing number of mobile stations requires relaying protocols that can efficiently transmit large volumes of data with minimal computational complexity. Systems that combine joint symbol and channel estimation with Amplify-and-Forward Multiway Relay (MWR) systems provide a highly effective solution to this problem. Thus, this study introduces a new Nested PARAFAC-based MWR system model as its primary contribution. Then, a non-iterative semi-blind receiver is designed to allow simultaneous estimation of symbols and channels. This computationally efficient approach is validated using Monte Carlo computational simulations, showing that the proposed receiver can achieve lower bit error rate values at lower computational complexity than some of its state-of-the-art competitors. PubDate: 2024-08-04
Abstract: This research focuses on improving the benefit of the downlink non-orthogonal multiple access (NOMA) concept in a cellular wireless system. User grouping in NOMA and the allocation of power among users are crucial to maximizing system performance and efficiency. Power allocation at the transmitter side, as well as successive interference cancellation (SIC) at the receiver side, are key operations whose time consumption and computational complexity grow with the number of users in a NOMA group. In addition, accommodating more users in a group requires an increase in transmit power. As a trade-off between practicability and the capacity gain provided by NOMA, we examine a user grouping scheme in which the number of users in a group does not exceed three. The idea is to avoid orthogonal multiple access (OMA) users and additional communication resources by employing hybrid three-user NOMA grouping and NOMA pairing. The power allocation algorithm maximizes the achievable sum rate (ASR) of a NOMA group under the restriction that individual user rates in NOMA must be higher than in the corresponding OMA system, providing user fairness. Under these constraints, expressions for the optimal power allocation coefficients are obtained in closed form. Computer simulations are carried out for a system with perfect SIC over Rayleigh fading channels, taking into account interference from nearby base stations (BSs). The results demonstrate the effectiveness of the proposed framework for user grouping and power allocation compared with a conventional NOMA pairing algorithm and OMA transmission. PubDate: 2024-08-02
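For the two-user case, the rate expressions underlying this power-allocation trade-off are standard. A small numpy sketch under perfect SIC, with illustrative coefficients (not the paper's closed-form optimum) chosen so that both NOMA rates exceed their OMA counterparts, as the fairness constraint requires:

```python
import numpy as np

# Two-user downlink NOMA with perfect SIC (illustrative numbers).
P = 1.0                      # total transmit power
g_near, g_far = 10.0, 1.0    # channel gains |h|^2 normalized by noise power
a_far = 0.7                  # power fraction for the far (weak) user
a_near = 1 - a_far

# Far user decodes its signal treating the near user's signal as interference.
R_far = np.log2(1 + a_far * P * g_far / (a_near * P * g_far + 1))
# Near user cancels the far user's signal via SIC, then decodes its own.
R_near = np.log2(1 + a_near * P * g_near)

# OMA baseline: each user gets half the resources at full power.
R_far_oma = 0.5 * np.log2(1 + P * g_far)
R_near_oma = 0.5 * np.log2(1 + P * g_near)

print(f"NOMA: far {R_far:.2f}, near {R_near:.2f} bit/s/Hz "
      f"(OMA: far {R_far_oma:.2f}, near {R_near_oma:.2f})")
# With these numbers both NOMA rates beat OMA, satisfying the fairness constraint.
```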
Abstract: Maximizing communication coverage with the minimum number of unmanned aerial vehicles (UAVs) in a telecommunication system is investigated in this paper. In particular, the problem of maximizing the coverage area in stadium environments using UAVs is modeled mathematically as a multi-objective optimization problem. While the problem is solved using state-of-the-art solvers, to address its complexity and achieve results suitable for real-time applications, we propose a heuristic algorithm. The performance evaluation, conducted at three crowding levels, demonstrates that the heuristic algorithm is comparable to the mathematical model in terms of the number of covered users, while its running time is significantly smaller, showing the efficiency of the model and solution. Moreover, we compare the heuristic algorithm with the non-dominated sorting genetic algorithm (NSGA-II). The results show that the heuristic algorithm speeds up processing and decision making while maximizing communication coverage in stadium environments. PubDate: 2024-08-01
Abstract: In the ever-evolving landscape of wireless communication systems, including fifth-generation (5G) networks and beyond (B5G), accurate Modulation and Coding Scheme (MCS) prediction is crucial for optimizing data transmission efficiency and quality of service. Traditional MCS selection methods rely on predefined rules and heuristics, offering transparency and control but lacking adaptability in changing wireless conditions. The emergence of Machine Learning (ML) has brought transformative capabilities, particularly in MCS prediction, since data-driven models promise improved accuracy and adaptability in dynamic wireless environments. This paper explores and evaluates a range of ML techniques for predicting MCS in orthogonal frequency-division multiplexing (OFDM) systems, the first such investigation in this field. Additionally, it introduces a specialized Deep Neural Network (DNN) architecture with two hidden layers for MCS prediction, guided by performance metrics such as accuracy, precision, recall, and F1-score. The examined ML methods include Artificial Neural Networks (ANN), Support Vector Machine (SVM), Random Forest (RF), and Bagging with k-NN (B-kNN). These methods undergo thorough training and evaluation using a dataset generated from simulations of non-standalone 5G networks. The study incorporates physical layer measurements and employs a ray-tracing path loss prediction model for comprehensive environmental characterization. Advanced data mining techniques preprocess the raw data, addressing model underfitting and overfitting. Finally, performance evaluation results reveal that the ANN with two hidden layers achieves the highest accuracy at 98.71%, while the RF and B-kNN methods attain the lowest accuracy, below 88.65%. The SVM and ANN models, with one and four hidden layers, respectively, demonstrate comparable MCS prediction accuracy, ranging from 97.02% to 97.30%. PubDate: 2024-08-01
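As a toy illustration of the two-hidden-layer DNN approach (synthetic features standing in for the paper's 5G measurements; the feature set, layer widths, and SINR-to-MCS mapping are assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for the paper's dataset: features ~ (SINR, RSRP, ...),
# label = MCS index derived from a toy SINR-to-MCS mapping.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))
y = np.clip((X[:, 0] * 3 + 8).astype(int), 0, 15)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

# Two hidden layers, mirroring the architecture the abstract describes.
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=1).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), zero_division=0))
```

On real data the preprocessing step the abstract mentions (scaling, outlier handling) matters considerably more than on this well-behaved synthetic set.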
Abstract: In order to reduce network energy consumption and prolong network lifetime in wireless sensor networks, a data fusion algorithm named CFLDF is proposed. Firstly, once the network nodes are deployed, the network is clustered using fuzzy c-means optimized by an improved butterfly optimization algorithm, and a data fusion model is established on the clustering structure. Then, reliable data is sent to the cluster head by the nodes with the assistance of a fuzzy logic controller, and data fusion is performed by the cluster head using a fuzzy logic algorithm. Finally, the cluster heads transmit the fused data to the base station. Simulation experiments are conducted to evaluate the CFLDF algorithm against the LEACH, LEACH-C, and SEECP algorithms. The results demonstrate that the CFLDF algorithm effectively reduces network energy consumption and extends network lifetime. PubDate: 2024-08-01
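The clustering step can be sketched in plain numpy; this is the standard fuzzy c-means iteration, without the butterfly-optimization initialization the paper adds (node count, cluster count, and fuzzifier are assumed values):

```python
import numpy as np

def fuzzy_c_means(X, c=4, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means; illustrative only (no butterfly optimization)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))           # membership matrix
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))                   # membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

nodes = np.random.rand(100, 2) * 100   # sensor node positions in a 100x100 field
centers, U = fuzzy_c_means(nodes)
print("cluster centers:\n", centers)
```

Cluster heads would then be chosen near these centers; the soft memberships in U are what the fuzzy logic controller can exploit for borderline nodes.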
Abstract: Inter-satellite optical wireless communication (IsOWC) is frequently utilized for fast data transmission between distant satellites in free space. Thousands of users are connected via wide-area applications that use network topologies with many frequencies per optical fiber and a range of transmission rates and capacities. The simultaneous transmission of multiple frequencies on a fiber in the 1300–1600 nm range is a powerful feature of an optical communication link. When coupled with optical amplifiers, the WDM concept creates communication lines that provide rapid user-to-user communication across national borders. A well-known drawback of free-space optics (FSO) and Inter-Satellite Links (ISL), alongside their appealing advantages, is degradation of optical signal power. The traditional system is improved in order to solve this issue. Using the OptiSystem-15 simulator, we analyzed the IsOWC link's capability for fast data transfer based on performance metrics including received power, Q-factor, and BER. The proposed model uses 16 multiplexed channels with modulated transmitters. The link has been evaluated at various data rates and distances with external Mach-Zehnder modulation. A series hybrid optical amplifier, consisting of an erbium-doped fiber amplifier (EDFA) and a laser amplifier, is placed before and after the OWC line with the given parameters. Different OWC ranges were examined with a loop control of 3 loops, giving a total link length of 1600 km, and different data rates were explored to show their impact on system efficiency; the results show a substantial enhancement that is crucial to the current inter-satellite OWC link. PubDate: 2024-08-01
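The Q-factor and BER figures of merit quoted here are tied by the standard relation BER ≈ ½·erfc(Q/√2) for intensity-modulated links; a quick numerical check:

```python
import numpy as np
from scipy.special import erfc

# Standard relation between Q-factor and BER for OOK-style detection.
for q in (3, 4, 5, 6, 7):
    ber = 0.5 * erfc(q / np.sqrt(2))
    print(f"Q = {q}:  BER ≈ {ber:.2e}")
# Q = 6 gives BER ≈ 1e-9, the usual benchmark for an acceptable optical link.
```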
Abstract: Massive multiple-input multiple-output (MMIMO) is essential to modern wireless communication systems, like 5G and 6G, but it is vulnerable to active eavesdropping attacks. One type of such attack is the pilot contamination attack (PCA), where a malicious user copies pilot signals from an authentic user during uplink, intentionally interfering with the base station's (BS) channel estimation accuracy. In this work, we propose to use a Decision Tree (DT) algorithm for PCA detection at the BS in a multi-user system. We present a methodology to generate training data for the DT classifier and select the best DT according to its depth. Then, we simulate different scenarios that could be encountered in practice and compare the DT to a classical technique based on likelihood ratio testing (LRT) submitted to the same scenarios. The results revealed that a DT with only one level of depth is sufficient to outperform the LRT. The DT shows a good performance regarding the probability of detection in noisy scenarios and when the malicious user transmits with low power, in which case the LRT fails to detect the PCA. We also show that the reason for the good performance of the DT is its ability to compute a threshold that separates PCA data from non-PCA data better than the LRT's threshold. Moreover, the DT does not necessitate prior knowledge of noise power or assumptions regarding the signal power of malicious users, prerequisites typically essential for LRT and other hypothesis testing methodologies. PubDate: 2024-08-01
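A depth-1 decision tree (a stump) learning a separating threshold, as the paper reports, is easy to reproduce on toy data; the single received-pilot-energy feature and the Gaussian class models below are assumptions for illustration, not the paper's data generation methodology:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for PCA-detection training data: one feature, e.g. received
# pilot energy at the BS, which rises when an attacker replays the pilot.
rng = np.random.default_rng(2)
n = 2000
energy_clean = rng.normal(loc=1.0, scale=0.2, size=n)   # legitimate pilot only
energy_pca = rng.normal(loc=1.4, scale=0.2, size=n)     # pilot + attacker
X = np.concatenate([energy_clean, energy_pca]).reshape(-1, 1)
y = np.concatenate([np.zeros(n), np.ones(n)])           # 1 = PCA present

# A depth-1 tree (decision stump) learns a single energy threshold from data,
# with no prior knowledge of noise power or attacker power.
stump = DecisionTreeClassifier(max_depth=1).fit(X, y)
print("learned threshold:", stump.tree_.threshold[0])
print("training accuracy:", stump.score(X, y))
```

The contrast with LRT is visible here: the stump's threshold is fit from labeled samples rather than derived from an assumed noise model.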
Abstract: Data transmission from sensor nodes is the biggest problem in IoT networks: overusing communication power shortens node lifespans. Thus, network issues including QoS, security, network heterogeneity, congestion avoidance, reliable routing, and energy savings must be addressed. Routing protocols are essential for delivering data between nodes. Information gathering and consolidation require data aggregation to minimize traffic congestion, operating expenses, and energy usage, and to extend network lifetime. In IoT networks, aggregation-aware route planning must be dependable and energy-efficient, which makes it challenging. Disjoint & Scalable Multipath Routing (D&SMR) is a new routing scheme developed and evaluated using NS2 simulation. The method estimates delivery success using decision trees and neural networks; while training the model, we evaluate characteristics such as route predictability, node popularity, power consumption, speed, and location. Simulation results show that D&SMR outperforms a reliable routing baseline in terms of delivery success, lost messages, overhead, and hop count. The proposed hybrid routing method involves cluster construction and intra- and inter-cluster routing. The study found that D&SMR beats previous work in network resilience, packet transmission efficiency, end-to-end latency, and energy usage. PubDate: 2024-08-01
Abstract: Wireless ad hoc networks play a pivotal role in wireless communication systems. MANETs find extensive applications across various domains, encompassing real-time information exchange, network partitioning, rescue operations, interpersonal communication, and data sharing. A MANET is a dynamic wireless network without fixed infrastructure, in which nodes freely join or leave the network at any time. The absence of fixed infrastructure, coupled with the openness of MANETs, poses significant security issues. This paper proposes a technique called Anti-Blackhole, Gray-hole, and Flooding attack Ad-hoc On-Demand Distance Vector (ABGF-AODV) to identify and thwart the impact of attacks in MANETs. Through extensive evaluation using the NS-2 simulator, the performance of the proposed protocol is thoroughly examined. The results showcase the robustness of the ABGF-AODV protocol against various attacks, yielding better performance compared with existing state-of-the-art techniques. PubDate: 2024-08-01
Abstract: The tuna swarm optimization algorithm (TSO) is an innovative swarm intelligence algorithm that has few adjustable parameters and is straightforward to implement, but it suffers from low computational accuracy and susceptibility to local optima. To address these shortcomings, this study proposes a TSO variant based on behavior evaluation and a simplex strategy, named SITSO. Firstly, a behavior evaluation mechanism changes the updating mechanism of TSO, improving its convergence speed and computational accuracy. Secondly, the simplex method enhances the exploitation capability of TSO. Then, simulations on the CEC2017 benchmark function set in different dimensions are performed and compared with a variety of mature existing algorithms to verify all aspects of SITSO's performance. Finally, numerous simulation experiments are conducted on the optimization of wireless sensor network coverage. Based on the experimental results, SITSO outperforms the six comparison algorithms. PubDate: 2024-08-01
Abstract: In this paper, we optimize the instantaneous and average throughput of wireless communications using Reconfigurable Intelligent Surfaces (RIS) when the source harvests power from the sun with a photovoltaic (PV) system. Instantaneous throughput optimization offers up to 6 dB of gain versus average throughput optimization. Average throughput optimization offers a higher throughput than any fixed packet length, with a performance gain of up to 1 dB versus other packet length values. PubDate: 2024-08-01
Abstract: Slotted ALOHA is a well-known multiple access control protocol that devices use to access a radio channel in wireless networks. Age of information (AoI) is a key network performance indicator that represents the time elapsed between when data is generated and when it is received. Threshold-based ALOHA, a variant of slotted ALOHA, uses a threshold and a transmission probability to send data and meet a requirement on average AoI (AAoI); however, its energy consumption has not been examined. It is crucial that a device operates energy-efficiently, which helps extend device life and reduce greenhouse gas emissions. Therefore, this paper proposes a modified threshold ALOHA for low-duty-cycle wireless networks, called LBG-threshold-ALOHA, in which a device checks whether sensing data was generated in the prior N slots before its data generation/transmission. Instead of generating new data for each transmission, LBG-threshold-ALOHA sends data generated in the prior N slots, which reduces data-generation activity and thus yields lower energy consumption. This lower energy consumption helps produce a lower AAoI in a network constrained by an energy budget. Simulation results show that, under an upper limit on mean energy consumption, LBG-threshold-ALOHA yields a lower minimum AAoI than the original threshold ALOHA. PubDate: 2024-08-01
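A Monte Carlo sketch of threshold-based slotted ALOHA makes the AAoI metric concrete; the model below (generate-at-will sources, AoI reset to one slot after a collision-free transmission, assumed device count, probability, and threshold) is a simplified assumption, not the paper's exact protocol:

```python
import numpy as np

# Monte Carlo sketch of threshold-based slotted ALOHA and its average AoI.
rng = np.random.default_rng(3)
n_dev, slots, p, threshold = 20, 50_000, 0.05, 30

aoi = np.ones(n_dev)   # per-device age of information, in slots
aoi_sum = 0.0
for _ in range(slots):
    # A device transmits only if its AoI exceeds the threshold, w.p. p.
    tx = (aoi >= threshold) & (rng.random(n_dev) < p)
    if tx.sum() == 1:          # exactly one transmitter -> no collision
        aoi[tx] = 0            # fresh sample delivered this slot
    aoi += 1
    aoi_sum += aoi.mean()

print(f"average AoI (AAoI) ≈ {aoi_sum / slots:.1f} slots")
```

Sweeping p and the threshold in this loop reproduces the basic trade-off the paper studies: fewer transmission attempts save energy but age the data.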
Abstract: In this paper, we propose a novel watermarking method to enhance security in biometric voice/speech transmission systems, based on sub-sampling, the discrete cosine transform (DCT), and adaptive watermark embedding. To improve imperceptibility, we use sub-sampling and adaptive embedding in high-energy DCT coefficients. We employ a meaningful watermark derived from a unique biometric fingerprint. Introducing bits only in the high-energy fraction provides a further advantage: reduced running time in both the embedding and extraction processes, which can help minimize hardware consumption. The achieved results reveal the stability and flexibility of the proposed scheme and confirm its robustness against additive noise. In addition, we improve on our previously published approaches; the expected limitations of the proposed model will appear in the hardware implementation, where attacks must be considered more fully. Another limitation is the number of quantization bits used for the high-energy DCT coefficients. Both may affect the secured speaker identification and verification system. PubDate: 2024-08-01
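Embedding in high-energy DCT coefficients can be illustrated with a simple quantization-based rule; the sketch below assumes a fixed quantization step and bit count rather than the paper's adaptive embedding, and a random frame in place of real speech:

```python
import numpy as np
from scipy.fft import dct, idct

# Minimal sketch of DCT-domain embedding in high-energy coefficients
# (quantization index modulation with an assumed fixed step delta).
rng = np.random.default_rng(4)
frame = rng.normal(size=256)                    # stand-in sub-sampled speech frame
bits = rng.integers(0, 2, size=8)               # fingerprint-derived watermark bits

C = dct(frame, norm="ortho")
idx = np.argsort(np.abs(C))[::-1][:len(bits)]   # highest-energy coefficients
delta = 0.05                                    # quantization step (assumed)
# Quantize each chosen coefficient to a bin whose parity encodes the bit.
C[idx] = delta * (2 * np.floor(C[idx] / (2 * delta)) + bits + 0.5)
marked = idct(C, norm="ortho")                  # watermarked frame

# Extraction: re-apply the DCT and read the quantization-bin parity.
C2 = dct(marked, norm="ortho")
recovered = np.floor(C2[idx] / delta).astype(int) % 2
print("bits recovered correctly:", np.array_equal(recovered, bits))
```

Because the marked coefficients sit at bin centers, small additive noise (below delta/2) leaves the parity, and hence the extracted bits, intact, which is the robustness property the abstract claims.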
Abstract: For 6G communications, Ultra Massive Multiple Input Multiple Output (UM-MIMO) systems with Intelligent Reflecting Surface (IRS) assistance are promising, since they can efficiently overcome blockage and coverage limitations. However, in the far field, strong THz channel sparsity is unfavorable to spatial multiplexing, whereas the large dimensions of UM-MIMO and IRS extend the near-field region. To address these issues, a hybrid beamforming IRS-assisted UM-MIMO THz system with a Deep Siamese Capsule Network is designed for the cascaded channel. Near- and far-field codebook-based beamforming is developed to model the proposed communication channel. Channel estimation is performed with a deep siamese capsule adaptive beluga whale neural network. Simulation results for bit error rate, Normalized Mean Square Error (NMSE), spectral efficiency, sum rate, data rate, normalized channel gain, beamforming gain, and array gain loss show that the proposed system achieves reliable performance compared with existing techniques. The suggested approach also demonstrates outstanding adaptability to various network configurations and good scalability. The method provides better channel estimation accuracy and lower complexity, achieving an NMSE of −11.2 dB at an SNR of 10 dB. PubDate: 2024-08-01
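The NMSE figure quoted here is the usual normalized channel-estimation error, NMSE = ||H − Ĥ||²_F / ||H||²_F expressed in dB; in code (matrix sizes and noise level are illustrative):

```python
import numpy as np

def nmse_db(H, H_hat):
    """Normalized mean square error in dB, as used for channel estimation."""
    return 10 * np.log10(np.linalg.norm(H - H_hat) ** 2 / np.linalg.norm(H) ** 2)

# Toy complex channel and a noisy estimate of it.
H = np.random.randn(64, 16) + 1j * np.random.randn(64, 16)
H_hat = H + 0.1 * (np.random.randn(64, 16) + 1j * np.random.randn(64, 16))
print(f"NMSE ≈ {nmse_db(H, H_hat):.1f} dB")   # around -20 dB for this noise level
```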
Abstract: Multimedia content represents a significant portion of the traffic in computer networks, and COVID-19 has only made this portion bigger. This overhead can, however, be reduced when many users access the same content. In this context, Wi-Fi, the most popular radio access technology, introduced the Group Addressed Transmission Service (GATS) with the IEEE 802.11aa amendment. GATS defines a set of policies aiming to make multicast traffic more robust and efficient. However, Wi-Fi is constantly evolving, and as greater bandwidths and data rates become available, it is necessary to reevaluate the behavior of mechanisms introduced in past amendments. This is the case with GATS, whose policies behave differently and adapt better to different channel conditions. These policies have been evaluated in the past on High Throughput networks, but none of those evaluations provided insights into the behavior of GATS policies on Very High Throughput (VHT) physical layers in a realistic manner. This is extremely relevant, as greater available bandwidth can affect the choice of GATS policy configuration. Thus, in this work, we present an evaluation of the IEEE 802.11aa amendment with a VHT physical layer in a realistic scenario that uses Minstrel as the rate adaptation algorithm, simulated in NS-3. PubDate: 2024-08-01
Abstract: Device-to-device (D2D) communications, regarded as a crucial technology for Beyond Fifth-Generation (B5G) wireless networks, provide substantial benefits including elevated spectrum effectiveness, improved coverage, and traffic offloading. Mode selection and channel allocation play an important role in ensuring the data rate and enhancing user experience in D2D communications. The dynamic switching of communication modes by D2D User Equipments (UEs) depends on the extent of shared resources among D2D pairs. In this paper, we examine the issue of joint mode selection and resource allocation for D2D communication within a cellular network in an uplink scenario. The main objective of this study is to optimize the overall sum rate of the network while simultaneously guaranteeing the Quality of Service (QoS) requirements. Three communication modes are considered: Direct Mode (DM), Relay-assisted Mode (RM), and Local route Mode (LM). In addition, each D2D pair can be allocated either a dedicated channel or a reused channel. We present an innovative approach for the simultaneous determination of mode selection and channel allocation in D2D communication, based on a greedy strategy and a modified many-to-many matching technique that effectively selects the best communication mode and assigns the optimal channel for each D2D pair, respectively. The simulation results illustrate that the presented approach exhibits a significant improvement in network performance when compared to the benchmark algorithms. The proposed scheme is evaluated under both perfect and imperfect Channel State Information (CSI) conditions. PubDate: 2024-08-01