IEEE Transactions on Emerging Topics in Computational Intelligence
  Hybrid journal (it can contain Open Access articles)
ISSN (Online) 2471-285X
Published by IEEE  [228 journals]
  • IEEE Transactions on Emerging Topics in Computational Intelligence
    • Abstract: Provides a listing of current staff, committee members and society officers.
      PubDate: Feb. 2020
      Issue No: Vol. 4, No. 1 (2020)
       
  • IEEE Transactions on Emerging Topics in Computational Intelligence
    • Abstract: Provides a listing of current committee members and society officers.
      PubDate: Feb. 2020
      Issue No: Vol. 4, No. 1 (2020)
       
  • Guest Editorial: Special Issue on Computational Intelligence for
           Communications and Sensing
    • Pages: 1 - 4
      Abstract: The papers in this special section focus on computational intelligence for communication and sensing systems. As billions of phones, appliances, drones, traffic lights, security systems, environmental sensors, radars, and other radio-connected sensing and communication devices sum into a rapidly growing Internet of Things (IoT), challenges such as spectrum allocation and efficiency, energy efficiency, and security have emerged as urgent problems. For example, 5G wireless communications will be deployed in the 28 GHz, 37 GHz, and 39 GHz frequency bands, where they may coexist with radars and other sensing devices.
      PubDate: Feb. 2020
      Issue No: Vol. 4, No. 1 (2020)
       
  • Intelligent Spectrum Resource Allocation Based on Joint Optimization in
           Heterogeneous Cognitive Radio
    • Authors: Xin Liu;Min Jia;
      Pages: 5 - 12
      Abstract: In conventional cognitive radio (CR), a secondary user (SU) can access the idle spectrum only when a primary user (PU) is detected to be absent; in addition, the allocation processes for CR spectrum resources, such as sensing time and transmission power, are often handled independently. In this paper, to improve the SU's throughput in a heterogeneous CR, an intelligent spectrum resource allocation based on joint optimization is proposed. The proposed scheme allows the SU to access the PU's spectrum not only in overlay mode but also in underlay mode: the SU accesses the spectrum with full power when the PU is absent and with controlled power when the PU is present. The transmission rates in the cases of perfect transmission, false-alarm transmission, spectrum-sharing transmission, and interference transmission are analyzed, respectively, and are then used to obtain the average aggregate throughput of the SU. We formulate spectrum resource allocation as a joint optimization problem over subchannel transmission power and spectrum sensing time, which maximizes the SU's throughput subject to limits on minimal detection probability, maximal total power, and maximal interference power. We propose an alternating direction optimization (ADO) based joint optimization algorithm to solve this problem. Simulation results indicate that the proposed heterogeneous CR is superior to traditional CR.
      PubDate: Feb. 2020
      Issue No: Vol. 4, No. 1 (2020)
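
A minimal sketch of the sensing-time/power tradeoff behind this kind of joint optimization, for a single overlay subchannel. It is illustrative only, not the authors' ADO algorithm: the false-alarm expression follows the standard energy-detector sensing-throughput tradeoff, and every numeric parameter (frame length, sampling rate, PU SNR, detection threshold) is assumed.

```python
import numpy as np
from statistics import NormalDist

Q = lambda x: 1.0 - NormalDist().cdf(x)          # Gaussian tail function
Qinv = lambda p: NormalDist().inv_cdf(1.0 - p)

# assumed parameters (illustrative only)
T, fs = 0.1, 6e6      # frame length (s), sampling rate (Hz)
snr_p = 0.05          # received PU SNR at the SU detector (about -13 dB)
pd_min = 0.9          # minimal detection probability constraint
g, n0 = 1.0, 1e-3     # SU channel gain and noise power
p_max = 1.0           # maximal transmit power (W)
prob_h0 = 0.8         # probability that the PU is absent

def false_alarm(tau):
    # energy-detector Pf when the detector is tuned to meet Pd >= pd_min
    return Q(np.sqrt(2*snr_p + 1)*Qinv(pd_min) + np.sqrt(tau*fs)*snr_p)

def throughput(tau, p):
    # average overlay throughput; the underlay term is omitted for brevity
    return (T - tau)/T * prob_h0 * (1 - false_alarm(tau)) * np.log2(1 + p*g/n0)

# alternating (coordinate-wise) search, a stand-in for the paper's ADO
tau, p = 0.01, 0.5
for _ in range(5):
    taus = np.linspace(1e-4, T - 1e-4, 1000)
    tau = taus[np.argmax([throughput(t, p) for t in taus])]
    p = p_max   # with one subchannel and no interference cap, rate rises with p
print(f"tau* = {tau*1e3:.2f} ms, p* = {p:.2f} W, R = {throughput(tau, p):.3f} bit/s/Hz")
```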
       
  • Sparse Learning of Higher-Order Statistics for Communications and Sensing
    • Authors: Zhuo Sun;Song Kong;Wenbo Wang;
      Pages: 13 - 22
      Abstract: Higher-order statistics (HOS) have become an important tool in a variety of target identification and information sensing fields. Since a concise, compact expression of HOS is needed to ease the burden of data acquisition and computational complexity, sparse representation of HOS is a natural solution. In this paper, we formulate the sparse representation of HOS by categorizing them into three cases according to their sparsity: strictly sparse, structure-based sparse, and structure-based compressible. Corresponding sparse-representation algorithms are designed for each of the three types. For strictly sparse HOS, we mainly address how to build the linear relationship between one-dimensional time-domain samples and high-dimensional HOS and how to reduce the computational complexity of the equivalent sensing matrix. The autocorrelation and a fourth-order statistic are taken as examples to illustrate the proposed sparse decomposition method for structure-based sparse HOS, exploiting their intra-structure properties. The sparse representation of structure-based compressible HOS is approximated with a joint decomposition algorithm using eigenvalue and singular-value decomposition approaches. In addition, we integrate the proposed sparse representations of HOS into a compressive sensing framework to verify the feasibility and performance of the sparse-representation solutions.
      PubDate: Feb. 2020
      Issue No: Vol. 4, No. 1 (2020)
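
As a concrete instance of the "strictly sparse" case, the sketch below recovers the circular autocorrelation (a second-order statistic) of a two-tone signal from random projections via orthogonal matching pursuit, using a cosine dictionary in which that statistic is exactly 2-sparse. This is a generic compressive-sensing illustration under assumed parameters, not the paper's decomposition method.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, k):
    # orthogonal matching pursuit: greedily recover a k-sparse x from y = A x
    residual, support, x_s = y.copy(), [], None
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

n, m = 256, 40
t = np.arange(n)
sig = np.cos(2*np.pi*20*t/n) + 0.5*np.cos(2*np.pi*45*t/n)
# circular autocorrelation: exactly 0.5*cos(2*pi*20*l/n) + 0.125*cos(2*pi*45*l/n)
r = np.fft.ifft(np.abs(np.fft.fft(sig))**2).real / n
D = np.cos(2*np.pi*np.outer(t, np.arange(n//2))/n)   # cosine dictionary (r is 2-sparse here)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)       # random sensing matrix
y = Phi @ r                                          # compressed measurements of the HOS
r_hat = D @ omp(Phi @ D, y, k=2)
print("relative error:", np.linalg.norm(r_hat - r) / np.linalg.norm(r))
```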
       
  • Pedestrian Retrieval Using Generated Samples and Multistream Layer in
           Sensor Networks
    • Authors: Shuang Liu;Xiaolong Hao;
      Pages: 23 - 31
      Abstract: In this paper, we propose a novel loss function named the hybrid quadruplet loss (HQL) to utilize generated samples for pedestrian retrieval in sensor networks. The proposed HQL employs a set of quadruplets in order to maintain an appropriate margin between real samples and generated samples, reduce the intra-class variations, and enlarge the inter-class variations. In this way, the generalization of the deep model is improved. Furthermore, to distinguish extremely similar pedestrians, we propose a novel multistream layer to mine imperceptible information from different aspects. The proposed multistream layer utilizes various filters with different morphologies to capture discriminative features at multiple scales, and it can flexibly follow any convolutional layer. Experiments on three large-scale pedestrian retrieval databases (Market1501, CUHK03, and DukeMTMC-reID) demonstrate that the proposed method outperforms other state-of-the-art methods.
      PubDate: Feb. 2020
      Issue No: Vol. 4, No. 1 (2020)
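
A generic quadruplet-style margin loss in the spirit of the HQL, sketched with plain NumPy. The margins m1 and m2, the squared Euclidean distance, and the variable roles are assumptions; the authors' exact formulation may differ.

```python
import numpy as np

def quadruplet_style_loss(a, p, n, g, m1=0.3, m2=0.1):
    # a, p: embeddings of the same identity; n: a different identity;
    # g: a GAN-generated sample. The first hinge reduces intra-class and
    # enlarges inter-class variation; the second keeps a margin between
    # real positive pairs and generated samples.
    d = lambda u, v: float(np.sum((u - v) ** 2))
    return (max(0.0, d(a, p) - d(a, n) + m1) +
            max(0.0, d(a, p) - d(a, g) + m2))

rng = np.random.default_rng(0)
a, p, n_, g = rng.standard_normal((4, 128))   # four 128-dim embeddings
print(quadruplet_style_loss(a, p, n_, g))
```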
       
  • Tensor Deep Learning Model for Heterogeneous Data Fusion in Internet of
           Things
    • Authors: Wei Wang;Min Zhang;
      Pages: 32 - 41
      Abstract: With the rapid evolution of the Internet and data acquisition technology, the amount of data in many fields has reached terabyte or petabyte scale, and most of it is collected from the Internet of Things (IoT). The rapid advancement of IoT big data has provided valuable opportunities across all areas of society, while also posing severe challenges to current information processing systems. Effectively using big data technology to discover hidden laws in the data, tap its potential value, and predict trends so that resources can be allocated more reasonably will promote the overall development of society. However, most IoT big data are heterogeneous, with high dimensionality, different forms of expression, and much redundant information. Current machine learning models work in vector space, which makes it difficult to capture big data features because vectors cannot model the highly nonlinear distribution of IoT big data. This paper presents a deep learning model called tensor deep learning (TDL), which improves big data feature learning and high-level feature fusion. It uses tensors to model the complexity of multisource heterogeneous data, extending vector-space data to the tensor space, in which feature extraction is performed. To fully capture the underlying data distribution, the tensor distance is adopted as the mean squared error term of the output layer's reconstruction error. Based on the conventional back-propagation algorithm, this study proposes a high-order back-propagation algorithm that extends the data from linear space to multilinear space and trains the parameters of the proposed model. To evaluate its performance, the proposed TDL model is compared with the stacked autoencoder and the multimodal deep learning model in experiments on two representative datasets, CUAVE and STL-10. Experimental results show that the proposed model not only excels at heterogeneous data fusion but also provides higher recognition accuracy than the conventional deep learning model or the multimodal learning model for big data.
      PubDate: Feb. 2020
      Issue No: Vol. 4, No. 1 (2020)
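
The tensor distance used as the reconstruction-error term can be written as d(x, y) = sqrt((x - y)^T G (x - y)), where the metric matrix G couples coordinates by their spatial proximity in the original tensor. Below is a small sketch under an assumed Gaussian form for G; the paper's exact definition may differ.

```python
import numpy as np

def tensor_distance(x, y, sigma=1.0):
    # d(x, y) = sqrt((x - y)^T G (x - y)) with
    # G[i, j] = exp(-||pos_i - pos_j||^2 / (2 sigma^2)),
    # so nearby tensor elements are treated as correlated, unlike the
    # plain Euclidean distance on flattened vectors.
    coords = np.indices(x.shape).reshape(x.ndim, -1).T.astype(float)
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2.0 * sigma ** 2))
    e = (x - y).ravel()
    return float(np.sqrt(e @ G @ e))

a, b = np.zeros((4, 4)), np.zeros((4, 4))
a[0, 0], b[0, 1] = 1.0, 1.0   # same energy, spatially adjacent positions
print(tensor_distance(a, b), np.linalg.norm(a - b))  # tensor distance is smaller
```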
       
  • Utility- and Fairness-Based Spectrum Allocation of Cellular Networks by an
           Adaptive Particle Swarm Optimization Algorithm
    • Authors: Xiu Zhang;Xin Zhang;Zhou Wu;
      Pages: 42 - 50
      Abstract: In wireless and mobile communication, spectrum is a limited and scarce resource, and properly allocating it to different users is essential to meet communication needs. This paper studies the spectrum allocation problem in cellular network scenarios. Network utility is assessed in two forms: one ignores fairness among linked users in the network, and the other takes fairness into account. The spectrum allocation problem is formulated as a maximization problem. To solve the utility- and fairness-based spectrum allocation problem, an adaptive particle swarm optimization based on a sawtooth wave propagation technique (SAPSO) is proposed. The SAPSO algorithm adapts all parameters in a sawtooth manner to accelerate convergence. Simulation results show that SAPSO solves the spectrum allocation problem more efficiently than the compared algorithms. Moreover, the results show that utilization with fairness among all linked users is a more suitable measure for improving quality of service in cellular networks.
      PubDate: Feb. 2020
      Issue No: Vol. 4, No. 1 (2020)
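
A minimal PSO sketch with a sawtooth-adapted inertia weight, run on a toy utility. Note the hedge: the paper adapts all of the algorithm's parameters in a sawtooth manner, while this sketch adapts only the inertia weight, and the schedule, acceleration constants, and utility function are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sawtooth(t, period=20, hi=0.9, lo=0.4):
    # parameter decays linearly within each period, then resets: a sawtooth
    return hi - (hi - lo) * ((t % period) / period)

def pso_maximize(f, dim, n=30, iters=200, lo=0.0, hi=1.0):
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmax(pval)]
    for t in range(iters):
        w = sawtooth(t)                       # sawtooth-adapted inertia weight
        r1, r2 = rng.random((2, n, dim))
        v = w*v + 2.0*r1*(pbest - x) + 2.0*r2*(g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmax(pval)]
    return g, pval.max()

# toy proportional-fair utility: allocate power shares to 5 users
gains = np.array([1.0, 0.6, 0.3, 0.8, 0.2])
util = lambda p: np.sum(np.log1p(gains * p / (p.sum() + 1e-9)))
best, val = pso_maximize(util, dim=5)
print(best.round(3), round(val, 4))
```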
       
  • Person Re-Identification Based on Heterogeneous Part-Based Deep Network in
           Camera Networks
    • Authors: Zhong Zhang;Meiyan Huang;
      Pages: 51 - 60
      Abstract: In this paper, we propose a new deep learning model named the heterogeneous part-based deep network for person re-identification in camera networks, which simultaneously learns alignment and discrimination for parts of pedestrian images. Concretely, several parts are obtained by uniformly partitioning the convolutional layer for each pedestrian image. Then, we present part-aligned distances to perform alignment by searching for the shortest local distances between image parts within a certain range. Meanwhile, we utilize the batch hard triplet loss and cross-entropy loss to learn more discriminative part-based features in different aspects. Experiments are conducted on three challenging datasets, Market-1501, CUHK03, and DukeMTMC-reID, where we achieve 94.0%, 64.3%, and 83.6% rank-1 accuracy and 81.2%, 58.2%, and 68.0% mAP, outperforming the state-of-the-art methods by a large margin.
      PubDate: Feb. 2020
      Issue No: Vol. 4, No. 1 (2020)
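
One plausible reading of the part-aligned distance, sketched below: for each horizontal part of one image, search the parts of the other image within an assumed vertical range r and keep the shortest local distance. This is illustrative, not the authors' exact matching rule.

```python
import numpy as np

def part_aligned_distance(f1, f2, r=1):
    # f1, f2: (P, d) part features of two images. For each part of image 1,
    # search parts of image 2 within +/- r positions and keep the shortest
    # local distance, so small vertical misalignments are tolerated.
    P = f1.shape[0]
    total = 0.0
    for i in range(P):
        j0, j1 = max(0, i - r), min(P, i + r + 1)
        total += np.min(np.linalg.norm(f2[j0:j1] - f1[i], axis=1))
    return total

rng = np.random.default_rng(0)
f1, f2 = rng.standard_normal((2, 6, 256))   # 6 parts, 256-dim features each
print(part_aligned_distance(f1, f2, r=1))
```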
       
  • Indoor WLAN Intelligent Target Intrusion Sensing Using Ray-Aided
           Generative Adversarial Network
    • Authors: Mu Zhou;Yixin Lin;Nan Zhao;Qing Jiang;Xiaolong Yang;Zengshan Tian;
      Pages: 61 - 73
      Abstract: Indoor target intrusion sensing techniques are used in many fields, such as smart home management, security monitoring, counter-terrorism, and disaster relief. With the wide deployment of wireless local area networks (WLAN) and the general support of the IEEE 802.11 protocol by mobile devices, target intrusion sensing can be realized on existing WLAN infrastructure without requiring the target to carry any special device. However, existing indoor WLAN target intrusion sensing approaches usually depend on radio map construction with huge labor and time cost, which is the major barrier for current systems. In response to this problem, we propose a new ray-aided generative adversarial model (RaGAM) to automatically construct the radio map used for indoor WLAN intelligent target intrusion sensing and localization. To keep labor and time cost low, RaGAM uses an adaptive-depth ray tree based on a quasi-three-dimensional ray-tracing model to depict the difference in WLAN signals between the silence and intrusion environments and thereby construct a synthetic radio map. To bridge the gap between the synthetic and actual radio maps, we modify the conventional generative adversarial network with joint synthetic and unsupervised learning (S+U learning) from actual unlabeled received signal strength (RSS) data, improving the precision of the proposed ray-tracing model and obtaining a refined radio map. The statistical characteristics of the refined radio map are then used to construct the training set for a probabilistic neural network (PNN), and the well-trained PNN classifies newly collected RSS data to achieve target intrusion sensing and localization. Experimental results show that the proposed approach not only performs well in terms of computation cost and ray-tracing accuracy but also senses target intrusion states accurately.
      PubDate: Feb. 2020
      Issue No: Vol. 4, No. 1 (2020)
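
The final stage classifies RSS data with a probabilistic neural network; a minimal Parzen-window PNN is sketched below on synthetic silence/intrusion RSS vectors. The smoothing parameter and the synthetic data are assumptions.

```python
import numpy as np

def pnn_predict(x, train, labels, sigma=2.0):
    # probabilistic neural network: score each class by the average Gaussian
    # kernel between x and that class's training RSS vectors, then return
    # the highest-scoring class
    classes = np.unique(labels)
    scores = [np.mean(np.exp(-np.sum((train[labels == c] - x) ** 2, axis=1)
                             / (2 * sigma ** 2))) for c in classes]
    return classes[int(np.argmax(scores))]

rng = np.random.default_rng(0)
silence = rng.normal(-60, 2, (50, 5))    # RSS (dBm) from 5 APs, no target
intrusion = rng.normal(-55, 2, (50, 5))  # a present target shifts the RSS
train = np.vstack([silence, intrusion])
labels = np.array(["silence"] * 50 + ["intrusion"] * 50)
print(pnn_predict(rng.normal(-55, 2, 5), train, labels))
```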
       
  • Soil PH Measurement Based on Compressive Sensing and Deep Image Prior
    • Authors: Jie Ren;Jing Liang;Yuanyuan Zhao;
      Pages: 74 - 82
      Abstract: Soil quality is vital in agriculture, and sensor networks are often used to obtain soil data over a piece of land. Soil is sometimes probed with one-dimensional (1-D) ultrawideband (UWB) signals, which consumes considerable energy, so an energy-saving model is desirable. Compressed sensing (CS) is a feasible model for saving energy and has attracted wide attention since it was first put forward. However, designing a suitable sparse dictionary remains an open problem: the dictionary must be redesigned for different situations, and soil data keep changing with weather and environment. A computationally intelligent CS algorithm that adapts to varying signals would therefore solve the problem. In this paper, we propose a deep learning model of CS that avoids designing a sparse dictionary. We combine deep learning and CS and regard the output of the network as the recovered signal, because deep learning's end-to-end structure sidesteps the dictionary design problem. We use deep image prior, derived from deep convolutional generative adversarial networks, to recover 1-D soil signals, and we verify the algorithm with the mean square error between the recovered signals and the original soil echoes. Finally, we classify pH values from the recovered signals with XGBoost because of its excellent classification ability. Experimental results show that our model's accuracy is much higher than that of traditional CS algorithms with a relatively small number of sensors, and that our model is more robust.
      PubDate: Feb. 2020
      Issue No: Vol. 4, No. 1 (2020)
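
A minimal deep-image-prior sketch for 1-D CS recovery, written in PyTorch: a small untrained convolutional generator g(z) is fitted so that Phi g(z) matches the compressed measurements, with no sparse dictionary; early stopping acts as the implicit regularizer. The network shape, measurement matrix, iteration count, and the stand-in "soil echo" are all assumptions.

```python
import torch

torch.manual_seed(0)
n, m = 256, 64
t = torch.arange(n, dtype=torch.float32)
x_true = torch.sin(0.05*t) + 0.5*torch.sin(0.12*t)   # stand-in soil echo
Phi = torch.randn(m, n) / m**0.5                     # random measurement matrix
y = Phi @ x_true                                     # compressed measurements

net = torch.nn.Sequential(                           # small untrained generator
    torch.nn.Conv1d(1, 32, 9, padding=4), torch.nn.ReLU(),
    torch.nn.Conv1d(32, 32, 9, padding=4), torch.nn.ReLU(),
    torch.nn.Conv1d(32, 1, 9, padding=4),
)
z = torch.randn(1, 1, n)                             # fixed random input
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):                             # stop early: implicit prior
    x_hat = net(z).squeeze()
    loss = torch.mean((Phi @ x_hat - y) ** 2)        # MSE in measurement space
    opt.zero_grad(); loss.backward(); opt.step()

err = torch.norm(net(z).squeeze() - x_true) / torch.norm(x_true)
print("relative recovery error:", err.item())
```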
       
  • Deep Learning Based Hotspot Prediction and Beam Management for Adaptive
           Virtual Small Cell in 5G Networks
    • Authors: Yanan Liu;Xianbin Wang;Gary Boudreau;Akram Bin Sediq;Hatem Abou-zeid;
      Pages: 83 - 94
      Abstract: To meet the extremely stringent but diverse requirements of 5G, cost-effective network deployment and traffic-aware adaptive utilization of network resources are becoming essential. In this paper, a hotspot prediction based virtual small cell (VSC) operation scheme is adopted to improve both the cost efficiency and operational efficiency of 5G networks. The paper focuses on predicting hotspots with deep learning and then demonstrates how the predictions can be leveraged to support adaptive beamforming and VSC operation. We first leverage the feature extraction capabilities of deep learning and exploit a long short-term memory (LSTM) neural network to predict hotspots for the potential formation of VSCs. To support the operation of VSCs, large-scale antenna array enabled hybrid beamforming is adaptively adjusted for highly directional transmission to cover these hotspot-based VSCs. Within each VSC, an appropriate user equipment is selected as a cell head that collects the intra-cell traffic in the unlicensed band and relays the aggregated traffic to the macro-cell base station over the licensed band. Our simulation results illustrate that the proposed LSTM-based method extracts spatial and temporal traffic features of hotspots with higher accuracy than existing deep and non-deep learning approaches. Numerical results also show that VSCs with hotspot prediction and hybrid beamforming can dramatically improve energy efficiency, with flexible deployment and low latency, compared with conventional fixed small cells.
      PubDate: Feb. 2020
      Issue No: Vol. 4, No. 1 (2020)
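
A minimal LSTM traffic-predictor sketch in PyTorch: it learns to predict a cell's next load sample from the previous 24, and a cell whose predicted load crosses a threshold would be flagged as a hotspot for VSC formation. The toy sinusoidal load, window length, network size, and threshold are assumptions.

```python
import torch

torch.manual_seed(0)
T = torch.sin(torch.arange(500, dtype=torch.float32) * 0.2) + 1.0  # toy cell load
win = 24
X = torch.stack([T[i:i+win] for i in range(len(T) - win)]).unsqueeze(-1)
y = T[win:]                                       # next-sample targets

lstm = torch.nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
head = torch.nn.Linear(16, 1)
opt = torch.optim.Adam([*lstm.parameters(), *head.parameters()], lr=1e-2)
for _ in range(200):
    out, _ = lstm(X)                              # (batch, win, hidden)
    pred = head(out[:, -1]).squeeze(-1)           # predict from last hidden state
    loss = torch.mean((pred - y) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()

hotspot = pred[-1].item() > 1.5                   # assumed load threshold
print(f"next-load prediction: {pred[-1].item():.2f}, hotspot={hotspot}")
```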
       
 