Subjects -> COMPUTER SCIENCE (Total: 2336 journals)
    - ANIMATION AND SIMULATION (33 journals)
    - ARTIFICIAL INTELLIGENCE (124 journals)
    - AUTOMATION AND ROBOTICS (114 journals)
    - CLOUD COMPUTING AND NETWORKS (73 journals)
    - COMPUTER ARCHITECTURE (11 journals)
    - COMPUTER ENGINEERING (11 journals)
    - COMPUTER GAMES (22 journals)
    - COMPUTER PROGRAMMING (23 journals)
    - COMPUTER SCIENCE (1368 journals)
    - COMPUTER SECURITY (57 journals)
    - DATA BASE MANAGEMENT (20 journals)
    - DATA MINING (50 journals)
    - E-BUSINESS (21 journals)
    - E-LEARNING (28 journals)
    - ELECTRONIC DATA PROCESSING (23 journals)
    - IMAGE AND VIDEO PROCESSING (42 journals)
    - INFORMATION SYSTEMS (104 journals)
    - INTERNET (106 journals)
    - SOCIAL WEB (60 journals)
    - SOFTWARE (36 journals)
    - THEORY OF COMPUTING (10 journals)

COMPUTER SCIENCE (1368 journals)

Showing 201 - 400 of 872 Journals sorted alphabetically
Computational Molecular Bioscience     Open Access   (Followers: 2)
Computational Optimization and Applications     Hybrid Journal   (Followers: 10)
Computational Particle Mechanics     Hybrid Journal   (Followers: 1)
Computational Research     Open Access   (Followers: 1)
Computational Science and Discovery     Full-text available via subscription   (Followers: 2)
Computational Science and Techniques     Open Access  
Computational Statistics     Hybrid Journal   (Followers: 14)
Computational Statistics & Data Analysis     Hybrid Journal   (Followers: 37)
Computational Toxicology     Hybrid Journal   (Followers: 1)
Computer     Full-text available via subscription   (Followers: 123)
Computer Aided Surgery     Open Access   (Followers: 5)
Computer Applications in Engineering Education     Hybrid Journal   (Followers: 6)
Computer Communications     Hybrid Journal   (Followers: 17)
Computer Engineering and Applications Journal     Open Access   (Followers: 5)
Computer Journal     Hybrid Journal   (Followers: 9)
Computer Methods in Applied Mechanics and Engineering     Hybrid Journal   (Followers: 27)
Computer Methods in Biomechanics and Biomedical Engineering     Hybrid Journal   (Followers: 13)
Computer Methods in Biomechanics and Biomedical Engineering : Imaging & Visualization     Hybrid Journal  
Computer Methods in the Geosciences     Full-text available via subscription   (Followers: 2)
Computer Music Journal     Hybrid Journal   (Followers: 23)
Computer Physics Communications     Hybrid Journal   (Followers: 8)
Computer Reviews Journal     Open Access   (Followers: 1)
Computer Science - Research and Development     Hybrid Journal   (Followers: 8)
Computer Science and Engineering     Open Access   (Followers: 20)
Computer Science and Information Technology     Open Access   (Followers: 14)
Computer Science Education     Hybrid Journal   (Followers: 17)
Computer Science Journal     Open Access   (Followers: 25)
Computer Science Review     Hybrid Journal   (Followers: 12)
Computer Standards & Interfaces     Hybrid Journal   (Followers: 4)
Computer Supported Cooperative Work (CSCW)     Hybrid Journal   (Followers: 11)
Computer-aided Civil and Infrastructure Engineering     Hybrid Journal   (Followers: 10)
Computer-Aided Design and Applications     Hybrid Journal   (Followers: 9)
Computers     Open Access   (Followers: 2)
Computers & Chemical Engineering     Hybrid Journal   (Followers: 11)
Computers & Education     Hybrid Journal   (Followers: 239)
Computers & Electrical Engineering     Hybrid Journal   (Followers: 9)
Computers & Geosciences     Hybrid Journal   (Followers: 30)
Computers & Industrial Engineering     Hybrid Journal   (Followers: 6)
Computers & Mathematics with Applications     Full-text available via subscription   (Followers: 11)
Computers & Structures     Hybrid Journal   (Followers: 39)
Computers and Composition     Hybrid Journal   (Followers: 12)
Computers and Electronics in Agriculture     Hybrid Journal   (Followers: 8)
Computers and Geotechnics     Hybrid Journal   (Followers: 13)
Computers in Biology and Medicine     Hybrid Journal   (Followers: 10)
Computers in Industry     Hybrid Journal   (Followers: 8)
Computers in Libraries     Full-text available via subscription   (Followers: 76)
Computers in the Schools     Hybrid Journal   (Followers: 10)
Computers, Environment and Urban Systems     Hybrid Journal   (Followers: 15)
Computerworld Magazine     Free   (Followers: 2)
Computing     Hybrid Journal   (Followers: 1)
Computing and Software for Big Science     Hybrid Journal   (Followers: 2)
Computing and Visualization in Science     Hybrid Journal   (Followers: 6)
Computing in Science & Engineering     Full-text available via subscription   (Followers: 34)
Computing Letters     Hybrid Journal   (Followers: 1)
Computing Reviews     Full-text available via subscription   (Followers: 1)
Concurrency and Computation: Practice & Experience     Hybrid Journal  
Connection Science     Hybrid Journal  
Control Engineering Practice     Hybrid Journal   (Followers: 46)
Cryptologia     Hybrid Journal   (Followers: 3)
CSI Transactions on ICT     Hybrid Journal   (Followers: 2)
Cuadernos de Documentación Multimedia     Open Access  
Current Computer-Aided Drug Design     Hybrid Journal   (Followers: 1)
Current Science     Open Access   (Followers: 117)
Cyber-Physical Systems     Hybrid Journal  
Cyberspace : Jurnal Pendidikan Teknologi Informasi     Open Access  
DAIMI Report Series     Open Access  
Data     Open Access   (Followers: 4)
Data & Policy     Open Access   (Followers: 5)
Data Science and Engineering     Open Access   (Followers: 4)
Data-Centric Engineering     Open Access   (Followers: 2)
Datenbank-Spektrum     Hybrid Journal   (Followers: 1)
Datenschutz und Datensicherheit - DuD     Hybrid Journal  
Decision Analytics     Open Access   (Followers: 7)
Decision Support Systems     Hybrid Journal   (Followers: 19)
Design Journal : An International Journal for All Aspects of Design     Hybrid Journal   (Followers: 33)
Developments in Clay Science     Full-text available via subscription   (Followers: 1)
Developments in Mineral Processing     Full-text available via subscription   (Followers: 3)
Digital Biomarkers     Open Access   (Followers: 1)
Digital Chinese Medicine     Open Access   (Followers: 1)
Digital Creativity     Hybrid Journal   (Followers: 10)
Digital Culture and Society     Hybrid Journal   (Followers: 16)
Digital Experiences in Mathematics Education     Hybrid Journal   (Followers: 1)
Digital Finance : Smart Data Analytics, Investment Innovation, and Financial Technology     Hybrid Journal  
Digital Geography and Society     Open Access  
Digital Government : Research and Practice     Open Access   (Followers: 2)
Digital Health     Open Access   (Followers: 9)
Digital Journalism     Hybrid Journal   (Followers: 6)
Digital Medicine     Open Access   (Followers: 1)
Digital Medievalist     Open Access   (Followers: 3)
Digital Studies / Le champ numerique     Open Access  
Digital War     Hybrid Journal   (Followers: 2)
Digitale Welt : Das Wirtschaftsmagazin zur Digitalisierung     Hybrid Journal   (Followers: 2)
Digitális Bölcsészet / Digital Humanities     Open Access   (Followers: 1)
Discours     Open Access   (Followers: 1)
Discourse & Communication     Hybrid Journal   (Followers: 24)
Discover Internet of Things     Open Access   (Followers: 1)
Discrete and Continuous Models and Applied Computational Science     Open Access   (Followers: 1)
Discrete Event Dynamic Systems     Hybrid Journal   (Followers: 3)
Discrete Mathematics & Theoretical Computer Science     Open Access   (Followers: 1)
Discrete Optimization     Full-text available via subscription   (Followers: 8)
Displays     Hybrid Journal  
Distributed and Parallel Databases     Hybrid Journal   (Followers: 4)
e-learning and education (eleed)     Open Access   (Followers: 46)
Ecological Indicators     Hybrid Journal   (Followers: 20)
Ecological Informatics     Hybrid Journal   (Followers: 4)
Ecological Management & Restoration     Hybrid Journal   (Followers: 15)
Ecosystems     Hybrid Journal   (Followers: 33)
ECTI Transactions on Computer and Information Technology (ECTI-CIT)     Open Access  
Educação, Formação & Tecnologias     Open Access  
Education and Information Technologies     Hybrid Journal   (Followers: 72)
Educational Philosophy and Theory     Hybrid Journal   (Followers: 12)
Educational Psychology in Practice: theory, research and practice in educational psychology     Hybrid Journal   (Followers: 19)
Educational Research and Evaluation: An International Journal on Theory and Practice     Hybrid Journal   (Followers: 7)
Educational Theory     Hybrid Journal   (Followers: 9)
Egyptian Informatics Journal     Open Access   (Followers: 5)
Electronic Commerce Research and Applications     Hybrid Journal   (Followers: 6)
Electronic Design     Partially Free   (Followers: 123)
Electronic Letters on Computer Vision and Image Analysis     Open Access   (Followers: 6)
Elektron     Open Access  
Empirical Software Engineering     Hybrid Journal   (Followers: 8)
Energy for Sustainable Development     Hybrid Journal   (Followers: 12)
Engineering & Technology     Hybrid Journal   (Followers: 22)
Engineering Applications of Computational Fluid Mechanics     Open Access   (Followers: 19)
Engineering Economist, The     Hybrid Journal   (Followers: 5)
Engineering Optimization     Hybrid Journal   (Followers: 19)
Engineering With Computers     Hybrid Journal   (Followers: 5)
Enl@ce: Revista Venezolana de Información, tecnología y conocimiento     Open Access   (Followers: 1)
Enterprise Information Systems     Hybrid Journal   (Followers: 5)
Entertainment Computing     Hybrid Journal   (Followers: 3)
Environmental and Ecological Statistics     Hybrid Journal   (Followers: 7)
Environmental Communication: A Journal of Nature and Culture     Hybrid Journal   (Followers: 21)
EPJ Data Science     Open Access   (Followers: 9)
ESAIM: Control Optimisation and Calculus of Variations     Full-text available via subscription   (Followers: 2)
Ethics and Information Technology     Hybrid Journal   (Followers: 75)
eTransportation     Open Access   (Followers: 2)
EURASIP Journal on Bioinformatics and Systems Biology     Open Access   (Followers: 7)
EURASIP Journal on Embedded Systems     Open Access   (Followers: 10)
EURO Journal on Computational Optimization     Hybrid Journal   (Followers: 5)
EuroCALL Review     Open Access  
European Food Research and Technology     Hybrid Journal   (Followers: 9)
European Journal of Combinatorics     Full-text available via subscription   (Followers: 4)
European Journal of Computational Mechanics     Hybrid Journal   (Followers: 1)
European Journal of Information Systems     Hybrid Journal   (Followers: 176)
European Journal of Law and Technology     Open Access   (Followers: 23)
European Journal of Political Theory     Hybrid Journal   (Followers: 27)
European Research in Telemedicine / La Recherche Européenne en Télémédecine     Full-text available via subscription   (Followers: 2)
Evolutionary Computation     Hybrid Journal   (Followers: 12)
FAC Journal of Systems and Control     Hybrid Journal   (Followers: 1)
Fibreculture Journal     Open Access   (Followers: 10)
Finite Fields and Their Applications     Full-text available via subscription   (Followers: 4)
Fixed Point Theory and Applications     Open Access   (Followers: 1)
Focus on Catalysts     Full-text available via subscription  
Focus on Pigments     Full-text available via subscription   (Followers: 4)
Focus on Powder Coatings     Full-text available via subscription   (Followers: 5)
Forensic Science International: Digital Investigation     Full-text available via subscription   (Followers: 523)
Formal Aspects of Computing     Hybrid Journal   (Followers: 3)
Formal Methods in System Design     Hybrid Journal   (Followers: 6)
Forschung     Hybrid Journal   (Followers: 1)
Foundations and Trends® in Communications and Information Theory     Full-text available via subscription   (Followers: 6)
Foundations and Trends® in Databases     Full-text available via subscription   (Followers: 3)
Foundations and Trends® in Human-Computer Interaction     Full-text available via subscription   (Followers: 8)
Foundations and Trends® in Information Retrieval     Full-text available via subscription   (Followers: 110)
Foundations and Trends® in Networking     Full-text available via subscription   (Followers: 1)
Foundations and Trends® in Signal Processing     Full-text available via subscription   (Followers: 9)
Foundations and Trends® in Theoretical Computer Science     Full-text available via subscription   (Followers: 1)
Foundations of Computational Mathematics     Hybrid Journal  
Foundations of Computing and Decision Sciences     Open Access  
Frontiers in Computational Neuroscience     Open Access   (Followers: 23)
Frontiers in Computer Science     Open Access   (Followers: 2)
Frontiers in Digital Health     Open Access   (Followers: 3)
Frontiers in Digital Humanities     Open Access   (Followers: 8)
Frontiers in ICT     Open Access  
Frontiers in Neuromorphic Engineering     Open Access   (Followers: 2)
Frontiers in Research Metrics and Analytics     Open Access   (Followers: 6)
Frontiers of Computer Science in China     Hybrid Journal   (Followers: 3)
Frontiers of Environmental Science & Engineering     Hybrid Journal   (Followers: 2)
Frontiers of Information Technology & Electronic Engineering     Hybrid Journal  
Fuel Cells Bulletin     Full-text available via subscription   (Followers: 7)
Functional Analysis and Its Applications     Hybrid Journal   (Followers: 3)
Future Computing and Informatics Journal     Open Access  
Future Generation Computer Systems     Hybrid Journal   (Followers: 3)
Geo-spatial Information Science     Open Access   (Followers: 7)
Geoforum Perspektiv     Open Access  
GeoInformatica     Hybrid Journal   (Followers: 8)
Geoinformatics FCE CTU     Open Access   (Followers: 7)
Government Information Quarterly     Hybrid Journal   (Followers: 33)
Granular Computing     Hybrid Journal  
Graphics and Visual Computing     Open Access  
Grey Room     Hybrid Journal   (Followers: 18)
Group Dynamics : Theory, Research, and Practice     Full-text available via subscription   (Followers: 15)
Groups Complexity Cryptology     Hybrid Journal   (Followers: 2)
HardwareX     Open Access  
Harvard Data Science Review     Open Access   (Followers: 3)
Health Services Management Research     Hybrid Journal   (Followers: 17)
Healthcare Technology Letters     Open Access  
High Frequency     Hybrid Journal  
Higher-Order and Symbolic Computation     Partially Free   (Followers: 1)
Home Cultures     Full-text available via subscription   (Followers: 7)
Home Health Care Management & Practice     Hybrid Journal   (Followers: 3)
Howard Journal of Communications     Hybrid Journal  


Future Generation Computer Systems
Journal Prestige (SJR): 0.844
Citation Impact (citeScore): 5
Number of Followers: 3
  Hybrid Journal (it can contain Open Access articles)
ISSN (Print) 0167-739X - ISSN (Online) 0167-739X
Published by Elsevier
  • Parallelization of large-scale drug–protein binding experiments
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Dimitrios Michail, Antonios Makris, Iraklis Varlamis, Mark Sawyer. The pharmaceutical industry invests billions of dollars each year in new drug research. Part of this research focuses on repositioning established drugs to new disease indications and is based on “drug promiscuity”, or in plain words, on the ability of certain drugs to bind multiple proteins. The increasing cost of wet-lab experiments makes in-silico alternatives a promising solution. In order to find similar protein targets for an existing drug, it is necessary to analyse the protein and drug structures and find potential similarities, a computationally demanding task. However, algorithmic advances in conjunction with increased computational resources can accelerate this task and increase the success rate of drug discovery at significantly lower cost. The current work proposes several algorithms that implement the protein similarity task in a parallel high-performance computing environment, solve several load-imbalance and memory-management issues, and take maximum advantage of the available resources. The proposed optimizations achieve better memory and CPU balancing and faster execution times. Several parts of the previously linear processing pipeline, which used different software packages, have been re-engineered in order to improve process parallelization. Experimental results, on a high-performance computing environment with up to 1024 cores and 2048 GB of memory, demonstrate the effectiveness of our approach, which scales well to large numbers of protein pairs.
       
  • Partially policy-hidden attribute-based broadcast encryption with secure
           delegation in edge computing
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Hu Xiong, Yanan Zhao, Li Peng, Hao Zhang, Kuo-Hui Yeh. The rapid growth of data has successfully promoted the development of edge computing, which is used for processing data at the edge of the network. The emergence of edge computing compensates for the network delay caused by massive data uploads to the cloud. However, the issues of data security and privacy protection still need to be resolved. In this paper, we propose an efficient ciphertext-policy attribute-based encryption (CP-ABE) scheme that, for the first time, simultaneously achieves a partially hidden policy, direct revocation, and verifiable outsourced decryption. Specifically, in our scheme, the concept of a partially hidden policy is introduced to protect private information in an access policy. In addition, after a revocation is successfully executed, the revoked users can no longer access the message, while non-revoked users remain unaffected. Our new scheme leverages the outsourcing technique to minimize the overhead required of the user. We demonstrate that our scheme is secure under the Decisional (q−1) Diffie–Hellman assumption and the Decisional Bilinear Diffie–Hellman assumption, and we evaluate its performance using simulations.
       
  • Privacy preservation in blockchain based IoT systems: Integration issues,
           prospects, challenges, and future research directions
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Muneeb Ul Hassan, Mubashir Husain Rehmani, Jinjun Chen. Modern Internet of Things (IoT) systems are paving the path toward a revolutionized world in which the majority of our everyday objects will be interconnected. These objects will be able to link and communicate with each other and their surroundings in order to automate the majority of our tasks. This interconnection of IoT nodes requires security, seamless authentication, robustness, and easy-maintenance services. In order to provide such salient features, blockchain emerges as a viable solution. The decentralized nature of blockchain has resolved many security, maintenance, and authentication issues of IoT systems. Therefore, an immense increase in applications of blockchain-based IoT systems can be seen over the past few years. However, a blockchain-based IoT network is public, so transactional details and encrypted keys are open and visible to everybody in that network. Thus, any adversary can infer critical information about users from this public infrastructure. In this paper, we discuss the privacy issues caused by the integration of blockchain in IoT applications, focusing on applications of daily use. Furthermore, we discuss the implementation of five privacy preservation strategies in blockchain-based IoT systems, namely anonymization, encryption, private contracts, mixing, and differential privacy. Finally, we discuss challenges and future directions for research in privacy preservation of blockchain-based IoT systems. This paper can serve as a basis for the development of future privacy preservation strategies addressing the privacy problems of IoT systems operating over blockchain.
       
  • Multi-fidelity deep neural networks for adaptive inference in the internet
           of multimedia things
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Sam Leroux, Steven Bohez, Elias De Coninck, Pieter Van Molle, Bert Vankeirsbilck, Tim Verbelen, Pieter Simoens, Bart Dhoedt. Internet of Things (IoT) infrastructures increasingly rely on multimedia sensors to provide information about the environment. Deep neural networks (DNNs) can extract knowledge from this audiovisual data, but they typically require large amounts of resources (processing power, memory, and energy). If all limitations of the execution environment are known beforehand, we can design neural networks under these constraints. An IoT setting, however, is a very heterogeneous environment where the constraints can change rapidly. We propose a technique that allows us to deploy a variety of different networks at runtime, each with a specific complexity–accuracy trade-off, but without having to store each network independently. We train a sequence of networks of increasing size and constrain each network to contain the parameters of all smaller networks in the sequence. We only need to store the largest network to be able to deploy each of the smaller networks. We experimentally validate our approach on different benchmark datasets for image recognition and conclude that we can build networks that support multiple trade-offs between accuracy and computational cost.
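The nested-parameter idea in the abstract above can be sketched in a few lines. This is a minimal illustration with a single linear layer and made-up sizes, not the authors' implementation: the largest weight matrix is stored once, and each smaller network is simply a leading slice of it.

```python
import random

# Illustrative sketch (not the authors' code): one "largest" weight matrix
# whose leading slices act as the smaller networks in the sequence, so only
# the largest network's parameters need to be stored.
random.seed(0)
IN_FULL, OUT = 64, 10
W_full = [[random.gauss(0, 1) for _ in range(OUT)] for _ in range(IN_FULL)]

def forward(x, width):
    """Linear sub-network of the given input width, sharing W_full's slices."""
    rows = W_full[:width]                      # the slice IS the sub-network
    return [sum(x[i] * rows[i][j] for i in range(width)) for j in range(OUT)]

x = [random.gauss(0, 1) for _ in range(IN_FULL)]
# Three deployable complexity/accuracy points, one stored parameter set.
outputs = {w: forward(x, w) for w in (16, 32, 64)}
assert all(len(out) == OUT for out in outputs.values())
```

At runtime, the deployment can pick whichever width fits the current resource budget without loading a separate model.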
       
  • A novel electrocardiogram feature extraction approach for cardiac
           arrhythmia classification
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Leandro B. Marinho, Navar de M.M. Nascimento, João Wellington M. Souza, Mateus Valentim Gurgel, Pedro P. Rebouças Filho, Victor Hugo C. de Albuquerque. In this work, we propose a novel approach to detect cardiac arrhythmias in electrocardiograms (ECG). The proposal focuses on different feature extractors and machine learning methods. The feature extraction techniques evaluated were Fourier, Goertzel, Higher Order Statistics (HOS), and the Structural Co-Occurrence Matrix (SCM). As far as the authors know, this is the first time that SCM has been applied to the feature extraction task with ECG signals. Four well-known classifiers commonly referred to in the literature (Support Vector Machine, Multi-Layer Perceptron, Bayesian, and Optimum-Path Forest) were tested, and we compared our results with six classical feature extraction methods. Furthermore, the Association for the Advancement of Medical Instrumentation protocol was adopted, and we made use of the MIT-BIH Arrhythmia Database to produce reliable results for clinical analysis. The confidence level for identifying heart dysrhythmia in our results was 2% greater than that of other approaches in the literature. The proposed system is 1.3% more accurate than the best approach reported to date, and is 106 times faster. In short, it is clinically reliable to use HOS for describing types of arrhythmia, since it achieved 94.3% accuracy.
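Of the feature extractors listed above, Goertzel is a standard recurrence for evaluating a single DFT bin, useful when only a few frequency features are needed. A minimal stdlib-only sketch follows; the sampling rate and test frequencies are illustrative, not taken from the paper:

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Squared magnitude of one DFT bin via the Goertzel recurrence."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin index
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# A 50 Hz tone sampled at 1 kHz concentrates power in the 50 Hz bin.
fs = 1000
sig = [math.sin(2 * math.pi * 50 * t / fs) for t in range(200)]
assert goertzel_power(sig, fs, 50) > goertzel_power(sig, fs, 120)
```

Compared with a full FFT, this computes only the bins of interest, which is why it is a common choice for compact feature vectors over signals such as ECG.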
       
  • DDoS detection and defense mechanism based on cognitive-inspired computing
           in SDN
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Jie Cui, Mingjun Wang, Yonglong Luo, Hong Zhong. Software-defined networking (SDN) provides a promising architecture for future networks and offers advantages such as centralized control, programmability, and a global view. However, it faces numerous security challenges. Distributed denial of service (DDoS) is a security threat to SDN. Most existing schemes only perform DDoS attack detection and do not address how to defend and recover after detecting a DDoS attack. In this paper, a DDoS attack detection and defense mechanism based on cognitive-inspired computing with dual address entropy is proposed. The flow table characteristics of the switch are extracted, and a DDoS attack model is built by incorporating the support vector machine classification algorithm. This mechanism can realize real-time detection and defense at the preliminary stage of the DDoS attack and can restore normal communication in time. The experiments show that our mechanism not only detects attacks quickly but also has a high detection rate and a low false positive rate. More importantly, it can take appropriate defense and recovery measures promptly after the attack has been identified.
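The address-entropy feature underlying the mechanism above can be illustrated with plain Shannon entropy over the addresses seen in a traffic window: a flood toward one victim collapses destination-address entropy. The addresses below are made up, and the SVM classification stage described in the abstract is omitted.

```python
from collections import Counter
import math

def address_entropy(addresses):
    """Shannon entropy (bits) of an address distribution in a traffic window."""
    counts = Counter(addresses)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Normal traffic: flows spread over many destinations.
normal_dst = ["10.0.0.%d" % (i % 8) for i in range(64)]
# DDoS-like traffic: many flows toward one destination, so the
# destination-address entropy collapses toward zero.
attack_dst = ["10.0.0.1"] * 64

assert address_entropy(attack_dst) < address_entropy(normal_dst)
```

In an entropy-based detector, such values (for both source and destination addresses, hence "dual") become input features for a classifier rather than being thresholded directly.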
       
  • Soft-biometrics encoding conditional GAN for synthesis of NIR periocular
           images
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Juan E. Tapia, Claudia Arellano. Soft-biometric information, such as gender, has great potential for applications in security, forensics, and marketing. Unfortunately, there are few gender-labelled databases available, which makes state-of-the-art techniques, such as deep learning, difficult to use. An alternative source of data to train these algorithms is synthetic images. Methods based on Generative Adversarial Networks (GANs) are widely used for generating synthetic images. However, low-level features, such as gender, are not preserved in the images generated by these methods. In this paper, a novel GAN-based algorithm that preserves gender information while generating synthetic images is presented. It uses a latent vector that encodes gender information within the conditional GAN algorithm. The resulting synthetic images were tested using a gender classifier algorithm (CNN). Experiments demonstrate that the proposed method can be a useful tool for the synthesis of gender-labelled images to be used in training deep-learning gender-classification algorithms. As an additional contribution, a novel person-disjoint gender-labelled dataset is presented (UNAB-Gender).
       
  • Decentralized attribute-based conjunctive keyword search scheme with
           online/offline encryption and outsource decryption for cloud computing
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Qian Xu, Chengxiang Tan, Wenye Zhu, Ya Xiao, Zhijie Fan, Fujia Cheng. In recent years, the increasing popularity of cloud computing has led to a trend in which data owners prefer to outsource their data to the cloud to enjoy on-demand storage and computing services. For security and privacy reasons, fine-grained access control and secure data retrieval for the outsourced data are of critical importance. An attribute-based keyword search (ABKS) scheme, a cryptographic primitive that extends the notion of public key encryption with keyword search (PEKS) to the context of attribute-based encryption (ABE), enables the data owner to flexibly share his data with a specified group of users satisfying the access policy and, meanwhile, maintain the confidentiality and searchability of the sensitive data. However, most previous ABKS schemes do not provide a decryption service and require a fully trusted central authority, which is not practical in scenarios where the access policy is written over attributes or credentials issued across different trust domains and organizations. Moreover, the efficiency of storage and computation is also a bottleneck in implementing ABKS schemes. In this paper, for the first time, we propose a decentralized ABKS scheme with conjunctive keyword search for the cloud storage system. Besides multi-keyword search in the decentralized setting, our scheme outsources the costly operations of decryption to the cloud without degrading the user’s privacy. Furthermore, the encryption phase is divided into two phases: an offline pre-computation phase, which is independent of the plaintext message, access policy, and keyword set, and can be performed at any time when the data owner’s device is otherwise not in use; and an online encryption phase, which incurs very little computation cost. Security analysis indicates that our scheme is provably secure in the random oracle model. The asymptotic complexity comparison and simulation results also show that our scheme achieves high computation efficiency.
       
  • Analysis of Doddington zoo classification for user dependent template
           update: Application to keystroke dynamics recognition
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Abir Mhenni, Estelle Cherrier, Christophe Rosenberger, Najoua Essoukri Ben Amara. Biometric authentication systems are increasingly considered in different access control applications. Given that users interact with these authentication systems in completely different ways, several techniques have been developed in the literature to model distinct user categories. The Doddington zoo is a biometric menagerie that defines and labels user groups with animal species to reflect their behavior with biometric systems. This menagerie was developed for different biometric modalities, including keystroke dynamics. The present study proposes a user-dependent adaptive strategy based on the Doddington zoo for the recognition of a user’s keystroke dynamics. The novelty of the proposed approach lies in applying an adaptive strategy specific to the characteristics of each user of the Doddington zoo menagerie, aiming to solve intra-class variation problems. The obtained results demonstrate competitive performance on the significant keystroke dynamics datasets WEBGREYC and CMU.
       
  • Integrity verification of Docker containers for a lightweight cloud
           environment
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Marco De Benedictis, Antonio Lioy. Virtualisation techniques are growing in popularity and importance, given their application to server consolidation and to cloud computing. Remote Attestation is a well-known technique to assess the software integrity of a node. It works well with physical platforms, but not so well with virtual machines hosted in a full virtualisation environment (such as the Xen hypervisor or Kernel-based Virtual Machine), and it is simply not available for a lightweight virtualisation environment (such as Docker). On the contrary, the latter is increasingly used, especially in lightweight cloud platforms, because of its flexibility and limited overhead as compared to virtual machines. This paper presents a solution for security monitoring of a lightweight cloud infrastructure, which exploits Remote Attestation to verify the software integrity of cloud applications during their whole life-cycle. Our solution leverages mainstream tools and architectures, like the Linux Integrity Measurement Architecture, the OpenAttestation platform, and the Docker container engine, making it practical and readily available in a real-world scenario. Compared to a standard Docker deployment, our solution enables run-time verification of container applications at the cost of a limited overhead.
       
  • Energy production prediction via an Internet of Things based machine
           learning system
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Pedro P. Rebouças Filho, Samuel L. Gomes, Navar M. Mendonça e Nascimento, Cláudio M.S. Medeiros, Fatma Outay, Victor Hugo C. de Albuquerque. Wind energy is an interesting source of alternative energy to complement the Brazilian energy matrix. However, one of the great challenges lies in managing this resource, due to its uncertain behavior. This study addresses the estimation of the electric power generation of a wind turbine, so that this energy can be used efficiently and sustainably. Real wind and power data generated by a set of wind turbines installed in a wind farm in Ceará State, Brazil, were used to obtain the power curve of a wind turbine using logistic regression, integrated with Nonlinear Autoregressive neural networks to forecast wind speeds. In our system, the average error in the power generation estimate is 29 W for a 5-day-ahead forecast. We decreased the error of the manufacturer’s power curve by 63% with the logistic regression approach, providing a 2.7-times more accurate estimate. The results have a large potential impact for wind farm managers, since they could drive not only operation and maintenance but also management-level decisions on energy sales.
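The logistic power-curve model mentioned above can be sketched as a standard logistic function of wind speed: near zero at low speeds, saturating at rated power at high speeds. The rated power, steepness, and midpoint below are illustrative assumptions, not the paper's fitted values.

```python
import math

P_RATED = 2000.0   # assumed rated power (W); illustrative only
K = 0.8            # assumed curve steepness; illustrative only
V_MID = 8.0        # assumed midpoint wind speed (m/s); illustrative only

def logistic_power(v):
    """Logistic power curve: estimated output (W) for wind speed v (m/s)."""
    return P_RATED / (1.0 + math.exp(-K * (v - V_MID)))

# Low wind yields little power; wind well above the midpoint approaches rated.
assert logistic_power(2.0) < 0.05 * P_RATED
assert logistic_power(15.0) > 0.95 * P_RATED
```

Fitting such a curve to measured (wind speed, power) pairs is an ordinary regression problem; in the paper's pipeline its input comes from a separate wind-speed forecasting model.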
       
  • Edge computing: A survey
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Wazir Zada Khan, Ejaz Ahmed, Saqib Hakak, Ibrar Yaqoob, Arif Ahmed. In recent years, the Edge computing paradigm has gained considerable popularity in academic and industrial circles. It serves as a key enabler for many future technologies like 5G, Internet of Things (IoT), augmented reality and vehicle-to-vehicle communications by connecting cloud computing facilities and services to the end users. The Edge computing paradigm provides low latency, mobility, and location awareness support to delay-sensitive applications. Significant research has been carried out in the area of Edge computing, which is reviewed here in terms of the latest developments such as Mobile Edge Computing, Cloudlet, and Fog computing, giving researchers more insight into existing solutions and future applications. This article is meant to serve as a comprehensive survey of recent advancements in Edge computing, highlighting the core applications. It also discusses the importance of Edge computing in real-life scenarios where response time constitutes the fundamental requirement for many applications. The article concludes by identifying the requirements of, and discussing open research challenges in, Edge computing.
       
  • Wavelet OFDM with Overlap FDE for non-Gaussian channels in precoded NOMA
           based systems
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Arsla Khan, Saud Khan, Sobia Baig, Hafiz Muhammad Asif, Soo Young Shin. Non-orthogonal multiple access (NOMA) is considered a prominent multiple access technique for fifth generation (5G) systems, mainly because of its high spectral efficiency (SE). A NOMA-based system employs superposition coding (SC) and successive interference cancellation (SIC) at the transmitter and receiver sides, respectively. Classical multi-carrier NOMA adopts orthogonal frequency division multiplexing (OFDM) as the multi-carrier modulation (MCM) technique; however, the high peak to average power ratio (PAPR) and cyclic prefix (CP) inclusion make OFDM an inadequate choice for NOMA. To improve the SE of OFDM-NOMA, the wavelet transform (WT) has been proposed for NOMA. This article proposes new physical layer techniques based on OFDM-NOMA and Wavelet OFDM (WOFDM)-NOMA for the tactile internet and investigates their performance in the presence of Gaussian and non-Gaussian channels. Overlap frequency domain equalization (OFDE) is proposed as a robust equalization technique to improve the performance of SIC in the presence of Gaussian and non-Gaussian channels for OFDM-NOMA and WOFDM-NOMA. Moreover, to enhance the equalization efficiency of the OFDE, linear precoding (LP) in the form of the Walsh Hadamard transform (WHT) is also recommended for OFDM- and WOFDM-based NOMA transceivers, which improves the reliability of the network. The performance of the proposed precoded OFDM-NOMA and precoded WOFDM-NOMA with OFDE is analyzed in the presence of additive white Gaussian noise (AWGN) and impulse noise for a Rayleigh fading channel. Link-level performance of the presented structures for downlink NOMA is evaluated in terms of bit error rate (BER), PAPR, SE and computational complexity (CC). Furthermore, the performance of the proposed architectures is also compared in the presence of impulse noise of different intensities. Computer simulation results show that the proposed precoded WOFDM-NOMA with OFDE transceiver achieves low latency through waveform shaping and enhanced reliability through precoding and efficient equalization. Thus, the proposed advanced physical layer not only performs better than its counterparts, but the presence of OFDE also enhances SIC performance even under impulse noise, at the expense of a slight increase in CC.
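The SC/SIC mechanism at the heart of NOMA can be sketched for a noise-free two-user downlink with BPSK symbols; the power-allocation coefficients below are illustrative, not values from the paper:

```python
import math

# Power allocation: the far user gets the larger share (illustrative values).
a_far, a_near = 0.8, 0.2
s_far, s_near = 1.0, -1.0        # BPSK symbols for the two users

# Superposition coding at the transmitter: one composite signal.
x = math.sqrt(a_far) * s_far + math.sqrt(a_near) * s_near

# Far user: decode directly, treating the near user's signal as noise.
far_decoded = 1.0 if x >= 0 else -1.0

# Near user: decode the far user's symbol first, cancel it (SIC),
# then decode its own symbol from the residual.
residual = x - math.sqrt(a_far) * far_decoded
near_decoded = 1.0 if residual >= 0 else -1.0
```

With noise, fading and impulse interference added, the SIC cancellation step degrades, which is exactly where the paper's OFDE equalization is meant to help.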
       
  • Tac-U: A traffic balancing scheme over licensed and unlicensed bands for
           Tactile Internet
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Yuhan Su, Xiaozhen Lu, Lianfen Huang, Xiaojiang Du, Mohsen Guizani. Low-frequency licensed spectrum resources have become scarce due to the various applications, such as haptic communications, supported by next generation (5G) networks. Therefore, we have to improve the utilization of wireless resources and provide an effective radio resource management solution for Tactile Internet applications. On the other hand, Long Term Evolution Advanced (LTE-A) networks will become an integral part of the 5G ecosystem. Therefore, exploring the potential of LTE-A networks for haptic communications is an important step toward realizing the Tactile Internet. In this paper, we investigate the utilization of LTE in unlicensed bands (LTE-U) technology for resource-constrained tactile networks, which enables a portion of the haptic communications to be carried over unlicensed bands. Moreover, we propose Tac-U, an LTE-U technology-based traffic balancing scheme for the Tactile Internet. We consider the real-time channel interference and traffic conditions of licensed and unlicensed bands to build a mathematical model of the Tactile Internet, in which LTE-U coexists with Wi-Fi and LTE, and formulate the system utility to evaluate the performance of the proposed traffic balancing scheme. Simulation results show that the proposed scheme can quickly calculate the optimal parameters and significantly increase the sum utility of the system.
       
  • Business process outsourcing to cloud containers: How to find the optimal
           deployment?
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Khouloud Boukadi, Rima Grati, Molka Rekik, Hanêne Ben-Abdallah. Containers are a new service model that empowers cloud computing by offering horizontally scalable systems while bypassing the high-performance challenges of traditional hypervisors. In the business process management context, Containers-as-a-Service (CaaS) can be used to outsource business processes to the cloud and allow an enterprise to bundle its processes and data in a simpler and more performance-oriented manner. To profit from containers, however, an enterprise must have a means to identify the optimal resource allocation. Towards this end, we propose a system architecture for optimal container-based deployment of business processes. The proposed system architecture relies on our extension of the ContainerCloudSim simulator to estimate the execution time of business processes deployed according to the CaaS model. In addition, it encloses a business process deployment optimizer. To develop the latter, we examine a linear program and a genetic algorithm to find the optimal deployment of a business process on cloud containers. We show experimentally the effective performance of container-based versus VM-based deployment, and of the linear program versus the First-Fit container strategy and the genetic algorithm.
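The First-Fit container strategy used as a baseline can be sketched as a simple bin-packing heuristic; CPU demands are expressed here in tenths of a core, and all values are made up for illustration:

```python
def first_fit(tasks, capacity):
    """Assign each task (CPU demand) to the first container with room,
    opening a new container of fixed `capacity` when none fits."""
    containers = []          # remaining capacity per container
    placement = []           # container index chosen for each task
    for demand in tasks:
        for i, free in enumerate(containers):
            if free >= demand:
                containers[i] -= demand
                placement.append(i)
                break
        else:
            # no existing container fits: open a new one
            containers.append(capacity - demand)
            placement.append(len(containers) - 1)
    return placement, len(containers)

# Five tasks with demands in tenths of a core, containers of one core each.
placement, n_containers = first_fit([5, 7, 3, 2, 4], capacity=10)
```

The linear program and the genetic algorithm in the paper search for better placements than this greedy baseline, at higher computational cost.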
       
  • STRATFram: A framework for describing and evaluating elasticity strategies
           for service-based business processes in the cloud
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Aicha Ben Jrad, Sami Bhiri, Samir Tata. In recent years, growing attention has been paid to Cloud Computing as a new computing paradigm for executing and handling operations/processes in an efficient and cost-effective way. Cloud Computing's elasticity and its flexibility in service delivery have been the most important features behind this attention, encouraging companies to migrate their operations/processes to the cloud to ensure the required QoS while reducing their expenses. Elasticity management has been considered a pivotal issue in the IT community, which works on finding the right tradeoffs between QoS levels and operational costs by developing novel methods and mechanisms. However, controlling process elasticity and defining non-trivial elasticity strategies are challenging issues. Moreover, despite the growing attention paid to the cloud and its elasticity property in particular, there is still a lack of solutions that support the evaluation of the elasticity strategies used to ensure the elasticity of processes at the service level. In this paper, we present a framework for describing and evaluating elasticity strategies for Service-based Business Processes (SBPs), called StratFram. It is composed of a set of domain-specific languages designed to generalize the use of the framework and to facilitate the description of the elements needed to evaluate elasticity strategies before using them in a real cloud environment. Using StratFram, SBP holders can define: (i) an elasticity model with specific elasticity capabilities on which they want to define and evaluate their elasticity strategies, (ii) a SBP model for which the elasticity strategies will be defined and evaluated, (iii) a set of elasticity strategies based on the elasticity capabilities of the defined elasticity model and for the provided SBP model, and (iv) a simulation configuration that identifies simulation properties/elements. The evaluation of elasticity strategies consists of producing a set of plots that allow the analysis and comparison of strategies. Our contributions and developments provide Cloud tenants with facilities to choose elasticity strategies that fit their business processes and usage behaviors.
       
  • Extensible persistence as a service for containers
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Mohamed Mohamed, Robert Engel, Amit Warke, Shay Berman, Heiko Ludwig. Micro-services and containers are becoming the de-facto way of building software and deploying workloads in an agile manner. Following these paradigms eventually leads to replacing old-fashioned monolithic applications with compositions of single-purpose, independent services communicating with each other through light-weight mechanisms. These services are often deployed in containers, which have proven to be a suitable runtime platform for micro-services. In the container landscape, different container frameworks enable elastic horizontal scaling for workloads that are designed as so-called 12-factor applications. These frameworks were built to accommodate stateless services and require that any state be maintained out-of-band of the platform. Support for stateful services in a scalable way is not well covered, particularly in scenarios where state is accessed from workloads in different deployment platforms. It is noteworthy that these workloads cover a wide spectrum of legacy software applications. In order to efficiently support these kinds of workloads, persistent storage such as filesystem or block storage is necessary to maintain the data and make it available to the containers. Managing different types of persistent storage across heterogeneous container orchestrators is a challenging task that has typically been solved ad hoc for each storage backend or container orchestrator. In this paper, we present the Ubiquity framework, which provides seamless access to persistent storage across different container orchestrators (Cloud Foundry, OpenShift, Kubernetes, Docker and Mesos). Ubiquity is extensible to other container frameworks and to different types of file and block storage systems, which can be managed independently of the container orchestrator. Ubiquity makes it easy to onboard stateful services in heterogeneous container environments backed by heterogeneous storage systems, extending the efficiency gains of containerization.
       
  • Bicycle traffic and its interaction with motorized traffic in an
           agent-based transport simulation framework
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Dominik Ziemke, Simon Metzler, Kai Nagel. Cycling, as an inexpensive, healthy, and efficient mode of transport for everyday traveling, is becoming increasingly popular. While many cities are promoting cycling, it is rarely included in transport models and systematic policy evaluation procedures. The purpose of this study is to extend the agent-based transport simulation framework MATSim to model bicycle traffic more realistically. The network generation procedure is enriched to include attributes that are relevant for cyclists (e.g. road surfaces, slopes). Travel speed computations, plan scoring, and routing are enhanced to take these infrastructure attributes into account. The scoring, i.e. the evaluation of simulated daily travel plans, is furthermore enhanced to account for traffic events that emerge in the simulation (e.g. being passed by cars), which have an additional impact on cyclists' decisions. Inspired by an evolutionary computing perspective, a randomizing router was implemented to enable cyclists to find realistic routes. It is discussed in detail why this approach is both feasible in practical terms and conceptually consistent with MATSim's co-evolutionary simulation approach. Meaningful simulation results are obtained for an illustrative scenario, which indicates that the developed methods will make real-world scenarios more realistic in terms of the representation of bicycle traffic. Because it relies exclusively on open data, the approach is spatially transferable.
       
  • Performance evaluation of a Fog-assisted IoT solution for e-Health
           applications
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Pedro H. Vilela, Joel J.P.C. Rodrigues, Petar Solic, Kashif Saleem, Vasco Furtado. Cloud Computing has been a predominant approach for the development of Internet of Things (IoT) solutions. However, to meet the requirements of real-time and latency-sensitive applications in healthcare, a new computing paradigm called Fog Computing, which extends Cloud resources to the edge of the network, proves to be an effective tool. This work studies the contribution of the Fog Computing paradigm applied to healthcare, highlighting its main benefits regarding latency, network usage, and power consumption. Based on these parameters, a Fog-assisted health monitoring system is proposed and its performance evaluation and demonstration are carried out. The results demonstrate the potential of this approach to minimise data traffic in the core of the network, since data is analysed locally; keeping health information local also enhances data security and provides better insight into the patient's health status.
       
  • Smart devices in the social loops: Criteria and algorithms for the
           creation of the social links
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Luigi Atzori, Claudia Campolo, Bin Da, Roberto Girau, Antonio Iera, Giacomo Morabito, Salvatore Quattropani. Empowering smart devices with the capability of establishing social links with other peers is gaining momentum. Indeed, it is a feature with great potential to: foster device collaboration; speed up and make more effective the discovery of services; and evaluate the trust of devices by exploiting functionalities based on the opinions of peers in the social networks. Whereas several architectural solutions have been proposed, together with different principles to create the devices' social network, how to establish and maintain the devices' social links has often been overlooked. This paper addresses this problem by proposing an architectural solution to manage the social links established among devices, identifying which information should be processed to trigger the creation of social links, and defining algorithmic solutions to keep the social network among the devices up to date. In the conceived approach, the NSP (Network Service Provider) is responsible for relationship management on behalf of the device owners and makes this information available to third parties for the deployment of value-added services that exploit the devices' social network. An experimental platform has been developed to evaluate the feasibility of the proposal and to analyze the traffic and computational overhead incurred to establish social relationships among devices under different settings.
       
  • A proposal for bridging application layer protocols to HTTP on IoT
           solutions
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Mauro A.A. da Cruz, Joel J.P.C. Rodrigues, Pascal Lorenz, Petar Solic, Jalal Al-Muhtadi, Victor Hugo C. Albuquerque. In the Internet of Things (IoT), data is handled and stored by software known as middleware (located on a server). IoT devices send such data through an application layer protocol that may be different from those supported by the middleware. This paper proposes an application layer gateway, called MiddleBridge, that translates Constrained Application Protocol (CoAP), Message Queuing Telemetry Transport (MQTT), Data Distribution Service (DDS), and WebSocket messages into HTTP. MiddleBridge can be deployed on any computer with a Java virtual machine because all servers are embedded in its code, enabling IoT gadgets to transmit data to any REST endpoint seamlessly. With the proposed approach, devices can send a smaller message to an intermediary (MiddleBridge), which restructures it and forwards it to a middleware, reducing the time a device spends transmitting. The created graphical user interface allows users to configure message conversion and forwarding at runtime. The efficiency of the approach is evaluated through packet sizes and response times, considering data sent to the Orion context broker (a FIWARE project). Results show that the packets sent by an IoT device through MiddleBridge are 17 times smaller than a straight HTTP request to the server, significantly reducing transmission time.
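The bridging idea, where a device sends a compact payload and the gateway expands it into the HTTP/JSON body the middleware expects, can be sketched as follows; the 'key=value;...' wire format and the JSON layout are hypothetical, not MiddleBridge's actual formats:

```python
import json

def bridge(compact):
    """Expand a compact 'key=value;...' device payload (a hypothetical
    format) into the JSON body an HTTP middleware endpoint might expect.
    The device transmits only `compact`; the gateway pays the JSON cost."""
    fields = dict(pair.split("=") for pair in compact.split(";"))
    return json.dumps({"attributes": fields})

compact_payload = "temp=21.5;hum=40"
body = bridge(compact_payload)

# The savings come from the device sending the short form while the
# gateway produces (and transmits onward) the larger structured form.
saving = len(body) - len(compact_payload)
```

In the real system the saving is much larger because the device also avoids HTTP headers and connection overhead, which this sketch does not model.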
       
  • LISA: Visible light based initialization and SMS based authentication of
           constrained IoT devices
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Toni Perković, Mario Čagalj, Tonko Kovačević. Future healthcare systems, smart homes and power management systems will all involve a large number of smart inter-connected wireless devices (such as wireless sensor and IoT nodes). One of the major challenges in securing these systems is loading initial cryptographic credentials into a relatively large number of wireless devices. This challenge is further aggravated by the fact that many of these technologies and systems involve low-cost and highly interface-constrained devices, lacking the usual wired interfaces, displays, keypads, and the like. In this paper we propose two novel multichannel key deployment schemes for wireless (sensor) networks, LISA and LISAT, that only require the presence of a light source device, such as a multi-touch screen, tablet or smartphone. Both key deployment schemes are secret key-based multichannel protocols and are suitable for interface/resource constrained wireless devices. We use the one-way visible light channel (VLC) of multitouch screens (the flashing displays) to initialize sensor devices in a secure, usable and scalable way. More interestingly, the LISAT protocol is suited to the initialization of geographically remote devices via a trusted third-party device. We provide formal analysis of the proposed protocols using an automated verification tool. From the user's perspective, device initialization is quite simple: the user simply places the devices on top of the multitouch screen, after which the remaining process is fully automated. Indeed, through experiments with 34 users we showed that our solution is user-friendly and performs well in terms of time consumption.
       
  • An integrated approach for multi-objective optimisation and MCDM of energy
           internet under uncertainty
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Zhaoxi Hong, Yixiong Feng, Zhiwu Li, Yong Wang, Hao Zheng, Zhongkai Li, Jianrong Tan. The Energy Internet represents a critical breakthrough, regarded as the synthetic product of information communication technologies and energy technologies. It is highly conducive to providing possible solutions for transforming energy management methods, reducing carbon emissions and achieving sustainable development. The progress of the Energy Internet has drawn extensive attention from scholars. However, multi-objective optimisation and multi-criteria decision making (MCDM) for the Energy Internet under uncertainty still face several problems. (1) The actual number of optimisation objectives in Energy Internet design exceeds two, making it difficult to apply conventional intelligent algorithms. (2) Most intelligent algorithms designed for two-objective optimisation models are unsatisfactory here. (3) The MCDM process for the Energy Internet has not been considered systematically. To solve these problems, this paper proposes an integrated approach combining an improved great deluge algorithm (GDA), evidence reasoning (ER), interval algorithms, and fuzzy grey correlation analysis for multi-objective optimisation and MCDM of the Energy Internet under uncertainty. The practicality and effectiveness of the proposed approach are illustrated by a visual case study of Energy Internet design.
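A plain (unimproved) great deluge algorithm can be sketched as follows: any candidate whose cost stays below a steadily lowered "water level" is accepted. The objective, step size and decay rate below are illustrative, and the paper's improvements to GDA are not reproduced here:

```python
import random

def great_deluge(f, x0, level, decay, steps, seed=1):
    """Minimise f: accept any neighbour whose cost is below the water
    level; the level is lowered (multiplied by `decay`) every step."""
    rng = random.Random(seed)
    x, best = x0, x0
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)   # random neighbour move
        if f(cand) <= level:                # under water level: accept
            x = cand
            if f(x) < f(best):
                best = x
        level *= decay                      # the deluge keeps rising
    return best

# Toy objective with minimum at x = 3 (illustrative).
best = great_deluge(lambda x: (x - 3.0) ** 2, x0=0.0,
                    level=10.0, decay=0.995, steps=2000)
```

Unlike simulated annealing, the acceptance rule has no probabilistic component; the single decaying level is the only control parameter, which is why GDA is popular as a base for improved variants.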
       
  • Container-based fog computing architecture and energy-balancing scheduling
           algorithm for energy IoT
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Juan Luo, Luxiu Yin, Jinyu Hu, Chun Wang, Xuan Liu, Xin Fan, Haibo Luo. The traditional architecture of fog computing assumes one data center and multiple fog nodes, and is unable to adapt to the current development of private clouds. In addition, virtual machines, the resource unit carried over from cloud computing to fog computing, cannot satisfy the requirements of fog computing. Furthermore, limited battery capacity is one of the major constraints in the cloud-to-fog-to-sensor pattern of the Energy Internet scenario. We propose a multi-cloud to multi-fog architecture and design two kinds of service models that employ containers to improve the resource utilization of fog nodes and reduce service delay. Based on the two service models, we present a task scheduling algorithm for energy balancing. The algorithm considers the transmission energy consumption of terminal devices and uses a dynamic threshold strategy to schedule requests in real time, thereby guaranteeing the energy balance of terminal devices without increasing transmission delay. Experimental results show that our proposed service models and scheduling algorithm can reduce service latency, improve fog node efficiency, and prolong the WSN life cycle through energy balancing.
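A drastically simplified version of energy-balancing scheduling sends each request to the node with the most remaining energy, keeping battery drain even across nodes; the paper's actual algorithm uses a dynamic threshold on transmission energy, which this sketch omits, and all numbers below are made up:

```python
def schedule(requests, nodes):
    """Greedy energy-balancing sketch: serve each request (with a fixed
    energy cost) from the node that currently has the most energy left,
    so that no single node's battery is drained disproportionately."""
    assignment = []
    for cost in requests:
        idx = max(range(len(nodes)), key=lambda i: nodes[i])
        nodes[idx] -= cost          # serving the request costs energy
        assignment.append(idx)
    return assignment, nodes

# Four equal-cost requests over three nodes with different batteries.
assignment, remaining = schedule([2.0, 2.0, 2.0, 2.0],
                                 nodes=[10.0, 9.0, 8.0])
```

After scheduling, the remaining energies stay close together, which is the balancing property the paper's algorithm guarantees under more realistic delay constraints.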
       
  • Evaluation and efficiency comparison of evolutionary algorithms for
           service placement optimization in fog architectures
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Carlos Guerrero, Isaac Lera, Carlos Juiz. This study compares three evolutionary algorithms for the problem of fog service placement: a weighted sum genetic algorithm (WSGA), the non-dominated sorting genetic algorithm II (NSGA-II), and the multiobjective evolutionary algorithm based on decomposition (MOEA/D). A model for the problem domain (fog architecture and fog applications) and for the optimization (objective functions and solutions) is presented. Our main concerns are optimizing network latency, service spread, and resource usage. The algorithms are evaluated on a random Barabasi–Albert network topology with 100 devices and two experiment sizes of 100 and 200 application services. The results showed that NSGA-II obtained the highest optimization of the objectives and the highest diversity of the solution space. By contrast, MOEA/D was better at reducing execution times. The WSGA algorithm did not show any benefit over the other two algorithms.
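The notion of Pareto dominance that NSGA-II's non-dominated sorting relies on can be sketched as follows, with made-up (latency, resource-use) objective values for candidate placements:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective (minimisation)
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(points):
    """Return the Pareto front: points dominated by no other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Objectives: (latency, resource use) for candidate placements (made up).
candidates = [(2.0, 5.0), (3.0, 3.0), (4.0, 4.0), (1.0, 6.0)]
front = non_dominated(candidates)
```

A WSGA collapses these objectives into one weighted score, whereas NSGA-II keeps the whole front, which is why it can report higher solution diversity.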
       
  • Cloud service recommendation based on unstructured textual information
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Yuanchun Jiang, Dandan Tao, Yezheng Liu, Jianshan Sun, Haifeng Ling. With the rapid growth of cloud service platforms, it is increasingly difficult for users to discover cloud services satisfying their personalized demands. Unstructured textual information, such as service descriptive texts and service tags, contains rich features of cloud services and is useful for cloud service platforms to construct personalized service recommendations. This paper proposes a two-stage model for cloud service recommendation by integrating the information of service descriptive texts and service tags. In the first stage, we propose a Hierarchical Dirichlet Processes (HDP) model to cluster cloud services into an optimal number of clusters based on descriptive texts. In the second stage, we propose a Personalized PageRank algorithm based on service tags to rank and recommend cloud services within each cluster. Our experiments on a real data set show that the proposed two-stage model segments cloud services well and obtains more accurate recommendation results than the baseline methods.
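The second-stage ranking can be illustrated with a small power-iteration Personalized PageRank, where the teleport distribution encodes the user's tag preferences; the toy service graph below is hypothetical:

```python
def personalized_pagerank(adj, personal, damping=0.85, iters=100):
    """Power iteration for Personalized PageRank on an adjacency dict.
    Instead of teleporting uniformly, the walk restarts according to
    `personal`, biasing the ranking toward the user's preferred services."""
    nodes = list(adj)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) * personal.get(n, 0.0) for n in nodes}
        for n, out in adj.items():
            share = damping * rank[n] / len(out)
            for m in out:
                new[m] += share
        rank = new
    return rank

# Toy graph of services linked by shared tags (hypothetical example);
# the user's tag profile points entirely at service "s1".
adj = {"s1": ["s2", "s3"], "s2": ["s1"], "s3": ["s1", "s2"]}
rank = personalized_pagerank(adj, personal={"s1": 1.0})
```

In the paper's pipeline this ranking is run separately inside each HDP-derived cluster, so the candidate graphs stay small.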
       
  • A novel directional and non-local-convergent particle swarm optimization
           based workflow scheduling in cloud–edge environment
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Ying Xie, Yuanwei Zhu, Yeguo Wang, Yongliang Cheng, Rongbin Xu, Abubakar Sadiq Sani, Dong Yuan, Yun Yang. With the increasing popularity of the Internet of Things (IoT), edge computing has become the key driving force for providing computing resources, storage and network services closer to the edge, on the basis of cloud computing. Workflow scheduling in such a distributed environment is regarded as an NP-hard problem, and existing approaches may not work well for task scheduling with multiple optimization goals in complex applications. As an intelligent algorithm, particle swarm optimization (PSO) has the advantages of few parameters, algorithmic simplicity and fast convergence, and is widely applied to workflow scheduling. However, it also has shortcomings, such as a tendency to fall into local optima and occasional difficulty in obtaining the true optimal solution. To address this issue, we first formalize the scheduling problem of workflow applications and an objective function based on two optimization factors, providing a theoretical foundation for the workflow scheduling strategy. This paper then proposes a novel directional and non-local-convergent particle swarm optimization (DNCPSO) that employs a non-linear inertia weight with selection and mutation operations in a directional search process, which can reduce makespan and cost dramatically and obtain a good compromise solution. The results of simulation experiments based on various real and random workflow examples show that DNCPSO achieves better performance than other classical and improved algorithms, sufficiently demonstrating its effectiveness and efficiency.
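A baseline PSO with a non-linearly decaying inertia weight, one ingredient DNCPSO builds on, can be sketched as follows; the swarm size, coefficients and test function are illustrative, and DNCPSO's directional search, selection and mutation operations are omitted:

```python
import random

def pso(f, dim, n=20, iters=200, seed=7):
    """Minimal PSO minimising f. The inertia weight decays non-linearly
    (quadratically) from 0.9 to 0.4 over the run: large early weights
    favour exploration, small late weights favour convergence."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for t in range(iters):
        w = 0.4 + 0.5 * (1 - t / iters) ** 2    # non-linear inertia weight
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

# Sphere function: global minimum at the origin (illustrative objective).
best = pso(lambda x: sum(v * v for v in x), dim=2)
```

For workflow scheduling, each particle would instead encode a task-to-resource assignment and f would combine makespan and cost, per the paper's objective function.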
       
  • Multi-objective resource allocation for Edge Cloud based robotic workflow
           in smart factory
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Mahbuba Afrin, Jiong Jin, Ashfaqur Rahman, Yu-Chu Tian, Ambarish Kulkarni. Multi-robotic services are widely used to enhance the efficiency of Industry 4.0 applications, including emergency management in the smart factory. The workflow of these robotic services consists of data-hungry, delay-sensitive and compute-intensive tasks. Generally, robots are not rich in computational power and storage capabilities, so it is beneficial to leverage available Cloud resources to complement robots in executing robotic workflows. When multiple robots and Cloud instances work in a collaborative manner, optimal resource allocation for the tasks of a robotic workflow becomes a challenging problem. The diverse energy consumption rates of both robot and Cloud instances, and the cost of executing a robotic workflow in such a distributed manner, further intensify the resource allocation problem. Since the tasks are inter-dependent, inconvenience in data exchange between local robots and the remote Cloud also degrades the service quality. Therefore, in this paper, we address the simultaneous optimization of makespan, energy consumption and cost while allocating resources for the tasks of a robotic workflow. As a use case, we consider resource allocation for the robotic workflow of an emergency management service in a smart factory. We design an Edge Cloud based multi-robot system to overcome the limitations of remote Cloud based systems in exchanging delay-sensitive data. The resource allocation for the robotic workflow is modelled as a constrained multi-objective optimization problem and solved through a multi-objective evolutionary approach, namely the NSGA-II algorithm. We have redesigned the NSGA-II algorithm by defining a new chromosome structure, a pre-sorted initial population and a mutation operator. It is further augmented by selecting, while crossing over the chromosomes, the solution on the non-dominated front closest to the origin. Experimental results based on a synthetic workload demonstrate that our augmented NSGA-II algorithm outperforms state-of-the-art works by at least 18% in optimizing makespan, energy and cost attributes across various scenarios.
       
  • A secure IoT cloud storage system with fine-grained access control and
           decryption key exposure resistance
    • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Shengmin Xu, Guomin Yang, Yi Mu, Ximeng Liu. The Internet of Things (IoT) cloud provides a practical and scalable solution to accommodate data management in large-scale IoT systems by migrating data storage and management tasks to cloud service providers (CSPs). However, there also exist many data security and privacy issues that must be well addressed in order to allow wide adoption of the approach. To protect data confidentiality, attribute-based cryptosystems have been proposed to provide fine-grained access control over encrypted data in the IoT cloud. Unfortunately, the existing attribute-based solutions are still insufficient in addressing some challenging security problems, especially when dealing with user secret keys that are compromised or leaked for various reasons. In this paper, we present a practical attribute-based access control system for the IoT cloud by introducing an efficient revocable attribute-based encryption scheme that permits the data owner to efficiently manage the credentials of data users. Our proposed system can efficiently deal with both secret key revocation for corrupted users and accidental decryption key exposure for honest users. We analyze the security of our scheme with formal proofs, and demonstrate the high performance of the proposed system via experiments.
       
  • Data-driven main color map feature learning, design and simulation for
           smart ethnic cloth
     • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Tao Hu, Chunxia Xiao. How to protect and develop traditional ethnic cloth culture is a key problem of current societal interest. Building on earlier research into smart techniques for pattern layout and texture-appearance simulation of traditional ethnic cloth, this paper proposes a data-driven intelligent design and simulation model based on main color feature learning. We employ a combination-based design technique, which uses pattern-element data and skeleton data to design a digital layout for ethnic cloth. We use an octree to quantize the color map of the designed ethnic cloth layout and extract the main color map via k-means clustering. Using a cubic convolution interpolation algorithm with a yarn structure template, we render each region segmented by the main color map. This generates a faithful representation of the texture appearance of the designed layout, shown as a realistic fabric material. Finally, the designed layout is transferred to an intelligent loom built on the industrial Internet of Things. We design several traditional ethnic cloth (Tujia brocade) layouts and simulate their textures with our method to analyze its applicability and validity. We also compare the design and simulation results with previously proposed algorithms; the comparison indicates that our model can design complex patterns and simulate the exquisite material of Tujia brocade.
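The main-color extraction step the abstract mentions can be illustrated with a tiny k-means over RGB pixels. This is a self-contained sketch only; the paper combines it with octree quantization, which is omitted here, and all names and sample colors are hypothetical:

```python
import random

def kmeans_colors(pixels, k, iters=20, seed=0):
    """Cluster RGB pixels into k main colors with plain k-means."""
    rng = random.Random(seed)
    centers = rng.sample(pixels, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            # assign pixel to nearest center (squared Euclidean distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # recompute center as per-channel mean
                centers[i] = tuple(sum(ch) / len(cl) for ch in zip(*cl))
    return centers

# a toy "cloth" with two dominant colors: red-ish and blue-ish
pixels = [(250, 10, 10)] * 8 + [(10, 10, 240)] * 8
main = kmeans_colors(pixels, k=2)
```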
       
  • An adaptive multi-level caching strategy for Distributed Database System
     • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Feng Lu, Ziqian Shi, Lin Gu, Hai Jin, Laurence Tianruo Yang. The Internet of Things (IoT) has become a promising technology for addressing societal and industrial challenges by connecting geo-distributed and diverse devices to create smart systems worldwide. Nowadays, Distributed Database Systems (DDBS) are widely used in IoT frameworks to improve the performance of massive data access. However, most recent caching techniques focus on collaborative caching and neglect the unbalanced workloads on hot data nodes, which may become a bottleneck for latency-sensitive IoT services. To tackle this issue, we propose an adaptive multi-level caching strategy that resolves the performance bottleneck caused by hot data access and thereby improves data access performance in DDBS. The essential idea of our strategy is to dynamically adjust cache resource allocation and cache size across data nodes according to the real-time access rate. In this way, more resources can be allocated to hot data nodes to speed up queries and eliminate the bottleneck. In addition, an efficient cache replacement algorithm is proposed and implemented in our DDBS in place of the traditional Least Recently Used (LRU) algorithm. Testbed experiments indicate that the performance of DDBS with our adaptive multi-level caching strategy improves by 20% compared with traditional strategies.
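The core idea of sizing per-node caches by observed access rate could look like the following sketch. This is a hypothetical stand-in policy (proportional allocation), not the paper's actual algorithm; node names and numbers are illustrative:

```python
def allocate_cache(total_slots, access_rates):
    """Split a global cache budget across data nodes in proportion
    to their observed access rates, so hot nodes get more slots."""
    total = sum(access_rates.values())
    alloc = {node: int(total_slots * rate / total)
             for node, rate in access_rates.items()}
    # hand any rounding leftover to the hottest node
    hottest = max(access_rates, key=access_rates.get)
    alloc[hottest] += total_slots - sum(alloc.values())
    return alloc

# node-a is the "hot" node receiving 90% of queries
rates = {"node-a": 900, "node-b": 80, "node-c": 20}
plan = allocate_cache(1000, rates)
```

Re-running the allocation on a sliding window of access counts gives the dynamic adjustment the abstract describes.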
       
  • The intelligent obstacle sensing and recognizing method based on D–S
           evidence theory for UGV
     • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Yibing Zhao, Feng Ding, Jining Li, Lie Guo, Wanfeng Qi. Unmanned Ground Vehicles (UGVs) have been widely used in the Internet of Things, military applications, and outer space exploration. Reliable intelligent obstacle recognition is a key requirement and significant challenge for UGVs. To perform robust obstacle recognition under all conditions, data from multiple sensors must be processed and fused. This paper investigates obstacle sensing and recognition by extracting obstacle feature evidence from a camera and a laser scanner. The feature evidence extracted from the camera includes S/V in HSV color space. The laser feature evidence comprises a penetrability value feature and a λ3/λ2 feature derived from the covariance matrix. The key step is then to calculate the basic probability assignment from these feature evidences. By applying Dempster's combination rule, the current obstacle is recognized under D–S evidence theory. Test results show the validity of this new method and its application in cross-country environment perception; the work also offers guidance for the Internet of Vehicles, including vehicle safety control and driver assistance technology, and the related algorithm can potentially be used in the Internet of Things or big data processing.
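Dempster's rule of combination, the fusion step named above, can be written compactly for two basic probability assignments. The frame of discernment and mass values below are invented for illustration; only the combination rule itself is standard:

```python
from itertools import product

def dempster(m1, m2):
    """Combine two basic probability assignments (dicts mapping
    frozenset focal elements to masses) via Dempster's rule,
    normalizing out the conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass assigned to the empty set
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

CAR, ROCK = frozenset({"car"}), frozenset({"rock"})
BOTH = CAR | ROCK  # total ignorance between the two hypotheses
camera = {CAR: 0.6, BOTH: 0.4}
laser = {CAR: 0.5, ROCK: 0.3, BOTH: 0.2}
fused = dempster(camera, laser)
```

The fused masses always re-normalize to 1, and agreement between sensors sharpens belief in the shared hypothesis.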
       
  • Deep reinforcement learning with its application for lung cancer detection
           in medical Internet of Things
     • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Zhuo Liu, Chenhui Yao, Hang Yu, Taihua Wu. Recently, deep reinforcement learning has achieved great success by integrating deep learning models into reinforcement learning algorithms in various applications such as computer games and robotics. In particular, combining deep reinforcement learning with the medical big data generated and collected from the medical Internet of Things is promising for computer-aided diagnosis and treatment. In this paper, we focus on the potential of deep reinforcement learning for lung cancer detection, as lung tumors afflict many people and about 1.8 million patients died from lung cancer in 2018. Early detection and diagnosis of lung tumors can significantly improve treatment outcomes and prolong survival. In this work, we present several representative deep reinforcement learning models that could potentially be used for lung cancer detection. Furthermore, we summarize the common types of lung cancer and the main characteristics of each type. Finally, we point out the open challenges and possible future research directions of applying deep reinforcement learning to lung cancer detection, which is expected to promote the evolution of smart medicine with the medical Internet of Things.
       
  • CL-ADSP: Cross-Layer Adaptive Data Scheduling Policy in Mobile Ad-hoc
           Networks
     • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Varun Kumar Sharma, Lal Pratap Verma, Mahesh Kumar. Multi-Path Transmission Control Protocol (MPTCP), a leading transport layer protocol, fully utilizes multi-path communication technology. MPTCP's vital objective is to exploit the multi-interface capability available at the end-host. Nevertheless, the protocol suffers from low channel utilization because of its abrupt congestion window (cwnd) growth policy and hence experiences severe throughput degradation. Further, the multiple available transmission paths severely degrade MPTCP's performance, as substantial re-ordering is required at the receiver end owing to dissimilar wireless path characteristics (e.g., bandwidth, loss rate (whether due to congestion or the unreliability of the wireless channel), and delay). This leads to a serious rise in both buffer blocking and unnecessary retransmissions. A few solutions have been suggested that effectively schedule and balance the load over available qualified paths, but most of them do not handle redundant cwnd growth adaptations. This paper proposes a Cross-Layer based Adaptive Data Scheduling Policy (CL-ADSP) for MPTCP, which effectively handles dissimilar path characteristics and unreasonable cwnd growth adaptations. CL-ADSP uses a delay-variation-based adaptive fast retransmission policy. Moreover, CL-ADSP dynamically exploits the average number of MAC-layer retries as a parameter that provides an early sign of network overload. Simulation results demonstrate the effectiveness of CL-ADSP in terms of better throughput and improved file transfer time with lower End-To-End (ETE) delay and fewer timeout packets.
In fact, compared with MPTCP, CL-ADSP offers 24% better throughput with 36% fewer timeout packets, a 10% improved packet delivery ratio, and a 2.25% lower average ETE delay. CL-ADSP also offers 23% improved file transfer time and a 24% lower normalized routing load over MPTCP.
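Using the average MAC-layer retry count as an early overload signal, as the abstract describes, could be sketched with simple exponential smoothing. The smoothing rule, threshold, and sample values here are hypothetical, not CL-ADSP's actual estimator:

```python
def ewma_retries(samples, alpha=0.2):
    """Exponentially weighted moving average of per-interval
    MAC-layer retry counts; a rising value flags congestion early."""
    avg = None
    for s in samples:
        avg = s if avg is None else alpha * s + (1 - alpha) * avg
    return avg

# per-interval average retry counts observed at the MAC layer
history = [1, 1, 2, 4, 7]
level = ewma_retries(history)
overloaded = level > 2.0  # illustrative threshold
```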
       
  • Block withholding game among bitcoin mining pools
     • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Alireza Toroghi Haghighat, Mehdi Shajari. Although mining pools smooth out Bitcoin rewards and make them more predictable for an individual miner, they concentrate power in the pool's operator. There are multiple huge mining pools, and each of them may possess up to 30% of the total computation power of the Bitcoin network (the same applies to some other altcoins). Putting such enormous computation power in the hands of pool operators gives them an incentive to misuse their power over the network. One way to misuse this power is to launch a block withholding attack against other mining pools. Indeed, this ability starts a block withholding game among the pool operators. Some researchers have analyzed such a game, but their analyses were limited because they considered simple scenarios, e.g., a single-shot game between only two mining pools. In this paper, we first demonstrate that the block withholding game is a stochastic game with finitely many states and actions. Then, we use a reinforcement learning method to analyze this game. Our simulation results show that over the past four years, by launching a block withholding attack, some pools had the potential to reach the majority (51%) of the total computation power of the network with much lower initial computation power (even less than 25% of the total).
       
  • Trust enforcement through self-adapting cloud workflow orchestration
     • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Hadeel T. El-Kassabi, M. Adel Serhani, Rachida Dssouli, Alramzana N. Navaz. Providing runtime intelligence for a workflow in a highly dynamic cloud execution environment is a challenging task due to continuously changing cloud resources. Guaranteeing a certain level of workflow Quality of Service (QoS) during execution requires continuous monitoring to detect any performance violation due to resource shortage or even cloud service interruption. Most orchestration schemes are either configuration- or deployment-dependent and do not cope with dynamically changing environment resources. In this paper, we propose a workflow orchestration, monitoring, and adaptation model that relies on trust evaluation to detect QoS performance degradation and perform an automatic reconfiguration to guarantee workflow QoS. The monitoring and adaptation schemes are able to detect and repair different types of real-time errors and trigger different adaptation actions, including workflow reconfiguration, migration, and resource scaling. We formalize the cloud resource orchestration using a state machine that efficiently captures different dynamic properties of the cloud execution environment. In addition, we use a model checker to validate our model in terms of reachability, liveness, and safety properties. Extensive experimentation is performed using a health monitoring workflow we have developed to handle the MIMIC-III (Medical Information Mart for Intensive Care III) dataset, deployed over a Docker swarm cluster. A set of scenarios was carefully chosen to evaluate workflow monitoring and the different adaptation schemes we have implemented. The results show that our automated workflow orchestration model is self-adapting and self-configuring, reacts efficiently to changes, and adapts accordingly while supporting a high level of workflow QoS.
       
  • A novel method for parallel indexing of real time geospatial big data
           generated by IoT devices
     • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Suresh V. Limkar, Rakesh Kumar Jha. IoT produces a huge amount of big data, as it comprises billions of devices interconnected through the Internet. A major share of today's big data is geospatial, and it grows rapidly every year. Processing such massive real-time geospatial big data requires a scalable, efficient indexing method. The R-tree and its variants have emerged as the most efficient and widely adopted indexing methods for the management and processing of geospatial data. The current literature on parallel construction of R-tree indexes has two shortcomings: all methods consider only two-dimensional geospatial data, and all are based on the MapReduce framework. As the number of dimensions increases, the complexity of index creation also increases; moreover, the MapReduce framework works only on static data and consumes considerable disk space and time, which leads to high latency and affects the fault tolerance of the entire system. To overcome these issues, we propose a novel method for the parallel construction of the R-tree and its variants using Apache Spark (in-memory and on-disk computation) based on the IoT Zetta platform. Apache Spark is used to index real-time geospatial data, continuously updating aircraft positions in real time while indexing them in the R-tree and its variants, so that spatial range queries can fetch real-time results; Apache Spark is also much faster than the MapReduce framework.
Extensive experimental results show that our parallel-generated R-tree and its variants retain properties similar to those of the sequentially generated versions, with excellent scalability, while significantly reducing the time for index construction, index updating, and spatial range query execution over geospatial data by exploiting the latest parallelism framework.
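The bulk-loading and range-query ideas behind R-tree indexing can be illustrated in miniature: pack sorted points into leaf bounding boxes, then prune leaves whose boxes miss the query rectangle. This is a deliberately simplified sketch, not the paper's Spark-based construction, and all data is made up:

```python
def pack_leaves(points, leaf_size=2):
    """Pack 2-D points into leaf MBRs after sorting by x
    (a simplified flavor of R-tree bulk loading)."""
    pts = sorted(points)
    leaves = []
    for i in range(0, len(pts), leaf_size):
        chunk = pts[i:i + leaf_size]
        xs = [p[0] for p in chunk]
        ys = [p[1] for p in chunk]
        # (min_x, min_y, max_x, max_y) bounding box plus its points
        leaves.append(((min(xs), min(ys), max(xs), max(ys)), chunk))
    return leaves

def range_query(leaves, rect):
    """Return points inside rect, visiting only overlapping leaves."""
    x1, y1, x2, y2 = rect
    hits = []
    for (mx1, my1, mx2, my2), chunk in leaves:
        if mx1 <= x2 and x1 <= mx2 and my1 <= y2 and y1 <= my2:
            hits += [p for p in chunk if x1 <= p[0] <= x2 and y1 <= p[1] <= y2]
    return hits

pts = [(1, 1), (2, 5), (6, 2), (7, 8), (3, 3), (9, 4)]
leaves = pack_leaves(pts)
found = range_query(leaves, (2, 1, 6, 5))
```

A parallel variant would build the leaves per partition (e.g. per Spark partition) and merge the resulting trees.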
       
  • Wi-Fi frequency selection concept for effective coverage in collapsed
           structures
     • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Muhammad Faizan Khan, Guojun Wang, Md Zakirul Alam Bhuiyan. There is a growing trend of research on leveraging Wi-Fi signals as sensors for various IoT applications. We observe that most studies deal with the behavior of wireless signals in regular indoor environments, but there is a lack of investigation of Wi-Fi signal behavior in complex structures such as collapsed buildings. This study investigates Wi-Fi signal penetration in collapsed structures, with two objectives: first, to identify the most common collapsed environments, and second, to select the best frequency from the available license-free ISM bands for better coverage. We achieve the first goal through a field survey of an earthquake-hit area, followed by a debris model composed of brick and concrete materials. To address the second objective, we employ two methods for selecting the best frequency. We first use a theoretical bijective soft-set approach to select the best available concept, and then introduce a modified path loss model, termed PL-Collapsed, to estimate the losses the wireless signal encounters under debris while providing the best possible coverage with the ISM bands available to Wi-Fi. We then compare the signal coverage of the Wi-Fi bands with each other to select the best frequency. Simulation results show that the newly introduced Wi-Fi HaLow, operating at sub-1 GHz, has relatively better coverage, which is also cross-verified by the theoretical bijective soft-set approach. This study motivates the application of wireless signals under collapsed structures where lives need to be rescued, and it opens new dimensions in the IoT, emphasizing more ubiquitous systems.
       
  • Software defined deduplicated replica management in scale-out storage
           systems
     • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Muthukumar Murugan, Krishna Kant, Ajaykrishna Raghavan, David H.C. Du. Given the unabated growth of storage in data centers, its energy footprint continues to grow, which makes storage energy management a crucial issue. Furthermore, with the emerging trend of underprovisioning power and cooling infrastructures in large facilities, it is important to flexibly adapt the entire infrastructure of a client, including its storage system, to changing energy limitations. In this paper, we present an energy-adaptive framework called flexStore that provides a flexible mechanism to specify and control energy vs. performance tradeoffs. These mechanisms are defined and enforced by a software layer called the "Policy Engine" that controls the number of active copies of deduplicated data chunks in storage containers based on energy availability. The mechanism synchronizes data chunks in inactive storage containers with those in the active containers in the background, so that they can be put into service quickly when needed. We evaluate flexStore with different workloads in a sample data center environment and demonstrate the effectiveness of its control mechanisms in adapting to performance and energy constraints.
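The deduplicated-chunk bookkeeping that such a policy engine manages can be sketched with fixed-size chunking and content fingerprints. The fixed `replicas` count stands in for flexStore's energy-driven active-copy count, which varies at runtime; chunk size and sample data are illustrative:

```python
import hashlib

def dedup_store(blobs, chunk_size=4, replicas=2):
    """Fixed-size chunking with SHA-256 fingerprints; each unique
    chunk is stored once with a desired replica count."""
    store = {}
    for blob in blobs:
        for i in range(0, len(blob), chunk_size):
            chunk = blob[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            # identical chunks collapse onto one entry
            store.setdefault(digest, {"data": chunk, "copies": replicas})
    return store

# two blobs sharing the chunks b"abcd" and b"xyz1"
store = dedup_store([b"abcdabcdxyz1", b"abcdxyz1"])
unique_chunks = len(store)
```

An energy-aware policy would then raise or lower `copies` per entry as the power budget changes.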
       
  • Machine-Learning based analysis and classification of Android malware
           signatures
     • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Ignacio Martín, José Alberto Hernández, Sergio de los Santos. Multi-scanner Antivirus (AV) systems are often used for detecting Android malware, since the same piece of software can be checked against multiple different AV engines. However, in many cases the same software application is flagged as malware by only a few AV engines, and the signatures provided often contradict each other, showing a clear lack of consensus between different AV engines. This work analyzes more than 80 thousand Android applications flagged as malware by at least one AV engine, with a total of almost 260 thousand malware signatures. In the analysis, we identify 41 different malware families and study their relationships and the relationships between the AV engines involved in such detections, showing that most malware cases belong to either Adware abuse or really dangerous Harmful applications, while some others remain unspecified (or Unknown). With the help of Machine Learning and graph community algorithms, we further combine the different AV detections to classify such Unknown apps into either Adware or Harmful risks, reaching an F1-score above 0.84.
       
  • A framework and a performance assessment for serverless MapReduce on AWS
           Lambda
     • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): V. Giménez-Alventosa, Germán Moltó, Miguel Caballer. MapReduce is one of the most widely used programming models for analysing large-scale datasets, i.e. Big Data. In recent years, serverless computing and, in particular, Functions as a Service (FaaS) has surged as an execution model in which no explicit management of servers (e.g. virtual machines) is performed by the user. Instead, the Cloud provider dynamically allocates resources to the function invocations, and fine-grained billing is introduced depending on the execution time and allocated memory, as exemplified by AWS Lambda. In this article, a high-performance serverless architecture has been created to execute MapReduce jobs on AWS Lambda using Amazon S3 as the storage backend. In addition, a thorough assessment has been carried out to study the suitability of AWS Lambda as a platform for the execution of High Throughput Computing jobs. The results indicate that AWS Lambda provides a convenient computing platform for general-purpose applications that fit within the constraints of the service (15 min of maximum execution time, 3008 MB of RAM and 512 MB of disk space), but it exhibits an inhomogeneous performance behaviour that may jeopardise adoption for tightly coupled computing jobs.
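The MapReduce model underlying the architecture is easy to sketch in miniature: each shard stands in for one function invocation's input split, with intermediate pairs shuffled by key before reduction. This is a generic word-count illustration, not the paper's Lambda/S3 implementation:

```python
from collections import defaultdict

def map_phase(shard):
    """Emit (key, value) pairs for one input split."""
    return [(word, 1) for word in shard.split()]

def shuffle(pairs):
    """Group intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Aggregate each key's values."""
    return {key: sum(values) for key, values in groups.items()}

shards = ["big data big", "data lambda"]  # one shard per (simulated) worker
pairs = [p for shard in shards for p in map_phase(shard)]
counts = reduce_phase(shuffle(pairs))
```

In a serverless deployment, each `map_phase` call would run in its own function invocation with S3 holding the intermediate pairs.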
       
  • EMA-RPL: Energy and mobility aware routing for the Internet of Mobile
           Things
     • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Maha Bouaziz, Abderrezak Rachedi, Abdelfettah Belghith, Marion Berbineau, Saad Al-Ahmadi. The Internet of Mobile Things (IoMT) is a new paradigm of the Internet of Things (IoT) in which devices are inherently mobile. While mobility enables innovative applications and allows new services, it remains a challenging issue, as it causes disconnection of nodes and intermittent connectivity, which negatively impacts network operation and performance. In addition, energy saving presents a real challenge for networks with limited resources. In this context, a new energy-efficient and mobility-aware routing protocol called EMA-RPL is proposed based on the well-known Routing Protocol for Low-power and Lossy Networks (the RPL standard). Unlike RPL, which is designed for static devices, EMA-RPL better sustains the connectivity of mobile nodes and conserves energy. The proposed protocol integrates an enhanced mobility detection method through continuous control of the distance between the mobile node and its point of attachment, a novel point-of-attachment prediction based on the new location of the mobile node, and an efficient replacement strategy preserving the mobile node's energy. EMA-RPL overcomes and mitigates problems caused by node mobility. Simulations using Cooja/Contiki show that EMA-RPL outperforms both RPL and its mobility-aware variant (MRPL) in terms of handover delay, data loss rate, signaling cost, and energy consumption.
       
  • State and runtime-aware scheduling in elastic stream computing systems
     • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Dawei Sun, Shang Gao, Xunyun Liu, Fengyun Li, Xinqi Zheng, Rajkumar Buyya. State and runtime-aware scheduling is one of the problems that is hard to resolve in elastic big data stream computing systems, as the state of each vertex differs and the arrival rate of data streams fluctuates over time. A state and runtime-aware scheduling framework should be able to dynamically adapt to fluctuations in the arrival rate of data streams and be aware of vertex states and resource availability. A growing body of research focuses on application scheduling in stream computing systems, yet this problem is still far from completely solved. In this paper, we focus on the state of vertices in applications and the runtime features of resources in a data center, and propose a state and runtime-aware scheduling framework (Sra-Stream) for elastic stream computing systems, which incorporates the following features: (1) profiling the mathematical relationship between system response time and the arrival rate of data streams, and identifying relevant resource constraints to meet the low response time and high throughput objectives; (2) classifying each vertex as stateless or stateful from a quantitative perspective, and achieving vertex parallelization by considering the state of the vertex; (3) providing a stream application scheduling scheme consisting of a modified first-fit based runtime-aware data tuple scheduling strategy at the initial stage, and a maximum latency-sensitive based runtime-aware data stream scheduling strategy at the online stage, taking the current scheduling status of the application into account; (4) evaluating the achievement of the low response time and high throughput objectives in a real-world elastic stream computing system.
Experimental results demonstrate that the proposed Sra-Stream provides significant performance improvements in achieving low system response time and high system throughput.
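The first-fit placement mentioned in feature (3) can be sketched as follows. This is plain first-fit over free capacity, a simplified stand-in for Sra-Stream's modified strategy, which also weighs runtime state; task and node names are hypothetical:

```python
def first_fit(tasks, capacities):
    """Place each task on the first node with enough free capacity."""
    free = dict(capacities)
    placement = {}
    for task, demand in tasks:
        for node, slack in free.items():
            if slack >= demand:
                placement[task] = node
                free[node] -= demand  # consume the node's capacity
                break
    return placement

tasks = [("t1", 4), ("t2", 3), ("t3", 5)]
plan = first_fit(tasks, {"n1": 6, "n2": 8})
```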
       
  • UAV-enabled healthcare architecture: Issues and challenges
     • Abstract: Publication date: August 2019. Source: Future Generation Computer Systems, Volume 97. Author(s): Sana Ullah, Ki-Il Kim, Kyong Hoon Kim, Muhammad Imran, Pervez Khan, Eduardo Tovar, Farman Ali. Unmanned Aerial Vehicles (UAVs) have great potential to revolutionize the future of the automotive, energy, and healthcare sectors by working as wireless relays to improve connectivity with ground networks. They are able to collect and process real-time information by connecting existing network infrastructures, including the Internet of Medical Things (e.g., Body Area Networks (BANs)) and the Internet of Vehicles, with clouds or remote servers. In this article, we advocate and promote the notion of employing UAVs as data collectors. To demonstrate the practicality of the idea, we propose a UAV-based architecture to communicate with BANs in a reliable and power-efficient manner. The proposed architecture adopts the concept of wake-up-radio-based communication between a UAV and multiple BANs. We analyze the performance of the proposed protocol in terms of throughput and delay by allocating different priorities to the hubs or gateways. The proposed architecture may be useful in remote or disaster areas, where BANs have poor or no access to conventional wireless communication infrastructure, and may even assist vehicular networks by monitoring drivers' physiological conditions through BANs. We further highlight open research issues and challenges that are important for developing efficient protocols for UAV-based data collection in smart healthcare systems.
       
  • Activity-based model for medium-sized cities considering external
           activity–travel: Enhancing FEATHERS framework
     • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Syed Fazal Abbas Baqueri, Muhammad Adnan, Bruno Kochan, Tom Bellemans. Travel demand modeling has evolved from the traditional four-step models to tour-based models, which eventually became the basis of the advanced Activity-Based Models (ABM). The added value of the ABM over others is its ability to test various policy scenarios by considering the complete activity–travel pattern of individuals living in the region. However, most ABMs restrict residents' activities to within the study area, which results in distorted travel patterns. External travel is modeled separately via external models, which are insensitive to the policy tests that an ABM is capable of analyzing. Consequently, to minimize external travel, transport planners tend to define a larger study area. This approach, however, requires huge resources, which has significantly deterred the worldwide penetration of ABM. To overcome these limitations, this study presents a framework to model residents' travel and activities outside the study area as part of the complete activity–travel schedule. This is realized by including the Catchment Area (CA), a region outside the study area, in the destination choice models. Within the destination choice models, a top-level model is introduced that specifies, for each activity, a destination inside or outside the study area. For activities to be performed inside the study area, detailed land use information is utilized to determine the exact location. For activities in the CA, another series of models is presented that uses land use information obtained from open-source platforms in order to minimize data collection efforts. These modifications are implemented in FEATHERS, an ABM operational for Flanders, Belgium, and the methodology is tested on three medium-sized regions within Flanders.
The results indicate improvements in the model outputs when defining medium-sized regions as study areas compared with defining a large study area. Furthermore, the Points of Interest (POI) density is also found to be significant in many cases. Lastly, a comprehensive validation framework is presented to compare the results of the ABM for the medium-sized regions against the ABM for Flanders. The validation includes the (dis)aggregate distribution of activities, trips, and tours in time, space, and structure (e.g., transport modes used and types of activities performed) through eleven measures. The results demonstrate similar distributions between the two ABMs (i.e., the ABM for medium-sized regions and the ABM for Flanders) and thus confirm the validity of the proposed methodology. This study shall therefore lead to the development of ABMs for medium-sized regions.
       
  • A sector-based random routing scheme for protecting the source location
           privacy in WSNs for the Internet of Things
     • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Yu He, Guangjie Han, Hao Wang, James Adu Ansere, Whenbo Zhang. With the development of the Internet of Things (IoT), Smart Data, which effectively supports the IoT in planning, operation, monitoring, control, and intelligent decision making, has received extensive interest from researchers. However, the security of the data source has not been entirely resolved. Wireless sensor networks (WSNs) are vital components of the IoT for event monitoring and information gathering. Recently, source location privacy (SLP) protection in WSNs has attracted attention as an approach to prevent adversaries from performing a backtracking strategy to capture data sources. In view of the characteristics of sensor nodes, namely their restricted computing power and energy resources, we propose a sector-based random routing (SRR) scheme to address the SLP problem and reduce energy consumption. In SRR, data packets are sent to random phantom sources located in different sectors and distributed in all directions before reaching the sink node. In addition, the concept of a hop threshold is presented to control the routing strategies and reduce energy consumption. Theoretical analysis and experimental results prove that the proposed protocol efficiently mitigates backtracking and direction attacks while keeping the balance between security and network lifetime.
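The sector-and-phantom selection at the heart of such a scheme can be sketched as drawing a random sector and a bounded phantom hop distance. All parameter names and ranges here are illustrative assumptions, not SRR's actual routing rules:

```python
import math
import random

def pick_phantom(num_sectors, min_hops, max_hops, rng=random):
    """Pick a random sector and a phantom hop distance within the
    hop-threshold bounds; the packet detours via this phantom
    before heading to the sink."""
    sector = rng.randrange(num_sectors)
    hops = rng.randint(min_hops, max_hops)
    # direction of the sector's mid-line, in radians
    angle = (sector + 0.5) * (2 * math.pi / num_sectors)
    return sector, hops, angle

sector, hops, angle = pick_phantom(8, min_hops=5, max_hops=12)
```

Because each packet chooses a fresh sector, successive transmissions fan out in all directions, which is what frustrates a backtracking adversary.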
       
  • Urban data management system: Towards Big Data analytics for Internet of
           Things based smart urban environment using customized Hadoop
     • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Muhammad Babar, Fahim Arif, Mian Ahmad Jan, Zhiyuan Tan, Fazlullah Khan. The continuous expansion of a versatile urban setup is challenged by huge Big Data processing. Understanding the voluminous data generated in a smart urban environment for decision making is a challenging task. Big Data analytics is performed to obtain useful insights about massive data, as existing conventional techniques are not suitable for gaining useful insight from such huge volumes. Big Data analytics has attracted significant attention in the context of large-scale data computation and processing. This paper presents a Hadoop-based architecture to deal with Big Data loading and processing. The proposed architecture is composed of two modules: Big Data loading and Big Data processing. The performance and efficiency of data loading are tested to propose a customized methodology for loading Big Data into a distributed processing platform, i.e., Hadoop. To examine data ingestion into Hadoop, data loading is performed and compared repeatedly under different loading decisions. The experimental results are recorded for various attributes, alongside manual and traditional data loading, to highlight the efficiency of our proposed solution. The processing, in turn, is achieved using the YARN cluster management framework with a specific customization of dynamic scheduling. In addition, the effectiveness of our proposed solution regarding processing and computation is highlighted in terms of throughput.
       
  • PAU: Privacy Assessment method with Uncertainty consideration for
           cloud-based vehicular networks
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Xia Feng, Liangmin Wang. With the rapid progress of wireless communication and big data, traditional Vehicular Ad-hoc Networks (VANETs) are gradually evolving into Heterogeneous Vehicular Networks (HetVNets). Meanwhile, the combination of multiple communication modes gives rise to the Vehicle-to-Everything (V2X) communication model, which provides more efficient services. V2X communication generates far more private data than traditional VANETs, which intensifies concerns over privacy breaches; how to protect privacy in these cloud-based vehicular networks remains unsolved. In this paper, we propose a Privacy Assessment method with Uncertainty consideration (PAU) to estimate the nodes’ capability of protecting privacy, and then choose the vehicular nodes with high priority, as calculated by PAU, to improve the whole network’s privacy protection level. PAU expands two-tuple subjective logic into a triad and keeps uncertainty as a constituent element. It evaluates the nodes by using historical data from the vehicular cloud and real-time data from V2V communications. The experiments and analysis show that privacy-preserving capability improves when PAU is applied in Mix-zone scenarios.
       
  • Bayesian network based weighted APT attack paths modeling in cloud
           computing
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Aaron Zimba, Hongsong Chen, Zhaoshun Wang. Security vulnerabilities exhibited in cloud computing components and technologies, not limited to hypervisors, virtual machines, and virtualization, present a major security concern. The primary challenge has been to characterize the interlinked attack paths generated by Advanced Persistent Threat (APT) attackers upon exploitation of vulnerabilities exhibited in cloud components. We propose a Bayesian network based weighted attack path modeling technique to model these attack paths. In our approach, we employ quantitative induction to express weighted attack paths. We chain marginal and conditional probabilities together to characterize multiple attack paths from the attack source to the target node, and thereby evaluate the likelihood of an APT occurring along a given path. Furthermore, we propose an optimized algorithm to find the shortest attack path from multiple sources based on key nodes and key edges. The algorithm not only finds the shortest path but also resolves any ties amongst paths of equal weight. We characterize the attack time expense of the APT attack by modeling the associated atomic attack events in a path as Poisson variables obeying the Erlang distribution. The attack time expense is classified into three levels: High, Medium, and Low. We use the WannaCry ransomware attack to evaluate our model.
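The chaining of marginal and conditional probabilities mentioned above can be illustrated in a few lines: a path's likelihood is the source's prior times the product of edge conditionals. This is a generic chain-rule sketch, not the paper's algorithm; the node names and probabilities below are invented for illustration, and ties are broken here simply toward fewer hops.

```python
def path_likelihood(prior, cond, path):
    """P(path) = P(v0) * prod P(v_i | v_{i-1}): chain rule along one attack path."""
    p = prior[path[0]]
    for a, b in zip(path, path[1:]):
        p *= cond[(a, b)]
    return p

def most_likely_path(prior, cond, paths):
    """Rank candidate attack paths; ties on likelihood break toward fewer hops."""
    return max(paths, key=lambda path: (path_likelihood(prior, cond, path), -len(path)))
```

For example, with priors P(A)=0.6, P(B)=0.4 and conditionals P(C|A)=0.5, P(T|C)=0.8, P(T|B)=0.9, the path A→C→T scores 0.24 while B→T scores 0.36, so the shorter path from B is ranked the more likely APT route.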
       
  • Privacy-preserving and sparsity-aware location-based prediction method for
           collaborative recommender systems
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Shunmei Meng, Lianyong Qi, Qianmu Li, Wenmin Lin, Xiaolong Xu, Shaohua Wan. With the rapid growth of public cloud offerings, designing effective prediction models that provide appropriate recommendations for potential users has become more and more important. In a dynamic cloud environment, both user behavior and service performance are sensitive to contextual information, such as geographic location. In addition, the increasing number of attacks and security threats raises the problem of how to protect critical information assets, such as sensitive data, cloud resources and communication, in a more effective and secure manner. In view of these challenges, we propose a privacy-preserving and sparsity-aware location-based prediction method for collaborative recommender systems. Specifically, our method is designed as a three-phase process. First, two privacy-preserving mechanisms, i.e., a randomized data obfuscation technique and a region aggregation strategy, are presented to protect the private information of users and deal with the data sparsity problem. Then a location-aware latent factor model based on tensor factorization is applied to explore the spatial similarity relationships between services. Finally, predictions are made based on both global and spatial nearest neighbors. Experiments are designed and conducted to validate the effectiveness of our proposal. The experimental results show that our method achieves decent prediction accuracy on the premise of privacy preservation.
       
  • LACO: Lightweight Three-Factor Authentication, Access Control and
           Ownership Transfer Scheme for E-Health Systems in IoT
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Seyed Farhad Aghili, Hamid Mala, Mohammad Shojafar, Pedro Peris-Lopez. The use of the Internet of Things (IoT) in electronic health (e-health) management systems brings with it many challenges, including secure communication through insecure radio channels, authentication and key agreement schemes between the entities involved, access control protocols, and schemes for transferring ownership of vital patient information. Moreover, the resource-limited sensors in the IoT make these goals difficult to achieve. Motivated by these considerations, in this work we propose a new lightweight authentication and ownership transfer protocol for e-health systems in the context of the IoT (LACO in short). The goal is to propose a secure and energy-efficient protocol that not only provides authentication and key agreement but also satisfies access control and preserves the privacy of doctors and patients. Moreover, this is the first time that the ownership transfer of users is considered. In the ownership transfer phase of the proposed scheme, the medical server can change the ownership of patient information. In addition, the LACO protocol overcomes the security flaws of recent authentication protocols that were proposed for e-health systems but are unfortunately vulnerable to traceability, de-synchronization, denial of service (DoS), and insider attacks. To avoid past mistakes, we present formal (i.e., conducted in the ProVerif language) and informal security analyses of the LACO protocol. All this ensures that our proposed scheme is secure against the most common attacks in IoT systems. Compared to its predecessors, the LACO protocol is both more efficient and more secure to use in e-health systems.
       
  • Feasibility analysis of Inter-Pulse Intervals based solutions for
           cryptographic token generation by two electrocardiogram sensors
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Lara Ortiz-Martin, Pablo Picazo-Sanchez, Pedro Peris-Lopez, Juan Tapiador, Gerardo Schneider. In this paper we address the problem of how two devices sensing the same heart signal can generate the same cryptographic token by extracting it from the Inter-Pulse Intervals (IPIs) of each cardiac signal. Our analysis is based on the use of a run-time monitor, which is extracted from a formal model and verified against predefined properties, combined with a fuzzy extractor to improve the final result. We first show that it is impossible, in general, to correct the differences between the IPIs derived from two captured electrocardiogram (ECG) signals using error correction techniques alone, so previous claims on the feasibility of this approach cannot be corroborated. Then, we provide a large-scale evaluation of the proposed method (run-time monitor and fuzzy extractor) over 19 public databases from the Physionet repository containing heart signals. The results clearly show the practicality of our proposal, achieving a 91% synchronization probability for healthy individuals. Additionally, we conduct an experiment to check how long the sensors should record the heart signal in order to generate tokens of 32, 64 and 128 bits. Contrary to what is usually assumed (6, 12, and 24 s for individuals with a heart rate of 80 beats-per-minute), the sensors have to wait 13, 28 and 56.5 s at the median, respectively, to derive the same token on both sensors.
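For intuition, IPI-based schemes typically keep only the low-order bits of each interval as entropy, concatenating them until the token is long enough, which is why longer tokens need longer recordings. The sketch below illustrates that common construction only; the 4-bits-per-IPI choice and peak timings are assumptions, and the paper's run-time monitor and fuzzy extractor are not modeled.

```python
def ipis_from_peaks(peak_times_ms):
    """Inter-pulse intervals are the gaps between consecutive R-peak times."""
    return [b - a for a, b in zip(peak_times_ms, peak_times_ms[1:])]

def token_from_ipis(ipis_ms, bits_per_ipi=4, token_bits=32):
    """Concatenate the low-order bits of each IPI (a common entropy source)
    until `token_bits` bits are collected; returns None if the signal is
    too short to yield enough bits."""
    out = 0
    got = 0
    for ipi in ipis_ms:
        out = (out << bits_per_ipi) | (int(ipi) & ((1 << bits_per_ipi) - 1))
        got += bits_per_ipi
        if got >= token_bits:
            return out & ((1 << token_bits) - 1)
    return None
```

At 4 bits per IPI, a 32-bit token needs 8 beats; small timing disagreements between the two sensors flip low-order bits, which is the mismatch the error-correction step must absorb.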
       
  • A self-managed Mesos cluster for data analytics with QoS guarantees
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Sergio López-Huguet, Alfonso Pérez, Amanda Calatrava, Carlos de Alfonso, Miguel Caballer, Germán Moltó, Ignacio Blanquer. This article describes the development of an automated configuration of a software platform for Data Analytics that supports horizontal and vertical elasticity to guarantee meeting a specific deadline. It specifies all the components, software dependencies and configurations required to build up the cluster, and analyses the deployment times of different instances, as well as the horizontal and vertical elasticity. The approach followed builds up self-managed hybrid clusters that can deal with different workloads and network requirements. The article describes the structure of the recipes, points to the public repositories where the code is available, and discusses the limitations of the approach as well as the results of several experiments.
       
  • Software defined network management for dynamic smart GRID traffic
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Mita Cokic, Ivan Seskar. One of the more challenging issues in Smart Grid (SG) communications is handling the ever-increasing number of new SG applications being provisioned by utility companies. These applications are resulting in an exponential increase in the amount of data that utility companies collect. Appropriate communication infrastructure and its management are vital for delivering this data and unlocking the full potential of the SG. Typically, these applications generate different types of data traffic that can be divided into multiple traffic classes with different QoS parameters (priority, throughput, latency, etc.). Traditionally, these classes are handled with static network configurations based on individual application policies. However, due to increasing network dynamism, the problem arises of how to adjust these configurations as traffic conditions change. In this paper, a software defined networking (SDN) based solution for distributed and dynamic Smart Grid network management is presented. The responsiveness of the proposed solution to the complex dynamics of Smart Grid communications is evaluated on a purpose-built evaluation platform for the following cases: (1) Automatic Generation Control (AGC) during peak load, (2) Volt/Var optimization (VVO) during peak load, (3) steady-state operation with static (background) traffic load, (4) stress-state under continuous background traffic overload, and (5) dynamic prioritization of traffic for data disaggregation. The presented solution provides significant benefits compared with traditional networking in the tested scenarios, including: over 70 times lower latency for the most time-sensitive traffic (AGC), 25% increased VVO system observability, and a 5% to 7% decrease in unprivileged traffic bandwidth consumption whenever privileged traffic QoS is threatened.
Additionally, it is shown that dynamic prioritization can provide requested QoS on demand as long as overall capacity is larger than the privileged traffic offered load.
       
  • BIGSEA: A Big Data analytics platform for public transportation
           information
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Andy S. Alic, Jussara Almeida, Giovanni Aloisio, Nazareno Andrade, Nuno Antunes, Danilo Ardagna, Rosa M. Badia, Tania Basso, Ignacio Blanquer, Tarciso Braz, Andrey Brito, Donatello Elia, Sandro Fiore, Dorgival Guedes, Marco Lattuada, Daniele Lezzi, Matheus Maciel, Wagner Meira, Demetrio Mestre, Regina Moraes. Analysis of public transportation data in large cities is a challenging problem. Managing data ingestion, data storage, data quality enhancement, modelling and analysis requires intensive computing and a non-trivial amount of resources. In EUBra-BIGSEA (Europe–Brazil Collaboration of Big Data Scientific Research Through Cloud-Centric Applications) we address such problems in a comprehensive and integrated way. EUBra-BIGSEA provides a platform for building up data analytic workflows on top of elastic cloud services without requiring skills related to either programming or cloud services. The approach combines cloud orchestration, Quality of Service and automatic parallelisation on a platform that includes a toolbox for implementing privacy guarantees and data quality enhancement as well as advanced services for sentiment analysis, traffic jam estimation and trip recommendation based on estimated crowdedness. All developments are available under Open Source licenses (http://github.org/eubr-bigsea, https://hub.docker.com/u/eubrabigsea/).
       
  • A machine learning-based FinTech cyber threat attribution framework using
           high-level indicators of compromise
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Umara Noor, Zahid Anwar, Tehmina Amjad, Kim-Kwang Raymond Choo. Cyber threat attribution identifies the source of a malicious cyber activity, which in turn informs cyber security mitigation responses and strategies. Such responses and strategies are crucial for deterring future attacks, particularly in the financial and critical infrastructure sectors. However, existing approaches generally rely on manual analysis of attack indicators obtained through approaches such as trace-back, firewalls, intrusion detection and honeypot deployments. These attack indicators, also known as low-level Indicators of Compromise (IOCs), are rarely re-used and can be easily modified and disguised, resulting in a deceptive and biased cyber threat attribution. Cyber attackers, particularly financially-motivated actors, can use common high-level attack patterns that evolve less frequently than the low-level IOCs. To attribute cyber threats effectively, it is necessary to identify them based on the high-level adversary's attack patterns (e.g. tactics, techniques and procedures - TTPs, software tools and malware) employed in different phases of the cyber kill chain. Identification of high-level attack patterns is time-consuming, requiring forensic investigation of the victim network(s) and other resources. In the rare case that attack patterns are reported in cyber threat intelligence (CTI) reports, the format is textual and unstructured, typically taking the form of lengthy incident reports prepared for human consumption (e.g. prepared for C-level and senior management executives), which cannot be directly interpreted by machines. Thus, in this paper we propose a framework to automate cyber threat attribution. 
Specifically, we profile cyber threat actors (CTAs) based on their attack patterns extracted from CTI reports, using the distributional semantics technique of Natural Language Processing. Using these profiles, we train and test five machine learning classifiers on 327 CTI reports collected from publicly available incident reports that cover events from May 2012 to February 2018. It is observed that the CTA profiles obtained attribute cyber threats with a high precision (i.e. 83% as compared to other publicly available CTA profiles, where the precision is 33%). The Deep Learning Neural Network (DLNN) based classifier also attributes cyber threats with a higher accuracy (i.e. 94% as compared to other classifiers).
       
  • An energy-efficient, QoS-aware and cost-effective scheduling approach for
           real-time workflow applications in cloud computing systems utilizing DVFS
           and approximate computations
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Georgios L. Stavrinides, Helen D. Karatza. Green cloud computing attracts significant attention from both academia and industry. One of the major challenges involved is to provide a high level of Quality of Service (QoS) in a cost-effective way for the end users and in an energy-efficient manner for the cloud providers. Towards this direction, this paper presents an energy-efficient, QoS-aware and cost-effective scheduling strategy for real-time workflow applications in cloud computing systems. The proposed approach utilizes per-core Dynamic Voltage and Frequency Scaling (DVFS) on the underlying heterogeneous multi-core processors, as well as approximate computations, in order to fill in schedule gaps. At the same time, it takes into account the effects of input error on the processing time of the component tasks. Our goal is to provide timeliness and energy efficiency by trading off result precision, while keeping the result quality of the completed jobs at an acceptable standard and the monetary cost required for the execution of the jobs at a reasonable level. The proposed scheduling heuristic is compared to two other baseline policies, under the impact of various QoS requirements. The simulation experiments reveal that our approach outperforms the other examined policies, providing promising results.
       
  • An energy efficient IoT data compression approach for edge machine
           learning
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Joseph Azar, Abdallah Makhoul, Mahmoud Barhamgi, Raphaël Couturier. Many IoT systems generate a huge and varied amount of data that need to be processed and responded to in a very short time. One of the major challenges is the high energy consumption due to the transmission of data to the cloud. Edge computing allows the workload to be offloaded from the cloud to a location closer to the source of the data to be processed, saving time, improving privacy, and reducing network traffic. In this paper, we propose an energy-efficient approach for IoT data collection and analysis. First, we apply a fast error-bounded lossy compressor on the collected data prior to transmission, since transmission is considered the greatest consumer of energy in an IoT device. In the second phase, we rebuild the transmitted data on an edge node and process it using supervised machine learning techniques. To validate our approach, we consider the context of driving behavior monitoring in intelligent vehicle systems, where vital signs data are collected from the driver using a Wireless Body Sensor Network (WBSN) and wearable devices and sent to an edge node for stress level detection. The experimental results show that the amount of transmitted data is reduced by up to 103 times without affecting the quality of the medical data or the accuracy of driver stress level prediction.
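An error-bounded lossy compressor guarantees that every reconstructed sample differs from the original by at most a user-set bound. The simplest scheme with this property is uniform scalar quantization, sketched below; the paper uses a more sophisticated compressor, and this toy only illustrates the guarantee itself.

```python
def compress(samples, error_bound):
    """Uniform scalar quantization with step 2*error_bound: rounding to the
    nearest step keeps every reconstruction within `error_bound` of the input."""
    step = 2.0 * error_bound
    return [round(x / step) for x in samples]

def decompress(codes, error_bound):
    """Map each integer code back to the center of its quantization bin."""
    step = 2.0 * error_bound
    return [q * step for q in codes]
```

The small integer codes produced by `compress` are far cheaper to transmit (and to entropy-code further) than raw floating-point samples, which is where the radio-energy saving comes from.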
       
  • Designing collaborative blockchained signature-based intrusion detection
           in IoT environments
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Wenjuan Li, Steven Tug, Weizhi Meng, Yu Wang. With the rapid development of the Internet-of-Things (IoT), there is an increasing demand for securing IoT environments. For this purpose, intrusion detection systems (IDSs) are among the most important security mechanisms, helping defend computer networks, including the IoT, against various threats. To achieve better detection performance, collaborative intrusion detection systems or networks (CIDSs or CIDNs) are often adopted in practical scenarios, allowing a set of IDS nodes to exchange required information with each other, e.g., alarms and signatures. However, due to their distributed nature, such collaborative networks are vulnerable to insider attacks: malicious nodes can generate untruthful signatures and share them with normal peers. This may cause intruders to go undetected and greatly degrade the effectiveness of IDSs. Blockchain technology provides a way to verify shared signatures (rules). In this work, our motivation is to develop CBSigIDS, a generic framework for collaborative blockchained signature-based IDSs, which can incrementally build and update a trusted signature database in a collaborative IoT environment. CBSigIDS provides verifiability in distributed architectures without the need for a trusted intermediary. In the evaluation, our results demonstrate that CBSigIDS can enhance the robustness and effectiveness of signature-based IDSs under adversarial scenarios.
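The core property of a blockchained signature database, that later blocks commit to earlier ones so tampering with any shared rule is detectable, can be sketched with a plain hash chain. This is a toy stand-in for CBSigIDS; its consensus protocol and trust evaluation are not shown, and the block layout here is an assumption.

```python
import hashlib
import json

def make_block(prev_hash, signatures):
    """Append a batch of detection signatures; the block hash commits to both
    the (sorted) rules and the previous block's hash."""
    body = json.dumps({"prev": prev_hash, "sigs": sorted(signatures)}).encode()
    return {"prev": prev_hash, "sigs": sorted(signatures),
            "hash": hashlib.sha256(body).hexdigest()}

def verify_chain(chain):
    """Recompute every block hash and check the prev-pointers; changing any
    shared signature rule invalidates that block and all of its successors."""
    prev = "0" * 64
    for blk in chain:
        body = json.dumps({"prev": blk["prev"], "sigs": blk["sigs"]}).encode()
        if blk["prev"] != prev or blk["hash"] != hashlib.sha256(body).hexdigest():
            return False
        prev = blk["hash"]
    return True
```

A node receiving the chain can thus verify the whole signature database locally, without trusting the peer that forwarded it.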
       
  • PTAS: Privacy-preserving Thin-client Authentication Scheme in
           blockchain-based PKI
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Wenbo Jiang, Hongwei Li, Guowen Xu, Mi Wen, Guishan Dong, Xiaodong Lin. Recent years have witnessed tremendous academic effort and industry growth in the Internet of Things (IoT), and the security issues of the IoT have become increasingly prominent. Public Key Infrastructure (PKI) can provide authentication services to IoT devices, a crucial element of IoT security. However, conventional PKIs are organized in a tree-like centralized structure which has demonstrated serious usability and security shortcomings, such as a single point of failure. Blockchain has numerous desirable properties, such as its decentralized nature, cryptographic technology and unalterable transaction records; these properties make it a potential tool for building a decentralized blockchain-based PKI. Nevertheless, the latest proposals for blockchain-based PKIs do not take thin clients into consideration, where thin clients are users who cannot download the entire blockchain due to the limited storage capacity of their equipment (most IoT devices fall into this category). To address this problem, we first present a Privacy-preserving Thin-client Authentication Scheme (PTAS) employing the idea of private information retrieval (PIR), which enables thin clients to operate like full-node users while protecting their privacy. Furthermore, in order to enhance security, we propose a (m-1)-private PTAS, meaning that a thin client’s information is protected against a collusion of at most (m-1) full-node users. Security analysis and functional comparison are performed to demonstrate the high security and comprehensive functionality of our schemes. Finally, extensive experiments are conducted to compare the computational and communication overheads of PTAS and (m-1)-private PTAS.
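The (m-1)-privacy notion can be illustrated with the classic XOR-based multi-server PIR construction: the selection vector for the wanted bit is split into m random shares, so any m-1 colluding servers see only uniform randomness, yet XORing all m answers recovers the queried bit. This generic construction sketches the underlying PIR idea only; it is not the PTAS protocol, and the bit-packed database encoding is an assumption.

```python
import secrets

def pir_query(db_size, index, m):
    """Split the selection vector e_index into m masks whose XOR is e_index;
    any m-1 of the masks are jointly uniform, so m-1 colluders learn nothing."""
    shares = [secrets.randbits(db_size) for _ in range(m - 1)]
    last = 1 << index
    for s in shares:
        last ^= s
    return shares + [last]

def pir_answer(db_bits, mask):
    """Each server XORs together the database bits its mask selects."""
    ans, i = 0, 0
    while mask:
        if mask & 1:
            ans ^= (db_bits >> i) & 1
        mask >>= 1
        i += 1
    return ans

def pir_reconstruct(answers):
    """XOR of all server answers equals the bit at the queried index."""
    out = 0
    for a in answers:
        out ^= a
    return out
```

Correctness follows because XOR-ing the per-server inner products over GF(2) is the inner product with the XOR of the masks, i.e. with e_index itself.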
       
  • An analytic computation-driven algorithm for Decentralized Multicore
           Systems
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Yezhi Lin, Xinyuan Jin, Jiuqiang Chen, Ali Hassan Sodhro, Zhifang Pan. In the modern era, increasing numbers of cores per chip are applied to decentralized systems, but there is no appropriate symbolic computation approach for constructing multicore analytic approximations. It is therefore essential to develop an efficient, simple and unified way to decentralize the Adomian decomposition method so as to exploit the potential speed of multicore systems. In this paper, we present an innovative parallel algorithm for constructing analytic solutions of nonlinear differential systems, based on the Adomian–Rach double decomposition method and Rach’s Adomian polynomials. Based on this algorithm, we further developed a user-friendly Python software package to construct analytic approximations of initial or boundary value problems. Finally, the scope of validity of our Python software package is illustrated by several different types of nonlinear examples. The results demonstrate the effectiveness of our package by comparison with exact solutions and numerical methods, the characteristics of each class of Adomian polynomials, and the efficiency of the parallel algorithm on multicore processors. We emphasize that super-linear speedup may occur during the construction of approximate solutions. The package can thus be considered a promising decentralized Adomian decomposition method for solving nonlinear problems in science and engineering.
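For readers unfamiliar with the method, Adomian decomposition writes the solution as a series whose components are obtained recursively by integration. The sketch below runs the recursion for the linear IVP y' = y, y(0) = 1, where each component is the integral of the previous one, recovering the Taylor coefficients of e^t. Rach's Adomian polynomials for nonlinear terms, and the paper's parallelization, are beyond this toy.

```python
from fractions import Fraction

def integrate(poly):
    """Term-by-term integration from 0 to t of a polynomial given as
    coefficients [c0, c1, ...] of t^k."""
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(poly)]

def adomian_linear(y0, n_terms):
    """Decomposition series for y' = y, y(0) = y0: the recursion
    y_{k+1}(t) = integral_0^t y_k(s) ds gives y_k = y0 * t^k / k!."""
    comp = [Fraction(y0)]
    series = [comp]
    for _ in range(n_terms - 1):
        comp = integrate(comp)
        series.append(comp)
    # sum the components into one truncated power series
    out = [Fraction(0)] * n_terms
    for c in series:
        for k, v in enumerate(c):
            out[k] += v
    return out
```

Because each component depends only on its predecessor, batches of integrations (and, in the nonlinear case, Adomian polynomial evaluations) can be distributed across cores, which is the structure the paper's parallel algorithm exploits.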
       
  • An edge computing-enabled computation offloading method with privacy
           preservation for internet of connected vehicles
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Xiaolong Xu, Yuan Xue, Lianyong Qi, Yuan Yuan, Xuyun Zhang, Tariq Umer, Shaohua Wan. The Internet of connected vehicles (IoV) is employed to collect real-time traffic conditions for transportation control systems, and computing tasks can be offloaded from the vehicles to edge computing devices (ECDs) for execution. Despite the numerous benefits of IoV and ECDs, the wireless communication used for computation offloading increases the risk of privacy leakage, which may lead to tracking, identity tampering and virtual vehicle hijacking. Therefore, it remains a challenge to avoid privacy conflicts when offloading computation to the ECDs in IoV. To address this challenge, an edge computing-enabled computation offloading method with privacy preservation for IoV, named ECO, is proposed in this paper. Technically, the privacy conflicts of the computing tasks in IoV are analyzed in a formalized way. Then, vehicle-to-vehicle (V2V) communication-based routing is designed to acquire the routing vehicles from the origin vehicle, where the computing task is located, to the destination vehicle. NSGA-II (non-dominated sorting genetic algorithm II) is adopted for multi-objective optimization, reducing the execution time and energy consumption of the ECDs while preventing privacy conflicts of the computing tasks. Finally, experimental evaluations are conducted to validate the efficiency and effectiveness of ECO.
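NSGA-II's central primitive is non-dominated sorting: candidate offloading plans are ranked by Pareto dominance over the objectives (here, execution time and energy, for example). A minimal sketch of the first-front computation, assuming minimization over objective tuples; the genetic operators, crowding distance, and ECO's privacy constraints are omitted.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_front(points):
    """First Pareto front: solutions not dominated by any other solution.
    NSGA-II applies this ranking repeatedly to stratify the population."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

For example, with (time, energy) candidates [(1, 5), (2, 3), (3, 1), (4, 4), (2, 6)], the first front is [(1, 5), (2, 3), (3, 1)]: the other two points are each beaten on both objectives by some front member.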
       
  • Reducing the price of resource provisioning using EC2 spot instances with
           prediction models
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Javier Fabra, Joaquín Ezpeleta, Pedro Álvarez. The increasing demand for computing resources has boosted the use of cloud computing providers. This has raised a new dimension in which the connections between resource usage and costs have to be considered from an organizational perspective. As part of its EC2 service, Amazon introduced spot instances (SIs) as a cheap public infrastructure, but at the price of not ensuring the reliability of the service. Under the Amazon SI model, hired instances can be abruptly terminated by the service provider when necessary. The interface for managing SIs is based on a bidding strategy that depends on non-public Amazon pricing strategies, which makes it complicated for users to apply any scheduling or resource provisioning strategy based on such (cheaper) resources. Although it is believed that the use of the EC2 SI infrastructure can reduce costs for final users, a review of the literature shows that its characteristics and possibilities have not yet been deeply explored. In this work we present a framework for the analysis of the EC2 SI infrastructure that uses the price history of such resources to classify the SI availability zones and then generate price prediction models adapted to each class. The proposed models are validated through a formal experimentation process. These models are then applied to generate resource provisioning plans that obtain the optimal price when using the SI infrastructure in a real scenario. Finally, the recent changes that Amazon has introduced in the SI model, and how this work can adapt to them, are discussed.
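A toy version of price-history-based provisioning: predict the next spot price for an availability zone and bid a safety margin above it, capped at the on-demand price beyond which spot capacity stops making economic sense. The moving-average predictor, the margin, and the window size are placeholder assumptions; the paper builds per-class prediction models rather than this naive baseline.

```python
def predict_next_price(history, window=24):
    """Naive per-zone predictor: mean of the last `window` spot prices."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def suggest_bid(history, on_demand_price, margin=1.15, window=24):
    """Bid a safety margin above the prediction, capped at the on-demand
    price (paying more than on-demand defeats the purpose of spot)."""
    return min(predict_next_price(history, window) * margin, on_demand_price)
```

A provisioning plan would run this per availability-zone class, placing workloads where the predicted price (and the risk of out-of-bid termination) is lowest.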
       
  • SoC-based computing infrastructures for scientific applications and
           commercial services: Performance and economic evaluations
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Daniele D’Agostino, Alfonso Quarati, Andrea Clematis, Lucia Morganti, Elena Corni, Valentina Giansanti, Daniele Cesini, Ivan Merelli. Energy consumption is by now one of the most relevant issues in operating computing infrastructures, from traditional High Performance Computing centers to cloud data centers. Low-power System-on-Chip (SoC) architectures, originally developed in the context of mobile and embedded technologies, are becoming attractive for scientific and industrial applications as well, given their increasing computing performance coupled with relatively low costs and power demands. In this paper, we investigate the performance of the most representative SoCs on a computationally intensive N-body benchmark, a simple deep learning based application, and a real-life application taken from the field of molecular biology. The goal is to assess the trade-off among time-to-solution, energy-to-solution and economic aspects that they achieve, for both scientific and commercial purposes, in comparison with traditional server-grade architectures adopted in present infrastructures.
       
  • Editorial for the FGCS special issue on “Benchmarking big data
            systems”
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Sherif Sakr, Albert Zomaya, Athanasios V. Vasilakos. Even though several big data processing and analytics systems with various design architectures have been introduced, we still lack a deep understanding of the performance characteristics of these architectures, as well as comprehensive benchmarks for the various Big Data platforms. There is a crucial need for fundamental research providing more comprehensive performance evaluations of the various Big Data processing systems and architectures. We also lack validation tools, standard benchmarks, and system performance prediction methods that could give us a deeper and more solid understanding of the strengths and weaknesses of the various Big Data processing platforms. This special issue is dedicated to original results and achievements by active researchers, designers, and developers working on various issues and challenges related to big data research.
       
  • Evolving graph construction for successive recommendation in event-based
           social networks
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Shenghao Liu, Bang Wang, Minghua Xu, Laurence T. Yang. Personalized recommendation can help individual users quickly reserve events of interest, which makes it indispensable in event-based social networks (EBSNs). However, as an EBSN often contains a large number of entities and each upcoming event is normally unique and non-repetitive, dealing with these challenges is crucial to the success of event recommendation. In this paper, we propose an evolving graph-based successive recommendation (EGSR) algorithm to address these challenges. The basic idea is to exploit random walk with restart (RWR) on a recommendation graph to rank the upcoming events. In EGSR, we employ a sliding window mechanism to construct evolving graphs for successively recommending new events to each user, and propose a graph entropy-based contribution measure for adjusting the window length and weighting the history information. EGSR also applies a topic analysis technique to event text descriptions; we then build an interest model for each user and compute the similarities between event content and user interest as edge weights for each recommendation graph. In successive recommendation, the number of upcoming events may vary greatly over time, so for a fair comparison we also propose a set of cumulative evaluation metrics based on the traditional recommendation performance metrics. Experiments have been conducted on one year of data crawled from a real EBSN for two cities. The results validate the superiority of the proposed EGSR algorithm over its peers in terms of better recommendation performance and reduced computation complexity.
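The ranking step EGSR builds on, random walk with restart, iterates p ← (1-α)·Wᵀp + α·e until it stabilizes; nodes reached often by short walks from the restart node score highest. A small sketch on an unweighted user/event graph; the topic-similarity edge weights and the evolving sliding-window construction are omitted, and the example graph is invented.

```python
def rwr(adj, restart_node, alpha=0.15, iters=100):
    """Random walk with restart by power iteration: at each step, mass
    flows uniformly along out-edges with probability (1 - alpha) and
    teleports back to the restart node with probability alpha."""
    nodes = sorted(adj)
    p = {n: 0.0 for n in nodes}
    p[restart_node] = 1.0
    for _ in range(iters):
        nxt = {n: (alpha if n == restart_node else 0.0) for n in nodes}
        for u in nodes:
            nbrs = adj[u]
            if not nbrs:
                continue  # dangling nodes would leak mass in this sketch
            share = (1 - alpha) * p[u] / len(nbrs)
            for v in nbrs:
                nxt[v] += share
        p = nxt
    return p
```

In a recommender graph the restart node is the target user, and upcoming-event nodes are ranked by their stationary scores.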
       
  • Intelligent distributed routing scheme based on social similarity for
           mobile social networks
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Fang Xu, Qiong Xu, Zenggang Xiong, Nan Xiao, Yong Xie, Min Deng, Huibing Hao. In mobile social networks (MSNs), the social attributes of nodes are important factors driving node mobility. By studying the mobility arising from the daily activities of node carriers, we propose an intelligent distributed routing algorithm based on the prediction of social context information. First, we study the data forwarding problem in mobile social networks from two aspects: the daily behavior of mobile nodes and the similarity of their social attributes. Then, our algorithm uses a BP neural network to predict the encounter regularity of mobile nodes in both the time and space dimensions; this information provides a basis for routing decisions. Finally, a routing algorithm with predictive capability is designed by combining synchronous and asynchronous delivery. Simulation analysis and experimental results show that the proposed routing algorithm can effectively improve the message delivery ratio and reduce network overhead.
       
  • One secure data integrity verification scheme for cloud storage
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Yongkai Fan, Xiaodong Lin, Gang Tan, Yuqing Zhang, Wei Dong, Jing Lei. Cloud computing is a form of information technology in which users enjoy a variety of cloud services drawn from shared configurable computing resources. Compared with traditional local storage, cloud storage is a more economical choice because the remote data center takes over data management and maintenance on behalf of users, saving them time and money. However, entrusting data to an unknown Cloud Service Provider (CSP) makes data integrity a potential vulnerability. To solve this problem, we propose a secure identity-based aggregate signature scheme (SIBAS) for data integrity checking, which uses a Trusted Execution Environment (TEE) as the auditor to check the outsourced data on the local side. SIBAS not only checks the integrity of outsourced data but also achieves secure key management in the TEE through Shamir’s (t,n) threshold scheme. A security analysis in the random oracle model under the computational Diffie–Hellman assumption shows that SIBAS can resist an adversary that adaptively chooses its messages and target identities; experimental results further show that our solution is viable and efficient in practice.
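The key-management building block named in the abstract, Shamir's (t,n) threshold scheme, splits a secret so that any t of n shares reconstruct it while fewer reveal nothing. A minimal sketch over a prime field; the prime and the parameters below are illustrative, not those used by SIBAS:

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime, large enough for demo secrets

def split_secret(secret, t, n, rng=random):
    """Shamir (t, n) sharing: evaluate a random degree t-1 polynomial
    with constant term `secret` at x = 1..n."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME-2, PRIME) is the modular inverse (Fermat)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Any t of the n shares recover the key; in the paper's setting the shares would be held inside the TEE.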
       
  • Energy-efficient crypto acceleration with HW/SW co-design for HTTPS
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Chunhua Xiao, Lei Zhang, Weichen Liu, Neil Bergmann, Yuhua Xie. The arrival of the Big Data era has driven the rapid development of web applications that provide high-performance, security-sensitive access to large cloud data centers. HTTPS has been widely deployed as an extension of HTTP, adding an encryption layer based on the SSL/TLS protocol for secure communication over the Internet. To accelerate the complex crypto computations involved, both dedicated acceleration instruction sets and hardware accelerators are adopted. However, energy consumption has been ignored in the rush for performance, even though energy efficiency has become a key challenge given the increasing demands for both performance and energy saving in data centers. In this paper, we present EECA, an Energy-Efficient Crypto Acceleration system for HTTPS with OpenSSL. It provides highly energy-efficient encryption through HW/SW co-design; the essential idea is to make full use of system resources so as to exploit the respective strengths of the different crypto acceleration approaches. Experimental results show that, when performing only crypto computations with the typical encryption algorithm AES-256-CBC, EECA improves PPW (Performance per Watt) by up to 1637.13%, 84.82%, and 966.23% compared with pure software encryption, instruction-set acceleration, and a hardware accelerator, respectively. Considering the whole end-to-end HTTPS workflow based on OpenSSL with the cipher suite ECDHE-RSA-AES256-SHA384, EECA still improves energy efficiency by up to 422.26%, 40.14%, and 96.05% compared with a Web server using software, instruction-set, and hardware accelerators, respectively.
       
  • Incorporating URL embedding into ensemble clustering to detect web
           anomalies
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Bo Li, Guiqin Yuan, Li Shen, Ruoyi Zhang, Yiyang Yao. Web anomaly detection aims to find deviations from the behaviour a system exhibits most of the time. With the growth of the Internet, detecting web-based anomalies has become vital for Internet security. Clustering based on manually extracted features has been verified as an effective way to detect new anomalies, but such feature representations cannot express the semantic information of URLs. In addition, few studies try to cluster anomalies into specific types such as SQL injection. To solve these two problems, we provide a weighted deep-learning-enabled subspace spectral ensemble clustering approach for web anomaly detection, called WDL-SSEC. The approach has three steps. First, an ensemble clustering model separates anomalies from normal samples. Then we use word2vec to obtain semantic representations of tokens and concatenate weighted token vectors to form URL vectors. Finally, another ensemble clustering step based on subspace and locally adaptive clustering (LAC) groups the anomalies into specific types. Our approach is evaluated on a real-life data set and achieves better performance than existing approaches, demonstrating that the model can cluster anomalies into appropriate types.
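The second step, turning a URL into a vector from weighted token embeddings, can be sketched as below. WDL-SSEC obtains the embeddings from word2vec and derives the weights from its model; here both are hypothetical stand-in dictionaries, and the tokenizer is a naive split on URL delimiters:

```python
import numpy as np

def url_vector(url, embeddings, weights, dim=4):
    """Build a URL vector as the weighted mean of its token embeddings.

    embeddings: token -> vector (stand-in for word2vec output);
    weights: token -> importance weight. Unknown tokens get a zero
    vector and weight 1.0.
    """
    # Naive tokenization: treat '/', '?' and '=' as separators.
    tokens = [t for t in url.replace('?', '/').replace('=', '/').split('/') if t]
    vecs, ws = [], []
    for t in tokens:
        vecs.append(embeddings.get(t, np.zeros(dim)))
        ws.append(weights.get(t, 1.0))
    if not vecs:
        return np.zeros(dim)
    return np.average(np.array(vecs), axis=0, weights=ws)
```

The resulting vectors would then feed the subspace/LAC ensemble clustering step.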
       
  • Task migration for mobile edge computing using deep reinforcement learning
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Cheng Zhang, Zixuan Zheng. Mobile edge computing (MEC) is a network architecture that places computing capability and storage resources at the edge of the network in a distributed manner, rather than in a centralized cloud. Users’ computation tasks can be offloaded to nearby MEC servers to achieve a high-quality computation experience. Since the users of many applications, such as autonomous driving, are highly mobile, the MEC server originally hosting the offloaded tasks may end up far from its users. A key challenge in MEC is therefore to decide where and when tasks should be migrated in response to user mobility. Existing works formulate this as a sequential decision-making problem and solve it with a Markov decision process (MDP), assuming that the users’ mobility pattern is known in advance; in practice, however, this pattern is difficult to obtain beforehand. In this paper, we propose a deep Q-network (DQN)-based technique for task migration in an MEC system. It learns the optimal task-migration policy from past experience without requiring prior knowledge of users’ mobility patterns. The proposed task-migration algorithm is validated through extensive simulations of the MEC system.
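The stay-or-migrate trade-off the paper learns with a DQN can be illustrated with a tabular Q-learning toy. Everything here is invented for illustration: states are a discretized user-to-server distance, actions are stay/migrate, staying costs latency that grows with distance, and migrating pays a fixed one-off cost but resets the distance. The paper's DQN replaces the table with a neural network over a richer state:

```python
import random

def train_migration_policy(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning toy for the migrate-vs-stay decision.

    Q[s][a]: s = distance 0..4 to the serving MEC node,
             a = 0 (stay) or 1 (migrate to the nearest node).
    """
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(5)]
    for _ in range(episodes):
        s = rng.randrange(5)
        for _ in range(20):
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: Q[s][x])
            if a == 1:                           # migrate: fixed cost, distance resets
                r, s2 = -2.0, 0
            else:                                # stay: latency cost grows with distance
                r, s2 = -float(s), min(s + 1, 4)  # the user keeps moving away
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

The learned policy stays while the user is close and migrates once the latency cost outweighs the migration cost.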
       
  • DC coefficient recovery for JPEG images in ubiquitous communication
           systems
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Han Qiu, Gerard Memmi, Xuan Chen, Jian Xiong. With the development of data-based technology, data transmission methods for ubiquitous computing and communication systems are increasingly needed. Various data processing techniques, such as compression and coding, improve transmission efficiency for heterogeneous networks with different requirements. However, methods for improving fault tolerance in ubiquitous communication systems are still lacking, especially for multimedia-data-driven applications at the receiving end. In this paper, we propose a content recovery method for JPEG images that reconstructs image content by estimating the DC coefficients without any prior knowledge. The method can also be used to transmit a rough version of the image content while reducing the amount of data to be transmitted, thereby providing fault tolerance at the receivers’ end in ubiquitous communication systems. Analysis of the results on different images, compared with previous works, demonstrates the effectiveness of our method.
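The idea behind DC-coefficient recovery can be sketched with the classic boundary-smoothness heuristic: a block's missing DC offset is chosen so that its left edge lines up with the known right edge of the neighboring block. This is a simplified stand-in; the paper's estimator is more elaborate:

```python
import numpy as np

def estimate_dc(left_block, ac_block):
    """Estimate the missing DC offset of a block from its left neighbor.

    left_block: fully known 8x8 pixel block.
    ac_block:   current 8x8 block reconstructed from AC coefficients
                only (i.e., with DC assumed zero).
    Returns the offset to add to every pixel of ac_block so that the
    shared boundary is as smooth as possible (mean-matching heuristic).
    """
    return left_block[:, -1].mean() - ac_block[:, 0].mean()
```

Applying the estimate to each block in scan order propagates a plausible brightness level across the image even when all DC coefficients are lost.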
       
  • Intelligent conditional collaborative private data sharing
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Giuseppe Bianchi, Tooska Dargahi, Alberto Caponi, Mauro Conti. With the advent of distributed systems, secure and privacy-preserving data sharing between different entities (individuals or organizations) has become a challenging issue. In several real-world scenarios, entities are willing to share their private data only under certain circumstances, such as sharing system logs when there are indications of a cyber attack, in order to provide cyber threat intelligence. Over the past few years, several researchers have therefore proposed solutions for collaborative data sharing, mostly based on existing cryptographic algorithms. However, the existing approaches are not appropriate for conditional data sharing, i.e., sharing the data if and only if a pre-defined condition is satisfied due to the occurrence of an event. Moreover, if the existing solutions are used in conditional data-sharing scenarios, the shared secret is revealed to all parties and a re-keying process becomes necessary. In this work, to address these challenges, we propose a “conditional collaborative private data sharing” protocol based on Identity-Based Encryption and Threshold Secret Sharing schemes. In our approach, the condition under which the encrypted data is revealed to the collaborating parties (or to a central entity) can be of two types: (i) a threshold, or (ii) a pre-defined policy. Supported by thorough analytical and experimental analysis, we show the effectiveness and performance of our proposal.
       
  • A reliable adaptive forwarding approach in named data networking
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Zeinab Rezaeifar, Jian Wang, Heekuck Oh, Suk-Bok Lee, Junbeom Hur. Named Data Networking (NDN) is a new paradigm for the future Internet infrastructure based on routable named data. The NDN infrastructure includes a new component called the strategy layer, which dynamically selects network interfaces for forwarding Interest messages toward a provider, taking into account network conditions such as delay. However, defining proper criteria for selecting the best paths for Interest messages is challenging, because the relevant parameters and conditions can conflict with one another. Moreover, in NDN, data can be retrieved from different sources, yet, to the best of our knowledge, previous forwarding strategies that estimate from which path valid data can be fetched have not considered an attacker who injects fake data under the same name as valid data. In this paper, we therefore take a holistic, adaptive forwarding approach built on several metrics: bandwidth, load, delay, and reliability. In particular, we propose a reliability metric that identifies which path is more stable and reliable for retrieving legitimate data. Our evaluation demonstrates that the proposed method enables reliable message delivery in the presence of attackers injecting invalid data, while introducing only marginal delay compared with conventional NDN forwarding methods.
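A multi-metric forwarding decision of this kind can be sketched as a weighted ranking over per-interface ("face") metrics. The min-max normalization and the weights below are illustrative choices, not the paper's calibrated reliability metric:

```python
def rank_interfaces(faces, w_bw=0.3, w_load=0.2, w_delay=0.2, w_rel=0.3):
    """Rank candidate forwarding faces by a weighted score.

    Each face is a dict with 'bandwidth' (higher is better), 'load' and
    'delay' (lower is better), and 'reliability' in [0, 1] (higher is
    better). Returns face indices, best first.
    """
    def norm(vals, invert=False):
        # Min-max normalize to [0, 1]; invert for lower-is-better metrics.
        lo, hi = min(vals), max(vals)
        if hi == lo:
            return [1.0] * len(vals)
        scaled = [(v - lo) / (hi - lo) for v in vals]
        return [1 - s for s in scaled] if invert else scaled

    bw = norm([f['bandwidth'] for f in faces])
    ld = norm([f['load'] for f in faces], invert=True)
    dl = norm([f['delay'] for f in faces], invert=True)
    rel = [f['reliability'] for f in faces]
    scores = [w_bw * b + w_load * l + w_delay * d + w_rel * r
              for b, l, d, r in zip(bw, ld, dl, rel)]
    return sorted(range(len(faces)), key=lambda i: -scores[i])
```

In the paper's scheme the reliability term would additionally penalize faces through which invalid data was previously fetched.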
       
  • Quality of Service (QoS)-driven resource provisioning for large-scale
           graph processing in cloud computing environments: Graph
           Processing-as-a-Service (GPaaS)
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Safiollah Heidari, Rajkumar Buyya. Large-scale graph data is generated every day by applications and services such as social networks, the Internet of Things (IoT) and mobile applications. Traditional processing approaches such as MapReduce are inefficient for graph datasets, and to overcome this limitation several dedicated graph processing frameworks have been developed since 2010. However, despite the broad accessibility of the cloud computing paradigm and its useful features, namely elasticity and the pay-as-you-go pricing model, most frameworks are designed for high-performance computing (HPC) infrastructure. The few graph processing systems developed for cloud environments, like their counterparts, focus on improving performance through new computation or communication techniques. In this paper, we introduce, for the first time, large-scale graph processing-as-a-service (GPaaS). GPaaS takes service level agreement (SLA) requirements and quality of service (QoS) into account when provisioning an appropriate combination of resources, so as to minimize the monetary cost of the operation. It also reduces execution time by up to 10%–15% compared with other graph processing frameworks such as Giraph. We show that our service significantly reduces monetary cost, by more than 40% compared with Giraph and other frameworks such as PowerGraph.
       
  • Page-sharing-based virtual machine packing with multi-resource constraints
           to reduce network traffic in migration for clouds
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Huixi Li, Wenjun Li, Shigeng Zhang, Haodong Wang, Yi Pan, Jianxin Wang. Virtual machine (VM) packing plays an important role in improving resource utilization in cloud data centers. Recently, memory content similarity among VM instances has been used to speed up the migration of multiple VMs in large clouds. Many VM packing algorithms have been proposed on this basis, but they consider only the memory capacity of the physical machines (PMs) as the resource constraint. In practice the results of such algorithms may be infeasible, because they may violate the constraints on other resources (e.g., the CPUs of the PMs). Besides, the granularity of memory sharing in existing studies is very coarse, so they cannot fully exploit memory content similarity, which mainly appears at the memory page level. In this paper, we study page-sharing-based VM packing under multiple resource constraints: given a set of VM instances that share a large number of common memory pages, we pack them into the minimum number of PMs subject to the constraints on the PMs’ multiple resources. We solve this problem in two steps. First, we consider packing the maximum number of VMs into a single given PM and propose an approximation algorithm whose approximation ratio improves on the existing algorithm. Then, based on this approximation algorithm, we propose a heuristic algorithm for the general problem. Experimental results show that our heuristic outperforms existing approaches, requiring up to 25% fewer PMs and transferring up to 40% fewer memory pages.
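The effect of page sharing on packing can be illustrated with a simple first-fit greedy sketch (not the paper's approximation algorithm): a PM's memory use is the size of the union of its VMs' page sets, so co-locating VMs that share pages is cheaper than their sum:

```python
def pack_vms(vms, pm_mem_pages, pm_cpu):
    """First-fit, page-sharing-aware VM packing sketch.

    vms: list of (cpu_demand, set_of_page_ids).
    pm_mem_pages / pm_cpu: per-PM capacities.
    Returns the number of PMs used.
    """
    pms = []  # each PM: {'cpu': used cpu, 'pages': union of resident pages}
    for cpu, pages in vms:
        placed = False
        for pm in pms:
            new_pages = pm['pages'] | set(pages)
            # Both constraints must hold: CPU sum and deduplicated memory.
            if pm['cpu'] + cpu <= pm_cpu and len(new_pages) <= pm_mem_pages:
                pm['cpu'] += cpu
                pm['pages'] = new_pages
                placed = True
                break
        if not placed:
            pms.append({'cpu': cpu, 'pages': set(pages)})
    return len(pms)
```

Two VMs whose page sets overlap heavily fit on one PM where two unrelated VMs would not; ignoring the CPU constraint, as earlier algorithms did, can place more VMs than the PM can actually run.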
       
  • MCA-V2I: A Multi-hop Clustering Approach over Vehicle-to-Internet
           communication for improving VANETs performances
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Oussama Senouci, Zibouda Aliouat, Saad Harous. The Internet of Vehicles is a new Intelligent Transportation System paradigm and a promising way to improve the performance of conventional Vehicular Ad-hoc NETworks (VANETs). It has received a great deal of attention from researchers in recent years, and several control mechanisms have been proposed to confront the challenges of these networks, such as the dynamic topology caused by the high mobility of vehicles and the scalability problem caused by the large number of connected vehicles. Clustering, an important mechanism in VANETs, has significantly improved performance in numerous applications. In this regard, the present work proposes a new Multi-hop Clustering Approach over Vehicle-to-Internet communication, called MCA-V2I, to improve VANET performance. MCA-V2I rests on the reasonable assumption that a vehicle can connect to the Internet via a special infrastructure component called a Road Side Unit Gateway. Once connected to the Internet, each vehicle can obtain and share the information about its multi-hop neighbors needed for the clustering process. Clustering is performed with a breadth-first search (BFS) graph traversal driven by a Mobility Rate that is calculated from mobility metrics. MCA-V2I strengthens cluster stability by selecting a Slave Cluster Head in addition to the Master Cluster Head. We evaluate the performance of the proposed scheme using the NS-2 network simulator and the VanetMobiSim integrated environment.
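The BFS traversal at the core of the clustering step can be sketched as follows; the Mobility-Rate-based head selection is omitted, and the graph is a plain adjacency dict standing in for the neighbor information vehicles share over the gateway:

```python
from collections import deque

def bfs_cluster(graph, head, max_hops):
    """Grow a cluster around a chosen cluster head by BFS, limited to
    max_hops hops from the head.

    graph: adjacency dict {vehicle: [neighbors]}.
    Returns {vehicle: hop_distance_from_head} for cluster members.
    """
    cluster = {head: 0}
    queue = deque([head])
    while queue:
        v = queue.popleft()
        if cluster[v] == max_hops:   # do not expand past the hop limit
            continue
        for nbr in graph.get(v, []):
            if nbr not in cluster:
                cluster[nbr] = cluster[v] + 1
                queue.append(nbr)
    return cluster
```

Running this from each elected Master Cluster Head partitions the vehicles into multi-hop clusters of bounded diameter.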
       
  • Missing data resilient decision-making for healthcare IoT through
           personalization: A case study on maternal health
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Iman Azimi, Tapio Pahikkala, Amir M. Rahmani, Hannakaisa Niela-Vilén, Anna Axelin, Pasi Liljeberg. Remote health monitoring is an effective way to track at-risk patients outside conventional clinical settings, providing early detection of diseases and preventive care while diminishing healthcare costs. Internet-of-Things (IoT) technology facilitates the development of such monitoring systems, although significant challenges remain in real-world trials. Missing data is a prevalent issue in these systems, as data acquisition may be interrupted from time to time in long-term monitoring scenarios; the resulting inconsistent and incomplete data can lead to failures in decision making. Although the analysis of missing data has been tackled in several studies, existing techniques are inadequate for real-time health monitoring because they neglect the variability of the missing data. This matters particularly when vital signs are missed, since they depend on factors such as physical activity and the surrounding environment. A holistic approach to handling missing data in real-time health monitoring systems is therefore required, one that considers a wide range of parameters while minimizing the bias of the estimates. In this paper, we propose a personalized, missing-data-resilient decision-making approach that delivers health decisions 24/7 despite missing values. The approach leverages the various data resources in IoT-based systems to impute missing values and provide acceptable results. We validate it in a real human-subject trial on maternity health, in which 20 pregnant women were remotely monitored for 7 months; in this setup, a real-time health application estimates maternal health status from maternal heart rate. Compared with existing methods, the proposed approach yields more accurate estimates, especially when the missing window is large.
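A toy version of the imputation step can be sketched as below: interior gaps in a heart-rate stream are filled by linear interpolation, and gaps at the edges (where no bracketing samples exist) fall back to a personalized baseline. This is a simplified stand-in; the paper's personalized approach also draws on activity and other contextual data:

```python
import numpy as np

def impute_heart_rate(series, personal_baseline):
    """Fill NaN gaps in a heart-rate stream.

    Interior gaps: linear interpolation between known samples.
    Leading/trailing gaps: the user's personal baseline value.
    """
    x = np.asarray(series, dtype=float)
    idx = np.arange(len(x))
    known = ~np.isnan(x)
    if not known.any():
        return np.full(len(x), float(personal_baseline))
    filled = np.interp(idx, idx[known], x[known])
    # np.interp clamps to the edge values outside the known range;
    # use the personalized baseline there instead.
    first, last = idx[known][0], idx[known][-1]
    filled[:first] = personal_baseline
    filled[last + 1:] = personal_baseline
    return filled
```

Downstream decision logic can then run on the gap-free stream instead of failing on missing samples.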
       
  • EATDDS: Energy-aware middleware for wireless sensor and actuator networks
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Anas Al-Roubaiey, Tarek Sheltami, Ashraf Mahmoud, Ansar Yasar. According to the Object Management Group, Data Distribution Service (DDS) middleware is the leading technology for the Industrial Internet of Things (IIoT). Using DDS-based middleware for Wireless Sensor/Actuator Networks (WSANs) would therefore greatly ease the development and integration of WSAN applications into the IIoT, improving productivity and reducing cost. However, applying this technology over WSANs significantly affects energy consumption. In this work, an energy-aware middleware for WSANs, called EATDDS, is developed based on the DDS standard. Developing this middleware also led to a major enhancement of the TOSSIM simulator: an Online Energy Model (OEM) that makes TOSSIM capable of developing and testing energy-aware protocols. The model is validated by comparison against POWERTOSSIM. Our results show that EATDDS is efficient and can be accommodated within limited system resources.
       
  • Probabilistic normed load monitoring in large scale distributed systems
           using mobile agents
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Moazam Ali, Susmit Bagchi. Due to rapid advancements in computing platforms, there is tremendous growth in heterogeneous distributed systems involving mobile devices. In distributed systems, achieving good performance and efficient utilization of shared resources depends on appropriate load monitoring mechanisms. In large-scale distributed systems, load monitoring is a challenging task: done poorly, it increases response time and degrades system performance. In this paper, we develop and implement a mobile agent-based load monitoring system intended for large-scale distributed systems. Mobile agents are an attractive basis for such monitoring because they are autonomous, goal-driven, reactive to environmental changes, and lightweight. We present a detailed software architecture for the mobile agent-based load monitoring system, whose design rests on a probabilistic normed estimation model and the corresponding monitoring algorithms. The experimental evaluations and scalability analysis presented in this paper illustrate the behavior of the agents and the performance of the system under different load conditions. Moreover, a detailed qualitative and quantitative analysis of various mobile agent models is presented.
       
  • A tale of three systems: Case studies on the application of architectural
           tactics for cyber-foraging
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Grace Lewis, Patricia Lago, Sebastián Echeverría, Pieter Simoens. Cyber-foraging is a technique that lets mobile devices extend their computing power and storage by offloading computation or data to more powerful servers located in the cloud or in single-hop proximity. In previous work, we developed a set of reusable architectural tactics for cyber-foraging systems, where an architectural tactic is a design decision that influences the achievement of a system quality. In this article we present the results of three case studies that validate whether applying the tactics promotes their intended functional and non-functional requirements. The first two case studies focus on identifying architectural tactics in existing cyber-foraging systems; the third focuses on developing a new cyber-foraging system using the tactics. The results are an initial demonstration of the validity of the tactics, and of the potential of a tactics-driven approach to fulfilling functional and non-functional requirements for cyber-foraging systems.
       
  • A semantic-based correlation approach for detecting hybrid and low-level
           APTs
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Amir Mohammadzade Lajevardi, Morteza Amini. Sophisticated and targeted malware, known today as Advanced Persistent Threats (APTs), uses multi-step, distributed, hybrid and low-level patterns to leak and exfiltrate information, manipulate data, or block the progress of a program or mission. Because current intrusion detection systems (IDSs) and alert correlation systems do not correlate low-level operating system events with network events, and use alert correlation instead of event correlation, intruders use low-level and hybrid events to distribute the attack vector, hide the malware’s behavior, and thereby evade such detection systems. In this paper, a new approach for detecting the hybrid and low-level attacks prevalent in APTs is proposed. The approach uses low-level interception and correlates operating system events with network events based on the semantic relationships defined between the entities in a system ontology. In this scheme, malicious events, especially those that implicitly violate the security policies, are deduced and detected from the event relations and the defined security policies. The approach can also track information flows between subjects using a memory transition/manipulation model in order to reconstruct distributed attack vectors. Evaluation of the proposed approach on a computer network containing many APT scenarios shows the effectiveness of our detection approach.
       
  • Competition in data-based service provision: Nash equilibrium
           characterization
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Luis Guijarro, Vicent Pla, Jose R. Vidal, Maurizio Naldi. This paper analyzes the economic feasibility of a market for data-based services, a scenario envisioned in sectors adopting Internet of Things and/or Big Data technologies. A business model for the provision of a data-based service is proposed, in which service providers build services from data collected and sold by data providers in a market, and offer these services to final users. Service providers compete strategically against each other in both the data market and the service market. Their strategic interaction is analyzed by means of non-cooperative game theory: a thorough mathematical analysis of the Nash equilibria is carried out, and existence and uniqueness conditions are proved and discussed. We conclude that if the users’ sensitivity to the data-rate-to-price ratio is above unity and the number of service providers does not exceed a given limit, a unique and meaningful equilibrium exists. We also conclude that the intensity of competition is beneficial for the users and detrimental for the service providers, while the data providers obtain maximum surplus at an intermediate intensity of competition. Finally, an increase in the users’ sensitivity to the data-rate-to-price ratio shifts surplus from both the service providers and the data providers to the users.
       
  • DEED: Dynamic Energy-Efficient Data offloading for IoT applications under
           unstable channel conditions
    • Abstract: Publication date: July 2019. Source: Future Generation Computer Systems, Volume 96. Author(s): Hui Yan, Xiongtao Zhang, Huangke Chen, Yun Zhou, Weidong Bao, Laurence T. Yang. With the widespread use of Internet of Things (IoT) applications, fast response and efficient data storage have become the main concerns of service users and providers, and data offloading has become a hot topic in both industry and academia, especially for real-time applications. A great number of in-depth studies have pursued efficient data offloading; nevertheless, few have taken unstable channel conditions into account, although doing so is more realistic and deserves more attention. In this paper, we incorporate the unstable channel state into the communication model and, on that basis, propose a task reliability model, an energy consumption model, and a device reliability model. From the perspective of optimizing energy consumption, we formulate an optimal task scheduling model and propose an innovative Dynamic Energy-Efficient Data offloading scheduling algorithm, DEED, whose purpose is to reduce energy consumption as much as possible while ensuring task reliability. To verify the effectiveness of DEED, extensive experiments compare it with three baseline algorithms: DRSD, DEPD, and DRPD. The experimental results under different channel conditions demonstrate the superiority of DEED in terms of energy saving, reliability, and robustness.
       
  • CloudWave: Content gathering network with flying clouds
    • Abstract: Publication date: Available online 21 March 2019. Source: Future Generation Computer Systems. Author(s): Roaxana Gabriela Stan, Catalin Negru, Florin Pop. Achieving high streaming quality requires combining the benefits of cloud computing with the verticals of a content delivery network into a robust, reliable, flexible and fault-tolerant system. This paper presents a solution for the acquisition, processing and Internet-enabled distribution of multimedia content. The objective is an elastic content gathering network that handles media files delivered, for instance, by unmanned aerial vehicles, and serves them on demand to end users scattered across the globe. We have implemented the framework architecture and the system components with their attached responsibilities and capabilities, and evaluated the performance through extensive simulations. The framework has been validated for correctness: experimental results show that the system behaves properly when handling both types of requests, storing massive incoming data sets and distributing content through multiple servers strategically placed near the locations where requests originate. Furthermore, in the resulting autonomous and scalable network, the number of required surrogate servers adjusts dynamically so that multimedia services are consumed in a cost-efficient manner.
       
  • LPCMsim: A Lightweight Phase Change Memory Simulator
    • Abstract: Publication date: Available online 21 March 2019. Source: Future Generation Computer Systems. Author(s): Zaihong He, Jishun Kuang, Yanjie Tan, Shihui Peng, Huailiang Tan. Phase Change Memory (PCM) is a widely researched non-volatile, byte-addressable memory with the potential to replace DRAM as main memory. PCM has the disadvantage that too many writes wear out its cells; wear-leveling and error correction are two methods for improving PCM lifetime. However, conducting such research on real PCM devices is time-consuming and expensive, and state-of-the-art simulators do not directly support simulating PCM wear-leveling and error correction algorithms. The community therefore needs a simulator that models wear-leveling and error correction schemes for PCM. To fill this gap, this paper presents a lightweight PCM simulator named LPCMsim, which focuses on simulating wear-leveling and error correction and omits functions that are unessential for PCM (such as cache and network modeling). To speed up simulation, LPCMsim simulates memory accesses directly when simulating wear-leveling, and simulates error correction based on the normal distribution of cell lifetime. To simplify the integration of different algorithms, LPCMsim separates the framework from the simulated algorithms, which are implemented in separate modules connected by an interface. LPCMsim lets designers of new algorithms validate their schemes through the exported APIs, and performance optimizations for some common algorithms are already integrated. The experimental results show that LPCMsim is an efficient, lightweight PCM simulator.
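The kind of wear-leveling algorithm such a simulator hosts can be illustrated with the classic Start-Gap scheme (Qureshi et al.): n physical frames hold n-1 logical lines plus one empty "gap" frame that rotates, so every line slowly migrates across all frames and write traffic is spread out. The real hardware tracks the mapping with two registers; this sketch keeps an explicit table for clarity, and LPCMsim's built-in schemes may differ:

```python
class StartGap:
    """Start-Gap wear-leveling sketch with an explicit line->frame table."""

    def __init__(self, n_frames):
        self.n = n_frames
        self.gap = n_frames - 1                    # frame currently empty
        self.frame_of = list(range(n_frames - 1))  # logical line -> frame

    def translate(self, logical):
        """Map a logical line (0 .. n_frames-2) to its physical frame."""
        return self.frame_of[logical]

    def move_gap(self):
        """Shift the line cyclically 'before' the gap into the gap.

        Called periodically (e.g., every psi writes); after n_frames
        moves, every line has advanced one frame.
        """
        prev = (self.gap - 1) % self.n
        for line, frame in enumerate(self.frame_of):
            if frame == prev:
                self.frame_of[line] = self.gap
                break
        self.gap = prev
```

A wear-leveling module plugged into a simulator would call `translate` on every access and `move_gap` on a write-count trigger, while the framework tallies per-frame write counts.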
       
  • Security situation assessment for massive MIMO systems for 5G
           communications
    • Abstract: Publication date: Available online 20 March 2019. Source: Future Generation Computer Systems. Author(s): Shiqi Yang, Di Yin, Xia Song, Xiaoming Dong, Gunasekaran Manogaran, George Mastorakis, Constandinos X. Mavromoustakis, Jordi Mongay Batalla. MIMO technology is a key technology of 5G, widely used in next-generation scenarios such as heterogeneous networks, millimeter-wave networks, and automotive networks. How to build a security situation assessment model for massive MIMO systems in 5G has become a main topic of current concern. This paper analyzes the situational awareness theory of 5G-oriented MIMO system security. First, based on MIMO system theory, the influence of MIMO systems on 5G network security, and situation awareness technology, a security situation awareness system model for 5G-oriented massive MIMO systems is constructed. In the security situation assessment part, evidential reasoning rules are used to weight the attribute data associated with two kinds of attack behavior, active attacks and passive eavesdropping; the weighted data, such as average power, large-scale attenuation coefficient, outage probability, and secrecy gain, are then used as model inputs to assess the situation level of the MIMO system. In the security situation prediction part, taking the actual situation level of the system as the criterion, 30 sets of experimental simulation data are used to predict the situation level of the MIMO system at the next moment. To verify the stability and validity of the model, MATLAB is used for the simulation experiments. The results show that the root mean square error (RMSE) over different iterations stays below 0.02, and the TSQ values of both sides stay below 0.16. The overall prediction effect of the model is good, indicating that the system can provide effective decision support for 5G MIMO security situation prediction.
       
  • Extension of MIH for FPMIPv6 (EMIH-FPMIPv6) to Support Optimized
           Heterogeneous Handover
    • Abstract: Publication date: Available online 20 March 2019. Source: Future Generation Computer Systems. Author(s): Jianfeng Guan, Vishal Sharma, Ilsun You, Mohammad Atiquzzaman, Muhammad Imran. Fast handover for Proxy Mobile IPv6 (FPMIPv6) can optimize handover performance compared with PMIPv6 in terms of handover delay and packet loss. However, FPMIPv6 cannot handle heterogeneous handovers due to the lack of a unified Layer 2 triggering mechanism. Media Independent Handover (MIH) can provide heterogeneous handover support, and many MIH-based integration solutions have been proposed. However, most of these solutions are based on the integration of MIH and PMIPv6, and require additional mechanisms such as L2 scanning, a handover coordinator, or neighbor discovery, which are outside the scope of MIH and difficult to standardize. Furthermore, the direct integration of MIH and FPMIPv6 causes redundant signaling cost due to overlapping functions, such as the MIH handover commit procedure in MIH and the inter-MAG (Mobility Access Gateway) tunnel setup in FPMIPv6. This paper provides a comprehensive survey of these solutions, compares each solution’s functionality and characteristics, and then proposes an integration scheme based on an Extension of MIH for FPMIPv6 (EMIH-FPMIPv6) to support optimized heterogeneous handover, which extends the existing MIH standard and reduces the redundant message exchanges caused by combining FPMIPv6 and MIH. This paper adopts the city section mobility model and a heterogeneous network model to analyze and compare the performance of EMIH-FPMIPv6 under different heterogeneous handover scenarios. The analytical results show that EMIH-FPMIPv6 reduces handover delay and signaling cost compared to the solution specified in the MIH standard (denoted the standard handover solution) and FPMIPv6.
       
  • Software-defined object storage in multi-tenant environments
    • Abstract: Publication date: Available online 19 March 2019. Source: Future Generation Computer Systems. Author(s): Raúl Gracia-Tinedo, Josep Sampé, Gerard París, Marc Sánchez-Artigas, Pedro García-López, Yosef Moatti. Object stores are becoming pervasive due to their scalability and simplicity in managing data growth. Their rampant adoption, however, contrasts with their scant flexibility to support multi-tenancy. Very often, this results in a deficient adaptation of the system to the heterogeneous tenants’ demands and to the multiple applications sharing the same object storage infrastructure. In this manuscript, we present Crystal, the first Software-Defined Storage (SDS) architecture whose core objective is to efficiently support multi-tenancy in object stores. Abiding by the principles of software-defined systems, Crystal separates the storage control logic from the underlying storage nodes, proxies and caches, promoting the (logical) centralization of storage control and the ability to program the storage layer. For storage control, Crystal enables high-level policies at the tenant, container, and object granularities. It also translates these policies into a set of distributed controllers that can orchestrate computations at the storage layer based on real-time workload information. We demonstrate Crystal through three use cases on top of OpenStack Swift: one that proves its storage automation capabilities; one that exploits the pushdown of lambda functions on data streams to improve analytics data ingestion; and another that differentiates IO bandwidth in a multi-tenant scenario. We show that Crystal is an extensible platform for deploying new SDS services for object stores with small overhead.
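The tenant/container/object policy granularities the abstract mentions can be sketched as a resolution problem: when several policies match a request, the most specific one should win. The schema below (`scope`, `action`, `value` fields) is invented for illustration and is not Crystal's actual policy language or API.

```python
def resolve_policies(policies, tenant, container=None, obj=None):
    """Toy resolver for tenant/container/object-granularity policies:
    among all policies matching the request target, the most specific
    one wins per action. (Simplified, assumed model of the idea.)"""
    specificity = {"tenant": 0, "container": 1, "object": 2}
    chosen = {}
    for p in policies:
        # A policy matches if every target field it sets agrees with the request.
        if p.get("tenant") not in (None, tenant):
            continue
        if p.get("container") not in (None, container):
            continue
        if p.get("object") not in (None, obj):
            continue
        action = p["action"]
        if (action not in chosen
                or specificity[p["scope"]] > specificity[chosen[action]["scope"]]):
            chosen[action] = p
    return {a: p["value"] for a, p in chosen.items()}
```

For example, a container-level bandwidth policy would override a tenant-wide default for requests touching that container, while other containers keep the tenant-wide value.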
       
  • Accelerating in-memory transaction processing using general purpose
           graphics processing units
    • Abstract: Publication date: Available online 19 March 2019. Source: Future Generation Computer Systems. Author(s): Lan Gao, Yunlong Xu, Rui Wang, Hailong Yang, Zhongzhi Luan, Depei Qian. High throughput is critical for on-line transaction processing (OLTP) applications with large numbers of users. With massively parallel processing units and high memory bandwidth, GPUs are suitable for accelerating OLTP transactions. However, it is challenging to implement transaction execution on GPUs, due to (1) the branch divergence caused by the single instruction multiple threads (SIMT) execution paradigm, and (2) the lack of fine-grained synchronization mechanisms and pointer-based dynamic data structures in the GPU ecosystem. In this paper, we present GPU-TPS, a high-performance in-memory transaction processing system on GPUs to accelerate OLTP applications. First, we propose a transaction execution model to improve GPU hardware utilization and perform synchronization among transactions. Second, we optimize the indexing data structures that are used extensively in OLTP systems (i.e., the hash table for unordered stores and the B+tree for ordered stores) for fast storage on GPUs. To evaluate GPU-TPS, we apply it to two popular OLTP workloads (SmallBank and TPC-C), and compare it with the state-of-the-art hardware-transactional-memory-based CPU OLTP system (DrTM) and a GPU OLTP system (GPUTx). The experimental results show that GPU-TPS outperforms the CPU implementation by 3.8X for SmallBank and by 1.9X for TPC-C, and outperforms the GPU implementation by 1.6X for SmallBank and by 1.8X for TPC-C.
       
  • PASCAL: An architecture for proactive auto-scaling of distributed services
    • Abstract: Publication date: Available online 19 March 2019. Source: Future Generation Computer Systems. Author(s): Federico Lombardi, Andrea Muti, Leonardo Aniello, Roberto Baldoni, Silvia Bonomi, Leonardo Querzoni. One of the main characteristics that makes cloud services so popular today is their elasticity, i.e., they can adapt their provisioning to variable workloads, thus increasing resource utilization and reducing operating costs. At the core of any elastic service lies an automatic scaling mechanism that drives provisioning on the basis of a given strategy. In this paper we propose PASCAL, an architecture for Proactive Auto-SCALing of generic distributed services. PASCAL combines a proactive approach, to forecast incoming workloads, with a profiling system, to estimate the required provisioning. Scale-in/out operations are decided according to an application-specific strategy, which aims at provisioning the minimum number of resources needed to sustain the foreseen workload. The main novelties introduced with the PASCAL architecture are: (i) a strategy to proactively auto-scale a distributed stream processing system (namely, Apache Storm), with the aim of load balancing operators through an accurate system performance estimation model, and (ii) a strategy to proactively auto-scale a distributed datastore (namely, Apache Cassandra), focused on choosing when to execute scaling actions on the basis of the time needed for the activation/deactivation of storage nodes, so that the configuration is ready when needed. We provide a prototype implementation of PASCAL for both use cases and, through an experimental evaluation conducted on a private cloud, we validate our approach and demonstrate the effectiveness of the proposed strategies in terms of saved resources and response time.
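The core proactive loop the abstract describes, forecast the incoming workload, then size the cluster from a profiled per-node capacity, can be sketched in a few lines. The naive one-step linear extrapolation below stands in for PASCAL's forecasting model, which the abstract does not detail; the function names and parameters are illustrative assumptions.

```python
import math

def forecast_next(history):
    """Naive one-step workload forecast via linear extrapolation
    (PASCAL would use a richer model; this just illustrates the loop)."""
    if len(history) < 2:
        return float(history[-1])
    return max(0.0, history[-1] + (history[-1] - history[-2]))

def plan_provisioning(history, per_node_capacity, min_nodes=1):
    """Return the minimum node count able to sustain the forecast workload,
    as estimated from a profiled per-node capacity."""
    demand = forecast_next(history)
    return max(min_nodes, math.ceil(demand / per_node_capacity))
```

Because provisioning is decided on the *forecast* rather than the current load, scale-out can be triggered early enough to cover node activation time, which is precisely the point of the Cassandra strategy described above.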
       
  • A novel binary artificial bee colony algorithm
    • Abstract: Publication date: Available online 19 March 2019. Source: Future Generation Computer Systems. Author(s): Clodomir J. Santana, Mariana Macedo, Hugo Siqueira, Anu Gokhale, Carmelo J.A. Bastos-Filho. This paper presents a novel artificial bee colony algorithm for binary optimization in general. Our proposal, named NBABC, features a mechanism that limits the number of dimensions that can be changed in the employed and onlooker bees’ phases. We compare NBABC to five other binary variants of the continuous ABC, including the state-of-the-art versions for binary optimization, and to four other well-known methods. We employ different types of tasks to assess the performance of all the algorithms, including the OneMax problem, five variations of the 0/1 Knapsack problem and Feature Selection on eight public datasets. The experiments show that the new proposal obtains competitive results, and in some cases outperforms not only the binary-based ABCs but also the other binary swarm-based and evolutionary-based optimizers.
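The key mechanism named in the abstract, bounding how many dimensions a bee may flip per move, can be illustrated on OneMax. The sketch below is a heavily simplified, assumed reading of a binary ABC (employed and onlooker phases collapsed into one greedy pass, plus a scout phase); it is not the NBABC algorithm itself.

```python
import random

def onemax(bits):
    """Fitness: number of ones (the toy benchmark named in the abstract)."""
    return sum(bits)

def neighbor(bits, max_flips):
    """Flip at most `max_flips` randomly chosen dimensions: the kind of
    bounded-change mechanism the abstract describes (details assumed)."""
    out = bits[:]
    for i in random.sample(range(len(out)), random.randint(1, max_flips)):
        out[i] ^= 1
    return out

def binary_abc(fitness, dim=32, colony=20, max_flips=2, limit=10, iters=200, seed=1):
    random.seed(seed)
    foods = [[random.randint(0, 1) for _ in range(dim)] for _ in range(colony)]
    trials = [0] * colony
    best = max(foods, key=fitness)[:]
    for _ in range(iters):
        for i in range(colony):
            cand = neighbor(foods[i], max_flips)
            if fitness(cand) > fitness(foods[i]):
                foods[i], trials[i] = cand, 0   # greedy acceptance
            else:
                trials[i] += 1
            if trials[i] > limit:               # scout: abandon stagnant source
                foods[i] = [random.randint(0, 1) for _ in range(dim)]
                trials[i] = 0
        cur = max(foods, key=fitness)
        if fitness(cur) > fitness(best):
            best = cur[:]
    return best
```

Limiting flips keeps moves local, so a good food source is refined instead of being scrambled, which is the intuition behind restricting the number of changed dimensions.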
       
  • Fast congestion-free consistent flow forwarding rules update in
           software defined networking
    • Abstract: Publication date: Available online 19 March 2019. Source: Future Generation Computer Systems. Author(s): Pan Li, Songtao Guo, Chengsheng Pan, Li Yang, Guiyan Liu, Yue Zeng. In software defined networking (SDN), flow migration is required when the topology changes, in order to improve network performance through, for example, load balancing. However, black holes, loops and transient congestion may occur during flow migration due to the asynchronous update of switches on the data plane. Therefore, in this paper, we propose a novel segmented update method to shorten the time of rule updates, and a novel transient congestion avoidance algorithm to minimize the number of flows whose update must be delayed; both reduce the update time of flows. Specifically, we construct three novel models to guarantee no black holes, no loops and no transient congestion, respectively. The first two models, avoiding black holes and loops, can update multiple nodes in each segment instead of updating the nodes one by one as Cupid does. The third model, avoiding transient congestion, minimizes the number of delayed updating flows. Subsequently, three novel algorithms, for black-hole avoidance, loop avoidance and congestion avoidance, are proposed. Furthermore, we propose a novel rules update (RU) algorithm that combines these three algorithms to avoid black holes, loops and transient congestion simultaneously. Simulation results show that our scheme can increase the number of directly updated flows by 75% on a single congested link and reduce the rule update time of the flows by 34% compared with existing work.
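The black-hole-free ordering constraint behind such schemes can be sketched with the classic two-phase idea: switches that appear only on the new path carry no live traffic yet, so their rules can all be installed in parallel first; only then are the shared switches repointed, and old rules garbage-collected last. This is a simplified illustration of the general principle, not the paper's actual RU algorithm.

```python
def plan_black_hole_free_update(old_path, new_path):
    """Partition a path migration into update segments such that every
    packet always finds a matching rule (no black hole):
      segment 1: switches only on the new path  -> pre-install (parallel)
      segment 2: switches on both paths         -> repoint to new next hop
      segment 3: switches only on the old path  -> remove stale rules
    Simplified sketch; assumes simple paths given as switch-ID lists."""
    old, new = set(old_path), set(new_path)
    segment1 = [s for s in new_path if s not in old]
    segment2 = [s for s in new_path if s in old]
    segment3 = [s for s in old_path if s not in new]
    return [segment1, segment2, segment3]
```

Grouping independent switches into one segment is what lets a segmented scheme update many nodes per round instead of one by one.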
       
  • Analyse digital forensic evidences through a semantic-based methodology
           and NLP techniques
    • Abstract: Publication date: Available online 19 March 2019. Source: Future Generation Computer Systems. Author(s): F. Amato, G. Cozzolino, V. Moscato, F. Moscato. The increasing adoption of digital technologies to manage and process information used in everyday life results in a growing demand for digital data analysis for investigative purposes. In fact, the reconstruction of computer and telematic crimes, or, in general, of crimes committed with computer systems, requires the adoption of Computer Forensics best practices in order to extract relevant evidence from electronic devices, guaranteeing the integrity of the data and their admissibility in a trial. The process of extraction, conservation, analysis and documentation of a forensic investigation can be enhanced by a framework that supports investigators during their work by correlating evidence collected with different forensic tools. In this work we therefore propose a semantic methodology and a system architecture for evidence correlation, aiming to provide enhanced retrieval and reasoning capabilities.
       
  • A comparative analysis of emerging approaches for securing Java software
           with Intel SGX
    • Abstract: Publication date: Available online 18 March 2019. Source: Future Generation Computer Systems. Author(s): Luigi Coppolino, Salvatore D’Antonio, Giovanni Mazzeo, Luigi Romano. Intel SGX enables developers to protect security critical parts of their application code and data even from privileged software. This type of protection is needed in all cases where applications run on untrusted infrastructures, including public clouds. Since a significant fraction of current applications is written in Java, the research strand on how to fully unleash the potential of SGX in Java is flourishing, and multiple techniques have been proposed. In this paper, we review such techniques, and select the most promising ones – namely SCONE, SGX-LKL, and SGX-JNI Bridge – for an experimental comparison with respect to effort, security, and performance. We use a benchmark application from a real-world case study based on microservices – possibly the most prominent software architecture for current applications – and built on the widely adopted Vert.x development framework. We focus on specific microservices characterized by three different profiles in terms of resource usage – I/O-, CPU-, and Memory-intensive – and assess the trade-offs of the three aforementioned techniques for SGX integration. The results of the analysis can be used as a reference by practitioners willing to identify the best approach for integrating SGX in their Java applications, based on priorities of their particular context.
       
  • A critical analysis of an IoT–aware AAL system for elderly
           monitoring
    • Abstract: Publication date: Available online 18 March 2019. Source: Future Generation Computer Systems. Author(s): Aitor Almeida, Rubén Mulero, Piercosimo Rametta, Vladimir Urošević, Marina Andrić, Luigi Patrono. A growing number of elderly people (65+ years old) are affected by particular conditions, such as Mild Cognitive Impairment (MCI) and frailty, which are characterized by a gradual cognitive and physical decline. Early symptoms may spread across years, and often they are noticed only at late stages, when the outcomes are irreversible and require costly intervention plans. Therefore, the clinical utility of detecting these conditions early is of substantial importance in order to avoid hospitalization and lessen the socio-economic costs of care, while it may also significantly improve elderly people’s quality of life. This work presents a critical performance analysis of an Internet of Things aware Ambient Assisted Living (AAL) system for elderly monitoring. The analysis is focused on three main system components: (i) the city-wide data capturing layer, (ii) the Cloud-based centralized data management repository, and (iii) the risk analysis and prediction module. Each module can provide different operating modes, so the critical analysis aims at defining the best solutions according to the context’s needs. The proposed system architecture is used by the H2020 City4Age project to support geriatricians in the early detection of MCI and frailty conditions.
       
  • THOR: A framework to build an advanced metering infrastructure resilient
           to DAP failures in smart grids
    • Abstract: Publication date: Available online 18 March 2019. Source: Future Generation Computer Systems. Author(s): Igor C.G. Ribeiro, Célio Albuquerque, Antônio Rocha, Diego Passos. Smart Grids represent an evolution of the currently deployed power transmission and distribution systems. They implement a bidirectional data and energy flow infrastructure, which enables important new applications, such as distributed generation, remote metering and demand response. These applications require that a control center collect data from, and send commands to, smart meters on the consumer premises. The communication between the control center and the smart meters is performed through a gateway, called a Data Aggregation Point (DAP). The interconnection of smart meters and DAPs results in an Advanced Metering Infrastructure (AMI), which is a key component of Smart Grids. Because a DAP is a potential single point of failure, the AMI must be implemented in such a way that each smart meter can reach the control center through multiple DAPs. In this work, we propose THOR, a framework that takes advantage of DAP redundancy to improve AMI reliability in case of DAP failures. In addition, we propose the MultiDAP Selection Algorithm (MDSA) as an efficient implementation of THOR. Simulation results show that MDSA is able to guarantee the delivery of nearly 100% of all sent messages, without imposing significant message delay penalties.
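The redundancy principle the abstract relies on, each meter holding an ordered list of reachable DAPs and falling back when one fails, can be made concrete in a few lines. This is an assumed, minimal illustration of the failover idea, not MDSA itself; the function and data-shape names are invented.

```python
def route_via_daps(preferred_daps, failed):
    """Return the first DAP in a meter's preference list that is alive,
    or None when every reachable DAP has failed (message undeliverable)."""
    for dap in preferred_daps:
        if dap not in failed:
            return dap
    return None

def delivery_ratio(meters, failed):
    """Fraction of meters that can still reach the control center,
    given each meter's DAP preference list and a set of failed DAPs."""
    delivered = sum(1 for daps in meters if route_via_daps(daps, failed) is not None)
    return delivered / len(meters)
```

A meter associated with only one DAP loses connectivity when that DAP fails; giving every meter at least two candidate DAPs is what pushes the delivery ratio toward the near-100% figure reported above.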
       
  • New Landscapes of the Data Stream Processing in the era of Fog Computing
    • Abstract: Publication date: Available online 16 March 2019. Source: Future Generation Computer Systems. Author(s): Valeria Cardellini, Gabriele Mencagli, Domenico Talia, Massimo Torquati. The “New Landscapes of the Data Stream Processing in the era of Fog Computing” special issue presents new research on recent advances in the Data Stream Processing (DSP) computing paradigm in the emerging environments of Fog Computing and the Internet of Things (IoT). The papers included in this special issue are relevant examples of recent research achievements in the definition of new DSP applications in the Fog Computing context, in run-time system mechanisms and techniques targeting DSP frameworks, and in new high-level interfaces for data streaming in highly dynamic IT environments.
       
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
 



JournalTOCs © 2009-