Future Internet
Journal Prestige (SJR): 0.219
Citation Impact (citeScore): 1
Number of Followers: 149  

  This is an Open Access journal
ISSN (Print) 1999-5903
Published by MDPI
  • Future Internet, Vol. 11, Pages 182: An Ontology-Based Recommender System
           with an Application to the Star Trek Television Franchise

    • Authors: Paul Sheridan, Mikael Onsjö, Claudia Becerra, Sergio Jimenez, George Dueñas
      First page: 182
      Abstract: Collaborative filtering-based recommender systems have proven to be extremely successful in settings where user preference data on items is abundant. However, collaborative filtering algorithms are hindered by their weakness against the item cold-start problem and general lack of interpretability. Ontology-based recommender systems exploit hierarchical organizations of users and items to enhance browsing, recommendation, and profile construction. While ontology-based approaches address the shortcomings of their collaborative filtering counterparts, ontological organizations of items can be difficult to obtain for items that mostly belong to the same category (e.g., television series episodes). In this paper, we present an ontology-based recommender system that integrates the knowledge represented in a large ontology of literary themes to produce fiction content recommendations. The main novelty of this work is an ontology-based method for computing similarities between items and its integration with the classical Item-KNN (K-nearest neighbors) algorithm. As a case study, we evaluated the proposed method against other approaches by performing the classical rating prediction task on a collection of Star Trek television series episodes in an item cold-start scenario. This transverse evaluation provides insights into the utility of different information resources and methods for the initial stages of recommender system development. We found our proposed method to be a convenient alternative to collaborative filtering approaches for collections of mostly similar items, particularly when other content-based approaches are not applicable or otherwise unavailable. Aside from the new methods, this paper contributes a testbed for future research and an online framework to collaboratively extend the ontology of literary themes to cover other narrative content.
      Citation: Future Internet
      PubDate: 2019-08-22
      DOI: 10.3390/fi11090182
      Issue No: Vol. 11, No. 9 (2019)
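The abstract above pairs an ontology-derived item similarity with the classical Item-KNN rating prediction scheme. A minimal sketch of that combination, assuming a toy Jaccard overlap of hypothetical theme sets in place of the paper's literary-theme ontology (episode names, themes, and ratings are all invented):

```python
# Item-KNN rating prediction with a pluggable item-similarity function.
# Here Jaccard overlap of theme sets stands in for an ontology-derived
# similarity; all data below is illustrative.

def jaccard(a, b):
    """Similarity between two items given as sets of theme labels."""
    return len(a & b) / len(a | b) if a | b else 0.0

def predict(user_ratings, target, themes, k=2):
    """Similarity-weighted average over the user's k most similar rated items."""
    scored = sorted(
        ((jaccard(themes[target], themes[i]), r) for i, r in user_ratings.items()),
        reverse=True,
    )[:k]
    num = sum(s * r for s, r in scored)
    den = sum(s for s, _ in scored)
    return num / den if den else None  # None: no similar rated items

themes = {
    "ep1": {"revenge", "loyalty"},
    "ep2": {"revenge", "first-contact"},
    "ep3": {"diplomacy"},
}
print(predict({"ep1": 5.0, "ep3": 2.0}, "ep2", themes, k=2))
```

Because the target episode shares a theme only with the first rated episode, the prediction is pulled entirely toward that episode's rating.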
  • Future Internet, Vol. 11, Pages 183: A Proof-of-Concept Demonstration of
           Isolated and Encrypted Service Function Chains

    • Authors: Håkon Gunleifsen, Thomas Kemmerich, Vasileios Gkioulos
      First page: 183
      Abstract: Contemporary Service Function Chaining (SFC), and the requirements arising from privacy concerns, call for the increasing integration of security features such as encryption and isolation across Network Function Virtualisation (NFV) domains. Therefore, suitable adaptations of automation and encryption concepts for the development of interconnected data centre infrastructures are essential. Nevertheless, packet isolation constraints related to the current NFV infrastructure and SFC protocols render current NFV standards insecure. Accordingly, the goal of our work was an experimental demonstration of a new SFC packet forwarding standard that enables contemporary data centres to overcome these constraints. This article presents a comprehensive view of the developed architecture, focusing on the elements that constitute a new forwarding standard of encrypted SFC packets. Through a Proof-of-Concept demonstration, we present our concluding experimental results of how the architecture fulfils the requirements defined in our use case.
      Citation: Future Internet
      PubDate: 2019-08-24
      DOI: 10.3390/fi11090183
      Issue No: Vol. 11, No. 9 (2019)
  • Future Internet, Vol. 11, Pages 184: Mobility-Enabled Edge Server
           Selection for Multi-User Composite Services

    • Authors: Wenming Zhang, Yiwen Zhang, Qilin Wu, Kai Peng
      First page: 184
      Abstract: In mobile edge computing, a set of edge servers is geographically deployed near the mobile users such that accessible computing capacities and services can be provided to users with low latency. Due to user mobility, one fundamental and critical problem in mobile edge computing is how to select edge servers for many mobile users so that the total waiting time is minimized. In this paper, we propose a multi-user waiting time computation model for composite services and show the resource contention of the edge server among mobile users. Then, we introduce a novel and optimal Multi-user Edge server Selection method based on Particle swarm optimization (MESP) in mobile edge computing, which selects edge servers for mobile users in advance within polynomial time. Extensive simulations on a real-world data-trace show that the MESP algorithm can effectively reduce the total waiting time compared with traditional approaches.
      Citation: Future Internet
      PubDate: 2019-08-25
      DOI: 10.3390/fi11090184
      Issue No: Vol. 11, No. 9 (2019)
  • Future Internet, Vol. 11, Pages 185: An Improved Method for Named Entity
           Recognition and Its Application to CEMR

    • Authors: Ming Gao, Qifeng Xiao, Shaochun Wu, Kun Deng
      First page: 185
      Abstract: Named Entity Recognition (NER) on Clinical Electronic Medical Records (CEMR) is a fundamental step in extracting disease knowledge by identifying specific entity terms such as diseases, symptoms, etc. However, state-of-the-art NER methods based on Long Short-Term Memory (LSTM) fail to fully exploit GPU parallelism on massive medical records. Although a novel NER method based on Iterated Dilated CNNs (ID-CNNs) can accelerate network computing, it tends to ignore the word-order feature and semantic information of the current word. In order to enhance the performance of ID-CNNs-based models on NER tasks, an attention-based ID-CNNs-CRF model, which combines the word-order feature and local context, is proposed. Firstly, position embedding is utilized to fuse word-order information. Secondly, the ID-CNNs architecture is used to extract global semantic information rapidly. Simultaneously, the attention mechanism is employed to pay attention to the local context. Finally, we apply the CRF to obtain the optimal tag sequence. Experiments conducted on two CEMR datasets show that our model outperforms traditional ones. F1-scores of 94.55% and 91.17% are obtained on the two datasets, respectively, both better than those of LSTM-based models.
      Citation: Future Internet
      PubDate: 2019-08-26
      DOI: 10.3390/fi11090185
      Issue No: Vol. 11, No. 9 (2019)
  • Future Internet, Vol. 11, Pages 186: Sustainable Communication Systems: A
           Graph-Labeling Approach for Cellular Frequency Allocation in
           Densely-Populated Areas

    • Authors: Adebola Orogun, Oluwaseun Fadeyi, Ondrej Krejcar
      First page: 186
      Abstract: The need for smart and sustainable communication systems has led to the development of mobile communication networks. In turn, the vast functionalities of the global system of mobile communication (GSM) have resulted in a growing number of subscribers. As the number of users increases, the need for efficient and effective planning of the “limited” frequency spectrum of the GSM is inevitable, particularly in densely-populated areas. As such, there are ongoing discussions about frequency (channel) allocation methods to resolve the challenges of channel allocation, which is an NP-complete (nondeterministic polynomial time) problem. In this paper, we propose an algorithm for channel allocation which takes into account soft constraints (co-channel interference and adjacent channel interference). By using the Manhattan distance concept, this study shows that the formulation of the algorithm is correct and in line with results in the literature. Hence, the Manhattan distance concept may be useful in other scheduling and optimization problems. Furthermore, this unique concept makes it possible to develop a more sustainable telecommunication system with ease of connectivity among users, even when several subscribers are on a common frequency.
      Citation: Future Internet
      PubDate: 2019-08-26
      DOI: 10.3390/fi11090186
      Issue No: Vol. 11, No. 9 (2019)
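The abstract above describes channel allocation under co-channel and adjacent-channel interference constraints expressed via Manhattan distance. A greedy sketch of that idea on a toy grid of cells, with invented reuse distances (this is an illustration of the constraint structure, not the paper's algorithm):

```python
# Greedy channel allocation on a grid of cells. Manhattan distance
# enforces reuse constraints: identical channels need distance >= D_CO,
# numerically adjacent channels need distance >= D_ADJ. The distances
# and grid below are illustrative assumptions.

D_CO, D_ADJ = 3, 2  # assumed co-channel / adjacent-channel reuse distances

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def allowed(cell, ch, assignment):
    for other, oc in assignment.items():
        d = manhattan(cell, other)
        if oc == ch and d < D_CO:
            return False          # co-channel interference
        if abs(oc - ch) == 1 and d < D_ADJ:
            return False          # adjacent-channel interference
    return True

def allocate(cells, channels):
    assignment = {}
    for cell in cells:
        ch = next((c for c in channels if allowed(cell, c, assignment)), None)
        if ch is None:
            raise ValueError(f"no feasible channel for cell {cell}")
        assignment[cell] = ch
    return assignment

cells = [(x, y) for x in range(3) for y in range(3)]
print(allocate(cells, channels=range(1, 10)))
```

The greedy pass always picks the lowest feasible channel, so nearby cells end up spread across the spectrum while distant cells can reuse channels.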
  • Future Internet, Vol. 11, Pages 187: Research on SWIM Services Dynamic
           Migration Method

    • Authors: Zhijun Wu, Shengyan Zhou, Liang Liu, Jin Lei
      First page: 187
      Abstract: Air traffic management (ATM) plays an important role in maintaining and promoting air traffic safety, maintaining air traffic order and ensuring smooth air traffic. As the core of air traffic management, it is essential to ensure the safe and stable operation of system-wide information management (SWIM). Facing a complex and ever-changing network environment, a SWIM services dynamic migration method is proposed in this paper. This method combines SWIM core services to select destination nodes and migrate services. Experiments show that the method can hide the service node while ensuring service continuity and increase the difficulty of malicious detection. Compared with other methods, it is more suitable for SWIM in terms of invulnerability. The throughput and delay performance of the method can meet the needs of SWIM.
      Citation: Future Internet
      PubDate: 2019-08-27
      DOI: 10.3390/fi11090187
      Issue No: Vol. 11, No. 9 (2019)
  • Future Internet, Vol. 11, Pages 188: SEO Practices: A Study about the Way
           News Websites Allow the Users to Comment on Their News Articles

    • Authors: Minos-Athanasios Karyotakis, Evangelos Lamprou, Matina Kiourexidou, Nikos Antonopoulos
      First page: 188
      Abstract: In the current media world, there is a huge debate about the importance of the visibility of a news website in order to secure its existence. Thus, search engine optimization (SEO) practices have emerged in news media systems around the world. This study aimed to expand the current literature on SEO practices by examining, via the walkthrough method, the ways that news companies allow users to comment on their online news articles. Comments on news websites are related to the notions of social influence and information diffusion, and play an essential role as an SEO practice, for instance, by providing content and engagement. The examined sample was collected from the most visited news websites’ rankings at the global scale and for Greece and Cyprus. The findings reveal that news websites throughout the globe use similar features and ways to support users’ comments. In the meantime, though, a high number of news websites did not allow users to use their social media accounts to comment on the provided news articles, or provided multiple comment platforms. This trend goes against SEO practices. It is believed that this finding is associated with the difficulty news organizations face in regulating and protecting themselves from users’ comments that, in some cases, promote harmful rhetoric and polarization.
      Citation: Future Internet
      PubDate: 2019-08-30
      DOI: 10.3390/fi11090188
      Issue No: Vol. 11, No. 9 (2019)
  • Future Internet, Vol. 11, Pages 189: RFID Based Embedded System for
           Sustainable Food Management in an IoT Network Paradigm

    • Authors: Raúl Parada, Alfonso Palazón, Carlos Monzo, Joan Melià-Seguí
      First page: 189
      Abstract: A third of the food produced in the world ends up in the rubbish, enough to put an end to world hunger. At the same time, society is increasingly concerned with maintaining healthy eating habits. An RFID (radio frequency identification) food management system is designed to palliate the previously described issues in an Internet of Things (IoT) network paradigm. It consists of RFID readers placed on a user’s kitchen furniture, which automatically read food information; unlike with barcode technology, no direct line of sight between reader and tag is needed. As a complement, a multi-platform web application is developed, allowing users to check food expiration dates and other detailed information. The application notifies the user when a product is about to expire. It also offers recipes that might be prepared with available foods, thus preventing them from being wasted. The recipes are accompanied by their nutritional information, so that the user can exhaustively monitor what he/she eats. This embedded system may provide economic benefits to the manufacturer, since supermarkets can pay to have their products advertised through the application. After system deployment, design conclusions are shown, and future improvement points are indicated.
      Citation: Future Internet
      PubDate: 2019-09-01
      DOI: 10.3390/fi11090189
      Issue No: Vol. 11, No. 9 (2019)
  • Future Internet, Vol. 11, Pages 190: Deep Learning-Based Sentimental
           Analysis for Large-Scale Imbalanced Twitter Data

    • Authors: Jamal, Xianqiao, Aldabbas
      First page: 190
      Abstract: Emotion detection in social media is very effective for measuring the mood of people about a specific topic, news item, or product. It has a wide range of applications, including identifying psychological conditions such as anxiety or depression in users. However, it is a challenging task to distinguish useful emotion features from a large corpus of text because emotions are subjective, with limited fuzzy boundaries that may be expressed in different terminologies and perceptions. To tackle this issue, this paper presents a hybrid deep learning approach based on TensorFlow with Keras for emotion detection on a large scale of imbalanced tweet data. First, preprocessing steps are used to extract useful features from raw tweets without noisy data. Second, the entropy weighting method is used to compute the importance of each feature. Third, a class balancer is applied to balance each class. Fourth, Principal Component Analysis (PCA) is applied to transform highly correlated features into normalized forms. Finally, TensorFlow-based deep learning with Keras is used to predict high-quality features for emotion classification. The proposed methodology is analyzed on a dataset of 1,600,000 tweets collected from Kaggle. The proposed approach is compared with other state-of-the-art techniques at different training ratios, and the results show that it outperforms them.
      Citation: Future Internet
      PubDate: 2019-09-02
      DOI: 10.3390/fi11090190
      Issue No: Vol. 11, No. 9 (2019)
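The second step of the pipeline above weights features by entropy. A minimal sketch of one common entropy-weighting formulation, under the assumption that a feature concentrated in one class is informative while a feature spread evenly across classes is not (the counts and feature names are made up):

```python
# Entropy-based feature weighting: a feature whose occurrences are spread
# evenly across classes has high entropy (low discriminative power) and
# gets a low weight; a class-specific feature gets weight 1. Toy counts.
import math

def entropy_weight(class_counts):
    """class_counts: occurrences of one feature per class."""
    total = sum(class_counts)
    if total == 0:
        return 0.0
    probs = [c / total for c in class_counts if c > 0]
    h = -sum(p * math.log2(p) for p in probs)
    h_max = math.log2(len(class_counts))   # entropy of a uniform spread
    return 1.0 - h / h_max                 # 1 = class-specific, 0 = uninformative

# A hypothetical term seen only in one class vs. one seen equally in both:
print(entropy_weight([40, 0]))   # class-specific feature
print(entropy_weight([25, 25]))  # uninformative feature
```

In a full pipeline these weights would scale the term features before class balancing and PCA.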
  • Future Internet, Vol. 11, Pages 191: Social Network Services Management
           and Risk of Doocing. Comment on Kim, S.; Park, H.; Choi, M.J. “Negative
           Impact of Social Network Services Based on Stressor-Stress-Outcome: The
           Role of Experience of Privacy Violations, Future Int. 2019, 11, 137”

    • Authors: Fantinelli, Cortini
      First page: 191
      Abstract: In light of the recent work by Kim and colleagues on Social Network Services (SNS), examining individual and SNS characteristics as predictors of SNS fatigue, we propose extending their model to the job context. SNS use is a relevant issue in occupational life, as employers use it to gain a deeper knowledge of their employees and as a tool of corporate communication. Employees can use SNS as a social platform and as a way to express discontent. In this latter case, the organization can implement a disciplinary procedure toward employees, known as doocing. The perception of privacy violation is strictly related to the fear and awareness of doocing, which in turn can predict SNS fatigue as well. Therefore, it could be worthwhile to extend Kim and colleagues’ model to the workplace, with particular attention to the doocing phenomenon.
      Citation: Future Internet
      PubDate: 2019-09-04
      DOI: 10.3390/fi11090191
      Issue No: Vol. 11, No. 9 (2019)
  • Future Internet, Vol. 11, Pages 192: A Framework for the Detection of
           Search and Rescue Patterns Using Shapelet Classification

    • Authors: Konstantinos Kapadais, Iraklis Varlamis, Christos Sardianos, Konstantinos Tserpes
      First page: 192
      Abstract: The problem of unmanned supervision of maritime areas has attracted the interest of researchers for the last few years, mainly thanks to the advances in vessel monitoring that the Automatic Identification System (AIS) has brought. Several frameworks and algorithms have been proposed for the management of vessel trajectory data, which focus on data compression, data clustering, classification and visualization, offering a wide variety of solutions from vessel monitoring to automatic detection of complex events. This work builds on our previous work in the topic of automatic detection of Search and Rescue (SAR) missions, by developing and evaluating a methodology for classifying the trajectories of vessels that possibly participate in such missions. The proposed solution takes advantage of a synthetic trajectory generator and a classifier that combines a genetic algorithm (GENDIS) for the extraction of informative shapelets from training data and a transformation to the shapelets’ feature space. Using the generator and several SAR patterns that are formally described in naval operations bibliography, it generates a synthetic dataset that is used to train the classifier. Evaluation on both synthetic and real data has very promising results and helped us to identify vessel SAR maneuvers without putting any effort into manual annotation.
      Citation: Future Internet
      PubDate: 2019-09-04
      DOI: 10.3390/fi11090192
      Issue No: Vol. 11, No. 9 (2019)
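The classifier above maps each trajectory into a shapelet feature space: the feature for a shapelet is the minimum sliding-window distance between it and the series. A minimal sketch of that transform (the toy series and shapelet are invented; GENDIS-style shapelet discovery itself is not reproduced here):

```python
# Shapelet transform: each series becomes a vector of its minimum
# sliding-window Euclidean distances to a set of shapelets, after which
# any standard classifier can be applied. All data here is illustrative.
import math

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between the shapelet and any window."""
    m = len(shapelet)
    return min(
        math.sqrt(sum((series[i + j] - shapelet[j]) ** 2 for j in range(m)))
        for i in range(len(series) - m + 1)
    )

def transform(series_list, shapelets):
    return [[shapelet_distance(s, sh) for sh in shapelets] for s in series_list]

zigzag = [0, 1, 0, 1, 0, 1]   # e.g. an oscillating heading, sweep-like
flat = [0, 0, 0, 0, 0, 0]     # steady transit
features = transform([zigzag, flat], shapelets=[[0, 1, 0]])
print(features)
```

The oscillating series matches the shapelet exactly (distance 0), while the steady one does not, which is exactly the separation a downstream classifier exploits.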
  • Future Internet, Vol. 11, Pages 193: Satellite Integration into 5G: Accent
           on First Over-The-Air Tests of an Edge Node Concept with Integrated
           Satellite Backhaul

    • Authors: Florian Völk, Konstantinos Liolis, Marius Corici, Joe Cahill, Robert T. Schwarz, Thomas Schlichter, Eric Troudt, Andreas Knopp
      First page: 193
      Abstract: The 5G vision embraces a broad range of applications including the connectivity in underserved and remote areas. In particular, for these applications, satellites are going to play a role in future 5G networks to provide capacity on trains, vessels, aircraft, and for base stations around the globe. In this paper, a 5G edge node concept, developed and evaluated with over-the-air tests using satellites in the geostationary orbit, is presented. The article covers a testbed demonstration study in Europe with a large-scale testbed including satellites and the latest standardization for the network architecture. The main goal of this testbed is to evaluate how satellite networks can be best integrated within the convergent 5G environment. The over-the-air tests for 5G satellite integration in this article are based on a 3GPP Release 15 core network architecture.
      Citation: Future Internet
      PubDate: 2019-09-05
      DOI: 10.3390/fi11090193
      Issue No: Vol. 11, No. 9 (2019)
  • Future Internet, Vol. 11, Pages 194: 25 Years of Bluetooth Technology

    • Authors: Sherali Zeadally, Farhan Siddiqui, Zubair Baig
      First page: 194
      Abstract: Bluetooth technology started off as a wireless, short-range cable replacement technology but it has undergone significant developments over the last two decades. Bluetooth radios are currently embedded in almost all computing devices including personal computers, smart phones, smart watches, and even micro-controllers. For many of us, Bluetooth is an essential technology that we use every day. We provide an insight into the history of Bluetooth and its significant design developments over the last 25 years. We also discuss related issues (including security) and Bluetooth as a driving technology for the Internet of Things (IoT). Finally, we also present recent research results obtained with Bluetooth technology in various application areas.
      Citation: Future Internet
      PubDate: 2019-09-09
      DOI: 10.3390/fi11090194
      Issue No: Vol. 11, No. 9 (2019)
  • Future Internet, Vol. 11, Pages 195: ERMOCTAVE: A Risk Management
           Framework for IT Systems Which Adopt Cloud Computing

    • Authors: Masky Mackita, Soo-Young Shin, Tae-Young Choe
      First page: 195
      Abstract: Many companies are adopting cloud computing technology because moving to the cloud has an array of benefits. During the decision-making process for adopting cloud computing, the importance of risk management is progressively recognized. However, traditional risk management methods cannot be applied directly to cloud computing when data are transmitted and processed by external providers. When they are directly applied, risk management processes can fail by ignoring the distributed nature of cloud computing and leaving numerous risks unidentified. To address this gap, this paper introduces a new risk management method, Enterprise Risk Management for Operationally Critical Threat, Asset, and Vulnerability Evaluation (ERMOCTAVE), which combines Enterprise Risk Management and Operationally Critical Threat, Asset, and Vulnerability Evaluation for mitigating risks that can arise with cloud computing. ERMOCTAVE combines the components of the two risk management methods into joint processes for a comprehensive perception of risks. In order to explain ERMOCTAVE in detail, a case study scenario is presented where an Internet seller migrates some modules to the Microsoft Azure cloud. A functionality comparison with ENISA and Microsoft cloud risk assessment shows that ERMOCTAVE has additional features, such as key objectives and strategies, critical assets, and risk measurement criteria.
      Citation: Future Internet
      PubDate: 2019-09-10
      DOI: 10.3390/fi11090195
      Issue No: Vol. 11, No. 9 (2019)
  • Future Internet, Vol. 11, Pages 196: Evaluating the Degree of Uncertainty
           of Research Activities in Industry 4.0

    • Authors: Sirio Cividino, Gianluca Egidi, Ilaria Zambon, Andrea Colantoni
      First page: 196
      Abstract: Research and development (R&D) are always oriented towards new discoveries, based on original terms or hypotheses, and their concluding outcomes are often uncertain. The present work focused on the degree of uncertainty of R&D activities. In fact, uncertainty makes it difficult to quantify the time and resources needed to achieve a final outcome, create a work plan and budget, and finalize the resulting “innovative” products or services that could be transferred or exchanged in a specific market. The present work attempts to indicate the degree of uncertainty of the research activities developed by a set of firms. The method used aimed to quantify the five criteria defined by the Frascati Manual. Through the creation of an uncertainty cloud, a cone of uncertainty was defined following an approach based on project management. The evaluation grid was characterized by the decomposition of the different variables into quartiles, which allowed for the detection of the evolution of the project and each of its components. An ancillary objective was to observe the degree of development of these industries towards an Industry 4.0 framework.
      Citation: Future Internet
      PubDate: 2019-09-11
      DOI: 10.3390/fi11090196
      Issue No: Vol. 11, No. 9 (2019)
  • Future Internet, Vol. 11, Pages 197: MU R-CNN: A Two-Dimensional Code
           Instance Segmentation Network Based on Deep Learning

    • Authors: Yuan, Li, Jiang, Xu, Guo, Zhao, Zhang, Guo, Shen
      First page: 197
      Abstract: In the context of Industry 4.0, the most popular way to identify and track objects is to add tags, and currently most companies still use cheap quick response (QR) tags, which can be positioned by computer vision (CV) technology. In CV, instance segmentation (IS) can detect the position of tags while also segmenting each instance. Currently, the mask region-based convolutional neural network (Mask R-CNN) method is used to realize IS, but the completeness of the instance mask cannot be guaranteed. Furthermore, due to the rich texture of QR tags, low-quality images can lower intersection-over-union (IoU) significantly, disabling it from accurately measuring the completeness of the instance mask. In order to optimize the IoU of the instance mask, a QR tag IS method named the mask UNet region-based convolutional neural network (MU R-CNN) is proposed. We utilize the UNet branch to reduce the impact of low image quality on IoU through texture segmentation. The UNet branch does not depend on the features of the Mask R-CNN branch so its training process can be carried out independently. The pre-trained optimal UNet model can ensure that the loss of MU R-CNN is accurate from the beginning of the end-to-end training. Experimental results show that the proposed MU R-CNN is applicable to both high- and low-quality images, and thus more suitable for Industry 4.0.
      Citation: Future Internet
      PubDate: 2019-09-13
      DOI: 10.3390/fi11090197
      Issue No: Vol. 11, No. 9 (2019)
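The abstract above repeatedly measures instance-mask completeness with intersection-over-union (IoU). For reference, the metric on two binary masks is simply the overlap area divided by the combined area; a minimal sketch with toy 4×4 masks:

```python
# Mask IoU: intersection over union of two binary masks, the score used
# above to measure instance-mask completeness. Masks are toy examples.

def mask_iou(a, b):
    inter = sum(x and y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    union = sum(x or y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return inter / union if union else 0.0

pred = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
true = [[1, 1, 0, 0],
        [1, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(mask_iou(pred, true))  # 3 overlapping pixels / 4 total pixels
```

A prediction that over- or under-covers the tag lowers the intersection relative to the union, which is why low-quality images with broken masks drag the score down.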
  • Future Internet, Vol. 11, Pages 198: Dynamic Group Recommendation Based on
           the Attention Mechanism

    • Authors: Xu, Ding, Sun, Zhao, Chen
      First page: 198
      Abstract: Group recommendation has attracted significant research efforts for its importance in benefiting group members. The purpose of group recommendation is to provide recommendations to group users, such as recommending a movie to several friends. Group recommendation requires that the recommendation be as satisfactory as possible to each member of the group. Because users are not weighted differently across items, group decision-making cannot be made dynamically. Therefore, in this paper, a dynamic recommendation method based on the attention mechanism is proposed. Firstly, an improved density peak clustering (DPC) algorithm is used to discover the potential groups; then, the attention mechanism is adopted to learn the influence weight of each user. The normalized discounted cumulative gain (NDCG) and hit ratio (HR) are adopted to evaluate the validity of the recommendation results. Experimental results on the CAMRa2011 dataset show that our method is effective.
      Citation: Future Internet
      PubDate: 2019-09-17
      DOI: 10.3390/fi11090198
      Issue No: Vol. 11, No. 9 (2019)
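The core idea above is that attention assigns each group member an influence weight before member preferences are aggregated. A minimal sketch of that aggregation step, with softmax attention over invented per-member relevance scores (in the paper these weights are learned, not hand-set):

```python
# Softmax attention over group members: each member's score for a
# candidate item is weighted by attention over a per-member relevance
# signal. Scores and relevances below are illustrative stand-ins for
# learned quantities.
import math

def softmax(xs):
    m = max(xs)                        # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def group_score(member_scores, member_relevance):
    """Attention-weighted combination of member preference scores."""
    weights = softmax(member_relevance)
    return sum(w * s for w, s in zip(weights, member_scores))

# Three members rate a candidate item; the second is most influential.
print(group_score([4.0, 2.0, 5.0], member_relevance=[0.1, 2.0, 0.5]))
```

Because the weights sum to one, the group score always lies between the least and most enthusiastic member's score, pulled toward the most influential member.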
  • Future Internet, Vol. 11, Pages 199: Enhancing the 3GPP V2X Architecture
           with Information-Centric Networking

    • Authors: Marica Amadeo, Claudia Campolo, Antonella Molinaro, Jerome Harri, Christian Esteve Rothenberg, Alexey Vinel
      First page: 199
      Abstract: Vehicle-to-everything (V2X) communications allow a vehicle to interact with other vehicles and with communication parties in its vicinity (e.g., road-side units, pedestrian users, etc.) with the primary goal of making the driving and traveling experience safer, smarter and more comfortable. A wide set of V2X-tailored specifications have been identified by the Third Generation Partnership Project (3GPP) with focus on the design of architecture enhancements and a flexible air interface to ensure ultra-low latency, highly reliable and high-throughput connectivity as the ultimate aim. This paper discusses the potential of leveraging Information-Centric Networking (ICN) principles in the 3GPP architecture for V2X communications. We consider Named Data Networking (NDN) as reference ICN architecture and elaborate on the specific design aspects, required changes and enhancements in the 3GPP V2X architecture to enable NDN-based data exchange as an alternative/complementary solution to traditional IP networking, which barely matches the dynamics of vehicular environments. Results are provided to showcase the performance improvements of the NDN-based proposal in disseminating content requests over the cellular network against a traditional networking solution.
      Citation: Future Internet
      PubDate: 2019-09-18
      DOI: 10.3390/fi11090199
      Issue No: Vol. 11, No. 9 (2019)
  • Future Internet, Vol. 11, Pages 200: Incorporating Background Checks with
           Sentiment Analysis to Identify Violence Risky Chinese Microblogs

    • Authors: Yun-Fei Jia, Shan Li, Renbiao Wu
      First page: 200
      Abstract: Based on Web 2.0 technology, more and more people tend to express their attitudes or opinions on the Internet. Radical ideas, rumors, terrorism-related, or violent content is also propagated on the Internet, causing several incidents of social panic every year in China. In fact, most of this content comprises joking or emotional catharsis. Detecting such content with conventional techniques usually incurs a high false-alarm rate. To address this problem, this paper introduces a technique that combines sentiment analysis with background checks. State-of-the-art sentiment analysis usually depends on training datasets in a specific topic area. Unfortunately, for some domains, such as violence risk speech detection, there is no definitive training data. In particular, topic-independent sentiment analysis of short Chinese texts has rarely been reported in the literature. In this paper, the violence risk of Chinese microblogs is calculated from multiple perspectives. First, a lexicon-based method is used to retrieve violence-related microblogs, and a similarity-based method is then used to extract sentiment words. Semantic rules and emoticons are employed to obtain the sentiment polarity and sentiment strength of short texts. Second, the activity risk is calculated based on the characteristics of part-of-speech (PoS) sequences and semantic rules, and a threshold is then set to capture the key users. Finally, the risk is confirmed using the historical speech and the opinions of the friend-circle of the key users. The experimental results show that the proposed approach outperforms the support vector machine (SVM) method on a topic-independent corpus and can effectively reduce the false alarm rate.
      Citation: Future Internet
      PubDate: 2019-09-19
      DOI: 10.3390/fi11090200
      Issue No: Vol. 11, No. 9 (2019)
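The first stage above scores short texts for sentiment polarity and strength from a lexicon, emoticons, and semantic rules. A minimal sketch of that kind of scorer; the tiny lexicon, emoticon table, and single negation rule are illustrative stand-ins, not the paper's resources:

```python
# Lexicon-based polarity and strength scoring for tokenized short texts.
# The lexicon, emoticons, and negation handling below are toy assumptions.

LEXICON = {"attack": -2, "destroy": -3, "peace": 2, "joke": 1}
EMOTICONS = {":)": 1, ":(": -1}
NEGATIONS = {"not", "no"}

def sentiment(tokens):
    """Return (polarity, strength): polarity in {-1, 0, +1}, strength >= 0."""
    score, negate = 0, False
    for tok in tokens:
        if tok in NEGATIONS:
            negate = True             # flip the next sentiment-bearing word
            continue
        val = LEXICON.get(tok, EMOTICONS.get(tok, 0))
        score += -val if negate and val else val
        if val:
            negate = False
    polarity = (score > 0) - (score < 0)
    return polarity, abs(score)

print(sentiment(["just", "a", "joke", ":)"]))
print(sentiment(["not", "peace", "destroy"]))
```

A downstream risk check would combine this polarity/strength pair with the activity features and the user's history rather than trusting it alone.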
  • Future Internet, Vol. 11, Pages 201: Role-Mining Optimization with
           Separation-of-Duty Constraints and Security Detections for
           Authorizations

    • Authors: Wei Sun, Shiwei Wei, Huaping Guo, Hongbing Liu
      First page: 201
      Abstract: Role-based access control (RBAC), which has been regarded as one of the most popular access-control mechanisms, is featured by the separation-of-duty constraints, mutually exclusive constraints, and the least-privileges principle. Role mining, a bottom-up role-engineering technology, is an effective method to migrate from a non-RBAC system to an RBAC system. However, conventional role-mining approaches not only fail to consider separation-of-duty constraints, but also cannot ensure the security of a constructed RBAC system when the corresponding mined results violate a separation-of-duty constraint and/or the least-privileges principle. To solve these problems, this paper proposes a novel method called role-mining optimization with separation-of-duty constraints and security detections for authorizations (RMO_SODSDA), which mainly includes two aspects. First, we present a role-mining-optimization approach for satisfying the separation-of-duty constraints, and we construct different variants of mutually exclusive constraints to correctly implement the given separation-of-duty constraints based on unconstrained role mining. Second, to ensure the security of the constructed system and evaluate authorization performance, we reduce the authorization-query problem to a maximal-satisfiability problem. The experiments validate the effectiveness and efficiency of the proposed method.
      Citation: Future Internet
      PubDate: 2019-09-19
      DOI: 10.3390/fi11090201
      Issue No: Vol. 11, No. 9 (2019)
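The separation-of-duty checking at the heart of the method above can be illustrated with a minimal sketch. The constraint encoding and all names below are assumptions, not the paper's formulation: a mutually exclusive constraint forbids any single user from holding t or more roles out of a given set.

```python
# Minimal sketch (all names hypothetical): verify that a user-role
# assignment satisfies separation-of-duty (SoD) constraints expressed
# as mutually exclusive role sets -- no user may hold t or more roles
# from the same constraint set.

def violates_sod(user_roles, sod_constraints):
    """user_roles: {user: set of roles}; sod_constraints: list of
    (role_set, t) pairs meaning 'fewer than t roles from role_set'."""
    for user, roles in user_roles.items():
        for role_set, t in sod_constraints:
            if len(roles & role_set) >= t:
                return True   # constraint broken: unsafe assignment
    return False

assignment = {"alice": {"cashier", "auditor"}, "bob": {"cashier"}}
constraints = [({"cashier", "auditor"}, 2)]   # cashier/auditor exclusive
unsafe = violates_sod(assignment, constraints)
```

A role-mining result that trips this check would be rejected or repaired before constructing the RBAC system.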
  • Future Internet, Vol. 11, Pages 202: Ranking by Relevance and Citation
           Counts, a Comparative Study: Google Scholar, Microsoft Academic, WoS and
           Scopus

    • Authors: Rovira, Codina, Guerrero-Solé, Lopezosa
      First page: 202
      Abstract: Search engine optimization (SEO) constitutes the set of methods designed to increase the visibility of, and the number of visits to, a web page by means of its ranking on the search engine results pages. Recently, SEO has also been applied to academic databases and search engines, in a trend that is in constant growth. This new approach, known as academic SEO (ASEO), has generated a field of study with considerable future growth potential due to the impact of open science. The study reported here forms part of this new field of analysis. The ranking of results is a key aspect of any information system, since it determines the way in which these results are presented to the user. The aim of this study is to analyze and compare the relevance ranking algorithms employed by various academic platforms in order to identify the importance of received citations in their algorithms. Specifically, we analyze two search engines and two bibliographic databases: Google Scholar and Microsoft Academic, on the one hand, and Web of Science and Scopus, on the other. A reverse-engineering methodology is employed, based on the statistical analysis of Spearman's correlation coefficients. The results indicate that the ranking algorithms used by Google Scholar and Microsoft Academic are the two most heavily influenced by citations received. Indeed, citation counts are clearly the main SEO factor in these academic search engines. An unexpected finding is that, at certain points in time, Web of Science (WoS) used citations received as a key ranking factor, despite the fact that WoS support documents claim this factor does not intervene.
      Citation: Future Internet
      PubDate: 2019-09-19
      DOI: 10.3390/fi11090202
      Issue No: Vol. 11, No. 9 (2019)
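The reverse-engineering methodology of the abstract above rests on Spearman's rank correlation between result position and citation counts. A minimal sketch with toy data, assuming no tied ranks so the classic closed form rho = 1 - 6*sum(d^2)/(n*(n^2-1)) applies:

```python
# Correlate a platform's result ranking (1 = top) with citation counts
# using Spearman's coefficient; rho near -1 means the best positions go
# to the most-cited documents. Toy data, no ties assumed.

def rank(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(x, y):
    n = len(x)
    rx, ry = rank(x), rank(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

positions = [1, 2, 3, 4, 5]             # positions on the results page
citations = [900, 340, 120, 50, 10]     # citations of each hit (toy)
rho = spearman(positions, citations)    # -1.0: citations dominate the ranking
```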
  • Future Internet, Vol. 11, Pages 203: Vehicular Delay-Tolerant Networks
           with Image Recognition-Based Adaptive Array Antenna for Winter Road
           Surveillance in Local Areas

    • Authors: Noriki Uchida, Goshi Sato, Yoshitaka Shibata
      First page: 203
      Abstract: The rapid growth of ITS (intelligent transport system) development calls for new kinds of applications, such as winter road surveillance systems. However, network-difficult areas for LTE (long-term evolution) or 3G transmission must still be considered when visiting sightseeing spots such as ski resorts or spas in mountain areas. Therefore, this paper proposes a winter road surveillance system for local areas based on vehicular delay-tolerant networks. An adaptive array antenna controlled by image recognition with a Kalman filter algorithm is also proposed for the system in order to achieve higher delivery rates. The implementation of a prototype system is presented, in which effective radio transmission is realized by vehicular image recognition methods and asynchronous voltage controls of the antenna elements for rapid directional control of the radio transmission. The experimental results showed that the prototype's radio directional control for a target vehicle completes within one second, and a simulation on a GIS (geographic information system) map showed that the delivery rates of the proposed method are better than those of epidemic DTN (delay-tolerant networking) routing with a nondirectional antenna. The experiments indicate higher data-transmission efficiency even in mountain areas. Furthermore, future research subjects are discussed.
      Citation: Future Internet
      PubDate: 2019-09-19
      DOI: 10.3390/fi11090203
      Issue No: Vol. 11, No. 9 (2019)
  • Future Internet, Vol. 11, Pages 164: Energy Efficient Communications for
           Reliable IoT Multicast 5G/Satellite Services

    • Authors: Francesco Chiti, Romano Fantacci, Laura Pierucci
      First page: 164
      Abstract: Satellites can provide strong value-add and complementarity with the new cellular system of the fifth generation (5G) in cost-effective solutions for a massive number of users/devices/things. Due to the inherent broadcast nature of satellite communications, which assures access to remote areas and support for a very large number of devices, satellite systems will gain a major role in the development of the Internet of Things (IoT) sector. In this vision, reliable multicast services via satellite can be provided to deliver the same content efficiently to multiple devices on the Earth, for software updates to groups of cars in the Machine-to-Machine (M2M) context, or for sending control messages to actuators/IoT embedded devices. The paper focuses on network coding (NC) techniques applied to a hybrid satellite/terrestrial network to support reliable multicast services. An energy optimization method is proposed based on the joint adaptation of: (i) the repetition factor of data symbols on multiple subcarriers of the transmitted orthogonal frequency-division multiplexing (OFDM) signal; and (ii) the mean number of coded packets needed, according to the requirements of each group and the physical satellite-link conditions.
      Citation: Future Internet
      PubDate: 2019-07-25
      DOI: 10.3390/fi11080164
      Issue No: Vol. 11, No. 8 (2019)
  • Future Internet, Vol. 11, Pages 165: Social Emotional Opinion Decision

    • Authors: Jin Sol Yang, Myung-Sook Ko, Kwang Sik Chung
      First page: 165
      Abstract: Nowadays, with mobile devices and the Internet, social network services (SNS) are common to everyone. Social opinions, as public opinions, are very important to governments, companies, and individuals, so analyzing and deciding the polarity of SNS posts about social happenings, political issues, government policies, or commercial products is critical. Newly coined words and emoticons appear on SNS every day: emoticons are made and sold by individuals or companies, while newly coined words are mostly created and used within various kinds of communities. SNS big data mainly consist of normal text mixed with newly coined words and emoticons, so analyzing both is essential to understanding social and public opinion. Social big data is informal and unstructured, and on social network services many kinds of newly coined words and emoticons are created anonymously and unintentionally by people and companies. In social data analysis, such words and emoticons limit the accuracy of the results: the newly coined words implicitly carry the opinions and trends of people, and people's emotional states are largely expressed through emoticons. Although newly coined words and emoticons are an important part of social opinion analysis, they are typically excluded from emotional dictionaries and social big data analysis. In this research, newly coined words and emoticons are extracted from raw Twitter messages, analyzed, and added to a pre-built dictionary together with their polarity and weight, which are calculated for emotional classification. The proposed emotional classification algorithm sums the weights of the (positive or negative) polarities to obtain the total polarity weight of a social opinion. If the total polarity weight is greater than a pre-fixed threshold, the message is classified as positive; if it is less than the pre-fixed threshold, it is classified as negative; other values mean a neutral opinion. The accuracy of the social big data analysis is improved by quantifying and analyzing emoticons and newly coined words.
      Citation: Future Internet
      PubDate: 2019-07-25
      DOI: 10.3390/fi11080165
      Issue No: Vol. 11, No. 8 (2019)
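The threshold decision rule quoted in the abstract above can be sketched directly; the dictionary entries, weights, and thresholds here are illustrative, not the paper's values:

```python
# Hedged sketch of the polarity decision: sum the weights of known
# words, newly coined words, and emoticons from an extended dictionary,
# then compare the total against two pre-fixed thresholds.

POLARITY = {                # dictionary extended with coined words/emoticons
    "good": 1.0, "bad": -1.0,
    "daebak": 1.5,          # hypothetical newly coined word (positive)
    ":)": 0.5, ":(": -0.5,  # emoticons with polarity weights
}

def classify(tokens, pos_th=0.5, neg_th=-0.5):
    total = sum(POLARITY.get(t, 0.0) for t in tokens)
    if total > pos_th:
        return "positive"
    if total < neg_th:
        return "negative"
    return "neutral"

label = classify("daebak :)".split())   # coined word + emoticon both count
```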
  • Future Internet, Vol. 11, Pages 166: Software Defined Wireless Mesh
           Network Flat Distribution Control Plane

    • Authors: Elzain, Wu
      First page: 166
      Abstract: Wireless Mesh Networks (WMNs) have the potential to offer relatively stable broadband Internet access. Their rapid development and growth attract ISPs seeking to provide users coverage anywhere, anytime. To achieve this goal, the network architecture must be addressed carefully. Software Defined Networking (SDN) proposes a new network architecture for wired and wireless networks, and Software Defined Wireless Networking (SDWN) has great potential to increase efficiency, ease the complexity of control and management, and accelerate the rate of innovation in wireless networking. The SDN controller is the core component of an SDN network: it needs up-to-date reports of network status changes, such as network topology and quality of service (QoS), in order to effectively configure and manage the network it controls. In this paper, we propose a Flat Distributed Software Defined Wireless Mesh Network architecture in which the controller aggregates the entire topology discovery and monitors the QoS properties of extended WMN nodes using the Link Layer Discovery Protocol (LLDP), which is not possible in ordinary multi-hop architectures. The proposed architecture has been implemented on top of the POX controller and the Advanced Message Queuing Protocol (AMQP). Experiments conducted in the Mininet-wifi emulator demonstrate the consistency of the architecture's control plane and two application cases: topology discovery and QoS monitoring. The current results motivate our further study of QoS routing for video streaming over WMNs.
      Citation: Future Internet
      PubDate: 2019-07-25
      DOI: 10.3390/fi11080166
      Issue No: Vol. 11, No. 8 (2019)
  • Future Internet, Vol. 11, Pages 167: A Hybrid Adaptive Transaction
           Injection Protocol and Its Optimization for Verification-Based
           Decentralized System

    • Authors: Saumendra Sengupta, Chen-Fu Chiang, Bruno Andriamanalimanana, Jorge Novillo, Ali Tekeoglu
      First page: 167
      Abstract: Latency is a critical issue that impacts the performance of decentralized systems. Recently, we designed various protocols to regulate the injection rate of unverified transactions into the system to improve system performance; each protocol is designed to address issues related to a particular network traffic syndrome. In this work, we first provide a review of our prior protocols. We then provide a hybrid scheme that combines our transaction-injection protocols, forming an optimal linear combination of the protocols based on the syndromes observed in the network. The goal is to speed up the verification process of systems that rely on a single basic protocol. The underlying basic protocols are Periodic Injection of Transaction via Evaluation Corridor (PITEC), Probabilistic Injection of Transactions (PIT), and Adaptive Semi-synchronous Transaction Injection (ASTI).
      Citation: Future Internet
      PubDate: 2019-07-27
      DOI: 10.3390/fi11080167
      Issue No: Vol. 11, No. 8 (2019)
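The hybrid scheme's "optimal linear combination" can be illustrated with a convex blend of per-protocol injection rates; the three rate functions and the weights below are hypothetical stand-ins for PITEC, PIT, and ASTI, not the paper's models:

```python
# Sketch: each basic protocol proposes an injection rate for the current
# network load, and the hybrid scheme blends them with a convex
# combination whose weights would be chosen per observed syndrome.

def pitec_rate(load):  return 100.0 if load < 0.5 else 40.0   # periodic corridor
def pit_rate(load):    return 120.0 * (1.0 - load)            # probabilistic
def asti_rate(load):   return 80.0                            # semi-synchronous

def hybrid_rate(load, weights):
    """weights: (w_pitec, w_pit, w_asti), non-negative, summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    rates = (pitec_rate(load), pit_rate(load), asti_rate(load))
    return sum(w * r for w, r in zip(weights, rates))

# Under heavy load, lean on the conservative protocols:
rate = hybrid_rate(load=0.8, weights=(0.5, 0.2, 0.3))
```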
  • Future Internet, Vol. 11, Pages 168: Latency-Based Dynamic Controller
           Assignment in Hybrid SDNs: Considering the Impact of Legacy Routers

    • Authors: Jie Yuan, Erxia Li, Chaoqun Kang, Fangyuan Chang, Tingting Yuan, Xiaoyong Li
      First page: 168
      Abstract: Software-defined networking (SDN) is a modern network architecture which separates the network control plane from the data plane. Considering the gradual migration from traditional networks to SDNs, the hybrid SDN, which consists of SDN-enabled devices and legacy devices, is an intermediate state. For wide-area hybrid SDNs, to guarantee control performance such as low latency, multiple SDN controllers usually need to be deployed at different places. How to assign them to switches and partition the network into several control domains is a critical problem. For this problem, the control latency and the packet loss rate of control messages are important metrics, which have been considered in many previous works. However, hybrid SDNs have unique characteristics that can affect the assignment scheme and that have been ignored by previous studies. For example, in hybrid SDNs control messages pass through Legacy Forwarding Devices (LFDs), incurring more queuing latency and packet loss than with SDN-enabled Forwarding Devices (SFDs). In this paper, we propose a dynamic controller assignment scheme for hybrid SDNs called Legacy Based Assignment (LBA). This scheme dynamically delegates to each controller a subset of the SFDs in the hybrid SDN, with the objective of minimizing the average SFD-to-controller latency. Experiments comparing LBA with other schemes show that it performs better in terms of latency and packet loss rate.
      Citation: Future Internet
      PubDate: 2019-07-28
      DOI: 10.3390/fi11080168
      Issue No: Vol. 11, No. 8 (2019)
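A greedy nearest-controller assignment illustrates the latency-based idea; this is not the paper's exact LBA algorithm, and the legacy-hop penalty and all names are assumptions:

```python
# Sketch: delegate each SFD to the controller with the lowest path
# latency, where paths through legacy devices (LFDs) carry an extra
# queuing penalty per hop.

LFD_PENALTY_MS = 5.0   # assumed per-legacy-hop queuing cost

def path_latency(base_ms, legacy_hops):
    return base_ms + LFD_PENALTY_MS * legacy_hops

def assign(sfds, controllers, latency):
    """latency[(sfd, ctrl)] = (base_ms, legacy_hops). Returns {sfd: ctrl}."""
    return {
        s: min(controllers, key=lambda c: path_latency(*latency[(s, c)]))
        for s in sfds
    }

latency = {
    ("s1", "c1"): (2.0, 0), ("s1", "c2"): (1.0, 1),   # c2 closer but via an LFD
    ("s2", "c1"): (8.0, 0), ("s2", "c2"): (3.0, 0),
}
plan = assign(["s1", "s2"], ["c1", "c2"], latency)
```

Note how the legacy penalty flips s1's choice: despite the shorter base path to c2, the LFD on that path makes c1 the better controller.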
  • Future Internet, Vol. 11, Pages 169: An Image Feature-Based Method for
           Parking Lot Occupancy

    • Authors: Tătulea, Călin, Brad, Brâncovean, Greavu
      First page: 169
      Abstract: The main scope of the presented research was the development of an innovative product for the management of city parking lots. Our application supports the implementation of the Smart City concept by using computer vision and communication platforms, which enable the development of new integrated digital services. The use of video cameras can simplify and lower the costs of parking lot monitoring. For parking space detection, an aggregated decision was proposed, employing various metrics computed over a sliding-window interval of frames provided by the camera. A history built over 20 images provides an adaptive background model and accurate detection. The system showed high robustness on two benchmarks, achieving a recognition rate higher than 93%.
      Citation: Future Internet
      PubDate: 2019-08-01
      DOI: 10.3390/fi11080169
      Issue No: Vol. 11, No. 8 (2019)
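The 20-image history described in the abstract can be sketched as a sliding-window background model per parking spot; the intensity measurements, threshold, and class names are illustrative assumptions:

```python
# Sketch: keep the last 20 measurements of a spot (e.g., the mean pixel
# intensity of its region) as an adaptive background, and declare the
# spot occupied when the current frame deviates strongly from the
# window average.

from collections import deque

class SpotMonitor:
    def __init__(self, window=20, threshold=30.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def update(self, intensity):
        """Returns True when the spot looks occupied in this frame."""
        if len(self.history) < self.history.maxlen:
            self.history.append(intensity)   # still building the background
            return False
        background = sum(self.history) / len(self.history)
        occupied = abs(intensity - background) > self.threshold
        if not occupied:
            self.history.append(intensity)   # adapt to lighting changes
        return occupied

m = SpotMonitor()
for _ in range(20):
    m.update(100.0)            # empty-spot frames build the background
occupied = m.update(160.0)     # a car changes the region's appearance
```

Only background-like frames update the history, so gradual lighting drift is absorbed while a parked car keeps triggering the detector.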
  • Future Internet, Vol. 11, Pages 170: Artificial Intelligence
           Implementations on the Blockchain. Use Cases and Future Applications

    • Authors: Konstantinos Sgantzos, Ian Grigg
      First page: 170
      Abstract: An exemplary paradigm of how an AI can be a disruptive technology via the utilization of blockchain comes straight from the world of deep learning. Data scientists have long struggled to maintain the quality of a dataset for machine learning by an AI entity. Datasets can be very expensive to purchase and, since both the proper selection of their elements and the homogeneity of the data contained within matter, constructing and maintaining the integrity of a dataset is difficult. Blockchain, as a highly secure storage medium, presents a technological quantum leap in maintaining data integrity, and its immutability creates a fruitful environment for building high-quality, permanent, and growing datasets for deep learning. The combination of AI and blockchain could impact fields such as the Internet of Things (IoT), identity, financial markets, civil governance, smart cities, small communities, supply chains, and personalized medicine, and thereby deliver benefits to many people.
      Citation: Future Internet
      PubDate: 2019-08-02
      DOI: 10.3390/fi11080170
      Issue No: Vol. 11, No. 8 (2019)
  • Future Internet, Vol. 11, Pages 171: Modeling of Cumulative QoE in
           On-Demand Video Services: Role of Memory Effect and Degree of Interest

    • Authors: Tho Nguyen Duc, Chanh Minh Tran, Phan Xuan Tan, Eiji Kamioka
      First page: 171
      Abstract: The growing demand for video streaming services increasingly motivates the development of reliable and accurate models for the assessment of Quality of Experience (QoE). Human-related factors, which have a significant influence on QoE, play a crucial role in this task. However, the complexity caused by the multiple effects of those factors on human perception poses challenges for contemporary studies. In this paper, we inspect the impact of the human-related factors, namely perceptual factors, the memory effect, and the degree of interest. Based on our investigation, a novel QoE model is proposed that effectively incorporates those factors to reflect the user's cumulative perception. Evaluation results indicate that the proposed model performs excellently in predicting cumulative QoE at any moment within a streaming session.
      Citation: Future Internet
      PubDate: 2019-08-04
      DOI: 10.3390/fi11080171
      Issue No: Vol. 11, No. 8 (2019)
  • Future Internet, Vol. 11, Pages 172: Scheduling for Multi-User Multi-Input
           Multi-Output Wireless Networks with Priorities and Deadlines

    • Authors: Li-on Raviv, Amir Leshem
      First page: 172
      Abstract: The spectral efficiency of wireless networks can be significantly improved by exploiting spatial multiplexing techniques known as multi-user MIMO. These techniques enable the allocation of multiple users to the same time-frequency block, thus reducing the interference between users. There is ample evidence that user groupings can have a significant impact on the performance of spatial multiplexing. The situation is even more complex when the data packets have priorities and deadlines for delivery. Hence, combining packet queue management and beamforming can considerably enhance overall system performance. In this paper, we propose a combination of beamforming and scheduling to improve the overall performance of multi-user MIMO systems in realistic conditions where data packets have both priorities and deadlines beyond which they become obsolete. This method, dubbed Reward Per Second (RPS), combines advanced matrix factorization at the physical layer with recently developed queue management techniques. We demonstrate the merits of this technique compared to other state-of-the-art scheduling methods through simulations.
      Citation: Future Internet
      PubDate: 2019-08-05
      DOI: 10.3390/fi11080172
      Issue No: Vol. 11, No. 8 (2019)
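A toy ratio-based scheduler conveys the flavor of reward-per-second scheduling with priorities and deadlines; this is a sketch only, not the paper's RPS method, which also involves beamforming at the physical layer:

```python
# Sketch: at each slot, transmit the pending packet with the highest
# priority-to-remaining-time ratio; packets whose deadline has passed
# are obsolete and yield no reward, so they are skipped.

def schedule(packets, now):
    """packets: list of dicts with 'priority' and 'deadline' (seconds).
    Returns the chosen packet, or None if everything has expired."""
    alive = [p for p in packets if p["deadline"] > now]
    if not alive:
        return None
    return max(alive, key=lambda p: p["priority"] / (p["deadline"] - now))

queue = [
    {"id": "a", "priority": 3.0, "deadline": 10.0},
    {"id": "b", "priority": 2.0, "deadline": 2.0},   # urgent, lower priority
    {"id": "c", "priority": 5.0, "deadline": 1.0},   # already expired at t=1.5
]
chosen = schedule(queue, now=1.5)   # "b": urgency outweighs raw priority
```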
  • Future Internet, Vol. 11, Pages 173: Mars to Earth Data Downloading: A
           Directory Synchronization Approach

    • Authors: Marco Bertolazzi, Carlo Caini
      First page: 173
      Abstract: This paper aims to present a possible alternative to direct file transfer in “challenged networks”, by using DTNbox, a recent application for peer-to-peer directory synchronization between DTN nodes. This application uses the Bundle Protocol (BP) to tackle long delays and link intermittency typical of challenged networks. The directory synchronization approach proposed in the paper consists of delegating the transmission of bulk data files to DTNbox, instead of modifying source applications to interface with the API of a specific BP implementation, or making use of custom scripts for file transfers. The validity of the proposed approach is investigated in the paper by considering a Mars to Earth interplanetary environment. Experiments are carried out by means of Virtual Machines running ION, the NASA-JPL implementation of DTN protocols. The results show that the directory synchronization approach is a valid alternative to direct transfer in interplanetary scenarios such as that considered in the paper.
      Citation: Future Internet
      PubDate: 2019-08-08
      DOI: 10.3390/fi11080173
      Issue No: Vol. 11, No. 8 (2019)
  • Future Internet, Vol. 11, Pages 174: A Systematic Analysis of Real-World
           Energy Blockchain Initiatives

    • Authors: Peter O’Donovan, Dominic T. J. O’Sullivan
      First page: 174
      Abstract: The application of blockchain technology to the energy sector promises to derive new operating models focused on local generation and sustainable practices, which are driven by peer-to-peer collaboration and community engagement. However, real-world energy blockchains differ from typical blockchain networks insofar as they must interoperate with grid infrastructure, adhere to energy regulations, and embody engineering principles. Naturally, these additional dimensions make real-world energy blockchains highly dependent on the participation of grid operators, engineers, and energy providers. Although much theoretical and proof-of-concept research has been published on energy blockchains, this research aims to establish a lens on real-world projects and implementations that may inform the alignment of academic and industry research agendas. This research classifies 131 real-world energy blockchain initiatives to develop an understanding of how blockchains are being applied to the energy domain, what type of failure rates can be observed from recently reported initiatives, and what level of technical and theoretical details are reported for real-world deployments. The results presented from the systematic analysis highlight that real-world energy blockchains are (a) growing exponentially year-on-year, (b) producing relatively low failure/drop-off rates (~7% since 2015), and (c) demonstrating information sharing protocols that produce content with insufficient technical and theoretical depth.
      Citation: Future Internet
      PubDate: 2019-08-10
      DOI: 10.3390/fi11080174
      Issue No: Vol. 11, No. 8 (2019)
  • Future Internet, Vol. 11, Pages 175: Quality of Experience (QoE)-Aware
           Fast Coding Unit Size Selection for HEVC Intra-prediction

    • Authors: Buddhiprabha Erabadda, Thanuja Mallikarachchi, Chaminda Hewage, Anil Fernando
      First page: 175
      Abstract: The exorbitant increase in the computational complexity of modern video coding standards, such as High Efficiency Video Coding (HEVC), is a compelling challenge for resource-constrained consumer electronic devices. For instance, the brute-force evaluation of all possible combinations of available coding modes and the quadtree-based coding structure in HEVC to determine the optimum set of coding parameters for a given content demands a substantial amount of computational and energy resources. Thus, the resource requirements for real-time operation of HEVC have become a contributing factor in the Quality of Experience (QoE) of the end users of emerging multimedia and future internet applications. In this context, this paper proposes a content-adaptive Coding Unit (CU) size selection algorithm for HEVC intra-prediction. The proposed algorithm builds content-specific weighted Support Vector Machine (SVM) models in real time during the encoding process to provide an early estimate of CU size for a given content, avoiding the brute-force evaluation of all possible coding mode combinations in HEVC. The experimental results demonstrate an average encoding time reduction of 52.38%, with an average Bjøntegaard Delta Bit Rate (BDBR) increase of 1.19% compared to the HM16.1 reference encoder. Furthermore, perceptual visual quality assessments conducted with the Video Quality Metric (VQM) show that the proposed algorithm has minimal visual quality impact on the reconstructed videos compared to state-of-the-art approaches.
      Citation: Future Internet
      PubDate: 2019-08-11
      DOI: 10.3390/fi11080175
      Issue No: Vol. 11, No. 8 (2019)
  • Future Internet, Vol. 11, Pages 176: Research on Factors Affecting
           Solvers’ Participation Time in Online Crowdsourcing Contests

    • Authors: Keng Yang
      First page: 176
      Abstract: A crowdsourcing contest is one of the most popular modes of crowdsourcing and an important tool for enterprises to implement open innovation. Solvers' active participation is one of the major reasons for the success of crowdsourcing contests, and research on solvers' participation behavior is helpful in understanding the sustainability and incentives of participation on online crowdsourcing platforms. How to attract more solvers to participate and invest more effort is therefore the focus of researchers. Previous studies mainly used submission quantity to measure solvers' participation behavior and lacked an effective measure of the effort a solver expends. For the first time, we use solvers' participation time as a dependent variable to measure their effort in a crowdsourcing contest, thereby incorporating participation time into research on solver participation. With the collected data, we analyze how participation time is affected by four key factors: task design, task description, task process, and environment. We found that, first, for task design, higher task rewards attract solvers to invest more time in the participation process, and the relationship between participation time and task duration is inverted U-shaped. Second, for task description, the length of the task description has a negative impact on participation time, while a task description attachment positively influences it. Third, for the task process, communication and supplementary explanations during a crowdsourcing process positively affect participation time. Fourth, for environmental factors, the task density of the crowdsourcing platform and the market price of all crowdsourcing contests have negative and positive effects on participation time, respectively.
      Citation: Future Internet
      PubDate: 2019-08-12
      DOI: 10.3390/fi11080176
      Issue No: Vol. 11, No. 8 (2019)
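The inverted U-shaped relationship between participation time and task duration can be made concrete with a quadratic toy model; the coefficients below are made up for illustration and are not estimates from the paper:

```python
# If participation time follows a quadratic in task duration,
# time = b0 + b1*d + b2*d**2 with b2 < 0, the curve is an inverted U
# that peaks at d* = -b1 / (2*b2).

def participation_time(duration, b0=2.0, b1=1.2, b2=-0.04):
    return b0 + b1 * duration + b2 * duration ** 2

def peak_duration(b1=1.2, b2=-0.04):
    return -b1 / (2 * b2)

d_star = peak_duration()               # duration that maximizes effort
t_peak = participation_time(d_star)    # both too-short and too-long tasks
                                       # draw less participation time
```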
  • Future Internet, Vol. 11, Pages 177: RLXSS: Optimizing XSS Detection Model
           to Defend Against Adversarial Attacks Based on Reinforcement Learning

    • Authors: Yong Fang, Cheng Huang, Yijia Xu, Yang Li
      First page: 177
      Abstract: With the development of artificial intelligence, machine learning and deep learning algorithms are widely applied in attack detection models. Adversarial attacks against such models have become an inevitable problem, yet research on defending cross-site scripting (XSS) attack detection models against them is lacking, so it is extremely important to design a method that can effectively improve the detection model's robustness to attack. In this paper, we present a method based on reinforcement learning (called RLXSS), which aims to optimize the XSS detection model to defend against adversarial attacks. First, adversarial samples of the detection model are mined by an adversarial attack model based on reinforcement learning. Secondly, the detection model and the adversarial model are alternately trained: after each round, the newly mined adversarial samples are marked as malicious and used to retrain the detection model. Experimental results show that the proposed RLXSS model can successfully mine adversarial samples that escape black-box and white-box detection while retaining their aggressive features. Moreover, by alternately training the detection model and the adversarial attack model, the escape rate against the detection model is continuously reduced, which indicates that the approach improves the detection model's ability to defend against attacks.
      Citation: Future Internet
      PubDate: 2019-08-14
      DOI: 10.3390/fi11080177
      Issue No: Vol. 11, No. 8 (2019)
  • Future Internet, Vol. 11, Pages 178: Artificial Intelligence Imagery
           Analysis Fostering Big Data Analytics

    • Authors: Stefan Cremer, Claudia Loebbecke
      First page: 178
      Abstract: In an era of accelerating digitization and advanced big data analytics, harnessing quality data and insights will enable innovative research methods and management approaches. Among others, Artificial Intelligence Imagery Analysis has recently emerged as a new method for analyzing the content of large amounts of pictorial data. In this paper, we provide background information on this method and outline its application. We suggest that Artificial Intelligence Imagery Analysis constitutes a profound improvement over previous methods, which have mostly relied on manual work by humans. We discuss its applications for research and practice and provide an example of its use for research: in a case study, we employed Artificial Intelligence Imagery Analysis to decompose and assess thumbnail images in the context of marketing and media research, and show how properly assessed and designed thumbnail images promote the consumption of online videos. We conclude the paper with a discussion of the potential of Artificial Intelligence Imagery Analysis for research and practice across disciplines.
      Citation: Future Internet
      PubDate: 2019-08-15
      DOI: 10.3390/fi11080178
      Issue No: Vol. 11, No. 8 (2019)
  • Future Internet, Vol. 11, Pages 179: Impact of Modern Virtualization
           Methods on Timing Precision and Performance of High-Speed Applications

    • Authors: Veronika Kirova, Kirill Karpov, Eduard Siemens, Irina Zander, Oksana Vasylenko, Dmitry Kachan, Sergii Maksymov
      First page: 179
      Abstract: The presented work is the result of extended research and analysis of the precision of timing methods, their efficiency in different virtual environments, and the impact of timing precision on the performance of high-speed network applications. We investigated how timer hardware is shared among heavily CPU- and I/O-bound tasks on a virtualized OS as well as on a bare OS. By replacing the timing methods invoked within a well-known application for estimating available path bandwidth, we analyze their impact on estimation accuracy. We show that timer overhead and precision are crucial for high-performance network applications, and that the use of low-precision timing methods, e.g., due to the delays and overheads introduced by virtualization, degrades application performance in the virtual environment. Furthermore, in this paper, we provide confirmation that, by using the methods we developed for both precise timing operations and available-bandwidth (AvB) estimation, it is possible to overcome the inefficiency of standard time-related operations and the overhead that comes with virtualization. The impact of negative virtualization factors was investigated in five different environments to determine the most suitable virtual environment for high-speed network applications.
      Citation: Future Internet
      PubDate: 2019-08-16
      DOI: 10.3390/fi11080179
      Issue No: Vol. 11, No. 8 (2019)
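The timer-overhead measurements at the core of the study can be reproduced in miniature with a portable experiment; the sampling approach here is a generic sketch, not the authors' tooling:

```python
# Estimate the overhead of a timing call by invoking it back-to-back
# many times and taking the minimum non-zero gap observed; the minimum
# approximates the cost of the call itself plus clock granularity.

import time

def timer_overhead(samples=100_000):
    gaps = []
    prev = time.perf_counter()
    for _ in range(samples):
        now = time.perf_counter()
        if now > prev:
            gaps.append(now - prev)
        prev = now
    return min(gaps) if gaps else 0.0

overhead = timer_overhead()
```

Running the same probe inside a virtual machine versus on bare metal exposes exactly the kind of overhead gap the paper investigates.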
  • Future Internet, Vol. 11, Pages 180: Combined Self-Attention Mechanism for
           Chinese Named Entity Recognition in Military

    • Authors: Fei Liao, Liangli Ma, Jingjing Pei, Linshan Tan
      First page: 180
      Abstract: Military named entity recognition (MNER) is one of the key technologies in military information extraction. Traditional methods for the MNER task rely on cumbersome feature engineering and specialized domain knowledge. To solve this problem, we propose a method employing a bidirectional long short-term memory (BiLSTM) neural network with a self-attention mechanism to identify military entities automatically. We obtain distributed vector representations of the military corpus by unsupervised learning, and the BiLSTM model combined with the self-attention mechanism is adopted to fully capture the contextual information carried by the character vector sequence. The experimental results show that the self-attention mechanism can effectively improve the performance of the MNER task: the F-scores on military documents and online military texts were 90.15% and 89.34%, respectively, better than those of other models.
      Citation: Future Internet
      PubDate: 2019-08-18
      DOI: 10.3390/fi11080180
      Issue No: Vol. 11, No. 8 (2019)
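The attention step described in the abstract above can be sketched as a toy example. This is an illustrative plain dot-product self-attention over character vectors, with no learned query/key/value projections, and is not the paper's actual model:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(seq):
    """Toy self-attention: each character vector attends to every position
    via scaled dot-product scores; queries, keys and values are the raw
    input vectors themselves."""
    d = len(seq[0])
    out = []
    for q in seq:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in seq]
        w = softmax(scores)  # attention weights over all positions
        # Output is the weight-averaged (convex) combination of the inputs.
        out.append([sum(wj * vj[i] for wj, vj in zip(w, seq)) for i in range(d)])
    return out

# Three 2-dimensional "character vectors" (made-up values).
seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
ctx = self_attention(seq)
```

Each output row is a convex combination of the input vectors, so contextual information from the whole sequence flows into every position.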
  • Future Internet, Vol. 11, Pages 181: A Novel Task Caching and Migration
           Strategy in Multi-Access Edge Computing Based on the Genetic Algorithm

    • Authors: Lujie Tang, Bing Tang, Linyao Kang, Li Zhang
      First page: 181
      Abstract: Multi-access edge computing (MEC) brings high-bandwidth and low-latency access to applications distributed at the edge of the network. Data transmission and exchange become faster, and the overhead of task migration between mobile devices and the edge cloud becomes smaller. In this paper, we adopt a fine-grained task migration model. At the same time, in order to further reduce the delay and energy consumption of task execution, the concept of the task cache is proposed, which involves caching completed tasks and related data on the edge cloud. We then consider the limitations of the edge cloud cache capacity and study the task caching strategy and fine-grained task migration strategy on the edge cloud using the genetic algorithm (GA). Thus, we obtain the optimal mobile device task migration strategy, satisfying minimum energy consumption, and the optimal cache on the edge cloud. The simulation results show that the task caching strategy based on fine-grained migration can greatly reduce the energy consumption of mobile devices in the MEC environment.
      Citation: Future Internet
      PubDate: 2019-08-20
      DOI: 10.3390/fi11080181
      Issue No: Vol. 11, No. 8 (2019)
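The capacity-constrained cache selection that the abstract above optimizes with a genetic algorithm can be sketched roughly as follows. The task sizes, energy gains, and GA parameters here are invented for illustration and are not from the paper:

```python
import random

random.seed(7)

# Hypothetical tasks: (cache size units, energy saved if cached).
tasks = [(3, 8), (4, 10), (2, 5), (5, 13), (1, 3)]
CAPACITY = 8  # illustrative edge-cloud cache capacity

def fitness(bits):
    # Total energy saved by the cached subset; infeasible cachings score zero.
    size = sum(s for b, (s, _) in zip(bits, tasks) if b)
    gain = sum(g for b, (_, g) in zip(bits, tasks) if b)
    return gain if size <= CAPACITY else 0

def evolve(pop_size=20, generations=60, pm=0.1):
    # Chromosome: one bit per task (1 = cache it on the edge cloud).
    pop = [[random.randint(0, 1) for _ in tasks] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(tasks))  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < pm else g for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

The real problem in the paper jointly optimizes caching and fine-grained migration; this sketch shows only the GA mechanics on the caching side.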
  • Future Internet, Vol. 11, Pages 141: A Dynamic Application-Partitioning
           Algorithm with Improved Offloading Mechanism for Fog Cloud Networks

    • Authors: Abro, Deng, Memon, Laghari, Mohammadani, Ain
      First page: 141
      Abstract: This paper proposes a new fog cloud architecture that performs joint energy-efficient task assignment (JEETA). The proposed JEETA architecture utilizes the dynamic application-partitioning algorithm (DAPTS), a novel algorithm that efficiently decides whether or not a task should be offloaded and switches it accordingly in heterogeneous environments with minimal energy consumption. The proposed scheme outperforms baseline approaches such as MAUI, ThinkAir and CloneCloud in many performance aspects. Results show that, for the execution of 1000 tasks on fog or mobile offloaded nodes, JEETA consumes the least energy, i.e., 23% of the total, whereas the baseline approaches consume between 50% and 100% of the total energy. Results are validated via real test-bed experiments and trace-driven simulations.
      Citation: Future Internet
      PubDate: 2019-06-28
      DOI: 10.3390/fi11070141
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 142: Service Level Agreement Violations in
           Cloud Storage: Insurance and Compensation Sustainability

    • Authors: Loretta Mastroeni, Alessandro Mazzoccoli, Maurizio Naldi
      First page: 142
      Abstract: Service Level Agreements are employed to set availability commitments in cloud services. When a violation occurs, as in the case of an outage, cloud providers may be called upon to compensate customers for the losses incurred. Such compensation may be so large as to erode cloud providers’ profit margins. Insurance may be used to protect cloud providers against such a danger. In this paper, closed formulas are provided through the expected utility paradigm to set the insurance premium under different outage models and QoS metrics (number of outages, number of long outages, and unavailability). When the cloud service is paid through a fixed fee, we also provide the maximum unit compensation that a cloud provider can offer so as to meet constraints on its profit loss. The unit compensation is shown to vary approximately as the inverse square of the service fee.
      Citation: Future Internet
      PubDate: 2019-06-30
      DOI: 10.3390/fi11070142
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 143: Maintaining the Sense of Agency in
           Semi-Autonomous Robot Conferencing

    • Authors: Kazuaki Tanaka, Kota Takenouchi, Kohei Ogawa, Yuichiro Yoshikawa, Shuichi Nishio, Hiroshi Ishiguro
      First page: 143
      Abstract: In semi-autonomous robot conferencing, not only the operator controls the robot, but the robot itself also moves autonomously. Thus, it can modify the operator’s movement (e.g., adding social behaviors). However, the sense of agency, that is, the degree of feeling that the movement of the robot is the operator’s own movement, would decrease if the operator is conscious of the discrepancy between the teleoperation and autonomous behavior. In this study, we developed an interface to control the robot head by using an eye tracker. When the robot autonomously moves its eye-gaze position, the interface guides the operator’s eye movement towards this autonomous movement. The experiment showed that our interface can maintain the sense of agency, because it provided the illusion that the autonomous behavior of a robot is directed by the operator’s eye movement. This study reports the conditions of how to provide this illusion in semi-autonomous robot conferencing.
      Citation: Future Internet
      PubDate: 2019-07-03
      DOI: 10.3390/fi11070143
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 144: Automatic Addition of Fault-Tolerance
           in Presence of Unchangeable Environment Actions †

    • Authors: Mohammad Roohitavaf, Sandeep Kulkarni
      First page: 144
      Abstract: We focus on the problem of adding fault-tolerance to an existing concurrent protocol in the presence of unchangeable environment actions. Such unchangeable actions occur in cases where a subset of components/processes cannot be modified because they represent third-party components or are constrained by physical laws. These actions differ from faults in that they are (1) simultaneously collaborative and disruptive, (2) essential for satisfying the specification and (3) possibly non-terminating. Hence, if these actions are modeled as faults while adding fault-tolerance, existing model repair algorithms declare failure to add fault-tolerance. We present a set of algorithms for adding stabilization and fault-tolerance to programs that run in the presence of environment actions. We prove the soundness and completeness of our algorithms and establish their complexity. We have implemented all of our algorithms using symbolic techniques in Java. Experimental results of our algorithms on various examples are also provided.
      Citation: Future Internet
      PubDate: 2019-07-04
      DOI: 10.3390/fi11070144
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 145: A Sidecar Object for the Optimized
           Communication Between Edge and Cloud in Internet of Things Applications

    • Authors: Stefano Busanelli, Simone Cirani, Lorenzo Melegari, Marco Picone, Mirco Rosa, Luca Veltri
      First page: 145
      Abstract: The internet of things (IoT) is one of the most disruptive revolutions characterizing the technology ecosystem. In the near future, the IoT will have a significant impact on people’s lives and on the design and development of new paradigms and architectures coping with a completely new set of challenges and service categories. The IoT can be described as an ecosystem where a massive number of constrained devices (denoted as smart objects) are deployed and connected to cooperate for multiple purposes, such as data collection, actuation, and interaction with people. In order to meet specific requirements, IoT services may be deployed leveraging a hybrid architecture that involves services deployed on the edge and in the cloud. In this context, one of the challenges is to create an infrastructure of objects and microservices operating between the edge and the cloud that can be easily updated and extended with new features and functionalities without the need to update or re-deploy smart objects. This work introduces a new concept for extending smart objects’ support for cloud services, denoted as a sidecar object. A sidecar object serves the purpose of being deployed as an additional component of a preexisting object without interfering with the mechanisms and behaviors that have already been implemented. In particular, the sidecar object implementation developed in this work focuses on communication with existing IoT cloud services (namely, AWS IoT and Google Cloud IoT) to provide transparent and seamless synchronization of data, states, and commands between the object on the edge and the cloud. The proposed sidecar object implementation has been extensively evaluated through a detailed set of tests, in order to analyze its performance and behavior in real-world scenarios.
      Citation: Future Internet
      PubDate: 2019-07-05
      DOI: 10.3390/fi11070145
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 146: Marine Internet for Internetworking
           in Oceans: A Tutorial

    • Authors: Shengming Jiang
      First page: 146
      Abstract: Ever-increasing human activities in oceans require handy, reliable and cost-effective high-speed network access, similar to terrestrial Internet services, to be available on and under water. However, the terrestrial Internet cannot be extended to oceans seamlessly due to huge differences between terrestrial and oceanic environments, while satellite services are still very expensive, especially for ordinary users, with communication quality susceptible to weather conditions, and cannot cover underwater networks either. The marine Internet, proposed about six years ago, is the earliest scheme that tries to systematically address the internetworking issue in oceans, and it is still in its infancy. This tutorial aims to introduce the principle, architecture and applications of the marine Internet, along with a discussion of oceanic communication environments, communication systems currently available or under development in oceans, and challenging issues that require further study to foster the development of the marine Internet.
      Citation: Future Internet
      PubDate: 2019-07-05
      DOI: 10.3390/fi11070146
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 147: Web Browser Network Based on a BA
           Model for a Web-Based Virtual World

    • Authors: Masaki Kohana, Shinji Sakamoto, Shusuke Okamoto
      First page: 147
      Abstract: Real-time web applications such as virtual worlds require considerable computing resources. However, as the number of servers increases, so do the maintenance and financial costs. To share tasks among web browsers, the browsers must share data; therefore, a network must be constructed among the web browsers. In this paper, we propose the construction of a web browser network based on the Barabasi–Albert model (BA model). We focus on a web-based multiplayer online game that requires frequent communication and significant computing resources, and we attempt to optimize computing resource utilization across web browsers. We improve upon the method in our previous study, which constructed a network for a web-based virtual world using only location information. When a new user logged into the world, the web browser connected to two other browsers whose users’ locations were close to that of the new user. The experimental results of that method showed 50% data coverage, which was insufficient to display the game screen, because the web browser must display the characters in the virtual world. In this study, we use the BA model to construct more efficient networks than those of the previous study and thereby increase data coverage. Our new method uses the number of connections of each web browser together with location information to calculate the probability of web browser selection. The experimental results show that the data coverage exceeds 90%, a significant improvement over the previous method.
      Citation: Future Internet
      PubDate: 2019-07-05
      DOI: 10.3390/fi11070147
      Issue No: Vol. 11, No. 7 (2019)
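The BA-model growth rule the abstract above relies on (each new browser attaches to existing browsers with probability proportional to their degree) can be sketched minimally as follows. The seed network and parameters are illustrative, and the paper's method additionally weights selection by location information, which is omitted here:

```python
import random

random.seed(1)

def ba_network(n, m=2):
    """Grow a Barabasi-Albert-style network: each newly joining browser
    connects to m existing browsers chosen with probability proportional
    to their current degree (preferential attachment)."""
    adj = {0: {1}, 1: {0}}   # seed network: two connected browsers
    pool = [0, 1]            # each node appears once per incident edge
    for new in range(2, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(pool))  # degree-proportional choice
        adj[new] = set()
        for t in chosen:
            adj[new].add(t)
            adj[t].add(new)
            pool += [new, t]  # both endpoints gain one edge slot
    return adj

net = ba_network(50)
```

Sampling uniformly from the edge-slot pool is the standard trick that makes the pick degree-proportional without computing degrees explicitly.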
  • Future Internet, Vol. 11, Pages 148: The Effects of the Floating Action
           Button on Quality of Experience

    • Authors: Jesenka Pibernik, Jurica Dolic, Hrvoje Abraham Milicevic, Bojan Kanizaj
      First page: 148
      Abstract: Google’s Material Design, created in 2014, led to the extended application of floating action buttons (FAB) in the user interfaces of web pages and mobile applications. A FAB’s role is either to trigger an action on the current screen or to perform an action that opens a new screen. Some specialists in user experience (UX) and user interface (UI) design are sceptical about the usability of FABs in the interfaces of both web pages and mobile applications. They claim that FABs easily distract users, interfere with the use of other important functions of an application, and are unusable in applications designed for iOS. The aim of this paper is to investigate experimentally the quality of experience (QoE) of a static and an animated FAB and to compare it with a toolbar alternative. The experimental results of different testing methods rejected the hypothesis that the use and animation of this UI element positively influence application usability. However, its static and animated utilization enhanced the ratings of hedonic and aesthetic features of the user experience, justifying the use of this type of button.
      Citation: Future Internet
      PubDate: 2019-07-06
      DOI: 10.3390/fi11070148
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 149: An Image Authentication Scheme Using
           Merkle Tree Mechanisms

    • Authors: Yi-Cheng Chen, Yueh-Peng Chou, Yung-Chen Chou
      First page: 149
      Abstract: Research on digital image processing has become quite popular and has advanced rapidly in recent years, and scholars have proposed various image verification mechanisms. Similarly, blockchain technology has become very popular in recent years. This paper proposes a new image verification mechanism based on the Merkle tree technique used in blockchains. The Merkle tree root in the blockchain mechanism provides a reliable environment for the storage of image features. In image verification, each image can be verified through the Merkle tree mechanism by obtaining the hash values of the Merkle tree nodes on its path. In addition, the method combines the Inter-Planetary File System (IPFS) to improve the availability of images. The main purpose of this paper is to achieve the goal of image integrity verification. The proposed method can not only verify the integrity of an image but also restore tampered areas in the case of image tampering. Since the proposed method employs the blockchain mechanism, the image verification mechanism does not need third-party resources; verification is performed by each node in the blockchain network. The experimental results demonstrate that the proposed method successfully achieves the goals of image authentication and tampered-area restoration.
      Citation: Future Internet
      PubDate: 2019-07-06
      DOI: 10.3390/fi11070149
      Issue No: Vol. 11, No. 7 (2019)
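The Merkle tree mechanics underlying the verification scheme in the abstract above (root construction, a per-leaf proof path of sibling hashes, and verification against the root) can be illustrated as follows. SHA-256 and the block contents are assumptions for the sketch, not details from the paper:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd-sized levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes on the path from leaf `index` to the root."""
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))  # (sibling hash, sibling-on-left?)
        index //= 2
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sib, left in proof:
        node = h(sib + node) if left else h(node + sib)
    return node == root

# Hypothetical image feature blocks standing in for stored image data.
blocks = [b"block-%d" % i for i in range(4)]
root = merkle_root(blocks)
```

A proof contains only O(log n) hashes, which is why each blockchain node can verify an image against the stored root cheaply.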
  • Future Internet, Vol. 11, Pages 150: Town-Watching Workshop Using Disaster
           Information Tweeting and Mapping System

    • Authors: Keisuke Utsu, Shun Ueta, Sachi Tajima, Yoshitaka Kajita, Yuji Murakami, Osamu Uchida
      First page: 150
      Abstract: Self-help and mutual help among citizens are as important as social help from local governments for disaster prevention and mitigation. To this end, town-watching and disaster prevention map-making workshops are held to review towns and promote self-help and mutual help among citizens. Meanwhile, the use of social media for information sharing during and after disasters has been gaining attention. To facilitate information sharing in disasters, we developed a web system, the Disaster Information Tweeting and Mapping System (DITS/DIMS). Against this background, we organized a town-watching workshop using DITS/DIMS in October 2018 in Minami Ward, Sapporo City, Hokkaido, Japan, an area affected by the Hokkaido Eastern Iburi Earthquake of September 2018. In this paper, we describe the workshop procedure, its outcomes, the questionnaire survey results, and the post-workshop meeting. The questionnaire survey results show that the workshop educated participants about posting useful information on social media during a disaster. In addition, at the post-workshop meeting, the participants recognized that before the earthquake they had reviewed the town only from the perspective of “daily life” convenience and had not evaluated it from an “emergency” viewpoint. The workshop was therefore a meaningful opportunity for the participants to review their town in terms of disaster prevention and mitigation.
      Citation: Future Internet
      PubDate: 2019-07-07
      DOI: 10.3390/fi11070150
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 151: Structuring Reference Architectures
           for the Industrial Internet of Things

    • Authors: Sebastian R. Bader, Maria Maleshkova, Steffen Lohmann
      First page: 151
      Abstract: The ongoing digital transformation has the potential to revolutionize nearly all industrial manufacturing processes. However, its concrete requirements and implications are still not sufficiently investigated. In order to establish a common understanding, a multitude of initiatives have published guidelines, reference frameworks and specifications, all intending to promote their particular interpretation of the Industrial Internet of Things (IIoT). As a result of the inconsistent use of terminology, heterogeneous structures and proposed processes, an opaque landscape has been created. The consequence is that both new users and experienced experts can hardly manage to get an overview of the amount of information and publications, and make decisions on what is best to use and to adopt. This work contributes to the state of the art by providing a structured analysis of existing reference frameworks, their classifications and the concerns they target. We supply alignments of shared concepts, identify gaps and give a structured mapping of regarded concerns at each part of the respective reference architectures. Furthermore, the linking of relevant industry standards and technologies to the architectures allows a more effective search for specifications and guidelines and supports the direct technology adoption.
      Citation: Future Internet
      PubDate: 2019-07-08
      DOI: 10.3390/fi11070151
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 152: A Next-Generation Core Network
           Architecture for Mobile Networks

    • Authors: Andrea G. Forte, Wei Wang, Luca Veltri, Gianluigi Ferrari
      First page: 152
      Abstract: Over the years, the cellular mobile network has evolved from a wireless plain telephone system to a very complex system providing telephone service, Internet connectivity and many interworking capabilities with other networks. Its air interface performance has increased drastically over time, leading to high throughput and low latency. Changes to the core network, however, have been slow and incremental, with increased complexity worsened by the necessity of backwards-compatibility with older-generation systems such as the Global System for Mobile communication (GSM). In this paper, a new virtualized Peer-to-Peer (P2P) core network architecture is presented. The key idea of our approach is that each user is assigned a private virtualized copy of the whole core network. This enables a higher degree of security and novel services that are not possible in today’s architecture. We describe the new architecture, focusing on its main elements, IP addressing, message flows, mobility management, and scalability. Furthermore, we will show some significant advantages this new architecture introduces. Finally, we investigate the performance of our architecture by analyzing voice-call traffic available in a database of a large U.S. cellular network provider.
      Citation: Future Internet
      PubDate: 2019-07-09
      DOI: 10.3390/fi11070152
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 153: Architecting Software for the
           Internet of Thing Based Systems

    • Authors: Abdulrahman Alreshidi, Aakash Ahmad
      First page: 153
      Abstract: Context—Internet of Things (IoT) based systems support any-time/place computations by interconnecting humans, systems, services, devices, and things, enabling autonomous systems to shape up digitized societies. Software architecture, as the blueprint of software-intensive systems, abstracts the complexities of the modeling, design, development, and evolution phases of software to engineer complex IoT-driven systems effectively and efficiently. Objectives and Method—Research and development efforts are required to exploit architectural principles and practices to design and develop IoT systems and to go beyond the state of the art for the IoT. The objectives of this research are to empirically investigate and systematically classify the state of the art on architecting IoT-based software. We used the Evidence-Based Software Engineering (EBSE) method to conduct a mapping study of existing IoT solutions by investigating 88 qualitatively selected studies. Results and Implications—The results of the mapping study highlight various research themes that exploit software architecture models to develop IoT systems. The identified research themes include, but are not limited to, cloud-based software ecosystems, software-defined networking, and autonomous, adaptive software and agent-based systems driven by the IoT. The mapping study suggests that future research on architecting IoT software should focus on architectural languages and patterns that support reusability, automation, and human decision support to develop and dynamically adapt IoT software. The mapping study represents a consolidated body of knowledge regarding architectural principles and practices to facilitate knowledge transfer—benefiting researchers and practitioners—on the role of software architecture in IoT systems.
      Citation: Future Internet
      PubDate: 2019-07-10
      DOI: 10.3390/fi11070153
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 154: Signatures of the Crypto-Currency
           Market Decoupling from the Forex

    • Authors: Stanisław Drożdż, Ludovico Minati, Paweł Oświȩcimka, Marek Stanuszek, Marcin Wa̧torek
      First page: 154
      Abstract: Based on the high-frequency recordings from Kraken, a cryptocurrency exchange and professional trading platform that aims to bring Bitcoin and other cryptocurrencies into the mainstream, the multiscale cross-correlations involving the Bitcoin (BTC), Ethereum (ETH), Euro (EUR) and US dollar (USD) are studied over the period between 1 July 2016 and 31 December 2018. It is shown that the multiscaling characteristics of the exchange rate fluctuations related to the cryptocurrency market approach those of the Forex. This, in particular, applies to the BTC/ETH exchange rate, whose Hurst exponent by the end of 2018 started approaching the value of 0.5, which is characteristic of the mature world markets. Furthermore, the BTC/ETH direct exchange rate has already developed multifractality, which manifests itself via broad singularity spectra. A particularly significant result is that the measures applied for detecting cross-correlations between the dynamics of the BTC/ETH and EUR/USD exchange rates do not show any noticeable relationships. This could be taken as an indication that the cryptocurrency market has begun decoupling itself from the Forex.
      Citation: Future Internet
      PubDate: 2019-07-10
      DOI: 10.3390/fi11070154
      Issue No: Vol. 11, No. 7 (2019)
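A rough sketch of estimating a Hurst exponent, as referenced in the abstract above, using the classical rescaled-range (R/S) method: for uncorrelated return series the estimate approaches 0.5, the value the abstract associates with mature markets. The input series and window sizes are illustrative, and the paper itself uses multifractal analysis rather than this simple estimator:

```python
import math
import random

random.seed(42)

def rescaled_range(series):
    # R/S statistic of one window: range of the cumulative deviations
    # from the mean, divided by the window's standard deviation.
    n = len(series)
    mean = sum(series) / n
    devs = [x - mean for x in series]
    z, cum = [], 0.0
    for d in devs:
        cum += d
        z.append(cum)
    r = max(z) - min(z)
    s = math.sqrt(sum(d * d for d in devs) / n)
    return r / s if s else 0.0

def hurst(series, window_sizes=(16, 32, 64, 128, 256)):
    """Estimate H as the slope of log(mean R/S) versus log(window size)."""
    xs, ys = [], []
    for w in window_sizes:
        rs = [rescaled_range(series[i:i + w])
              for i in range(0, len(series) - w + 1, w)]
        rs = [v for v in rs if v > 0]
        xs.append(math.log(w))
        ys.append(math.log(sum(rs) / len(rs)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic uncorrelated returns standing in for exchange-rate fluctuations.
returns = [random.gauss(0, 1) for _ in range(2048)]
H = hurst(returns)
```

For persistent (trending) series the slope exceeds 0.5, and for anti-persistent series it falls below; small-sample R/S estimates are known to be biased slightly upward.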
  • Future Internet, Vol. 11, Pages 155: Stacking-Based Ensemble Learning of
           Self-Media Data for Marketing Intention Detection

    • Authors: Yufeng Wang, Shuangrong Liu, Songqian Li, Jidong Duan, Zhihao Hou, Jia Yu, Kun Ma
      First page: 155
      Abstract: Social network services for self-media, such as Weibo, Blog, and WeChat Public, constitute a powerful medium that allows users to publish posts every day. Due to insufficient information transparency, malicious Internet marketing via self-media posts imposes potential harm on society. Therefore, it is necessary to identify posts with marketing intent. We follow the idea of text classification to identify marketing intent. Although some methods exist for intention detection, the challenges are how text feature extraction can reflect semantic information and how to improve the time and space complexity of the recognition model. To this end, this paper proposes a machine learning method to identify marketing intent from large-scale self-media data. First, the proposed Latent Semantic Analysis (LSI)-Word2vec model reflects semantic features. Second, the decision tree model is simplified by pruning to save computing resources and reduce time complexity. Finally, this paper examines the effects of classifier combinations and uses the optimal configuration to help people efficiently identify marketing intent. A detailed experimental evaluation on several metrics shows that our approach is effective and efficient: the F1 value increases by about 5% and the running time improves by about 20%, demonstrating that the newly proposed method can effectively improve the accuracy of marketing news recognition.
      Citation: Future Internet
      PubDate: 2019-07-10
      DOI: 10.3390/fi11070155
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 156: A Game Theoretic Interference
           Management Scheme in Full Duplex Cellular Systems under Infeasible QoS

    • Authors: Ali Y. Al-Zahrani
      First page: 156
      Abstract: Several emerging mobile applications and services (e.g., autonomous cars) require higher wireless throughput than ever before. This demand stresses the need for investigating novel methods that have the potential to dramatically increase the spectral efficiency (SE) of wireless systems. An evolving approach is the Single-channel full duplex (SCFD) communication where each node may simultaneously receive and transmit over the same frequency channel, and, hence, this could potentially double the current SE figures. In an earlier research work, we derived a model of the signal to interference plus noise ratio (SINR) in an SCFD-based cellular system with imperfect self interference cancellation, and investigated interference management under feasible QoS requirements. In this paper, game theoretic results are exploited to investigate the intercell interference management in SCFD-based cellular networks under infeasible QoS requirements. The investigation starts with a game formulation that captures two different cases. Then, the existence and uniqueness of the Nash equilibrium point are established. After that, a computationally efficient distributed algorithm, which realizes best effort and fair wireless services, is designed. The merit of this scheme is that, when the QoS requirements are feasible, they will be achieved with minimum energy consumption. Results of extensive simulation experiments are presented to show the effectiveness of the proposed schemes.
      Citation: Future Internet
      PubDate: 2019-07-16
      DOI: 10.3390/fi11070156
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 157: A Multi-Attention Network for
           Aspect-Level Sentiment Analysis

    • Authors: Qiuyue Zhang, Ran Lu
      First page: 157
      Abstract: Aspect-level sentiment analysis (ASA) aims at determining the sentiment polarity of a specific aspect term within a given sentence. Recent advances in attention mechanisms suggest that attention models are useful in ASA tasks and can help identify focus words; combining attention mechanisms with neural networks is also a common approach. However, according to the latest research, such models often fail to extract text representations efficiently and to achieve interaction between aspect terms and contexts. To address the complete ASA task, this paper proposes a Multi-Attention Network (MAN) model which adopts several attention networks. This model not only preprocesses data with Bidirectional Encoder Representations from Transformers (BERT) but also takes a number of further measures. First, the MAN model utilizes a partial Transformer after transformation to obtain hidden sequence information. Second, because words in different locations have different effects on aspect terms, we introduce location encoding to analyze the impact of distance on ASA tasks; we then obtain the influence of different words on aspect terms through a bidirectional attention network. The experimental results on three datasets show that the proposed model achieves consistently superior results.
      Citation: Future Internet
      PubDate: 2019-07-16
      DOI: 10.3390/fi11070157
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 158: Distributed Uniform Streaming
           Framework: An Elastic Fog Computing Platform for Event Stream Processing
           and Platform Transparency

    • Authors: Simon Vanneste, Jens de Hoog, Thomas Huybrechts, Stig Bosmans, Reinout Eyckerman, Muddsair Sharif, Siegfried Mercelis, Peter Hellinckx
      First page: 158
      Abstract: The increase in Internet of Things devices and the rise of more computationally intense applications present challenges for future Internet of Things architectures. We envision a future in which edge, fog, and cloud devices work together to execute future applications. Because an entire application cannot run on smaller edge or fog devices, it must be split into smaller application components. These application components send event messages to each other to form a single application from multiple components, and the execution location of each component can be optimized to minimize resource consumption. In this paper, we describe the Distributed Uniform Stream (DUST) framework, which creates an abstraction between the application components and the middleware that is required to make the execution location transparent to the application component. We describe a real-world application that uses the DUST framework for platform transparency. Alongside the DUST framework, we also describe the distributed DUST Coordinator, which optimizes resource consumption by moving application components to different execution locations. The coordinators use an adapted version of the Contract Net Protocol to find local minima in resource consumption.
      Citation: Future Internet
      PubDate: 2019-07-19
      DOI: 10.3390/fi11070158
      Issue No: Vol. 11, No. 7 (2019)
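The announce-bid-award cycle of the Contract Net Protocol that the DUST coordinators adapt (per the abstract above) can be sketched as follows. The node names, load figures, and bid weighting are hypothetical and only illustrate the negotiation pattern, not the framework's actual cost model:

```python
class Node:
    """A candidate execution location (edge, fog, or cloud device)."""

    def __init__(self, name, cpu_load, mem_load):
        self.name, self.cpu_load, self.mem_load = name, cpu_load, mem_load

    def bid(self, component):
        # A node's bid: projected resource consumption after hosting the
        # component (lower is better); the 0.7/0.3 weights are illustrative.
        return (0.7 * (self.cpu_load + component["cpu"])
                + 0.3 * (self.mem_load + component["mem"]))

def award(nodes, component):
    """Announce the component to all nodes, collect bids, award to cheapest."""
    bids = {node.name: node.bid(component) for node in nodes}
    winner = min(bids, key=bids.get)
    return winner, bids

nodes = [Node("edge-1", 0.8, 0.4), Node("fog-1", 0.3, 0.5), Node("cloud-1", 0.2, 0.9)]
winner, bids = award(nodes, {"cpu": 0.2, "mem": 0.1})  # winner: "fog-1"
```

Running this cycle repeatedly, with each coordinator announcing components it would like to shed, is how such a scheme converges to local minima in resource consumption.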
  • Future Internet, Vol. 11, Pages 159: Implementation and Evaluation of
           Activity-Based Congestion Management Using P4 (P4-ABC)

    • Authors: Michael Menth, Habib Mostafaei, Daniel Merling, Marco Häberle
      First page: 159
      Abstract: Activity-Based Congestion management (ABC) is a novel domain-based QoS mechanism providing more fairness among customers on bottleneck links. It avoids per-flow or per-customer states in the core network and is suitable for application in future 5G networks. However, ABC cannot be configured on standard devices. P4 is a novel programmable data plane specification which allows defining new headers and forwarding behavior. In this work, we implement an ABC prototype using P4 and point out challenges experienced during implementation. Experimental validation of ABC using the P4-based prototype reveals the desired fairness results.
      Citation: Future Internet
      PubDate: 2019-07-19
      DOI: 10.3390/fi11070159
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 160: Crossing the Borders: Re-Use of Smart
           Learning Objects in Advanced Content Access Systems

    • Authors: Hamza Manzoor, Kamil Akhuseyinoglu, Jackson Wonderly, Peter Brusilovsky, Clifford A. Shaffer
      First page: 160
      Abstract: Researchers in many disciplines are developing novel interactive smart learning objects such as exercises and visualizations. Meanwhile, Learning Management Systems (LMS) and eTextbook systems are also becoming more sophisticated in their ability to use standard protocols to make use of third-party smart learning objects. But at this time, educational tool developers do not always make the best use of these interoperability standards and need exemplars to guide and motivate their development efforts. In this paper, we present a case study in which two large educational ecosystems use the Learning Tools Interoperability (LTI) standard to allow cross-sharing of their educational materials. At the end of our development process, Virginia Tech’s OpenDSA eTextbook system was able to import materials from Aalto University’s ACOS smart learning content server, such as Python programming exercises and Parsons problems. Meanwhile, the University of Pittsburgh’s Mastery Grids (which already uses the ACOS exercises) was made to support CodeWorkout programming exercises (a system already used within OpenDSA). Thus, four major projects in CS Education became interoperable.
      Citation: Future Internet
      PubDate: 2019-07-19
      DOI: 10.3390/fi11070160
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 161: Leveraging the Internet of Things and
           Blockchain Technology in Supply Chain Management

    • Authors: Abderahman Rejeb, John G. Keogh, Horst Treiblmaier
      First page: 161
      Abstract: Modern supply chains have evolved into highly complex value networks and turned into a vital source of competitive advantage. However, it has become increasingly challenging to verify the source of raw materials and maintain visibility of products and merchandise while they are moving through the value chain network. The application of the Internet of Things (IoT) can help companies to observe, track, and monitor products, activities, and processes within their respective value chain networks. Other applications of IoT include product monitoring to optimize operations in warehousing, manufacturing, and transportation. In combination with IoT, Blockchain technology can enable a broad range of different application scenarios to enhance value chain transparency and to increase B2B trust. When combined, IoT and Blockchain technology have the potential to increase the effectiveness and efficiency of modern supply chains. The contribution of this paper is twofold. First, we illustrate how the deployment of Blockchain technology in combination with IoT infrastructure can streamline and benefit modern supply chains and enhance value chain networks. Second, we derive six research propositions outlining how Blockchain technology can impact key features of the IoT (i.e., scalability, security, immutability and auditing, information flows, traceability and interoperability, quality) and thus lay the foundation for future research projects.
      Citation: Future Internet
      PubDate: 2019-07-20
      DOI: 10.3390/fi11070161
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 162: Improving Forensic Triage Efficiency
           through Cyber Threat Intelligence

    • Authors: Nikolaos Serketzis, Vasilios Katos, Christos Ilioudis, Dimitrios Baltatzis, Georgios Pangalos
      First page: 162
      Abstract: The growing complexity of information technology and the proliferation of heterogeneous security devices that produce increased volumes of data, coupled with the ever-changing threat landscape, have an adverse impact on the efficiency of information security controls, digital forensics, and incident response approaches. Cyber Threat Intelligence (CTI) and forensic preparedness are the two parts of the so-called managed security services that defenders can employ to repel, mitigate, or investigate security incidents. Despite their success, there is no known effort that has combined these two approaches to enhance Digital Forensic Readiness (DFR) and thus decrease the time and cost of incident response and investigation. This paper builds upon and extends a DFR model that utilises actionable CTI to improve the maturity levels of DFR. The effectiveness and applicability of this model are evaluated through a series of experiments that employ malware-related network data simulating real-world attack scenarios. To this end, the model manages to identify the root causes of information security incidents with high accuracy (90.73%), precision (96.17%) and recall (93.61%), while significantly decreasing the volume of data that digital forensic investigators need to examine. The contribution of this paper is twofold. First, it indicates that CTI can be employed by digital forensics processes. Second, it demonstrates and evaluates an efficient mechanism that enhances operational DFR.
      Citation: Future Internet
      PubDate: 2019-07-23
      DOI: 10.3390/fi11070162
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 163: Platform Economy and
           Techno-Regulation—Experimenting with Reputation and Nudge

    • Authors: Nicola Lettieri, Alfonso Guarino, Delfina Malandrino, Rocco Zaccagnino
      First page: 163
      Abstract: In the cloud-based society, where the vast majority of social, economic and personal interactions is mediated by information communication technology (ICT), technology is no longer simply a subject of regulation but is becoming an integral part of the regulatory process. Techno-regulation, the “intentional influencing of individuals’ behavior by building norms into technological devices,” is inspiring new ways to support legal safeguards through hardware and software tools, technical solutions allowing the creation of legal relations, hampering breaches of law and even promoting norm compliance. This paper touches on these issues by focusing on Digital Labor Platforms, one of the most relevant phenomena in the gig economy. We present a research project exploring innovative techno-regulatory solutions to protect gig economy workers. The idea is to integrate, in the same strategy, legal principles, regulatory objectives and software solutions. Our attention focuses on two results of our activity—a techno-regulatory model relying on reputational mechanisms to affect the behavior of digital labor market operators and GigAdvisor, a cross-platform experimental application implementing the model.
      Citation: Future Internet
      PubDate: 2019-07-23
      DOI: 10.3390/fi11070163
      Issue No: Vol. 11, No. 7 (2019)
  • Future Internet, Vol. 11, Pages 121: Dynamic Task Scheduling Algorithm
           with Deadline Constraint in Heterogeneous Volunteer Computing Platforms

    • Authors: Xu, Qiao, Lin, Zhang
      First page: 121
      Abstract: Volunteer computing (VC) is a distributed computing paradigm that provides unlimited computing resources, in the form of donated idle resources, for many large-scale scientific computing applications. Task scheduling is one of the most challenging problems in VC. Although dynamic scheduling with deadline constraints has been extensively studied in heterogeneous systems such as cloud computing and clusters, those algorithms cannot be fully applied to VC, because volunteer nodes can go offline whenever they want without taking any responsibility, which sets VC apart from other distributed computing paradigms. For this situation, this paper proposes a dynamic task scheduling algorithm for heterogeneous VC with deadline constraints, called deadline preference dispatch scheduling (DPDS). The DPDS algorithm repeatedly selects the tasks with the nearest deadlines and assigns them to volunteer nodes (VN), which solves the dynamic task scheduling problem with deadline constraints. To make full use of resources and maximize the number of tasks completed before their deadlines, we further propose improved dispatch constraint scheduling (IDCS) on the basis of the DPDS algorithm. To verify our algorithms, we conducted experiments, and the results show that the proposed algorithms effectively solve the dynamic task assignment problem with deadline constraints in VC.
      Citation: Future Internet
      PubDate: 2019-05-28
      DOI: 10.3390/fi11060121
      Issue No: Vol. 11, No. 6 (2019)
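The core of the DPDS idea described above — dispatch the task with the nearest deadline to each available volunteer node — can be sketched with a deadline-keyed min-heap. This is a hypothetical illustration of that selection rule, not the authors' DPDS code:

```python
import heapq

def dpds_dispatch(tasks, nodes):
    """Assign the task with the nearest deadline to each available
    volunteer node (earliest-deadline-first dispatch sketch).

    tasks: list of (deadline, task_id) tuples
    nodes: list of available volunteer-node ids
    Returns a list of (task_id, node_id) assignments.
    """
    heap = list(tasks)
    heapq.heapify(heap)                  # min-heap keyed on deadline
    assignments = []
    for node in nodes:
        if not heap:
            break                        # no tasks left to dispatch
        deadline, task_id = heapq.heappop(heap)  # nearest deadline first
        assignments.append((task_id, node))
    return assignments

print(dpds_dispatch([(30, "t1"), (10, "t2"), (20, "t3")], ["vn1", "vn2"]))
# → [('t2', 'vn1'), ('t3', 'vn2')]
```

Tasks whose deadlines have already passed, and nodes that go offline mid-run, would need the extra handling that the full DPDS/IDCS algorithms address.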
  • Future Internet, Vol. 11, Pages 122: Survey and Perspectives of Vehicular
           Wi-Fi versus Sidelink Cellular-V2X in the 5G Era

    • Authors: Alessandro Bazzi, Giammarco Cecchini, Michele Menarini, Barbara M. Masini, Alberto Zanella
      First page: 122
      Abstract: The revolution of cooperative connected and automated vehicles is about to begin, and a key milestone is the introduction of short-range wireless communications between cars. Given the tremendous expected market growth, two different technologies have been standardized by international companies and consortia: IEEE 802.11p, available for nearly a decade, and short-range cellular vehicle-to-everything (C-V2X), which was defined recently. In both cases, evolutions are under discussion. The former is only decentralized and based on a sense-before-transmit access scheme, while the latter is based on orthogonal resources that can also be managed by an infrastructure. Although studies have been conducted to highlight the advantages and drawbacks of both, doubts still remain. In this work, with reference to the literature and with the aid of large-scale simulations in realistic urban and highway scenarios, we provide insight into this comparison and attempt to isolate the contributions of the physical and medium access control layers.
      Citation: Future Internet
      PubDate: 2019-05-29
      DOI: 10.3390/fi11060122
      Issue No: Vol. 11, No. 6 (2019)
  • Future Internet, Vol. 11, Pages 123: Vehicle Speed Estimation Based on 3D
           ConvNets and Non-Local Blocks

    • Authors: Huanan Dong, Ming Wen, Zhouwang Yang
      First page: 123
      Abstract: Vehicle speed estimation is an important problem in traffic surveillance. Many existing approaches to this problem are based on camera calibration. Two shortcomings exist for camera calibration-based methods. First, camera calibration methods are sensitive to the environment, which means that the accuracy of the results is compromised in situations where the environmental conditions are not satisfied. Furthermore, camera calibration-based methods rely on vehicle trajectories acquired by a two-stage detection and tracking process. In an effort to overcome these shortcomings, we propose an alternative end-to-end method based on 3-dimensional convolutional networks (3D ConvNets). The proposed method estimates average vehicle speed from information in video footage. Our method is characterized by the following three features. First, we use non-local blocks in our model to better capture spatial–temporal long-range dependencies. Second, we use optical flow as an input to the model; optical flow includes information on the speed and direction of pixel motion in an image. Third, we construct a multi-scale convolutional network that extracts information on various characteristics of vehicles in motion. The proposed method shows promising experimental results on a commonly used dataset, with a mean absolute error (MAE) of 2.71 km/h and a mean square error (MSE) of 14.62.
      Citation: Future Internet
      PubDate: 2019-05-30
      DOI: 10.3390/fi11060123
      Issue No: Vol. 11, No. 6 (2019)
  • Future Internet, Vol. 11, Pages 124: Indoor Vehicles Geolocalization Using
           LoRaWAN

    • Authors: Pietro Manzoni, Carlos T. Calafate, Juan-Carlos Cano, Enrique Hernández-Orallo
      First page: 124
      Abstract: One of the main drawbacks of Global Navigation Satellite Systems (GNSS) is that they do not work indoors. When inside, there is often no direct line from the satellite signals to the device, and the ultra high frequency (UHF) signals used are blocked by thick, solid materials such as brick, metal, stone, or wood. In this paper, we describe a solution based on the Long Range Wide Area Network (LoRaWAN) technology to geolocalise vehicles indoors. By estimating the behaviour of a LoRaWAN channel and using trilateration, the localisation of a vehicle can be obtained within a 20–30 m range. Indoor geolocation for Intelligent Transportation Systems (ITS) can be used to locate vehicles of any type in underground parkings, to keep a platoon of trucks in formation, or to create geo-fences, that is, to send an alert if an object moves outside a defined area, such as a bicycle being stolen. Routing of heavy vehicles within an industrial setting is another possibility.
      Citation: Future Internet
      PubDate: 2019-05-31
      DOI: 10.3390/fi11060124
      Issue No: Vol. 11, No. 6 (2019)
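The trilateration step mentioned in the abstract above — recovering a position from three range estimates — reduces to a small linear system once the circle equations are subtracted pairwise. The following is a generic textbook trilateration sketch (it does not reproduce the paper's LoRaWAN channel-estimation pipeline, which is where the 20–30 m error actually comes from):

```python
def trilaterate(p1, p2, p3, r1, r2, r3):
    """Estimate (x, y) from three anchor positions p1..p3 and range
    estimates r1..r3. Subtracting the circle equations pairwise
    removes the quadratic terms, leaving a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Linear system A [x, y]^T = b from (circle1 - circle2), (circle1 - circle3)
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21          # non-zero if anchors are not collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Three gateways at (0,0), (10,0), (0,10); a tag at (5,5) is ~7.0711 m from each:
print(trilaterate((0, 0), (10, 0), (0, 10), 7.0711, 7.0711, 7.0711))
# → (5.0, 5.0)
```

With noisy LoRaWAN range estimates, one would typically use more than three gateways and solve the overdetermined system in a least-squares sense.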
  • Future Internet, Vol. 11, Pages 125: Joint Location-Dependent Pricing and
           Request Mapping in ICN-Based Telco CDNs For 5G

    • Authors: Mingshuang Jin, Hongbin Luo, Shuai Gao, Bohao Feng
      First page: 125
      Abstract: Telco content delivery networks (CDNs) have envisioned building highly distributed and cloudified sites to provide a high-quality CDN service in the 5G era. However, there are still two open problems to be addressed. First, telco CDNs are operated upon an underlay network evolving towards information-centric networking (ICN). Unlike CDNs, which operate at the application layer, ICN brings information-centric forwarding down to the network layer. Thus, it is challenging to combine the benefits of both ICN and CDN to provide a high-quality content delivery service in the context of ICN-based telco CDNs. Second, bandwidth pricing and request mapping issues in ICN-based telco CDNs have not been thoroughly studied. In this paper, we first propose an ICN-based telco CDN framework that integrates the information-centric forwarding enabled by ICN and the powerful edge caching enabled by telco CDNs. Then, we propose a location-dependent pricing (LDP) strategy that takes into consideration the congestion level of different sites. Furthermore, on the basis of LDP, we formulate a price-aware request mapping (PARM) problem, which can be solved by existing linear programming solvers. Finally, we conduct extensive simulations to evaluate the effectiveness of our design.
      Citation: Future Internet
      PubDate: 2019-06-03
      DOI: 10.3390/fi11060125
      Issue No: Vol. 11, No. 6 (2019)
  • Future Internet, Vol. 11, Pages 126: CKMI: Comprehensive Key Management
           Infrastructure Design for Industrial Automation and Control Systems

    • Authors: Pramod T. C., Thejas G. S., S. S. Iyengar, N. R. Sunitha
      First page: 126
      Abstract: Industrial Automation and Control Systems (IACS) are broadly utilized in critical infrastructures for monitoring and controlling industrial processes remotely. The real-time transmissions in such systems expose them to security breaches, and many breaches that severely impact society have been reported. Hence, it is essential to achieve secure communication between the devices in order to create a secure environment. For this to be effective, the keys used for secure communication must be protected against unauthorized disclosure, misuse, alteration, or loss, which can be taken care of by a Key Management Infrastructure. In this paper, by considering a generic industrial automation network, a comprehensive key management infrastructure (CKMI) is designed for IACS. To design such an infrastructure, the proposed scheme employs ECDH, the matrix method, and polynomial crypto mechanisms. The proposed design handles all the standard key management operations, viz. key generation, device registration, key establishment, key storage, device addition, key revocation, key update, key recovery, key archival, and key de-registration and destruction. The design supports secure communication between devices at the same and at different levels of IACS, and it can be applied to major industrial automation networks to handle key management operations. The performance analysis and implementation results highlight the benefits of the proposed design.
      Citation: Future Internet
      PubDate: 2019-06-04
      DOI: 10.3390/fi11060126
      Issue No: Vol. 11, No. 6 (2019)
  • Future Internet, Vol. 11, Pages 127: Cyber-Storms Come from Clouds:
           Security of Cloud Computing in the IoT Era

    • Authors: Michele De Donno, Alberto Giaretta, Nicola Dragoni, Antonio Bucchiarone, Manuel Mazzara
      First page: 127
      Abstract: The Internet of Things (IoT) is rapidly changing our society to a world where every “thing” is connected to the Internet, making computing pervasive like never before. This tsunami of connectivity and data collection relies more and more on the Cloud, where data analytics and intelligence actually reside. Cloud computing has indeed revolutionized the way computational resources and services can be used and accessed, implementing the concept of utility computing whose advantages are undeniable for every business. However, despite the benefits in terms of flexibility, economic savings, and support of new services, its widespread adoption is hindered by the security issues arising with its usage. From a security perspective, the technological revolution introduced by IoT and Cloud computing can represent a disaster, as each object might become inherently remotely hackable and, as a consequence, controllable by malicious actors. While the literature mostly focuses on the security of IoT and Cloud computing as separate entities, in this article we provide an up-to-date and well-structured survey of the security issues of cloud computing in the IoT era. We give a clear picture of where security issues occur and what their potential impact is. As a result, we claim that it is not enough to secure IoT devices, as cyber-storms come from Clouds.
      Citation: Future Internet
      PubDate: 2019-06-04
      DOI: 10.3390/fi11060127
      Issue No: Vol. 11, No. 6 (2019)
  • Future Internet, Vol. 11, Pages 128: Distributed Edge Computing to Assist
           Ultra-Low-Latency VANET Applications

    • Authors: Vladyko, Khakimov, Muthanna, Ateya, Koucheryavy
      First page: 128
      Abstract: Vehicular ad hoc networks (VANETs) are a recent class of peer-to-peer wireless networks that are used to organize the communication and interaction between cars (V2V), between cars and infrastructure (V2I), and between cars and other types of nodes (V2X). These networks are based on the dedicated short-range communication (DSRC) IEEE 802.11 standards and are mainly intended to organize the exchange of various types of messages, mainly emergency ones, to prevent road accidents, raise an alert when a road accident occurs, or control the priority of the roadway. Initially, it was assumed that cars would only interact with each other, but later, with the advent of the concept of the Internet of things (IoT), interaction with surrounding devices became a demand. However, there are many challenges associated with the interaction of vehicles with each other and with the road infrastructure. Among the main challenges are the high density of vehicles and the dramatic increase in vehicle traffic. To this end, this work provides a novel system based on mobile edge computing (MEC) that solves the problem of high traffic density and provides an offloading path for vehicle traffic. The proposed system also reduces the total latency of data communicated between vehicles and stationary roadside units (RSUs). Moreover, a latency-aware offloading algorithm is developed for managing and controlling data offloading from vehicles to edge servers. The system was simulated in a reliable environment for performance evaluation, and a real experiment was conducted to validate the proposed system and the developed offloading method.
      Citation: Future Internet
      PubDate: 2019-06-04
      DOI: 10.3390/fi11060128
      Issue No: Vol. 11, No. 6 (2019)
  • Future Internet, Vol. 11, Pages 129: The Design and Deployment of an
           End-To-End IoT Infrastructure for the Natural Environment

    • Authors: Vatsala Nundloll, Barry Porter, Gordon S. Blair, Bridget Emmett, Jack Cosby, Davey L. Jones, Dave Chadwick, Ben Winterbourn, Philip Beattie, Graham Dean, Rory Shaw, Wayne Shelley, Mike Brown, Izhar Ullah
      First page: 129
      Abstract: Internet of Things (IoT) systems have seen recent growth in popularity for city and home environments. We report on the design, deployment, and use of an IoT infrastructure for environmental monitoring and management. Working closely with hydrologists, soil scientists, and animal behaviour scientists, we successfully deployed and utilised a system to deliver integrated information across these fields in the first such example of real-time multidimensional environmental science. We describe the design of this system; its requirements and operational effectiveness for hydrological, soil, and ethological scientists; and our experiences from building, maintaining, and using the deployment at a remote site in difficult conditions. Based on this experience, we discuss key future work for the IoT community when working on these kinds of environmental deployments.
      Citation: Future Internet
      PubDate: 2019-06-07
      DOI: 10.3390/fi11060129
      Issue No: Vol. 11, No. 6 (2019)
  • Future Internet, Vol. 11, Pages 130: Digital Marketing Actions That
           Achieve a Better Attraction and Loyalty of Users: An Analytical Study

    • Authors: Juan José López García, David Lizcano, Celia MQ Ramos, Nelson Matos
      First page: 130
      Abstract: Currently, the digital economy contributes decisively to an increase in competitiveness, especially as digital transformation involves migrating to new technological models in which digital marketing is a key part of growth and user loyalty strategies. The Internet and Digital Marketing have become important factors in campaigns that attract and retain Internet users. This study aims to identify the main ways in which users can be attracted and retained by using Digital Marketing. The Delphi method with in-depth interviews was the methodology used in this study. The results of the research show, based on the opinions of the consulted experts, the most important actions for achieving user recruitment and loyalty with Digital Marketing. The limitations of this study relate to the number of experts included in the study and the number of research papers consulted in the literature review. The literature review and the results of this research are used to propose new research grounded in a consolidated critical methodology. This research takes a new approach to optimizing web technologies for evolving user trends and will therefore be of academic and professional use for marketing managers and web solution developers. The conclusions of the investigation identify the key factors, discarding others that do not affect the optimization of conversions in B2C businesses, such as session duration and bounce rate. Likewise, the results of the research identify the specific actions that must be carried out to attract and retain users in B2C companies that use the Digital Marketing ecosystem on the Internet. The requirements for companies that wish to implement a model to optimize conversions using the current digital economy are also shown.
      Citation: Future Internet
      PubDate: 2019-06-08
      DOI: 10.3390/fi11060130
      Issue No: Vol. 11, No. 6 (2019)
  • Future Internet, Vol. 11, Pages 131: Time Series Irradiance Synthesizer
           for Optical GEO Satellite Downlinks in 5G Networks

    • Authors: Theodore T. Kapsis, Nikolaos K. Lyras, Charilaos I. Kourogiorgas, Athanasios D. Panagopoulos
      First page: 131
      Abstract: Next generation 5G networks generate a need for broadband, low-latency, and power-efficient backhauling and data-relay services. In this paper, optical satellite communication links are studied as an integrated component of 5G networks. More specifically, the Geostationary (GEO) satellite-to-ground optical communication link is investigated. Long-term irradiance statistics based on experimental measurements from the ARTEMIS program are presented, and a new time series generator for the received irradiance/power fluctuations due to atmospheric turbulence is reported. The proposed synthesizer takes into consideration the turbulence-induced scintillation effects that deteriorate laser beam propagation, assuming a Kolmogorov spectrum. The modeling is based on Rytov theory for weak turbulence conditions, with the incorporation of first-order stochastic differential equations. Finally, the time series synthesizer is validated in terms of first- and second-order statistics against experimental results from the European Space Agency’s ARTEMIS experimental optical downlink, and simulated received power statistics for various weather conditions are presented using the proposed validated methodology. Some important conclusions are drawn.
      Citation: Future Internet
      PubDate: 2019-06-13
      DOI: 10.3390/fi11060131
      Issue No: Vol. 11, No. 6 (2019)
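A first-order stochastic differential equation of the kind the abstract describes can be discretized as an autoregressive recursion; under weak-turbulence (Rytov) assumptions the log-amplitude is Gaussian, so exponentiating yields lognormal irradiance samples with unit mean. The sketch below is a generic illustration of that construction, with invented parameter values rather than the paper's fitted ARTEMIS statistics:

```python
import math
import random

def irradiance_series(n, dt=1e-3, tau=0.01, sigma2=0.04, seed=42):
    """Synthesize normalized irradiance I/I0 under weak turbulence:
    the log-amplitude chi follows a first-order (Ornstein-Uhlenbeck)
    SDE with stationary variance sigma2 and correlation time tau, and
    I = exp(2*chi - 2*sigma2), so the mean irradiance is ~1."""
    rng = random.Random(seed)
    rho = math.exp(-dt / tau)                    # one-step autocorrelation
    step_std = math.sqrt(sigma2 * (1 - rho**2))  # innovation std. deviation
    chi, out = 0.0, []
    for _ in range(n):
        chi = rho * chi + step_std * rng.gauss(0.0, 1.0)
        out.append(math.exp(2.0 * chi - 2.0 * sigma2))
    return out

series = irradiance_series(10000)
print(sum(series) / len(series))  # close to 1 (normalized mean)
```

The `-2*sigma2` offset is the standard lognormal mean correction: for chi ~ N(0, sigma2), E[exp(2*chi)] = exp(2*sigma2), so subtracting 2*sigma2 in the exponent normalizes the mean to one.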
  • Future Internet, Vol. 11, Pages 132: A Choreography-Based and
           Collaborative Road Mobility System for L’Aquila City

    • Authors: Marco Autili, Amleto Di Salle, Francesco Gallo, Claudio Pompilio, Massimo Tivoli
      First page: 132
      Abstract: Next Generation Internet (NGI) is the European initiative launched to identify future internet technologies designed to serve the needs of the digitalized society while ensuring privacy, trust, decentralization, openness, inclusion, and business cooperation. NGI provides efficient support to promote the diversity, decentralization, and growth of disruptive innovation envisioned by smart cities. After the earthquake of 6 April 2009, the city of L’Aquila has been undergoing a massive and innovative reconstruction process. As a consequence, L’Aquila can nowadays be considered a living laboratory for applications within the context of smart cities. This paper describes and evaluates the realization of a Collaborative Road Mobility System (CRMS) for the city of L’Aquila using our CHOReVOLUTION approach for automated choreography production. The CRMS allows vehicles and transport infrastructure to interconnect, share information, and use it to coordinate their actions.
      Citation: Future Internet
      PubDate: 2019-06-14
      DOI: 10.3390/fi11060132
      Issue No: Vol. 11, No. 6 (2019)
  • Future Internet, Vol. 11, Pages 133: Energy-Efficient Multi-User Routing
           in a Software-Defined Multi-Hop Wireless Network

    • Authors: Ziqi Liu, Gaochao Xu, Peng Liu, Xiaodong Fu, Yang Liu
      First page: 133
      Abstract: Software-defined networking (SDN) is an innovative architecture that designs a logical controller to manage and program the network based on a global view, providing more efficient management, better performance, and higher flexibility for the network. Applying the SDN concept in multi-hop wireless networks (MWN) has therefore been proposed and extensively studied to overcome the challenges of MWN. In this paper, we propose an energy-efficient global routing algorithm for a software-defined multi-hop wireless network (SDMWN), which computes transmission paths for several users at the same time so as to minimize global energy consumption while satisfying the QoS required by each user. To this end, we first propose a combined Lagrange relaxation-based aggregated cost (LARAC) and K-Dijkstra algorithm to obtain the top K minimum-energy paths that satisfy the QoS constraints in polynomial time. Then, we combine the alternative paths of each user obtained by K-LARAC and propose an improved genetic algorithm to compute the global routing strategy. The simulation results show that the proposed combined K-LARAC and genetic algorithm method obtains an approximately optimal solution at a lower time cost.
      Citation: Future Internet
      PubDate: 2019-06-17
      DOI: 10.3390/fi11060133
      Issue No: Vol. 11, No. 6 (2019)
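The LARAC idea referenced above relaxes the delay (QoS) constraint into the objective: Dijkstra is run on the combined edge weight energy + λ·delay while λ is adjusted until the cheapest path meeting the bound is found. A compact single-path sketch follows (bisection on λ over an assumed range; the paper's K-path enumeration and genetic-algorithm stages are omitted):

```python
import heapq

def dijkstra(graph, src, dst, weight):
    """Shortest path from src to dst under an arbitrary edge-weight function."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break                      # dst popped with its final distance
        if d > dist.get(u, float("inf")):
            continue                   # stale queue entry
        for v, attrs in graph[u].items():
            nd = d + weight(attrs)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:                 # walk predecessors back to src
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path))

def larac(graph, src, dst, delay_limit, iters=30):
    """LARAC sketch: find a low-energy path meeting a delay bound by
    bisecting the Lagrange multiplier lam in cost = energy + lam*delay."""
    def path_delay(path):
        return sum(graph[u][v]["delay"] for u, v in zip(path, path[1:]))
    lo, hi, best = 0.0, 1e6, None
    for _ in range(iters):
        lam = (lo + hi) / 2
        path = dijkstra(graph, src, dst,
                        lambda a: a["energy"] + lam * a["delay"])
        if path_delay(path) <= delay_limit:
            best, hi = path, lam       # feasible: try weighting delay less
        else:
            lo = lam                   # infeasible: penalize delay more
    return best

graph = {
    "A": {"B": {"energy": 1, "delay": 10}, "C": {"energy": 4, "delay": 1}},
    "B": {"D": {"energy": 1, "delay": 10}},
    "C": {"D": {"energy": 4, "delay": 1}},
    "D": {},
}
print(larac(graph, "A", "D", delay_limit=5))  # → ['A', 'C', 'D']
```

With the delay bound at 5, the cheap path A→B→D (energy 2, delay 20) is infeasible, so the bisection settles on A→C→D (energy 8, delay 2), illustrating the energy/QoS trade-off the abstract describes.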
  • Future Internet, Vol. 11, Pages 134: Bridging User Perception and
           Stickiness in Business Microblog Contexts: A Moderated Mediation Model

    • Authors: Chien-Lung Hsu, Yi-Chuan Liao
      First page: 134
      Abstract: This study develops a contingent mediation model to investigate whether user perception enhances customer stickiness through emotional connection and further assesses whether this mediating effect varies with adaptivity. A moderated mediation approach is adopted to test the hypotheses. Findings reveal the mediating role of emotional connection in the link between perceived usefulness and customer stickiness, although this effect is not moderated by adaptivity. On the other hand, the results show that the relationship between perceived ease of use and customer stickiness is not mediated by emotional connection; however, after considering the moderating effect, our results show that moderated mediation exists.
      Citation: Future Internet
      PubDate: 2019-06-19
      DOI: 10.3390/fi11060134
      Issue No: Vol. 11, No. 6 (2019)
  • Future Internet, Vol. 11, Pages 135: The Interaction Effect of Type of
           Message X YouTuber’s Media Metrics on Customers’ Responses and the
           Moderation of Conformity Intention

    • Authors: Melby Karina Zuniga Huertas, Tarcisio Duarte Coelho
      First page: 135
      Abstract: This is a study of the way in which YouTubers’ media metrics influence the effect of their one-sided messages (1SMs) and two-sided messages (2SMs), providing theoretical explanations based on the elaboration likelihood model. Its main objective is the proposition and testing of: (i) the interaction effect between type of message and media metrics of the YouTuber on customers’ responses, and (ii) the moderation of individuals’ conformity intention for the interaction effect between type of message and media metrics on customers’ responses. The results of an experiment showed that high YouTubers’ media metrics have more effect for 1SMs and less effect for 2SMs. Additionally, conformity intention moderates the effect of the interaction type of message X media metrics. A high level of conformity intention neutralizes the interaction effect between YouTubers’ media metrics and message sidedness. This study makes a theoretical contribution to research into online content and information use, providing explanations of how media metrics of a vlog influence the effect of two types of messages.
      Citation: Future Internet
      PubDate: 2019-06-20
      DOI: 10.3390/fi11060135
      Issue No: Vol. 11, No. 6 (2019)
  • Future Internet, Vol. 11, Pages 136: Developing Internet Online
           Procurement Frameworks for Construction Firms

    • Authors: Rafikullah Deraman, Chen Wang, Jeffrey Boon Hui Yap, Heng Li, Faizul Azli Mohd-Rahim
      First page: 136
      Abstract: Electronic purchasing, or e-procurement, saves millions of dollars yearly in transaction costs. E-procurement helps to cut down the supplier base, promotes paperless transactions, and increases transparency and accountability in the procurement process. Nonetheless, studies report that around 80% of e-procurement initiatives have failed to achieve the desired results. Although studies to better understand the Critical Success Factors (CSFs) of e-procurement implementation in various industries have been on the rise, little is known about architecture, engineering, and construction (AEC) practices, which has led to limited development of pragmatic frameworks to uncover these factors. Thus, this study aims to identify the CSFs (predicting variables) that significantly contribute to e-procurement implementation success in the construction sector and to put them forward for better implementation. Results from multiple regression analysis revealed five factors to be statistically significant predictors of success, and three factors to be predictors of user satisfaction. Finally, internet online procurement frameworks were developed for the successful implementation of e-procurement in the construction sector.
      Citation: Future Internet
      PubDate: 2019-06-20
      DOI: 10.3390/fi11060136
      Issue No: Vol. 11, No. 6 (2019)
  • Future Internet, Vol. 11, Pages 137: Negative Impact of Social Network
           Services Based on Stressor-Stress-Outcome: The Role of Experience of
           Privacy Violations

    • Authors: Sanghyun Kim, Hyunsun Park, Moon Jong Choi
      First page: 137
      Abstract: Social network service (SNS) information has benefited many individuals. However, as such information has increased exponentially, the number of SNS users has increased dramatically and negative effects of SNSs on users have emerged. Many SNS users experience negative psychological conditions such as fatigue, burnout, and stress. Thus, in this study, we investigated the SNS and user characteristics that affect SNS fatigue, living disorder, and reduced SNS use intention. We developed a research model to examine the impact of two SNS characteristics (irrelevant information overload and open reachability) and two user characteristics (engagement and maintaining self-reputation) on SNS fatigue. We also examined the role of the experience of privacy violations in the relationship between living disorder and reduced SNS use intention. We collected data from 579 SNS users and created a partial least squares structural equation model to test the hypotheses. The results of the analysis showed that three factors, other than open reachability, positively affected SNS fatigue. Furthermore, we found that SNS fatigue significantly affected living disorder and reduced SNS use intention, and that experience of privacy violations significantly affected the relationship between living disorder and reduced SNS use intention. These results expand our understanding of SNS fatigue and users’ negative behaviors.
      Citation: Future Internet
      PubDate: 2019-06-20
      DOI: 10.3390/fi11060137
      Issue No: Vol. 11, No. 6 (2019)
  • Future Internet, Vol. 11, Pages 138: Data Anonymization for Hiding
           Personal Tendency in Set-Valued Database Publication

    • Authors: Dedi Gunawan, Masahiro Mambo
      First page: 138
      Abstract: Set-valued database publication has been growing in importance recently due to its benefits for applications such as marketing analysis and advertising. However, publishing a raw set-valued database may cause individual privacy breaches, such as the leakage of sensitive information like personal tendencies, when data recipients perform data analysis. Even though applying data anonymization methods such as suppression-based methods and random data swapping to such a database can successfully hide personal tendencies, it induces item loss from records and causes significant distortion of record structure, which degrades database utility. To avoid these problems, we propose a method based on a swapping technique in which an individual’s items in one record are swapped with items of another record. Our swapping technique is distinct from the existing random data swapping technique, which yields considerable structural distortion. Even though the technique introduces inaccuracy at the record level, it preserves every single item in the database from loss, so data recipients can obtain all item information in an anonymized database. In addition, by carefully selecting the pair of records for item swapping, we avoid the excessive record structure distortion that would alter database content immensely. More importantly, such a strategy successfully hides personal tendencies without sacrificing much database utility.
      Citation: Future Internet
      PubDate: 2019-06-20
      DOI: 10.3390/fi11060138
      Issue No: Vol. 11, No. 6 (2019)
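The pairwise item-swapping idea can be sketched in a few lines. This is a minimal illustration of the general technique under stated assumptions, not the authors' algorithm: the `pick_pair` heuristic (choose the two most similar records by Jaccard overlap, so a swap distorts structure least) and all names are illustrative.

```python
import random

def swap_items(db, i, j, k=1, rng=None):
    """Swap k randomly chosen items between records i and j.

    Unlike suppression, no item is ever deleted: every item in the
    database survives anonymization, only its record changes."""
    rng = rng or random.Random(0)
    a, b = db[i], db[j]
    for _ in range(k):
        ia, ib = rng.randrange(len(a)), rng.randrange(len(b))
        a[ia], b[ib] = b[ib], a[ia]
    return db

def pick_pair(db):
    """Heuristic pair selection: the two most similar records
    (highest Jaccard overlap), so swapping distorts structure least."""
    best, pair = -1.0, (0, 1)
    for i in range(len(db)):
        for j in range(i + 1, len(db)):
            si, sj = set(db[i]), set(db[j])
            jac = len(si & sj) / len(si | sj)
            if jac > best:
                best, pair = jac, (i, j)
    return pair

db = [["beer", "wine"], ["beer", "milk"], ["toys", "pens"]]
i, j = pick_pair(db)
swap_items(db, i, j)
```

Because items only change owners, aggregate item statistics survive while record-level ownership (the "personal tendency") is obscured.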
  • Future Internet, Vol. 11, Pages 139: Location-Based Deferred Broadcast for
           Ad-Hoc Named Data Networking

    • Authors: Meng Kuai, Xiaoyan Hong
      First page: 139
      Abstract: The emerging connected and autonomous vehicles (CAVs) challenge ad hoc wireless multi-hop communications through mobility, large scale, and new data acquisition and computing patterns. Named Data Networking (NDN) is suitable for such vehicular ad hoc networks due to its information-centric networking approach. However, flooding interest packets in ad-hoc NDN can lead to the broadcast storm problem. Existing solutions either increase the number of redundant interest packets or require global knowledge about data producers. In this paper, a Location-Based Deferred Broadcast (LBDB) scheme is introduced to improve the efficiency and performance of interest broadcast in ad-hoc NDN. The scheme takes advantage of location information to set up timers when rebroadcasting an interest. LBDB is implemented in the V-NDN network architecture using the ndnSIM simulator. Comparisons with several existing protocols are conducted in simulation. The results show that LBDB reduces overhead, the average number of hops, and delay while maintaining the average satisfaction ratio when compared with several other broadcast schemes. This improvement can help offer timely data acquisition for quick responses in emergent CAV application situations.
      Citation: Future Internet
      PubDate: 2019-06-24
      DOI: 10.3390/fi11060139
      Issue No: Vol. 11, No. 6 (2019)
  • Future Internet, Vol. 11, Pages 140: Future Intelligent Systems and
           Networks

    • Authors: Carmen De-Pablos-Heredero
      First page: 140
      Abstract: The purpose of this Special Issue is to collect current developments and future directions of Future Intelligent Systems and Networks [...]
      Citation: Future Internet
      PubDate: 2019-06-25
      DOI: 10.3390/fi11060140
      Issue No: Vol. 11, No. 6 (2019)
  • Future Internet, Vol. 11, Pages 106: Dynamic Lognormal Shadowing Framework
           for the Performance Evaluation of Next Generation Cellular Systems

    • Authors: Karagiannis, Panagopoulos
      First page: 106
      Abstract: Performance evaluation tools for wireless cellular systems are very important for the establishment and testing of future internet applications. As the complexity of wireless networks keeps growing, wireless connectivity becomes the most critical requirement in a variety of applications, including environments and paradigms that are complex and unfavorable from a propagation point of view. Nowadays, with the upcoming 5G cellular networks, the development of realistic and more accurate channel model frameworks has become more important, since new frequency bands and new architectures are employed. Large-scale fading, also known as shadowing, refers to the variations of the received signal mainly caused by obstructions that significantly affect the available signal power at a receiver’s position. Although the variability of shadowing is considered mostly spatial for a given propagation environment, moving obstructions may significantly impact the received signal’s strength, especially in dense environments, thus inducing a temporal variability even for fixed users. In this paper, we present, for the case of lognormal shadowing, a novel engineering model based on stochastic differential equations that captures not only the spatial correlation structure of shadowing but also its temporal dynamics. Based on the proposed spatio-temporal shadowing field, we present a computationally efficient model for the dynamics of shadowing experienced by stationary or mobile users. We also present new analytical results for the average outage duration and hand-offs based on multi-dimensional level crossings. Numerical results are presented to validate the model, and some important conclusions are drawn.
      Citation: Future Internet
      PubDate: 2019-05-02
      DOI: 10.3390/fi11050106
      Issue No: Vol. 11, No. 5 (2019)
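The paper's SDE framework is not reproduced here, but the kind of temporal dynamics it models can be illustrated with a standard Ornstein-Uhlenbeck-style AR(1) recursion in the dB domain (Gaussian in dB, hence lognormal in linear units), with a Gudmundson-style decorrelation driven by user speed. Every parameter below is an illustrative assumption.

```python
import numpy as np

def shadowing_db(n_steps, dt, sigma_db=8.0, d_corr=50.0, v=10.0, rng=None):
    """Temporally correlated shadowing via an Ornstein-Uhlenbeck-type
    recursion: one-step correlation rho = exp(-v*dt/d_corr), where v is
    user speed (m/s) and d_corr the decorrelation distance (m). The
    sqrt(1 - rho**2) scaling keeps the marginal std exactly sigma_db."""
    rng = rng or np.random.default_rng(0)
    rho = np.exp(-v * dt / d_corr)
    x = np.empty(n_steps)
    x[0] = rng.normal(0.0, sigma_db)
    for t in range(1, n_steps):
        x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * rng.normal(0.0, sigma_db)
    return x
```

Slowing the user (smaller v) pushes rho toward 1 and yields the slowly drifting shadowing a fixed user near moving obstructions would experience.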
  • Future Internet, Vol. 11, Pages 107: An Extensible Automated Failure
           Localization Framework Using NetKAT, Felix, and SDN Traceroute

    • Authors: István Pelle, András Gulyás
      First page: 107
      Abstract: Designing, implementing, and maintaining network policies that protect from internal and external threats is a highly non-trivial task. Troubleshooting networks consisting of diverse entities realizing complex policies is often even harder. Software-defined networking (SDN) enables networks to adapt to changing scenarios, which significantly lessens the human effort required for constant manual modification of device configurations. Troubleshooting also benefits from SDN’s way of accessing forwarding devices, since unified control channels make monitoring much easier. However, by making policy changes easier, SDN also makes the job of troubleshooting operators harder: for humans, finding, analyzing, and fixing network issues becomes almost intractable. In this paper, we present a failure localization framework and its proof-of-concept prototype that helps automate the investigation of network issues. Acting as a controller for troubleshooting tools, our framework integrates formal specification (expected behavior) and network monitoring (actual behavior) and automatically gives hints about the location and type of network issues by comparing the two types of information. By using NetKAT (Kleene algebra with tests) for formal specification and Felix and SDN traceroute for network monitoring, we show that the integration of these tools in a single framework can significantly ease the network troubleshooting process.
      Citation: Future Internet
      PubDate: 2019-05-04
      DOI: 10.3390/fi11050107
      Issue No: Vol. 11, No. 5 (2019)
  • Future Internet, Vol. 11, Pages 108: A Lightweight Elliptic-Elgamal-Based
           Authentication Scheme for Secure Device-to-Device Communication

    • Authors: Adeel Abro, Zhongliang Deng, Kamran Ali Memon
      First page: 108
      Abstract: Device-to-Device (D2D) communication is a major part of 5G that will facilitate deployments with extended coverage, where devices can act as users or relays. These relays normally act as decode-and-forward relays (semi-intelligent devices) with limited computational and storage capabilities. However, introducing such a technology, where users can act as relays, presents a wide range of security threats, in particular rogue relay devices and man-in-the-middle (MITM) attacks. Moreover, passing fewer control messages is always advisable when considering authenticity and secrecy. To mitigate MITM attacks and to reduce communication costs, this paper presents a lightweight elliptic-ElGamal-based authentication scheme using PKI (FHEEP) for D2D communication. Pollard’s rho and Baby-Step Giant-Step (BSGS) methods are used to evaluate the authenticity and secrecy of our proposed scheme. A comparative analysis of communication cost indicates that our proposed scheme outperforms the baseline protocol. The proposed scheme can be used for any infrastructure architecture and will enhance the security of any D2D setting with better performance.
      Citation: Future Internet
      PubDate: 2019-05-07
      DOI: 10.3390/fi11050108
      Issue No: Vol. 11, No. 5 (2019)
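Of the two generic discrete-log attacks the authors use to gauge secrecy, baby-step giant-step is the easier to sketch. Below is a minimal multiplicative-group version (the elliptic-curve form replaces modular multiplication with point addition); it is a textbook sketch, not the authors' evaluation code.

```python
import math

def bsgs(g, h, p):
    """Baby-step giant-step: find x with g**x == h (mod p), or None.

    Runs in O(sqrt(p)) time and space, which is why parameters must
    make sqrt(group order) infeasible for the scheme to stay secret."""
    m = math.isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps g^j
    gm_inv = pow(g, -m, p)                       # g^(-m) mod p (Python 3.8+)
    gamma = h % p
    for i in range(m):                           # giant steps h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * gm_inv % p
    return None
```

For example, `bsgs(2, pow(2, 77, 101), 101)` recovers the exponent 77 in about `sqrt(101)` group operations.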
  • Future Internet, Vol. 11, Pages 109: Novel Approach to Task Scheduling and
           Load Balancing Using the Dominant Sequence Clustering and Mean Shift
           Clustering Algorithms

    • Authors: Amer Al-Rahayfeh, Saleh Atiewi, Abdullah Abuhussein, Muder Almiani
      First page: 109
      Abstract: Cloud computing (CC) is fast-growing and frequently adopted in information technology (IT) environments due to the benefits it offers. Task scheduling and load balancing are amongst the hot topics in the realm of CC. To overcome the shortcomings of existing task scheduling and load balancing approaches, we propose a novel approach that uses dominant sequence clustering (DSC) for task scheduling and a weighted least connection (WLC) algorithm for load balancing. First, users’ tasks are clustered using the DSC algorithm, which represents user tasks as a graph of one or more clusters. After task clustering, each task is ranked using the Modified Heterogeneous Earliest Finish Time (MHEFT) algorithm, in which the highest-priority task is scheduled first. Afterwards, virtual machines (VMs) are clustered using a mean shift clustering (MSC) algorithm with kernel functions. Load balancing is subsequently performed using the WLC algorithm, which distributes the load based on server weight and capacity as well as client connectivity to the server. A highly weighted or least connected server is selected for task allocation, which in turn improves response time. Finally, we evaluate the proposed architecture using metrics such as response time, makespan, resource utilization, and service reliability.
      Citation: Future Internet
      PubDate: 2019-05-08
      DOI: 10.3390/fi11050109
      Issue No: Vol. 11, No. 5 (2019)
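The weighted least connection rule is simple to state: route the next task to the server with the smallest ratio of active connections to weight, so that powerful (high-weight) servers absorb proportionally more load. A minimal sketch (the server fields and names are illustrative):

```python
def pick_server(servers):
    """Weighted least connection: choose the server with the lowest
    active-connections-to-weight ratio."""
    return min(servers, key=lambda s: s["connections"] / s["weight"])

servers = [
    {"name": "vm1", "weight": 4, "connections": 8},   # ratio 2.0
    {"name": "vm2", "weight": 1, "connections": 3},   # ratio 3.0
    {"name": "vm3", "weight": 2, "connections": 2},   # ratio 1.0 -> chosen
]
```

After each allocation the chosen server's connection count is incremented, so subsequent picks naturally spread the load.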
  • Future Internet, Vol. 11, Pages 110: A Yielding Protocol that Uses
           Inter-Vehicle Communication to Improve the Traffic of Vehicles on a
           Low-Priority Road at an Unsignalized Intersection

    • Authors: Yajima, Takami
      First page: 110
      Abstract: Self-driven vehicles are being actively developed. When widespread, they will help reduce the number of traffic accidents and ease traffic congestion. They will coexist with human-driven vehicles for years. If there is a mismatch between human drivers’ operations and the judgments of self-driven vehicles, congestion may arise at an unsignalized intersection, in particular, where roads are prioritized. Vehicles on the low-priority road attempting to cross, or turn to, the priority road can significantly reduce the traffic flow. We have proposed a yielding protocol to deal with this problem and evaluated it using a simulation that focused on traffic flow efficiency at an intersection. In the simulation, we have varied the number of vehicles coming into the roads and the percentage of self-driven vehicles and confirmed that the proposed yielding protocol could improve the traffic flow of vehicles on the low-priority road.
      Citation: Future Internet
      PubDate: 2019-05-09
      DOI: 10.3390/fi11050110
      Issue No: Vol. 11, No. 5 (2019)
  • Future Internet, Vol. 11, Pages 111: Guidelines towards Information-Driven
           Mobility Management

    • Authors: Rute C. Sofia
      First page: 111
      Abstract: The architectural semantics of Information-Centric Networking bring interesting features with regard to mobility management: Information-Centric Networking is content-oriented, connection-less, and receiver-driven. Despite such intrinsic advantages, support for node movement is still based on the principles of IP solutions. IP-based solutions are, however, host-oriented, whereas Information-Centric Networking paradigms are information-oriented. By following IP mobility management principles, some of the natural mobility support advantages of Information-Centric Networking are not being adequately explored. This paper contributes an overview of how Information-Centric Networking paradigms handle mobility management today, highlighting current challenges and proposing a set of design guidelines to overcome them, thus steering a vision towards a content-centric mobility management approach.
      Citation: Future Internet
      PubDate: 2019-05-10
      DOI: 10.3390/fi11050111
      Issue No: Vol. 11, No. 5 (2019)
  • Future Internet, Vol. 11, Pages 112: Substitute Seed Nodes Mining
           Algorithms for Influence Maximization in Multi-Social Networks

    • Authors: Rao, Zhao, Chen, Lin
      First page: 112
      Abstract: Due to the growing interconnection of social networks, the problem of influence maximization has been extended from a single social network to multiple social networks. However, a critical challenge of influence maximization in multi-social networks is that some initial seed nodes may be unable to become active, which obviously leads to low influence-spreading performance. Therefore, finding substitute nodes to mitigate the influence loss caused by uncooperative nodes is extremely helpful for influence maximization. In this paper, we propose three substitute mining algorithms for influence maximization in multi-social networks, namely a greedy-based substitute mining algorithm, a pre-selected-based substitute mining algorithm, and a similar-users-based substitute mining algorithm. The simulation results demonstrate that the existence of uncooperative seed nodes reduces the range of information influence. Furthermore, the viability and performance of the proposed algorithms are presented, showing that the three substitute node mining algorithms can find suitable substitute nodes for influence maximization in multi-social networks, thus achieving better influence spread.
      Citation: Future Internet
      PubDate: 2019-05-10
      DOI: 10.3390/fi11050112
      Issue No: Vol. 11, No. 5 (2019)
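The greedy-based variant, at least, can be sketched generically: estimate influence spread by Monte-Carlo simulation of the independent cascade model, then pick the candidate with the largest marginal spread as the substitute for an uncooperative seed. This is a textbook-style sketch under assumed names and parameters, not the authors' multi-network algorithms.

```python
import random

def ic_spread(graph, seeds, p=0.1, trials=200, rng=None):
    """Average influence spread of `seeds` under the independent
    cascade model, estimated by Monte-Carlo simulation."""
    rng = rng or random.Random(0)
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials

def best_substitute(graph, seeds, dead, candidates, **kw):
    """Greedy substitute mining: replace the uncooperative seed `dead`
    with the candidate giving the largest estimated spread."""
    base = [s for s in seeds if s != dead]
    return max(candidates, key=lambda c: ic_spread(graph, base + [c], **kw))
```

For a multi-network setting, `graph` would be the union of the layers with inter-network bridge edges; the greedy loop itself is unchanged.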
  • Future Internet, Vol. 11, Pages 113: Evaluating Forwarding Protocols in
           Opportunistic Networks: Trends, Advances, Challenges and Best Practices

    • Authors: Vishnupriya Kuppusamy, Udaya Miriya Thanthrige, Asanga Udugama, Anna Förster
      First page: 113
      Abstract: A variety of applications and forwarding protocols have been proposed for opportunistic networks (OppNets) in the literature. However, the methodology for evaluating, testing, and comparing these forwarding protocols is not standardized yet, which leads to large levels of ambiguity in performance evaluation studies. Performance results depend largely on the evaluation environment and on the parameters and models used. More comparability in evaluation scenarios and methodologies would also largely improve the availability of protocols and the repeatability of studies, and would thus accelerate the development of this research topic. In this survey paper, we focus our attention on how various OppNets data forwarding protocols are evaluated rather than on what they actually achieve. We explore the models, parameters, and evaluation environments and make observations about their scalability, realism, and comparability. Finally, we deduce some best practices for achieving the largest impact in future evaluation studies of OppNets data dissemination/forwarding protocols.
      Citation: Future Internet
      PubDate: 2019-05-11
      DOI: 10.3390/fi11050113
      Issue No: Vol. 11, No. 5 (2019)
  • Future Internet, Vol. 11, Pages 114: Word Sense Disambiguation Using
           Cosine Similarity Collaborates with Word2vec and WordNet

    • Authors: Korawit Orkphol, Wu Yang
      First page: 114
      Abstract: Words have different meanings (i.e., senses) depending on the context. Disambiguating the correct sense is an important and challenging task for natural language processing. An intuitive approach is to select the sense whose definition has the highest similarity to the context, using the definitions provided by WordNet, a large lexical database of English in which nouns, verbs, adjectives, and adverbs are grouped into sets of cognitive synonyms interlinked through conceptual semantics and lexical relations. Traditional unsupervised approaches compute similarity by counting overlapping words between the context and sense definitions, which must match exactly. Similarity should instead be computed based on how words are related, by representing the context and sense definitions in a vector space model and analyzing the distributional semantic relationships among them, for example using latent semantic analysis (LSA). However, as a corpus of text grows, LSA consumes much more memory and does not scale well to training on a huge corpus. A word-embedding approach has an advantage here. Word2vec is a popular word-embedding approach that represents words in a fixed-size vector space model through either the skip-gram or continuous bag-of-words (CBOW) model, and it captures semantic and syntactic word similarities from a huge corpus of text more effectively than LSA. Our method uses Word2vec to construct a context sentence vector and sense definition vectors, and then scores each word sense by the cosine similarity between those vectors. The sense definitions are also expanded with sense relations retrieved from WordNet. If a score does not exceed a specified threshold, it is combined with the probability of that sense distribution learned from SEMCOR, a large sense-tagged corpus. The possible answer senses are those with high scores. Our method achieves a result (50.9%, or 48.7% without the probability of sense distribution) that is higher than the baselines (i.e., the original, simplified, adapted, and LSA Lesk) and outperforms many unsupervised systems that participated in the SENSEVAL-3 English lexical sample task.
      Citation: Future Internet
      PubDate: 2019-05-12
      DOI: 10.3390/fi11050114
      Issue No: Vol. 11, No. 5 (2019)
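The scoring step (cosine similarity between a context sentence vector and each sense-definition vector) can be sketched with plain NumPy. Here `w2v` is a toy dict standing in for a trained Word2vec model, and sentence vectors are simple word-vector averages, one common composition choice; the paper's exact composition may differ.

```python
import numpy as np

def sentence_vec(words, w2v):
    """Sentence vector = mean of the word vectors of its words
    (words missing from the embedding vocabulary are skipped)."""
    vecs = [w2v[w] for w in words if w in w2v]
    return np.mean(vecs, axis=0) if vecs else np.zeros(
        next(iter(w2v.values())).shape)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def rank_senses(context, sense_defs, w2v):
    """Score each sense by cosine similarity between the context
    sentence vector and its (expanded) definition vector."""
    c = sentence_vec(context, w2v)
    return sorted(((cosine(c, sentence_vec(d, w2v)), s)
                   for s, d in sense_defs.items()), reverse=True)
```

With real embeddings, `sense_defs` would hold the WordNet gloss words for each sense, expanded with related-sense glosses as the abstract describes.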
  • Future Internet, Vol. 11, Pages 115: Convolutional Two-Stream Network
           Using Multi-Facial Feature Fusion for Driver Fatigue Detection

    • Authors: Weihuang Liu, Jinhao Qian, Zengwei Yao, Xintao Jiao, Jiahui Pan
      First page: 115
      Abstract: Road traffic accidents caused by fatigue driving are a common cause of human casualties. In this paper, we present a driver fatigue detection algorithm using two-stream network models with multi-facial features. The algorithm consists of four parts: (1) positioning the mouth and eyes with multi-task cascaded convolutional neural networks (MTCNNs); (2) extracting static features from a partial facial image; (3) extracting dynamic features from partial facial optical flow; (4) combining both static and dynamic features using a two-stream neural network to make the classification. The main contribution of this paper is the combination of a two-stream network and multi-facial features for driver fatigue detection. Two-stream networks can combine static and dynamic image information, while partial facial images as network inputs focus on fatigue-related information, which brings better performance. Moreover, we applied gamma correction to enhance image contrast, which helps our method achieve better results, as shown by a 2% accuracy increase in night environments. Finally, an accuracy of 97.06% was achieved on the National Tsing Hua University Driver Drowsiness Detection (NTHU-DDD) dataset.
      Citation: Future Internet
      PubDate: 2019-05-14
      DOI: 10.3390/fi11050115
      Issue No: Vol. 11, No. 5 (2019)
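Gamma correction, the preprocessing step credited with the night-time gain, is essentially a one-liner on normalized intensities; the gamma value below is an illustrative assumption, not the paper's setting.

```python
import numpy as np

def gamma_correct(img, gamma=0.5):
    """Gamma correction for contrast enhancement: out = in**gamma on
    intensities normalized to [0, 1]. gamma < 1 brightens dark (e.g.
    night-time) frames before they reach the network."""
    x = img.astype(np.float32) / 255.0
    return (np.power(x, gamma) * 255.0).astype(np.uint8)
```

Pure black and pure white map to themselves, while mid-to-dark pixels are lifted, which is what recovers facial detail in low-light frames.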
  • Future Internet, Vol. 11, Pages 116: Identity-as-a-Service: An Adaptive
           Security Infrastructure and Privacy-Preserving User Identity for the Cloud

    • Authors: Tri Hoang Vo, Woldemar Fuhrmann, Klaus-Peter Fischer-Hellmann, Steven Furnell
      First page: 116
      Abstract: In recent years, enterprise applications have begun to migrate from a local hosting to a cloud provider and may have established a business-to-business relationship with each other manually. Adaptation of existing applications requires substantial implementation changes in individual architectural components. On the other hand, users may store their Personal Identifiable Information (PII) in the cloud environment so that cloud services may access and use it on demand. Even if cloud services specify their privacy policies, we cannot guarantee that they follow their policies and will not (accidentally) transfer PII to another party. In this paper, we present Identity-as-a-Service (IDaaS) as a trusted Identity and Access Management with two requirements: Firstly, IDaaS adapts trust between cloud services on demand. We move the trust relationship and identity propagation out of the application implementation and model them as a security topology. When the business comes up with a new e-commerce scenario, IDaaS uses the security topology to adapt a platform-specific security infrastructure for the given business scenario at runtime. Secondly, we protect the confidentiality of PII in federated security domains. We propose our Purpose-based Encryption to protect the disclosure of PII from intermediary entities in a business transaction and from untrusted hosts. Our solution is compliant with the General Data Protection Regulation and involves the least user interaction to prevent identity theft via the human link. The implementation can be easily adapted to existing Identity Management systems, and the performance is fast.
      Citation: Future Internet
      PubDate: 2019-05-15
      DOI: 10.3390/fi11050116
      Issue No: Vol. 11, No. 5 (2019)
  • Future Internet, Vol. 11, Pages 117: Enhancing IoT Data Dependability
           through a Blockchain Mirror Model

    • Authors: Alessandro Bellini, Emanuele Bellini, Monica Gherardelli, Franco Pirri
      First page: 117
      Abstract: The Internet of Things (IoT) is a remarkable data producer, and these data may be used to prevent or detect security vulnerabilities and increase productivity through the adoption of statistical and Artificial Intelligence (AI) techniques. However, these desirable benefits are gained only if data from IoT networks are dependable: this is where blockchain comes into play. In fact, through blockchain, critical IoT data may be trusted, i.e., considered valid for any subsequent processing. A simple formal model named “the Mirror Model” is proposed to connect IoT data organized in traditional models to assets of trust in a blockchain. The Mirror Model sets some formal conditions to produce trusted data that remain trusted over time. A possible practical implementation of an application programming interface (API) is proposed, which keeps the data and the trust model in sync. Finally, it is noted that the Mirror Model enforces a top-down approach from reality to implementation, instead of going the opposite way, as is now the practice when referring to blockchain and the IoT.
      Citation: Future Internet
      PubDate: 2019-05-21
      DOI: 10.3390/fi11050117
      Issue No: Vol. 11, No. 5 (2019)
  • Future Internet, Vol. 11, Pages 118: Intelligent Dynamic Data Offloading
           in a Competitive Mobile Edge Computing Market

    • Authors: Giorgos Mitsis, Pavlos Athanasios Apostolopoulos, Eirini Eleni Tsiropoulou, Symeon Papavassiliou
      First page: 118
      Abstract: Software Defined Networks (SDN) and Mobile Edge Computing (MEC), capable of dynamically managing and satisfying the end-users computing demands, have emerged as key enabling technologies of 5G networks. In this paper, the joint problem of MEC server selection by the end-users and their optimal data offloading, as well as the optimal price setting by the MEC servers is studied in a multiple MEC servers and multiple end-users environment. The flexibility and programmability offered by the SDN technology enables the realistic implementation of the proposed framework. Initially, an SDN controller executes a reinforcement learning framework based on the theory of stochastic learning automata towards enabling the end-users to select a MEC server to offload their data. The discount offered by the MEC server, its congestion and its penetration in terms of serving end-users’ computing tasks, and its announced pricing for its computing services are considered in the overall MEC selection process. To determine the end-users’ data offloading portion to the selected MEC server, a non-cooperative game among the end-users of each server is formulated and the existence and uniqueness of the corresponding Nash Equilibrium is shown. An optimization problem of maximizing the MEC servers’ profit is formulated and solved to determine the MEC servers’ optimal pricing with respect to their offered computing services and the received offloaded data. To realize the proposed framework, an iterative and low-complexity algorithm is introduced and designed. The performance of the proposed approach was evaluated through modeling and simulation under several scenarios, with both homogeneous and heterogeneous end-users.
      Citation: Future Internet
      PubDate: 2019-05-21
      DOI: 10.3390/fi11050118
      Issue No: Vol. 11, No. 5 (2019)
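The stochastic-learning-automata ingredient can be illustrated with the classic linear reward-inaction (L_R-I) update, one standard scheme from that theory; the paper's exact update rule and learning rate are not reproduced here.

```python
def lri_update(probs, chosen, reward, lr=0.1):
    """Linear reward-inaction (L_R-I) update for server selection:
    on a favourable outcome (reward in [0, 1]) the probability of the
    chosen server grows and the others shrink proportionally; on
    reward = 0 nothing changes, hence 'inaction'. The probabilities
    remain a valid distribution (they still sum to 1)."""
    return [p + lr * reward * (1 - p) if i == chosen
            else p * (1 - lr * reward)
            for i, p in enumerate(probs)]
```

Iterating this after each offloading round makes an end-user's selection probabilities converge toward the MEC server that has repeatedly rewarded it (low congestion, good discount).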
  • Future Internet, Vol. 11, Pages 119: The Next Generation Platform as A
           Service: Composition and Deployment of Platforms and Services

    • Authors: Angelos Mimidis-Kentis, Jose Soler, Paul Veitch, Adam Broadbent, Marco Mobilio, Oliviero Riganelli, Steven Van Rossem, Wouter Tavernier, Bessem Sayadi
      First page: 119
      Abstract: The emergence of widespread cloudification and virtualisation promises increased flexibility, scalability, and programmability for the deployment of services by Vertical Service Providers (VSPs). This cloudification also improves service and network management, reducing Capital and Operational Expenses (CAPEX, OPEX). A truly cloud-native approach is essential, since 5G will provide a diverse range of services, many requiring stringent performance guarantees, while maximising flexibility and agility despite the technological diversity. This paper proposes a workflow based on the principles of build-to-order, Build-Ship-Run, and automation, following the Next Generation Platform as a Service (NGPaaS) vision. Through the concept of Reusable Functional Blocks (RFBs), an enhancement to Virtual Network Functions, this methodology allows a VSP to deploy and manage platforms and services agnostic to the underlying technologies, protocols, and APIs. To validate the proposed workflow, a use case is also presented, which illustrates both the deployment of the underlying platform by the telco operator and of the services that run on top of it. In this use case, the NGPaaS operator enables a VSP to provide Virtual Network Function as a Service (VNFaaS) capabilities for its end customers.
      Citation: Future Internet
      PubDate: 2019-05-21
      DOI: 10.3390/fi11050119
      Issue No: Vol. 11, No. 5 (2019)
  • Future Internet, Vol. 11, Pages 120: Characteristics of Cyberstalking
           Behavior, Consequences, and Coping Strategies: A Cross-Sectional Study in
           a Sample of Italian University Students

    • Authors: Begotti, Acquadro Maran
      First page: 120
      Abstract: Aims: The aim of this study was to compare victims of one type of cyberstalking (OneType) with victims of more than one type of cyberstalking (MoreType) regarding (1) the impact of cyberstalking and (2) attitudes related to telling someone about the experience of cyberstalking and the coping strategies used by victims. Methods: A self-administered questionnaire was distributed to over 250 students at the University of Torino. Results: About half of the participants experienced at least one incident of cyberstalking. Among them, more than half experienced more than one type of cyberstalking. Victims suffered from depression more than those who had never experienced cyberstalking. No statistically significant difference emerged for anxiety. The coping strategies used by MoreType were more varied than those used by OneType victims of cyberstalking. Moreover, MoreType victims told someone about their victimization more than OneType victims. Conclusion: The work presented suggests implications for health care professionals, police officers, and government. For example, our suggestion is to pay attention to cyberstalking victims and provide flyers in schools, universities, and cafeterias that explain the risk of certain online behaviors and their consequences in physical and emotional spheres.
      Citation: Future Internet
      PubDate: 2019-05-22
      DOI: 10.3390/fi11050120
      Issue No: Vol. 11, No. 5 (2019)
  • Future Internet, Vol. 11, Pages 204: No-Reference Depth Map Quality
           Evaluation Model Based on Depth Map Edge Confidence Measurement in
           Immersive Video Applications

    • Authors: Dogan, Haddad, Ekmekcioglu, Kondoz
      First page: 204
      Abstract: Two main approaches are typically used to evaluate the perceptual quality of digital media for overall quality of experience assessment in immersive video applications: subjective and objective quality evaluation. Subjective quality evaluation offers the best representation of video quality as perceived by real viewers, but it consumes a significant amount of time and effort because it involves real users in lengthy and laborious assessment procedures. An objective quality evaluation model that predicts the quality of rendered virtual views from the depth maps used in the rendering process therefore enables much faster quality assessments in immersive video applications. This is particularly important given the lack of a suitable reference or ground truth against which to compare the available depth maps, especially when these applications offer live content services. This paper presents a no-reference depth map quality evaluation model based on a proposed depth map edge confidence measurement technique that helps to accurately estimate the quality of rendered (virtual) views in immersive multi-view video content. Applied to depth image-based rendering in the multi-view video format, the model provides evaluation results comparable to those in the literature, and often exceeds their performance.
      Citation: Future Internet
      PubDate: 2019-09-20
      DOI: 10.3390/fi11100204
      Issue No: Vol. 11, No. 10 (2019)
  • Future Internet, Vol. 11, Pages 205: Time and Power Allocation for Energy
           Efficiency Maximization in Wireless-Powered Full-Duplex Relay Systems

    • Authors: Song, Ni, Han, Qin, Dong
      First page: 205
      Abstract: In this paper, we propose an optimal time and power allocation scheme for a wireless-powered full-duplex (FD) relay system that accounts for the number of relay antennas used in the energy harvesting stage. We formulate the system's energy efficiency optimization problem in terms of the time-switching factor and the power allocation. For both the FD dual-antenna and the FD single-antenna energy harvesting systems, the energy efficiency is proven to be concave in the time-switching factor, and the optimal time-switching factor is derived in closed form using the Lambert W function. Then, for the resulting range of the optimal time-switching factor, the optimal power allocation is obtained by analyzing the derivative of the system energy efficiency and exploiting the properties of the Lambert W function. The time-switching factor and transmission power are thus jointly optimized at the wireless-powered FD relay. Results reveal that dual-antenna energy harvesting at the FD relay achieves higher energy efficiency than single-antenna harvesting. Moreover, our results demonstrate that FD relay systems consistently and substantially boost energy efficiency compared with half-duplex (HD) relay systems.
      Citation: Future Internet
      PubDate: 2019-09-20
      DOI: 10.3390/fi11100205
      Issue No: Vol. 11, No. 10 (2019)
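    • The Lambert W technique mentioned in the abstract above can be illustrated with a minimal, generic energy-efficiency maximization. This is not the paper's system model: the objective EE(p) = ln(1 + p) / (p + P_c), with a hypothetical circuit-power constant P_c, is a simplified textbook stand-in chosen only because its stationarity condition admits a closed form via the Lambert W function, which the sketch cross-checks numerically:

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import minimize_scalar

# Hypothetical energy-efficiency objective: achievable rate over total
# consumed power, with a fixed circuit-power term P_c (illustrative only).
P_c = 2.0

def energy_efficiency(p):
    """EE(p) = ln(1 + p) / (p + P_c) for transmit power p > 0."""
    return np.log1p(p) / (p + P_c)

# Setting dEE/dp = 0 gives ln(1 + p) = (p + P_c) / (1 + p), whose unique
# positive root has the closed form p* = exp(W((P_c - 1)/e) + 1) - 1,
# where W is the principal branch of the Lambert W function.
p_closed = np.exp(lambertw((P_c - 1) / np.e).real + 1) - 1

# Numerical cross-check: EE is unimodal in p, so a bounded scalar search
# should recover the same maximizer.
res = minimize_scalar(lambda p: -energy_efficiency(p),
                      bounds=(1e-9, 100.0), method="bounded")

print(f"closed-form p* = {p_closed:.4f}, numerical p* = {res.x:.4f}")
```

      The same pattern (take the derivative, rearrange into the form t·e^t = const, invert with W) is what makes closed-form time-switching and power solutions possible in systems like the one the abstract describes.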
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
JournalTOCs © 2009-