Please help us test our new pre-print finding feature by giving the pre-print link a rating. A 5 star rating indicates the linked pre-print has the exact same content as the published article.
Authors: Andrzej Goscinski; Elisa Bertino; Shangguang Wang
Pages: 588-590
Abstract: In the last decade there has been a strong move towards mobile computing and the proliferation of the IoT (Internet of Things). A huge number of devices have been connected to the Internet, creating zettabytes of data. To extract value from such massive data volumes, the processing power offered by cloud computing is often utilized. However, streaming data to the cloud exposes limitations related to increased communication and data transfer, which introduces delays and consumes network bandwidth. Another limitation of cloud-based computing for IoT is limited or no network connectivity. Cloud-based processing of IoT-generated data also raises concerns about the sensitivity of the information: sending and storing so much information in the cloud involves privacy and security challenges related to protecting personally identifiable information, storing it in compliance with privacy laws, securing stored information, and preventing it from being stolen, or accessed and shared illegally. The use of AI in edge processing has resulted in a new interdisciplinary field that enables distributed intelligence with edge devices, known as edge AI or edge intelligence. However, research on edge AI is still relatively new, and thus models, techniques, and protocols supporting intelligent management, querying, and mining of the large-scale amounts of data produced at the edge are required. The challenges of providing edge intelligence include training edge devices so that they become increasingly smart. There is also a need to present the most recent outcomes of research on distributed intelligence. The papers in this special issue address many of the challenges we have outlined and are briefly summarized below.
PubDate: March-April 2022
Issue No: Vol. 15, No. 2 (2022)
Authors: Peiying Lin; Zhichen Shi; Zheng Xiao; Cen Chen; Kenli Li
Pages: 591-601
Abstract: Deep learning services based on cloud computing have deficiencies in latency, privacy, etc. To meet the requirement of low latency, researchers have begun to consider deploying deep learning services at the edge, i.e., edge intelligence services. Deploying deep learning models on multiple processors or devices, so that the computation of a model can be conducted in parallel, is a possible way to improve the efficiency of edge intelligence services. In this article, we propose a novel latency-driven deep learning model placement method for efficient edge intelligence services. Model placement comprises two procedures: model partition and sub-model assignment. In our method, we first convert the model into an execution graph and propose a novel latency-driven multilevel graph partition for it. The partitioned sub-models are then heuristically assigned to available processors. To the best of our knowledge, this is the first work that proposes latency-driven graph partition algorithms for model placement. Extensive experiments on several commonly used DNN (deep neural network) models and synthetic datasets show that our method achieves the lowest execution latency with low complexity compared with other state-of-the-art model placement methods.
PubDate: March-April 2022
Issue No: Vol. 15, No. 2 (2022)
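The partition-then-assign flow described in this abstract can be sketched as follows. This is an illustrative Python sketch, not the authors' algorithm: the layer costs and processor speeds are hypothetical, and a greedy topological split stands in for the paper's latency-driven multilevel graph partition.

```python
# Simplified sketch of latency-driven model placement: split a model into
# sub-models, then heuristically assign sub-models to processors.

def partition_by_compute(layers, k):
    """Split a topologically ordered list of (name, compute_cost) layers
    into k contiguous sub-models with roughly balanced compute."""
    total = sum(c for _, c in layers)
    target = total / k
    parts, current, acc = [], [], 0.0
    for name, cost in layers:
        current.append(name)
        acc += cost
        if acc >= target and len(parts) < k - 1:
            parts.append(current)
            current, acc = [], 0.0
    parts.append(current)
    return parts

def assign_to_processors(parts, layers, speeds):
    """Heuristically assign each sub-model to the processor that minimizes
    its finish time, assuming sub-models execute in pipeline order."""
    cost = dict(layers)
    finish = [0.0] * len(speeds)
    placement = []
    for part in parts:
        work = sum(cost[n] for n in part)
        best = min(range(len(speeds)),
                   key=lambda p: finish[p] + work / speeds[p])
        finish[best] += work / speeds[best]
        placement.append(best)
    return placement, max(finish)
```

The real method partitions a general execution graph with communication costs on edges; this sketch only captures the compute-balancing intuition.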
Authors: Ivan Lujic; Vincenzo De Maio; Srikumar Venugopal; Ivona Brandic
Pages: 602-613
Abstract: Near real-time edge analytics requires dealing with the rapidly growing amount of data, limited resources, and high failure probabilities of edge nodes. Data replication is therefore of vital importance to meet SLOs such as service availability and failure resilience. Consequently, specific input datasets requested by on-demand analytics (e.g., object detection) can be present at different locations over time. This can prevent exploitation of data locality and timely decision-making. State-of-the-art solutions for on-demand edge analytics placement either fail to provide low-latency access to user-requested input data or do not consider data locality. We propose SEA-LEAP (Self-adaptive and Locality-aware Edge Analytics Placement), a framework that includes a new mechanism for tracking data movements, on top of which we devise a generic control mechanism. SEA-LEAP enables on-the-fly placement of on-demand analytics at the most appropriate dataset location, minimizing the overall execution time of analytics requests. We conduct experiments using (i) a real-world object detection application, (ii) image datasets as input, (iii) self-designed benchmarks, and (iv) a heterogeneous edge infrastructure using Kubernetes. Experimental results show the ability to efficiently deploy on-demand analytics and reduce total latency by 65.85 percent on average through adaptive data movements, indicating a promising solution for edge multi-cluster and hybrid environments.
PubDate: March-April 2022
Issue No: Vol. 15, No. 2 (2022)
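The placement decision at the heart of this abstract (run on-demand analytics wherever transfer plus compute time is minimal) can be sketched as below. The cost model and the `locations` map are hypothetical stand-ins; the actual framework also tracks data movements and adapts placement at runtime.

```python
def place_analytics(request_size_mb, locations):
    """Locality-aware placement sketch: for each candidate location,
    estimate total time as data-transfer time (if the input dataset is
    remote) plus compute time, and run the analytics where the total is
    minimal. `locations` maps name -> (has_data, bandwidth_mbps, compute_s)."""
    def total_time(loc):
        has_data, bw, compute = locations[loc]
        transfer = 0.0 if has_data else request_size_mb * 8 / bw
        return transfer + compute
    return min(locations, key=total_time)
```

For large inputs the data-holding edge node wins despite slower compute; for small inputs a faster remote node can win, which is exactly the trade-off locality-aware placement exploits.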
Authors: Jianxin Zhao; Rui Han; Yongkai Yang; Benjamin Catterall; Chi Harold Liu; Lydia Y. Chen; Richard Mortier; Jon Crowcroft; Liang Wang
Pages: 614-626
Abstract: With the massive amount of data generated from mobile devices and the increasing computing power of edge devices, the paradigm of Federated Learning has gained great momentum. In federated learning, distributed and heterogeneous nodes collaborate to learn model parameters. However, while providing benefits such as privacy by design and reduced latency, heterogeneous networks present challenges to the synchronisation (barrier control) methods used in training, affecting system progress and model convergence. The design of these barrier mechanisms is critical for the performance and scalability of federated learning systems. We propose a new barrier control technique called Probabilistic Synchronous Parallel (PSP). In contrast to existing mechanisms, it introduces a sampling primitive that composes with existing barrier control mechanisms to produce a family of mechanisms with improved convergence speed and scalability. Our proposal is supported by a convergence analysis of the PSP-based SGD algorithm. In practice, we also propose heuristic techniques that further improve the efficiency of PSP. We evaluate the performance of the proposed methods using the federated-learning-specific FEMNIST dataset. The evaluation results show that PSP can effectively achieve a good balance between system efficiency and model accuracy, mitigating the challenge of heterogeneity in federated learning.
PubDate: March-April 2022
Issue No: Vol. 15, No. 2 (2022)
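The sampling primitive that PSP contributes can be illustrated by composing it with a Stale Synchronous Parallel (SSP) inner barrier: each worker checks the staleness condition against a random sample of peers rather than all of them. This is a minimal sketch under assumed semantics, not the paper's implementation; `beta` and `staleness` are illustrative parameters.

```python
import random

def psp_barrier(steps, me, beta, staleness, rng):
    """Probabilistic Synchronous Parallel sketch: worker `me` samples
    `beta` peers and applies an SSP test on the sample only. It may
    advance unless some sampled peer lags behind its own step count by
    more than `staleness`."""
    peers = [w for w in range(len(steps)) if w != me]
    sample = rng.sample(peers, min(beta, len(peers)))
    return all(steps[me] - steps[w] <= staleness for w in sample)
```

With `beta` equal to the number of peers this degenerates to plain SSP; smaller samples trade a probabilistic consistency guarantee for scalability, which is the core idea of the technique.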
Authors: Chaofeng Zhang; Mianxiong Dong; Kaoru Ota
Pages: 627-639
Abstract: As an inevitable part of intelligent services in the new era, services for AI tasks themselves have received significant attention. Owing to the scarcity of energy and computing resources, such services are difficult to implement in a stable, widely distributed system that coordinately utilizes remote edge devices and the cloud. In this article, we introduce an AI-based holistic network optimization solution to schedule AI services. Our proposed deep Q-learning algorithm optimizes the overall throughput of AI co-inference tasks by balancing uneven computation resources and traffic conditions. We use a multi-hop DAG (Directed Acyclic Graph) to describe a deep neural network (DNN) based co-inference network structure and introduce virtual queues to analyze the Lyapunov stability of the system. Then, a priority-based data forwarding strategy is proposed to maximize bandwidth efficiency, and we develop a Real-time Deep Q-learning based Edge Forwarding Scheme Optimization Algorithm (RDFO) to maximize the overall task processing rate. Finally, we conduct a platform simulation of the distributed co-inference system. Through comparison with other benchmarks, we demonstrate the optimality of our proposal.
PubDate: March-April 2022
Issue No: Vol. 15, No. 2 (2022)
Authors: Min Xue; Huaming Wu; Guang Peng; Katinka Wolter
Pages: 640-655
Abstract: With the rapid development of the Internet of Things (IoT) and communication technology, Deep Neural Network (DNN) applications such as computer vision can now be widely used in IoT devices. However, due to the insufficient memory, low computing capacity, and low battery capacity of IoT devices, it is difficult to support high-efficiency DNN inference and meet users' requirements for Quality of Service (QoS). Worse still, offloading failures may occur during massive DNN data transmissions due to intermittent wireless connectivity between IoT devices and the cloud. To fill this gap, we consider the partitioning and offloading of DNN models, and design a novel optimization method for parallel offloading of large-scale DNN models in a local-edge-cloud collaborative environment with limited resources. Combining the coupling coordination degree and the node balance degree, an improved Double Dueling Prioritized deep Q-Network (DDPQN) algorithm is proposed to obtain the DNN offloading strategy. Compared with existing algorithms, DDPQN obtains an efficient DNN offloading strategy with low delay, low energy consumption, and low cost while ensuring "delay-energy-cost" coordination and reasonable allocation of computing resources in a local-edge-cloud collaborative environment.
PubDate: March-April 2022
Issue No: Vol. 15, No. 2 (2022)
Authors: Dionysis Athanasopoulos; Dewei Liu
Pages: 656-668
Abstract: Given that cloud servers are usually located remotely from the devices of mobile apps, the end-users of the apps can face delays. The Fog has been introduced to augment apps with machines located at the network edge, close to the end-users. However, edge machines are usually resource-constrained. Thus, executing online data-analytics on edge machines may not be feasible if the time complexity of the data-analytics algorithm is high. To overcome this, multiple instances of the back-end should be deployed on edge and remote machines. In this case, the research question is how the switching of the app among the instances of the back-end can be decided dynamically based on the response time of the service instances. To answer this, we contribute an AI approach that trains machine-learning models of the response time of service instances. Our approach extends a back-end as a service into an AI self-back-end as a service that self-decides at runtime the right edge/remote instance that achieves the lowest response time. We evaluate the accuracy and the efficiency of our approach using real-world machine-learning datasets on an existing auction app.
PubDate: March-April 2022
Issue No: Vol. 15, No. 2 (2022)
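The self-deciding back-end selection described above amounts to maintaining a per-instance response-time predictor and routing each request to the predicted-fastest instance. In this sketch an exponential moving average stands in for the paper's machine-learning models, and the instance names are hypothetical.

```python
class InstanceSelector:
    """Sketch of runtime back-end selection: keep a lightweight
    per-instance response-time predictor and route each request to the
    instance with the lowest predicted response time."""

    def __init__(self, instances, alpha=0.5):
        self.alpha = alpha                      # smoothing factor
        self.pred = {name: None for name in instances}

    def observe(self, name, response_time):
        """Update the predictor with a measured response time."""
        prev = self.pred[name]
        self.pred[name] = response_time if prev is None else (
            self.alpha * response_time + (1 - self.alpha) * prev)

    def choose(self):
        """Pick the instance to serve the next request."""
        # Prefer unobserved instances so every one gets probed once.
        for name, p in self.pred.items():
            if p is None:
                return name
        return min(self.pred, key=self.pred.get)
```

A trained regression model would replace the moving average when response time depends on features such as payload size or time of day.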
Authors: Zheyuan Hu; Jianwei Niu; Tao Ren; Bin Dai; Qingfeng Li; Mingliang Xu; Sajal K. Das
Pages: 669-683
Abstract: Mobile edge computing (MEC) has been envisioned as a promising paradigm that can effectively enhance the computational capacity of wireless user devices (WUDs) and the quality of experience of mobile applications. One of the most crucial issues in MEC is computation offloading, which decides how to offload WUDs' tasks to edge servers for further intensive computation. Conventional mathematical-programming-based offloading approaches can run into trouble in dynamic MEC environments due to time-varying channel conditions (caused primarily by WUD mobility). To address this problem, reinforcement learning (RL) based offloading approaches have been proposed, which develop offloading policies by mapping MEC states to offloading actions. However, these approaches can fail to converge in large-scale MEC due to the exponentially growing state and action spaces. In this article, we propose a novel online computation offloading approach that can effectively reduce task latency and energy consumption in dynamic MEC with large-scale WUDs. First, an RL-based computation offloading and energy transmission algorithm is proposed to accelerate the learning process. Then, a joint optimization method is adopted to develop the allocation algorithm, which obtains near-optimal solutions for energy and computation resource allocation. Simulation results show that the proposed approach converges efficiently and achieves significant performance improvements over baseline approaches.
PubDate: March-April 2022
Issue No: Vol. 15, No. 2 (2022)
Authors: Lanlan Rui; Xushan Chen; Xiaomei Wang; Zhipeng Gao; Xuesong Qiu; Shangguang Wang
Pages: 684-697
Abstract: With the development of complex networks and increasing service demands, services are being used and composed in increasingly complicated ways. In the XaaS (X as a Service) environment, users only care about the QoE of a service and not about its composition process. It is therefore important to evaluate the reliability of the entire service. In this article, we use Petri nets as the basis for modeling the composition of services. In addition, we consider the problems of shared resources and common-cause faults, both of which can cause network congestion and regional failures. We use distance to assess the effects of regional faults and queuing theory to simulate the network congestion process. Moreover, in simulations, we verify the impacts of regional failures and network congestion on service reliability. We choose the Tree-Based Search algorithm and the Semi-Markov Model as comparison algorithms. The results of our algorithm are related to service time. Our algorithm reflects the impact of regional failure or network congestion in a timely manner, and it can feed back different evaluation results according to environmental changes. Therefore, our algorithm is more comprehensive and performs better.
PubDate: March-April 2022
Issue No: Vol. 15, No. 2 (2022)
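The queuing-theory ingredient mentioned in this abstract can be illustrated with the simplest building block, the M/M/1 mean response time. This is a generic queueing formula used here for illustration, not the paper's full Petri-net-based reliability model.

```python
def mm1_response_time(arrival_rate, service_rate):
    """M/M/1 mean response time W = 1 / (mu - lambda), a minimal model
    of the congestion delay a composed service experiences at a shared
    resource. Rates are in requests per unit time."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)
```

The formula makes the congestion effect concrete: as the arrival rate approaches the service rate, response time (and thus the chance of violating a service deadline) grows without bound.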
Authors: Mohammad Allahbakhsh; Haleh Amintoosi; Aleksandar Ignjatovic; Elisa Bertino
Pages: 698-709
Abstract: Social rating systems are widely used for gathering user feedback on the quality of products, items, and services. Social rating systems accept various forms of numeric and non-numeric recommendations as input to their aggregation algorithms. Fuzzy recommendations, as one form of input, while common in areas such as stock markets and educational systems, are challenging to aggregate and scale. Taking into account the trust and experience of raters while aggregating fuzzy variables is another challenge that needs investigation. In this article, we propose a trust-based, experience-aware method for aggregating fuzzy recommendations. We propose to use the trust and experience of raters, along with the area under the membership curve of the fuzzy recommendations, to compute a weight for each recommendation. We then present an iterative algorithm to aggregate these weighted recommendations. We evaluate our method using a real-world dataset and compare its performance with three well-known iterative algorithms. The comparison results show the superiority of our method over related approaches.
PubDate: March-April 2022
Issue No: Vol. 15, No. 2 (2022)
Authors: Xiangyu Wang; Jianfeng Ma; Yinbin Miao; Ximeng Liu; Ruikang Yang
Pages: 710-723
Abstract: With the development of the Mobile Healthcare Monitoring Network (MHMN), patients' data collected by body sensors not only allows patients to monitor their health or obtain online pre-diagnoses, but also enables clinicians to make proper decisions by utilizing data mining techniques. However, the privacy of sensitive data is still a major concern. In this article, we propose practical techniques for searching over encrypted data and making online pre-diagnoses. First, we propose a new Diverse Keyword Searchable Encryption (DKSE) scheme that supports multi-dimensional digital-vector range queries and textual multi-keyword ranked search, enabling a broad range of practical applications. In addition, a framework called PRIDO, based on DKSE, is designed to protect patients' personal data in data mining and online pre-diagnosis. Within the PRIDO framework, we achieve privacy-preserving naïve Bayesian and decision-tree classifiers and discuss their potential applications in actual deployments. Security analysis proves that patients' data privacy can be well protected without loss of data confidentiality, and performance evaluation demonstrates efficiency and accuracy in diverse keyword search, data mining, and disease pre-diagnosis, respectively.
PubDate: March-April 2022
Issue No: Vol. 15, No. 2 (2022)
Authors: Yang Chen; Wenmin Li; Fei Gao; Qiaoyan Wen; Hua Zhang; Huawei Wang
Pages: 724-735
Abstract: Attribute-based keyword search (ABKS) has broad prospects for providing search services to users and realizing fine-grained access control over ciphertext in the context of cloud computing. However, two open problems prevent the further development and application of ABKS. First, most ABKS schemes inherently suffer from inside keyword guessing attacks (KGA), which are a great threat to the security of the scheme. Second, existing ABKS schemes focus on single or conjunctive keyword search; these inflexible retrieval modes may cause efficiency loss through inaccurate positioning of users' interests and greatly degrade the user search experience. In this article, we introduce a semi-trusted server and build a dual-server model. Based on the dual-server model and our proposed techniques, we are the first to put forward an attribute-based multi-keyword ranked search scheme against inside keyword guessing attacks (ABKRS-KGA), solving the two problems simultaneously. In our scheme, user queries contain weighted keywords, and the returned files can be ranked according to the user's query interest. We provide strict security definitions for two types of adversaries, and we are the first to prove that the construction is adaptively secure against both chosen-keyword attacks (CKA) and KGA. Finally, comprehensive simulations with a real-world dataset are implemented for the proposed scheme, and the simulation results show that its efficiency is acceptable.
PubDate: March-April 2022
Issue No: Vol. 15, No. 2 (2022)
Authors: Chang-ai Sun; Hepeng Dai; Guan Wang; Dave Towey; Tsong Yueh Chen; Kai-Yuan Cai
Pages: 736-751
Abstract: In recent years, service-oriented architecture (SOA) has been increasingly adopted to develop distributed applications in the context of the Internet. To develop reliable SOA-based applications, an important issue is how to ensure the quality of web services. In this article, we propose a dynamic random testing (DRT) technique for web services, which improves on the widely practiced random testing (RT) and partition testing (PT) approaches. We examine key issues in adapting DRT to the context of SOA, including a framework, guidelines for parameter settings, and a prototype for such an adaptation. Empirical studies are reported in which DRT is used to test three real-life web services, with mutation analysis employed to measure effectiveness. Our experimental results show that, compared with the three baseline techniques, RT, Adaptive Testing (AT), and Random Partition Testing (RPT), DRT demonstrates higher fault-detection effectiveness with lower test case selection overhead. Furthermore, the theoretical guidelines for DRT parameter settings are confirmed to be effective. The proposed DRT technique and prototype provide an effective and efficient approach for testing web services.
PubDate: March-April 2022
Issue No: Vol. 15, No. 2 (2022)
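DRT's key difference from plain random testing is that partition selection probabilities are adjusted after each test outcome. A minimal sketch of one such adjustment rule follows; the `eps` and `delta` values are illustrative (the paper provides theoretical guidelines for choosing them), and the redistribution rule is a simplification.

```python
def drt_update(probs, i, failed, eps=0.05, delta=0.02):
    """Dynamic random testing probability adjustment sketch: after
    executing a test drawn from partition i, raise its selection
    probability by eps if the test revealed a fault, otherwise lower it
    by delta, redistributing the difference evenly over the other
    partitions and keeping everything in [0, 1]."""
    n = len(probs)
    change = eps if failed else -delta
    change = max(-probs[i], min(change, 1.0 - probs[i]))  # keep p_i valid
    new = list(probs)
    new[i] += change
    for j in range(n):
        if j != i:
            new[j] -= change / (n - 1)
    # Clamp tiny negatives from redistribution and renormalize.
    new = [max(0.0, p) for p in new]
    s = sum(new)
    return [p / s for p in new]
```

The effect is that test selection drifts toward partitions that have recently revealed faults, which is where the fault-detection advantage over uniform random testing comes from.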
Authors: Qi Mo; Wei Song; Fei Dai; Leilei Lin; Tong Li
Pages: 752-765
Abstract: Collaborative business processes gather a set of business processes with complementary competencies and knowledge that cooperate to achieve greater business success. To ensure their successful implementation, correctness is a key issue that needs to be addressed during development. To this end, a novel correctness enforcement approach is proposed to support the development of collaborative business processes. In this approach, we first give an algorithm to check the correctness of an original process specified by Petri nets. Then, when the process is only partially correct, we prune its reachability graph to obtain its core: a reduced reachability graph that does not cover invalid states. Finally, we generate a set of controllers from the core using coordination mapping (i.e., inserting coordination activities into controllers), and an enforced process is built by composing the original process with the controllers. Our approach is implemented as an analysis module called cetool in the PIPE (Platform Independent Petri Net Editor) and is validated on a set of real-world cases. The results show the effectiveness and efficiency of the proposed approach.
PubDate: March-April 2022
Issue No: Vol. 15, No. 2 (2022)
Authors: Weiwei Lin; Wentai Wu; Ligang He
Pages: 766-777
Abstract: As data centers consume massive amounts of energy, improving the energy efficiency of cloud computing has emerged as a focus of research. However, it is challenging to reduce energy consumption while maintaining system performance without increasing the risk of Service Level Agreement violations. Most existing consolidation approaches for virtual machines (VMs) treat system performance and Quality of Service (QoS) metrics as constraints, which usually results in large scheduling overhead and makes it impossible to effectively improve energy efficiency without sacrificing some system performance and cloud service quality. In this article, we first define the metrics of peak power efficiency and optimal utilization for heterogeneous physical machines (PMs). We then propose Peak Efficiency Aware Scheduling (PEAS), a novel strategy of VM placement and reallocation for achieving dual improvements in performance and energy conservation from the perspective of server clusters. PEAS allocates and reallocates VMs in an online manner and always attempts to keep PMs working at their peak power efficiency via VM consolidation. Extensive experiments on CloudSim show that PEAS outperforms several energy-aware consolidation algorithms with regard to energy consumption, system performance, and multiple QoS metrics.
PubDate: March-April 2022
Issue No: Vol. 15, No. 2 (2022)
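The peak-efficiency-aware placement idea can be sketched as below. The PM tuples are hypothetical, and `peak_util` (the utilization at which a machine's power efficiency peaks, derivable from its power model) is taken as given; the paper additionally handles reallocation and consolidation, which this sketch omits.

```python
def peas_place(vm_load, pms):
    """Peak-efficiency-aware placement sketch: each PM is described by
    (used, capacity, peak_util). The VM goes to the feasible PM whose
    post-placement utilization lands closest to its peak-efficiency
    point; returns the PM index, or None if no PM has room."""
    best, best_gap = None, None
    for idx, (used, cap, peak) in enumerate(pms):
        if used + vm_load > cap:
            continue  # infeasible: would exceed capacity
        gap = abs((used + vm_load) / cap - peak)
        if best is None or gap < best_gap:
            best, best_gap = idx, gap
    return best
```

Unlike greedy bin-packing, the rule deliberately avoids both under-loaded and over-loaded machines, since power efficiency (useful work per watt) is worst at the extremes.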
Authors: Teng Wang; Xinyu Yang; Xuebin Ren; Wei Yu; Shusen Yang
Pages: 778-792
Abstract: With the increasing popularity of crowdsourcing services, high-dimensional crowdsourced data provides a wealth of knowledge. However, unprecedented privacy threats to participants have emerged, due to complex correlations among multiple attributes and the vulnerabilities of untrusted crowdsourcing servers. Differential-privacy-based paradigms have been proposed to release privacy-preserving datasets with statistical approximation. Nonetheless, most existing schemes are limited when facing highly correlated attributes and cannot prevent privacy threats from untrusted crowdsourcing servers. To address this issue, we propose two novel solutions, LoCop and DR_LoCop, which guarantee local differential privacy based on the randomized response technique while synthesizing and releasing high-dimensional crowdsourced data with high utility. In particular, LoCop leverages copula theory to synthesize high-dimensional crowdsourced data via univariate marginal distributions and attribute dependence. Univariate marginal distributions are estimated by a Lasso-based regression algorithm from aggregated privacy-preserving bit strings. Dependencies among attributes are modeled as a multivariate Gaussian copula. Building on LoCop, the enhanced solution DR_LoCop not only takes advantage of the C-vine copula to capture conditional dependencies among high-dimensional attributes but also achieves dimension reduction. Extensive experiments on real-world datasets demonstrate that our solutions substantially outperform state-of-the-art techniques in terms of both data utility and computational overhead.
PubDate: March-April 2022
Issue No: Vol. 15, No. 2 (2022)
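The local-differential-privacy guarantee in this abstract rests on the randomized response technique. A minimal sketch of the basic binary primitive and its unbiased frequency estimator follows; the actual scheme perturbs encoded bit strings and layers copula-based synthesis on top, which this sketch does not attempt.

```python
import math, random

def randomized_response(bit, epsilon, rng):
    """Basic randomized response: report the true bit with probability
    e^eps / (1 + e^eps), otherwise flip it. Satisfies eps-local DP."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if rng.random() < p else 1 - bit

def estimate_frequency(reports, epsilon):
    """Debias the observed frequency of 1s: since E[observed] equals
    true*(2p-1) + (1-p), invert that affine map to recover the truth."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)
```

The server never sees any individual's true bit, yet the aggregate frequency is still recoverable, which is exactly the property an untrusted crowdsourcing server requires.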
Authors: Di Wu; Qiang He; Xin Luo; Mingsheng Shang; Yi He; Guoyin Wang
Pages: 793-805
Abstract: Neighborhood regularization is highly important for a latent factor (LF)-based Quality-of-Service (QoS) predictor, since similar users usually experience similar QoS when invoking similar services. Current neighborhood-regularized LF models rely on prior neighborhood information obtained from raw QoS data or geographical information. The former suffers from low prediction accuracy due to the difficulty of constructing the neighborhood from incomplete QoS data, while the latter requires additional geographical information that is usually difficult to collect given information security, identity privacy, and commercial interests in real-world scenarios. To address these issues, this work proposes a posterior-neighborhood-regularized LF (PLF) model for QoS prediction. The main idea is to decompose the LF analysis process into three phases: a) primal LF extraction, where LFs are extracted to represent the involved users/services based on known QoS data; b) posterior-neighborhood construction, where the neighborhood of each user/service is obtained from the similarities between their primal LF vectors; and c) posterior-neighborhood-regularized LF analysis, where the objective function is regularized by both the posterior neighborhood of users/services and the $L_2$-norm of the desired LFs. Experimental results on large-scale QoS datasets demonstrate that PLF outperforms state-of-the-art models in terms of both accuracy and efficiency.
PubDate: March-April 2022
Issue No: Vol. 15, No. 2 (2022)
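The third phase (neighborhood-regularized latent-factor analysis) can be sketched as plain SGD matrix factorization with one extra term pulling each user's latent vector toward its neighbors'. This is a simplified single-phase illustration with hypothetical hyperparameters, not the paper's three-phase PLF; in PLF the neighborhoods themselves come from a prior LF extraction step, whereas here they are given.

```python
def train_plf(Q, known, neighbors, f=2, lr=0.02, reg=0.01, nreg=0.01,
              epochs=1000):
    """Neighborhood-regularized LF sketch: factorize the user-service
    QoS matrix Q from the `known` (i, j) entries only, with L2
    regularization plus a term pulling each user's latent vector toward
    the average of its neighbors' vectors."""
    nu, ns = len(Q), len(Q[0])
    # Small deterministic initialization.
    U = [[0.1 * (i + k + 1) for k in range(f)] for i in range(nu)]
    S = [[0.1 * (j + k + 1) for k in range(f)] for j in range(ns)]
    for _ in range(epochs):
        for i, j in known:
            err = Q[i][j] - sum(U[i][k] * S[j][k] for k in range(f))
            for k in range(f):
                u, s = U[i][k], S[j][k]
                navg = sum(U[n][k] for n in neighbors[i]) / len(neighbors[i])
                U[i][k] += lr * (err * s - reg * u - nreg * (u - navg))
                S[j][k] += lr * (err * u - reg * s)
    return U, S

def predict(U, S, i, j):
    """Predicted QoS of service j for user i."""
    return sum(a * b for a, b in zip(U[i], S[j]))
```

The neighborhood term encodes the premise quoted in the abstract: similar users should end up with similar latent vectors, and hence similar QoS predictions.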
Authors: Chuan Zhang; Liehuang Zhu; Chang Xu; Kashif Sharif; Kai Ding; Ximeng Liu; Xiaojiang Du; Mohsen Guizani
Pages: 806-818
Abstract: The vehicle platoon, a novel driving paradigm that organizes a group of vehicles in a nose-to-tail structure, has been considered a potential solution to reduce traffic congestion and increase travel comfort. In such a platoon system, head vehicles' performance is usually evaluated through user vehicles' feedback. Selecting an appropriate and reliable head vehicle without disclosing user vehicles' privacy is thus an interesting problem. In this article, we present a trust-based and privacy-preserving platoon recommendation scheme, called TPPR, to enable potential user vehicles to avoid selecting malicious head vehicles. The basic concept of TPPR is that each user vehicle holds a trust value, and the reputation score of a head vehicle is calculated via a truth discovery process. To preserve vehicles' privacy, pseudonyms and the Paillier cryptosystem are applied. In addition, novel authentication protocols are designed to ensure that only valid vehicles (i.e., vehicles holding truthful trust values and joining the platoon) can pass authentication. A comprehensive security analysis proves that the proposed TPPR scheme is secure against several sophisticated attacks in vehicular ad hoc networks. Moreover, extensive simulations demonstrate the correctness and effectiveness of the proposed scheme.
PubDate: March-April 2022
Issue No: Vol. 15, No. 2 (2022)
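The truth discovery process used for reputation scores can be illustrated with a CRH-style iteration (a standard truth-discovery formulation used here as a stand-in; the actual TPPR scheme runs this over encrypted values with trust-weighted feedback).

```python
import math

def truth_discovery(claims, rounds=5):
    """Truth discovery sketch: claims[u][h] is user vehicle u's score
    for head vehicle h. Alternately (1) estimate each head vehicle's
    reputation as the weighted mean of claims and (2) set each user's
    weight from its total squared deviation from the estimates, so that
    users closer to the consensus count for more.
    Returns (reputations, user_weights)."""
    nu, nh = len(claims), len(claims[0])
    w = [1.0] * nu
    for _ in range(rounds):
        rep = [sum(w[u] * claims[u][h] for u in range(nu)) / sum(w)
               for h in range(nh)]
        dev = [sum((claims[u][h] - rep[h]) ** 2 for h in range(nh)) + 1e-9
               for u in range(nu)]
        total = sum(dev)
        w = [math.log(total / d) for d in dev]  # smaller deviation, larger weight
    return rep, w
```

With two honest raters and one divergent one, the iteration converges to the honest scores while driving the divergent rater's weight toward zero, which is how unreliable feedback is neutralized without being explicitly identified.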
Authors:
Beiqing Chen;Haihang Zhou;Jianguo Yao;Haibing Guan;
Pages: 819 - 832 Abstract: With the increasing attraction of unmanned aerial vehicles (UAVs) in civil, public, and military applications, multi-UAV systems can perform environmental and disaster monitoring, border surveillance, and search and rescue. It is foreseen that these multi-UAV-based applications will be an important trend for edge computing scenarios. However, due to UAVs’ limited energy supplies as well as their continuous increase in the number of sensors, energy efficiency is a critical issue in multi-UAV systems. We believe that the fusion of edge computing and cloud computing can provide effective support for energy savings. This article presents an energy-efficient edge cloud architecture called RESERVE for intelligent multi-UAV. Under RESERVE, we study the energy-efficient computation offloading decision-making problem in a decentralized manner. The problem is formulated as a three-layer game in which the discretionary approach to reaching Nash Equilibrium is presented. Based on the proposed game, we design decentralized algorithms for two different cases. The algorithms can both achieve Nash Equilibrium. Furthermore, we propose a decentralized computation offloading mechanism and analyze the performance of the game by its efficiency ratio. We conduct simulation experiments and design a framework prototype. Evaluation results demonstrate that the proposed game methods can achieve more than 30 percent extra energy consumption reduction compared with the state-of-the-art decentralized algorithm and less than 10 percent performance loss relative to the centralized solution. The prototype framework we have developed proves the concept we propose. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
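The decentralized equilibrium-seeking idea in the abstract above can be sketched with a toy best-response iteration: each user repeatedly re-optimizes its own offloading fraction against the others' current choices until no one can improve, i.e., until a Nash equilibrium is reached. This is a minimal illustration only; the cost model, the `congestion` parameter, and all values below are invented and are not the paper's RESERVE formulation.

```python
# Toy best-response dynamics for a congestion-style offloading game
# (illustrative only; not the paper's three-layer game).

def best_response(i, x, local_cost, congestion):
    """User i picks the offload fraction minimizing its own cost,
    with the other users' fractions held fixed."""
    best_frac, best_cost = 0.0, float("inf")
    others = sum(x) - x[i]
    for step in range(101):  # grid-search fractions 0.00 .. 1.00
        f = step / 100
        # cost: local energy for the unoffloaded part
        #       + congestion-priced cost for the offloaded part
        cost = (1 - f) * local_cost[i] + f * congestion * (others + f)
        if cost < best_cost:
            best_frac, best_cost = f, cost
    return best_frac

def nash_iterate(local_cost, congestion=0.5, rounds=50, tol=1e-6):
    """Round-robin best responses until a fixed point (Nash equilibrium)."""
    x = [0.0] * len(local_cost)
    for _ in range(rounds):
        changed = False
        for i in range(len(x)):
            f = best_response(i, x, local_cost, congestion)
            if abs(f - x[i]) > tol:
                x[i], changed = f, True
        if not changed:
            break
    return x

eq = nash_iterate([2.0, 1.0, 0.1])
```

For these toy costs the iteration converges in two rounds to `[1.0, 0.5, 0.0]`: the high-local-cost user offloads everything, the cheap one nothing.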
Authors:
Haoyu Luo;Xiao Liu;Jin Liu;Yun Yang;John Grundy;
Pages: 833 - 846 Abstract: Business cloud workflows are often designed with multiple time constraints for timely response to business requests. To ensure on-time completion of workflow instances, the workflow temporal conformance state needs to be constantly monitored and verified at runtime. Given that a large number of workflow instances run in parallel in many business scenarios, conventional verification approaches for time-related properties based on temporal logic or timed Petri nets are infeasible due to their low runtime efficiency. To address this issue, we propose a new approach to automated runtime verification of temporal conformance for parallel workflow instances in a cloud environment. In this article, instead of using response time to verify the temporal conformance of every single workflow as in conventional strategies, workflow throughput is employed as the performance measurement to efficiently monitor a large number of parallel workflow instances. On this basis, we present a novel conformance verification strategy that considers the effect of time delay propagation in cloud workflow systems to accurately verify workflow runtime temporal conformance. Our verification strategy is implemented in a prototype cloud workflow system, and the evaluation results show that it outperforms the state-of-the-art workflow temporal verification strategy. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
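The throughput-based monitoring idea can be illustrated in a few lines: rather than verifying each instance's response time, compare the observed throughput of the whole batch against the minimum throughput needed to finish the remaining instances before the deadline. This is a hypothetical simplification, not the delay-propagation-aware strategy the paper presents.

```python
def throughput_conformance(completed, elapsed, total, deadline):
    """Illustrative throughput-based temporal conformance check.
    Returns True when the current throughput is high enough to finish
    the remaining workflow instances before the deadline."""
    if elapsed >= deadline:
        return completed >= total
    observed = completed / elapsed                       # instances per time unit
    required = (total - completed) / (deadline - elapsed)
    return observed >= required
```

For example, 50 of 100 instances done after 10 time units with a deadline of 25 conforms (5 per unit observed vs. about 3.3 required), while only 20 done does not.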
Authors:
Yuanning Gao;Xiaofeng Gao;Xianyue Li;Bin Yao;Guihai Chen;
Pages: 847 - 859 Abstract: The advance of tour recommendation allows people to get well-fit route plans, which contain a sequence of Points of Interest (POIs) based on tourists’ constraints and preferences. Large-scale POIs, called Super-POIs in this article, often contain multiple scenic spots and entrances. Tourists have to specify a suitable tour route inside a Super-POI to obtain a good tour experience. However, most existing tour recommendation algorithms ignore the detailed information inside Super-POIs. By taking Super-POIs into account, we propose Embedded Tour (eTOUR), a two-layer framework considering the route design over POIs (Outer Model) and scenic routes inside Super-POIs (Inner Model), respectively. To combine the two models, an Embedded GRASP-VNS Algorithm is introduced based on an embedding strategy. For the Outer Model, we apply the Greedy Randomized Adaptive Search Procedure (GRASP) for route construction and Variable Neighborhood Search (VNS) for local improvement. A Super-POI is treated as a “meta node” in outer route construction. For the Inner Model, the optimal route inside a Super-POI obtained by DFS-based Tree Search with Pruning is revised dynamically to adapt to the outer route. Furthermore, we discuss a special case of the Super-POI where a key graph is defined and treated as the “must go” route. We adapt the solution of the Chinese Postman Problem in this case to reduce the time complexity. Finally, experiments based on two real datasets demonstrate the effectiveness of our proposal. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
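The GRASP half of the Outer Model can be illustrated by its core construction step: instead of always taking the best-scoring POI, pick at random from a restricted candidate list (RCL) of near-best candidates, which diversifies the routes fed to local improvement. The scores, unit costs, and the `alpha` threshold below are invented for demonstration; the paper's actual construction works over Super-POI "meta nodes".

```python
import random

def grasp_construct(scores, budget, costs, alpha=0.3, seed=0):
    """One GRASP construction pass: repeatedly pick a random member of the
    restricted candidate list (score within alpha of the best) until the
    budget admits no more POIs. Illustrative toy version."""
    random.seed(seed)
    route, remaining = [], list(scores)
    spent = 0.0
    while True:
        cands = [p for p in remaining if spent + costs[p] <= budget]
        if not cands:
            break
        best = max(scores[p] for p in cands)
        worst = min(scores[p] for p in cands)
        cutoff = best - alpha * (best - worst)
        rcl = [p for p in cands if scores[p] >= cutoff]  # restricted candidate list
        pick = random.choice(rcl)
        route.append(pick)
        remaining.remove(pick)
        spent += costs[pick]
    return route

route = grasp_construct({'a': 5, 'b': 1, 'c': 4}, 2, {'a': 1, 'b': 1, 'c': 1})
```

With these toy scores the low-value POI `b` never enters the RCL, so the constructed route always consists of `a` and `c` (in either order).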
Authors:
Mutaz Barika;Saurabh Garg;Andrew Chan;Rodrigo N. Calheiros;
Pages: 860 - 875 Abstract: Big data processing applications are becoming more and more complex. They are no longer monolithic in nature; instead, they are composed of decoupled analytical processes in the form of a workflow. One type of such workflow application is the stream workflow application, which integrates multiple streaming big data applications to support decision making. Each analytical component of these applications runs continuously and processes data streams whose velocity depends on several factors such as network bandwidth and the processing rate of the parent analytical component. As a consequence, the execution of these applications on cloud environments requires advanced scheduling techniques that adhere to end users’ requirements in terms of data processing and deadlines for decision making. In this article, we propose two multicloud scheduling and resource allocation techniques for efficient execution of stream workflow applications on multicloud environments while adhering to workflow application and user performance requirements and reducing execution cost. Results showed that the proposed genetic algorithm is adequate and effective across all experiments. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
Authors:
Andrei Palade;Siobhán Clarke;
Pages: 876 - 890 Abstract: Automatic planning, with dynamic binding and adaptive composition recovery, has been used to tackle complex service provisioning in mobile environments, but given frequent network topology changes, and services with time-dependent QoS, finding composites that can functionally and non-functionally satisfy a user's request remains difficult. Many service composition mechanisms either require a centralised perspective of the environment, or use optimisation mechanisms that trade off computational efficiency for optimality. Stigmergy-based approaches have been used to model decentralised service interactions between service providers, using a community of mobile software agents that share the same goal to approximate the set of QoS-optimal service compositions. Inspired by this model, this article addresses computational efficiency concerns using a collaborative approach to engage multiple communities of agents for provisioning QoS-optimal service compositions in mobile environments. New compositions can emerge from local decisions and interactions with agents from diverse communities. We assess whether having multiple communities improves the diversity and optimality of solutions. We also measure the proposed approach’s efficiency in dealing with incomplete information. The results show that the proposed approach trades optimality for a more diverse set of solutions, at a cost of higher overhead. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
Authors:
Xiaoping Li;Wei Yu;Rubén Ruiz;Jie Zhu;
Pages: 891 - 903 Abstract: Electricity prices differ during different time periods and change from place to place. Cloud workflow applications often require geo-distributed data, which is transmitted among heterogeneous servers within and across data centers. Such varying electricity prices and data transmission times bring great challenges when optimizing the energy cost of scheduling workflow tasks to heterogeneous servers in cloud data centers. In this article, we minimize the total electricity cost in a deadline-constrained energy-aware workflow scheduling problem with data being geographically distributed across data centers. A scheduling algorithm is proposed. Strategies are developed to sequence workflow applications, divide deadlines and sort tasks. An adaptive local search method is presented to improve solutions during the search process, which dynamically balances intensification using neighborhood structures of increasing size. Components and parameter values are statistically calibrated over a comprehensive set of random instances. The proposed algorithm is compared to modified classical algorithms for similar problems. Experimental results demonstrate the effectiveness of the proposal for the considered problem. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
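A common building block behind deadline-constrained cost-aware schedulers of this kind is proportional deadline division followed by a cheapest-feasible server assignment. The sketch below illustrates that pattern under an invented price-times-runtime cost model; the paper's actual sequencing, deadline-division, and calibration strategies are more elaborate.

```python
def divide_deadline(task_times, deadline):
    """Split a workflow deadline into cumulative task sub-deadlines,
    proportionally to estimated task runtimes (one simple strategy)."""
    total = sum(task_times)
    subs, acc = [], 0.0
    for t in task_times:
        acc += t / total * deadline
        subs.append(acc)
    return subs

def cheapest_feasible(runtime_by_server, price_by_server, sub_deadline, start):
    """Pick the server with the lowest electricity cost whose runtime still
    meets the task's sub-deadline (hypothetical cost = price * runtime)."""
    best, best_cost = None, float("inf")
    for s, rt in runtime_by_server.items():
        if start + rt <= sub_deadline:
            cost = price_by_server[s] * rt
            if cost < best_cost:
                best, best_cost = s, cost
    return best, best_cost
```

For instance, tasks with estimated runtimes `[2, 2, 4]` and deadline 16 get sub-deadlines `[4, 8, 16]`; a cheap-but-slow server is then skipped whenever it would miss the task's sub-deadline.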
Authors:
Faruk Hasić;Johannes De Smedt;Seppe vanden Broucke;Estefanía Serral;
Pages: 904 - 917 Abstract: Separating the decision modelling concern from the process modelling concern has recently gained significant support in the literature, as incorporating both concerns into a single model impairs the scalability, maintainability, flexibility, and understandability of both processes and decisions. Most notably, the introduction of the Decision Model and Notation (DMN) standard by the Object Management Group provides a suitable solution for externalising decisions from processes and automating decision enactments for processes. This article introduces a systematic way of tackling the separation of the decision modelling concern from process modelling by providing a Decision as a Service (DaaS) layered Service-Oriented Architecture (SOA) which approaches decisions as automated and externalised services that processes invoke on demand to obtain the decision outcome. The DaaS mechanism is elucidated by a formalisation of DMN constructs and the relevant layer elements. Furthermore, DaaS is evaluated against the fundamental characteristics of the SOA paradigm, proving its contribution in terms of abstraction, reusability, loose coupling, and other pertinent SOA principles. Additionally, the benefits of the DaaS design on process-decision modelling and mining are discussed. Finally, the DaaS design is illustrated on a real-life event log of a bank loan application and approval process, and the SOA maturity of DaaS is assessed. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
Authors:
Haozhe Lin;Yushun Fan;Jia Zhang;Bing Bai;
Pages: 918 - 930 Abstract: Driven by the widespread application of Service-Oriented Architecture (SOA), an increasing number of services and mashups have been developed and published onto the Internet in the past decades. With this number continuing to burgeon, predicting service invocation trends will provide various roles in service ecosystems with promising opportunities. However, service invocations bear three unique characteristics that make them difficult to predict. First, the enormous number of services exhibit diverse and complicated traits, such as periodicity, nonlinearity and nonstationarity. Second, services providing similar or compensatory functions form intricate relationships. Third, the combination dependencies between mashups and their component services further amplify the difficulty. Given these factors, we have developed a tailored model, the Multi-Step Piecewise Recurrent Neural Network (MSP-RNN), to predict service invocation trends. In MSP-RNN, Long Short-Term Memory (LSTM) units are used to extract universal features. Based on these features, we have developed a piecewise regressive mechanism to make predictions discriminatingly. Besides, we have developed a multi-step prediction strategy to further enhance prediction accuracy and robustness. Extensive experiments on a real-world data set with interpretable analysis show that MSP-RNN predicts service invocation trends more accurately, i.e., by 3.7 percent in terms of symmetric mean absolute percentage error (SMAPE), than state-of-the-art baseline methods. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
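The multi-step prediction strategy can be illustrated independently of the neural architecture: fit a one-step-ahead model, then feed each prediction back as the next input. Below, a least-squares AR(1) fit stands in for the LSTM; this sketch shows recursive multi-step forecasting only, not the paper's piecewise mechanism.

```python
def fit_ar1(series):
    """Least-squares AR(1) fit: x[t+1] ≈ a*x[t] + b (a stand-in for the
    learned one-step predictor)."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def multi_step_forecast(series, steps):
    """Recursive multi-step prediction: each prediction becomes the input
    for the next step."""
    a, b = fit_ar1(series)
    out, cur = [], series[-1]
    for _ in range(steps):
        cur = a * cur + b
        out.append(cur)
    return out

preds = multi_step_forecast([1, 2, 4, 8, 16], 3)
```

On the doubling series the fit recovers a=2, b=0, so the three-step forecast continues 32, 64, 128; with noisy real traces, errors compound across steps, which is what multi-step training strategies try to mitigate.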
Authors:
Jia Wang;Xiaoping Li;Rubén Ruiz;Jie Yang;Dianhui Chu;
Pages: 931 - 944 Abstract: Nowadays, energy costs are among the most important factors in cloud computing. Therefore, the implementation of energy-aware task scheduling methods is of utmost importance. A task scheduling framework considering deadlines, data locality and resource utilization is proposed to save on energy costs in heterogeneous clusters. The framework consists of task list construction, task scheduling and slot list updating. In terms of deadline constraints, the number of job slots allocated and the possible processing times of jobs, a new job sequence is proposed to construct a reasonable task list. Tasks are scheduled to promising slots from their rack-local servers, cluster-local servers and remote servers during task scheduling, which greatly improves data locality. After the assignment of tasks to slots, an update of available slots in clusters is proposed not only to find available slots but also to improve server resource utilization, using fuzzy logic over the available number of slots according to current CPU, memory and bandwidth utilization. Experimental results show that the proposed heuristic results in lower energy consumption than the adapted existing algorithms with a variable total number of slots. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
Authors:
Manuel Parra-Royon;Ghislain Atemezing;José M. Benítez;
Pages: 945 - 955 Abstract: The recent incorporation of new Data Mining and Machine Learning services within Cloud Computing providers is empowering users with extremely comprehensive data analysis tools including all the advantages of this type of environment. Providers of Cloud Computing services for Data Mining publish their descriptions and definitions in many formats that are often not compatible with other providers. From a functional point of view, being able to describe complete Data Mining services is fundamental to maintaining the usability and especially the portability of these services, independently of the software/hardware support or even the differences between cloud platforms. The main objective of this article is to design a Data Mining service definition which allows a complete service to be composed with a single and simple definition, in such a way that a data mining workflow can be ported and deployed in different providers or even in a marketplace of this type of ready-to-consume services. This article presents a semantic scheme for the definition and description of complete Data Mining services considering both the management of the service by the provider (price, authentication, Service Level Agreement, ...) and the definition of the Data Mining workflow as a service. It represents a solid contribution to paving the way for the standardization and industrialization of Data Mining services. To assess the validity of the scheme, a list of services from Data Mining providers has been described and an example of a full service for a Random Forest algorithm has been defined as a service. In addition, a practical scenario has been developed, creating a deployment platform for Data Mining services to give functional support to the scheme, illustrating the practical benefits of the proposal for the end user. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
Authors:
Evagoras Makridis;Kyriakos Deliparaschos;Evangelia Kalyvianaki;Argyrios Zolotas;Themistoklis Charalambous;
Pages: 956 - 969 Abstract: We present robust dynamic resource allocation mechanisms to allocate application resources meeting Service Level Objectives (SLOs) agreed between cloud providers and customers. In particular, two filter-based robust controllers, i.e., the H∞ filter and the Maximum Correntropy Criterion Kalman filter (MCC-KF), are proposed. The controllers are self-adaptive, with process noise variances and covariances calculated using previous measurements within a time window. In the allocation process, a bounded client mean response time (mRT) is maintained. Both controllers are deployed and evaluated on an experimental testbed hosting the RUBiS (Rice University Bidding System) auction benchmark web site. The proposed controllers offer improved performance under abrupt workload changes, shown via rigorous comparison with the current state-of-the-art. In our experimental setup, the Single-Input-Single-Output (SISO) controllers can operate on the same server where the resource allocation is performed, while Multi-Input-Multi-Output (MIMO) controllers run on a separate server where all the data are collected for decision making. SISO controllers take decisions independently of other system states (servers), whereas MIMO controllers are characterized by increased communication overhead and potential delays. While SISO controllers offer improved performance over MIMO ones, the latter enable a more informed decision-making framework for the resource allocation problem of multi-tier applications. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
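Both proposed controllers are filter-based estimators, so the underlying predict/update cycle can be shown with a plain scalar Kalman filter tracking a slowly varying request rate. This is the textbook baseline, not the H∞ or MCC-KF variant from the article; the random-walk process model and the noise values used below are assumptions for illustration.

```python
def kalman_step(x, p, z, q, r):
    """One predict+update cycle of a scalar Kalman filter.
    x, p: current state estimate and its variance
    z:    new measurement (e.g., observed request rate)
    q, r: process and measurement noise variances (assumed known)."""
    # predict: random-walk model, so the state carries over and
    # uncertainty grows by the process noise
    x_pred, p_pred = x, p + q
    # update: blend prediction and measurement via the Kalman gain
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = kalman_step(0.0, 1.0, 10.0, 0.0, 1.0)
```

With equal prior and measurement variances the gain is 0.5, so the estimate moves halfway toward the measurement (here from 0 to 5) and the variance halves; the self-adaptive controllers in the article effectively re-estimate `q` and `r` from a sliding window of measurements.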
Authors:
Zheng Xiao;Gang Liu;Dan He;Yang Guo;Jiayi Du;
Pages: 970 - 985 Abstract: Combining a series of atomic services into a composite value-added service offers promise in meeting the increasingly diverse needs of users in the cloud market. However, without considering the concurrency of composite service requests among multiple users, existing studies seldom focus on the competitive relationships among users and thus lack an incentive mechanism that efficiently guides users’ request strategies. In this article, taking into consideration multi-user competition and interaction with the cloud provider, we design a composite service reservation framework in which users’ requests can be rationally and efficiently multiplexed. Due to selfishness, each user tends to maximize its own utility in terms of revenue, payoff, and performance of the composite service. The optimization problem is modeled from the perspective of game theory and defined as a non-cooperative game. The existence of a Nash equilibrium solution of the formulated game is proved via an equivalent variational inequality problem. An iterative proximate algorithm (IPA) is proposed to find a utility-balanced request strategy, which theoretically leads to a Nash equilibrium solution. A series of simulation experiments are conducted to verify our theoretical analysis. The experimental results show that IPA converges to a Nash equilibrium in an acceptable number of iterations. The stable request strategies can improve the utilities of users and the resource utilization of the cloud provider, as the proposed framework empowers users to transfer requests from peak periods into non-peak ones. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
Authors:
Xiao Pan;Shili Nie;Haibo Hu;Philip S. Yu;Jingfeng Guo;
Pages: 986 - 999 Abstract: In resource planning scenarios, reverse k nearest neighbor search plays an important role. However, the existing reverse k nearest neighbor search on trajectories only supports spatial features of trajectories. In this article, we introduce the reverse k nearest neighbors query on semantic trajectories (RkNNST). Given a query point from a set of geo-textual objects (e.g., POIs), the query finds those trajectories that take this query point as one of their k nearest geo-textual correlative objects. To efficiently answer RkNNST queries, we propose a novel index, IMC-tree, which organizes the global and local geo-textual information on semantic trajectories. A branch-and-bound search algorithm DOTA is then designed to traverse IMC-tree with various pruning rules. To speed up the computation of correlative distance, we also design an inverted-file-based algorithm that avoids enumerating all combinations of geo-textual objects. Experiments on a real dataset validate the effectiveness and efficiency of our proposed algorithms. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
Authors:
Pingpeng Yuan;Yujie You;Shuang Zhou;Hai Jin;Ling Liu;
Pages: 1000 - 1011 Abstract: Reachability queries ask whether a vertex can reach another vertex on large directed graphs. Reachability is one of the most fundamental graph operators and has attracted researchers in both academia and industry to study it. The main technical challenge is to support fast reachability queries by efficiently managing the three main costs: the index construction time, the index size and the query processing time on large/small and sparse/dense graphs. As real-world graphs grow bigger in size, these problems remain open challenges that demand high performance solutions. In this article, we propose a Multi-Dimensional Graph Labeling approach (called MGTag) to support fast reachability queries. MGTag is novel in three aspects. First, it recursively partitions a graph into multiple subgraphs with disjoint vertex sets, called non-shared graphs, and several inter-partition edges, called cross-edges. Second, we build a four-dimensional label – one dimension of layer, one dimension of non-shared graph and two dimensions of interval – for each vertex in non-shared graphs. Finally, with the four-dimensional labeling scheme, we design algorithms to answer reachability queries efficiently. The extensive experiments on 28 large/small and dense/sparse graphs show that building the high-dimensional index is fast and the index size is also competitive compared with most of the state-of-the-art approaches. The results also show that our approach is more scalable and efficient than the state-of-the-art approaches in answering reachability queries on large/small and sparse/dense graphs. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
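The "two dimensions of interval" in the label echo the classic DFS pre/post-order interval scheme, shown below for the simple tree case: u reaches v exactly when v's interval nests inside u's, so a reachability query becomes two integer comparisons. MGTag's extra layer and partition dimensions are what extend this style of labeling to general graphs with cross-edges; this sketch covers only trees.

```python
def interval_labels(children, root):
    """Assign DFS pre/post interval labels to a tree given as a
    child-adjacency dict. u reaches v iff pre[u] <= pre[v] and
    post[v] <= post[u] (interval containment)."""
    pre, post, clock = {}, {}, [0]

    def dfs(u):
        pre[u] = clock[0]; clock[0] += 1
        for c in children.get(u, []):
            dfs(c)
        post[u] = clock[0]; clock[0] += 1

    dfs(root)
    return pre, post

def reaches(u, v, pre, post):
    """Constant-time reachability test via interval containment."""
    return pre[u] <= pre[v] and post[v] <= post[u]

pre, post = interval_labels({0: [1, 2], 1: [3]}, 0)
```

The index costs one DFS to build and two integers per vertex to store, which is why interval-style labels remain the workhorse inside more elaborate schemes.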
Authors:
Yunling Wang;Shi-Feng Sun;Jianfeng Wang;Joseph K. Liu;Xiaofeng Chen;
Pages: 1012 - 1025 Abstract: Searchable Encryption (SE) enables a data owner to outsource encrypted data to an untrusted server while preserving the keyword search functionality. Typically, the server learns whether or not a query has been performed more than once, which is usually called the search pattern. However, this kind of information leakage might be leveraged to break query privacy. To further reduce such leakage and provide a strong privacy guarantee, Wang et al. proposed a novel SE scheme based on the Paillier encryption scheme in INFOCOM’15. Unfortunately, their scheme cannot perform keyword search successfully, because the additive homomorphic property is not sufficient for their construction. In this article, we first show why their scheme fails to return the correct search result, and then propose a new SE scheme by adopting a special additive homomorphic encryption scheme to achieve the multiplicative homomorphic property efficiently. Furthermore, we enhance the security on the user side. Specifically, we use random polynomials with an appropriate degree to guarantee that the user cannot learn anything other than the desired search result. Finally, we present a formal security analysis and implement our scheme on a real-world database, which demonstrates that our construction can achieve the desired security properties with good performance. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
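The additive homomorphic property at the center of this discussion is easy to demonstrate with a toy Paillier implementation: multiplying two ciphertexts modulo n² yields an encryption of the sum of the plaintexts (and nothing more, which is exactly why addition alone was insufficient for the original construction). The small fixed primes below are for illustration only; real deployments need large random primes, and this sketch is not the article's scheme.

```python
import math
import random

# Toy Paillier cryptosystem (g = n+1 variant); requires Python 3.9+
# for math.lcm and pow(x, -1, n).

def keygen(p=1789, q=1861):
    """Tiny demo key pair -- insecure, illustration only."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)              # valid because g = n + 1
    return (n, n + 1), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:        # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    l = (pow(c, lam, n * n) - 1) // n  # L(x) = (x - 1) / n
    return (l * mu) % n

pub, priv = keygen()
# homomorphic addition: Enc(17) * Enc(25) mod n^2 decrypts to 42
c = (encrypt(pub, 17) * encrypt(pub, 25)) % (pub[0] ** 2)
```

Note that there is no corresponding ciphertext operation yielding the *product* of plaintexts, which is the gap the new scheme's "special additive homomorphic encryption" works around.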
Authors:
Siyu Zhao;Rui Jiang;Bharat Bhargava;
Pages: 1026 - 1035 Abstract: In this article, we propose a revocable lattice-based CP-ABE (Ciphertext-Policy Attribute-Based Encryption) scheme (RL-ABE), which is suitable for application in cloud storage. The RL-ABE scheme can resist quantum algorithm attacks and ensure fine-grained control over users’ rights to access shared data. In addition, our scheme can realize attribute revocation, which can expediently renew users’ attributes to grant or revoke their access rights. Then, we formally prove the security of our scheme based on the hardness of the Ring Learning with Errors problem (R-LWE), showing resistance to quantum algorithm attacks, and prove that our scheme can withstand collusion attacks. Finally, the performance analysis shows the high efficiency of our scheme compared with other related schemes. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
Authors:
Payal Chaudhari;Manik Lal Das;
Pages: 1036 - 1044 Abstract: In the modern digital age, enterprise applications typically outsource user data to public cloud storage with the objective of availing themselves of the flexibility and scalability features of cloud infrastructure and, importantly, making business goals more cost-effective. Security and privacy concerns pose a challenging task to handle in a cloud setup for both service providers and service consumers. In this landscape, before outsourcing sensitive data to cloud storage, the data should be protected from unauthorized access and the privacy of the users should be preserved as per application requirements. In this article, we present a scheme, termed KeySea, for keyword-based search over attribute-based encrypted data with receiver anonymity. While searching documents pertaining to the target keyword(s), keeping the receiver’s anonymity and ensuring data privacy are important features in applications like healthcare, bureaucracy, social engineering, and so on. The construction of the KeySea scheme uses the hidden access policy in attribute-based searchable encryption. The KeySea scheme provides a secure and practical solution to address the issue of privacy-preserving search over encrypted data in public cloud storage. We show the security strengths of the KeySea scheme and its practicality with experimental results. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
Authors:
John Thomas Ravan;Shankar M. Banik;Csilla Farkas;
Pages: 1045 - 1058 Abstract: In this article, we propose a transaction scheduling approach for web service database transactions. Our solution ensures consistency while preserving efficiency. We propose a prediction-based metric that promotes transactions with reliable reputations based on the transactions’ performance metrics. Performance metrics are based on a transaction’s likelihood to commit and its execution time. We propose a customized lock management solution to guarantee execution consistency in concurrent web service environments. We formally prove that our solution guarantees a consistent execution history of concurrent web transactions and increases concurrency and performance over traditional locking methods. We developed a simulation using a multi-threaded approach and generated sample workloads of simulated concurrent transactions over seven tests. Our results show that the solution performs comparably to traditional locking and no-locking solutions, with the added benefit of ensured consistency in some cases and deadlock avoidance in others. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
Authors:
Tooba Aamir;Hai Dong;Athman Bouguettaya;
Pages: 1059 - 1073 Abstract: The extensive use of social media platforms and overwhelming amounts of imagery data create unique opportunities for sensing, gathering and sharing information about events. One potential application is to leverage crowdsourced social media images to create a tapestry scene for scene analysis of designated locations and time intervals. Existing attempts, however, ignore the temporal-semantic relevance and spatio-temporal evolution of the images and direction-oriented scene reconstruction. We propose a novel social-sensor cloud (SocSen) service composition approach to form tapestry scenes for scene analysis. The novelty lies in utilising images and image meta-information to bypass expensive traditional image processing techniques to reconstruct scenes. Metadata, such as the geolocation, time, and angle of view of an image, are modelled as non-functional attributes of a SocSen service. Our major contribution lies in proposing a context- and direction-aware spatio-temporal clustering and recommendation approach for selecting a set of temporally and semantically similar services to compose the best available SocSen services. Analytical results based on real datasets are presented to demonstrate the performance of the proposed approach. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
Authors:
An Song;Wei-Neng Chen;Xiaonan Luo;Zhi-Hui Zhan;Jun Zhang;
Pages: 1074 - 1088 Abstract: Scientific cloud workflows enable access to distributed computing resources in cloud environments for executing scientific computing applications. In the literature, most workflow scheduling models assume that each workflow task is mapped to only one service instance. But in computation- and data-intensive applications, it is common that the computation resources provided by a single service instance are insufficient for some complicated tasks which contain several closely correlated sub-tasks. To manage such complicated workflows, this article devises a novel workflow model with composite tasks (cWFS). The model views a complicated task as a composite task and allows mapping multiple service instances to a composite task. The data transmission among sub-tasks of a composite task can also be addressed by the proposed model. To solve the cWFS problem, we devise a nested particle swarm optimization (N-PSO) that utilizes two kinds of populations, i.e., the outer population and the inner population. Since N-PSO is relatively time-consuming, we further devise a Fast version of N-PSO (FN-PSO), which can save more than 60 percent of running time compared with N-PSO. The proposed approaches are evaluated on five real-world workflow types. The experimental results verify that the proposed approaches can solve the new workflow model effectively. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
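For readers unfamiliar with the particle swarm optimization that N-PSO builds on, the following self-contained Python sketch shows a single-level PSO minimizing a toy objective; N-PSO as described above nests one such swarm (inner) inside another (outer), which this sketch does not attempt to reproduce. All parameter values here are illustrative assumptions.

```python
import random

def pso(f, dim, n=20, iters=100, seed=0):
    # Minimal single-swarm PSO: each particle tracks a personal best,
    # and all particles are attracted to the global best.
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia 0.7, cognitive/social coefficients 1.5 (common defaults).
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Toy stand-in for a workflow cost function: the sphere function.
best, val = pso(lambda x: sum(t * t for t in x), dim=3)
```

In a nested scheme, the outer swarm would search over task-to-instance assignments while each inner swarm refines sub-task placement within a composite task.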
Authors:
Gary White;Siobhán Clarke;
Pages: 1089 - 1102 Abstract: Accurate short-term forecasts allow dynamic systems to adapt their behaviour when degradation is forecast, e.g., transportation forecasting allows for alternative routing of traffic before gridlock. This rationale can be applied to service-oriented computing when creating and managing service applications. Recent approaches to improving reliability in service applications have focused on reducing the time to recovery of an application, using collaborative filtering-based approaches to make QoS predictions for similar users. In this article, we focus on reducing the time to detection of a failure by forecasting when a service is about to degrade in quality. Previous approaches to QoS forecasting have used traditional time-series methods that are not designed for sudden peaks caused by network congestion or by battery-powered IoT devices that reduce processing capabilities to extend battery life. More modern recurrent neural network-based approaches such as GRUs and LSTMs have long training times, which are unsuitable for dynamic environments. We propose a noisy echo state network-based approach designed to reduce training time, allowing the model to incorporate recent QoS values on devices at the edge. Our results show increased response-time forecasting accuracy compared to state-of-the-art approaches when tested on IoT and web services datasets. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
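The training-time advantage of echo state networks comes from keeping the recurrent reservoir weights fixed at random and training only a linear readout, avoiding back-propagation through time. The sketch below shows just the reservoir state update, in stdlib Python; the function name, reservoir size, and scaling scheme are illustrative assumptions, not the paper's model.

```python
import math
import random

def esn_states(inputs, n_res=20, rho=0.9, seed=1):
    # Echo state network core: fixed random input and reservoir weights.
    # Only a linear readout over these states would be trained, which is
    # why training is fast compared with GRU/LSTM back-propagation.
    rng = random.Random(seed)
    W_in = [rng.uniform(-0.5, 0.5) for _ in range(n_res)]
    W = [[rng.uniform(-0.5, 0.5) for _ in range(n_res)] for _ in range(n_res)]
    # Crude contraction control: scale by the row-sum norm (an upper bound
    # on the spectral radius) so states do not blow up.
    norm = max(sum(abs(w) for w in row) for row in W)
    W = [[rho * w / norm for w in row] for row in W]
    x = [0.0] * n_res
    states = []
    for u in inputs:  # u is one scalar QoS observation per time step
        x = [math.tanh(W_in[i] * u + sum(W[i][j] * x[j] for j in range(n_res)))
             for i in range(n_res)]
        states.append(x)
    return states
```

Fitting the readout is then an ordinary least-squares problem over the collected states, cheap enough to rerun as new QoS values arrive at the edge.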
Authors:
Pengcheng Zhang;Huiying Jin;Hai Dong;Wei Song;Athman Bouguettaya;
Pages: 1103 - 1117 Abstract: Mobile Edge Computing is an emerging technology offering low-latency responses by deploying edge servers near mobile devices. We propose a novel privacy-preserving QoS forecasting approach, Edge-Laplace QoS (QoS forecasting with Laplace noise in mobile Edge environments), to address the challenges of user mobility and information leakage encountered by QoS forecasting in mobile edge environments. Edge-Laplace QoS is able to accurately and efficiently forecast the Quality of Service (QoS) of various Web services, while effectively protecting user privacy in mobile edge environments. We employ an improved differential privacy method to add dynamic disguises to the original QoS data in the edge environment to protect user data privacy. A collaborative filtering method is adopted to retrieve similar users' access records, based on the geographic locations of their accessed servers, for QoS forecasting. We conduct a set of experiments using several public network datasets. The results show that the efficiency of Edge-Laplace QoS is superior to that of traditional forecasting approaches. Edge-Laplace QoS is also validated to be more suitable for edge environments than traditional privacy-preserving approaches. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
Authors:
Abubakr Alabbasi;Vaneet Aggarwal;
Pages: 1118 - 1129 Abstract: The demand for real-time cloud applications has seen unprecedented growth over the past decade. These applications require rapid data transfer and fast computations. This article considers a scenario where multiple IoT devices update information on the cloud and request a computation from the cloud at certain times. The time required to complete a computation request includes the time spent waiting for the computation to start on busy virtual machines, performing the computation, and waiting for and receiving service in the networking stage that delivers the output to the end user. In this context, the freshness of the information is an important concern and is distinct from the completion time. This article proposes novel scheduling strategies for both the computation and networking stages. Based on these strategies, the age-of-information (AoI) metric and the completion time are characterized. A convex combination of the two metrics is optimized over the scheduling parameters. The problem is shown to be convex and thus can be solved optimally. Moreover, based on the offline policy, an online algorithm for job scheduling is developed. Numerical results demonstrate significant improvement over the considered baselines. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
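To make the convex-combination idea concrete, the sketch below minimizes alpha * AoI + (1 - alpha) * completion_time over a single scheduling parameter using ternary search, which is valid because a convex combination of convex functions is convex. The stand-in cost functions (1/r for AoI, r^2 for completion time) are toy assumptions, not the characterizations derived in the paper.

```python
def ternary_search(f, lo, hi, iters=100):
    # Minimize a convex function f on [lo, hi] by interval shrinking.
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

# Hypothetical convex stand-ins, as functions of a scheduling rate r:
aoi = lambda r: 1.0 / r          # higher update rate -> fresher information
completion = lambda r: r ** 2    # higher rate -> more contention/queueing

alpha = 0.5
objective = lambda r: alpha * aoi(r) + (1 - alpha) * completion(r)
best_r = ternary_search(objective, 0.1, 5.0)
```

Sweeping alpha from 0 to 1 traces the trade-off curve between freshness and completion time, which is how such a formulation lets an operator pick an operating point.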
Authors:
Hua Deng;Zheng Qin;Qianhong Wu;Zhenyu Guan;Hui Yin;
Pages: 1130 - 1142 Abstract: It is becoming fashionable for people to access data outsourced to clouds with mobile devices. To protect data security and privacy, attribute-based encryption (ABE) has been widely used in cloud storage systems. However, one of the main efficiency drawbacks of ABE is the high computation overhead at mobile devices during user revocation and file access. To address this issue, we propose a revocable attribute-based data storage (RADS) scheme equipped with several attractive features. First, our RADS scheme achieves a fine-grained access control mechanism, by which file owners do not need to explicitly specify authorized visitors to their outsourced files. Second, our RADS scheme allows mobile users to authorize the cloud service provider (CSP) to share the costly computations in file access, without exposing the file content. Third, our RADS scheme offloads the operations of access-credential update and file re-encryption during the revocation process to the CSP, leaving all non-revoked users undisturbed. The revocation mechanism of RADS achieves strong data protection, i.e., revoked users can access neither newly uploaded files nor old ones. The security and efficiency of the RADS scheme are validated via both analysis and experimental results. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
Authors:
Lamees M. Al Qassem;Thanos Stouraitis;Ernesto Damiani;Ibrahim M. Elfadel;
Pages: 1143 - 1156 Abstract: The popularity of cloud computing services for delivering and accessing infrastructure on demand has significantly increased over the last few years. Concurrently, the usage of FPGAs to accelerate compute-intensive applications has become more widespread in different computational domains due to their ability to achieve high throughput and predictable latency while providing programmability and improved energy efficiency. Computationally intensive applications such as big data analytics, machine learning, and video processing have been accelerated by FPGAs. With the exponential workload increase in data centers, major cloud service providers have made FPGAs and their capabilities available as cloud services. However, enabling FPGAs in the cloud is not a trivial task due to incompatibilities with existing cloud infrastructure and operational challenges related to abstraction, virtualization, partitioning, and security. In this article, we survey recent frameworks for offering FPGA hardware acceleration as a cloud service and classify them based on their virtualization mode, tenancy model, communication interface, software stack, and hardware infrastructure. We further highlight current FPGA-as-a-service (FPGAaaS) trends and identify FPGA resource sharing, security, and microservicing as important areas for future research. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)
Authors:
Kwang Mong Sim;
Pages: 1157 - 1174 Abstract: This article 1) constructs a cloud intelligence model that specifies the desirable intelligent properties of cloud agents, 2) provides a tutorial on agent-based cloud, fog, and edge resource management, and 3) paves the way for designing intelligent interclouds, fogs, and edges. An intelligent intercloud is a “cloud of clouds” populated by a society of agents that automates intercloud resource management activities. By contributing a tutorial on agent-based cloud resource management techniques, this article provides researchers with the foundational knowledge for devising intelligent intercloud resource management techniques. By contributing an architectural blueprint, and by suggesting and comparing different intelligent resource management techniques, this article provides an overall design together with pointers to, and guidelines for, constructing the components of an intelligent intercloud. This article describes a proof-of-concept prototype of an intelligent intercloud and provides an application example of the prototype. The need of IoT devices for reduced response times has triggered technological advancements towards fog and edge computing. By providing a tutorial on agent-based fog and edge resource management techniques, this article provides researchers with the foundational knowledge for devising intelligent fog and edge agents. Relevant techniques are also suggested for improving and optimizing the performance of future intercloud, fog, and edge agents. PubDate:
March-April 1 2022
Issue No:Vol. 15, No. 2 (2022)