Cluster Computing. Journal Prestige (SJR): 0.374; Citation Impact (CiteScore): 2; Followers: 1. Hybrid journal (may contain Open Access articles). ISSN (Print) 1573-7543; ISSN (Online) 1386-7857. Published by Springer-Verlag.
• A novel trust prediction approach for online social networks based on
multifaceted feature similarity

Abstract: Online Social Networks (OSNs) have gained popularity in recent years, with millions of people using Facebook, Instagram, Twitter, and LinkedIn. Malicious users can exploit security weaknesses such as cloning and Sybil attacks to join a victim's friend list or trusted network, and can send unwanted friend requests to other users. Before communicating with dubious users, users should know their trust level. However, existing social networks provide no mechanism to assess the trustworthiness of people who send friend requests, and several existing trust models assume that participants' direct trust ties are known and focus only on particular characteristics. Hence, a holistic model is required to measure explicit trust and infer indirect trust between participants. Using comprehensive feature similarity, we offer a unique OSN trust prediction technique. We choose features based on user interactions, relationships, preferences, behaviours, and activities. The retrieved features are utilised to measure direct and indirect trust between neighbours and non-neighbours. To compare the proposed trust prediction approach with existing approaches, we implement cloning and Sybil attack detection as exemplary applications. The empirical findings and comparisons with other methodologies verify the proposed approach's effectiveness, efficiency, and superiority.
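As a rough illustration of the idea (not the paper's actual model), direct trust between two users can be computed as a weighted similarity over their feature vectors, and indirect trust can be propagated along a chain of direct-trust links. The feature grouping and the multiplicative propagation rule below are assumptions for the sketch:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def direct_trust(features_a, features_b, weights=None):
    """Direct trust as a weighted similarity across feature groups
    (interactions, relationships, preferences, behaviours, ...)."""
    weights = weights or [1.0 / len(features_a)] * len(features_a)
    return sum(w * cosine_similarity(fa, fb)
               for w, fa, fb in zip(weights, features_a, features_b))

def indirect_trust(path_trusts):
    """Propagate trust to a non-neighbour by multiplying the direct
    trust values along a path of intermediaries."""
    t = 1.0
    for x in path_trusts:
        t *= x
    return t
```

Trust decays along longer chains under this rule, matching the intuition that a friend-of-a-friend is less trusted than a direct contact.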
PubDate: 2022-05-17

• An empirical analysis of image augmentation against model inversion attack
in federated learning

Abstract: Federated Learning (FL) is a technology that facilitates a sophisticated way to train distributed data. Because FL does not expose sensitive data in the training process, it was considered privacy-safe deep learning. However, a few recent studies proved that it is possible to expose the hidden data by exploiting the shared models alone. One common defense against such data exposure is differential privacy, which adds noise to hinder the attack but inevitably involves a trade-off between privacy and utility. This paper demonstrates the effectiveness of image augmentation as an alternative defense strategy that suffers less from this trade-off. We conduct comprehensive experiments on the CIFAR-10 and CIFAR-100 datasets with 14 augmentations and 9 magnitudes. As a result, the best combination of augmentation and magnitude for each image class in the datasets was discovered. Our results also show that a well-fitted augmentation strategy can outperform differential privacy.
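To make the defense concrete, here is a minimal pure-Python sketch of the kind of augmentation pipeline evaluated: a random horizontal flip plus a random crop whose size shrinks as the magnitude grows. The specific operations and the magnitude-to-crop mapping are illustrative assumptions, not the paper's exact 14 augmentations:

```python
import random

def horizontal_flip(img):
    """Flip a 2D image (list of rows) left to right."""
    return [row[::-1] for row in img]

def random_crop(img, crop, rng):
    """Take a crop x crop window at a random position."""
    h, w = len(img), len(img[0])
    top = rng.randrange(h - crop + 1)
    left = rng.randrange(w - crop + 1)
    return [row[left:left + crop] for row in img[top:top + crop]]

def augment(img, magnitude, rng=None):
    """Flip with probability 0.5, then crop; a larger magnitude
    discards more pixels, perturbing what a model inversion attack
    could reconstruct from shared gradients."""
    rng = rng or random.Random(0)
    if rng.random() < 0.5:
        img = horizontal_flip(img)
    crop = max(1, len(img) - magnitude)
    return random_crop(img, crop, rng)
```

Applied before local training, such transforms change the pixels each client's gradients are computed from without adding calibrated noise, which is why the privacy/utility trade-off differs from differential privacy.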
PubDate: 2022-05-14

• An intelligent cyber security phishing detection system using deep
learning techniques

Abstract: Recently, phishing attacks have become one of the most prominent social engineering attacks faced by public internet users, governments, and businesses. In response to this threat, this paper gives a complete view of machine learning and of the techniques phishers use to trick gullible users with different types of phishing attacks. Our survey finds that phishing emails are the most effective vector against the targeted sectors and users, which we also compare. More effective phishing detection technology is therefore needed to curb the threat of phishing emails, which have grown at an alarming rate in recent years. We discuss machine learning algorithms and technical solutions that have been proposed to mitigate phishing, along with the awareness knowledge users need to detect phishing scams and avoid being duped by them. We propose a detection model built with machine learning techniques: the dataset is split to train the detection model and the results are validated on the test data, capturing inherent characteristics of the email text and other features to classify messages as phishing or non-phishing across three different datasets. Comparing the datasets, we find that using more features yields more accurate and efficient results; the best algorithm, a boosted decision tree, achieved accuracies of 0.88, 1.00, and 0.97 respectively on the applied datasets.
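A minimal sketch of the feature-extraction step such a detector might start from. The keyword list, feature set, and threshold rule are illustrative assumptions; the paper trains a boosted decision tree rather than using fixed rules:

```python
import re

# Illustrative lexicon; a real system would learn feature weights.
SUSPICIOUS = {"verify", "urgent", "password", "suspended", "click"}

def extract_features(email_text):
    """Simple lexical features often used for phishing classification."""
    words = re.findall(r"[a-z]+", email_text.lower())
    urls = re.findall(r"https?://\S+", email_text)
    return {
        "n_suspicious": sum(w in SUSPICIOUS for w in words),
        "n_urls": len(urls),
        "has_ip_url": any(re.search(r"https?://\d+\.\d+\.\d+\.\d+", u)
                          for u in urls),
    }

def is_phishing(features, threshold=2):
    """Toy rule standing in for the trained classifier: flag an email
    whose combined risk signals reach the threshold."""
    score = (features["n_suspicious"] + features["n_urls"]
             + 2 * features["has_ip_url"])
    return score >= threshold
```

In the paper's pipeline these feature vectors would feed the train/test split and the boosted decision tree instead of the fixed threshold shown here.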
PubDate: 2022-05-14

with real-time virtual machine repair in an edge computing system

PubDate: 2022-05-13

• Real-time event detection and classification in social text stream using
embedding

Abstract: Taming data will always be a significant challenge in online social networks. These networks are rapidly becoming the primary source users explore to seek information in the form of events. Rich informational data can be extracted from social platforms like Twitter text streams for direct insights into enduring topics, which can be classified based on their similarities. To address the research issues of event detection and classification, we model events as clusters evolving over a period of time. The inability of conventional clustering algorithms to process data streams mandates a fast yet robust method. This work therefore employs quick comparisons of data coming from social streams using a twin network known as the Siamese network, which can detect a novel event through clustering by comparing content-dependent features. We also train on a dataset derived from the social text stream of Twitter and other sources, where an embedding maps every word representation to a vector. This representation of words as real-valued vectors supports the event classification task. Finally, we compared the proposed technique with existing methods, and the results obtained through several experiments clearly indicate the efficacy of the proposed scheme.
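The clustering step can be sketched as an incremental assignment: each incoming post's embedding is compared (as the Siamese network would compare content) against existing event centroids, and a new event is opened when nothing is similar enough. The cosine measure and the threshold value below are assumptions for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity of two non-zero embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u))
                  * math.sqrt(sum(b * b for b in v)))

def assign_to_event(embedding, centroids, threshold=0.8):
    """Assign a post to the most similar existing event, or open a
    new event (novel-event detection) when no centroid is similar
    enough. `centroids` maps event id -> representative vector."""
    best, best_sim = None, -1.0
    for event_id, c in centroids.items():
        s = cosine(embedding, c)
        if s > best_sim:
            best, best_sim = event_id, s
    if best_sim >= threshold:
        return best
    new_id = len(centroids)
    centroids[new_id] = list(embedding)
    return new_id
```

A production version would also update centroids as events evolve and expire stale events, which this sketch omits.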
PubDate: 2022-05-13

• QoS-aware online scheduling of multiple workflows under task execution
time uncertainty in clouds

Abstract: Cloud computing, with elasticity and pay-as-you-go pricing, is a suitable platform for executing workflow applications. Workflow as a Service (WaaS) systems provide scientists with an easy-to-use and cost-effective platform to execute their workflow applications in the cloud at any time or location worldwide. Quality of Service (QoS) is recognized as a key requirement in WaaS. Monetary cost and time are the two primary QoS concerns from a client's perspective, whereas energy consumption is a significant problem for cloud providers' profitability and ability to provide low-cost services. Most workflow scheduling studies assume that workflow tasks have a deterministic Execution Time (ET), which is generally unrealistic in the real world, and there are few approaches to scheduling in WaaS that consider deadlines and monetary costs under uncertain task ETs. These studies typically assume that a cloud resource can execute all types of workflow applications without any additional software components; however, containers are a suitable way to provide an executable environment for any workflow type on cloud resources. To this end, we present two cost- and energy-aware workflow scheduling approaches that consider the uncertainty in tasks' ETs. Both solutions are designed for WaaS and leverage containers to enhance the resource utilization rate and reduce energy consumption, resource monetary cost, and workflow deadline violations. Simulated experiments demonstrate that our proposed methods outperform two recent state-of-the-art scheduling algorithms in terms of success rate, monetary cost, energy consumption, and resource utilization rate.
PubDate: 2022-05-12

• The research and analysis of efficiency of hardware usage base on HDFS

Abstract: HDFS (Hadoop Distributed File System), the data storage layer of the Hadoop ecosystem, provides read and write interfaces for many upper-level applications. The read/write performance of HDFS is affected by hardware such as disks and the network, and even by the CPU and memory. If the underlying storage system and transmission network of HDFS use high-performance devices, read/write performance improves to a certain extent; however, because of the complex software stack, the improvement ratio cannot match the performance gain of the devices themselves. HDFS can store petabytes of data on cheap machines, so equipping it with ultra-high-performance hardware to improve performance increases economic expense and wastes resources. In this paper, we analyze the read/write process of HDFS and determine the proportions attributable to software and hardware. Under the test environment and methods in this paper, we find that the storage system accounts for 19.7% of the impact on HDFS, and the network accounts for 62.5%. We test the basic performance of various hardware and its application to HDFS and, combined with hardware utilization analysis, find that using popular storage systems and networks can improve the write performance of HDFS by 257% and 207%, respectively.
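The reported proportions invite Amdahl-style reasoning: if a component accounts for a given fraction of total read/write time, its speedup bounds the overall improvement. The helper below applies the standard formula; it is a back-of-the-envelope aid, not the paper's measurement methodology:

```python
def overall_speedup(fraction, component_speedup):
    """Amdahl's law: overall speedup when a component responsible for
    `fraction` of total time becomes `component_speedup` times faster."""
    return 1.0 / ((1.0 - fraction) + fraction / component_speedup)

# Example: if the network accounts for 62.5% of the time and is made
# 4x faster, the whole pipeline speeds up by at most this bound.
bound = overall_speedup(0.625, 4.0)
```

This kind of bound explains why upgrading one device rarely delivers its full rated gain once the surrounding software stack is accounted for.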
PubDate: 2022-05-11

• An enhanced smart card and dynamic ID based remote multi-server user
authentication scheme

Abstract: Organizations often use smart card-based user authentication for remote access. The research community has put forward dynamic identity based remote user authentication schemes for the distributed multi-server environment to safeguard the connection between user and server. Recently, Qiu et al. proposed an efficient smart card based remote user authentication scheme for the multi-server environment, claiming that it provides mutual authentication and key agreement, user anonymity, and resistance against various kinds of attacks. This paper shows that if an adversary succeeds in stealing a smart card, their scheme is vulnerable to masquerade, server spoofing, and password guessing attacks. We overcome these flaws and propose an enhanced anonymous scheme in which, whenever the user wants to log into a server, the user identity is changed dynamically before login; the scheme also resists all the attacks above. We compared our scheme with related schemes, used BAN logic to verify the correctness of the mutual key agreement, and used AVISPA to prove that the scheme is safe. We have also provided formal security proofs for our scheme.
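The core idea of a dynamic-ID login can be sketched with standard primitives; the message format and field names here are assumptions for illustration, not the exact protocol of Qiu et al. or of the proposed scheme. Hashing the identity with a fresh nonce yields a pseudo-identity that changes on every login, so captured messages cannot be linked to a user:

```python
import hashlib
import secrets

def h(*parts):
    """One-way hash over concatenated byte strings."""
    return hashlib.sha256(b"||".join(parts)).hexdigest()

def make_login_request(identity, password, server_id):
    """Build a per-login request: a fresh nonce makes the dynamic
    pseudo-identity different each time, providing user anonymity;
    the proof binds it to the password and target server."""
    nonce = secrets.token_bytes(16)
    dynamic_id = h(identity.encode(), nonce)
    proof = h(dynamic_id.encode(), password.encode(), server_id.encode())
    return {"dynamic_id": dynamic_id, "nonce": nonce.hex(), "proof": proof}
```

In a full scheme the smart card would also contribute stored secrets so that a stolen card alone (without the password) cannot reproduce a valid proof.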
PubDate: 2022-05-11

• RSM analysis based cloud access security broker: a systematic literature
review

Abstract: A Cloud Access Security Broker (CASB) is a security enforcement point or cloud-based software placed between cloud service users and the cloud applications of cloud computing (CC), used to manage the dimensionality, heterogeneity, and ambiguity associated with cloud services. CASBs permit an organization to extend the reach of its security policies beyond its own infrastructure to third-party software and storage. Unlike other systematic literature reviews (SLRs), this one is directed at the client setting. The SLR surveys the literature to identify and evaluate methods for realizing CASBs, yielding a comprehensive grasp of the state of the art and a novel taxonomy for describing them. It was performed to compile CASB-related studies and analyze how CASBs are designed and engineered. These studies are then analyzed from different perspectives, such as motivation, functionality, engineering approach, and decision method. Search strings built from keywords extracted from the Research Questions (RQs) were used to identify the relevant work among journal papers, conference papers, workshops, and symposiums. This SLR identified 20 distinct studies published from 2011 to 2021. The chosen studies were evaluated against the defined RQs for their quality and scope with respect to CASB, thereby identifying several gaps in the literature. The engineering efforts in these studies were directed at combinations of "market-based solutions", "middlewares", "toolkits", "algorithms", "semantic frameworks", and "conceptual frameworks", and the review notes disparities among the studies' implementations.
For further understanding, the independent parameters influencing CASB are studied using Principal Component Analysis (PCA); this analysis identified five influential parameters. The experimental results were then used as input to Response Surface Methodology (RSM) to obtain an empirical model: five-level coding was employed to develop the model, with three dependent parameters and four center values. It was observed from the Central Composite Design (CCD) model that the actual values show significant influence, with R2 = 0.90. This wide-ranging investigation reveals that CASB is still in a formative state. Although vital progress has been made in this area, obvious challenges remain to be addressed, and these are highlighted in this paper.
PubDate: 2022-05-11

• Deep reinforcement learning approach towards a smart parking architecture

Abstract: Finding a vacant parking slot in densely populated areas leads to excessive emission of carbon dioxide, fuel waste, and lost time. Recently, the Industrial Internet of Things (IIoT) has shown significant potential to strengthen the notion of smart cities equipped with smart parking. In this paper, we propose a Deep Reinforcement Learning (DRL)-based framework for an IIoT-enabled smart parking system to solve these parking issues. The proposed framework consists of smart cameras, fog nodes, and a cloud server. DRL is used on the fog devices to classify vehicles and intelligently allocate vacant parking slots to them. The smart cameras are deployed at the entry point of the parking space as well as in the parking lanes. The ground cameras capture images, detect vehicles, and transmit the information to a fog node, where an online deep Q-learning algorithm updates the reward score. The proposed framework helps recognize a vehicle and identify a vacant parking slot for it in minimum time with high accuracy. We compare the performance of the proposed DRL-based technique with state-of-the-art techniques in terms of accuracy and processing time. Experimental results demonstrate that the proposed DRL-based approach not only has high detection accuracy but also requires less processing time than the compared techniques.
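The online Q-learning step on the fog node can be illustrated in tabular form; a deep network would replace the table in the paper's deep Q-learning, and the state and action names below are invented for the sketch:

```python
import random

ACTIONS = ["slot_A", "slot_B"]  # hypothetical candidate parking slots

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One online Q-learning step: move Q(s, a) toward the observed
    reward plus the discounted best value of the next state."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q[(state, action)]

def choose_slot(Q, state, epsilon, rng):
    """Epsilon-greedy allocation: usually pick the best-valued slot
    for the detected vehicle, occasionally explore another one."""
    if rng.random() < epsilon:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
```

Here the reward would encode allocation outcomes (e.g. quick, successful parking), so repeated updates steer future allocations toward slots that minimize search time.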
PubDate: 2022-05-10

• SMAD: SMart assistant during and after a medical emergency case based on
deep learning sentiment analysis

Abstract: The huge cost of emergency situations can have fatal effects on humanity and society and presents a genuine threat to both. Most people confronted with an emergency experience psychological trauma, which generally changes over time as they exhibit chaotic or even turbulent behaviours. The situation can worsen during a pandemic, as fear and anxiety invade and spread on top of isolation and quarantine. In this paper, we propose a smart assistant, called SMAD, that can detect the symptoms of an emergency case as well as symptoms of a mental disorder by analysing the natural language speech of an ordinary citizen during and after an emergency situation. It uses natural language processing and a deep learning sentiment analysis model to track the patient's mental state during an ongoing conversation. The proposed smart assistant is an online human-bot interaction that can handle a variety of physical and mental circumstances in any emergency situation. The proposed approach is a smart healthcare service consisting of four interconnected modules: the information understanding module, the data collector module, the action generator module, and the mental analysis module, which is based on a sentiment analysis model trained on a social media dataset using a pre-trained word-embedding model.
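A toy, lexicon-based stand-in for the mental analysis module. The real module uses a deep sentiment model with pre-trained word embeddings; the word lists here are assumptions for the sketch:

```python
# Illustrative lexicons; a trained model would replace these.
POSITIVE = {"calm", "safe", "fine", "better", "relieved"}
NEGATIVE = {"afraid", "panic", "pain", "anxious", "alone"}

def sentiment_score(utterance):
    """Score one utterance in [-1, 1] from lexicon hits."""
    words = [w.strip(".,!?") for w in utterance.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def track_mental_state(conversation):
    """Average score over the conversation so far, letting the
    assistant see whether the caller's state is deteriorating."""
    scores = [sentiment_score(u) for u in conversation]
    return sum(scores) / len(scores)
```

In SMAD's architecture this running signal would feed the action generator module, e.g. escalating to a human operator when the score trends negative.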
PubDate: 2022-05-09

• Novel bat algorithm for QoS-aware services composition in large scale
internet of things

Abstract: The democratization of smart devices over the last decade has given rise to what is called the Internet of Things (IoT). In view of the multitude of functionally equivalent services with different quality of service (QoS) levels, service composition is one of the main challenges in IoT environments, where several devices interact with each other to perform a user's complex task. This paper proposes a QoS-aware service composition approach that exploits a novel bat algorithm (QC-NBA) to compose the best IoT services while respecting the user's constraints on QoS properties. Unlike most existing bio-inspired service composition approaches, the NBA method includes mechanisms that improve the exploration and exploitation of the composition search space. The bats' habitat selection, the Doppler effect compensation, and the self-adaptive local search strategy of the NBA method speed up convergence and avoid local optima, thereby enhancing the performance of the QC-NBA algorithm in terms of execution time and composition quality. The results obtained through the simulation scenarios show that the QC-NBA approach achieves good compositions in terms of QoS utility and converges faster than other service composition baselines.
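For intuition, here is a minimal bat-algorithm skeleton minimizing a generic cost function. It omits the paper's habitat selection, Doppler effect compensation, and self-adaptive local search, and all constants are illustrative; for service composition the cost would be a negative QoS utility over candidate service assignments:

```python
import random

def bat_search(cost, dim, bounds, n_bats=10, iters=200, seed=0):
    """Basic bat-algorithm search over a continuous box: bats move
    toward the best-known position at a random pulse frequency, and
    a local random walk around the best refines it greedily."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    best = min(pos, key=cost)[:]
    for _ in range(iters):
        for i in range(n_bats):
            f = rng.uniform(0.0, 2.0)  # pulse frequency
            for d in range(dim):
                vel[i][d] += (best[d] - pos[i][d]) * f
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if rng.random() > 0.5:  # local walk around the best bat
                cand = [min(hi, max(lo, b + 0.1 * rng.gauss(0, 1)))
                        for b in best]
            else:
                cand = pos[i][:]
            if cost(cand) < cost(best):
                best = cand[:]
    return best
```

The paper's extra mechanisms exist precisely because this bare version can stall in local optima on rugged QoS landscapes.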
PubDate: 2022-05-09

• Intelligent energy aware approaches for residential buildings:
state-of-the-art review and future directions

Abstract: Over the past decade, the world's energy consumption has increased considerably, with residential buildings as the primary sector, consuming about a quarter of the total energy produced. Researchers have made significant efforts to reduce energy usage in recent years by implementing energy monitoring and prediction techniques. These techniques have further been utilized for energy optimization in residential buildings and to make consumers aware of their usage patterns. In this paper, intelligent energy-aware approaches are reviewed, focusing on energy monitoring, prediction, optimization, and performance evaluation using benchmark energy datasets. The review concludes with a discussion of future research directions for improving energy-aware approaches in residential buildings.
PubDate: 2022-05-07

• Correction to: High-performance pseudo-anonymization of virtual power
plant data on a CPU cluster

PubDate: 2022-05-05

• A state sharding model on the blockchain

Abstract: Blockchain is an incrementally updated ledger maintained by distributed nodes with excellent features such as decentralization and immutability. However, the development of blockchain is now limited by storage redundancy. Specifically, nodes cannot afford to store a large amount of data, and the common workaround of configuring many ordinary nodes as light nodes sacrifices decentralization, the most important feature of the blockchain. This paper proposes the SSChain sharding scheme, which reduces the storage burden of each node on the blockchain without losing decentralization. First, we shard the state in the blockchain through the Merkle DAG data structure; then we shard all nodes in the blockchain through DHT technology; finally, we store the state on different nodes and establish a mapping relationship. With this scheme, each node only needs to store the state data allocated to it by the storage system and can obtain any other state data it needs from other nodes. Experiments show that the proposed solution is feasible: its storage performance is better than that of other current solutions, and the scalability of the blockchain is improved.
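The node-to-state mapping can be illustrated with hash-based placement, a simple stand-in for a DHT lookup: every node derives the same shard assignment locally, so any node knows where to fetch state it does not store. The hashing scheme below is an assumption for the sketch, not SSChain's actual design:

```python
import hashlib

def shard_of(key, n_shards):
    """Deterministic mapping from a state key to a shard: all nodes
    compute the same placement without coordination."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_shards

def build_shard_map(state_keys, n_shards):
    """Partition the full state across shards; each node stores only
    the shard(s) assigned to it and fetches the rest from peers."""
    shards = {i: [] for i in range(n_shards)}
    for k in state_keys:
        shards[shard_of(k, n_shards)].append(k)
    return shards
```

Because placement is derived from the key itself, no global index must be replicated, which is what lets each node hold only a fraction of the state.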
PubDate: 2022-04-25

• An enhanced privacy-preserving record linkage approach for multiple
databases

Abstract: For the purpose of research, organizations often need to share and link data that belongs to a single individual while protecting privacy, which is referred to as privacy preserving record linkage (PPRL). Various approaches have been developed to tackle this problem; however, it remains challenging due to the massive amount of data, multiple data sources, and ‘dirty’ data. Therefore, in this paper, an enhanced approximate multi-party PPRL (MP-PPRL) approach is proposed to improve privacy, scalability, and linkage quality. For privacy, the Bloom filter (BF) is a better and more efficient masking technique than the alternatives proposed so far, so records are encoded into BFs. However, BFs may be compromised through frequency-based attacks; to enhance privacy, we propose a distributed protocol that introduces multiple linkage units (Multi-LUs) to resist such attacks. For scalability, we develop a blocking technique based on the sorted nearest neighborhood (SNN) approach for clustering similar BFs across multiple databases, called BF-SNN, which dramatically reduces complexity. For linkage quality, a personalized threshold that varies with the level of ‘dirty’ data is introduced, providing more accurate error tolerance for ‘dirty’ data and consequently improving linkage quality. An analysis and an empirical study on large real-world datasets show the benefit of the proposed approach.
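The BF masking and comparison steps can be sketched as follows: each field is split into bigrams, the bigrams are hashed into an m-bit Bloom filter, and two masked records are compared with the Dice coefficient over set bits. The parameters (m, k, q) and the double-hashing scheme are illustrative, not the paper's tuned values:

```python
import hashlib

def qgrams(value, q=2):
    """Split a field value into padded character q-grams."""
    padded = f"_{value.lower()}_"
    return {padded[i:i + q] for i in range(len(padded) - q + 1)}

def bloom_encode(value, m=100, k=5):
    """Encode a field's bigrams into an m-bit Bloom filter using k
    hash functions derived by double hashing."""
    bits = [0] * m
    for g in qgrams(value):
        h1 = int(hashlib.sha256(g.encode()).hexdigest(), 16)
        h2 = int(hashlib.md5(g.encode()).hexdigest(), 16)
        for i in range(k):
            bits[(h1 + i * h2) % m] = 1
    return bits

def dice_similarity(a, b):
    """Dice coefficient on set bits: approximate similarity of two
    masked records without revealing the underlying values."""
    common = sum(x & y for x, y in zip(a, b))
    return 2 * common / (sum(a) + sum(b))
```

Similar names share bigrams and therefore bit positions, so the Dice score approximates string similarity while only masked filters are exchanged, which is exactly the property frequency-based attacks try to exploit and the Multi-LU protocol defends.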
PubDate: 2022-04-22

• Leveraging a cloud-native architecture to enable semantic
interconnectedness of data for cyber threat intelligence

Abstract: Cloud technologies have several merits, including eliminating costs incurred when traditional technologies are adopted. Despite these benefits, the cloud still faces security challenges, calling for cyber threat intelligence capable of identifying threats and providing possible solutions. However, depending on traditional security mechanisms and approaches within cloud environments presents challenges of its own. This calls for cloud-native solutions, which leverage cloud features in the design and development of solutions for data and applications hosted and running within the cloud. Past studies have suggested adopting semantic technologies for cloud-based security mechanisms, but the semantic processing of data faces data interconnectedness challenges due to the aggregation of data from diverse heterogeneous sources. Hence, this study proposes a cloud-native architecture capable of connecting security-related data from different sources in the cloud to enhance cyber threat intelligence. It presents a proof-of-concept implementation of the proposed solution on the Amazon AWS cloud, within an auto-scaling group for scalability and across multiple availability zones for high availability.
PubDate: 2022-04-20

• An intelligent system for blood donation process optimization - smart
techniques for minimizing blood wastages

Abstract: Blood transfusion is in continuous demand, as it is widely required for many medical surgeries and critical operations. Therefore, the whole process of supplying blood from donors to hospitals and transfusion centers needs to be managed. Many researchers have recently taken an interest in the operations and supply chain management of blood products with the purpose of minimizing blood wastage. As a result of the inverse relationship between blood donations and blood product demand, more occasional blood shortages can be expected. This research proposes an intelligent system that entails recruiting donors who are available to donate blood products on short notice. The proposed system optimizes the blood donation process by preventing blood shortages and minimizing the wastage of blood units due to expiration, and shows promising results. A set of optimization equations has been built to optimize the blood donation process, reduce blood wastage, and prevent blood shortage. It also considers new insights from the medical literature on the deterioration of stored blood products, as the use of older red blood cells is linked to poorer clinical outcomes.
PubDate: 2022-04-19

• Video transcoding at the edge: cost and feasibility perspective

Abstract: The developments in smartphones, high data rates, and substantial video data traffic have increased the burden on cellular networks. Consequently, this burden significantly affects the Quality of Experience of cellular users, leading to increased network delay for diverse video content requests. To accommodate requests from different users with varying requirements, one promising solution is to cache videos in the near vicinity of users and transcode them online. The online transcoding is performed at the edge of the cellular network to minimize network delay and use bandwidth efficiently. However, the feasibility of online transcoding is significantly affected by various factors, such as the codecs, the configurations of virtual machines, the cost incurred, and the estimated time to complete the transcoding task, among other parameters. Although online transcoding is discussed adequately in the literature, few studies discuss its feasibility while considering all the aforementioned critical parameters. This study examined the effects of a diverse range of critical parameters on the feasibility of online transcoding. We performed extensive simulations in a local machine environment to study the factors affecting online transcoding in detail, and then transcoded the same videos on Amazon Elastic Compute Cloud (EC2) Virtual Machines (VMs) to further study realistic cloud settings with fine-tuned configurations. Our experiments show the superior performance of some codecs and the effects of machine configurations on transcoding task duration. We aim to provide a benchmark for practitioners and researchers considering online transcoding in real-time multimedia applications.
PubDate: 2022-04-18

• PI-sqrt: novel parallel implementations of in-place sequence rotation on
multicore systems

Abstract: The huge data volumes and the emergence of new parallel architectures, e.g. multicore CPUs, lead to revisiting classic computer science topics such as in-place sequence rotation, a basic step in several fundamental computing tasks. The sequential algorithms for in-place sequence rotation are classic and well-studied, and fall into three classes. Recently, Intel introduced the parallel Standard Template Library (STL) implementation for multicore CPU systems; it has an in-place rotation function based on rotation by copy, but its space complexity is O(n). In this work, we propose the blend rotation, a parallel-friendly and in-place algorithm that combines the merits of the three rotation algorithm classes. In addition, we propose a set of Parallel In-place SeQuence RoTation (PI-sqrt) implementations, whose performance is examined through several experiments. To the best of our knowledge, the obtained running times show that the implementations of the blend and reversal rotations are by far the fastest parallel implementations; across the experiments they are faster on average by 7.85× and 5.52×, respectively, compared to the parallel rotation function of Intel's parallel STL.
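For reference, the sequential reversal rotation (one of the three classic classes the blend algorithm draws on) runs in O(n) time with O(1) extra space, unlike rotation by copy:

```python
def reverse(a, i, j):
    """Reverse a[i..j] in place."""
    while i < j:
        a[i], a[j] = a[j], a[i]
        i += 1
        j -= 1

def rotate_left(a, k):
    """Three-reversal in-place left rotation by k positions:
    reverse the first k elements, then the rest, then everything.
    O(n) time, O(1) extra space."""
    n = len(a)
    if n == 0:
        return a
    k %= n
    reverse(a, 0, k - 1)
    reverse(a, k, n - 1)
    reverse(a, 0, n - 1)
    return a
```

Each reversal is itself embarrassingly parallel over disjoint swap pairs, which is one reason reversal-based rotations parallelize well on multicore CPUs.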
PubDate: 2022-04-18

JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762