Information Sciences
Journal Prestige (SJR): 1.635
Citation Impact (CiteScore): 5
Number of Followers: 549  
 
  Hybrid journal (It can contain Open Access articles)
ISSN (Print): 0020-0255
Published by Elsevier
  • Gated Recurrent Neural Network with Sentimental Relations for Sentiment
           Classification
    • Abstract: Publication date: Available online 18 June 2019. Source: Information Sciences. Author(s): Chaotao Chen, Run Zhuo, Jiangtao Ren. Gated recurrent neural networks (GRNNs) have been very successful in sentiment classification due to their ability to preserve semantics over time. However, modeling sentimental relations such as negation and intensification under a recurrent architecture remains a challenge. In this work, we introduce a gated recurrent neural network with sentimental relations (GRNN-SR) to capture sentimental relation information from the sentiment modifier context and to model its effects in texts. At each time step, GRNN-SR separately encodes the information of sentiment polarity and sentiment modifier context. The new sentiment inputs are modified multiplicatively by the previously encoded sentiment modifier context before they are updated into the current sentiment polarity states, which is more effective than the approach of traditional GRNNs. The experimental results show that our model not only captures sentimental relations but also improves over state-of-the-art gated recurrent neural network baselines.
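     For intuition, a minimal NumPy sketch of a gated step in which the new sentiment input is scaled multiplicatively by the previously encoded modifier state, in the spirit of GRNN-SR (the weight names and exact gating here are illustrative assumptions, not the authors' equations):

       import numpy as np

       def sigmoid(x):
           return 1.0 / (1.0 + np.exp(-x))

       def grnn_sr_step(x_t, h_sent, h_mod, params):
           """One illustrative step: the sentiment input is scaled by the
           previously encoded modifier context before being merged into
           the sentiment-polarity state (hypothetical parameterization)."""
           W_m, W_s, W_z = params["W_m"], params["W_s"], params["W_z"]
           # Encode the sentiment-modifier context (e.g., negation/intensifier cues).
           h_mod_new = np.tanh(W_m @ np.concatenate([x_t, h_mod]))
           # Multiplicative modification by the *previous* modifier state.
           s_t = np.tanh(W_s @ x_t) * sigmoid(h_mod)
           # Standard gated update of the sentiment-polarity state.
           z_t = sigmoid(W_z @ np.concatenate([x_t, h_sent]))
           h_sent_new = (1 - z_t) * h_sent + z_t * s_t
           return h_sent_new, h_mod_new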
       
  • Face Hallucination via Multiple Feature Learning with Hierarchical
           Structure
    • Abstract: Publication date: Available online 18 June 2019. Source: Information Sciences. Author(s): Licheng Liu, Han Liu, Shutao Li, C. L. Philip Chen. In the past few years, neighbor-embedding (NE) based methods have been widely exploited for face hallucination. However, existing NE-based methods in the spatial domain employ only a single type of feature for data representation, ignoring the complementary information among multiple image features and introducing bias into high-resolution (HR) face image reconstruction. To tackle this problem, this paper presents a novel Multiple feature Learning model with Hierarchical Structure (MLHS) for face hallucination. Compared with conventional NE-based methods, the proposed MLHS makes full use of multi-level information in face images, which effectively remedies the flaw caused by using only a single type of spatial pixel feature, and adopts a hierarchical structure to better maintain the manifold consistency hypothesis between the HR and low-resolution (LR) patch spaces. The multiple-feature learning strategy and hierarchical structure enable the proposed MLHS to accurately reconstruct face details such as eyes, nostrils, and mouth. The validity of the proposed MLHS method is confirmed by comparison experiments on several public face databases.
       
  • TOPSIS method based on a fuzzy covering approximation space: an
           application to biological nano-materials selection
    • Abstract: Publication date: Available online 18 June 2019. Source: Information Sciences. Author(s): Kai Zhang, Jianming Zhan, Yiyu Yao. The application of nanomaterials in biomedicine has received increasing attention. Therefore, how to choose the most suitable material among a variety of bio-nanomaterials with similar properties is a significant problem. We describe this problem as a multi-criteria decision-making problem in a fuzzy covering approximation space and solve it via the TOPSIS method. First, combining fuzzy neighborhood operators with fuzzy rough set models, we propose two pairs of covering-based fuzzy rough set models and study their basic properties and classifications. Subsequently, we present a new method for determining objective weights using the first pair of covering-based fuzzy rough set models. Combining this objective criteria-weight determination method with the TOPSIS method, we provide a solution to the problem of selecting bone transplant replacement materials. We then compare the experimental results of our proposed method with those of eight other decision-making methods on practical problems to demonstrate the rationality and applicability of our approach. Finally, we validate the performance of our approach using cross-validation and hypothesis testing.
       
  • Genetic Programming Performance Prediction and its Application for
           Symbolic Regression Problems
    • Abstract: Publication date: Available online 17 June 2019. Source: Information Sciences. Author(s): Samaneh Sadat Mousavi Astarabadi, Mohammad Mehdi Ebadzadeh. Predicting the performance of Genetic Programming (GP) helps us identify whether it is an appropriate approach for the problem at hand. However, previous studies show that measuring the difficulty of a problem for GP and predicting GP performance are challenging issues. This paper presents a theoretical analysis of the GP performance prediction problem and suggests an upper bound on GP performance: the error of the best solution found by GP for a given problem is less than the proposed upper bound. To evaluate the proposed upper bound experimentally, a wide range of synthetic and real symbolic regression problems with different dimensions are solved by GP, yielding a large collection of actual GP performance values. Comparing the actual GP performances with their corresponding upper bounds shows that the proposed upper bounds are violated for neither the synthetic nor the real symbolic regression problems. The proposed upper bound is then used to guide the GP search. The results show that the proposed approach finds better results than Multi Gene Genetic Programming (MGGP).
       
  • Robust Face Hallucination via Locality-constrained Multiscale Coding
    • Abstract: Publication date: Available online 17 June 2019. Source: Information Sciences. Author(s): Na Li, Licheng Liu, Shutao Li, Hui Lin. Face hallucination (FH) aims to produce High Resolution (HR) face images from Low Resolution (LR) observations. Unfortunately, most existing FH methods fail to make full use of local geometrical information, especially when the LR images are corrupted by noise. Inspired by the observation that regions with large scales can provide much useful information, in this paper we propose a Robust Locality-constrained Multiscale Coding (RLMC) based method to predict HR face images while suppressing noise and outliers. In RLMC, a weight vector is used in the loss function to ease the effect of outliers on data representation. Furthermore, abundant local information can be exploited by jointly representing overlapping patches at multiple scales: simultaneously encoding patches at multiple scales encourages the scales to share complementary information, which enables the proposed method to generate more appropriate coefficients for super-resolution reconstruction. Experimental results verify the effectiveness of the proposed method in terms of both quantitative measurements and visual impressions.
       
  • A Double PUF-based RFID Identity Authentication Protocol in
           Service-Centric Internet of Things Environments
    • Abstract: Publication date: Available online 17 June 2019. Source: Information Sciences. Author(s): Wei Liang, Songyou Xie, Jing Long, Kuan-Ching Li, Dafang Zhang, Keqin Li. The rapid development of sensing, automation, and communication technologies has led to the proliferation of the Internet of Things (IoT), providing promising opportunities to build complex industrial systems and applications that leverage the growing ubiquity of Radio Frequency Identification (RFID) and wireless sensor devices. With the pervasiveness of interconnected systems encompassing an ever-growing number of RFID-enabled devices, RFID security is an issue of high concern. As a lightweight encryption primitive, the Physical Unclonable Function (PUF) is used to protect the information security of low-cost devices; unfortunately, PUF-based schemes remain vulnerable to attacks, so countermeasures should be employed in the design. Aiming at low cost and security for connected IoT devices, and to satisfy the various security requirements of RFID technology in the IoT, a two-stage multiple-choice arbiter (TSMCA)-based PUF for RFID systems, referred to as the TSMCA PUF, is proposed. On top of it, we design a double PUF-based bidirectional RFID identity authentication protocol that realizes bidirectional authentication between a server and a tag for IC authentication in low-cost RFID systems, where exclusive-OR (XOR) and character padding operations are adopted to generate the PUF response, and a string-matching method is used in authentication without exposing the PUF response to the verifier. Evaluation and analysis show that the advantages over conventional schemes include reduced area, higher randomness, and higher stability, and experimental results indicate that the proposed protocol is resilient against attacks and practical for the deployment of low-cost hardware.
       
  • Multiple ellipse fitting of densely connected contours
    • Abstract: Publication date: Available online 17 June 2019. Source: Information Sciences. Author(s): Hui Li. Multiple ellipse fitting is challenging and at the same time essential, as it has a variety of applications in biology, chemistry, and nanotechnology. An accurate, effective, and reliable approach to the fitting problem has always been desirable. In this paper, we address a category of multiple ellipse fitting problems that fit densely connected contours. We propose a framework, rather than a single algorithm, for the problem. The framework streamlines five processes: sorting the contour points, fitting ellipses in sliding windows, detecting context anomalies, performing clustering, and obtaining multiple ellipses through a second round of ellipse fitting. The framework is evaluated on a real-world application of handprint identification and on various synthetic datasets. Experimental results show that the framework can extract multiple ellipses from contours with satisfactory accuracy and efficiency.
       
  • A simple PID-based strategy for particle swarm optimization algorithm
    • Abstract: Publication date: Available online 17 June 2019. Source: Information Sciences. Author(s): Zhenglong Xiang, Daomin Ji, Heng Zhang, Hongrun Wu, Yuanxiang Li. The particle swarm optimization (PSO) algorithm, which updates particles by considering their past momentum and current direction, has demonstrated its power in several optimization applications. However, the updating strategy followed by the standard PSO mainly aims to learn from the global optimum, which often causes PSO to suffer from premature convergence. Using the past momentum can also cause an overshoot problem, which usually slows down convergence in complex optimization problems. Inspired by the massive success of the proportional-integral-derivative (PID) controller in automatic control, we first establish a connection between the PSO process and a PID controller-based control system. We then propose a PID-based strategy for PSO (PBS-PSO) to accelerate convergence and adjust the search direction to escape local optima. The proposed PBS-PSO uses the past and current global best, together with its change, to update the search direction. We conduct experiments on the CEC2013 benchmark test suite. The experimental results demonstrate the effectiveness of our proposed PBS-PSO algorithm. Most importantly, we find that the proposed PID-controller-based strategy generalizes well: it can be combined with other PSO variants to improve convergence performance in most cases.
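     A minimal sketch of a PID-flavoured velocity update of the kind described above (the gain values and term definitions are assumptions for illustration, not the PBS-PSO equations):

       import numpy as np

       def pid_pso_velocity(v, x, gbest, gbest_prev, integral,
                            kp=1.5, ki=0.05, kd=0.5, w=0.7):
           """Illustrative PID-style PSO velocity update: the 'error' is the
           gap to the global best; its accumulation and its change supply
           the integral and derivative terms, respectively."""
           r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
           error = gbest - x                  # proportional: current direction
           integral = integral + error        # integral: accumulated error
           derivative = gbest - gbest_prev    # derivative: change in the global best
           v_new = w * v + kp * r1 * error + ki * integral + kd * r2 * derivative
           return v_new, integral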
       
  • RDMA-Driven MongoDB: An Approach of RDMA Enhanced NoSQL Paradigm for
           Large-Scale Data Processing
    • Abstract: Publication date: Available online 17 June 2019. Source: Information Sciences. Author(s): Bobo Huang, Li Jin, Zhihui Lu, Ming Yan, Jie Wu, Patrick C.K. Hung, Qifeng Tang. With the rapid development of big data and data center networks, NoSQL databases such as HBase, Cassandra, and MongoDB have won great popularity for their excellent performance in accelerating many online and offline big data applications. However, due to massive and frequent Create/Update/Retrieval/Delete (CURD) operations, the traditional TCP/IP protocol stack has difficulty providing the request rates and response latency required by large-scale NoSQL systems. For example, large-scale data migration or synchronization among multiple clusters in a data center results in competition for network bandwidth and high delay. To mitigate this transmission bottleneck, we propose RDMA_Mongo, an RDMA-driven document NoSQL paradigm based on MongoDB. The performance of CURD operations is enhanced by one-sided Remote Direct Memory Access (RDMA) primitives (such as RDMA Read/Write) without involving the TCP/IP stack or the CPU. Evaluation on an RDMA-enabled network demonstrates that RDMA_Mongo significantly improves CURD performance compared with plain MongoDB. The results show that, under large-scale data requests, the average insert throughput increases by approximately 30%, the average delete throughput by over 30%, the update throughput by up to 17%, and the query throughput by 15%.
       
  • Top distance regularized projection and dictionary learning for person
           re-identification
    • Abstract: Publication date: Available online 17 June 2019. Source: Information Sciences. Author(s): Huafeng Li, Jiajia Xu, Jinting Zhu, Dapeng Tao, Zhengtao Yu. Person re-identification (PRID) is integral to many smart surveillance systems. However, owing to the visual ambiguities arising from variability in viewing angles and illumination, and the presence of occlusions, PRID continues to present many challenges, especially when only a single image per view is available for each person. To overcome this problem, we propose a top distance regularized projection and dictionary learning (DL) model for PRID. The model incorporates both projection and DL into a unified optimization framework to enhance the effectiveness of both types of learning; thus, the dictionary and projection matrix are jointly learned within this framework. In particular, the learned projection maps the coding coefficients into a discriminative space and minimizes the distance between images of the same person across non-overlapping views, so that the dictionary and projection become discriminative. Moreover, we exploit listwise distances to capture all pairwise similarities. Based on this design, we derive a top distance regularization term to refine the solution space of the DL model, further improving the discriminative ability of the learned projection matrix and dictionary. Experiments on several challenging datasets demonstrate the effectiveness of our method and its superiority over current state-of-the-art approaches.
       
  • Quantum vision representations and multi-dimensional quantum transforms
    • Abstract: Publication date: October 2019. Source: Information Sciences, Volume 502. Author(s): Hai-Sheng Li, Shuxiang Song, Ping Fan, Huiling Peng, Hai-ying Xia, Yan Liang. Quantum vision representation (QVR) is the foundation of quantum vision information processing and a possible solution for storing and processing massive visual data efficiently. In this paper, we first divide quantum image representations into three categories based on their methods of color information storage. Second, to systematize quantum image representation, we propose five new methods. Third, we develop models of QVR by extending the three categories of quantum image representations into corresponding QVRs. Next, we design and implement 1D, 2D, and 3D quantum transforms based on QVR for the first time. Simulation experiments demonstrate that the proposed multi-dimensional quantum transforms are effective. In conclusion, this paper develops a model of QVR and provides a feasible scheme for applying multi-dimensional quantum transforms in quantum vision information processing.
       
  • Comparative evaluation of region query strategies for DBSCAN clustering
    • Abstract: Publication date: October 2019. Source: Information Sciences, Volume 502. Author(s): Severino F. Galán. Clustering is a technique that allows data to be organized into groups of similar objects. DBSCAN (Density-Based Spatial Clustering of Applications with Noise) constitutes a popular clustering algorithm that relies on a density-based notion of cluster and is designed to discover clusters of arbitrary shape. The computational complexity of DBSCAN is dominated by the calculation of the ϵ-neighborhood for every object in the dataset. Thus, the efficiency of DBSCAN can be improved in two different ways: (1) by reducing the overall number of ϵ-neighborhood queries (also known as region queries), or (2) by reducing the complexity of the nearest neighbor search conducted for each region query. This paper deals with the first issue by considering the most relevant region query strategies for DBSCAN, all of them characterized by inspecting the neighborhoods of only a subset of the objects in the dataset. We comparatively evaluate these region query strategies (or DBSCAN variants) in terms of clustering effectiveness and efficiency; additionally, a novel region query strategy is introduced in this work. The results show that some DBSCAN variants are only slightly inferior to DBSCAN in terms of effectiveness, while greatly improving its efficiency. Among these variants, the novel one outperforms the rest.
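     For reference, the ϵ-neighborhood (region) query that dominates DBSCAN's cost, plus a sampled-seed selector in the spirit of the reduced-query variants compared above (illustrative only; not the paper's specific strategies):

       import numpy as np

       def region_query(X, i, eps):
           """Naive O(n) epsilon-neighborhood query for point i; DBSCAN's total
           cost is dominated by how many such queries are issued."""
           d = np.linalg.norm(X - X[i], axis=1)
           return np.flatnonzero(d <= eps)

       def sampled_seeds(n, rate=0.2, rng=np.random.default_rng(0)):
           """Inspect neighborhoods for only a sampled subset of objects,
           the general idea behind the reduced-query DBSCAN variants."""
           return rng.choice(n, size=max(1, int(rate * n)), replace=False)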
       
  • Long short-term memory-based deep recurrent neural networks for target
           tracking
    • Abstract: Publication date: Available online 15 June 2019. Source: Information Sciences. Author(s): Chang Gao, Junkun Yan, Shenghua Zhou, Pramod K. Varshney, Hongwei Liu. Target tracking is a difficult estimation problem due to target motion uncertainty and measurement origin uncertainty. In this paper, we consider the target tracking problem in the presence of only target motion uncertainty. Traditional approaches to addressing this uncertainty, such as multiple model approaches, can suffer performance degradation when there is a model mismatch. The statistical accuracy of conventional model-based methods is also usually limited by measurement errors and insufficient data for estimation. In this paper, deep neural network-based methods are proposed to handle target motion uncertainty, owing to their strong capability of fitting any mapping given sufficient training data. Specifically, a recurrent neural network-based structure is proposed to estimate the true states, in a manner consistent with the sequential nature of target tracking. In addition, better performance is expected because the networks have access to the true states during training. We propose two networks that are based on different principles and are capable of real-time tracking, and we introduce an approach to further reduce the computational load. Simulation results show that the proposed methods handle the target motion uncertainty well and provide better estimation accuracy.
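     A minimal PyTorch sketch of an LSTM-based sequential state estimator of the kind described above (layer sizes and dimensions are assumptions; the paper's two specific network designs are not reproduced here):

       import torch
       import torch.nn as nn

       class TrackerRNN(nn.Module):
           """Illustrative recurrent state estimator: maps a sequence of noisy
           measurements to per-step state estimates."""
           def __init__(self, meas_dim=2, state_dim=4, hidden=64):
               super().__init__()
               self.lstm = nn.LSTM(meas_dim, hidden, num_layers=2, batch_first=True)
               self.head = nn.Linear(hidden, state_dim)

           def forward(self, z):          # z: (batch, time, meas_dim)
               h, _ = self.lstm(z)
               return self.head(h)        # (batch, time, state_dim)

       # Trained with MSE against ground-truth states available in simulation:
       # loss = nn.functional.mse_loss(model(z), x_true)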
       
  • Sparsity in function and derivative approximation via the Empirical
           Feature Space
    • Abstract: Publication date: Available online 14 June 2019. Source: Information Sciences. Author(s): Sumit Soman, Jayadeva, Rajat Thakur, Mayank Sharma, Suresh Chandra. Several practical applications require estimating the values of a function and its derivative at specific sample locations. This is a challenging task, particularly when the explicit forms of the function and its derivative are not known. A few methods have been proposed in the literature to learn an approximant that simultaneously uses values of a function as well as values of its derivatives or partial derivatives. These methods typically use Support Vector Regression (SVR) and solve a Quadratic Programming Problem (QPP), which results in a learning model that can estimate the function and derivative values. In this paper, we propose an alternative approach that focuses on introducing sparsity into such a learning model by minimizing the model complexity in the Empirical Feature Space (EFS). Sparsity in such a model is useful when it must be evaluated a large number of times, as a sparse model entails lower computational cost than a dense one. The proposed approach, called EFSRD (EFS Regression for Function and Derivative approximation), involves solving a Linear Programming Problem (LPP). On a number of benchmark examples, EFSRD learns models that offer comparable or better performance while being nearly a quarter of the size of those obtained by existing approaches.
       
  • Smoothed Self-Organizing Map for robust clustering
    • Abstract: Publication date: Available online 13 June 2019. Source: Information Sciences. Author(s): Pierpaolo D'Urso, Livia De Giovanni, Riccardo Massari. In this paper a Self-Organizing Map (SOM) robust to the presence of outliers, the Smoothed SOM (S-SOM), is proposed. S-SOM improves the input density mapping, vector quantization, and clustering properties of the standard SOM in the presence of outliers by upgrading the learning rule to smooth the representation of outlying input vectors onto the map. The upgraded learning rule is based on the complementary exponential distance between the input vector and its closest codebook. The convergence of the S-SOM to a stable state is proved. Three comparative simulation studies and an application to digital innovation data show the robustness and effectiveness of the proposed S-SOM. Supplementary materials for this article are available.
       
  • Privacy-Preserving Authentication for General Directed Graphs in
           Industrial IoT
    • Abstract: Publication date: Available online 13 June 2019. Source: Information Sciences. Author(s): Fei Zhu, Wei Wu, Yuexin Zhang, Xiaofeng Chen. Previous solutions for security and privacy in the Industrial Internet of Things (IIoT) focus primarily on the data or on the deployment and communication problems of devices. However, the management and authentication of IoT devices in IIoT systems have received less attention. To facilitate research in this area, we take the IIoT control system as an example. It is crucial to authenticate the relationships among devices to ensure that data can only be accessed by authenticated parties. In this paper, we provide a feasible solution to this issue. Considering that a directed graph can intuitively represent such relationships, we present a dynamically updatable privacy-preserving authentication scheme for general directed graphs. Specifically, our scheme is based on a cryptographic accumulator and an underlying standard digital signature scheme. We formalize its security notions and prove its security under adaptive chosen-message attacks. The efficiency of our scheme is evaluated to illustrate its practicality for real-world applications.
       
  • Semantic Periodic Pattern Mining from Spatio-temporal Trajectories
    • Abstract: Publication date: Available online 13 June 2019. Source: Information Sciences. Author(s): Dongzhi Zhang, Kyungmi Lee, Ickjai Lee. Rapid development in GPS tracking techniques produces large amounts of spatio-temporal trajectory data. The analysis of these data provides a new opportunity to discover behavioural patterns. Spatio-temporal periodic pattern mining finds temporal regularities at interesting places; mining periodic patterns from spatio-temporal trajectories reveals useful and important information about people's regular and recurrent movements and behaviours. Existing periodic pattern mining algorithms suffer from two main drawbacks: they assume regularly sampled and evenly spaced trajectory data as input, unlike real-world data, and they fail to consider background aspatial information, despite many applications requiring a semantic interpretation of movement behaviours. In this paper, we propose a new semantic periodic pattern mining algorithm for spatio-temporal trajectories that overcomes these two drawbacks of past studies. Experimental results on real-world datasets demonstrate the efficiency and effectiveness of our proposed method.
       
  • Result Diversification in Image Retrieval Based on Semantic Distance
    • Abstract: Publication date: Available online 11 June 2019. Source: Information Sciences. Author(s): Wei Lu, Mengqi Luo, Zhenyu Zhang, Guobiao Zhang, Heng Ding, Haihua Chen, Jiangping Chen. User requirements for result diversification in image retrieval have been increasing with the explosion of image resources. Result diversification requires image retrieval systems to handle the semantic gap between image visual features and semantic concepts, and to provide image results that are both relevant and diversified. Context information, such as captions, descriptions, and tags, provides opportunities for image retrieval systems to improve their result diversification. This study explores a mechanism for improving result diversification using the semantic distance of image social tags. We design and compare nine strategies that combine three semantic distance algorithms (WordNet, Google Distance, and Explicit Semantic Analysis) with three re-ranking algorithms (MMR, xQuAD, and Score Difference) for result diversification. To better demonstrate the effectiveness of applying semantic information, we also use image visual features in the result diversification experiments for comparison. Our experimental data were extracted from 269,648 images selected from the NUS-WIDE datasets, with manually annotated subtopics. Experimental results affirm the effectiveness of applying semantic information to improve result diversification in image retrieval. In particular, WordNet-based semantic distance combined with Score Difference (WordNet-DivScore) outperformed the other strategies in diversifying image retrieval results.
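     A generic Maximal Marginal Relevance (MMR) re-ranker, one of the three re-ranking algorithms compared above; the semantic distance plugged into `distance` would be WordNet, Google Distance, or ESA over image tags (a sketch under assumed score/distance interfaces, not the paper's implementation):

       def mmr_rerank(candidates, relevance, distance, lam=0.7, k=10):
           """Greedy MMR: at each step pick the item balancing relevance
           against similarity to what is already selected.
           relevance: dict item -> score; distance: f(a, b) -> semantic distance."""
           selected = []
           pool = list(candidates)
           while pool and len(selected) < k:
               def mmr(d):
                   # Diversity term: similarity to the closest selected item.
                   max_sim = max((1.0 - distance(d, s) for s in selected), default=0.0)
                   return lam * relevance[d] - (1.0 - lam) * max_sim
               best = max(pool, key=mmr)
               selected.append(best)
               pool.remove(best)
           return selected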
       
  • Hybrid evolutionary search for the traveling repairman problem with
           profits
    • Abstract: Publication date: Available online 11 June 2019. Source: Information Sciences. Author(s): Yongliang Lu, Jin-Kao Hao, Qinghua Wu. The Traveling Repairman Problem with Profits is a node routing problem in which a repairman visits a subset of nodes of a weighted graph in order to maximize the collected time-dependent profits. In this work, we present the first population-based hybrid evolutionary search algorithm for solving the problem, which combines: (i) a randomized greedy construction method for initial solution generation, (ii) a dedicated variable neighborhood search for local optimization, and (iii) two crossover operators for solution recombination with an adaptive rule for crossover selection. Computational results on six sets of 120 benchmark instances from the literature demonstrate that the proposed algorithm achieves high performance: it improves the best-known results (new lower bounds) for 39 instances, while matching the best-known results for the remaining cases. We also investigate several main algorithmic ingredients to understand their impact on the performance of the algorithm.
       
  • Feature selection using neighborhood entropy-based uncertainty measures
           for gene expression data classification
    • Abstract: Publication date: Available online 11 June 2019. Source: Information Sciences. Author(s): Lin Sun, Xiaoyu Zhang, Yuhua Qian, Jiucheng Xu, Shiguang Zhang. Gene expression data classification is an important technology for cancer diagnosis in bioinformatics and has been widely researched. Due to the large number of genes and the small sample size in gene expression data, feature selection based on neighborhood rough sets is a key step for improving the performance of gene expression data classification. However, some quantitative measures of feature sets may be nonmonotonic in neighborhood rough sets, and many feature selection methods based on evaluation functions yield high cardinality and low predictive accuracy. Therefore, investigating effective and efficient heuristic reduction algorithms is necessary. In this paper, a novel feature selection method based on neighborhood rough sets using neighborhood entropy-based uncertainty measures is proposed for cancer classification from gene expression data. First, some neighborhood entropy-based uncertainty measures are investigated for handling the uncertainty and noise of neighborhood decision systems. Then, to fully reflect the decision-making ability of attributes, the neighborhood credibility and neighborhood coverage degrees are defined and introduced into decision neighborhood entropy and mutual information, which are proven to be nonmonotonic. Moreover, some of the properties and relationships among these measures are derived, which is helpful for understanding the essence of the knowledge content and the uncertainty of neighborhood decision systems. Finally, the Fisher score method is employed to preliminarily eliminate irrelevant genes and significantly reduce complexity, and a heuristic feature selection algorithm with low computational complexity is presented to improve the performance of cancer classification using gene expression data. Experiments on ten gene expression datasets show that our proposed algorithm is indeed efficient and outperforms other related methods in terms of the number of selected genes and the classification accuracy, especially as the number of genes increases.
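     The Fisher score used in the preliminary gene-filtering step has a standard form; a minimal sketch (the neighborhood-entropy reduction that follows it in the paper is not shown):

       import numpy as np

       def fisher_score(X, y):
           """Per-gene Fisher score: between-class variance of the gene's mean,
           normalized by the within-class variance."""
           classes = np.unique(y)
           mu = X.mean(axis=0)
           num = np.zeros(X.shape[1])
           den = np.zeros(X.shape[1])
           for c in classes:
               Xc = X[y == c]
               nc = Xc.shape[0]
               num += nc * (Xc.mean(axis=0) - mu) ** 2
               den += nc * Xc.var(axis=0)
           return num / np.maximum(den, 1e-12)

       # Keep the top-m genes before running the heuristic reduction:
       # top = np.argsort(-fisher_score(X, y))[:m]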
       
  • Combining pre- and post-model information in the uncertainty
           quantification of non-deterministic models using an extended Bayesian
           melding approach
    • Abstract: Publication date: Available online 11 June 2019. Source: Information Sciences. Author(s): Lechang Yang, Yanling Guo. Due to the increasing complexity of manufacturing processes and the diversity of information sources, it is not rare in practical engineering for multiple priors to be simultaneously available for the same quantity. To address this issue, which arises from inconsistent information from different sources, we propose a probability framework to quantify the uncertainty of a general propagation model. An extended Bayesian melding approach is developed to eliminate the limitations inherent in traditional Bayesian methods. We find that the aggregation error caused by inconsistent multi-source information can be alleviated by combining the pre- and post-model information. Novel features of our approach include a modified sampling importance resampling algorithm in which a distribution mixture technique is adopted to reduce the computational cost. To meet practical engineering requirements, the approach is extended to a non-deterministic scenario that has not been covered by existing studies. We use several case studies to validate our proposal and its benefits in practical applications.
       
  • A multiple surrogate assisted multi/many-objective multi-fidelity
           evolutionary algorithm
    • Abstract: Publication date: Available online 11 June 2019. Source: Information Sciences. Author(s): Ahsanul Habib, Hemant K. Singh, Tapabrata Ray. Engineering design commonly involves optimization of multiple conflicting performance objectives. During the optimization process, the performance of each candidate design/solution is evaluated using a model, which may be empirical, numerical, or experimental, among other forms. The accuracy of the underlying model in representing the real-world behavior is referred to as fidelity. A low-fidelity model may be quick to evaluate but not very accurate, whereas a high-fidelity model may be computationally expensive to evaluate but provides an accurate estimate of the true performance. The paradigm of utilizing the low- and high-fidelity models' information to identify the high-fidelity optimal solution(s) is known as multi-fidelity optimization. This study delves into multi-fidelity optimization for problems that contain multiple objectives and in which iterative solvers such as finite element analysis and computational fluid dynamics are used for performance evaluation. By stopping the solver at various stages before convergence, lower-fidelity performance estimates can be obtained at reduced computational cost. Most of the existing multi-fidelity methods can only deal with two fidelities (high and low) and a single objective. To overcome this research gap, we present a novel multi-objective evolutionary algorithm that can deal with an arbitrary number of fidelities by effectively utilizing pre-converged low-fidelity information. The proposed algorithm uses multiple surrogate models to capture the underlying function(s) with enhanced precision. A decomposition-based scheme is deployed for improved scalability to higher numbers of objectives. A classifier-assisted pre-selection method is used to screen potential non-dominated solutions for efficient use of the computational budget. Additionally, a set of multi-fidelity, multi/many-objective benchmark problems with different Pareto front types is introduced to aid systematic benchmarking. Numerical experiments are presented to highlight the efficacy of the proposed approach.
       
  • Aggregation on ordinal scales with the Sugeno integral for biomedical
           applications
    • Abstract: Publication date: Available online 11 June 2019. Source: Information Sciences. Author(s): Gleb Beliakov, Marek Gagolewski, Simon James. The Sugeno integral is a function particularly suited to the aggregation of ordinal inputs. Defined with respect to a fuzzy measure, its ability to account for complementary and redundant relationships between variables brings much potential to the field of biomedicine, where it is common for measurements and patient information to be expressed qualitatively. However, practical applications require well-developed methods for identifying the Sugeno integral's parameters, and this task is not easily expressed using the standard optimisation approaches. Here we formulate the objective function as the difference of two convex functions, which enables the use of specialised numerical methods. Such techniques are compared with other global optimisation frameworks through a number of numerical experiments.
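     The discrete Sugeno integral itself has a standard closed form; a minimal sketch (the paper's contribution is fitting the fuzzy measure, which this sketch does not cover):

       def sugeno_integral(values, mu):
           """Discrete Sugeno integral of `values` (dict criterion -> ordinal
           score in [0, 1]) w.r.t. fuzzy measure `mu` (f(frozenset) -> [0, 1],
           monotone, mu(empty set) = 0, mu(all criteria) = 1)."""
           order = sorted(values, key=values.get)      # ascending by score
           result = 0.0
           for i, c in enumerate(order):
               upper = frozenset(order[i:])            # criteria scoring >= values[c]
               result = max(result, min(values[c], mu(upper)))
           return result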
       
  • A Feasible-Ratio Control Technique for Constrained Optimization
    • Abstract: Publication date: Available online 10 June 2019. Source: Information Sciences. Author(s): Ruwang Jiao, Sanyou Zeng, Changhe Li. In constrained optimization problems (COPs), a crucial issue is that most constraint-handling evolutionary algorithms (EAs) approach the optimum either mainly from feasible regions or mainly from infeasible regions, which may bias the search between feasible and infeasible solutions. To address this issue, we propose a feasible-ratio control technique that controls the ratio of feasible solutions in the population. Using this technique, an EA can keep the search balanced between feasible and infeasible regions. Based on this technique, we propose a constraint-handling EA, named FRC-CEA, which consists of two-stage optimization. In the first stage, an enhanced dynamic constrained multi-objective evolutionary algorithm (DCMOEA) with the feasible-ratio control technique is adopted to handle constraints. In the second stage, a commonly used differential evolution (DE) is used to speed up convergence. The performance of the proposed method is evaluated and compared with six state-of-the-art constraint-handling algorithms on two benchmark test suites. Experimental results suggest that the proposed method outperforms or is highly competitive with the compared algorithms on most test problems.
       
  • Functional Broadcast Encryption with Applications to Data Sharing for
           Cloud Storage
    • Abstract: Publication date: Available online 10 June 2019. Source: Information Sciences. Author(s): Huige Wang, Yuan Zhang, Kefei Chen, Guangye Sui, Yunlei Zhao, Xinyi Huang. Cloud storage services provide data owners an efficient and flexible way to share data. Some of the shared data are very sensitive and should be protected against any leakage. If users conventionally encrypt the data, however, flexible sharing is lost. Public-key encryption with access control (PEAC) resolves this tension. Most existing PEAC schemes only allow the data owner to control either which parts of the data can be accessed by other users (file-based PEAC) or the membership of the users that can access the entire data set (receiver-based PEAC). In reality, however, a PEAC scheme with both file-based and receiver-based functionalities is required to ensure the efficiency, flexibility, and fine granularity of the data sharing service. In this paper, we introduce the primitive of functional broadcast encryption (FBE). FBE is a manifestation of PEAC that enables a data owner to share a set of data files with a group of users, where only a specific subset of the data files can be accessed and decrypted by a specific subgroup of users. We describe a construction of FBE based on indistinguishability obfuscation (iO). Security analysis demonstrates that the proposed scheme achieves selective IND-CCA security, and a comprehensive performance analysis shows that the proposed scheme is efficient.
       
  • Uncertainty measures for interval set information tables based on interval
           δ-similarity relation
    • Abstract: Publication date: Available online 10 June 2019. Source: Information Sciences. Author(s): Yimeng Zhang, Xiuyi Jia, Zhenmin Tang, Xianzhong Long. The notion of an uncertainty measure is one of the most important topics in rough set theory and has been studied in various kinds of information tables. However, few studies have focused on the interval set information table, which is regarded as one of the generalized models of single-valued information tables. This paper studies uncertainty measures for interval set information tables. First, an interval δ-similarity relation is induced based on the similarity degree; this similarity relation induces granules, which form a covering of interval set information tables. Second, four types of granularity measures are defined to measure the granularity of a covering. Third, the concepts of accuracy and roughness in rough set theory are extended to δ-accuracy and δ-roughness for interval set information tables. Furthermore, four new combinations of uncertainty measures, built from the proposed granularity measures together with δ-accuracy and δ-roughness, are defined and analyzed. Theoretical analyses and experimental results illustrate that the proposed measures are effective and accurate for interval set information tables.
       
  • Comparative Analysis of Features Extracted from EEG Spatial, Spectral and
           Temporal Domains for Binary and Multiclass Motor Imagery Classification
    • Abstract: Publication date: Available online 10 June 2019. Source: Information Sciences. Author(s): Seung-Bo Lee, Hyun-Ji Kim, Hakseung Kim, Ji-Hoon Jeong, Seong-Whan Lee, Dong-Joo Kim. The electroencephalogram (EEG) remains the predominant source of neurophysiological signals for motor imagery-based brain-computer interfaces (MI-BCIs). Various features can be derived from three distinctive domains (i.e., spatial, temporal and spectral); however, the efficacies of the existing feature extraction methods when discriminating complex multiclass MI tasks have yet to be reported. This study investigates the performances of EEG feature extraction techniques from varying domains against different levels of complex, multiclass MI tasks. Ten healthy volunteers underwent multiple complex MI tasks via a robotic arm (i.e., hand grasping and wrist twisting; grasp, spread, pronation and supination). The discrimination performances of various feature extraction methods (i.e., common spatial patterns (CSP), time domain parameters (TDP), and power spectral density (PSD)) and classification methods for EEG were tested to perform binary (hand grasping/wrist twisting), ternary ((A) grasp/spread/wrist twisting and (B) hand grasping/pronation/supination) and quaternary (grasp/spread/pronation/supination) discrimination. Based on the available data, the combination of shrinkage-regularized linear discriminant analysis (SRLDA) and TDP achieved the highest accuracy. The findings suggest that multiclass complex MI-BCI task discrimination could gain more benefit from analyzing simple and symbolic features such as TDP rather than more complex features such as CSP and PSD.
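     As an illustration of one spectral-domain feature of the kind compared above, a band-power computation from a Welch PSD estimate (the band choice and window length are assumptions; CSP and TDP extraction are not shown):

       import numpy as np
       from scipy.signal import welch

       def band_power(eeg, fs, band=(8, 30)):
           """Per-channel log band power (e.g., mu/beta band for motor imagery).
           eeg: array of shape (channels, samples); fs: sampling rate in Hz."""
           f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs), axis=-1)
           mask = (f >= band[0]) & (f <= band[1])
           return np.log(pxx[..., mask].mean(axis=-1))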
       
  • Note on Entropies of Hesitant Fuzzy Linguistic Term Sets and Their
           Applications
    • Abstract: Publication date: Available online 9 June 2019. Source: Information Sciences. Author(s): Cuiping Wei, Peng Li, Rosa M. Rodríguez. The hesitant fuzzy linguistic term set (HFLTS) is very useful for depicting situations where people are hesitant to provide their opinions or assessments. In an HFLTS, two types of uncertainty should be considered: fuzziness and hesitation. This paper investigates how to apply these different uncertainty facets in different decision-making settings. First, a new construction method for a fuzzy entropy for HFLTSs is proposed and compared with other methods already introduced in the literature. Afterwards, these entropy formulas are used in two algorithms for deriving the criteria weights and the expert weights. Different from existing applications, it is stressed that in deriving the criteria weights, only the hesitancy of the HFLTS should be considered, whereas in deriving the expert weights from hesitant fuzzy preference relation information, both the fuzziness and the hesitancy of the evaluation information should be involved.
       
  • Hyper-Laplacian Regularized Nonlocal Low-rank Matrix Recovery for
           Hyperspectral Image Compressive Sensing Reconstruction
    • Abstract: Publication date: Available online 9 June 2019. Source: Information Sciences. Author(s): Jize Xue, Yongqiang Zhao, Wenzhi Liao, Jonathan Cheung-Wai Chan. The sparsity prior is a powerful tool for compressive sensing reconstruction (CSR) of hyperspectral images (HSI). However, conventional HSI-CSR strategies are not tuned to extracting refined spatial and spectral sparsity priors; moreover, these CSR techniques are weak at preserving edges and suppressing artifacts. To alleviate these issues, this paper represents a first effort to characterize spatial and spectral knowledge using a structure-based sparsity prior. Specifically, we introduce a nonlocal low-rank matrix recovery model and a hyper-Laplacian prior to encode the spatial and spectral structured sparsity, respectively. The key advantage of the proposed method, termed hyper-Laplacian regularized nonlocal low-rank matrix recovery (HyNLRMR), is that it exploits nonlocal self-similarity across the spatial domain and consistency along the spectral domain. The alternating direction method of multipliers (ADMM) is then used to implement the proposed algorithm effectively. Experimental results on various HSI datasets verify that the proposed algorithm significantly outperforms existing state-of-the-art HSI-CSR methods.
       
  • Some properties of fuzzy implications based on copulas
    • Abstract: Publication date: Available online 8 June 2019. Source: Information Sciences. Author(s): Piotr Helbin, Michał Baczyński, Przemysław Grzegorzewski, Wanda Niemyska. In 2011 Grzegorzewski introduced two new families of fuzzy implication functions called probabilistic implications and probabilistic S-implications. They are based on copulas and build a bridge between probability theory and fuzzy logic. Another family of fuzzy conditional implication operators was proposed by Dolati et al. in 2013. In this paper we consider properties of these three classes of fuzzy implications, such as the law of contraposition and the law of importation. Moreover, we examine the intersections of these families with R-implications, (S, N)-implications, QL-operations, and Yager's f- and g-generated implications.
       
  • Conditional Importance Sampling for Particle Filters
    • Abstract: Publication date: Available online 8 June 2019. Source: Information Sciences. Author(s): Qingming Zhang, Buhai Shi, Yuhao Zhang. In this paper, we present a new importance sampling method, namely conditional importance sampling (CIS). The new method uses a conditional density as the proposal density and exploits rejection sampling, adaptively discarding samples whose importance weights are relatively low. CIS improves estimation efficiency without introducing bias. We apply CIS to the bootstrap filter to obtain a new algorithm, named the conditional bootstrap filter, which achieves higher estimation efficiency than the bootstrap filter and shows advantages over several other filters in our simulations.
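     For context, one step of the plain bootstrap filter that CIS builds on (a sketch with assumed transition/likelihood interfaces; the conditional proposal and rejection step of CIS are not reproduced here):

       import numpy as np

       def bootstrap_step(particles, weights, transition, likelihood, y):
           """Propagate particles through the prior dynamics, reweight by the
           likelihood of the new measurement y, then resample."""
           particles = transition(particles)            # proposal = prior dynamics
           weights = weights * likelihood(y, particles)
           weights /= weights.sum()
           idx = np.random.choice(len(particles), size=len(particles), p=weights)
           return particles[idx], np.full(len(particles), 1.0 / len(particles))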
       
  • Granularity-driven sequential three-way decisions: a cost-sensitive
           approach to classification
    • Abstract: Publication date: Available online 8 June 2019. Source: Information Sciences. Author(s): Yu Fang, Cong Gao, Yiyu Yao. Sequential three-way decision (S3WD) is a multi-stage decision-making process that emulates human cognition. S3WD employs coarse-to-fine information granularity and uses various types of costs to determine the thresholds for making appropriate decisions. To date, however, few existing models discuss the construction of granules or consider costs in the decision process. This paper develops a granularity-driven sequential three-way decision model to address these two issues and to handle dual-constraint satisfaction problems that consider both decision-process and decision-result costs. A key feature of the model is the incorporation of information granularity into the decision-making process. The model also incorporates a cost structure that accounts for the costs of both the decision process and the decision result. Using this model, we design two algorithms that minimize the cost of the decision process or the cost of the decision result. Our experimental results validate the effectiveness of the algorithms and the viability of the new model.
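     The three-way decision rule at the heart of S3WD takes a simple form; a sketch assuming cost-derived thresholds alpha > beta are given:

       def three_way_decide(p, alpha, beta):
           """Accept when the conditional probability is high, reject when low,
           defer otherwise; in the sequential model a deferred case is
           re-examined at the next, finer level of granularity."""
           if p >= alpha:
               return "accept"
           if p <= beta:
               return "reject"
           return "defer"   # pass to the next granularity level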
       
  • A Novel Privacy Preserving Method for Data Publication
    • Abstract: Publication date: Available online 8 June 2019. Source: Information Sciences. Author(s): Chaobin Liu, Shixi Chen, Shuigeng Zhou, Jihong Guan, Yao Ma. Privacy has become an increasing concern in the publication of datasets that contain sensitive information. Preventing privacy disclosure and providing useful information to legitimate users for data mining are conflicting goals. Generalization and randomized response methods were proposed in the database community to tackle this problem. However, both postulate the same prior belief for all transactions, which may be an incorrect modeling assumption and lead to privacy breaches. Besides, generalization and randomized response methods usually require a privacy-controlling parameter to manage the tradeoff between privacy and data quality, which may put data publishers in a dilemma. In this paper, a novel privacy-preserving method for data publication is proposed based on conditional probability distributions and machine learning techniques, which can achieve different prior beliefs for different transactions. A basic cross sampling algorithm and a complete cross sampling algorithm are designed for the settings of a single sensitive attribute and multiple sensitive attributes, respectively, and an improved complete algorithm based on Gibbs sampling is developed to enhance data utility when data are insufficient. Our method offers a stronger privacy guarantee while, as shown in extensive experiments, retaining better data utility.
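     For context, classic binary randomized response, the baseline whose uniform prior belief is criticised above (a minimal sketch):

       import random

       def randomized_response(value, p=0.7):
           """Report the true binary value with probability p, else its opposite;
           every respondent gets the same prior belief p."""
           return value if random.random() < p else 1 - value

       def estimate_true_rate(reports, p=0.7):
           """Unbias the observed rate: E[observed] = p*t + (1-p)*(1-t)."""
           observed = sum(reports) / len(reports)
           return (observed - (1 - p)) / (2 * p - 1)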
       
  • Locally Differentially Private Item-based Collaborative Filtering
    • Abstract: Publication date: Available online 8 June 2019. Source: Information Sciences. Author(s): Taolin Guo, Junzhou Luo, Kai Dong, Ming Yang. Recently, item-based collaborative filtering has attracted much attention. It recommends to users new items that may be of interest to them, based on their reported historical data (i.e., the items they have already been interested in). The reported historical data lead to significant privacy risks when the recommendation service is not fully trusted. Much research has focused on developing differential privacy mechanisms to protect personal data in various recommendation settings. However, most of these mechanisms cannot ensure the accuracy of the recommendations, mainly because they compute similarity directly from the perturbed data; the computed similarity is thus inaccurate, and this inaccuracy propagates to the recommendation results. In this paper, we propose a locally differentially private item-based collaborative filtering framework that protects users' private historical data on the user side and reconstructs the similarity on the server side to ensure recommendation accuracy. The similarities are reconstructed for every pair of items by estimating the number of users who have rated neither, either one, or both of them, and the final recommendation is generated from the reconstructed similarities. Experimental results show that our proposed method significantly outperforms state-of-the-art methods in terms of recommendation accuracy and the trade-off between privacy and accuracy.
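     A sketch of the kind of server-side count reconstruction described above, assuming each item bit is perturbed independently with keep-probability p (a simplification for illustration; the paper's exact mechanism differs):

       import numpy as np

       def estimate_joint_counts(reports_i, reports_j, p=0.8):
           """Estimate how many users rated neither, either one, or both of
           items i and j from independently flipped 0/1 reports (one per user)."""
           n = len(reports_i)
           # Observed joint frequencies over (i, j) in {00, 01, 10, 11}.
           obs = np.zeros(4)
           for a, b in zip(reports_i, reports_j):
               obs[2 * a + b] += 1
           obs /= n
           q = {1: p, 0: 1 - p}
           # M[reported, true] = Pr(report | true) under independent bit flips.
           M = np.zeros((4, 4))
           for r in range(4):
               ra, rb = r >> 1, r & 1
               for t in range(4):
                   ta, tb = t >> 1, t & 1
                   M[r, t] = (q[1] if ra == ta else q[0]) * (q[1] if rb == tb else q[0])
           est = np.linalg.solve(M, obs) * n   # unbiased estimate of true joint counts
           return {"neither": est[0], "only_j": est[1], "only_i": est[2], "both": est[3]}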
       
  • Cost-sensitive active learning through statistical methods
    • Abstract: Publication date: Available online 8 June 2019. Source: Information Sciences. Author(s): Min Wang, Yao Lin, Fan Min, Dun Liu. Clustering-based active learning splits the data into a number of blocks and queries the labels of the most representative instances. When the costs of labeling and misclassification are considered, a key issue arises: how many labels should be queried for a given block? In this paper, we present theoretical and practical statistical methods to handle this issue. The theoretical statistical method calculates the optimal number of query labels for a predefined label distribution; considering label distributions for different clustering qualities, we obtain three hypothetical models, namely the Gaussian, Uniform, and V models. The practical statistical method calculates the empirical label distribution of the cluster blocks; considering four popular clustering algorithms, we use symmetry and curve fitting techniques on 30 datasets to obtain empirical distributions. Inspired by three-way decision, we design an algorithm called cost-sensitive active learning through statistical methods (CATS). Experiments were performed on 12 binary-class datasets for both the distribution evaluation and the learning task. The results of significance tests verify the effectiveness of CATS and its superior performance with respect to state-of-the-art cost-sensitive active learning algorithms.
       
  • Synthetic-Analytic Behavior-based Control Framework: Constraining Velocity
           in Tracking for Nonholonomic Wheeled Mobile Robots
    • Abstract: Publication date: Available online 8 June 2019. Source: Information Sciences. Author(s): Marlen Meza-Sánchez, Eddie Clemente, M.C. Rodríguez-Liñán, Gustavo Olague. This work presents a genetic programming control design methodology that extends the traditional behavior-based control strategy towards a synthetic-analytic perspective. The proposed approach considers the internal and external dynamics of the system, providing solutions with a general structure and including analytic functions that can be studied within the Control Theory framework. The method is illustrated on the tracking control problem, under bounded velocity restrictions, of a nonholonomic wheeled mobile robot. A classic Control Theory (CT) based controller that solves the tracking problem (but not the velocity constraint requirement) is chosen from the literature; based on its stability properties, a modified structure is introduced in which suitable analytic basis behaviors fulfilling both control objectives simultaneously can be sought. The proposed framework takes the form of a learning process based on Genetic Programming (GP) that generates a set of nonlinear tracking controllers satisfying pre-specified velocity bounds. A collection of 9113 suitable nonlinear solutions was obtained to augment the ground controller. Simulations and real-time experiments illustrate the effectiveness of the methodology by testing the models with the best performance, as well as those with lower structural complexity.
       
  • Heterogeneous linear multi-agent consensus with nonconvex input
           constraints and switching graphs
    • Abstract: Publication date: Available online 8 June 2019. Source: Information Sciences. Author(s): Xue Li, Cancan Li, Yikang Yang. This study focuses on a constrained consensus problem for heterogeneous linear multi-agent systems with nonconvex input constraints and switching graphs. A distributed control algorithm with a time-varying gain is presented to ensure that the control input of each agent stays in a nonconvex set and that all agents reach a common point. Using the method of nonnegative matrices, we obtain sufficient conditions for the consensus of the system. It is shown that all agents can reach a consensus as long as the union of the communication graphs contains a spanning tree. Finally, simulations are provided to show the effectiveness of the proposed algorithm.
       
  • A Topic Enhanced Approach to Detecting Multiple Standpoints in Web Texts
    • Abstract: Publication date: Available online 7 June 2019. Source: Information Sciences. Author(s): Junjie Lin, Qingchao Kong, Wenji Mao, Lei Wang. The Internet has become the most popular platform for people to exchange opinions and express stances. The stances implied in web texts indicate people's fundamental beliefs and viewpoints. Understanding the stances people take is beneficial and critical for many security- and business-related applications, such as policy design, emergency response, and marketing management. Most previous work on stance detection focuses on identifying supportive or unsupportive attitudes towards a specific target. However, another important type of stance detection, multiple standpoint detection, has been largely ignored. Multiple standpoint detection aims to identify the distinct standpoints people hold among multiple parties, which reflect people's intrinsic values and judgments. When expressing standpoints, people tend to discuss diverse topics, and word usage in the topics of different standpoints often varies considerably. As topics can provide latent information for identifying various standpoints, in this paper we propose a topic-based approach to detecting multiple standpoints in Web texts, enhancing both the generative classification model and the feature representation of texts. In addition, we develop an adaptive process to determine parameter values in our approach automatically. Experimental studies on several real-world datasets verify the effectiveness of our proposed approach in detecting multiple standpoints.
       
  • Quantum homomorphic encryption scheme with flexible number of evaluator
           based on (k, n)-threshold quantum state sharing
    • Abstract: Publication date: Available online 6 June 2019. Source: Information Sciences. Author(s): Xiu-Bo Chen, Yi-Ru Sun, Gang Xu, Yi-Xian Yang. Quantum homomorphic encryption provides the ability to perform calculations on encrypted data without decrypting them. The number of evaluators in most previous schemes is 1, and in the remaining schemes it is n. In this paper, we propose a novel quantum homomorphic encryption scheme with a flexible number of evaluators. One of our main contributions is that the number of evaluators is d (k≤d≤n), with n≥2; these evaluators are allowed to alternately complete the evaluation of single-qubit unitary operations on the shared encrypted sequence. In addition, for n=1, we give a quantum homomorphic encryption scheme in which a single evaluator can complete the evaluation. Then, for the (3, 5)-case and the (2, 2)-case, we give two examples (example-I and example-II) to illustrate the flexibility of the evaluators. Finally, analysis of the private key, plaintext sequence, and encrypted sequence shows that the scheme is secure.
       
  • An ideal point based many-objective optimization for community detection
           of complex networks
    • Abstract: Publication date: Available online 6 June 2019Source: Information SciencesAuthor(s): Sahar Tahmasebi, Parham Moradi, Siamak Ghodsi, Alireza Abdollahpouri Community detection is one of the major topics in the study of complex networks, which aims to uncover their structural properties. Recently, many evolutionary methods have been successfully employed to identify communities of complex networks. Community detection has so far been treated as a single- or multi-objective problem in evolutionary-based approaches. Since each objective covers a specific aspect of the network's properties, investigating the problem with more than two objectives could result in the identification of better community structures. In this paper, we propose a method referred to as MaOCD that formulates community detection as a many-objective task. MaOCD uses an ideal-point based strategy to guide the population towards an optimal community structure. The main purpose is to take advantage of optimizing several objectives simultaneously while using a representation that reduces the search space. This enhances the convergence of the method and automatically determines the number of modules. We introduce a novel metric called IGDC that makes multi/many-objective community detection methods comparable with respect to multiple objectives. Several experiments were carried out on synthetic and real-world datasets to show the performance of our method. The results demonstrate that MaOCD successfully detects communities in the network structure compared with some state-of-the-art single- and multi-objective methods.
       
  • Diagnostic Assessment of Deep Learning Algorithms for Diabetic Retinopathy
           Screening
    • Abstract: Publication date: Available online 5 June 2019Source: Information SciencesAuthor(s): Tao Li, Yingqi Gao, Kai Wang, Song Guo, Hanruo Liu, Hong Kang Diabetic retinopathy (DR), the leading cause of blindness for working-age adults, is generally addressed by early screening to reduce vision loss. A series of automated deep-learning-based algorithms for DR screening have been proposed and have achieved high sensitivity and specificity (>90%). However, these deep learning models do not perform well in clinical applications due to the limitations of the existing publicly available fundus image datasets. In order to evaluate these methods in clinical situations, we collected 13,673 fundus images from 9,598 patients. These images were divided into six classes by seven graders according to image quality and DR level. Moreover, 757 images with DR were selected for annotation of four types of DR-related lesions. Finally, we evaluated state-of-the-art deep learning algorithms on the collected images, including image classification, semantic segmentation and object detection. Although we obtain an accuracy of 0.8284 for DR classification, these algorithms perform poorly on lesion segmentation and detection, indicating that these tasks remain quite challenging. In summary, we provide a new dataset named DDR for assessing deep learning models and further exploring clinical applications, particularly for lesion recognition.
       
  • DEPLEST: A Blockchain-based Privacy-preserving Distributed Database toward
           User Behaviors in Social Networks
    • Abstract: Publication date: Available online 4 June 2019Source: Information SciencesAuthor(s): Yun Chen, Hui Xie, Kun Lv, Shengjun Wei, Changzhen Hu Social networks record a significant amount of user behavior data every day. Companies or attackers analyze this behavior data for marketing or more questionable purposes. We propose a blockchain-based model to protect the privacy of users' data in such big data environments. Traditional blockchain methods require too many resources for this task, so we propose a model that secures sensitive user information in a distributed blockchain and passes nonsensitive information through to the primary system in order to manage the blockchain size. Our DEPLEST algorithm performs these synchronization operations to keep local database storage and computational capacity within the limits of individual users' devices. We also propose a consensus protocol for blockchain ledger maintenance that runs well on typical client systems and prove that this protocol has excellent Byzantine fault tolerance (BFT). Our experimental results show that DEPLEST meets the architectural and performance requirements and that our consensus protocol outperforms the existing proof of work (PoW) and proof of stake (PoS) methods in this application.
       
  • A meta-learning recommender system for hyperparameter tuning: predicting
           when tuning improves SVM classifiers
    • Abstract: Publication date: Available online 4 June 2019Source: Information SciencesAuthor(s): Rafael G. Mantovani, André L.D. Rossi, Edesio Alcobaça, Joaquin Vanschoren, André C.P.L.F. de Carvalho For many machine learning algorithms, predictive performance is critically affected by the hyperparameter values used to train them. However, tuning these hyperparameters can come at a high computational cost, especially on larger datasets, while the tuned settings do not always significantly outperform the default values. This paper proposes a recommender system based on meta-learning to identify exactly when it is better to use default values and when to tune hyperparameters for each new dataset. In addition, an in-depth analysis is performed to understand what the meta-learners take into account in their decisions, providing useful insights. An extensive analysis of different categories of meta-features, meta-learners, and setups across 156 datasets is performed. Results show that it is possible to accurately predict when tuning will significantly improve the performance of the induced models. The proposed system reduces the time spent on optimization processes without reducing the predictive performance of the induced models (compared with the models obtained using tuned hyperparameters). We also explain the decision-making process of the meta-learners in terms of linear separability-based hypotheses. Although this analysis focuses on the tuning of Support Vector Machines, it can also be applied to other algorithms, as shown in experiments performed with decision trees.
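    Viewed abstractly, such a recommender reduces to a binary meta-classification problem over dataset meta-features ("tuning helps" vs. "defaults suffice"). The sketch below illustrates only that framing; the meta-features, labels and random-forest meta-learner are invented placeholders, not the paper's setup:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Each row: meta-features of one dataset (e.g., n_instances, n_features,
        # class entropy); the values here are synthetic placeholders.
        meta_X = np.array([[1000, 20, 0.9],
                           [150, 5, 0.4],
                           [50000, 100, 0.7],
                           [300, 8, 0.5]])
        # 1 = tuning significantly beat default SVM hyperparameters, 0 = it did not.
        meta_y = np.array([1, 0, 1, 0])

        meta_learner = RandomForestClassifier(random_state=0).fit(meta_X, meta_y)

        new_dataset = np.array([[2000, 30, 0.8]])
        print("recommend tuning" if meta_learner.predict(new_dataset)[0] else "use defaults")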
       
  • Event-Triggered Adaptive Dynamic Programming for Discrete-Time
           Multi-Player Games
    • Abstract: Publication date: Available online 4 June 2019Source: Information SciencesAuthor(s): Ziyang Wang, Qinglai Wei, Derong Liu For multi-player games, the event-triggered adaptive dynamic programming (ADP) method with a single triggering condition cannot be used. In this paper, an event-triggered ADP method with multiple triggering conditions is developed for multi-player non-zero-sum (NZS) games. Triggering conditions are designed for each player, and the control inputs are updated only when the relevant conditions are satisfied. Moreover, the developed method is implemented with a single-network structure, so the computational burden can be reduced effectively. Additionally, the stability of event-triggered multi-player systems is analyzed. Finally, two examples are employed to show the effectiveness of the developed method.
       
  • Arc Detection and Recognition in Pantograph-Catenary System based on
           Convolutional Neural Network
    • Abstract: Publication date: Available online 4 June 2019Source: Information SciencesAuthor(s): Shize Huang, Yachan Zhai, Miaomiao Zhang, Xiaoxue Hou The pantograph-catenary system is crucial to the transmission of electrical power from catenary lines to electrified trains. The occurrence of arcs can damage or interrupt railway operations. We propose a CNN-based model to detect arcs and recognize their magnitudes. First, we decompose pantograph videos recorded by a camera fixed on a Chinese high-speed train into consecutive frames, then grayscale and segment those images to obtain an arc image set. We divide the image set into training samples and test samples, and the training samples are further divided into three classes labeled 0, 1 and 2, which are used to train the CNN model. The trained CNN reaches an accuracy of 0.95, and the loss function converges to 0.083. Second, we use the trained network to detect arcs in the images of the test samples and convert the arc detection results into a time series of arc scores, so that the occurrence of arcs and their magnitudes can be determined. Finally, we conduct experiments to compare our approach with other models. The results demonstrate the approach's high accuracy, robustness and speed when dealing with images taken in unstable surroundings. It could be applied to other EMU models or environments with adjusted parameters.
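    The paper's network architecture is not given in the abstract; the following minimal PyTorch sketch shows only the shape of a three-class (labels 0, 1, 2) CNN over grayscale frames, with invented layer sizes and an assumed 64x64 input resolution:

        import torch
        import torch.nn as nn

        class ArcNet(nn.Module):
            """Minimal 3-class CNN over grayscale frames; layer sizes are illustrative."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
                self.classifier = nn.Linear(32 * 16 * 16, 3)  # 64x64 input -> 16x16 maps

            def forward(self, x):                  # x: (batch, 1, 64, 64)
                h = self.features(x)
                return self.classifier(h.flatten(1))

        model = ArcNet()
        logits = model(torch.randn(4, 1, 64, 64))  # four dummy frames
        print(logits.shape)                        # torch.Size([4, 3])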
       
  • Optimal performance of LTI systems over power constrained erasure channels
    • Abstract: Publication date: Available online 4 June 2019Source: Information SciencesAuthor(s): Xiao-Wei Jiang, Xiang-Yong Chen, Ming Chi, Ming-Feng Ge In this paper, the optimal performance of multiple-input multiple-output (MIMO) linear time-invariant (LTI) plants is investigated. The communication channel is modeled as a power constrained channel with packet dropouts. The covariance of the error signal between the reference input and the system's output is chosen as the performance index. Based on the frequency domain analysis method, exact expressions for the tracking performance limitation are derived. The results reveal that the best performance of networked control systems (NCSs) is strongly connected not only with the nonminimum phase zeros and unstable poles of the plant, but also with the essential features of the reference input signal and the communication parameters. Finally, a simulation example is discussed to validate the conclusions.
       
  • A consistency and consensus-based method for group decision making with
           hesitant fuzzy linguistic preference relations
    • Abstract: Publication date: Available online 4 June 2019Source: Information SciencesAuthor(s): Zhiming Zhang, Shyi-Ming Chen Hesitant fuzzy linguistic preference relations (HFLPRs) can well denote decision makers' (DMs') hesitant qualitative comparison judgments. Considering that consistency and consensus are two key issues in group decision making (GDM), this paper discusses GDM with HFLPRs within the context of multiplicative consistency and consensus. First, a multiplicative consistency concept for linguistic preference relations (LPRs) is presented, and a method for obtaining the priority weight vectors of LPRs is offered. Then, a method of deriving the priority weight vector of an HFLPR is proposed. Furthermore, the consistency index of an HFLPR is defined, and an acceptable multiplicative consistency definition for HFLPRs is offered. Subsequently, for an HFLPR with unacceptable consistency, an algorithm to improve its consistency level is developed. With respect to GDM with HFLPRs, a method to calculate the DMs' weights is provided. Then, individual HFLPRs are integrated into a collective HFLPR. After that, a consensus index is defined, and an algorithm to improve the consensus is proposed that can guarantee the multiplicative consistency of individual HFLPRs. Moreover, an automatic algorithm for GDM with HFLPRs is developed. Lastly, we apply the new methods to two practical problems to verify their operability and applicability.
       
  • Constrained three-way approximations of fuzzy sets: From the perspective
           of minimal distance
    • Abstract: Publication date: Available online 3 June 2019Source: Information SciencesAuthor(s): Jie Zhou, Duoqian Miao, Can Gao, Zhihui Lai, Xiaodong Yue Three-way approximations of fuzzy sets aim at abstracting fuzzy sets into three pair-wise disjoint categories, which facilitates semantic-oriented interpretations and reduces the computing burden. Shadowed sets are a schema of three-way approximations of fuzzy sets formed through a specific optimization mechanism. Among the different principles guiding the construction of shadowed sets, the criterion of minimum distance offers a new insight within the framework of three-way decision theory. In this paper, the essential mathematical properties of the objective function used as a criterion to construct three-way approximations of fuzzy sets based on the principle of minimal distance, as well as the characteristics of the optimal solutions, are analyzed. It is demonstrated that this optimization objective function is continuous but nonconvex with respect to the optimized variables. The nonconvexity makes the problem difficult to solve, and different approximate region partitions are obtainable even under the same optimization model. Therefore, further criteria are required to select the final partition thresholds and make the construction process well-defined. To address this limitation, the notion of constrained three-way approximations of fuzzy sets is proposed from the perspective of minimal distance. Moreover, a constructive algorithm is provided to obtain the proposed constrained three-way approximations rather than using a direct enumeration method, and its performance is illustrated on some typical fuzzy sets along with data from the UCI repository.
       
  • An uncertain Z-number multicriteria group decision-making method with
           cloud models
    • Abstract: Publication date: Available online 3 June 2019Source: Information SciencesAuthor(s): Hong-gang Peng, Hong-yu Zhang, Jian-qiang Wang, Lin Li This study focuses on developing an innovative multicriteria group decision-making (MCGDM) method to handle practical problems based on information description, information fusion and information measure. Given that various kinds of uncertain data arise in the process of managing problems, uncertain Z-numbers are introduced to represent evaluation information and measure information reliability simultaneously. Processing uncertain information requires the effective support of reliable tools, and cloud models can be used to address uncertain Z-number information. First, uncertain Z-numbers are converted, and a novel concept of Z-trapezium-normal clouds (ZTNCs) is introduced. Next, the operations, power aggregation operators, and distance and likelihood measures of ZTNCs are defined. Subsequently, the likelihood power aggregation operator and the priority vector of the likelihood matrix are proposed. Moreover, a new MCGDM method is developed by combining the two kinds of proposed power aggregation operators with the likelihood measure. Finally, an illustrative example of new energy evaluation is provided to demonstrate the developed method, and its feasibility and validity are further verified by a comparative analysis with other existing methods.
       
  • A New Resource Allocation Strategy Based on the Relationship Between
           Subproblems for MOEA/D
    • Abstract: Publication date: Available online 3 June 2019Source: Information SciencesAuthor(s): Peng Wang, Wen Zhu, Haihua Liu, Bo Liao, Lijun Cai, Xiaohui Wei, Siqi Ren, Jialiang Yang The multi-objective evolutionary algorithm based on decomposition (MOEA/D) decomposes a multi-objective optimization problem (MOP) into a set of simple scalar objective optimization sub-problems and solves them in a collaborative way. Since the sub-problems differ in optimization difficulty and computational resource demands, it is critical to allocate computational resources among them reasonably, which can optimize resource usage and improve the performance of an algorithm. This paper proposes a new resource allocation strategy for MOEA/D based on the relationship between sub-problems. A probability vector is maintained based on this relationship and is used to guide the selection of sub-problems for optimization. In the optimization process, we explore the role of prioritizing the optimization of boundary sub-problems and use it to assist the update of the probability vector in the early optimization phase. A steady-state algorithm is designed and tested experimentally. The results suggest that the designed algorithm has some advantages over existing state-of-the-art algorithms.
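    Selection guided by a probability vector amounts to roulette-wheel sampling over subproblems. The sketch below assumes a placeholder "utility" signal standing in for the inter-subproblem relationship, which the abstract does not spell out:

        import numpy as np

        rng = np.random.default_rng(0)
        n_sub = 10

        # Probability vector over subproblems, initialised uniformly and then
        # nudged by a made-up utility signal (placeholder improvement estimates).
        prob = np.full(n_sub, 1.0 / n_sub)
        utility = rng.random(n_sub)
        prob = 0.5 * prob + 0.5 * utility / utility.sum()

        # Roulette-wheel selection of the next subproblem to receive resources.
        chosen = rng.choice(n_sub, p=prob)
        print("optimise subproblem", chosen)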
       
  • Geometric SMOTE: A geometrically enhanced drop-in replacement for SMOTE
    • Abstract: Publication date: Available online 3 June 2019Source: Information SciencesAuthor(s): Georgios Douzas, Fernando Bacao Classification of imbalanced datasets is a challenging task for standard algorithms. Although many methods exist to address this problem in different ways, generating artificial data for the minority class is a more general approach than algorithmic modifications. The SMOTE algorithm, as well as any other oversampling method based on the SMOTE mechanism, generates synthetic samples along line segments that join minority class instances. In this paper we propose Geometric SMOTE (G-SMOTE) as an enhancement of the SMOTE data generation mechanism. G-SMOTE generates synthetic samples in a geometric region of the input space around each selected minority instance. While in the basic configuration this region is a hyper-sphere, G-SMOTE allows its deformation to a hyper-spheroid. The performance of G-SMOTE is compared against SMOTE as well as baseline methods. We present empirical results that show a significant improvement in the quality of the generated data when G-SMOTE is used as an oversampling algorithm. An implementation of G-SMOTE is made available in the Python programming language.
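    The basic hyper-sphere configuration can be sketched directly: draw a synthetic point uniformly from a ball centred on a minority instance, with the radius set, for example, by a neighbouring minority instance. The hyper-spheroid deformation of the full method is omitted here:

        import numpy as np

        rng = np.random.default_rng(0)

        def g_smote_sample(center, radius):
            """Draw one synthetic point uniformly from the hypersphere of the
            given radius around a minority instance (basic G-SMOTE region;
            the deformation step is not reproduced)."""
            d = center.shape[0]
            direction = rng.normal(size=d)
            direction /= np.linalg.norm(direction)
            # u**(1/d) makes the sample uniform over the ball's volume.
            r = radius * rng.random() ** (1.0 / d)
            return center + r * direction

        minority_point = np.array([1.0, 2.0])
        neighbour = np.array([1.5, 2.5])
        radius = np.linalg.norm(neighbour - minority_point)  # region reaches a neighbour
        print(g_smote_sample(minority_point, radius))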
       
  • Pointwise directional increasingness and geometric interpretation of
           directionally monotone functions
    • Abstract: Publication date: Available online 3 June 2019Source: Information SciencesAuthor(s): Mikel Sesma-Sara, Laura De Miguel, Antonio Francisco Roldán López de Hierro, Julio Lafuente, Radko Mesiar, Humberto Bustince The relaxation of monotonicity requirements is a trend in the theory of aggregation functions. In the recent literature, we can find several relaxed forms of monotonicity, such as weak, directional, cone, ordered directional and strengthened directional monotonicity. All these forms of monotonicity are global properties in the sense that they are imposed for all the points in the domain of a function. In this work, we introduce a local notion of monotonicity called pointwise directional monotonicity, or directional monotonicity at a point. Based on this concept, we characterize all the previously defined notions of monotonicity and, in the final part of the paper, we present some geometric aspects of the global weaker forms of monotonicity, stressing their relations and singularities.
       
  • Minimals Plus: An improved algorithm for the random generation of linear
           extensions of partially ordered sets
    • Abstract: Publication date: Available online 3 June 2019Source: Information SciencesAuthor(s): Elías F. Combarro, Julen Hurtado de Saracho, Irene Díaz In this work, Minimals Plus, an algorithm for the random generation of linear extensions of a poset, is introduced. It improves a previously existing heuristic algorithm, Minimals, and its recent modification, Bottom-Up. Minimals Plus shares all the strengths of Bottom-Up and none of its weaknesses: it can be applied to any poset, has a fast initialization step (while Bottom-Up may require exponential time), and is exact at least when Bottom-Up is exact. In addition to mathematically proving these properties, we also conduct experiments on almost two hundred thousand different posets (including all non-isomorphic posets of up to nine elements and some posets directly related to fuzzy measures) to check the behavior of Minimals Plus. The results also show evidence that Minimals Plus can help traditional Markov-based methods mix faster and that it can also be used to estimate the number of linear extensions of a poset.
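    For orientation, the underlying Minimals heuristic is simple: repeatedly remove a randomly chosen minimal element of the remaining poset. Minimals Plus refines how that element is chosen; the sketch below shows only the basic mechanism:

        import random

        def random_linear_extension(elements, less_than):
            """Minimals-style heuristic: repeatedly pick a random minimal element.
            (Minimals Plus refines the selection probabilities; this sketch only
            shows the basic mechanism.)"""
            remaining = set(elements)
            extension = []
            while remaining:
                minimals = [x for x in remaining
                            if not any(less_than(y, x) for y in remaining if y != x)]
                pick = random.choice(minimals)
                remaining.remove(pick)
                extension.append(pick)
            return extension

        # Example poset: divisibility on {1, 2, 3, 6}.
        print(random_linear_extension({1, 2, 3, 6}, lambda a, b: b % a == 0 and a != b))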
       
  • Optimal autonomous architecture for uncertain processes management
    • Abstract: Publication date: Available online 3 June 2019Source: Information SciencesAuthor(s): Shideh Saraeian, Babak Shirazi, Homayun Motameni A key capability of an uncertain Business Process Management System (BPMS) is the management of Business Processes (BPs) in the presence of uncertain factors. This ability should be provided by uncertainty-aware computer-based components inside classic BPMS operations. This study proposes an autonomous, combinatorial, optimal process management architecture to increase the ability, flexibility and accuracy of uncertain process management. The autonomous architecture, based on a bi-level optimization approach, is constructed within a meta-model that combines multi-agent system technology, an optimal neural network and cellular learning automata in different agents. A case study of an uncertain business process involving a closed-loop supply chain was studied. The results of the simulated case and its statistical evaluation demonstrate the robustness and accuracy of the proposed architecture.
       
  • Multi-Objective Evolutionary Algorithm with Prediction in the Objective
           Space
    • Abstract: Publication date: Available online 1 June 2019Source: Information SciencesAuthor(s): Elaine Guerrero-Peña, Aluízio Fausto Ribeiro Araújo Multi-objective optimization problem resolution using Evolutionary Algorithms (EAs) has not yet been completely addressed. Issues such as population diversity loss and EA sensitivity to the shape of the Pareto front affect algorithm performance. Various EAs include knowledge acquisition in the evolutionary process to deal with such problems. Several issues are crucial for the addition of knowledge using a probabilistic model; among these, we mention the time at which the algorithm should update the model and which information is suitable for constructing the model. To handle these issues, we propose the Non-dominated Sorting Differential Evolution improvement with Prediction in the Objective Space (OSP-NSDE). When several conditions based on the Approximated Hypervolume metric are met, the OSP-NSDE triggers the Objective Space Prediction (OSP) strategy. The OSP identifies trends in the movements of non-dominated individuals in the objective space, then rapidly determines promising regions in the search space and generates a new population considering such regions. Regular variation operators are used to produce the offspring whenever the OSP condition is not satisfied. The effectiveness of OSP-NSDE was verified using 31 well-known functions and three real-world problems, and compared with EA-based algorithms and others based on collective intelligence.
       
  • Finite-time containment control for nonlinear multi-agent systems with
           external disturbances
    • Abstract: Publication date: Available online 31 May 2019Source: Information SciencesAuthor(s): Hui Lü, Wangli He, Qing-Long Han, Xiaohua Ge, Chen Peng This paper is concerned with the finite-time containment control for a second-order nonlinear multi-agent system in the presence of external disturbances. First, two finite-time containment control protocols are skillfully developed, of which one is based on a terminal sliding mode and the other is based on a non-singular terminal sliding mode. Second, criteria for designing desired containment control protocols are derived such that the containment performance of the resulting closed-loop leader-following multi-agent system can be guaranteed within a finite time horizon. It is shown that the settling time of the closed-loop system convergence can be estimated under the proposed protocols. Furthermore, finite-time containment control in the scenario of general switching and directed topology is also addressed and the corresponding result is derived. Finally, three illustrative examples are given to verify the effectiveness of the proposed finite-time containment control method.
       
  • Moderate deviation and restricted equivalence functions for measuring
           similarity between data
    • Abstract: Publication date: Available online 30 May 2019Source: Information SciencesAuthor(s): A.H. Altalhi, J.I. Forcén, M. Pagola, E. Barrenechea, H. Bustince, Zdenko Takáč In this work, we study the relation between moderate deviation functions, restricted dissimilarity functions and restricted equivalence functions. We use moderate deviation functions to measure the similarity or dissimilarity between a given set of data. We show an application of moderate deviation functions, used in the same way as penalty functions, to make a final decision from a score matrix in a classification problem.
       
  • A Bayesian Possibilistic C-Means Clustering Approach for Cervical Cancer
           Screening
    • Abstract: Publication date: Available online 30 May 2019Source: Information SciencesAuthor(s): Fang-Qi Li, Shi-Lin Wang, Gong-Shen Liu Recently, a lot of attention has been given to the treatment of cervical cancer due to its high lethality and morbidity. Early screening for this disease is of vital importance. In this paper, we propose an automatic cervical cancer screening algorithm that analyzes the related risk factors to provide preliminary diagnostic information for medical practitioners. In cervical cancer screening, a number of risk factors are considered highly private or sensitive, and many patients elect not to provide the corresponding information. Such a severe proportion of missing attributes creates great difficulties for many automatic screening algorithms. To solve this problem, a Bayesian possibilistic C-means (BPCM for short) clustering algorithm is proposed to discover representative patterns from the complete data and to estimate the missing values of a specific sample using its closest representative pattern. After the data completion step, a two-stage fuzzy ensemble learning scheme is proposed to derive the final screening result. In the first stage, the bootstrap aggregation (bagging for short) procedure is adopted to sample the entire class-imbalanced dataset into a number of class-balanced subsets. In the second stage, a number of weak classifiers are trained on each subset, and a fuzzy logic based approach is designed to analyze the classification results of the weak classifiers and obtain the final classification result. Experiments have been conducted on a dataset containing 858 patients. From the experimental results, it can be observed that the proposed BPCM can effectively discover the underlying patterns and is reliable in estimating the missing attributes compared with traditional approaches. Moreover, by applying the proposed fuzzy ensemble learning scheme, the final classification results on the data completed by BPCM are promising (an accuracy of 76% or a positive sensitivity of 79%) under the severe missing-attribute scenario (only 6% of samples have complete data).
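    The first-stage bagging step can be sketched as class-balanced bootstrap subsets, each training one weak classifier. The fuzzy-logic combination of the second stage is replaced here by a plain majority vote, and the data are synthetic:

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)

        def balanced_bagging(X, y, n_models=5):
            """Train weak classifiers on class-balanced bootstrap subsets."""
            pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
            k = min(len(pos), len(neg))
            models = []
            for _ in range(n_models):
                idx = np.concatenate([rng.choice(pos, k), rng.choice(neg, k)])
                models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
            return models

        def vote(models, X):
            # Majority vote stands in for the paper's fuzzy combination step.
            return (np.mean([m.predict(X) for m in models], axis=0) >= 0.5).astype(int)

        X = rng.normal(size=(100, 4))
        y = (rng.random(100) < 0.2).astype(int)    # imbalanced labels
        print(vote(balanced_bagging(X, y), X[:5]))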
       
  • On Solutions and Representations Of Spiking Neural P Systems With Rules On
           Synapses
    • Abstract: Publication date: Available online 30 May 2019Source: Information SciencesAuthor(s): Francis George C. Cabarle, Ren Tristan A. de la Cruz, Dionne Peter P. Cailipan, Defu Zhang, Xiangrong Liu, Xiangxiang Zeng Spiking neural P systems, or SN P systems, are parallel and nondeterministic computing models inspired by the spike processing of neurons. A variant of SN P systems known as SN P systems with rules on synapses, or RSSN P systems, makes use of the neuroscience idea that synapses, or links between neurons, perform spike processing instead of neurons. Spike processing in synapses instead of in neurons can allow RSSN P systems to have a smaller complexity due to their richer semantics, as compared to SN P systems. In this work we are the first to provide the following: definitions of complexity classes of problems solved by RSSN P systems, depending on whether the problem has a uniform or nonuniform type of solution; both types of solutions to the NP-complete problem Subset Sum; and a matrix representation and simulation algorithm for RSSN P systems. This representation and algorithm can aid the practical use of RSSN P systems. We also provide small computer simulations based on our representation and algorithm. Our simulations show that the nonuniform and uniform solutions to Subset Sum are better suited to the sequential CPU and the parallel GPU, respectively. Lastly, we outline several directions for the investigation of RSSN P systems.
       
  • Weight Selection Strategies for Ordered Weighted Average Based Fuzzy Rough
           Sets
    • Abstract: Publication date: Available online 29 May 2019Source: Information SciencesAuthor(s): Sarah Vluymans, Neil Mac Parthaláin, Chris Cornelis, Yvan Saeys Fuzzy rough set theory models both vagueness and indiscernibility in data, which makes it a very useful tool for application to various machine learning tasks. In this paper, we focus on one of its robust generalisations, namely ordered weighted average based fuzzy rough sets. This model uses a weighted approach in the definition of the fuzzy rough operators. Although its efficacy and competitiveness with state-of-the-art machine learning approaches have been well established in several studies, its main drawback is the difficulty of choosing an appropriate weighting scheme. Several options exist, and an adequate choice can greatly enhance the suitability of the ordered weighted average based fuzzy rough operators. In this work, we develop a clear strategy for weighting scheme selection based upon the underlying characteristics of the data. The advantages of the approach are presented in a detailed experimental study. Rather than proposing a classifier, our aim is to present a strategy to select a suitable weighting scheme for ordered weighted average based fuzzy rough sets in general. Our weighting scheme selection process allows users to take full advantage of the versatility offered by this model and offers performance improvements over traditional fuzzy rough set approaches.
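    In the ordered weighted average based model, the min and max of classical fuzzy rough approximations are replaced by OWA aggregations of implication and t-norm values. The sketch below assumes the Lukasiewicz connectives and linear weight vectors; choosing those weight vectors well is precisely the problem the paper studies:

        import numpy as np

        def owa(values, weights):
            """Ordered weighted average: weights applied to values sorted descending."""
            return np.dot(np.sort(values)[::-1], weights)

        def owa_fuzzy_rough(R, A, w_low, w_upp):
            """OWA-based lower/upper approximation memberships of fuzzy set A
            w.r.t. fuzzy relation R (Lukasiewicz implicator and t-norm)."""
            n = len(A)
            low, upp = np.empty(n), np.empty(n)
            for x in range(n):
                impl = np.minimum(1.0, 1.0 - R[x] + A)    # I_L(R(x,y), A(y))
                tnrm = np.maximum(0.0, R[x] + A - 1.0)    # T_L(R(x,y), A(y))
                low[x] = owa(impl, w_low)                 # soft replacement of min
                upp[x] = owa(tnrm, w_upp)                 # soft replacement of max
            return low, upp

        n = 4
        R = np.array([[1.0, 0.8, 0.3, 0.1],
                      [0.8, 1.0, 0.4, 0.2],
                      [0.3, 0.4, 1.0, 0.7],
                      [0.1, 0.2, 0.7, 1.0]])
        A = np.array([0.9, 0.8, 0.2, 0.1])
        lin = np.arange(1, n + 1, dtype=float)
        w_min = lin / lin.sum()       # increasing weights -> behaves like min
        w_max = w_min[::-1]           # decreasing weights -> behaves like max
        print(owa_fuzzy_rough(R, A, w_min, w_max))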
       
  • Detecting Adversarial Examples via Prediction Difference for Deep Neural
           Networks
    • Abstract: Publication date: Available online 29 May 2019Source: Information SciencesAuthor(s): Feng Guo, Qingjie Zhao, Xuan Li, Xiaohui Kuang, Jianwei Zhang, Yahong Han, Yu-an Tan Deep Neural Networks (DNNs) perform effectively in many computer vision tasks. However, DNNs are found to be vulnerable to adversarial examples, which are generated by adding imperceptible perturbations to original images. To address this problem, we propose a novel defense method, transferability prediction difference (TPD), to drastically improve the adversarial robustness of DNNs with only a small sacrifice in verified accuracy. We find that adversarial examples yield larger prediction differences across various DNN models due to their complicated decision boundaries, which can be used to identify adversarial examples via a prediction difference threshold. We adopt the K-means clustering algorithm on benign data to determine the transferability prediction difference threshold, by which we can detect adversarial examples accurately and efficiently. Furthermore, the TPD method neither modifies the target model nor requires knowledge of the adversarial attacks. We perform four state-of-the-art adversarial attacks (FGSM, BIM, JSMA and C&W) to evaluate TPD models trained on MNIST and CIFAR-10, obtaining average detection accuracies of 96.74% and 86.61%, respectively. The results show that the TPD model has a high detection ratio on advanced white-box adversarial examples while keeping a low false positive rate on benign examples.
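    The detection pipeline described in the abstract can be sketched as: measure the disagreement between two models' outputs, then derive a threshold from K-means clustering on benign data. The Dirichlet-sampled softmax outputs and the L1 disagreement measure below are stand-in assumptions:

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)

        # Stand-ins for the softmax outputs of two differently trained DNNs on
        # the same benign inputs; adversarial inputs would disagree more.
        probs_model_a = rng.dirichlet(np.ones(10), size=200)
        probs_model_b = probs_model_a + rng.normal(0, 0.02, size=(200, 10))

        # Prediction difference per input (one plausible formalisation: L1 distance).
        diff = np.abs(probs_model_a - probs_model_b).sum(axis=1).reshape(-1, 1)

        # K-means on benign differences; the boundary between the two cluster
        # centres serves as the detection threshold, as the abstract describes.
        centres = KMeans(n_clusters=2, n_init=10, random_state=0).fit(diff).cluster_centers_
        threshold = centres.mean()

        def is_adversarial(diff_value):
            return diff_value > threshold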
       
  • A Decomposition based Evolutionary Algorithm with Direction Vector
           Adaption and Selection Enhancement
    • Abstract: Publication date: Available online 29 May 2019Source: Information SciencesAuthor(s): Jiajun Zhou, Xifan Yao, Felix T.S. Chan, Liang Gao, Xuan Jing, Xinyu Li, Yingzi Lin, Yun Li To further improve the search ability of the decomposition based many/multi-objective evolutionary algorithm (MOEA/D) in tackling many-objective optimization problems (MaOPs) with complex characteristics (e.g., disconnected, degenerate, inverted, extremely convex or differently-scaled), we suggest an adaptive MOEA/D with better versatility, in which the weight vector adaptation and selection mechanisms are improved. Firstly, a new niche-guided scheme considering both the vector angle and the Euclidean distance is proposed to adapt the search directions across different evolution phases, which is expected to be more robust for handling different types of irregular Pareto fronts (PFs). Secondly, in mating selection, a coordinated selection scheme aided by a multi-criterion decision procedure is utilized to enhance the effectiveness of recombination. Finally, in environmental selection, a steady-state replacement strategy considering both the ensemble ranking of favorite subproblems with respect to solutions and the improvement region restriction of subproblems is employed to alleviate misleading selection. Comparison experiments on benchmark MaOPs with diverse characteristics have been performed, and the empirical results demonstrate the superiority of our proposal. The effects of the direction vector adaptation mechanism and other pertinent enhancements are also investigated.
       
  • Variance-Constrained H∞ State Estimation for Time-Varying Multi-Rate
           Systems with Redundant Channels: The Finite-Horizon Case
    • Abstract: Publication date: Available online 28 May 2019Source: Information SciencesAuthor(s): Licheng Wang, Zidong Wang, Guoliang Wei, Fuad E. Alsaadi This paper deals with the H∞ state estimation problem for a class of networked multi-rate time-varying systems with estimation error variance constraint. The redundant channel transmission scheme is employed to reduce the packet dropout rate and improve the quality of the data delivery. By utilizing the lifting technique, an augmented estimation error system is established with a uniform sampling rate. The objective of this paper is to design a time-varying state estimator such that, in the simultaneous presence of the asynchronous sampling, probabilistic packet dropouts as well as stochastic noises, the error dynamics of the state estimation satisfies both the prescribed H∞ performance requirement and the prescribed estimation error variance constraints. Through intensive stochastic analysis, sufficient conditions are established to ensure the existence of the desired estimator whose parameters are determined by solving a set of recursive linear matrix inequalities. A numerical example is presented to show the validity of the proposed estimation strategy.
       
  • Three-way decision making approach to conflict analysis and resolution
           using probabilistic rough set over two universes
    • Abstract: Publication date: Available online 28 May 2019Source: Information SciencesAuthor(s): Bingzhen Sun, Xiangtang Chen, Liye Zhang, Weimin Ma Conflict analysis aims to identify the intrinsic reasons for a conflict situation and to find a feasible consensus strategy. Rough set theory was used to study conflict analysis decision-making in the late 1990s. In the original Pawlak conflict analysis model, the attitude of each agent toward any issue is expressed as against, favorable or neutral. The notion of three-way decision (3WD) was initially developed as a means to interpret the decision rules induced in probabilistic rough sets. In this paper, we first present the framework of 3WD using probabilistic rough sets over two universes. With respect to the probabilistic positive, negative and boundary regions over two universes, we build the rules for making a decision of acceptance, rejection and non-commitment, respectively. Thus, there is a one-to-one correspondence between the three attitudes of every agent for any issue in a conflict situation and the three decisions in the probabilistic rough set over two universes. Based on this, we present an improved Pawlak conflict analysis model using the principle of three-way decision based on probabilistic rough sets over two universes. We construct the conflict decision-making information system under the framework of two universes. Then we define the favorable issues set and the against issues set of any agent between the agent set and the dispute set over the conflict decision-making information system. Furthermore, according to the principle of the Bayesian risk decision-making process over two universes, we calculate the threshold parameters used in the lower and upper approximations of a feasible consensus strategy over the conflict decision-making information system. Finally, we present the decision rules and an algorithm for finding a feasible consensus strategy for a conflict situation based on three-way decision-making with probabilistic approximations over two universes. Compared with the original Pawlak conflict analysis model, the proposed model not only provides a new perspective and methodology for handling conflict analysis problems but also overcomes the limitations of the original model. Lastly, we illustrate the ideas and basic principles established in this paper by analyzing a conflict decision-making scenario.
       
  • Min-max attribute-object bireducts: On unifying models of reducts in rough
           set theory
    • Abstract: Publication date: Available online 28 May 2019Source: Information SciencesAuthor(s): Xi-Ao Ma, Yiyu Yao A decision table describes a finite set of objects OB by using a finite set of condition attributes C and a finite set of decision attributes D. Pawlak defines attribute reducts by considering the entire decision table. As a generalization, we introduce the notion of min-max attribute-object bireducts of a sub-table restricted by a pair (B, X) of a subset of condition attributes B and a subset of objects X. A pair (R, Z) is a min-max attribute-object bireduct of (B, X) if and only if R is a minimal subset of B such that R and B make the same correct decisions for objects in X and Z is a maximal subset of X for which B can make the correct decisions. We propose the notion of the decidability of objects and introduce the decidable region of a set of objects as a generalization of the positive region of the set. We define and interpret a min-max attribute-object bireduct based on the decidable region. Min-max attribute-object bireducts offer a general model and existing models of attribute reducts are special cases. The results lead to a unified framework for studying four types of attribute reducts.
       
  • Building a Dynamic Searchable Encrypted Medical Database for Multi-client
    • Abstract: Publication date: Available online 24 May 2019Source: Information SciencesAuthor(s): Lei Xu, Chungen Xu, Joseph K. Liu, Cong Zuo, Peng Zhang The e-medical record is an emerging health information exchange model based on cloud computing. As cloud computing allows companies and individuals to outsource their data and computation, medical data is usually stored at a third party such as the cloud, which brings a variety of risks, such as data leakage to the untrusted cloud server and unauthorized access or modification operations. To assure the confidentiality of the data, the data owner needs to encrypt sensitive data before uploading it to the third party. Yet issues such as encrypted data search and flexible access to and control over sensitive data remain significant challenges. In this paper, we investigate a novel multi-client searchable encrypted e-medical framework that provides both confidentiality and searchability. Different from previous privacy-protecting works in secure data outsourcing, we focus on providing an encrypted data search scheme with fine-grained access control over both clients and data. Our scheme also enables secure updates of the encrypted database by leveraging a secure dynamic searchable encryption scheme. Furthermore, we implement the proposed scheme based on existing cryptography libraries and conduct several experiments on a selected dataset to evaluate its performance. The results demonstrate that our scheme strikes a balance between security and efficiency.
       
  • A Trajectory Privacy-preserving Scheme Based on a Dual-K Mechanism for
           Continuous Location-based Services
    • Abstract: Publication date: Available online 23 May 2019Source: Information SciencesAuthor(s): Shaobo Zhang, Xinjun Mao, Kim-Kwang Raymond Choo, Tao Peng, Guojun Wang Location-based services (LBSs) are increasingly provided by a broad range of devices and applications, but one associated risk is location disclosure. To address this risk, a common method is to adopt K-anonymity in a centralized architecture based on a single trusted anonymizer; however, this strategy may compromise user privacy in continuous LBSs. In this study, we propose a dual-K mechanism (DKM) to protect users' trajectory privacy for continuous LBSs. The proposed DKM method first inserts multiple anonymizers between the user and the location service provider (LSP), and K query locations are sent to different anonymizers to achieve K-anonymity. Simultaneously, we combine dynamic pseudonym and location selection mechanisms to improve user trajectory privacy. Hence, neither the LSP nor any anonymizer can obtain the user's trajectory. Security analyses demonstrate that our proposed scheme can effectively enhance user trajectory privacy protection, and the simulation results show that the DKM scheme can preserve user trajectory privacy with low overhead on a single anonymizer.
       
  • Exploiting Location-related Behaviors without the GPS Data on Smartphones
    • Abstract: Publication date: Available online 22 May 2019Source: Information SciencesAuthor(s): Fenghua Li, Xinyu Wang, Ben Niu, Hui Li, Chao Li, Lihua Chen Concerns about location privacy have received considerable attention with the development of Location-based Services (LBSs) over recent years. Most smartphone users ignore the fact that apps can infer their locations by accessing the WiFi list, even though they carefully set location-related permissions to preserve their privacy. Therefore, it is crucial for the public to understand the severe consequences of WiFi list leakage. In this paper, we develop a tracking scheme for Android, called TrackU, which validates that it is possible to obtain users' location data as well as their location-related behaviors just by monitoring the WiFi list, without any GPS data. First, it periodically scans the available Access Points (APs) nearby and queries the geo-location of the device from LBS providers. Second, a proposed drift-adjusting algorithm obtains exact locations by considering a set of factors, such as historical location information, average signal strength and changes in the WiFi list. To preserve battery life, an optimization is made to dynamically adjust the positioning interval. Based on the obtained data, we design an activity detection algorithm to precisely infer users' daily activities and identify their travel modes, i.e., hovering, walking, and vehicles. Finally, we implement TrackU and carry out a series of experiments with 39 volunteers from seven cities in China. The experimental results show that our design can detect 91.6% of activities by monitoring the WiFi list and accurately recognize 94.6% of users' travel modes.
       
  • Efficient Privacy Preservation of Big Data for Accurate Data Mining
    • Abstract: Publication date: Available online 22 May 2019Source: Information SciencesAuthor(s): M.A.P. Chamikara, P. Bertok, D. Liu, S. Camtepe, I. Khalil Computing technologies pervade physical spaces and human lives, and produce vast amounts of data that are available for analysis. However, there is a growing concern that potentially sensitive data may become public if the collected data are not appropriately sanitized before being released for investigation. Although more than a few privacy-preserving methods are available, they are not efficient or scalable, or they have problems with data utility and/or privacy. This paper addresses these issues by proposing an efficient and scalable nonreversible perturbation algorithm, PABIDOT, for the privacy preservation of big data via optimal geometric transformations. PABIDOT was tested for efficiency, scalability, resilience, and accuracy using nine datasets and five classification algorithms. Experiments show that PABIDOT excels in execution speed, scalability, attack resilience and accuracy in large-scale privacy-preserving data classification when compared with two other related privacy-preserving algorithms.
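    Perturbation by geometric transformation can be illustrated with a random rotation/reflection plus translation, which preserves pairwise distances and hence much of the data's utility for distance-based classifiers. PABIDOT's optimal, privacy-driven choice of transformation is not reproduced here:

        import numpy as np

        rng = np.random.default_rng(0)

        def geometric_perturb(X):
            """Perturb numeric records by a random rotation/reflection plus a
            translation (illustrative only; PABIDOT optimises its transformation
            against a privacy criterion)."""
            d = X.shape[1]
            # Random orthogonal matrix via QR decomposition of a Gaussian matrix.
            Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
            shift = rng.normal(size=d)
            return X @ Q + shift

        X = rng.normal(size=(5, 3))
        X_private = geometric_perturb(X)
        # Pairwise distances are preserved by the orthogonal part, which is why
        # distance-based classifiers can retain accuracy on the perturbed data.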
       
  • Targeting customers for profit: An ensemble learning framework to support
           marketing decision-making
    • Abstract: Publication date: Available online 21 May 2019Source: Information SciencesAuthor(s): Stefan Lessmann, Johannes Haupt, Kristof Coussement, Koen W. De Bock Marketing messages are most effective if they reach the right customers. Deciding which customers to contact is an important task in campaign planning. The paper focuses on empirical targeting models. We argue that common practices for developing such models do not account sufficiently for business goals. To remedy this, we propose profit-conscious ensemble selection, a modeling framework that integrates statistical learning principles and business objectives in the form of campaign profit maximization. Studying the interplay between data-driven learning methods and their business value in real-world application contexts, the paper contributes to the emerging field of profit analytics and provides original insights into how to implement profit analytics in marketing. The paper also estimates the degree to which profit-conscious modeling adds to the bottom line. The results of a comprehensive empirical study confirm the business value of the proposed ensemble learning framework in that it recommends substantially more profitable target groups than several benchmarks.
       
  • A distributionally robust credibilistic optimization method for the
           economic-environmental-energy-social sustainability problem
    • Abstract: Publication date: Available online 17 May 2019Source: Information SciencesAuthor(s): Xuejie Bai, Xiang Li, Ruru Jia, Yankui Liu The existing literature on sustainable development problems mainly focuses on multicriteria decision analyses and ignores the impact of uncertain information on optimal labor allocation decisions. It is necessary for sustainable development to implement suitable policies under uncertainty that integrate several competing aspects, including economic, environmental, energy and social criteria. Based on type-2 fuzzy theory, this paper develops a distributionally robust optimization method for sustainable development problems. In our new model, the uncertain per capita gross domestic product (GDP), per capita electricity consumption and per capita greenhouse gas (GHG) emissions are characterized by parametric interval-valued (PIV) possibility distributions and their associated uncertainty distribution sets. Under two assumptions on the underlying decision-making environment, the robust counterpart of the original distributionally robust fuzzy sustainable development model is formally established. To solve the proposed robust sustainable development model, this paper discusses the computational issues concerning the infinitely many integral objective functions and credibilistic constraints, and turns the robust counterpart model into computationally tractable equivalent deterministic submodels. Taking advantage of the structural characteristics of the equivalent submodels, a domain decomposition method is designed to find the robust optimal solution that can protect against distribution uncertainty. Finally, this paper applies the proposed optimization method to the key economic sectors of the United Arab Emirates (UAE) to provide quantitative justification in planning future labor and resource allocation.
       
  • Combining Unsupervised and Supervised Learning in Credit Card Fraud
           Detection
    • Abstract: Publication date: Available online 16 May 2019Source: Information SciencesAuthor(s): Fabrizio Carcillo, Yann-Aël Le Borgne, Olivier Caelen, Yacine Kessaci, Frédéric Oblé, Gianluca Bontempi Supervised learning techniques are widely employed in credit card fraud detection, as they make use of the assumption that fraudulent patterns can be learned from an analysis of past transactions. The task becomes challenging, however, when it has to take account of changes in customer behavior and fraudsters' ability to invent novel fraud patterns. In this context, unsupervised learning techniques can help fraud detection systems find anomalies. In this paper we present a hybrid technique that combines supervised and unsupervised techniques to improve fraud detection accuracy. Unsupervised outlier scores, computed at different levels of granularity, are compared and tested on a real, annotated credit card fraud detection dataset. Experimental results show that the combination is efficient and does indeed improve the accuracy of the detection.
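    A common way to realise such a hybrid, consistent with the abstract's description, is to feed an unsupervised outlier score to the supervised model as an extra feature. The isolation forest, random forest and synthetic data below are illustrative choices, not necessarily the authors':

        import numpy as np
        from sklearn.ensemble import IsolationForest, RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 6))               # stand-in transaction features
        y = (rng.random(500) < 0.05).astype(int)    # rare fraud labels

        # Unsupervised stage: one outlier score per transaction (the paper
        # computes such scores at several granularity levels; one is shown).
        iso = IsolationForest(random_state=0).fit(X)
        outlier_score = -iso.score_samples(X)       # higher = more anomalous

        # Supervised stage: append the score as a feature, then train a classifier.
        X_aug = np.column_stack([X, outlier_score])
        clf = RandomForestClassifier(random_state=0).fit(X_aug, y)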
       
  • HOBA: A Novel Feature Engineering Methodology for Credit Card Fraud
           Detection with a Deep Learning Architecture
    • Abstract: Publication date: Available online 16 May 2019Source: Information SciencesAuthor(s): Xinwei Zhang, Yaoci Han, Wei Xu, Qili Wang Credit card transaction fraud costs billions of dollars to card issuers every year. A well-developed fraud detection system with a state-of-the-art fraud detection model is regarded as essential to reducing fraud losses. The main contribution of our work is the development of a fraud detection system that employs a deep learning architecture together with an advanced feature engineering process based on homogeneity-oriented behavior analysis (HOBA). Based on a real-life dataset from one of the largest commercial banks in China, we conduct a comparative study to assess the effectiveness of the proposed framework. The experimental results illustrate that our proposed methodology is an effective and feasible mechanism for credit card fraud detection. From a practical perspective, our proposed method can identify relatively more fraudulent transactions than the benchmark methods under an acceptable false positive rate. The managerial implication of our work is that credit card issuers can apply the proposed methodology to efficiently identify fraudulent transactions to protect customers’ interests and reduce fraud losses and regulatory costs.
       
  • Uncertain Database Retrieval with Measure-Based Belief Function Attribute
           Values
    • Abstract: Publication date: Available online 19 April 2019Source: Information SciencesAuthor(s): Ronald R. Yager, Naif Alajlan, Yakoub Bazi We discuss how the Dempster-Shafer belief structure provides a framework for modeling an uncertain value x̃ from some domain X. We note how it involves a two-step process: the random determination of one focal element (set) guided by a probability distribution, and then the selection of x̃ from this focal element in some unspecified manner. We generalize this framework by allowing the selection of the focal element to be determined by a random experiment guided by a fuzzy measure. In either case the anticipation that x̃ lies in some subset E is interval-valued, [Bel(E), Pl(E)]. We next look at database retrieval and turn to the issue of determining whether a database entity with an uncertain attribute value satisfies a desired value. Here we model the uncertain attribute value as x̃ and the desired value as a subset E. In this case the degree of satisfaction of the query E by the entity is [Bel(E), Pl(E)]. In order to compare these interval-valued satisfactions, we use the Golden rule representative value to turn the intervals into scalars. We describe an application involving retrieval from an uncertain database.
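    The interval [Bel(E), Pl(E)] follows directly from the standard Dempster-Shafer definitions: belief sums the masses of focal elements contained in E, plausibility sums those intersecting E. A small self-contained example with made-up focal elements (the Golden rule representative value step is not reproduced):

        # Focal elements of a belief structure over X = {1,...,5}, with masses.
        focal = {frozenset({1, 2}): 0.5,
                 frozenset({2, 3, 4}): 0.3,
                 frozenset({4, 5}): 0.2}

        def bel(E):
            """Belief: total mass of focal elements contained in E."""
            return sum(m for F, m in focal.items() if F <= E)

        def pl(E):
            """Plausibility: total mass of focal elements intersecting E."""
            return sum(m for F, m in focal.items() if F & E)

        E = frozenset({1, 2, 3})
        print([bel(E), pl(E)])   # interval-valued satisfaction [0.5, 0.8]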
       
  • Predicting the Active Period of Popularity Evolution: A Case Study on
           Twitter Hashtags
    • Abstract: Publication date: Available online 17 April 2019Source: Information SciencesAuthor(s): Jianyi Huang, Yuyuan Tang, Ying Hu, Jianjiang Li, Changjun Hu The active period of popularity evolution indicates how long online content receives continuous attention from people. Although predicting popularity evolution has been explored extensively, research on predicting the active period remains open. If we know the duration of the active period ahead of time, caching systems, online advertising, etc. can run more effectively. Predicting the active period is therefore of great importance, but it is a non-trivial task because of two major challenges. First, numerous factors can influence the duration of the active period; to predict it accurately, it is difficult to decide which factors to consider and how to embed them in a DNN model. Second, the triggering time for predicting different active periods must be chosen carefully, because the durations of active periods differ from one another. This paper addresses these two challenges, focusing on Twitter hashtags as a case study. To deal with the first challenge, a DNN-based prediction framework is proposed, embedding dynamic and static factors using LSTM and CNN respectively. To deal with the second challenge, an appropriate value of cumulative popularity is set to trigger the prediction of the active period. Experimental and comparative results show the superiority of our prediction solution compared with SpikeM and SVR.
       
  • Efficient and accurate 3D modeling based on a novel local feature
           descriptor
    • Abstract: Publication date: Available online 12 April 2019Source: Information SciencesAuthor(s): Bao Zhao, Juntong Xi Registration is a key step in 3D modeling. In this paper, we propose an efficient and accurate 3D modeling algorithm composed of pairwise registration and multi-view registration. For pairwise registration, we propose a novel local descriptor named divisional local feature statistics (DLFS), which is generated by first dividing a local space into several partitions along the projected radial direction and then computing statistics of one spatial and three geometrical attributes on each partition. To improve the compactness of DLFS, a principal component analysis (PCA) technique is used to compress it. Based on the compressed DLFS descriptor, together with a game-theoretic matching technique and two variants of ICP, pairwise registration is performed efficiently and accurately. On this basis, multi-view registration is performed by combining a shape-growing based registration technique with a simultaneous registration method. In this process, a correspondence transition technique is proposed for efficiently and accurately estimating the overlap ratio between any two input scans. Extensive experiments are conducted to verify the performance of our algorithms. The results show that the DLFS descriptor has strong robustness, high descriptiveness and efficiency, and that the proposed 3D modeling algorithm is very efficient and accurate.
       
  • Quantitative Analysis for Capabilities of Vehicular Fog Computing
    • Abstract: Publication date: Available online 10 April 2019Source: Information SciencesAuthor(s): Xuefeng Xiao, Xueshi Hou, Xinlei Chen, Chenhao Liu, Yong Li With the growing trend of making vehicles smarter, the idea of utilizing vehicles as infrastructure for communication and computation has triggered great interest. There have been increasing efforts to integrate connected vehicles into cloud computing systems, but the performance of such systems is restricted by high latency. To solve this problem, a new computing paradigm, fog computing, has been proposed to better exploit the potential computing resources of connected vehicles with a collaborative multitude of end-user clients or near-user edge devices [8]. Fog computing differs from cloud computing in its proximity to end users, dense geographical distribution and support for mobility. However, current studies on fog computing based vehicular systems mainly focus on reliability and security issues rather than investigating realistic scenarios. To the best of our knowledge, this paper is the first to propose vehicular fog computing and study its capabilities using realistic data acquired from tens of thousands of taxis in Beijing, China. A mathematical model is developed for vehicular fog computing, based on which we can predict the potential computing capacity of a vehicular fog and analyze the impact of communication range on that capacity. We then present the temporal and spatial distribution of the potential computation capacity of vehicular fog computing at a city-wide scale. Our study quantitatively reveals the capabilities of vehicular fog computing at different scales, which offers insightful guidelines for related system and protocol designs in the future.
       
  • Objective Reduction for Visualising Many-Objective Solution Sets
    • Abstract: Publication date: Available online 8 April 2019Source: Information SciencesAuthor(s): Liangli Zhen, Miqing Li, Dezhong Peng, Xin Yao Visualising a solution set is of high importance in many-objective optimisation. It can help algorithm designers understand the performance of search algorithms and help decision makers select their preferred solution(s). In this paper, an objective reduction-based visualisation method (ORV) is proposed to view many-objective solution sets. ORV attempts to map a solution set from a high-dimensional objective space into a low-dimensional space while preserving the distribution of, and the Pareto dominance relation between, solutions in the set. Specifically, ORV sequentially decomposes objective vectors that can be linearly represented by their positively correlated objective vectors until the expected number of preserved objective vectors is reached. ORV formulates the objective reduction as a solvable convex problem. Extensive experiments on both synthetic and real-world problems have verified the effectiveness of the proposed method.
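ORV's convex formulation is not reproduced in the abstract; as a rough stand-in, the sketch below uses a nonnegative least-squares fit to test whether one objective is (almost) linearly represented by the others, which is the kind of redundancy the method removes. The tolerance is an assumption:

```python
import numpy as np
from scipy.optimize import nnls

# Objective k is a candidate for removal if it is a nonnegative linear
# combination of the remaining objectives (illustrative criterion).
F = np.random.rand(200, 5)                  # 200 solutions, 5 objectives
F[:, 4] = 0.6 * F[:, 0] + 0.4 * F[:, 2]     # make objective 5 redundant

k = 4
others = np.delete(F, k, axis=1)
w, residual = nnls(others, F[:, k])
print("weights:", w.round(3), "residual:", residual)
if residual < 1e-6 * np.linalg.norm(F[:, k]):
    print("objective", k, "is linearly represented and can be removed")
```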
       
  • Sequential three-way decisions via multi-granularity
    • Abstract: Publication date: Available online 30 March 2019Source: Information SciencesAuthor(s): Jin Qian, Caihui Liu, Duoqian Miao, Xiaodong Yue Three-way decisions provide a trisecting-and-acting framework for complex problem solving. Sequential three-way decisions have emerged for cost-sensitive decision-making problems under multiple levels of granularity. Within this framework, how to act upon the three pairwise disjoint regions is the most important issue. To this end, we propose a generalized model of sequential three-way decisions via multi-granularity. We then adopt typical aggregation strategies to implement five kinds of multigranulation sequential three-way decisions: the weighted arithmetic mean, optimistic, pessimistic, pessimistic-optimistic and optimistic-pessimistic variants. Furthermore, we discuss the correctness and rationality of these five kinds of multigranulation sequential three-way decisions and analyze the relationships and differences between them. Finally, the experimental results demonstrate that the first four variants are effective. These models will accelerate and enrich the development of multigranulation three-way decisions.
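A minimal sketch of the three aggregation strategies that are easiest to state numerically, applied to conditional probabilities from several granular structures and thresholded into three-way regions. The probabilities, weights, and (alpha, beta) values are illustrative only:

```python
import numpy as np

# Conditional probabilities Pr(X | [x]_i) of each object under m = 3
# granular structures (invented values).
probs = np.array([[0.85, 0.70, 0.90],
                  [0.40, 0.55, 0.35],
                  [0.10, 0.25, 0.05]])
weights = np.array([0.5, 0.3, 0.2])

aggregated = {
    "weighted-mean": probs @ weights,
    "optimistic":    probs.max(axis=1),   # any granularity suffices
    "pessimistic":   probs.min(axis=1),   # all granularities must agree
}
alpha, beta = 0.75, 0.30                  # illustrative thresholds
for name, p in aggregated.items():
    region = np.where(p >= alpha, "POS", np.where(p <= beta, "NEG", "BND"))
    print(name, p.round(2), region)
```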
       
  • Sequential Three-Way Decision Based on Multi-Granular Autoencoder Features
    • Abstract: Publication date: Available online 25 March 2019Source: Information SciencesAuthor(s): Libo Zhang, Huaxiong Li, Xianzhong Zhou, Bing Huang The autoencoder network is an efficient representation learning method. In general, a finer feature set obtained from an autoencoder leads to a lower error rate and lower total misclassification cost. However, the network usually has to be trained for a long time to obtain such a feature set, leading to a high time cost and total cost. To address this issue, a Sequential Three-Way Decision (S3WD) model is developed to balance the misclassification cost and the time cost in autoencoder-based classification and decision making. Implementing this tradeoff strategy requires a multi-granular feature set. In the network, the discriminative information in the extracted features increases with training epochs, which yields a multi-granular feature structure. A definition of autoencoder-based multi-granular feature description is presented, and based on it, an autoencoder composed of restricted Boltzmann machines is adopted to extract the multi-granular features. Then, a new cost-sensitive S3WD model is proposed, which aims to find the optimal granule level with the lowest total cost. Finally, experiments demonstrate the effectiveness of the proposed approaches.
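The trade-off the S3WD model resolves can be pictured with two synthetic cost curves: a misclassification cost that falls with training epochs and a time cost that rises, with the optimal granule level at the epoch minimizing their sum. Both curves below are invented placeholders, not measured values:

```python
import numpy as np

# Synthetic cost curves: finer features (more epochs) cut misclassification
# cost but raise time cost; the optimal granule level minimises the total.
epochs = np.arange(1, 101)
miscls_cost = 100 * np.exp(-epochs / 20)
time_cost = 0.8 * epochs
total = miscls_cost + time_cost

best = epochs[np.argmin(total)]
print(f"optimal granule level at epoch {best}, total cost {total.min():.1f}")
```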
       
  • An adaptive penalty-based boundary intersection method for many-objective
           optimization problem
    • Abstract: Publication date: Available online 20 March 2019Source: Information SciencesAuthor(s): Yutao Qi, Dazhuang Liu, Xiaodong Li, Jiaojiao Lei, Xiaoying Xu, Qiguang Miao Compared with domination-based methods, the multi-objective evolutionary algorithm based on decomposition (MOEA/D) is less prone to the difficulties caused by an increase in the number of objectives, making it a promising algorithmic framework for solving many-objective optimization problems (MaOPs). In MOEA/D, the target MaOP is decomposed into a set of single-objective problems by using a scalarizing function with evenly specified weight vectors. Among the available scalarizing functions, penalty-based boundary intersection (PBI) with an appropriate penalty parameter is known to perform well. However, its performance is heavily influenced by the setting of the penalty factor (θ), which can take a value from zero to +∞, and only a limited amount of work has considered the choice of an appropriate value. This paper presents a comprehensive experimental study on WFG and WFG-extend problems featuring two to 15 objectives, investigating a range of values of θ to understand its influence on the performance of the PBI-based MOEA/D (MOEA/D-PBI). Based on the observations, the range of values of θ is divided into three sub-regions, and a two-stage adaptive penalty scheme is proposed to adaptively choose an appropriate value from 0.001 to 8000 during an optimization run. Experimental results show that the robustness of MOEA/D-PBI can be significantly enhanced using the proposed scheme.
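The PBI scalarizing function itself is standard and can be written down directly; θ below is exactly the penalty factor the proposed scheme adapts between 0.001 and 8000:

```python
import numpy as np

def pbi(f, weight, z_star, theta):
    """Penalty-based boundary intersection value of objective vector f.
    d1: distance along the weight direction from the ideal point z*;
    d2: perpendicular distance, penalised by theta."""
    w = weight / np.linalg.norm(weight)
    d1 = np.dot(f - z_star, w)
    d2 = np.linalg.norm(f - z_star - d1 * w)
    return d1 + theta * d2

f = np.array([0.6, 0.5])
print(pbi(f, np.array([1.0, 1.0]), np.zeros(2), theta=5.0))
```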
       
  • Accurate Imagery Recovery Using a Multi-Observation Patch Model
    • Abstract: Publication date: Available online 18 March 2019Source: Information SciencesAuthor(s): Lei Zhang, Wei Wei, Qinfeng Shi, Chunhua Shen, Anton van den Hengel, Yanning Zhang A patch-based non-local self-similarity prior underpins many of the current state-of-the-art results in image-recovery problems. The associated methods tend to exploit such priors either heuristically, in terms of the correlations between similar patches, or implicitly, using hand-crafted models. Both approaches have a limited ability to represent image-specific self-similarity statistics, which limits the accuracy of the results. To address this problem, we propose a novel multi-observation patch model (MOPM) for image recovery. The MOPM recovers a clean patch from multiple noisy observations by a linear filtering operation on a specific manifold. More importantly, it can be adaptively learned from the intermediate recovered image with a latent-variable Bayesian learning approach, so the MOPM better represents the image-specific internal statistics. In addition, the MOPM integrates naturally into a half-quadratic splitting framework, in which it can be constantly refined through iterations, ultimately producing promising results. Experimental results on denoising and compressive sensing demonstrate the effectiveness of the MOPM for image recovery.
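The MOPM itself is learned and cannot be reproduced from the abstract, but the half-quadratic splitting loop it plugs into has a simple generic shape. In the sketch below a plain Gaussian filter stands in for the learned prior step; the penalty weight and iteration count are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Generic half-quadratic splitting for denoising: alternate a prior
# (proximal) step, here a Gaussian filter standing in for the learned MOPM,
# with a closed-form data-fidelity step.
def hqs_denoise(y, mu=0.5, iters=10):
    x = y.copy()
    for _ in range(iters):
        z = gaussian_filter(x, sigma=1.0)   # prior step (placeholder denoiser)
        x = (y + mu * z) / (1.0 + mu)       # data-fidelity step, closed form
    return x

noisy = np.random.rand(64, 64)
clean = hqs_denoise(noisy)
```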
       
  • Robust ℓ2-Hypergraph and Its Applications
    • Abstract: Publication date: Available online 13 March 2019Source: Information SciencesAuthor(s): Taisong Jin, Zhengtao Yu, Yue Gao, Shengxiang Gao, Xiaoshuai Sun, Cuihua Li The hypergraph, an important learning tool for modeling high-order data correlations, has a wide range of applications in machine learning and computer vision. The key issue in hypergraph-based applications is to construct an informative hypergraph whose hyperedges effectively represent the high-order data correlations. In practice, real-world data are usually sampled from a union of non-linear manifolds, and because of noise and data corruption, many samples deviate from the underlying manifolds. To construct an informative hypergraph that represents the real-world data distribution well, we propose the ℓ2-Hypergraph model. Our model generates each hyperedge by solving an affine subspace ridge regression problem, where the samples with non-zero representation coefficients are used for hyperedge generation. To be robust to sparse noise and corruption, a sparsity constraint is imposed on the data errors. We have conducted image clustering and classification experiments on real-world datasets. The experimental results demonstrate that our hypergraph model is superior to existing hypergraph construction methods in both accuracy and robustness to sparse noise.
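A minimal sketch of the hyperedge-generation idea: represent one sample by the others via ridge regression and connect the largest-coefficient neighbours into a hyperedge. The regularisation strength and neighbourhood size are assumptions, and the paper's sparse error term is omitted:

```python
import numpy as np

# Represent sample i by the remaining samples via ridge regression and let
# the top-k coefficients define its hyperedge (illustrative sketch).
X = np.random.rand(50, 10)            # 50 samples, 10 features
i, lam, k = 0, 0.1, 5

A = np.delete(X, i, axis=0).T         # dictionary of the other samples
c = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ X[i])
hyperedge = np.argsort(-np.abs(c))[:k]   # indices into the remaining samples
print("samples joined with sample 0:", hyperedge)
```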
       
  • Reliable location allocation for hazardous materials
    • Abstract: Publication date: Available online 7 March 2019Source: Information SciencesAuthor(s): Jie Fan, Lean Yu, Xiang Li, Changjing Shang, Minghu Ha Decision-making on location allocation has gained considerable attention since it involves strategic and operational policies with mid-term and long-term effects. In this paper, a reliable location allocation mechanism for hazardous materials is proposed, considering that depots are subject to the risk of disruption caused by many factors, including depot accidents, maintenance upgrades, equipment faults, and power outages. According to the characteristics of hazardous materials, this study aims to minimize the systemic risk of storage and transportation under budget constraints. The problem is formulated as an integer linear programming model that simultaneously determines: (i) the optimal depot locations; (ii) the amount of hazardous materials stored in each opened depot; (iii) the optimal allocation (transportation) plans; and (iv) the contingency plans for depot disruption. Numerical examples demonstrate that the proposed modeling method yields an 8.33% risk reduction and 1.92% cost savings compared with traditional location allocation that ignores disruption. This reveals the necessity and importance of taking reliability into account and making contingency plans for disruption scenarios in hazardous materials location allocation decisions.
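A toy version of the location decision (enumeration rather than an ILP solver) conveys the structure: open depots under a budget so that demand can be stored while total risk is minimized. All numbers are invented, and the transportation and contingency variables of the full model are omitted:

```python
from itertools import combinations

# name: (opening cost, storage capacity, risk contribution) -- invented data
depots = {"A": (3, 40, 10), "B": (2, 25, 14), "C": (4, 50, 7), "D": (1, 20, 20)}
budget, demand = 7, 60

best = None
for r in range(1, len(depots) + 1):
    for combo in combinations(depots, r):
        cost = sum(depots[d][0] for d in combo)
        cap = sum(depots[d][1] for d in combo)
        risk = sum(depots[d][2] for d in combo)
        if cost <= budget and cap >= demand and (best is None or risk < best[1]):
            best = (combo, risk)
print("open depots:", best[0], "total risk:", best[1])
```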
       
  • Fast Hypervolume Approximation Scheme Based on a Segmentation Strategy
    • Abstract: Publication date: Available online 27 February 2019Source: Information SciencesAuthor(s): Weisen Tang, Hai-Lin Liu, Lei Chen, Kay Chen Tan, Yiu-ming Cheung Hypervolume-indicator-based evolutionary algorithms have been reported to be very promising in many-objective optimization, but the high computational complexity of hypervolume calculation in high dimensions restrains their further application and development. In this paper, we develop a fast hypervolume approximation method that improves on previous approximation methods in both speed and accuracy via a new segmentation strategy. The proposed approach consists of two crucial processes: segmentation and approximation. The segmentation process recursively carves out as many easily measurable areas as possible from the original geometric figure and divides the measurement of the remaining areas into several subproblems. In the approximation process, an improved Monte Carlo simulation estimates these subproblems. The two processes are mutually complementary, simultaneously improving the accuracy and the speed of hypervolume approximation. To validate its effectiveness, experimental studies on four widely used instances are conducted, and the simulation results show that the proposed method is ten times faster than the comparison algorithms at the same measurement error. Furthermore, we integrate an incremental version of this method into the framework of SMS-EMOA, and the performance of the integrated algorithm is highly competitive among the experimental algorithms.
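The sampling half of the scheme can be illustrated with plain Monte Carlo hypervolume estimation for a minimization problem; the paper's contribution, carving off exactly measurable regions before sampling the remainder, is omitted here:

```python
import numpy as np

# Monte Carlo hypervolume estimate (minimisation, reference point ref):
# sample the box [0, ref], count points dominated by some front member.
rng = np.random.default_rng(1)
front = np.array([[0.1, 0.8], [0.4, 0.5], [0.7, 0.2]])
ref = np.array([1.0, 1.0])

samples = rng.uniform(0, 1, size=(100000, 2)) * ref
dominated = (samples[:, None, :] >= front[None, :, :]).all(axis=2).any(axis=1)
print("estimated hypervolume:", dominated.mean() * ref.prod())
```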
       
  • APS: Attribute-Aware Privacy-Preserving Scheme in Location-Based Services
    • Abstract: Publication date: Available online 22 February 2019Source: Information SciencesAuthor(s): Weihao Li, Chen Li, Yeli Geng As one of the most significant factors in privacy protection, side information has been considered in designing privacy-preserving schemes for Location-Based Services (LBSs) in recent years. However, most existing schemes treat this concept in a simplistic way, for example as query probability. In this paper, we consider the basic attribute associated with each location and design an Attribute-aware Privacy-preserving Scheme (APS) to enhance mobile users' location privacy. Specifically, we first extract basic attributes from the local map and specialize the Attribute-Aware Side Information (AASI). We then build an attribute-based hierarchical tree (A-tree), which classifies locations into categories in terms of each location's attribute. Based on this information, we design APS, which consists of two algorithms: a Voronoi Dividing Algorithm (VDA) and a Dummy Determining Algorithm (DDA). In VDA, we divide the local map into Voronoi polygons based on the properties of the Voronoi diagram, which guarantees that the selected locations are dispersed. In DDA, we use the Four Color Map Theorem to color these Voronoi polygons, which helps mobile users choose dummy locations that are as far apart as possible. APS thus provides an optimal dummy set to protect a mobile user's location privacy and query privacy. Finally, thorough analysis and evaluation results illustrate the effectiveness and efficiency of the proposed scheme.
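A sketch of the dispersion idea behind VDA: assigning candidate locations to their nearest seed point is exactly a Voronoi partition, and drawing one dummy per cell keeps the dummies far apart. Seeds, candidates, and the one-per-cell rule are illustrative assumptions, and the coloring step (DDA) is omitted:

```python
import numpy as np
from scipy.spatial import cKDTree

# Partition candidates into Voronoi cells of a few dispersed seeds
# (nearest-seed assignment) and pick one dummy per cell.
rng = np.random.default_rng(2)
candidates = rng.uniform(0, 100, size=(500, 2))
seeds = np.array([[20, 20], [80, 20], [50, 80], [20, 80], [80, 80]])

cell = cKDTree(seeds).query(candidates)[1]     # Voronoi cell of each candidate
dummies = [candidates[cell == i][0] for i in range(len(seeds))
           if (cell == i).any()]
print(np.array(dummies).round(1))
```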
       
  • tcc2vec: RFM-Informed Representation Learning on Call Graphs for Churn
           Prediction
    • Abstract: Publication date: Available online 20 February 2019Source: Information SciencesAuthor(s): Sandra Mitrović, Bart Baesens, Wilfried Lemahieu, Jochen De Weerdt Applying social network analytics to telco churn prediction has been indispensable for almost a decade. However, this uptake is not reflected in significantly increased leverage of the information these networks convey. First, network featurization in general is a cumbersome process due to the complex nature of networks and the lack of a corresponding methodology, which results in ad hoc approaches and hand-crafted features. Second, deriving certain structural features in very large graphs is computationally expensive and, as a consequence, often neglected. Third, call networks are mostly treated as static in spite of their inherently dynamic nature. In this study, we propose tcc2vec, a panoptic approach that devises representation learning (addressing the first problem) on enriched call networks integrating interaction and structural information (overcoming the second problem), which are sliced into different time periods to account for different temporal granularities (addressing the third problem). In an extensive experimental analysis, insights are provided regarding the optimal choice of interaction and temporal granularities, as well as representation learning parameters.
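The RFM side of the approach is easy to sketch: per caller and per time slice, compute Recency (here, time from the last call to the slice end), Frequency (call count) and Monetary (total call duration). The toy records, slice length, and recency convention below are invented:

```python
from collections import defaultdict

# Toy call records: (caller, day, duration_sec); two one-week slices.
calls = [("u1", 2, 60), ("u1", 9, 120), ("u2", 5, 30),
         ("u1", 12, 45), ("u2", 13, 300)]
slice_len = 7

rfm = defaultdict(lambda: [None, 0, 0])      # (caller, slice) -> [R, F, M]
for caller, day, dur in calls:
    s = day // slice_len
    slice_end = (s + 1) * slice_len
    rec, freq, mon = rfm[(caller, s)]
    rfm[(caller, s)] = [min(slice_end - day, rec if rec is not None else 10**9),
                        freq + 1, mon + dur]

for (caller, s), (r, f, m) in sorted(rfm.items()):
    print(f"{caller} slice {s}: R={r} F={f} M={m}s")
```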
       
  • A Novel Approach for Panel Data: An Ensemble of Weighted Functional Margin
           SVM Models
    • Abstract: Publication date: Available online 20 February 2019Source: Information SciencesAuthor(s): Birsen Eygi Erdogan, Süreyya Özöğür-Akyüz, Pınar Karadayı Ataş Ensemble machine learning methods are frequently used for classification problems, and they are known to boost prediction accuracy. Support Vector Machines (SVMs) are widely used as base classifiers when constructing different types of ensembles. In this study, we construct a weighted functional-margin classifier ensemble on panel financial ratios to discriminate between solid and unhealthy banks in the Turkish commercial banking case. We propose a novel ensemble generation method, enhanced by a pruning strategy to increase prediction performance, and develop a novel aggregation approach for ensemble learning based on weighted sums. The prediction performance is compared with panel logistic regression, the benchmark method for panel data. The results show that the proposed ensemble method is more successful than a single SVM and the classical generalized linear model approach.
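A generic stand-in for the ensemble construction: members trained on bootstrap samples, combined by a weighted sum of decision values with validation accuracy as the weight. The paper's functional-margin weighting and pruning strategy are not reproduced:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Bootstrap an SVM ensemble and aggregate decision values by weighted sum.
X, y = make_classification(n_samples=400, random_state=0)
Xtr, ytr, Xval, yval = X[:300], y[:300], X[300:], y[300:]

rng = np.random.default_rng(0)
members, weights = [], []
for _ in range(5):
    idx = rng.integers(0, len(Xtr), len(Xtr))          # bootstrap sample
    clf = SVC(kernel="rbf").fit(Xtr[idx], ytr[idx])
    members.append(clf)
    weights.append(clf.score(Xval, yval))              # accuracy as weight

scores = sum(w * m.decision_function(Xval) for w, m in zip(weights, members))
print("ensemble accuracy:", ((scores > 0).astype(int) == yval).mean())
```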
       
  • Privacy-preserving Task Recommendation with Win-win Incentives for Mobile
           Crowdsourcing
    • Abstract: Publication date: Available online 11 February 2019Source: Information SciencesAuthor(s): Wenjuan Tang, Kuan Zhang, Ju Ren, Yaoxue Zhang, Xuemin (Sherman) Shen Mobile crowdsourcing enables mobile requesters to publish tasks that can be accomplished by workers for rewards. However, existing task allocation schemes face a tradeoff between effectiveness and privacy preservation, and most lack win-win incentives for the participation of both requesters and workers. In this paper, we propose a privacy-preserving task recommendation scheme with win-win incentives for crowdsourcing, developed from advanced attribute-based encryption with preparation/online encryption and outsourced decryption technologies. Specifically, we design a bipartite matching between published tasks and participating workers that recommends tasks to eligible, interested workers and provides valuable task accomplishment for requesters in a win-win manner. Furthermore, our scheme reduces the encryption cost for requesters by splitting encryption into preparation and online phases, and shifts most of the decryption overhead from the worker side to the service platform. Privacy analysis demonstrates requester and worker privacy preservation under chosen-keyword and chosen-plaintext attacks. Performance evaluation shows cost-efficient computation overhead for requesters and workers.
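The matching step can be illustrated with a plain assignment problem: a score matrix blending worker interest and task value is maximized over one-to-one pairings. The scores are invented, and the paper computes them over encrypted attributes rather than in the clear:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Blend worker interest and task value into one score (win-win objective).
interest = np.array([[0.9, 0.2, 0.4],     # rows: workers, cols: tasks
                     [0.3, 0.8, 0.5],
                     [0.6, 0.4, 0.7]])
task_value = np.array([1.0, 0.6, 0.8])
score = interest * task_value             # broadcast over task columns

rows, cols = linear_sum_assignment(-score)  # maximise total score
for w, t in zip(rows, cols):
    print(f"worker {w} -> task {t} (score {score[w, t]:.2f})")
```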
       
  • APCN: A Scalable Architecture for Balancing Accountability and Privacy in
           Large-scale Content-based Networks
    • Abstract: Publication date: Available online 29 January 2019Source: Information SciencesAuthor(s): Yuxiang Ma, Yulei Wu, Jun Li, Jingguo Ge Balancing accountability and privacy has become extremely important in cyberspace, and the Internet has evolved to be dominated by content transmission. Several research efforts have contributed to either accountability or privacy protection, but none has managed to consider both factors in content-based networks. An efficient solution is therefore urgently demanded by service and content providers. However, proposing such a solution is very challenging, because the following questions need to be considered simultaneously: 1) How can the conflict between privacy and accountability be avoided? 2) How can content be identified and accountability performed based on packets belonging to that content? 3) How can the scalability issue of massive content accountability in large-scale networks be alleviated? To address these questions, we propose the first scalable architecture for balancing Accountability and Privacy in large-scale Content-based Networks (APCN). In particular, an innovative method for identifying content is proposed to effectively distinguish content issued by different senders and from different flows, enabling accountability for a content item based on any of its packets. Furthermore, a double-delegate design (i.e., source and local delegates) is proposed to improve performance and alleviate the scalability issue of content accountability in large-scale networks. Extensive NS-3 experiments with real traces are conducted to validate the efficiency of the proposed APCN. The results demonstrate that APCN outperforms existing related solutions in terms of lower round-trip time and higher cache hit rate under different network configurations.
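One plausible reading of packet-level content identification is sketched below: a fixed content ID derived from the sender, flow, and content digest is stamped on every packet, so any single packet suffices to trace the content. APCN's actual ID construction is not given in the abstract; the fields and hashing here are assumptions:

```python
import hashlib

# Hypothetical content ID: bind sender, flow and content digest together so
# that every packet of the content carries the same accountable identifier.
def content_id(sender: str, flow: str, content: bytes) -> str:
    digest = hashlib.sha256(content).hexdigest()[:16]
    return hashlib.sha256(f"{sender}|{flow}|{digest}".encode()).hexdigest()[:16]

cid = content_id("senderA", "flow42", b"...video bytes...")
packets = [{"seq": i, "cid": cid, "payload": b"chunk"} for i in range(3)]
print(packets[1]["cid"] == cid)   # any single packet maps back to the content
```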
       
  • A privacy-preserving cryptosystem for IoT E-healthcare
    • Abstract: Publication date: Available online 28 January 2019Source: Information SciencesAuthor(s): Rafik Hamza, Zheng Yan, Khan Muhammad, Paolo Bellavista, Faiza Titouna Privacy preservation has become a prerequisite for modern applications in the cloud, social media, Internet of Things (IoT), and e-healthcare systems. In general, health and medical data contain images and medical information about patients, and such personal data should be kept confidential to maintain patient privacy. Owing to the properties of digital data, traditional encryption schemes for textual, one-dimensional data cannot be applied directly to e-health data. In addition, when personal data are sent over open channels, patients may lose the privacy of their data contents. Hence, a secure, lightweight keyframe extraction method is required to ensure timely, correct, and privacy-preserving e-health services. Moreover, it is inherently difficult to achieve a satisfactory level of security in a cost-effective way while considering the constraints of real-time e-health applications. In this paper, we propose a privacy-preserving chaos-based cryptosystem for protecting patients' privacy, which can protect patients' images from a compromised broker. In particular, we propose a fast probabilistic cryptosystem to secure medical keyframes extracted from wireless capsule endoscopy procedures using a prioritization method. The encrypted images produced by our cryptosystem exhibit random behavior, guaranteeing computational efficiency as well as a high level of security for the keyframes against various attacks. Furthermore, the cryptosystem processes medical data without leaking any information, preserving patient privacy by allowing only authorized users to decrypt. Experimental results and security analysis from different perspectives verify the excellent performance of our cryptosystem compared with other recent encryption schemes.
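The chaos-based principle can be illustrated with a logistic-map keystream XOR-ed over keyframe bytes. This shows only the core idea: a real cryptosystem, including the probabilistic and diffusion components the paper describes, needs much more:

```python
import numpy as np

# Minimal chaos-based stream cipher sketch: a logistic map seeded by the key
# generates a keystream that is XOR-ed with the keyframe bytes.
def logistic_keystream(x0: float, n: int, r: float = 3.99) -> np.ndarray:
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)     # chaotic iteration
        xs[i] = x
    return (xs * 256).astype(np.uint8)

frame = np.random.randint(0, 256, size=64 * 64, dtype=np.uint8)  # toy keyframe
ks = logistic_keystream(0.3141592, frame.size)                   # key = x0
cipher = frame ^ ks
assert np.array_equal(cipher ^ ks, frame)   # decryption recovers the frame
```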
       
  • PRTA: a Proxy Re-encryption based Trusted Authorization Scheme for Nodes
           on CloudIoT
    • Abstract: Publication date: Available online 28 January 2019Source: Information SciencesAuthor(s): Mang Su, Bo Zhou, Anmin Fu, Yan Yu, Gongxuan Zhang In a CloudIoT platform, data are collected and shared by different Internet of Things (IoT) nodes and are processed and stored on cloud servers. This increases the information-computation abilities of IoT, enriches cloud resources, and improves the integration of the Internet with the human world. All of this offers advantages as well as new challenges for information security and privacy protection. Because of their energy limitations, IoT nodes are particularly vulnerable: it is much easier for hackers to hijack nodes than to attack the data center. Trusted updating of node authorization is therefore a crucial and urgent issue. When nodes are hijacked, both uploading data to servers and downloading information from servers should be forbidden; otherwise, serious damage to sensitive data and server privacy might result. To solve this problem, we propose a Proxy Re-encryption based Trusted Authorization scheme for nodes on CloudIoT (PRTA). PRTA is based on proxy re-encryption (PRE): the cloud server plays the roles of data storage and re-encryption, which realizes the full potential of cloud computing and reduces the cost to nodes. The node's status is taken as one of the parameters for data re-encryption and is under the authorization servers' control, which ensures the security and reliability of the data and benefits privacy protection in CloudIoT. Moreover, the authorization servers are divided into downloading and uploading types, which widens the range of application.
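PRTA builds on proxy re-encryption; a toy BBS98-style PRE (a classic PRE construction, not necessarily the paper's) shows the mechanics that the scheme gates on node status. The group size and keys below are illustrative only:

```python
# Toy BBS98-style proxy re-encryption over a small group; exponent
# arithmetic works mod p-1 by Fermat's little theorem.
p, g = 467, 2
a, b = 5, 9                          # node A's and node B's secret keys
r, m = 123, 42                       # encryption randomness, message

c1, c2 = (m * pow(g, r, p)) % p, pow(g, a * r, p)   # encrypt under A's key
rk = (b * pow(a, -1, p - 1)) % (p - 1)              # re-encryption key A -> B
c2_b = pow(c2, rk, p)                               # cloud re-encrypts for B
recovered = (c1 * pow(pow(c2_b, pow(b, -1, p - 1), p), -1, p)) % p
assert recovered == m                               # B decrypts successfully
# In PRTA the re-encryption is additionally gated on node status: for a
# hijacked node the authorization servers simply withhold re-encryption,
# cutting off both its uploads and downloads.
```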
       
  • A Novel Many-objective Evolutionary Algorithm Based on Transfer Matrix
           with Kriging model
    • Abstract: Publication date: Available online 17 January 2019Source: Information SciencesAuthor(s): Lianbo Ma, Rui Wang, Shengminjie Chen, Shi Cheng, Xingwei Wang, Zhiwei Lin, Yuhui Shi, Min Huang Due to the curse of dimensionality caused by the increasing number of objectives, tackling many-objective optimization problems (MaOPs) is very challenging. To address this issue, this paper proposes a novel many-objective evolutionary algorithm, called Tk-MaOEA, based on a transfer matrix assisted by a Kriging model. In this approach, for global space optimization, a transfer matrix is used as a mapping tool to reduce the number of objectives, which simplifies the optimization process. For objective optimization, the Kriging model is incorporated to further reduce the computational cost. In addition, fast non-dominated sorting and farthest-candidate selection (FCS) are used to guarantee the diversity of solutions. Comprehensive experiments on a set of benchmark functions show that Tk-MaOEA is effective for solving complex MaOPs.
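The Kriging (Gaussian-process) surrogate step can be sketched directly with a standard library: fit on a handful of truly evaluated solutions, then predict objective values cheaply along with an uncertainty estimate. The test function and kernel below are illustrative:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Stand-in for an expensive objective evaluation.
def expensive_objective(x):
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(20, 2))         # the few truly evaluated points
gp = GaussianProcessRegressor(kernel=RBF(0.5)).fit(X, expensive_objective(X))

X_new = rng.uniform(-1, 1, size=(5, 2))
mean, std = gp.predict(X_new, return_std=True)
print(np.c_[mean, std].round(3))             # cheap predictions + confidence
```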
       
  • Multi-granularity three-way decisions with adjustable hesitant fuzzy
           linguistic multigranulation decision-theoretic rough sets over two
           universes
    • Abstract: Publication date: Available online 12 January 2019Source: Information SciencesAuthor(s): Chao Zhang, Deyu Li, Jiye Liang The notion of hesitant fuzzy linguistic term sets (HFLTSs), which enables experts to use a few possible linguistic terms to evaluate common qualitative information, plays a significant role in situations where experts are hesitant to offer linguistic expressions. To address the challenges of information analysis and information fusion in hesitant fuzzy linguistic (HFL) group decision making, and in accordance with the multi-granularity three-way decisions paradigm, this study develops the notion of multigranulation decision-theoretic rough sets (MG-DTRSs) in the HFL setting within the two-universe framework. Having revisited the relevant literature, we first propose a hybrid model named adjustable HFL MG-DTRSs over two universes by introducing an adjustable parameter for the expected risk appetite of experts; both the optimistic and the pessimistic versions of HFL MG-DTRSs over two universes are special cases of the adjustable version. Second, we discuss some fundamental properties of the proposed model. Then, on the basis of the hybrid model, we construct a group decision-making approach within the HFL context. Finally, a practical example, a comparative analysis, and a validity test concerning person-job fit problems are explored to reveal the rationality and practicability of the constructed decision-making rule.
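One plausible numeric reading of the adjustable operator: a risk-appetite parameter lam interpolates between the pessimistic (min over granular structures) and optimistic (max) fused memberships, recovering both as special cases. The paper works with hesitant fuzzy linguistic terms rather than plain numbers, so this is only a caricature:

```python
import numpy as np

# Fused membership of each object under 3 granular structures (invented).
memberships = np.array([[0.6, 0.8, 0.7],
                        [0.3, 0.5, 0.2]])
for lam in (0.0, 0.5, 1.0):          # 0: pessimistic, 1: optimistic
    fused = lam * memberships.max(axis=1) + (1 - lam) * memberships.min(axis=1)
    print(f"lam={lam}: {fused.round(2)}")
```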
       
  • An Angle Dominance Criterion for Evolutionary Many-Objective Optimization
    • Abstract: Publication date: Available online 7 January 2019Source: Information SciencesAuthor(s): Yuan Liu, Ningbo Zhu, Kenli Li, Miqing Li, Jinhua Zheng, Keqin Li It is known that Pareto dominance encounters difficulties in many-objective optimization: this strict criterion can make most individuals of a population incomparable in a high-dimensional space. A straightforward approach to this issue is to modify the Pareto dominance criterion, typically by expanding the dominance region. However, such modifications are often associated with one or more parameters determining the degree of expansion, and the performance of the corresponding algorithm can be sensitive to these parameters. In this paper, we propose a new dominance criterion, angle dominance, for many-objective optimization problems. The angle dominance criterion provides sufficient selection pressure towards the Pareto front without requiring parameter tuning. In addition, an interesting property of the proposed criterion, in contrast to existing dominance criteria, is its ability to reflect an individual's extensity in the population. Angle dominance is integrated into NSGA-II (in place of Pareto dominance) and demonstrates high competitiveness in many-objective optimization compared with a range of peer algorithms.
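The abstract does not define the criterion precisely, so the sketch below encodes one plausible reading: each solution gets, per objective axis, the angle it subtends at a reference point on that axis, and x angle-dominates y when all of x's angles are no larger and at least one is strictly smaller. The reference-point construction is an assumption:

```python
import numpy as np

def angles(f, k):
    """Angle of solution f (minimisation) seen from the point k*e_i on each
    axis, measured against the negative axis direction (illustrative)."""
    out = []
    for i in range(len(f)):
        v = f.copy(); v[i] -= k            # vector from the axis point to f
        e = np.zeros(len(f)); e[i] = -1.0  # negative axis direction
        out.append(np.arccos(np.dot(v, e) / np.linalg.norm(v)))
    return np.array(out)

k = 2.0                                    # reference scalar (illustrative)
x, y = np.array([0.2, 0.9]), np.array([0.5, 1.0])
ax, ay = angles(x, k), angles(y, k)
print("x angle-dominates y:", (ax <= ay).all() and (ax < ay).any())
```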
       