
Publisher: Springer-Verlag (Total: 2354 journals)

 Applied Intelligence   [SJR: 0.777]   [H-I: 43]   Hybrid journal (may contain Open Access articles)   ISSN (Print) 0924-669X - ISSN (Online) 1573-7497   Published by Springer-Verlag
• Answer set programming for non-stationary Markov decision processes
• Authors: Leonardo A. Ferreira; Reinaldo A. C. Bianchi; Paulo E. Santos; Ramon Lopez de Mantaras
Pages: 993 - 1007
Abstract: Non-stationary domains, where unforeseen changes happen, present a challenge for agents seeking an optimal policy for a sequential decision making problem. This work investigates a solution to this problem that combines Markov Decision Processes (MDP) and Reinforcement Learning (RL) with Answer Set Programming (ASP) in a method we call ASP(RL). In this method, Answer Set Programming is used to find the possible trajectories of an MDP, from which Reinforcement Learning is applied to learn the optimal policy of the problem. Results show that ASP(RL) is capable of efficiently finding the optimal solution of an MDP representing non-stationary domains.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0988-y
Issue No: Vol. 47, No. 4 (2017)
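The RL half of ASP(RL) can be illustrated with plain tabular Q-learning; the sketch below runs it on a toy two-state MDP whose dynamics, rewards, and parameters are all illustrative, and the ASP pruning of trajectories described in the abstract is omitted:

```python
import numpy as np

rng = np.random.default_rng(8)
n_states, n_actions = 2, 2

def step(s, a):
    # Deterministic toy dynamics: action a moves to state a;
    # landing in state 1 pays reward 1, state 0 pays nothing.
    s2 = a
    r = 1.0 if s2 == 1 else 0.0
    return s2, r

Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

s = 0
for _ in range(2000):
    # Epsilon-greedy action selection.
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    # Standard Q-learning temporal-difference update.
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2
```

After training, the greedy policy in both states is the action leading to the rewarding state, with Q-values near the fixed points 10 and 9 implied by the discount factor.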

• BBBCO and fuzzy entropy based modified background subtraction algorithm
for object detection in videos
• Authors: Manisha Kaushal; Baljit Singh Khehra
Pages: 1008 - 1021
Abstract: Background subtraction (BS) is one of the most commonly used methods for detecting moving objects in videos. In this task, moving object pixels are extracted by subtracting the current frame from a background frame. The obtained difference is compared against a threshold value to classify pixels as belonging to the foreground or background regions. The threshold plays a crucial role in this categorization and can impact the accuracy and precision of the object boundaries obtained by the BS algorithm. This paper proposes an approach for enhancing and optimizing the performance of the standard BS algorithm. The approach uses the concepts of fuzzy 2-partition entropy and Big Bang–Big Crunch Optimization (BBBCO). BBBCO is a recently proposed evolutionary optimization approach for solving problems over multiple variables within prescribed constraints. It enhances the standard BS algorithm by framing the problem of parameter selection for BS as an optimization problem, which is solved using the concept of fuzzy 2-partition entropy. The proposed method is evaluated using videos from benchmark datasets and a number of statistical metrics. The method is also compared with standard BS and another recently proposed method. The results show the promise of the proposed method.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0912-5
Issue No: Vol. 47, No. 4 (2017)
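The thresholding step at the core of background subtraction can be sketched in a few lines; here the threshold is a fixed guess rather than the fuzzy-entropy/BBBCO-optimized value the paper computes, and the frames are synthetic:

```python
import numpy as np

# Hypothetical frame data; in practice these come from a video stream.
rng = np.random.default_rng(0)
background = rng.integers(0, 256, size=(4, 4)).astype(float)
frame = background.copy()
frame[1:3, 1:3] += 120.0  # a "moving object" brightens a 2x2 region

T = 50.0  # threshold; the paper tunes this via BBBCO + fuzzy 2-partition entropy
diff = np.abs(frame - background)
foreground = diff > T  # True where a pixel is classified as moving object
```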

• One-class support higher order tensor machine classifier
• Authors: Yanyan Chen; Liyun Lu; Ping Zhong
Pages: 1022 - 1030
Abstract: One-class classification problems are widely encountered in fields where negative class patterns are difficult to collect, and the one-class support vector machine is one of the popular algorithms for solving them. However, the one-class support vector machine is a vector-based learning algorithm and cannot work directly when the input pattern is a tensor. This paper proposes a tensor-based maximum margin classifier for one-class classification problems and develops a One-Class Support Higher Order Tensor Machine (HO-OCSTM) which can separate most of the target patterns from the origin with the maximum margin in the higher order tensor space. HO-OCSTM directly employs higher order tensors as the input patterns and is better suited to small-sample learning. Moreover, the direct use of tensor representation has the advantage of retaining the structural information of the data, which helps improve the generalization ability of the proposed algorithm. We implement HO-OCSTM by the alternating projection method and solve a convex quadratic program similar to that of the standard one-class support vector machine at each iteration. The experimental results show the high recognition accuracy of the proposed method.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0945-9
Issue No: Vol. 47, No. 4 (2017)
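As a rough illustration of scoring tensor patterns against a rank-1 weight tensor (all values below are toy stand-ins, not a trained HO-OCSTM solution), one can evaluate the tensor inner product <W, X> for each pattern and compare it with a margin offset from the origin:

```python
import numpy as np

# Toy 2nd-order tensor patterns (e.g. tiny grayscale patches).
rng = np.random.default_rng(1)
X = rng.normal(loc=3.0, scale=0.5, size=(20, 4, 4))  # 20 target patterns

# A rank-1 weight tensor W = u v^T, the form used when the classifier
# weight is constrained to tensor structure; values are illustrative only.
u = np.ones(4) / 2.0
v = np.ones(4) / 2.0
W = np.outer(u, v)

rho = 1.0  # margin offset from the origin
scores = np.tensordot(X, W, axes=([1, 2], [0, 1]))  # <W, X_i> per pattern
accepted = scores >= rho  # patterns on the target side of the hyperplane
```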

• A robust formulation for twin multiclass support vector machine
• Authors: Julio López; Sebastián Maldonado; Miguel Carrasco
Pages: 1031 - 1043
Abstract: Multiclass classification is an important task in pattern analysis, and numerous algorithms have been devised to predict nominal variables with multiple levels accurately. In this paper, a novel support vector machine method for twin multiclass classification is presented. The main contribution is the use of second-order cone programming as a robust setting for twin multiclass classification, in which the training patterns are represented by ellipsoids instead of reduced convex hulls. A linear formulation is derived first, and a kernel-based method is also constructed for nonlinear classification. Experiments on benchmark multiclass datasets demonstrate the predictive performance advantages of our approach.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0943-y
Issue No: Vol. 47, No. 4 (2017)

• MAPJA: Multi-agent planning with joint actions
• Authors: Satyendra Singh Chouhan; Rajdeep Niyogi
Pages: 1044 - 1058
Abstract: In this paper, we propose a domain-independent approach, MAPJA, to solve multi-agent planning problems with cooperative goals involving joint actions. We consider the capability of agents, where capability is represented by a numeric value. State-of-the-art multi-agent planners cannot handle joint actions. We have implemented and evaluated MAPJA on some benchmark planning domains, and the experimental results are quite promising. We have also compared the performance of MAPJA with an existing approach that transforms multi-agent planning problems into single-agent (classical) planning problems. The implementation results show that MAPJA outperforms the existing approach.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0938-8
Issue No: Vol. 47, No. 4 (2017)

• A hybrid bio-inspired algorithm and its application
• Authors: Abdolreza Hatamlou
Pages: 1059 - 1067
Abstract: Clustering is one of the attractive and major tasks in data mining and is used in many applications. It refers to grouping together data points that are similar to one another based on some criteria. One of the efficient algorithms applied to data clustering is the particle swarm optimization (PSO) algorithm. However, PSO often leads to premature convergence, its performance is highly dependent on parameter tuning, and many efforts have been made to improve it in different ways. In order to improve the efficiency of PSO on data clustering, it is hybridized with the big bang-big crunch algorithm (BB-BC) in this paper. In the proposed algorithm, namely PSO-BB-BC, PSO is used to explore the search space to find the optimal centroids of the clusters. Whenever PSO loses its exploration ability, the BB-BC algorithm is used to diversify the particles and prevent premature convergence. The performance of the hybrid algorithm is compared with the PSO, BB-BC and K-means algorithms using six benchmark datasets taken from the UCI machine learning repository. Experimental results show that the hybrid algorithm is superior to the other tested algorithms on all test datasets in terms of cluster quality.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0951-y
Issue No: Vol. 47, No. 4 (2017)
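A bare-bones version of the PSO half (without the BB-BC re-diversification phase the paper adds) on a toy 1-D clustering task might look like this; all parameter settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy 1-D data with two obvious clusters around 0 and 10.
data = np.concatenate([rng.normal(0, 0.3, 30), rng.normal(10, 0.3, 30)])

def sse(centroids):
    # Sum of squared distances of each point to its nearest centroid.
    d = np.abs(data[:, None] - centroids[None, :])
    return (d.min(axis=1) ** 2).sum()

# Plain PSO over candidate centroid pairs; PSO-BB-BC would trigger a
# BB-BC diversification step when the swarm stagnates (omitted here).
n_particles, n_iters = 20, 100
pos = rng.uniform(-2, 12, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([sse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([sse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```

On this easy instance the swarm should place the two centroids near the two cluster centers, far outperforming a single-centroid-like solution at the data mean.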

• A hybrid and scalable multi-agent approach for patient scheduling based on
Petri net models
• Authors: Fu-Shiung Hsieh
Pages: 1068 - 1086
Abstract: Scheduling patients in a hospital is a challenging issue due to the distributed organizational structure, dynamic medical workflows, variability of resources and the computational complexity involved. It calls for a sustainable architecture and a flexible scheduling scheme that can dynamically allocate available resources to react promptly to patients in a hospital and deliver healthcare services in a timely manner. The objective of this paper is to propose a viable and systematic approach to developing a scalable and sustainable scheduling system based on a multi-agent system (MAS) that shortens patient stay in a hospital and plans schedules based on the medical workflows and available resources. To develop a patient scheduling system, we combine the MAS architecture, the contract net protocol (CNP), workflow specification models based on Petri nets and the cooperative distributed problem solving concept. To achieve interoperability and sustainability, the Petri Net Markup Language (PNML) and XML are used to specify the precedence constraints of operations in medical workflows and the capabilities of resource agents, respectively. An agent communication language (ACL) and CNP are used to achieve communication and negotiation/mutual selection of agents. A collaborative algorithm is invoked by individual agents to optimize schedules locally based on a problem formulation automatically obtained from the Petri net models. We have developed a scheduling system based on a FIPA-compliant MAS platform to solve the dynamic patient scheduling problem. To illustrate the benefit of our approach, we compare the performance of our method with a heuristic rule commonly used in practice. In addition, we analyze and verify the scalability of our approach by experiments.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0935-y
Issue No: Vol. 47, No. 4 (2017)

• A template matching approach based on the behavior of swarms of locust
• Authors: Adrián González; Erik Cuevas; Fernando Fausto; Arturo Valdivia; Raúl Rojas
Pages: 1087 - 1098
Abstract: For many image processing applications (such as feature tracking, object recognition, stereo matching and remote sensing), the technique known as Template Matching (TM) plays an important role in the localization and recognition of objects or patterns within a digital image. A TM approach seeks to find the position within a source image that yields the best possible resemblance between a given sub-image (typically referred to as the image template) and the corresponding region of the source image. TM involves two critical aspects: similarity measurement and search strategy. In this sense, the simplest available TM method involves an exhaustive computation of the Normalized Cross-Correlation (NCC) value (similarity measurement) over all pixel locations of the source image (search strategy). Unfortunately, this approach is strongly restricted by the high computational cost of evaluating the NCC coefficient. Recently, several TM methods based on evolutionary approaches have been proposed as an alternative to reduce the number of search locations in the TM process. However, the lack of balance between exploration and exploitation in the operators employed by many such approaches causes TM to suffer from several critical flaws, such as premature convergence. In the proposed approach, the swarm optimization method known as Locust Search (LS) is applied to solve the template matching problem. The unique evolutionary operators employed by the LS search strategy explicitly avoid the concentration of search agents around the best-known solutions, which in turn allows a better exploration of the image's valid search region. Experimental results show that, in comparison to other similar methods, the proposed approach achieves the best balance between estimation accuracy and computational cost.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0937-9
Issue No: Vol. 47, No. 4 (2017)
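The exhaustive NCC baseline that the abstract describes as computationally costly is easy to sketch on synthetic data (the swarm-based search replaces the double loop below with far fewer evaluations):

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two equally sized patches.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

rng = np.random.default_rng(3)
source = rng.random((30, 30))
template = source[12:17, 8:13].copy()  # plant the template at (12, 8)

# Exhaustive search: evaluate NCC at every valid location.
h, w = template.shape
best, best_pos = -1.0, None
for i in range(source.shape[0] - h + 1):
    for j in range(source.shape[1] - w + 1):
        score = ncc(source[i:i + h, j:j + w], template)
        if score > best:
            best, best_pos = score, (i, j)
```

Since the template is an exact crop, the exhaustive scan recovers the planted location with an NCC of essentially 1.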

• Using a hybrid of fuzzy theory and neural network filter for single image
dehazing
• Authors: Hsueh-Yi Lin; Cheng-Jian Lin
Pages: 1099 - 1114
Abstract: When photographs are taken in an outdoor environment, the atmospheric medium causes light attenuation and reduces image quality, and this impact is especially obvious in a hazy environment. The reduction in image quality results in a loss of information, which can render an image recognition system unable to identify objects in the image. In order to eliminate the haze effect on images and improve visual quality, this paper presents an efficient method combining a fuzzy inference system and a neural network filter for single image dehazing. During dehazing, the fuzzy inference system is adopted to estimate the variations in light attenuation, and morphological erosion together with the neural network filter is used to eliminate halation and refine the transmission map. Finally, the brightest 1% of pixels is used to calculate the color vector of the atmospheric light and eliminate color cast. Experimental results indicate that the proposed method is superior to other dehazing methods.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0942-z
Issue No: Vol. 47, No. 4 (2017)
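The recovery step behind most single image dehazing methods inverts the atmospheric scattering model I = J*t + A*(1 - t); a minimal round-trip sketch with a known, uniform transmission (the paper instead estimates the transmission map via fuzzy inference and refines it with the neural network filter) is:

```python
import numpy as np

# Synthetic test: haze a clean image with known transmission t and
# atmospheric light A, then invert the scattering model I = J*t + A*(1-t).
rng = np.random.default_rng(4)
J = rng.random((8, 8))  # clean scene radiance
t = 0.6                 # transmission (uniform here; per-pixel in practice)
A = 0.9                 # atmospheric light (e.g. from the brightest 1% of pixels)

I = J * t + A * (1.0 - t)   # hazy observation

t_hat = max(t, 0.1)         # clamp to avoid division blow-up at t -> 0
J_rec = (I - A) / t_hat + A # recovered scene radiance
```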

• A whitelist and blacklist-based co-evolutionary strategy for defensing
against multifarious trust attacks
• Authors: Shujuan Ji; Haiyan Ma; Yongquan Liang; Hofung Leung; Chunjin Zhang
Pages: 1115 - 1131
Abstract: With electronic commerce becoming increasingly popular, problems of trust have become one of the main challenges in its development. Although various mechanisms have been adopted to guarantee trust between customers and sellers (or platforms), trust and reputation systems are still frequently attacked by deceptive, collusive, or strategic agents, and it is therefore difficult to keep these systems robust. It has been noted that the combined use of both trust and distrust propagation can lead to better results; however, little work is known to have realized this insight successfully. Besides, previous studies either use a social network with trust/distrust information or use a single advisor list for evaluating all sellers, which leads to a lack of pertinence and to inaccurate evaluation. This paper proposes a defensing strategy called WBCEA, in which each buyer agent is modeled with two attributes (i.e., a trustworthy facet and an untrustworthy facet) and two lists (i.e., a whitelist and a blacklist). Based on the social network that is constructed and maintained according to its whitelist and blacklist, an honest buyer agent can find trustable buyers and evaluate candidate sellers according to its own experience and the ratings of trustable buyers. Experiments are designed and implemented to verify the accuracy and robustness of this strategy. Results show that our strategy outperforms existing ones, especially when the majority of buyers in the electronic market are dishonest.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0944-x
Issue No: Vol. 47, No. 4 (2017)
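The whitelist/blacklist filtering idea can be reduced to a one-line sketch: aggregate only the ratings from whitelisted, non-blacklisted buyers (the names and values below are made up, and the co-evolution of the lists is omitted):

```python
# Evaluate a seller using only ratings from buyers on one's whitelist,
# ignoring blacklisted raters (toy sketch of the filtering step).
ratings = {'buyer1': 0.9, 'buyer2': 0.2, 'buyer3': 0.8, 'buyer4': 0.1}
whitelist = {'buyer1', 'buyer3'}
blacklist = {'buyer2'}

trusted = {b: r for b, r in ratings.items()
           if b in whitelist and b not in blacklist}
score = sum(trusted.values()) / len(trusted)  # seller evaluation
```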

• Multi-resolution gray-level image enhancement using particle swarm
optimization
• Authors: Ali Mohammad Nickfarjam; Hossein Ebrahimpour-Komleh
Pages: 1132 - 1143
Abstract: This paper presents a multi-resolution method for gray-level image enhancement using Particle Swarm Optimization (PSO). The enhancement optimization procedure is a non-linear problem with various constraints. The proposed image enhancement algorithm (MGE-PSO) generates a whole pyramid of differently sized images in order to utilize more information in the improvement process. In fact, MGE-PSO employs the ability of the image pyramid to determine the parts of an image that are informative for visual perception. When an image is downscaled, the area of homogeneous regions decreases and informative pixels of the input image can be selected more easily. The PSO uses the averaged variance of all pixels in the informative and non-informative classes of each pyramid level to move through the search space, finding the best pixel intensity values to convey maximum visual perception. Experimental results on the Berkeley dataset demonstrate the superiority of the proposed MGE-PSO over other methods. Besides, a detailed analysis of the selection criterion used in PSO is provided.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0931-2
Issue No: Vol. 47, No. 4 (2017)
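Building the image pyramid the abstract relies on amounts to repeated block-averaging; a minimal sketch (2x2 mean pooling, square images of power-of-two size assumed) is:

```python
import numpy as np

def downscale(img):
    # Halve resolution by averaging non-overlapping 2x2 blocks.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
pyramid = [img]
while pyramid[-1].shape[0] >= 2:
    pyramid.append(downscale(pyramid[-1]))
# pyramid now holds the 4x4, 2x2, and 1x1 levels.
```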

• An optimization method for task assignment for industrial manufacturing
organizations
• Authors: Ni Li; Yuhong Li; Mengyuan Sun; Haipeng Kong; Guanghong Gong
Pages: 1144 - 1156
Abstract: An industrial manufacturing organization is an aggregation of collaborative units and employees engaged in the process of product development and production. The rapidly growing complexity of products has resulted in a rising scale of the corresponding manufacturing organizations. An effective and optimal schema is essential for assigning human resources to tasks to save costs. This paper proposes an optimization method for this task assignment issue based on a dynamic industrial manufacturing process model and an improved quantum genetic algorithm (QGA) with heuristic information: the heuristic QGA (HQGA). The dynamic process model adopts a hierarchical network to illustrate task composition in a complex industrial manufacturing process and dynamically describes the task completion process on the basis of individual performance and cooperative performance. To reduce the complexity of the fitness evaluation and assignment optimization in the model, the HQGA is presented for three types of optimization objective functions. The HQGA introduces a heuristic principle to accelerate convergence toward an optimal solution, and its quantum bit-based encoding design can reflect the degree of participation of different individuals in different tasks. In four case studies, the HQGA successfully completed task assignment based on our dynamic process model and showed better optimization performance than conventional QGA and GA.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0940-1
Issue No: Vol. 47, No. 4 (2017)

• Hybrid artificial bee colony algorithm based approaches for two ring
loading problems
• Authors: Alok Singh; Jayalakshmi Banda
Pages: 1157 - 1168
Abstract: This paper presents hybrid artificial bee colony algorithm based approaches for two $$\mathcal {NP}$$-hard problems arising in optical ring networks. These two problems fall under the category of ring loading problems. Given a set of data transfer demands between different pairs of nodes, the first problem consists in routing the demands on the ring in either the clockwise or counter-clockwise direction so that the maximum data transmitted through any link in either direction is minimized. The second problem, on the other hand, discriminates between the data transmitted in one direction and the other, and consists in minimizing the maximum data transmitted in one particular direction through any link. The first problem is referred to in the literature as the weighted ring edge-loading problem, and the latter as the weighted ring arc-loading problem. Computational results on standard benchmark instances show the effectiveness of our approaches on both problems.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0950-z
Issue No: Vol. 47, No. 4 (2017)
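A tiny weighted ring edge-loading instance makes the problem concrete; the brute-force enumeration below is what a heuristic such as the hybrid ABC approach replaces for instances too large to enumerate (the instance itself is made up):

```python
from itertools import product

# Toy weighted ring edge-loading instance on a 4-node ring (nodes 0..3).
# Each demand (src, dst, amount) is routed clockwise or counter-clockwise;
# the goal is to minimize the maximum load on any link.
N = 4
demands = [(0, 2, 5), (1, 3, 3), (0, 1, 4)]

def clockwise_links(s, d):
    # Links traversed going clockwise from s to d; link i joins nodes i, i+1.
    links, cur = [], s
    while cur != d:
        links.append(cur)
        cur = (cur + 1) % N
    return links

def max_load(directions):
    # directions[k] is True if demand k is routed clockwise.
    load = [0] * N
    for (s, d, amt), cw in zip(demands, directions):
        for link in clockwise_links(s, d) if cw else clockwise_links(d, s):
            load[link] += amt
    return max(load)

# Exhaustive search over the 2^3 routing choices.
best = min(max_load(dirs) for dirs in product([True, False], repeat=3))
```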

• A two-stage discretization algorithm based on information entropy
• Authors: Liu-Ying Wen; Fan Min; Shi-Yuan Wang
Pages: 1169 - 1185
Abstract: Discretization is an important and difficult preprocessing task for data mining and knowledge discovery. Although there are numerous discretization approaches, many suffer from certain drawbacks. Local approaches are efficient, but their generalization ability is weak. Global approaches consider all attributes simultaneously, but they have high time and space complexities. In this paper, we propose a two-stage discretization (TSD) algorithm based on information entropy. In the local discretization stage, we independently select k strong cuts for each attribute to minimize conditional entropy. The goal is to rapidly reduce the cardinality of the attributes, with minor information loss. In the global discretization stage, cuts for all attributes are considered simultaneously to form a scaled decision system. The minimal cut set that preserves the positive region is finally selected. We tested the new algorithm and seven popular algorithms on 28 datasets. Compared with other approaches, our algorithm has the best generalization ability, with a good information preserving ability, the highest classification accuracy, and reasonable time consumption.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0941-0
Issue No: Vol. 47, No. 4 (2017)
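The local stage's entropy-driven cut selection can be sketched for a single attribute and a single cut (i.e. k = 1); the toy labelled data below is illustrative:

```python
import math

# Toy attribute values with class labels; find the binary cut that
# minimizes the conditional (class) entropy, as in the local stage.
data = [(1.0, 'a'), (2.0, 'a'), (3.0, 'a'), (7.0, 'b'), (8.0, 'b'), (9.0, 'b')]

def entropy(labels):
    n = len(labels)
    counts = {l: labels.count(l) for l in set(labels)}
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def conditional_entropy(cut):
    left = [l for v, l in data if v <= cut]
    right = [l for v, l in data if v > cut]
    n = len(data)
    return (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)

# Candidate cuts: midpoints between consecutive distinct values.
values = sorted(v for v, _ in data)
cuts = [(a + b) / 2 for a, b in zip(values, values[1:])]
best_cut = min(cuts, key=conditional_entropy)
```

The cut at 5.0 separates the two classes perfectly, driving the conditional entropy to zero.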

• An automated approach to estimate human interest
• Authors: Tanveer Ahmed; Abhishek Srivastava
Pages: 1186 - 1207
Abstract: Can we model and estimate interest? In general, when an individual engages with an object, say Facebook, Instagram, a mobile game, or anything else, we know that the person has some interest in the object. However, we do not have a procedure that can tell us by “how much” of a factor the person is interested. Simply put, can we find a “number” for someone’s interest? In this article, we propose the design of a framework that can handle this issue. We formulate the interest estimation problem as a state estimation problem and deduce interest indirectly from activity. Activity, stimulated by interest, is measured via a subjective-objective weighted approach. Further, we present a novel continuous-time model of interest by drawing inspiration from physics and economics simultaneously. We model interest along the Ornstein-Uhlenbeck process from physics and improve the performance by borrowing ideas from stochastic volatility models in economics. Subsequently, we employ a particle filter to solve the interest estimation problem. To validate the feasibility of the proposed theory in practice, we investigate the model by conducting numerical simulations on real-world datasets. The results demonstrate good performance of the framework and thus match the theoretical expectations of the method. Lastly, we implement the framework in practice and deploy it as a RESTful service, thereby providing a uniform interface for accessing the procedure via any remote or local application.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0947-7
Issue No: Vol. 47, No. 4 (2017)
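The Ornstein-Uhlenbeck dynamics the interest model draws on can be simulated with a basic Euler-Maruyama loop; the parameters below are illustrative, not those fitted in the paper:

```python
import numpy as np

# Euler-Maruyama simulation of an Ornstein-Uhlenbeck process
#   dX = theta * (mu - X) dt + sigma dW,
# the kind of mean-reverting dynamics used to model latent interest.
rng = np.random.default_rng(5)
theta, mu, sigma = 2.0, 1.0, 0.1  # reversion speed, long-run mean, noise
dt, n_steps = 0.01, 5000

x = np.empty(n_steps)
x[0] = 0.0
for t in range(1, n_steps):
    dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
    x[t] = x[t - 1] + theta * (mu - x[t - 1]) * dt + sigma * dw
```

The path relaxes from its start at 0 toward the long-run mean mu and then fluctuates around it with stationary standard deviation sigma / sqrt(2 * theta).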

• Decentralizing and coevolving differential evolution for large-scale
global optimization problems
• Authors: Ruoli Tang
Pages: 1208 - 1223
Abstract: This paper presents a novel decentralizing and coevolving differential evolution (DCDE) algorithm to address the issue of scaling up differential evolution (DE) algorithms to solve large-scale global optimization (LSGO) problems. As most evolutionary algorithms (EAs) display their weaknesses on LSGO problems due to the exponentially increasing complexity, the cooperative coevolution (CC) framework is often used to overcome such weaknesses. However, the cooperative but greedy coevolution of CC sometimes gives inferior performance, especially on non-separable and multimodal problems. In the proposed DCDE algorithm, to balance the search behavior between exploitation and exploration, the original population is decomposed into several subpopulations in ring connection, and the multi-context vectors according to this connection are introduced into the coevolution. Moreover, a novel “DE/current-to-SP-best-ring/1” mutation operation is also adopted in the DCDE. On a comprehensive set of 1000-dimensional benchmarks, the performance of DCDE compared favorably against several state-of-the-art LSGO algorithms. The experimental analysis results suggest that DCDE is a highly competitive optimization algorithm on LSGO problems, especially on some non-separable and multimodal problems.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0953-9
Issue No: Vol. 47, No. 4 (2017)
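For orientation, here is the classical DE/current-to-best/1 mutation of which the paper's “DE/current-to-SP-best-ring/1” operator is a variant (the subpopulation and ring-neighbourhood bookkeeping is omitted; all settings are toy values):

```python
import numpy as np

rng = np.random.default_rng(7)
pop = rng.uniform(-5, 5, size=(10, 3))   # 10 individuals, 3 dimensions
fitness = (pop ** 2).sum(axis=1)         # sphere function (minimize)
best = pop[fitness.argmin()]

F = 0.5  # scale factor
i = 0    # index of the current individual being mutated
# Two distinct random partners, neither equal to i.
r1, r2 = rng.choice([j for j in range(10) if j != i], size=2, replace=False)
# current-to-best: pull toward the best plus a scaled random difference.
mutant = pop[i] + F * (best - pop[i]) + F * (pop[r1] - pop[r2])
```

In DCDE the `best` term would be replaced by the best individual drawn from the neighbouring subpopulations on the ring rather than the global best.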

• FSPTwigFast: Holistic twig query on fuzzy spatiotemporal XML data
• Authors: Luyi Bai; Yin Li; Jiemin Liu
Pages: 1224 - 1239
Abstract: With spatiotemporal applications increasing, a large amount of spatiotemporal data is emerging. Because temporal and spatial attributes are often vague, research on fuzzy spatiotemporal data, especially on querying it, has attracted a lot of attention. However, although fuzzy logic has been incorporated into querying fuzzy spatiotemporal data and querying fuzzy data in XML, relatively little work has been carried out on querying fuzzy spatiotemporal data in XML. In this paper, we propose an algorithm, called FSPTwigFast, to match fuzzy spatiotemporal XML twig patterns. We represent fuzzy spatiotemporal data by adding temporal and spatial attributes associated with fuzziness to crisp data. We extend Dewey codes to mark fuzzy spatiotemporal data for special processing and to determine the structural relationships of fuzzy spatiotemporal nodes in XML documents. Our technique uses streams to store the leaf nodes in the XML document corresponding to leaf query nodes, which are filtered to delete unmatched nodes. After filtering, output lists are built for every matched leaf node. Finally, experimental results demonstrate the performance advantages of our approach.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0949-5
Issue No: Vol. 47, No. 4 (2017)
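Dewey-style codes, which the paper extends for fuzzy spatiotemporal nodes, make ancestor/descendant tests a simple prefix check on the component paths; a minimal sketch:

```python
# Dewey-style codes label each XML node by its path of child positions,
# so structural (ancestor/descendant) tests reduce to prefix checks.

def dewey(code):
    # Parse '1.2.1' into the component tuple (1, 2, 1).
    return tuple(int(p) for p in code.split('.'))

def is_ancestor(a, b):
    # a is a proper ancestor of b iff a's components are a proper prefix
    # of b's (comparing components, not raw strings, so '1.2' is not
    # wrongly treated as an ancestor of '1.21').
    pa, pb = dewey(a), dewey(b)
    return len(pa) < len(pb) and pb[:len(pa)] == pa

root, region, point = '1', '1.2', '1.2.1'
```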

• Mining top-k high-utility itemsets from a data stream under sliding window
model
• Authors: Siddharth Dawar; Veronica Sharma; Vikram Goyal
Pages: 1240 - 1255
Abstract: High-utility itemset mining has gained significant attention in the past few years. It aims to find sets of items, i.e., itemsets, from a database with utility no less than a user-defined threshold. The notion of utility gives an analyst more flexibility in mining relevant itemsets. Nowadays, a continuous and unbounded stream of data is generated from web clicks, transaction flows in retail stores, sensor networks, etc. Mining high-utility itemsets from a data stream is a challenging task, as the incoming stream has to be processed on the fly under time and memory constraints. The number of high-utility itemsets depends on the user-defined threshold: a large number of itemsets can be generated at very low threshold values and vice versa, so it can be a tedious task to set a threshold value that yields a reasonable number of itemsets. Top-k high-utility itemset mining was coined to address this issue, where k is the number of high-utility itemsets in the result set as defined by the user. In this paper, we propose a data structure and an efficient algorithm for mining top-k high-utility itemsets from a data stream. The algorithm has a single phase and does not generate any candidates, unlike many algorithms that work in two phases, i.e., candidate generation followed by candidate verification. We conduct extensive experiments on several real and synthetic datasets. Experimental results demonstrate that our proposed algorithm performs 20 to 80 times better on sparse datasets and 300 to 700 times better on dense datasets than the state-of-the-art algorithm in terms of computation time. Furthermore, our proposed algorithm requires less memory than the state-of-the-art algorithm.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0939-7
Issue No: Vol. 47, No. 4 (2017)
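Ignoring the streaming and data-structure aspects entirely, the underlying notion of itemset utility and top-k selection can be shown by brute force on a toy database (the prices and transactions are made up):

```python
from itertools import combinations

# Transactions as {item: quantity}; external per-item utilities (prices).
transactions = [{'a': 2, 'b': 1}, {'a': 1, 'c': 3},
                {'b': 2, 'c': 1}, {'a': 1, 'b': 1, 'c': 1}]
price = {'a': 5, 'b': 2, 'c': 1}

def utility(itemset):
    # Sum, over transactions containing the whole itemset, of quantity * price.
    total = 0
    for t in transactions:
        if all(i in t for i in itemset):
            total += sum(t[i] * price[i] for i in itemset)
    return total

items = sorted(price)
all_itemsets = [frozenset(c) for r in range(1, len(items) + 1)
                for c in combinations(items, r)]

k = 3
top_k = sorted(all_itemsets, key=utility, reverse=True)[:k]
```

A streaming top-k miner must produce the same answer without ever materializing all candidate itemsets, which is exactly what the paper's single-phase algorithm is designed for.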

• Shape classification using spectral graph wavelets
• Authors: Majid Masoumi; A. Ben Hamza
Pages: 1256 - 1269
Abstract: Spectral shape descriptors have been used extensively in a broad spectrum of geometry processing applications ranging from shape retrieval and segmentation to classification. In this paper, we propose a spectral graph wavelet approach for 3D shape classification using the bag-of-features paradigm. In an effort to capture both the local and global geometry of a 3D shape, we present a three-step feature description framework. First, local descriptors are extracted via the spectral graph wavelet transform having the Mexican hat wavelet as a generating kernel. Second, mid-level features are obtained by embedding local descriptors into the visual vocabulary space using the soft-assignment coding step of the bag-of-features model. Third, a global descriptor is constructed by aggregating mid-level features weighted by a geodesic exponential kernel, resulting in a matrix representation that describes the frequency of appearance of nearby codewords in the vocabulary. Experimental results on two standard 3D shape benchmarks demonstrate the effectiveness of the proposed classification approach in comparison with state-of-the-art methods.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0955-7
Issue No: Vol. 47, No. 4 (2017)
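The soft-assignment coding step of the bag-of-features pipeline (the second of the three stages described) can be sketched as follows; the codebook and descriptors are toy stand-ins for the spectral graph wavelet descriptors:

```python
import numpy as np

# Soft-assignment coding: each local descriptor activates every codeword
# with a weight that decays with squared distance, instead of a hard
# nearest-word assignment.
rng = np.random.default_rng(6)
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])  # 3 visual words
descriptors = rng.normal(0.0, 0.2, size=(50, 2))           # cluster near word 0

beta = 2.0  # smoothing parameter of the soft assignment
d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
w = np.exp(-beta * d2)
w = w / w.sum(axis=1, keepdims=True)  # each row sums to 1

histogram = w.mean(axis=0)  # mid-level (bag-of-features) representation
```

Since the toy descriptors cluster around the first codeword, the resulting histogram concentrates its mass on that visual word.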

• Community detection in attributed networks based on heterogeneous vertex
interactions
• Authors: Xin Wang; Jianglong Song; Kai Lu; Xiaoping Wang
Pages: 1270 - 1281
Abstract: Community detection is attracting increasing attention in social network analysis. Its aim is to cluster densely connected nodes into communities. In attributed networks, where nodes have attributes, community detection should take both topology and attributes into account. Traditional community detection algorithms focus only on the topological structure; they do not take advantage of attributes, so their performance is limited. Besides, most community detection algorithms for attributed networks are far from satisfactory in terms of accuracy and algorithmic complexity. Moreover, most algorithms depend on users to specify the number of communities, which also impacts performance. Based on a high-performance community detection algorithm named Attractor, we propose Hetero-Attractor, which can detect communities in attributed networks. It expands the sociological model of Attractor and generates a heterogeneous network from the attributed network. Hetero-Attractor analyzes the new network based on the interactions between vertices. Through these interactions, the topological information and attribute information not only each play a role in community detection but also interact with each other to reach a balanced result. It also develops a novel way to analyze the heterogeneous network. The experiments demonstrate that our algorithm performs better by utilizing the attribute information and outperforms other methods in terms of both accuracy and scalability, with a maximum improvement of 60% in accuracy.
PubDate: 2017-12-01
DOI: 10.1007/s10489-017-0948-6
Issue No: Vol. 47, No. 4 (2017)

JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
