
Publisher: Springer-Verlag (Total: 2352 journals)

 Artificial Intelligence Review   [SJR: 0.948]   [H-I: 48]   Hybrid journal (may contain Open Access articles)   ISSN (Print) 0269-2821 - ISSN (Online) 1573-7462   Published by Springer-Verlag
• A review on the application of structured sparse representation at image
annotation
• Authors: Vafa Maihami; Farzin Yaghmaee
Pages: 331 - 348
Abstract: The increasing number of images on the Web and in other information environments calls for efficient management and suitable retrieval, especially by computers. Image annotation is a process that produces words for a digital image based on its content. Users prefer image search based on text queries and keywords, which has increased the use of image annotation. In this paper, we discuss the applicability of structured sparse representations to image annotation. First, the components of image annotation and sparse representation are reviewed. Then, we survey image annotation algorithms based on structured sparse representation. Next, a comparison of the algorithms is presented. Finally, the paper concludes with some major challenges and open issues in image annotation using structured sparse representations.
PubDate: 2017-10-01
DOI: 10.1007/s10462-016-9502-x
Issue No: Vol. 48, No. 3 (2017)
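The sparse-representation machinery this survey builds on can be illustrated with a minimal greedy sketch. The snippet below is not any surveyed algorithm, just plain matching pursuit over an invented unit-norm dictionary, to show what "representing a signal sparsely over a dictionary" means:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, atoms, k):
    """Greedy sparse approximation: repeatedly pick the unit-norm atom most
    correlated with the residual and subtract its contribution."""
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(k):
        # atom with the largest absolute correlation with the residual
        j = max(range(len(atoms)), key=lambda i: abs(dot(residual, atoms[i])))
        c = dot(residual, atoms[j])
        coeffs[j] += c
        residual = [r - c * a for r, a in zip(residual, atoms[j])]
    return coeffs, residual

# Toy example: standard basis as the dictionary, 2-sparse signal.
atoms = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
coeffs, residual = matching_pursuit([3, 0, 4], atoms, k=2)
```

With an orthonormal toy dictionary the residual vanishes after two picks; real annotation methods use learned, overcomplete dictionaries and structured sparsity penalties on the coefficients.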

• Evolution or revolution: the critical need in genetic algorithm based
testing
• Authors: Anupama Surendran; Philip Samuel
Pages: 349 - 395
Abstract: Software testing is one of the most inevitable processes in software development. The field has seen extensive use of search-based techniques in the last decade, and among these, metaheuristic techniques such as genetic algorithms have garnered the major share of attention from researchers. Given the large body of work in this field, it is high time to study how well genetic-algorithm-based techniques fare in the practical testing process. In this work, we present a roadmap to the future of genetic-algorithm-based software testing, based on a review of the literature. We have mainly reviewed works that use genetic algorithms for software test data generation. This independent review is designed to direct the attention of future researchers to the deficiencies of genetic-algorithm-based testing, their possible solutions, and the extent to which they are correctable. The observations from the selected primary studies highlight the issues faced when genetic algorithms are applied in software testing, and reveal that the type of genetic algorithm used, fitness function design, population initialization, and parameter settings all affect the quality of the solution obtained. From the review we conclude that more generalized approaches can make genetic-algorithm-based software testing one of the strongest methods in practical software testing. We hope this review will advance the field of genetic-algorithm-based software testing.
PubDate: 2017-10-01
DOI: 10.1007/s10462-016-9504-8
Issue No: Vol. 48, No. 3 (2017)
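The core idea in genetic-algorithm-based test data generation is to evolve inputs under a fitness function measuring how close an input comes to covering a target branch. A minimal sketch, with an invented function under test whose target branch is `x == 2 * y` (branch distance reaches 0 when the branch is covered):

```python
import random

def branch_distance(x, y):
    """Fitness for a hypothetical target branch ``x == 2 * y``:
    0 means the branch is taken; smaller is closer."""
    return abs(x - 2 * y)

def evolve(pop_size=40, generations=100, seed=0):
    """Tiny elitist GA over integer test-input pairs."""
    rng = random.Random(seed)
    pop = [(rng.randint(-100, 100), rng.randint(-100, 100))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: branch_distance(*ind))
        if branch_distance(*pop[0]) == 0:          # target branch covered
            return pop[0]
        parents = pop[: pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            (x1, y1), (x2, y2) = rng.sample(parents, 2)
            x, y = x1, y2                          # one-point crossover
            if rng.random() < 0.3:                 # mutate each gene
                x += rng.randint(-5, 5)
            if rng.random() < 0.3:
                y += rng.randint(-5, 5)
            children.append((x, y))
        pop = parents + children
    pop.sort(key=lambda ind: branch_distance(*ind))
    return pop[0]
```

The fitness design, selection scheme, and mutation rates here are illustrative choices; the abstract's point is precisely that such settings strongly affect solution quality.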

• Evaluation in artificial intelligence: from task-oriented to
ability-oriented measurement
• Authors: José Hernández-Orallo
Pages: 397 - 447
Abstract: Abstract The evaluation of artificial intelligence systems and components is crucial for the progress of the discipline. In this paper we describe and critically assess the different ways AI systems are evaluated, and the role of components and techniques in these systems. We first focus on the traditional task-oriented evaluation approach. We identify three kinds of evaluation: human discrimination, problem benchmarks and peer confrontation. We describe some of the limitations of the many evaluation schemes and competitions in these three categories, and follow the progression of some of these tests. We then focus on a less customary (and challenging) ability-oriented evaluation approach, where a system is characterised by its (cognitive) abilities, rather than by the tasks it is designed to solve. We discuss several possibilities: the adaptation of cognitive tests used for humans and animals, the development of tests derived from algorithmic information theory or more integrated approaches under the perspective of universal psychometrics. We analyse some evaluation tests from AI that are better positioned for an ability-oriented evaluation and discuss how their problems and limitations can possibly be addressed with some of the tools and ideas that appear within the paper. Finally, we enumerate a series of lessons learnt and generic guidelines to be used when an AI evaluation scheme is under consideration.
PubDate: 2017-10-01
DOI: 10.1007/s10462-016-9505-7
Issue No: Vol. 48, No. 3 (2017)

inference system (ANFIS) approach with the help of ANFIS input selection
• Authors: Erman Çakıt; Waldemar Karwowski
Pages: 139 - 155
Abstract: This study presents an adaptive neuro-fuzzy inference system (ANFIS) approach for estimating the number of adverse events, where the dependent variables are four types of adverse-event counts: number of people killed, wounded, hijacked, and the total number of adverse events. Fourteen infrastructure development projects were selected, with allocated budget values at different time periods, population density, and the previous month's adverse-event numbers as independent variables. First, the number of independent variables was reduced using an ANFIS input selection approach. Then, several ANFIS models were built and investigated for Afghanistan as a whole and for the country divided into seven regions. Model performance was assessed and compared based on mean absolute errors. The difference between observed and estimated values fell within a ±1 range around 90% of the time. We included multiple linear regression (MLR) model results to assess the predictive power of the ANFIS approach in comparison to a traditional statistical approach. According to the performance metrics, ANFIS showed greater predictive accuracy than MLR analysis. As a result of this study, we conclude that ANFIS is able to estimate the occurrence of adverse events from economic infrastructure development project data.
PubDate: 2017-08-01
DOI: 10.1007/s10462-016-9497-3
Issue No: Vol. 48, No. 2 (2017)
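The model comparison above rests on the mean absolute error. A sketch of that metric, with invented monthly event counts standing in for the paper's data (the ANFIS and MLR predictions below are illustrative, not the study's results):

```python
def mean_absolute_error(observed, predicted):
    """Mean absolute error between observed and predicted event counts."""
    if len(observed) != len(predicted):
        raise ValueError("series must have equal length")
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / len(observed)

# Hypothetical monthly adverse-event counts for one region.
observed   = [12, 15,  9, 20, 11]
anfis_pred = [11, 14, 10, 19, 12]   # illustrative ANFIS estimates
mlr_pred   = [14, 11, 13, 16,  9]   # illustrative MLR estimates

# Lower MAE means better predictive accuracy.
assert mean_absolute_error(observed, anfis_pred) < mean_absolute_error(observed, mlr_pred)
```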

• A systematic review of text stemming techniques
• Authors: Jasmeet Singh; Vishal Gupta
Pages: 157 - 217
Abstract: Stemming is the process of matching the morphological variants of a word to its root word. It is extensively used as a pre-processing tool in natural language processing, information retrieval, and language modeling. Although many advancements have been made in the field, an organized arrangement of previous work has been lacking. In this paper, we present a review of text stemming theory, algorithms, and applications. We first describe the existing literature relevant to text stemming, classifying it according to certain key parameters; we then present a deep analysis of some well-known stemming algorithms on standard data sets. Finally, the current state of the art and certain open issues related to unsupervised stemming are presented. The main aim of this paper is to provide an extensive and useful understanding of the important aspects of text stemming. The open issues and the analysis of current stemming techniques will help researchers identify new lines of future research.
PubDate: 2017-08-01
DOI: 10.1007/s10462-016-9498-2
Issue No: Vol. 48, No. 2 (2017)
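The simplest family the survey covers is rule-based suffix stripping. A deliberately crude sketch (the rule table is invented and far smaller than, say, Porter's algorithm; note it leaves forms like "running" as "runn", which is exactly the kind of over/under-stemming error such surveys analyze):

```python
# Longest-match-first suffix rules, applied once per word.
SUFFIX_RULES = [
    ("ational", "ate"), ("ization", "ize"), ("fulness", "ful"),
    ("iveness", "ive"), ("edly", ""), ("ing", ""), ("ies", "y"),
    ("ed", ""), ("ly", ""), ("es", ""), ("s", ""),
]

def stem(word):
    """Map a word to a crude root by stripping the longest known suffix,
    keeping at least a three-letter stem."""
    word = word.lower()
    for suffix, replacement in sorted(SUFFIX_RULES, key=lambda r: -len(r[0])):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: len(word) - len(suffix)] + replacement
    return word
```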

• Liar liar, pants on fire; or how to use subjective logic and argumentation
to evaluate information from untrustworthy sources
• Authors: Andrew Koster; Ana L. C. Bazzan; Marcelo de Souza
Pages: 219 - 235
Abstract: This paper presents a non-prioritized belief change operator, designed specifically for incorporating new information from many heterogeneous sources in an uncertain environment. We take into account that sources may be untrustworthy and provide a principled method for dealing with the reception of contradictory information. We specify a novel Data-Oriented Belief Revision Operator that uses a trust model, subjective logic, and a preference-based argumentation framework to evaluate novel information and change the agent’s belief set accordingly. We apply this belief change operator in a collaborative traffic scenario, where we show that (1) some form of trust-based non-prioritized belief change operator is necessary, and (2) in a direct comparison between our operator and a previous proposal, our operator performs at least as well in all scenarios, and significantly better in some.
PubDate: 2017-08-01
DOI: 10.1007/s10462-016-9499-1
Issue No: Vol. 48, No. 2 (2017)
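The subjective-logic building block used here represents a source's view of a proposition as a (belief, disbelief, uncertainty) opinion summing to 1. A sketch of standard cumulative fusion of two such opinions — not the paper's full revision operator, just the underlying combination rule:

```python
def cumulative_fuse(op1, op2):
    """Cumulative fusion of two binomial opinions (belief, disbelief,
    uncertainty), each summing to 1, per standard subjective logic."""
    b1, d1, u1 = op1
    b2, d2, u2 = op2
    k = u1 + u2 - u1 * u2
    if k == 0:                       # two fully certain (dogmatic) opinions
        raise ValueError("cumulative fusion undefined for two dogmatic opinions")
    return ((b1 * u2 + b2 * u1) / k,
            (d1 * u2 + d2 * u1) / k,
            (u1 * u2) / k)

# Two sources report on the same proposition (values invented).
b, d, u = cumulative_fuse((0.6, 0.2, 0.2), (0.3, 0.3, 0.4))
```

Fusing independent evidence reduces uncertainty: the fused `u` is below both input uncertainties. Trust discounting (down-weighting an untrustworthy source's opinion before fusing) is the further step such operators add.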

• Erratum to: IIR model identification using a modified inclined planes
system optimization algorithm
• Authors: Ali Mohammadi; Seyed Hamid Zahiri
Pages: 261 - 261
PubDate: 2017-08-01
DOI: 10.1007/s10462-016-9512-8
Issue No: Vol. 48, No. 2 (2017)

• A survey of imperatives and action representation formalisms
• Authors: Bama Srinivasan; Ranjani Parthasarathi
Pages: 263 - 297
Abstract: Representation of and reasoning about actions is a widespread area in the domain of Artificial Intelligence. Representation involves natural language instructions, which are based on linguistic concepts, while the reasoning methodology deals with logical structures. In the computational domain, several theories based on the state-space approach have been proposed to represent and reason about actions. Considering these aspects, this paper provides an account of work from the viewpoints of linguistics, logic, and action representation formalisms. Based on this study, the paper then proposes a seven-axis categorization scheme that can be used to compare and analyze different theories.
PubDate: 2017-08-01
DOI: 10.1007/s10462-016-9501-y
Issue No: Vol. 48, No. 2 (2017)

• Ball tracking in sports: a survey
• Authors: Paresh R. Kamble; Avinash G. Keskar; Kishor M. Bhurchandi
Abstract: The increase in the number of lovers of sports such as football and cricket has created a need for mining, analyzing, and presenting more and more multidimensional information to them. Different classes of people require different kinds of information, which expands the space and scale of the required information. Tracking ball movement is of utmost importance for extracting any information from ball-based sports video sequences. Based on the literature survey, we initially propose a block diagram depicting the different steps and flow of a general tracking process; the paper follows the same flow throughout. Detection is the first step of tracking. The dynamic and unpredictable nature of ball appearance and movement and the continuously changing background make the detection and tracking processes challenging. These challenges have attracted many researchers to the problem, and good results have been produced under specific conditions. However, generalization of the published work and algorithms to different sports remains a distant goal. This paper is an effort to present an exhaustive, categorical survey of the published research on ball tracking. The work also reviews the techniques used, their performance, advantages, limitations, and their suitability for a particular sport. Finally, we present a discussion of the published work so far, our views and opinions, and a modified block diagram of the tracking process. The paper concludes with final observations and suggestions on the scope of future work.
PubDate: 2017-10-16
DOI: 10.1007/s10462-017-9582-2

• Neuromodulation of internal emergent representations for sequential tasks
• Authors: Dongshu Wang; Junhao Wang; Lei Liu
Abstract: Serotonin and dopamine transmitters are synthesized in the lower brain but are transmitted widely to many areas of the brain for diffused use. Emergent representations are critical for understanding their effects. In prior work (Zheng et al., in: Proceedings of the 2013 international joint conference on neural networks (IJCNN2013), pp 1404–1411, Dallas, Texas, USA, August 4–9, 2013), their effects on internal, non-motor neurons were studied only for pattern recognition tasks. In this paper, we study their effects on sequential tasks: robot navigation under different settings. These are sequential tasks because the outcome of behavior depends not only on the current behavior, as in pattern recognition, but also on previous behaviors and the environment (e.g., previous navigational trajectories). Analytically, we show that the serotonin and dopamine systems affect the performance of sequential tasks in a compound way. Experimentally, we show that the effect on the learning rate of internal feature neurons (in the Y area) allows the agent to approach a friend and avoid an enemy faster, as a compound effect of sequential states in static and dynamic environments. Further, we test the effect of punishment and reward schedules with the same initial locations. These simulation experiments all indicate that reinforcement learning via the serotonin and dopamine systems is beneficial for developing desirable behaviors in this set of sequential tasks: staying statistically close to a friend and away from an enemy. As far as we know, this is the first work that investigates the effects of reinforcers (via serotonin and dopamine) on internal neurons (Y neurons) for sequential tasks using emergent representations.
PubDate: 2017-10-16
DOI: 10.1007/s10462-017-9585-z

• Review of modified and hybrid flower pollination algorithms for solving
optimization problems
• Authors: Dhabitah Lazim; Azlan Mohd Zain; Mahadi Bahari; Abdullah Hisham Omar
Abstract: The flower pollination algorithm (FPA) is a nature-inspired meta-heuristic for handling large-scale optimization processes. This paper reviews previous studies on the application of FPA, modified FPA, and hybrid FPA for solving optimization problems. The effectiveness of FPA for solving optimization problems is highlighted and discussed. The improvement aspects include local and global search strategies and the quality of the solutions. The measured enhancements in FPA span various research domains. The results of the review indicate the capability of enhanced and hybrid FPA for solving optimization problems in a variety of applications, outperforming other established optimization techniques.
PubDate: 2017-10-14
DOI: 10.1007/s10462-017-9580-4
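For orientation, the basic FPA that the reviewed modifications start from alternates global pollination (a Lévy flight toward the current best flower) with local pollination (a move between two random flowers). A minimal sketch on an invented sphere objective; the population size, bounds, and switch probability are illustrative defaults, not values from any reviewed study:

```python
import math
import random

def levy(beta=1.5, rng=random):
    """Mantegna's algorithm for a Levy-distributed step size."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def fpa(objective, dim, n=25, iters=200, p=0.8, seed=1):
    """Minimise ``objective`` with the basic flower pollination algorithm."""
    rng = random.Random(seed)
    flowers = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    best = min(flowers, key=objective)
    for _ in range(iters):
        for i, x in enumerate(flowers):
            if rng.random() < p:     # global pollination via Levy flight
                step = levy(rng=rng)
                cand = [xi + step * (bi - xi) for xi, bi in zip(x, best)]
            else:                    # local pollination between two flowers
                a, b = rng.sample(flowers, 2)
                eps = rng.random()
                cand = [xi + eps * (ai - bi) for xi, ai, bi in zip(x, a, b)]
            if cand is not None and objective(cand) < objective(x):
                flowers[i] = cand    # greedy acceptance
        best = min(flowers, key=objective)
    return best

best = fpa(lambda v: sum(t * t for t in v), dim=2)
```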

• Importance sampling policy gradient algorithms in reproducing kernel
Hilbert space
• Authors: Tuyen Pham Le; Vien Anh Ngo; P. Marlith Jaramillo; TaeChoong Chung
Abstract: Modeling policies in reproducing kernel Hilbert space (RKHS) offers a very flexible and powerful new family of policy gradient algorithms, called RKHS policy gradient algorithms, which are designed to optimize over spaces of very high- or infinite-dimensional policies. However, they are known to suffer from a large variance problem: updating the current policy is based on a functional gradient that does not exploit the old episodes sampled by previous policies. In this paper, we introduce a generalized RKHS policy gradient algorithm that integrates the following ideas: (i) policy modeling in RKHS; (ii) normalized importance sampling, which helps reduce the estimation variance by reusing previously sampled episodes in a principled way; and (iii) regularization terms, which prevent the updated policy from over-fitting to sampled data. In the experiments, we provide an analysis of the proposed algorithm on benchmark domains. The results show that the proposed algorithm retains the powerful policy modeling of RKHS while achieving greater data efficiency.
PubDate: 2017-10-10
DOI: 10.1007/s10462-017-9579-x
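Idea (ii), normalized (self-normalized) importance sampling, can be sketched independently of the RKHS machinery: each old episode is reweighted by the product of per-step likelihood ratios between the target and behaviour policies, and the weighted returns are normalized by the weight sum rather than the episode count. The probability arrays below are invented for illustration:

```python
def normalized_is_estimate(returns, target_probs, behaviour_probs):
    """Self-normalised importance-sampling estimate of expected return
    under the target policy, from episodes sampled by behaviour policies.

    target_probs[i] / behaviour_probs[i] hold the per-step action
    probabilities of episode i under each policy.
    """
    weights = []
    for tp, bp in zip(target_probs, behaviour_probs):
        w = 1.0
        for t, b in zip(tp, bp):
            w *= t / b                   # per-step likelihood ratio
        weights.append(w)
    total = sum(weights)
    # Normalising by the weight sum (rather than the episode count)
    # trades a small bias for a large variance reduction.
    return sum(w * r for w, r in zip(weights, returns)) / total

# Two one-step episodes with returns 1.0 and 0.0 (all values invented).
same_policy = normalized_is_estimate([1.0, 0.0], [[0.5], [0.5]], [[0.5], [0.5]])
shifted     = normalized_is_estimate([1.0, 0.0], [[1.0], [0.0]], [[0.5], [0.5]])
```

When target and behaviour coincide the estimate is the plain average (0.5); when the target policy never takes the second episode's action, that episode's weight vanishes and the estimate shifts to 1.0.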

• A survey of feature selection methods for Gaussian mixture models and
hidden Markov models
• Authors: Stephen Adams; Peter A. Beling
Abstract: Feature selection is the process of reducing the number of collected features to a relevant subset and is often used to combat the curse of dimensionality. This paper provides a review of the literature on feature selection techniques specifically designed for Gaussian mixture models (GMMs) and hidden Markov models (HMMs), two common parametric latent variable models. The primary contribution of this work is the collection and grouping of feature selection methods specifically designed for GMMs and for HMMs. An additional contribution lies in outlining the connections between these two groups of methods, which are often treated as separate topics; we propose that methods developed for one model can be adapted to the other. Further, we find that feature selection methods for GMMs outnumber those for HMMs, and that the proportion of HMM methods requiring supervised data is larger than the corresponding proportion of GMM methods. We conclude that further research into unsupervised feature selection methods for HMMs is required and that established methods for GMMs could be adapted to HMMs. Feature selection is also referred to as dimensionality reduction, variable selection, attribute selection, and variable subset reduction. In this paper, we distinguish between dimensionality reduction and feature selection. Dimensionality reduction, which we do not consider, is any process that reduces the number of features used in a model and can include methods that transform features in order to reduce the dimensionality. Feature selection, by contrast, is a specific form of dimensionality reduction that eliminates features as inputs to the model. The primary difference is that dimensionality reduction can still require the collection of all data sources in order to transform and reduce the feature set, while feature selection eliminates the need to collect the irrelevant data sources.
PubDate: 2017-09-25
DOI: 10.1007/s10462-017-9581-3
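The selection-versus-transformation distinction drawn above can be made concrete with a toy filter method: score whole columns on training data, then keep only the top-scoring column indices. Unlike a transform such as PCA, the discarded sources never need to be collected again. The variance score and data below are invented for illustration and are much simpler than the model-specific (GMM/HMM) methods the survey covers:

```python
def variance(column):
    """Population variance of one feature column."""
    m = sum(column) / len(column)
    return sum((v - m) ** 2 for v in column) / len(column)

def select_features(rows, k):
    """Filter-style feature selection: keep the indices of the k
    highest-variance columns, and project the rows onto them."""
    n_feat = len(rows[0])
    scores = [variance([row[j] for row in rows]) for j in range(n_feat)]
    keep = sorted(sorted(range(n_feat), key=lambda j: -scores[j])[:k])
    return keep, [[row[j] for j in keep] for row in rows]

# Third column is constant (irrelevant), so it is dropped.
keep, reduced = select_features([[1, 10, 0], [2, 20, 0], [3, 30, 0]], k=2)
```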

• A survey on techniques to handle face recognition challenges: occlusion,
single sample per subject and expression
• Authors: Badr Lahasan; Syaheerah Lebai Lutfi; Rubén San-Segundo
Abstract: Face recognition is receiving significant attention due to the important challenges faced when developing real applications under unconstrained environments. The three most important challenges are facial occlusion, dealing with a single sample per subject (SSPS), and facial expression. This paper describes and analyzes various strategies developed recently for overcoming these three major challenges, which seriously affect the performance of real face recognition systems. The survey is organized in three parts. In the first part, approaches that tackle the challenge of facial occlusion are classified, illustrated, and compared. The second part briefly describes the SSPS problem and the associated solutions. In the third part, the facial expression challenge is illustrated. In addition, the pros and cons of each technique are stated. Finally, several directions for future research are suggested, providing a useful perspective for addressing new research in face recognition.
PubDate: 2017-09-14
DOI: 10.1007/s10462-017-9578-y

• Empirically grounded agent-based models of innovation diffusion: a
critical review
• Authors: Haifeng Zhang; Yevgeniy Vorobeychik
Abstract: Innovation diffusion has been studied extensively in a variety of disciplines, including sociology, economics, marketing, ecology, and computer science. Traditional literature on innovation diffusion has been dominated by models of aggregate behavior and trends. However, the agent-based modeling (ABM) paradigm is gaining popularity as it captures agent heterogeneity and enables fine-grained modeling of interactions mediated by social and geographic networks. While most ABM work on innovation diffusion is theoretical, empirically grounded models are increasingly important, particularly in guiding policy decisions. We present a critical review of empirically grounded agent-based models of innovation diffusion, developing a categorization of this research based on types of agent models as well as applications. By connecting the modeling methodologies in the fields of information and innovation diffusion, we suggest that the maximum likelihood estimation framework widely used in the former is a promising paradigm for calibration of agent-based models for innovation diffusion. Although many advances have been made to standardize ABM methodology, we identify four major issues in model calibration and validation, and suggest potential solutions.
PubDate: 2017-09-01
DOI: 10.1007/s10462-017-9577-z
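One of the simplest agent-based diffusion models in this literature is the linear-threshold model: each agent adopts once the fraction of its adopting neighbours reaches a personal threshold. A minimal sketch on an invented four-node line network (thresholds and seeds are illustrative; empirically grounded work calibrates them to data):

```python
def threshold_diffusion(neighbors, thresholds, seeds, steps=20):
    """Linear-threshold agent-based diffusion: an agent adopts once the
    fraction of its adopting neighbours reaches its personal threshold."""
    adopted = set(seeds)
    for _ in range(steps):
        new = {i for i, nbrs in neighbors.items()
               if i not in adopted and nbrs
               and sum(j in adopted for j in nbrs) / len(nbrs) >= thresholds[i]}
        if not new:                  # diffusion has stalled
            break
        adopted |= new
    return adopted

# A line network 0-1-2-3 with uniform thresholds of 0.5.
net = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
uniform = {i: 0.5 for i in net}
cascade = threshold_diffusion(net, uniform, seeds={0})
```

With uniform 0.5 thresholds a single seed cascades down the whole line; raising node 1's threshold above 0.5 blocks the cascade entirely, which is why threshold heterogeneity (an empirically estimated quantity) matters so much in these models.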

• A review on methods and software for fuzzy cognitive maps
• Authors: Gerardo Felix; Gonzalo Nápoles; Rafael Falcon; Wojciech Froelich; Koen Vanhoof; Rafael Bello
Abstract: Fuzzy cognitive maps (FCMs) keep growing in popularity within the scientific community. However, despite substantial advances in the theory and applications of FCMs, there is a lack of an up-to-date, comprehensive presentation of the state-of-the-art in this domain. In this review study we are filling that gap. First, we present basic FCM concepts and analyze their static and dynamic properties, and next we elaborate on existing algorithms used for learning the FCM structure. Second, we provide a goal-driven overview of numerous theoretical developments recently reported in this area. Moreover, we consider the application of FCMs to time series forecasting and classification. Finally, in order to support the readers in their own research, we provide an overview of the existing software tools enabling the implementation of both existing FCM schemes as well as prospective theoretical and/or practical contributions.
PubDate: 2017-08-17
DOI: 10.1007/s10462-017-9575-1
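The dynamic behaviour such reviews analyze comes from the FCM update rule: each concept's activation is the transfer function applied to the weighted sum of incoming activations. A sketch of one common formulation (variants add self-memory terms or other transfer functions; the two-concept weight matrix below is invented):

```python
import math

def fcm_step(state, weights, lam=1.0):
    """One synchronous FCM update: A_i(t+1) = f(sum_j w_ji * A_j(t)),
    with a sigmoid transfer function f(x) = 1 / (1 + exp(-lam * x))."""
    def f(x):
        return 1.0 / (1.0 + math.exp(-lam * x))
    n = len(state)
    return [f(sum(weights[j][i] * state[j] for j in range(n)))
            for i in range(n)]

def fcm_run(state, weights, steps=50, tol=1e-6):
    """Iterate until the map reaches a fixed point (or the step budget)."""
    for _ in range(steps):
        new = fcm_step(state, weights)
        if max(abs(a - b) for a, b in zip(new, state)) < tol:
            return new
        state = new
    return state

# Two concepts: concept 0 excites concept 1 (0.8) and 1 excites 0 (0.5).
W = [[0.0, 0.8], [0.5, 0.0]]
fixed = fcm_run([1.0, 0.0], W)
```

Whether the iteration settles to a fixed point, a limit cycle, or chaos is precisely the static/dynamic-property question the review discusses.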

• Functional and semantic roles in a high-level knowledge representation
language
• Authors: Gian Piero Zarri
Abstract: We describe in this paper a formalization of the notion of “role” that involves a clear separation between two very different sorts of roles. Semantic roles, like student or customer, are seen as (pre-defined) transitory properties that can be associated with (usually animate) entities. From a formal point of view, they can be represented as standard concepts to be placed into a specific branch of a particular ontology; they formalize the static and classificatory aspects of the notion of role. Functional roles must be used, instead, to model those pervasive and dynamic situations corresponding to events, activities, circumstances, etc. that are characterized by spatio-temporal references; see, e.g., “John is now acting as a student”. They denote the specific function, with respect to the global meaning of an event/situation/activity, that is performed by the entities involved in this event/situation, and formalize the dynamic and relational aspects of the notion of role. A functional role of the subject/agent/actor/protagonist type is used to associate “John” with the notion of student or customer (semantic roles) during a specific time interval. Formally, functional roles are expressed as primitive symbols like subject, object, source, beneficiary. Semantic and functional roles interact smoothly when they are used to deal with challenging knowledge representation problems like the so-called “counting problem”, or when we need to set up powerful inference rules whose atoms can directly denote complex situations. In this paper, the differentiation between semantic and functional roles is illustrated from a narrative knowledge representation language (NKRL) point of view. NKRL is a high-level conceptual tool used for the computer-usable representation and management of the inner meaning of syntactically complex and semantically rich multimedia information. But, as we will see, the importance of this distinction goes well beyond its usefulness in a specific NKRL context. In particular, the use of functional roles is of paramount importance for setting up those evolved n-ary forms of knowledge representation that allow us to get rid of the limitations in expressiveness proper to the standard (binary) solutions.
PubDate: 2017-08-09
DOI: 10.1007/s10462-017-9571-5

• Domain adaptation network based on hypergraph regularized denoising
autoencoder
• Authors: Xuesong Wang; Yuting Ma; Yuhu Cheng
Abstract: Domain adaptation learning aims to solve classification problems in an unlabeled target domain by using rich labeled samples from a source domain, but three main problems arise: negative transfer, under-adaptation, and under-fitting. To address these problems, a domain adaptation network based on a hypergraph regularized denoising autoencoder (DAHDA) is proposed in this paper. To better fit the data distribution, the network is built with a denoising autoencoder, which can extract more robust feature representations. In the final feature and classification layers, the marginal and conditional distribution matching terms between domains are obtained via maximum mean discrepancy measurement to solve the under-adaptation problem. To avoid negative transfer, a hypergraph regularization term is introduced to explore the high-order relationships among data. The classification performance of the model is improved by preserving statistical properties and geometric structure simultaneously. Experimental results on 16 cross-domain transfer tasks verify that DAHDA outperforms other state-of-the-art methods.
PubDate: 2017-08-04
DOI: 10.1007/s10462-017-9576-0
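The distribution-matching term mentioned above rests on maximum mean discrepancy (MMD), which compares two samples through mean kernel similarities. A sketch of the standard biased estimator with an RBF kernel (the sample points and bandwidth are invented; the paper embeds this measurement inside its network, which is not reproduced here):

```python
import math

def rbf(x, y, gamma=1.0):
    """RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd_squared(xs, ys, gamma=1.0):
    """Biased estimator of squared MMD between two samples:
    mean k(x, x') + mean k(y, y') - 2 * mean k(x, y)."""
    m, n = len(xs), len(ys)
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / (m * m)
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / (n * n)
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (m * n)
    return kxx + kyy - 2 * kxy

# Identical samples give (near-)zero MMD; distant samples give a large one.
source = [(0.0, 0.0), (1.0, 1.0)]
target = [(5.0, 5.0), (6.0, 6.0)]
```

Minimising this quantity between source and target feature distributions is what drives the domains to align.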

• A survey for the applications of content-based microscopic image analysis
in microorganism classification domains
• Authors: Chen Li; Kai Wang; Ning Xu
Abstract: Microorganisms such as protozoa and bacteria play very important roles in many practical domains, such as agriculture, industry, and medicine. Exploring the functions of different categories of microorganisms is fundamental work in biological studies, which can help biologists and related scientists learn more about the properties, habits, and characteristics of these tiny but indispensable living beings. However, the taxonomy of microorganisms (microorganism classification) is traditionally investigated through morphological, chemical, or physical analysis, which is time-consuming and costly. To overcome this, since the 1970s innovative content-based microscopic image analysis (CBMIA) approaches have been introduced to microbiological fields. CBMIA methods classify microorganisms into different categories using multiple artificial intelligence approaches, such as machine vision, pattern recognition, and machine learning algorithms. Furthermore, because CBMIA approaches are semi- or fully automatic computer-based methods, they are very efficient and labour-saving, providing technical feasibility for microorganism classification in the current big-data age. In this article, we review the development history of microorganism classification using CBMIA approaches along two crossed pipelines. In the first pipeline, all related works are grouped by their corresponding microorganism application domains, making it easy for microbiologists to gain insight into each application domain and find the applied CBMIA techniques of interest. In the second pipeline, the related works in each application domain are reviewed by time period, so that computer scientists can see the dynamics of technological development clearly and keep up with future trends in this interdisciplinary field. In addition, the frequently used CBMIA methods are further analysed to find technological common points and potential reasons.
PubDate: 2017-08-02
DOI: 10.1007/s10462-017-9572-4

• Parallel vision for perception and understanding of complex scenes:
methods, framework, and perspectives
• Authors: Kunfeng Wang; Chao Gou; Nanning Zheng; James M. Rehg; Fei-Yue Wang
Abstract: In the study of image and vision computing, the generalization capability of an algorithm often determines whether it is able to work well in complex scenes. The goal of this review article is to survey the use of photorealistic image synthesis methods in addressing the problems of visual perception and understanding. Currently, the ACP Methodology comprising artificial systems, computational experiments, and parallel execution is playing an essential role in modeling and control of complex systems. This paper extends the ACP Methodology into the computer vision field, by proposing the concept and basic framework of Parallel Vision. In this paper, we first review previous works related to Parallel Vision, in terms of synthetic data generation and utilization. We detail the utility of synthetic data for feature analysis, object analysis, scene analysis, and other analyses. Then we propose the basic framework of Parallel Vision, which is composed of an ACP trilogy (artificial scenes, computational experiments, and parallel execution). We also present some in-depth thoughts and perspectives on Parallel Vision. This paper emphasizes the significance of synthetic data to vision system design and suggests a novel research methodology for perception and understanding of complex scenes.
PubDate: 2017-07-18
DOI: 10.1007/s10462-017-9569-z

JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +44 (0)131 4513762
Fax: +44 (0)131 4513327
