Abstract: The anatomy of the fitness landscape of the quadratic assignment problem is studied in this paper. We study the properties of both random and real-world problems. Using autocorrelation as a measure of landscape ruggedness, we study the landscape of the problems and show how this property is related to the matrices with which the problems are represented. Our main goal is to study new properties of the fitness landscape that have not been studied before and that we believe better reflect problem difficulty. Using a local search algorithm that exhaustively explores the plateaus and the local optima, we explore the landscape, store all the local optima we find, and study their properties. These properties include the time it takes a local search algorithm to find local optima, the number of local optima, the probability of reaching the global optimum, the expected cost of the local optima around the global optimum, and the basins of attraction of the global and local optima. We study these properties for problems of different sizes and, through extrapolation, show how they change with system size and why the problem becomes harder as the system size grows. We also show how the real-world problems are similar to, or different from, the random problems, and which properties of the problem matrices make the landscapes of the real problems similar to or different from one another. PubDate: 2015-05-22
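The autocorrelation measure used above can be illustrated with a short sketch: sample costs along a random walk of pairwise swaps and compute the lag-s correlation of the cost series. The walk length, the swap neighbourhood, and all function names here are illustrative choices of ours, not details taken from the paper.

```python
import random

def qap_cost(perm, flow, dist):
    """Cost of assignment perm for a QAP instance given flow and distance matrices."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def random_walk_autocorrelation(flow, dist, steps=2000, lag=1, seed=0):
    """Estimate the landscape autocorrelation at the given lag along a random
    walk of pairwise swaps; values near 1 indicate a smooth landscape,
    values near 0 a rugged one."""
    rng = random.Random(seed)
    n = len(flow)
    perm = list(range(n))
    costs = []
    for _ in range(steps):
        costs.append(qap_cost(perm, flow, dist))
        i, j = rng.sample(range(n), 2)      # neighbour: swap two facilities
        perm[i], perm[j] = perm[j], perm[i]
    mean = sum(costs) / len(costs)
    var = sum((c - mean) ** 2 for c in costs) / len(costs)
    if var == 0:
        return 1.0  # flat walk: treat as maximally correlated
    cov = sum((costs[t] - mean) * (costs[t + lag] - mean)
              for t in range(len(costs) - lag)) / (len(costs) - lag)
    return cov / var
```

A shorter correlation length (faster decay of this value with the lag) is the usual signature of the rugged landscapes the abstract associates with harder instances.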

Abstract: Tile coding is an effective reinforcement learning method that uses a rather ingenious generalization mechanism based on (1) a carefully designed parameter setting and (2) the assumption that nearby states in the problem space will correspond to similar payoff predictions in the action-value function. Previously, we extended XCSF with tile coding prediction and compared it to tabular tile coding, showing that (1) XCSF performs as well as parameter-optimized tile coding, while also (2) evolving individualized parameter settings in each problem subspace. Our comparison was based on a set of well-known reinforcement learning environments (2D Gridworld and the Mountain Car) that involved no action-value discontinuities and so posed no challenge to tabular tile coding. In this paper, we go a step further and test XCSF with tile coding on a set of problems designed to challenge tile coding by introducing discontinuities in the action-value landscape. The new testbed (called MazeWorld) extends 2D Gridworld with impenetrable obstacles, a conceptually simple modification that can dramatically increase the problem complexity for tabular tile coding. We compare four versions of XCSF with tile coding (each adapting a different set of parameters) to tabular tile coding on four problems of increasing complexity. We show that our system (1) needs fewer training problems than standard tile coding to reach an optimal policy; (2) can evolve adequate coding parameters in each subspace without any previous knowledge; and (3) even when XCSF is not allowed to evolve these parameters, the genetic algorithm still adapts classifier conditions to properly decompose the problem into subspaces, making it much less sensitive to parameter settings than tabular tile coding. PubDate: 2015-04-11
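A minimal sketch of the tabular tile-coding mechanism the paper builds on: several overlapping grids ("tilings"), each offset by a fraction of a tile, map a continuous 2D state to one active tile per tiling. The grid resolution, offsets, and index scheme below are illustrative assumptions, not the paper's parameter settings.

```python
def active_tiles(x, y, n_tilings=4, tiles_per_dim=8, lo=0.0, hi=1.0):
    """Return the active tile index in each of n_tilings offset grids for a
    2D state (x, y) in [lo, hi]^2 -- the standard tile-coding feature set."""
    tile_w = (hi - lo) / tiles_per_dim
    features = []
    for t in range(n_tilings):
        off = t * tile_w / n_tilings  # each tiling is shifted by a fraction of a tile
        cx = min(int((x - lo + off) / tile_w), tiles_per_dim)
        cy = min(int((y - lo + off) / tile_w), tiles_per_dim)
        # flatten (tiling, cx, cy) into a single feature index;
        # the +1 grid size absorbs the offset spilling past the last tile
        features.append(t * (tiles_per_dim + 1) ** 2 + cx * (tiles_per_dim + 1) + cy)
    return features
```

Nearby states activate mostly the same tiles, which is exactly the smoothness assumption (2) above; an impenetrable obstacle between two nearby states breaks it, which is what MazeWorld exploits.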

Abstract: Algorithmic scalability is a major concern for any machine learning strategy in this age of ‘big data’. A large number of potentially predictive attributes is emblematic of problems in bioinformatics, genetic epidemiology, and many other fields. Previously, ExSTraCS was introduced as an extended Michigan-style supervised learning classifier system that combined a set of powerful heuristics to successfully tackle the challenges of classification, prediction, and knowledge discovery in complex, noisy, and heterogeneous problem domains. While Michigan-style learning classifier systems are powerful and flexible learners, they are not considered to be particularly scalable. For the first time, this paper presents a complete description of the ExSTraCS algorithm and introduces an effective strategy to dramatically improve learning classifier system scalability. ExSTraCS 2.0 addresses scalability with (1) a rule specificity limit, (2) new approaches to expert-knowledge-guided covering and mutation mechanisms, and (3) the implementation and utilization of the TuRF algorithm for improving the quality of expert knowledge discovery in larger datasets. Performance over a complex spectrum of simulated genetic datasets demonstrated that these new mechanisms dramatically improve nearly every performance metric on datasets with 20 attributes and made it possible for ExSTraCS to reliably scale up to perform on related 200- and 2000-attribute datasets. ExSTraCS 2.0 was also able to reliably solve the 6-, 11-, 20-, 37-, 70-, and 135-bit multiplexer problems, and did so in similar or fewer learning iterations than previously reported, with smaller finite training sets, and without using building blocks discovered from simpler multiplexer problems. Furthermore, ExSTraCS usability was improved through the elimination of previously critical run parameters. PubDate: 2015-04-03

Abstract: The XCS classifier system is a rule-based evolutionary machine learning system. XCS evolves classifiers in order to learn generalized solutions. XCS with adaptive action mapping (XCSAM) is derived from XCS; it evolves a best action map, that is, classifiers that advocate the best action in every state. Accordingly, XCSAM can potentially evolve solutions that are more compact than those of XCS, which in contrast focuses on a complete action map. Previous experimental results, however, have shown that in some problems XCSAM may produce solutions with more classifiers than XCS. In this paper, we first show that the original fitness-based selection strategy of XCS produces ineffective classifiers in XCSAM, i.e., classifiers that are unlikely to be in the best action map because they are inaccurate or do not advocate best actions. We then introduce a new selection strategy for XCSAM that promotes the evolution of classifiers advocating the best action map and thus produces more compact solutions. The new strategy selects classifiers based both on their fitness (as in XCS) and on XCSAM's action-optimality parameter. The result is a pressure towards classifiers that are accurate and advocate the best actions. We present analyses showing that the new selection strategy successfully enables XCSAM to focus on classifiers with best actions. Our experimental results show that XCSAM with the new selection strategy (called XCSAM-SS) can evolve smaller solutions than XCS (and the original XCSAM) in both single-step and multi-step problems. As a consequence, XCSAM can also learn in fewer iterations than XCS in the single-step problems. We conclude that, since the best action map admits a compact solution, XCSAM with an adequate selection strategy evolves a much more compact solution than XCS. PubDate: 2015-04-02

Abstract: Sequence labeling is an interesting classification domain where, as in normal classification, every input has a class label but, unlike normal classification, the prediction of an input's label may depend on the values or classes of other inputs, so a learner may need to refer to inputs and classes at different time stamps to classify the current input. This is more difficult because the learner does not know where, or how many, inputs are needed to classify the current input. Our interest is in learning general rules for sequence labeling. The XCS algorithm is a rule-based knowledge discovery system powered by a genetic algorithm that has often been used for classification. Here we present XCS-SL, an extension of the XCS classifier system that is applicable to sequence labeling. Toward applying a Learning Classifier System (LCS) to sequence labeling, we propose a new classifier condition with memory (called a variable-length condition) and a rule-discovery system for it. In XCS-SL, classification rules (called “classifiers” here) can include extra conditions on previous inputs, which act as memories. In sequence labeling, the number of conditions/memories needed may differ for each input, so using a fixed number of conditions (i.e., a fixed-length condition) for all classifiers is not a good solution. Instead, XCS-SL classifiers have a variable-length condition that provides more or less memory, and the genetic algorithm can grow and shrink conditions to find a suitable memory size. On two synthetic benchmark problems XCS-SL learns optimal classifiers, and on a real-world sequence labeling task it achieves high classification accuracy and discovers interesting knowledge revealing dependencies between inputs at different times. The comprehensively described system is the first application of an LCS to sequence labeling and we consider it a promising direction for future work. PubDate: 2015-03-10
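One way to picture a variable-length condition is as a list of sub-conditions, one per time step back from the current input, over the usual ternary LCS alphabet with '#' as the don't-care symbol. The encoding below is only our illustrative reading of the abstract; XCS-SL's actual representation and matching procedure may differ.

```python
def matches(condition, history):
    """Match a variable-length condition against an input history.

    condition[0] applies to the current input, condition[1] to the previous
    input, and so on; each sub-condition is a string over {'0', '1', '#'},
    where '#' matches either bit. A condition longer than the available
    history cannot match."""
    if len(condition) > len(history):
        return False
    for sub, inp in zip(condition, reversed(history)):
        for c, b in zip(sub, inp):
            if c != '#' and c != b:
                return False
    return True
```

A genetic algorithm can then lengthen or shorten the condition list, which is the grow/shrink search over memory size the abstract describes.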

Abstract: Natural and artificial dynamical systems in the real world often have dynamics at multiple time scales. Such dynamics can contribute substantially to the complexity of a dynamical system and increase the difficulty with which it can be analyzed. Although evolutionary algorithms have been proposed that are amenable to the automated modeling of dynamical systems, none have explicitly taken into account multiple time scales or leveraged the information about these dynamics that is inherent in experimental observations. We propose a hybrid approach to the design of models for multiple-time-scale dynamical systems that combines an evolutionary algorithm with other metaheuristics and conventional nonlinear regression. With only minimal human-supplied domain knowledge, the algorithm automates the process of analyzing raw experimental observations and creating an interpretable symbolic model of the system under study. We describe the algorithm in detail and demonstrate its applicability to a variety of both physical and simulated systems. In addition, we study the performance and scalability of the algorithm under different types of dynamics, varying levels of experimental noise, and other factors relevant to the practical application of the algorithm. PubDate: 2015-01-31

Abstract: The direction set by Wilson’s XCS is that modern Learning Classifier Systems can be characterized by their use of rule accuracy as the utility metric for the search algorithm(s) discovering useful rules. Such searching typically takes place within the restricted space of co-active rules for efficiency. This paper gives an overview of the evolution of Learning Classifier Systems up to XCS, and then of some of the subsequent developments of Wilson’s algorithm to different types of learning. PubDate: 2015-01-29

Abstract: Evolutionary computation has often been presented as a possible model for creativity in computers. In this paper, evolution is discussed in the light of a theoretical model of human artistic process, recently presented by the author. Some crucial differences between human artistic creativity and natural evolution are observed and discussed, also in the light of other creative processes occurring in nature. As a tractable way to overcome these limitations, a new kind of evolutionary implementation of creativity is proposed, based on a simplified version of the previously presented model, and the results of initial experiments are presented and discussed. Artistic creativity is here modeled as an iterated turn-based process, alternating between a conceptual representation and a material representation of the work-to-be. Evolutionary computation is proposed as a heuristic solution to the principal steps in this process, translating back and forth between the two kinds of representation. Those steps are: implementation, going from concept to material form, and re-conceptualization, forming a new conceptual representation based on the current material form. The advantages and disadvantages of this approach are discussed, and concluding from the initial experiments, it is a very promising path, well worth further exploration. PubDate: 2015-01-20

Abstract: Population diversity is very important in giving an algorithm the power to explore the search space without getting trapped in local optima. By using a probabilistic representation for the quantum individuals, Quantum-inspired Evolutionary Algorithms (QiEA) claim higher population diversity. Considering this important feature of QiEA, we propose different structures to offer better interaction between the q-individuals, and new operators to preserve population diversity and thus improve the performance of the QiEA. We investigate the effect of the structured population on the performance of the algorithm. Additionally, two operators are proposed. The first, which gives the Diversity Preserving QiEA its name, finds converged, similar q-individuals around a local optimum and, while keeping the best q-individuals, reinitializes the inferior ones to push them out of the basin of attraction of the local optimum, helping the algorithm search other regions of the search space. The second is a reinitialization operator that, by reinitializing the whole population, helps it escape from the local optima it is trapped in. By studying the effect of the parameters of the proposed operators on their performance, we show how they improve the performance of QiEA. Experiments are performed on Knapsack, Trap, and fourteen numerical objective functions, and the results show better performance for the proposed algorithm than for the original QiEA. PubDate: 2014-11-22
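As background, a QiEA q-individual is conventionally a string of amplitude pairs (α, β) with α² + β² = 1, observed probabilistically to produce binary solutions; reinitializing a q-individual to the equal-superposition state restores maximal diversity, which is the effect the operators above exploit. The sketch below illustrates this standard representation only; the paper's exact operators are not reproduced.

```python
import math
import random

def observe(qindividual, rng=None):
    """Collapse a q-individual (a list of amplitude pairs (alpha, beta) with
    alpha^2 + beta^2 = 1) into a binary solution: bit i is 1 with
    probability beta_i^2."""
    rng = rng or random.Random()
    return [1 if rng.random() < beta * beta else 0 for _, beta in qindividual]

def fresh_qindividual(n):
    """A reinitialized q-individual in equal superposition
    (alpha = beta = 1/sqrt(2)): every bit is equally likely to be observed
    as 0 or 1, the maximally diverse state used when pushing an individual
    out of a basin of attraction."""
    a = 1.0 / math.sqrt(2.0)
    return [(a, a)] * n
```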

Abstract: Monterey Mirror is an experiment in interactive music performance. It engages a human (the performer) and a computer (the mirror) in a game of playing, listening, and exchanging musical ideas. The computer side involves an interactive stochastic music generator that incorporates Markov models, genetic algorithms, and power-law metrics. This approach combines the predictive power of Markov models with the innovative power of genetic algorithms, using power-law metrics for fitness evaluation. These metrics have been developed and refined in a decade-long project exploring music information retrieval based on Zipf’s law and related power laws. We describe the architecture of Monterey Mirror, which can generate musical responses based on aesthetic variations of user input, and explore how such a system may be used as a musical meta-instrument/environment in avant-garde music composition and performance projects. PubDate: 2014-11-21
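A Zipf-based metric of the kind mentioned can be sketched as a log-log regression of event frequency against rank: a slope near -1 indicates Zipfian structure. This is a generic illustration of the idea, not the project's actual fitness metric.

```python
import math
from collections import Counter

def zipf_slope(events):
    """Least-squares slope of log(frequency) vs log(rank) for a sequence of
    musical events (e.g. pitches or intervals). Assumes at least two
    distinct event types. Slopes near -1 indicate Zipfian structure."""
    freqs = sorted(Counter(events).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx
```

A fitness function could then score candidate phrases by how close such slopes (over several musical dimensions) are to those of the performer's input.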

Abstract: In this paper, we study cooperative coevolutionary algorithms (CCEAs) for dynamic optimization. We introduce CCEAs with two popular types of individuals: (1) random immigrants (RIs), which increase diversity in changing environments, and (2) elitist individuals, which increase local convergence to the optima. The CCEAs are evaluated on a standard suite of benchmark problems and are compared with evolution strategies (ES). Our experimental results show that the CCEAs are efficient at locating and tracking optima in dynamic environments, and are superior to the ES when both the RI and elitist individuals are used. In addition, we empirically investigate how the CCEAs perform with different parameter settings, including collaboration methods, the use of plus or comma selection, and the number of RI and elitist individuals. We also investigate CCEAs that use mutative σ-self-adaptation. The CCEAs perform best when they use the best collaboration method and the plus selection; the use of mutative σ-self-adaptation has an insignificant effect. Our results also show that the CCEAs are more scalable than the ES in dynamic environments. PubDate: 2014-11-18
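The random-immigrants mechanism can be sketched in a few lines: after each generation (or detected environment change), the worst individuals are replaced with freshly generated ones. The best-first population ordering and replacement policy below are a common variant and an assumption on our part, not the paper's exact scheme.

```python
import random

def insert_random_immigrants(population, n_immigrants, new_individual, rng=None):
    """Replace the n_immigrants worst individuals (population assumed sorted
    best-first) with freshly generated ones, preserving diversity after an
    environment change. new_individual is a factory taking an RNG."""
    rng = rng or random.Random(0)
    survivors = population[:len(population) - n_immigrants]
    return survivors + [new_individual(rng) for _ in range(n_immigrants)]
```

Elitist individuals play the complementary role: copies of the best solutions are retained unchanged, trading some diversity for faster local convergence.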

Abstract: A cognitive architecture is presented for modelling some properties of sensorimotor learning in infants, namely the ability to accumulate adaptations and skills over multiple tasks in a manner that allows recombination and re-use of task-specific competences. The proposed control architecture consists of a population of compartments (units of neuroevolution), each containing networks capable of controlling a robot with many degrees of freedom. The nodes of a network undergo internal mutations, and the networks undergo stochastic structural modifications, constrained by a mutational and recombinational grammar. The nodes consist of dynamical systems such as dynamic movement primitives, continuous-time recurrent neural networks, and high-level supervised and unsupervised learning algorithms. Edges in the network represent the passing of information from a sending node to a receiving node. The networks in a compartment operate in parallel and encode a space of possible subsumption-like architectures that are used to successfully evolve a variety of behaviours for a NAO H25 humanoid robot. PubDate: 2014-11-15

Abstract: We introduce a novel evolutionary algorithm (EA) with a semantic network-based representation. To enable this, we establish new formulations of the EA variation operators, crossover and mutation, adapted to work on semantic networks. The algorithm employs commonsense reasoning, using the ConceptNet and WordNet knowledge bases, to ensure all operations preserve the meaningfulness of the networks. The algorithm can be interpreted as a novel memetic algorithm (MA), given that (1) individuals represent pieces of information that undergo evolution, as in the original sense of memetics introduced by Dawkins; and (2) this differs from existing MAs, where the word “memetic” has been used as a synonym for local refinement after global optimization. To evaluate the approach, we introduce an analogical similarity-based fitness measure computed through structure mapping. This setup enables the open-ended generation of networks analogous to a given base network. PubDate: 2014-11-13

Abstract: NeuroEvolution is the application of Evolutionary Algorithms to the training of Artificial Neural Networks. Currently the vast majority of NeuroEvolutionary methods create homogeneous networks of user defined transfer functions. This is despite NeuroEvolution being capable of creating heterogeneous networks where each neuron’s transfer function is not chosen by the user, but selected or optimised during evolution. This paper demonstrates how NeuroEvolution can be used to select or optimise each neuron’s transfer function and empirically shows that doing so significantly aids training. This result is important as the majority of NeuroEvolutionary methods are capable of creating heterogeneous networks using the methods described. PubDate: 2014-11-08
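The core idea here, a per-neuron transfer-function gene that evolution can mutate, can be sketched as follows. The candidate function set, mutation rate, and genome encoding are illustrative assumptions of ours, not those of the paper.

```python
import math
import random

# Candidate transfer functions a NeuroEvolutionary method might select among.
TRANSFER_FUNCTIONS = {
    'sigmoid': lambda x: 1.0 / (1.0 + math.exp(-x)),
    'tanh': math.tanh,
    'gauss': lambda x: math.exp(-x * x),
    'relu': lambda x: max(0.0, x),
}

def mutate_transfer_function(genome, rng=None, rate=0.1):
    """Point-mutate each neuron's transfer-function gene with probability
    rate. Over generations this yields heterogeneous networks in which each
    neuron's function is evolved rather than fixed by the user."""
    rng = rng or random.Random(0)
    names = list(TRANSFER_FUNCTIONS)
    return [rng.choice(names) if rng.random() < rate else g for g in genome]
```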

Abstract: We have recently presented an initial study of evolutionary algorithms used to design vertical-axis wind turbines (VAWTs), wherein candidate prototypes are evaluated under fan-generated wind conditions after being physically instantiated by a 3D printer. That is, unlike approaches such as computational fluid dynamics simulations, no mathematical formulations are used and no model assumptions are made. However, the representation used significantly restricted the range of morphologies explored. In this paper, we present initial explorations into the use of a simple generative encoding, known as the Gielis superformula, that produces a highly flexible 3D shape representation for designing VAWTs. First, the target-based evolution of 3D artefacts is investigated, and subsequently initial design experiments are performed wherein each VAWT candidate is physically instantiated and evaluated under fan-generated wind conditions. We show that it is possible to produce closely matching designs of a number of 3D objects through the evolution of supershapes produced by the Gielis superformula, and that artificial physical evolution can identify novel and increasingly efficient supershape VAWT designs. PubDate: 2014-10-28
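The Gielis superformula referred to above gives the radius of a 2D supershape as a function of angle; 3D supershapes are commonly obtained as the spherical product of two such curves. A minimal sketch of the 2D formula:

```python
import math

def gielis_radius(theta, m, n1, n2, n3, a=1.0, b=1.0):
    """Radius of the Gielis superformula at angle theta:
    r = (|cos(m*theta/4)/a|^n2 + |sin(m*theta/4)/b|^n3)^(-1/n1).
    m controls rotational symmetry; n1..n3 control pinching and bloating."""
    t = (abs(math.cos(m * theta / 4.0) / a) ** n2
         + abs(math.sin(m * theta / 4.0) / b) ** n3)
    return t ** (-1.0 / n1)
```

A handy sanity check on the formula: with m = 4 and n1 = n2 = n3 = 2 the bracket reduces to cos² + sin² = 1, so the shape is the unit circle. A generative encoding then only needs to evolve the small parameter vector (m, n1, n2, n3, a, b) per curve, which is what makes the representation so compact.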

Abstract: Since their inception, field programmable gate arrays have seen enormous growth in usage because they can dramatically reduce design and manufacturing costs. However, the time required for placement (a key step in the design) dominates the compilation process. In this paper, we take some initial theoretical steps towards developing an efficient genetic algorithm for the placement problem by developing suitable recombination operators. According to Holland, when the genetic algorithm recombines two parent genotypes, the differences between them define a genotypic subspace, and any offspring produced should be confined to this subspace. Recombination operators that violate this principle can direct the search away from the region containing the parent genotypes, which is contrary to the intended task of recombination and often detrimental to search performance. This paper contributes an intuitive visualization technique that can be used to easily detect violations of this principle. The efficacy of the proposed methodology is demonstrated, showing that many standard recombination operators violate the principle. The methodology is then used to guide the development of novel operators that exhibit substantial (and statistically significant) improvements in performance over standard recombination operators. PubDate: 2014-10-02
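Holland's subspace principle above has a simple operational reading: positions where the parents agree must be inherited unchanged, so a crossover restricted in this way never produces offspring outside the region the parents span. The sketch below is our illustration of that principle over a generic fixed-length genotype, not one of the paper's novel placement operators.

```python
import random

def subspace_crossover(p1, p2, rng=None):
    """Uniform crossover confined to the genotypic subspace defined by the
    parents: positions where they agree are inherited unchanged, and only
    positions where they differ are chosen freely between the two parent
    values, so the offspring never leaves the subspace."""
    rng = rng or random.Random(0)
    return [g1 if g1 == g2 else rng.choice((g1, g2))
            for g1, g2 in zip(p1, p2)]
```

For placement the genotype is a mapping of logic blocks to locations rather than a flat string, which is why the paper needs purpose-built operators; the visualization technique it contributes is precisely a way to see when such operators leak outside the parents' subspace.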