Authors:Francisco Botana; Tomas Recio Pages: 3 - 20 Abstract: We review the behavior of some popular dynamic geometry software when computing envelopes, relating the diverse methods implemented in these programs to the various definitions of envelope. Special attention is given to the new GeoGebra 5.0 version, which incorporates a mathematically rigorous approach to envelope computation. Furthermore, we discuss in detail the role, in this context, of the cooperation between GeoGebra and a recent parametric polynomial solving algorithm. This approach seems to yield accurate results, allowing for the first time sound computation of envelopes of families of plane curves in interactive environments. PubDate: 2017-05-01 DOI: 10.1007/s10472-016-9500-3 Issue No:Vol. 80, No. 1 (2017)

Authors:Uwe Egly; Martin Kronegger; Florian Lonsing; Andreas Pfandler Pages: 21 - 45 Abstract: We consider planning with uncertainty in the initial state as a case study of incremental quantified Boolean formula (QBF) solving. We report on experiments with a workflow that incrementally encodes a planning instance into a sequence of QBFs. To solve this sequence of successively constructed QBFs, we use our general-purpose incremental QBF solver DepQBF. Since the generated QBFs have many clauses and variables in common, our approach avoids redundancy both in the encoding phase and in the solving phase. We also present experiments with incremental preprocessing techniques based on blocked clause elimination (QBCE). QBCE allows certain clauses to be eliminated from a QBF in a satisfiability-preserving way. We implemented the QBCE-based techniques in DepQBF in three variants: as preprocessing, as inprocessing (which extends preprocessing by taking into account variable assignments fixed by the QBF solver), and as a novel dynamic approach in which QBCE is tightly integrated into the solving process. For DepQBF, experimental results show that incremental QBF solving with incremental QBCE outperforms incremental QBF solving without QBCE, which in turn outperforms nonincremental QBF solving. For the first time we report on incremental QBF solving with incremental QBCE as inprocessing. Our results are the first empirical study of incremental QBF solving in the context of planning and motivate its use in other application domains. PubDate: 2017-05-01 DOI: 10.1007/s10472-016-9501-2 Issue No:Vol. 80, No. 1 (2017)

Authors:Juana Sendra; David Gómez Sánchez-Pascuala; Valerio Morán Pages: 47 - 64 Abstract: In this paper we present two packages, implemented in the computer algebra system Maple, for dealing with offsets and conchoids of algebraic curves, respectively. Help pages and procedures are described. In an annex, we also provide a brief atlas, created with these packages, in which the offsets and conchoids of several algebraic plane curves are obtained, their rationality is analyzed, and parametrizations are computed. Practical performance of the implemented algorithms shows that the packages execute in reasonable time; we include tables with the time cost of computing the offset and conchoid curves of two rational families of curves using the implemented packages. PubDate: 2017-05-01 DOI: 10.1007/s10472-016-9504-z Issue No:Vol. 80, No. 1 (2017)

Authors:M. Martinez; A. M. H. Abdel-Fattah; U. Krumnack; D. Gómez-Ramírez; A. Smaill; T. R. Besold; A. Pease; M. Schmidt; M. Guhe; K.-U. Kühnberger Pages: 65 - 89 Abstract: In Cognitive Science, conceptual blending has been proposed as an important cognitive mechanism that facilitates the creation of new concepts and ideas by constrained combination of available knowledge. It thereby provides a possible theoretical foundation for modeling high-level cognitive faculties such as the ability to understand, learn, and create new concepts and theories. Quite often the development of new mathematical theories and results is based on the combination of previously independent concepts, potentially even originating from distinct subareas of mathematics. Conceptual blending promises to offer a framework for modeling and re-creating this form of mathematical concept invention with computational means. This paper describes a logic-based framework which allows a formal treatment of theory blending (a subform of the general notion of conceptual blending with high relevance for applications in mathematics), discusses an interactive algorithm for blending within the framework, and provides several illustrating worked examples from mathematics. PubDate: 2017-05-01 DOI: 10.1007/s10472-016-9505-y Issue No:Vol. 80, No. 1 (2017)

Authors:Belaïd Benhamou Pages: 91 - 112 Abstract: The concept of symmetry has been extensively studied in constraint programming and propositional satisfiability. Several methods for detecting and removing these symmetries have been developed, and their use in known solvers in these domains has dramatically improved effectiveness on a wide variety of problems considered difficult to solve. The concept of symmetry can be exported to other areas in which some structure can be exploited effectively, particularly data mining, where some tasks can be expressed as constraints or logical formulas. Here, we are interested in the detection and elimination of local and global symmetries in the itemset mining problem. Recent works have provided effective Boolean constraint encodings for these data mining tasks, and some ideas on symmetry elimination in this area have begun to appear; however, they are still few, and the techniques presented usually concern global symmetry that is detected and eliminated statically in a preprocessing phase. In this work we study the notion of local symmetry and compare it to global symmetry for the itemset mining problem. We show how local symmetries of the Boolean encoding can be detected dynamically, and we give some properties that allow these symmetries to be eliminated in SAT-based itemset mining solvers in order to enhance their efficiency. PubDate: 2017-05-01 DOI: 10.1007/s10472-016-9528-4 Issue No:Vol. 80, No. 1 (2017)
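The global symmetries discussed above are permutations of variables that map the clause set of the Boolean encoding onto itself. A minimal brute-force illustration in Python (the toy CNF and permutations are made up for the example; this is not the dynamic detection algorithm of the paper):

```python
def apply_perm(clause, perm):
    """Map each DIMACS-style literal through a permutation given on positive variables."""
    return frozenset((1 if lit > 0 else -1) * perm[abs(lit)] for lit in clause)

def is_symmetry(cnf, perm):
    """True iff the permutation maps the set of clauses onto itself."""
    clauses = {frozenset(c) for c in cnf}
    return {apply_perm(c, perm) for c in clauses} == clauses

# Toy CNF: (x1 v x2)(~x1 v ~x2)(x1 v x3)(x2 v x3)
cnf = [(1, 2), (-1, -2), (1, 3), (2, 3)]
print(is_symmetry(cnf, {1: 2, 2: 1, 3: 3}))  # swapping x1 and x2 is a symmetry
print(is_symmetry(cnf, {1: 3, 2: 2, 3: 1}))  # swapping x1 and x3 is not
```

A symmetry found this way can be exploited by adding symmetry-breaking constraints or by pruning symmetric branches during SAT search.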

Authors:D. Magos; I. Mourtos Pages: 267 - 289 Abstract: Submodularity defines a general framework rich in important theoretical properties while accommodating numerous applications. Although the notion has been present in the literature of Combinatorial Optimization for several decades, it has been overlooked in the analysis of global constraints. The current work illustrates the potential of submodularity as a powerful tool for such an analysis. In particular, we show that the cumulative constraint, when all tasks are identical, has the submodular/supermodular representation property, i.e., it can be represented by a submodular/supermodular system of linear inequalities. Motivated by that representation, we show that the system of any two (global) constraints, not necessarily of the same type, each bearing the above-mentioned property, has an integral relaxation given by the conjunction of the linear inequalities representing each individual constraint. This result is obtained through the use of the celebrated polymatroid intersection theorem. PubDate: 2017-04-01 DOI: 10.1007/s10472-016-9522-x Issue No:Vol. 79, No. 4 (2017)
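For small ground sets, the defining inequality of submodularity, f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B), can be checked exhaustively. A hypothetical Python sketch (the coverage function and ground set are illustrative, not taken from the paper):

```python
from itertools import combinations

def subsets(ground):
    s = list(ground)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_submodular(f, ground):
    """Brute-force check of f(A) + f(B) >= f(A | B) + f(A & B) for all A, B."""
    subs = subsets(ground)
    return all(f(A) + f(B) >= f(A | B) + f(A & B) for A in subs for B in subs)

# Coverage (size of the union of covered elements) is a classic submodular function.
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}}
def coverage(S):
    return len(set().union(*(cover[i] for i in S))) if S else 0

print(is_submodular(coverage, {1, 2, 3}))               # True
print(is_submodular(lambda S: len(S) ** 2, {1, 2, 3}))  # False: |S|^2 is supermodular
```

The brute-force check is exponential in the ground set; the point of results like the one above is precisely to certify such properties structurally rather than by enumeration.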

Authors:Tomás de la Rosa; Raquel Fuentetaja Pages: 291 - 305 Abstract: In this paper we describe ENSEMBLE-ROLLER, a learning-based automated planner that uses a bagging approach to enhance existing techniques for learning planning policies. Previous policy-style planning and learning systems sort state successors based on action predictions from a relational classifier. However, these learning-based planners can produce plans of poor quality, since it is very difficult to encode in a single classifier all possible situations occurring in a planning domain. We propose using ensembles of relational classifiers to generate more robust policies. As in other applications of machine learning, the idea of ensembles of classifiers is to provide accuracy in particular scenarios and diversity to cover a wide range of situations. In particular, ENSEMBLE-ROLLER learns ensembles of relational decision trees for each planning domain. The control knowledge from different sets of trees is aggregated into a single prediction or applied separately in a multiple-queue search algorithm. Experimental results show that both ways of using the new policies produce, on average, plans of better quality. PubDate: 2017-04-01 DOI: 10.1007/s10472-016-9523-9 Issue No:Vol. 79, No. 4 (2017)

Authors:Jan Triska; Vilem Vychodil Pages: 307 - 335 Abstract: We study a logic for reasoning with if-then formulas describing dependencies between attributes of objects which are observed at consecutive points in time. We introduce semantic entailment of the formulas, show its fixed-point characterization, investigate closure properties of model classes, present an axiomatization and prove its completeness, and investigate alternative axiomatizations and normalized proofs. We investigate decidability and complexity issues of the logic and prove that the entailment problem is NP-hard and belongs to EXPSPACE. We show that by restricting to predictive formulas, the entailment problem is decidable in pseudo-linear time. PubDate: 2017-04-01 DOI: 10.1007/s10472-016-9526-6 Issue No:Vol. 79, No. 4 (2017)

Authors:Mikko Lauri; Aino Ropponen; Risto Ritala Pages: 337 - 370 Abstract: We consider the problem of an agent traversing a directed graph with the objective of maximizing the probability of reaching a goal node before a given deadline. Only the probability distributions of edge travel times are known to the agent. The agent must balance between traversal actions towards the goal and delays due to actions that improve information about graph edge travel times. We describe the relationship of the problem to the more general partially observable Markov decision process. Further, we show that if edge travel times are independent and the underlying directed graph is acyclic, a closed-loop solution can be computed. The solution specifies whether to execute a traversal or an information-gathering action as a function of the current node, the time remaining until the deadline, and the information about edge travel times. We present results from two case studies, quantifying the usefulness of information-gathering as opposed to applying only traversal actions. PubDate: 2017-04-01 DOI: 10.1007/s10472-016-9527-5 Issue No:Vol. 79, No. 4 (2017)
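Under the acyclicity and independence assumptions above, the traversal-only part of the problem admits a simple backward recursion over remaining time: the value of a node is the best, over its outgoing edges, of the expected downstream success probability. A hypothetical Python sketch on a toy graph (the information-gathering actions of the paper are omitted; node names and distributions are made up):

```python
from functools import lru_cache

# Toy DAG: edges[v] = [(successor, {travel_time: probability}), ...].
edges = {
    "s": [("a", {1: 0.5, 3: 0.5}), ("b", {2: 1.0})],
    "a": [("g", {1: 0.8, 4: 0.2})],
    "b": [("g", {1: 0.3, 2: 0.7})],
    "g": [],
}

@lru_cache(maxsize=None)
def p_reach(v, t):
    """Max probability of reaching goal 'g' from v with t time units remaining."""
    if t < 0:
        return 0.0           # arrived after the deadline
    if v == "g":
        return 1.0
    if not edges[v]:
        return 0.0           # dead end
    return max(sum(p * p_reach(u, t - d) for d, p in dist.items())
               for u, dist in edges[v])

print(p_reach("s", 4), p_reach("s", 2))
```

The recursion makes the closed-loop character of the solution visible: the best action at a node depends on the time remaining, so the route via "b" is preferred with a generous deadline while the route via "a" is the only useful option under a tight one.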

Authors:Piotr Wojciechowski; Pavlos Eirinakis; K. Subramani Pages: 371 - 392 Abstract: Erratum to: Ann Math Artif Intell (2017) 79:245-266 DOI 10.1007/s10472-016-9525-7 Owing to an error in the production process, the following article has been published incorrectly online. The article including the illustrations is presented hereafter. PubDate: 2017-04-01 Issue No:Vol. 79, No. 4 (2017)

Authors:Alexander Kuleshov; Alexander Bernstein Abstract: Consider an unknown smooth function that maps high-dimensional inputs to multidimensional outputs and whose domain of definition is an unknown low-dimensional input manifold embedded in an ambient high-dimensional input space. Given a training dataset consisting of ‘input-output’ pairs, the regression-on-input-manifold problem is to estimate the unknown function and its Jacobian matrix, as well as the input manifold itself. By transforming the high-dimensional inputs into their low-dimensional features, the initial regression problem is reduced to a certain regression problem on the feature space. The paper presents a new geometrically motivated method for solving both interrelated regression problems. PubDate: 2017-05-16 DOI: 10.1007/s10472-017-9551-0

Authors:Ernst Ahlberg; Oscar Hammar; Claus Bendtsen; Lars Carlsson Abstract: We present two applications of conformal prediction relevant to drug discovery. The first application concerns the interpretation of predictions, and the second the selection of compounds to progress in a drug discovery project setting. PubDate: 2017-04-28 DOI: 10.1007/s10472-017-9550-1

Authors:Lior Aronshtam; Havazelet Cohen; Tammar Shrot Abstract: This article focuses on the question of whether a certain candidate’s (player’s) chance to advance further in a tennis tournament can be increased when the ordering of the tournament can be controlled (manipulated by the organizers) according to his own preferences. Is it possible to increase the number of ranking points a player will receive? And, most importantly, can it be done in reasonable computational time? The answers to these questions differ for different settings, e.g., the information available on the outcome of each game, or the significance of the number of points gained versus the number of games won. We analyze five different variations of these tournament questions. First, the computational hardness of trying to control the tournaments is shown. Then, the tools of parametrized complexity are used to investigate the source of the problems’ hardness. Specifically, we check whether this hardness holds when the size of the problem is bounded. The findings of this analysis show that, under certain circumstances, it is possible to control the tournament in favor of a specific candidate in order to help him advance further in the tournament. PubDate: 2017-04-24 DOI: 10.1007/s10472-017-9549-7
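As a concrete illustration of the kind of control studied here, consider a 4-player single-elimination bracket with known pairwise win probabilities: the organizer's choice of seeding changes a favored player's chance of winning. A hypothetical Python brute-force sketch (the probability matrix is made up; the paper's contribution concerns the complexity of such control, not this enumeration):

```python
from itertools import permutations

# Toy pairwise win probabilities: p[i][j] = probability that player i beats j.
p = [
    [0.0, 0.9, 0.4, 0.9],
    [0.1, 0.0, 0.2, 0.6],
    [0.6, 0.8, 0.0, 0.9],
    [0.1, 0.4, 0.1, 0.0],
]

def win_prob(player, seeding):
    """Probability that `player` wins a 4-player knockout under the given seeding."""
    a, b, c, d = seeding            # semifinals: a vs b, and c vs d
    total = 0.0
    for w1 in (a, b):               # winner of the first semifinal
        p1 = p[a][b] if w1 == a else p[b][a]
        for w2 in (c, d):           # winner of the second semifinal
            p2 = p[c][d] if w2 == c else p[d][c]
            if w1 == player:
                total += p1 * p2 * p[w1][w2]
            elif w2 == player:
                total += p1 * p2 * p[w2][w1]
    return total

probs = [win_prob(0, s) for s in permutations(range(4))]
print(round(min(probs), 3), round(max(probs), 3))   # the seeding matters
```

Here player 0 is strong against everyone except player 2, so a seeding that delays that matchup to the final (where player 2 may already have been eliminated) raises player 0's winning probability relative to an early head-to-head.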

Authors:John McCabe-Dansted; Mark Reynolds Abstract: Attempts to manage reasoning about systems with fairness properties are long-running. The popular but restricted Computation Tree Logic (CTL) is amenable to automated reasoning but has difficulty expressing some fairness properties. More expressive languages such as CTL* and CTL+ are computationally complex. The main contribution of this paper is to show the usefulness and practicality of employing the bundled variants of these languages to handle fairness. In particular, we present a tableau for a bundled variant of CTL that has computational complexity similar to that of the CTL tableau and a simpler implementation. We further show that the decision problem remains in EXPTIME even if a bounded number of CTL* fairness constraints are allowed in the input formulas. By abandoning limit closure, the bundled logics can simultaneously be easier to automate and express many typical fairness constraints. PubDate: 2017-04-19 DOI: 10.1007/s10472-017-9546-x

Authors:Vladimir Vovk; Dusko Pavlovic Abstract: We construct universal prediction systems in the spirit of Popper’s falsifiability and Kolmogorov complexity and randomness. These prediction systems do not depend on any statistical assumptions (but under the IID assumption they dominate, to within the usual accuracy, conformal prediction). Our constructions give rise to a theory of algorithmic complexity and randomness of time containing analogues of several notions and results of the classical theory of Kolmogorov complexity and randomness. PubDate: 2017-04-19 DOI: 10.1007/s10472-017-9547-9

Authors:A. Zaytsev; E. Burnaev Abstract: Engineers widely use the Gaussian process regression framework to construct surrogate models that replace computationally expensive physical models while exploring a design space. Thanks to Gaussian process properties, we can use both samples generated by a high-fidelity function (an expensive and accurate representation of a physical phenomenon) and a low-fidelity function (a cheap and coarse approximation of the same physical phenomenon) while constructing a surrogate model. However, if sample sizes exceed a few thousand points, the computational costs of Gaussian process regression become prohibitive both for learning and for prediction. We propose two approaches to circumvent this computational burden: one is based on the Nyström approximation of sample covariance matrices, and the other on an intelligent use of a black box that can evaluate a low-fidelity function on the fly at any point of the design space. We examine the performance of the proposed approaches on a number of artificial and real problems, including the engineering optimization of a rotating disk shape. PubDate: 2017-04-05 DOI: 10.1007/s10472-017-9545-y
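The Nyström idea can be illustrated in a few lines of NumPy: approximate the full n×n covariance matrix from an n×m block built on m landmark points, replacing O(n³) algebra with O(nm²). A hypothetical sketch (kernel, sizes and data are illustrative, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))          # n = 200 sample points in 3-D

def rbf(A, B, ell=2.0):
    """Squared-exponential covariance between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * ell ** 2))

m = 40                                     # number of landmark points, m << n
idx = rng.choice(len(X), size=m, replace=False)
K_nm = rbf(X, X[idx])                      # n x m cross-covariance
K_mm = rbf(X[idx], X[idx])                 # m x m landmark covariance
K_nystrom = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T   # rank-m approximation of K

K_full = rbf(X, X)
rel_err = np.linalg.norm(K_full - K_nystrom) / np.linalg.norm(K_full)
print(K_nystrom.shape, rel_err)
```

Because the spectrum of a smooth kernel matrix decays quickly, a modest number of landmarks already captures most of the covariance structure, which is what makes the approximation attractive for large training sets.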

Authors:Evgeny Burnaev; Ivan Panin; Bruno Sudret Abstract: Global sensitivity analysis aims at quantifying the respective effects of input random variables (or combinations thereof) on the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, Sobol' indices have received much attention since they provide accurate information for most models. We consider the problem of selecting experimental design points for the estimation of Sobol' indices. Based on the concept of D-optimality, we propose a method for constructing an adaptive design of experiments that is effective for the calculation of Sobol' indices based on polynomial chaos expansions. We provide a set of applications that demonstrate the efficiency of the proposed approach. PubDate: 2017-03-30 DOI: 10.1007/s10472-017-9542-1
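For intuition, a first-order Sobol' index can also be estimated by plain Monte Carlo with a "pick-freeze" scheme, quite apart from the PCE-based, D-optimal approach of the paper. A hypothetical NumPy sketch on an additive toy model with a known exact answer:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def model(x1, x2):
    return 4.0 * x1 + x2        # Var(Y) = (16 + 1) / 12, so exactly S1 = 16/17

x1, x2 = rng.uniform(size=n), rng.uniform(size=n)
x2_new = rng.uniform(size=n)    # resample X2 while freezing ("picking") X1

y = model(x1, x2)
y_frozen = model(x1, x2_new)

# Pick-freeze estimator: S1 = Cov(Y, Y') / Var(Y), where Y' shares only X1 with Y.
s1 = np.cov(y, y_frozen)[0, 1] / np.var(y, ddof=1)
print(s1)
```

The estimate converges to 16/17 ≈ 0.94, confirming that X1 drives most of the output variance; the point of careful design-of-experiments methods is to reach comparable accuracy with far fewer model evaluations.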

Authors:Weifu Ding; Jiangshe Zhang Abstract: This paper presents a novel online object tracking algorithm that uses sparse representation to learn effective appearance models under a particle filtering framework. Compared with the state-of-the-art ℓ1 sparse tracker, which simply assumes that the image pixels are corrupted by independent Gaussian noise, our proposed method is based on information-theoretic learning and is much less sensitive to corruption; it achieves this by assigning small weights to occluded pixels and outliers. The most appealing aspect of this approach is that it can yield robust estimates without using the trivial templates adopted by the previous sparse tracker. By using weighted linear least squares with non-negativity constraints at each iteration, a sparse representation of the target candidate is learned; to further improve the tracking performance, target templates are dynamically updated to capture appearance changes. In our template update mechanism, the similarity between the templates and the target candidates is measured by the earth mover's distance (EMD). Using the largest open benchmark for visual tracking, we empirically compare two ensemble methods constructed from six state-of-the-art trackers against the individual trackers. The proposed tracking algorithm runs in real time and performs favorably on challenging sequences in terms of efficiency, accuracy and robustness compared with state-of-the-art algorithms. PubDate: 2017-03-23 DOI: 10.1007/s10472-017-9543-0

Authors:Federico Bergenti; Stefania Monica Abstract: This paper describes an algorithm to enforce hyper-arc consistency of polynomial constraints defined over finite domains. First, the paper describes the language of so-called polynomial constraints over finite domains and introduces a canonical form for such constraints. Then, the canonical form is used to transform the problem of testing the satisfiability of a constraint in a box into the problem of studying the sign of a related polynomial function in the same box, a problem which is effectively solved by using the modified Bernstein form of polynomials. The modified Bernstein form of polynomials is briefly discussed, and the proposed hyper-arc consistency algorithm is finally detailed. The proposed algorithm is a subdivision procedure which, starting from an initial approximation of the domains of the variables, removes values from the domains to enforce hyper-arc consistency. PubDate: 2017-03-20 DOI: 10.1007/s10472-017-9544-z
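The sign test behind such algorithms rests on a standard property of the Bernstein form: if all Bernstein coefficients of a polynomial on a box share a strict sign, the polynomial has that sign on the whole box. A hypothetical univariate Python sketch on [0, 1] (the paper itself works with the modified Bernstein form over general finite-domain boxes):

```python
from math import comb

def bernstein_coeffs(a):
    """Bernstein coefficients on [0, 1] of p(x) = sum_k a[k] * x**k."""
    n = len(a) - 1
    return [sum(comb(i, k) / comb(n, k) * a[k] for k in range(i + 1))
            for i in range(n + 1)]

# p(x) = 1 - x + x^2 is positive on [0, 1] even though its power-form
# coefficients have mixed signs; its Bernstein coefficients are all positive.
b = bernstein_coeffs([1.0, -1.0, 1.0])
print(b, all(c > 0 for c in b))
# Mixed Bernstein signs are inconclusive: a subdivision procedure then
# splits the box and recomputes coefficients on each half.
```

The first and last Bernstein coefficients always equal the polynomial's values at the box endpoints (here p(0) = p(1) = 1), which gives a quick sanity check on the conversion.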