Authors: Michael Beeson; Julien Narboux; Freek Wiedijk
Abstract: We used computer proof-checking methods to verify the correctness of our proofs of the propositions in Euclid Book I. We used axioms as close as possible to those of Euclid, in a language closely related to that used in Tarski’s formal geometry. We used proofs as close as possible to those given by Euclid, but filling Euclid’s gaps and correcting errors. Euclid Book I has 48 propositions; we proved 235 theorems. The extras were partly “Book Zero”, preliminaries of a very fundamental nature, partly propositions that Euclid omitted but were used implicitly, partly advanced theorems that we found necessary to fill Euclid’s gaps, and partly just variants of Euclid’s propositions. We wrote these proofs in a simple fragment of first-order logic corresponding to Euclid’s logic, debugged them using a custom software tool, and then checked them in the well-known and trusted proof checkers HOL Light and Coq.
PubDate: 2019-01-10 DOI: 10.1007/s10472-018-9606-x

Authors: Corinna Cortes; Spencer Greenberg; Mehryar Mohri
Abstract: We present an extensive analysis of relative deviation bounds, including detailed proofs of two-sided inequalities and their implications. We also give detailed proofs of two-sided generalization bounds that hold in the general case of unbounded loss functions, under the assumption that a moment of the loss is bounded. We then illustrate how to apply these results in a sample application: the analysis of importance weighting.
PubDate: 2019-01-08 DOI: 10.1007/s10472-018-9613-y
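The importance-weighting application mentioned in the last sentence can be illustrated with a minimal sketch: a mean under a target distribution P is estimated from samples drawn under a source distribution Q, reweighted by w(x) = p(x)/q(x). The two-point densities and the loss below are toy values chosen purely for illustration.

```python
import random

def importance_weighted_mean(samples, weight, loss):
    # Estimate E_P[loss(X)] from samples drawn under Q, via E_Q[w(X) * loss(X)].
    return sum(weight(x) * loss(x) for x in samples) / len(samples)

# Toy setting: Q is uniform on {0, 1}; the target P puts mass 0.8 on 1.
p = {0: 0.2, 1: 0.8}
q = {0: 0.5, 1: 0.5}
w = lambda x: p[x] / q[x]

random.seed(0)
samples = [random.randint(0, 1) for _ in range(100_000)]
est = importance_weighted_mean(samples, w, loss=float)
# est approximates E_P[X] = 0.8
```

The bounded-moment assumption in the abstract addresses exactly the failure mode of this estimator: when the weights w(x) make the weighted loss heavy-tailed, plain bounded-loss generalization bounds no longer apply.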

Authors: Satyanath Bhat; Shweta Jain; Sujit Gujar; Y. Narahari
Abstract: We study the problem of a buyer who gains stochastic rewards by procuring, through an auction, multiple units of a service or item from a pool of heterogeneous agents who are strategic on two dimensions, namely cost and capacity. The reward obtained for a single unit from an allocated agent depends on the inherent quality of the agent; the agent’s quality is fixed but unknown. Each agent can only supply a limited number of units (the capacity of the agent). The cost incurred per unit and the capacity (the maximum number of units that can be supplied) are private information of each agent. The auctioneer is required to elicit from the agents their costs as well as their capacities (making the mechanism design bidimensional) and, further, to learn the qualities of the agents, with a view to maximizing her utility. Motivated by this, we design a bidimensional multi-armed bandit procurement auction that seeks to maximize the expected utility of the auctioneer subject to incentive compatibility and individual rationality, while simultaneously learning the unknown qualities of the agents. We first work with the assumption that the qualities are known, and propose an optimal, truthful mechanism, 2D-OPT, for the auctioneer to elicit costs and capacities. Next, in order to learn the qualities of the agents as well, we provide sufficient conditions for a learning algorithm to be Bayesian incentive compatible and individually rational. We finally design a novel learning mechanism, 2D-UCB, that is stochastic Bayesian incentive compatible and individually rational.
PubDate: 2019-01-03 DOI: 10.1007/s10472-018-9611-0
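The quality-learning component can be illustrated with a plain UCB1 sketch. This is not the paper’s 2D-UCB mechanism, which additionally handles bids, payments and incentive constraints; the agent qualities below are hypothetical.

```python
import math
import random

def ucb_pick(counts, means, t):
    # Pick the agent with the highest upper confidence bound on quality.
    for i, n in enumerate(counts):
        if n == 0:
            return i  # sample every agent once before using the bound
    return max(range(len(counts)),
               key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))

random.seed(1)
quality = [0.3, 0.9, 0.5]   # true qualities, unknown to the auctioneer
counts = [0] * 3
means = [0.0] * 3
for t in range(1, 3001):
    i = ucb_pick(counts, means, t)
    reward = 1.0 if random.random() < quality[i] else 0.0
    means[i] = (means[i] * counts[i] + reward) / (counts[i] + 1)
    counts[i] += 1
best = max(range(3), key=lambda i: counts[i])   # most-procured agent
```

After a few thousand rounds the highest-quality agent dominates the allocation, which is the exploration/exploitation behaviour the mechanism has to reconcile with truthfulness.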

Authors: Gauri Jain; Manisha Sharma; Basant Agarwal
Abstract: As the use of the Internet increases, people are connected virtually through social media platforms such as text messages, Facebook, Twitter, etc. This has led to an increase in the spread of unsolicited messages, known as spam, which are used for marketing, collecting personal information, or simply to offend people. It is therefore crucial to have a strong spam detection architecture that can block these types of messages. Spam detection on a noisy platform such as Twitter is still a problem due to the short texts and the high variability of the language used in social media. In this paper, we propose a novel deep learning architecture based on a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network. The model is supported by introducing semantic information into the word representations with the help of knowledge bases such as WordNet and ConceptNet. The use of these knowledge bases improves performance by providing better semantic vector representations for test words which would otherwise have random values because they were not seen during training. Experimental results on two benchmark datasets show the effectiveness of the proposed approach with respect to accuracy and F1-score.
PubDate: 2019-01-02 DOI: 10.1007/s10472-018-9612-z
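The knowledge-base fallback for unseen words can be sketched in a few lines. The tiny vocabulary, vectors and `kb` mapping below are hypothetical stand-ins for WordNet/ConceptNet, and the paper’s actual model feeds such vectors into the CNN-LSTM rather than using them directly.

```python
def embed(word, vectors, kb, dim=3):
    # Known word: use its trained vector.
    if word in vectors:
        return vectors[word]
    # Unseen word: average the vectors of its knowledge-base neighbours
    # instead of assigning a random vector.
    related = [vectors[w] for w in kb.get(word, ()) if w in vectors]
    if not related:
        return [0.0] * dim
    return [sum(v[i] for v in related) / len(related) for i in range(dim)]

vectors = {"money": [1.0, 0.0, 0.0], "cash": [0.9, 0.1, 0.0]}
kb = {"moolah": ["money", "cash"]}   # "moolah" never seen in training
v = embed("moolah", vectors, kb)     # lands near the "money"/"cash" cluster
```

The point of the enrichment is visible even at this scale: the out-of-vocabulary word gets a semantically plausible vector instead of noise.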

Authors: Subhrajit Bhattacharya; Robert Ghrist
Pages: 139-160
Abstract: We consider the problem of optimal path planning in different homotopy classes in a given environment. Though important in robotics applications, path planning with reasoning about homotopy classes of trajectories has typically been restricted to subsets of the Euclidean plane in the robotics literature. The problem of finding optimal trajectories in different homotopy classes in more general configuration spaces (or even characterizing the homotopy classes of such trajectories) can be difficult. In this paper we propose automated solutions to this problem in several general classes of configuration spaces by constructing presentations of fundamental groups and giving algorithms for solving the word problem in such groups. We present explicit results that apply to knot and link complements in 3-space, discuss how to extend these results to cylindrically-deleted coordination spaces of arbitrary dimension, and also present results for the coordination space of robots navigating on a Euclidean plane.
PubDate: 2018-12-01 DOI: 10.1007/s10472-018-9596-8 Issue No: Vol. 84, No. 3-4 (2018)
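In the simplest case, a free fundamental group, the word problem reduces to free reduction: two loops are homotopic iff concatenating one with the inverse of the other reduces to the empty word. The sketch below assumes that free case (the lower-case/upper-case convention for generators and their inverses is ours); the knot- and link-complement groups treated in the paper require genuinely more machinery.

```python
def free_reduce(word):
    # Cancel adjacent inverse pairs (e.g. "aA" or "Bb") until none remain.
    stack = []
    for g in word:
        if stack and stack[-1] == g.swapcase():
            stack.pop()
        else:
            stack.append(g)
    return "".join(stack)

def inverse(word):
    # (g1 g2 ... gk)^-1 = gk^-1 ... g1^-1
    return "".join(g.swapcase() for g in reversed(word))

def same_homotopy_class(w1, w2):
    # In a free group, w1 = w2 iff w1 * w2^-1 freely reduces to the identity.
    return free_reduce(w1 + inverse(w2)) == ""
```

The stack-based reduction runs in linear time, which is why trajectory classification is cheap whenever the configuration space has a free (or otherwise easily solvable) fundamental group.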

Authors: Zuoquan Lin; Chen Chen
Pages: 185-200
Abstract: Nonmonotonic reasoning suffers from the problems of inconsistency and incoherence in general; in default reasoning in particular, there may be only one trivial extension or no extension at all. We propose a restricted semantics of four-valued logic for default reasoning that resolves the problems of inconsistency and incoherence while retaining classical extensions in the presence of consistency and coherency. The restricted semantics maintains both the expressiveness and the reasoning ability of default logic. We provide a transformation approach to compute the restricted extensions by reducing them to classical ones.
PubDate: 2018-12-01 DOI: 10.1007/s10472-018-9592-z Issue No: Vol. 84, No. 3-4 (2018)

Authors: Jonathan Pieper
Pages: 17-33
Abstract: This paper describes the team BusyBeaver, which participated in and won the Multi-Agent Programming Contest 2017. Its strategy is based on dividing the agents into three static groups modeling the work chain of buying, assembling and delivering items. The team is coordinated by a centralized agent that does most of the high-level planning, usually using greedy algorithms and specialized heuristics. There is a heavy focus on proactively buying and assembling some items in order to quickly complete upcoming jobs.
PubDate: 2018-10-01 DOI: 10.1007/s10472-018-9589-7 Issue No: Vol. 84, No. 1-2 (2018)

Authors: Evangelos I. Sarmas
Pages: 35-56
Abstract: This paper presents the workings of the Flisvos-2017 multi-agent system that participated in the Multi-Agent Programming Contest MAPC 2017 of Clausthal TU.
PubDate: 2018-10-01 DOI: 10.1007/s10472-018-9587-9 Issue No: Vol. 84, No. 1-2 (2018)

Authors: Jørgen Villadsen; Oliver Fleckenstein; Helge Hatteland; John Bruntse Larsen
Pages: 57-74
Abstract: This paper presents the overall strategy used by Jason-DTU to achieve a shared second place in the annual Multi-Agent Programming Contest. It provides an overview of the implementation details concerning perception, task handling, agent logic and more. The paper analyzes the team’s results in each match and concludes by evaluating the strengths and weaknesses of the proposed multi-agent system.
PubDate: 2018-10-01 DOI: 10.1007/s10472-018-9588-8 Issue No: Vol. 84, No. 1-2 (2018)

Authors: Rafael C. Cardoso; Tabajara Krausburg; Túlio Baségio; Débora C. Engelmann; Jomi F. Hübner; Rafael H. Bordini
Pages: 75-93
Abstract: The Multi-Agent Programming Contest in 2017 expanded upon the Agents in the City scenario used in the 2016 edition of the contest. In the Agents in the City contest, two teams compete to accomplish logistic tasks in simulations using realistic city maps from OpenStreetMap. The new version of the scenario shifted emphasis to include jobs that require assembled items, new types of facilities, and larger teams, resulting in a significantly more complex scenario than before. In this paper, we describe the strategies used by our team, highlighting our adaptations and any new additions since our participation in the previous year. One such addition is that we have now fully explored the use of all three programming dimensions (agent, environment, and organization) found in JaCaMo, the multi-agent system development platform that we used to implement our team. We also provide some discussion and analysis of what we believe were some of our most influential matches.
PubDate: 2018-10-01 DOI: 10.1007/s10472-018-9584-z Issue No: Vol. 84, No. 1-2 (2018)

Authors: Philipp Czerner; Jonathan Pieper
Pages: 95-115
Abstract: In this paper we describe our participation in the 2017 edition of the Multi-Agent Programming Contest as team ‘lampe’. Our strategy was implemented in C++; it uses a centralised organisation of agents and evaluates different strategies based on an internal simulation of the future game state. Strategies are generated using handwritten heuristics in a randomised fashion, also relying on the internal simulation.
PubDate: 2018-10-01 DOI: 10.1007/s10472-018-9581-2 Issue No: Vol. 84, No. 1-2 (2018)

Authors: Christopher-Eyk Hrabia; Patrick Marvin Lehmann; Nabil Battjbuer; Axel Hessler; Sahin Albayrak
Pages: 117-138
Abstract: In the Multi-Agent Programming Contest 2017, the TUBDAI team of the Technische Universität Berlin used the complex multi-agent scenario to evaluate the application of two frameworks from the field of (multi-)robot systems. In particular, the task-level decision-making and planning framework ROS Hybrid Behaviour Planner (RHBP) is used to implement the execution and decision-making of the individual agents. The RHBP framework builds on top of the Robot Operating System (ROS) framework, which is used to implement the communication and the scenario-specific coordination strategy of the agents. The team for the official contest was formed by volunteering students from a project course and their supervisors.
PubDate: 2018-10-01 DOI: 10.1007/s10472-018-9586-x Issue No: Vol. 84, No. 1-2 (2018)

Authors: Tony Veale
Abstract: Text is an especially malleable medium for human and machine creativity. When guided by the appropriate symbolic and/or statistical models, even a small and seemingly superficial change at the formal level can result in a predictable yet profound change at the semantic and pragmatic level. Text is also a virtually unlimited resource on the web, which offers abundant, free-flowing channels of topical texts for almost every genre and register. In this paper we consider diverse approaches to transforming these input channels into new and creative streams of machine-generated outputs. We focus on the specific kind of linguistic creativity associated with metaphor, yet also demonstrate that divergent approaches to metaphor generation can, in turn, enable divergent uses and applications for machine creativity.
PubDate: 2018-12-18 DOI: 10.1007/s10472-018-9614-x

Authors: Anton Nijholt
Abstract: Eighteenth- and nineteenth-century philosophers took an interest in humour and, in particular, in humorous incongruities. Humour was not necessarily their main interest; however, observations on humour could support their more general philosophical theories. Spontaneous and unintentional humour, such as anecdotes, witty remarks and absurd events, were the styles of humour that they analysed and made part of their theories. Prepared humour, such as verbal jokes, was rarely included in their observations, likely dismissed as too vulgar and as not requiring intellectual effort. Humour, as analysed by several eighteenth- and nineteenth-century philosophers, was seen as part of daily life or of life simulated on stage. In the twentieth century, Freud emphasized a possible ‘relief’ function of ‘prepared’ humour such as jokes. Additionally, linguists began developing theories to analyse jokes. A joke has a particular structure that is constructed with the aim of achieving a humorous effect; this structure makes jokes suitable for linguistic analysis. In present-day humour research, jokes have become a main topic of study. This linguistically oriented joke research neglects many other forms of humour: spontaneous humour, non-verbal humour, physical humour, and many forms of unintentional humour that appear in real life. We survey and re-evaluate the contributions of these eighteenth-, nineteenth- and early twentieth-century philosophers to humour research, and argue that their more general contributions have been neglected in favour of the very restricted form of prepared, linguistically expressed and analysed humour as it appears in jokes. We hope that the views expressed in this paper will help to steer humour research away from joke research and help to integrate humour into the design of human-computer interfaces and smart environments. That is, rather than considering only verbal jokes, we should aim at generating smart environments that understand, facilitate or create humour that goes beyond jokes.
PubDate: 2018-12-12 DOI: 10.1007/s10472-018-9609-7

Authors: Johann A. Makowsky
Abstract: We survey the status of decidability of the first order consequences in various axiomatizations of Hilbert-style Euclidean geometry. We draw attention to a widely overlooked result by Martin Ziegler from 1980, which proves Tarski’s conjecture on the undecidability of finitely axiomatizable theories of fields. We elaborate on how to use Ziegler’s theorem to show that the consequence relations for the first order theory of the Hilbert plane and the Euclidean plane are undecidable. As new results we add: (A) The first order consequence relations for Wu’s orthogonal and metric geometries (Wen-Tsün Wu, 1984), and for the axiomatization of Origami geometry (J. Justin 1986, H. Huzita 1991) are undecidable. It was already known that the universal theory of Hilbert planes and Wu’s orthogonal geometry is decidable. We show here using elementary model theoretic tools that (B) the universal first order consequences of any geometric theory T of Pappian planes which is consistent with the analytic geometry of the reals is decidable. The techniques used were all known to experts in mathematical logic and geometry in the past, but no detailed proofs are easily accessible for practitioners of symbolic computation or automated theorem proving.
PubDate: 2018-12-12 DOI: 10.1007/s10472-018-9610-1

Authors: Kayla Jacobs; Alon Itai; Shuly Wintner
Abstract: Acronyms—words formed from the initial letters of a phrase—are important for various natural language processing applications, including information retrieval and machine translation. While hand-crafted acronym dictionaries exist, they are limited and require frequent updates. We present a new machine-learning-based approach to automatically build an acronym dictionary from unannotated texts. This is the first such technique that specifically handles non-local acronyms, i.e., that can determine an acronym’s expansion even when the expansion does not appear in the same document as the acronym. Our approach automatically enhances the dictionary with contextual information to help address the acronym disambiguation task (selecting the most appropriate expansion for a given acronym in context), outperforming dictionaries built using prior techniques. We apply the approach to Modern Hebrew, a language with a long tradition of using acronyms, in which the productive morphology and unique orthography add to the complexity of the problem.
PubDate: 2018-12-06 DOI: 10.1007/s10472-018-9608-8
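The baseline matching criterion, the initial letters of an expansion’s content words spelling the acronym, can be sketched as below. This is a deliberate simplification: the paper’s learned model handles non-local expansions, contextual disambiguation and Hebrew-specific morphology, none of which this toy English check captures.

```python
STOPWORDS = {"of", "the", "and", "for", "in"}   # illustrative English list

def is_expansion(acronym, phrase):
    # Take the first letter of each non-stopword and compare to the acronym.
    initials = "".join(w[0] for w in phrase.lower().split()
                       if w not in STOPWORDS)
    return initials == acronym.lower()
```

Even in English, skipping function words is necessary ("United States of America" must still match "USA"), and that kind of language-specific filtering is precisely what becomes hard in Hebrew.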

Authors: Ilias Kotsireas; Temur Kutsia; Dimitris E. Simos
Abstract: In the past few decades, design theory has grown to encompass a wide variety of research directions. It comes as no surprise that applications in coding theory and communications continue to arise, and also that designs have found applications in new areas. Computer science has provided a new source of applications of designs, and simultaneously a field of new and challenging problems in design theory. In this paper, we revisit a construction for orthogonal designs using the multiplication tables of Cayley-Dixon algebras of dimension 2^n. The desired orthogonal designs can be described by a system of equations with the aid of a Gröbner basis computation. For orders greater than 16, the combinatorial explosion of the problem gives rise to equations that are infeasible for traditional search algorithms to handle. However, the structural properties of the designs make it possible to tackle this problem with rewriting techniques, in terms of equational unification. We establish connections between central concepts of design theory and equational unification, where equivalence operations of designs correspond to the computation of a minimal complete set of unifiers. These connections make viable the computation of some types of orthogonal designs that have not been found before with the aforementioned algebraic modeling.
PubDate: 2018-11-29 DOI: 10.1007/s10472-018-9607-9

Authors: Sreelekha S.
Abstract: Most tasks which require intelligent behavior involve some degree of uncertainty. Uncertainty may arise for several reasons, such as incomplete domain knowledge, unreliable or ambiguous data due to measurement errors, or inconsistent data representation. Most knowledge-based systems require the incorporation of some form of uncertainty management in order to handle this kind of indeterminacy. In this paper, we present one such method to handle uncertainty in neurules, a neuro-symbolic integration concept. Neuro-computing is used within the symbolic framework to improve the performance of symbolic rules. Uncertainty, that is, the personal degree of belief that an uncertain event may occur, is managed by computing composite belief values from incomplete or conflicting data. With the implementation of uncertainty management in neurules, the accuracy of the inference mechanism and the generalization performance can be improved.
PubDate: 2018-11-01 DOI: 10.1007/s10472-018-9605-y
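As a concrete illustration of combining belief degrees from multiple pieces of evidence, here is the classical MYCIN-style certainty-factor combination rule. It is a generic stand-in chosen for illustration only, not the paper’s composite-belief computation for neurules.

```python
def combine_cf(cf1, cf2):
    # Combine two certainty factors in [-1, 1] bearing on the same hypothesis.
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)              # reinforcing positive evidence
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)              # reinforcing negative evidence
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))  # conflicting evidence
```

Note the characteristic behaviour such schemes share with the composite belief values of the abstract: agreeing evidence strengthens the belief without ever exceeding 1, while conflicting evidence partially cancels.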

Authors: David Braun; Nicolas Magaud; Pascal Schreck
Abstract: Incidence geometry is a well-established theory which captures the very basic properties of all geometries in terms of points belonging to lines, planes, etc. Moreover, projective incidence geometry leads to a simple framework where many properties can be studied. In this article, we consider two very different but complementary mathematical approaches formalizing this theory within the Coq proof assistant. The first one consists of the usual and synthetic geometric axiom system often encountered in the literature. The second one is more original and relies on combinatorial aspects through the notion of rank which is based on the matroid structure of incidence geometry. This paper mainly contributes to the field by proving the equivalence between these two approaches in both 2D and 3D. This result allows us to study the further automation of many proofs of projective geometry theorems. We give an overview of techniques that will be heavily used in the equivalence proof and are generic enough to be reused later in yet-to-be-written proofs. Finally, we discuss the possibilities of future automation that can be envisaged using the rank notion.
PubDate: 2018-10-24 DOI: 10.1007/s10472-018-9604-z

Authors: Dolores Barrios Rolanía; Guillermo Delgado Martínez; Daniel Manrique
Abstract: This article presents an evolutionary algorithm to autonomously construct fully connected multilayered feedforward neural architectures. The algorithm employs grammar-guided genetic programming with a context-free grammar that has been specifically designed to satisfy three important restrictions. First, the sentences that belong to the language produced by the grammar encode all and only the valid neural architectures. Second, fully connected feedforward neural architectures of any size can be generated. Third, smaller neural architectures are favored to avoid overfitting. The proposed evolutionary neural architecture construction system is applied to compute the terms of the two sequences that define the three-term recurrence relation associated with a sequence of orthogonal polynomials. This application imposes an important constraint: training datasets are always very small. Therefore, an adequately sized neural architecture has to be evolved to achieve satisfactory results, which are presented in terms of the accuracy and size of the evolved neural architectures, and the convergence speed of the evolutionary process.
PubDate: 2018-09-18 DOI: 10.1007/s10472-018-9601-2
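The three-term recurrence in question, in monic form p_{k+1}(x) = (x - a_k) p_k(x) - b_k p_{k-1}(x) with p_{-1} = 0 and p_0 = 1, can be evaluated directly once the coefficient sequences a_k and b_k are known; those two sequences are what the evolved networks approximate. The monic Chebyshev coefficients below are one concrete instance, chosen for illustration.

```python
def eval_recurrence(a, b, n, x):
    # p_{-1} = 0, p_0 = 1, p_{k+1}(x) = (x - a(k)) * p_k(x) - b(k) * p_{k-1}(x)
    p_prev, p = 0.0, 1.0
    for k in range(n):
        p_prev, p = p, (x - a(k)) * p - b(k) * p_prev
    return p

# Monic Chebyshev polynomials of the first kind: a_k = 0; b_1 = 1/2, b_k = 1/4 otherwise.
a = lambda k: 0.0
b = lambda k: 0.5 if k == 1 else 0.25
```

For these coefficients the monic cubic is p_3(x) = x^3 - (3/4) x, i.e. T_3(x)/4, so p_3(0.5) = -0.25.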