Authors:Rafael C. Cardoso; Tabajara Krausburg; Túlio Baségio; Débora C. Engelmann; Jomi F. Hübner; Rafael H. Bordini Abstract: The Multi-Agent Programming Contest in 2017 expanded upon the Agents in the City scenario used in the 2016 edition of the contest. In the Agents in the City contest, two teams compete to accomplish logistic tasks in simulations using realistic city maps from OpenStreetMap. The new version of the scenario shifted emphasis to include jobs that require assembled items, new types of facilities, and larger teams, resulting in a significantly more complex scenario than before. In this paper, we describe the strategies used by our team, highlighting our adaptations and any new additions from our participation in the previous year. One such new addition is that we have now fully explored the use of all three programming dimensions (agent, environment, and organization) found in JaCaMo, the multi-agent system development platform that we used to implement our team. We also provide some discussion and analysis on what we believe were some of our most influential matches. PubDate: 2018-04-30 DOI: 10.1007/s10472-018-9584-z

Authors:Daniel Berend; Amnon Meisels; Or Peri Abstract: Distributed Constraint Optimization Problems (DCOPs) are widely used in Multi-Agent Systems for coordination and scheduling. The present paper proposes a heuristic algorithm that uses probabilistic assessment of the optimal solution in order to quickly find a solution that is not far from the optimal one. The heuristic assessment uses two passes by the agents to produce a high-quality solution. Extensive performance evaluation demonstrates that the solution of the proposed probabilistic assessment algorithm is indeed very close to the optimum, on smaller problems where this could be measured. In larger set-ups, the quality of the solution is demonstrated relative to standard incomplete search algorithms. PubDate: 2018-04-26 DOI: 10.1007/s10472-018-9582-1

Authors:Philipp Czerner; Jonathan Pieper Abstract: In this paper we describe our participation in the 2017 edition of the Multi-Agent Programming Contest as team ‘lampe’. Our strategy was implemented in C++; it uses a centralised organisation of agents and evaluates different strategies based on an internal simulation of the future game state. Strategies are generated using handwritten heuristics in a randomised fashion, also relying on the internal simulation. PubDate: 2018-04-10 DOI: 10.1007/s10472-018-9581-2

Authors:Régis Riveret; Pietro Baroni; Yang Gao; Guido Governatori; Antonino Rotolo; Giovanni Sartor Abstract: The combination of argumentation and probability paves the way to new accounts of qualitative and quantitative uncertainty, thereby offering new theoretical and application opportunities. Due to a variety of interests, probabilistic argumentation is approached in the literature with different frameworks, pertaining to structured and abstract argumentation, and with respect to diverse types of uncertainty, in particular uncertainty about the credibility of the premises, about which arguments to consider, and about the acceptance status of arguments or statements. Towards a general framework for probabilistic argumentation, we investigate a labelling-oriented framework encompassing a basic setting for rule-based argumentation and its (semi-)abstract account, along with diverse types of uncertainty. Our framework provides a systematic treatment of various kinds of uncertainty and of their relationships, and allows us to back or question assertions from the literature. PubDate: 2018-03-20 DOI: 10.1007/s10472-018-9574-1
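The "uncertainty about which arguments to consider" mentioned above is often handled by sampling subframeworks in which each argument is independently present with some probability. The sketch below is only an illustration of that idea, not the labelling-oriented framework of the paper; the argument names and probabilities are invented for the example, and grounded semantics stands in for the paper's more general labellings.

```python
import random

def grounded_in(args, attacks):
    """Grounded extension of an abstract argumentation framework: repeatedly
    accept every argument whose attackers are all already defeated; arguments
    attacked by an accepted argument become defeated."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in sorted(args):
            if a in accepted or a in defeated:
                continue
            attackers = {b for (b, c) in attacks if c == a}
            if attackers <= defeated:          # all attackers defeated -> accept
                accepted.add(a)
                defeated |= {c for (b, c) in attacks if b == a}
                changed = True
    return accepted

def acceptance_probability(args, attacks, probs, query, n=10000, seed=0):
    """Monte-Carlo estimate of the probability that `query` is in the grounded
    extension, when each argument is present independently with probability
    probs[argument]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        present = {a for a in sorted(args) if rng.random() < probs[a]}
        sub = {(b, c) for (b, c) in attacks if b in present and c in present}
        hits += query in grounded_in(present, sub)
    return hits / n
```

For instance, if argument `b` attacks `a`, `a` is always present and `b` is present half the time, `a` should be accepted with probability about one half.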

Authors:S. D. Prestwich; S. A. Tarim; I. Ozkan Abstract: Probabilistic methods for causal discovery are based on the detection of patterns of correlation between variables. They are based on statistical theory and have revolutionised the study of causality. However, when correlation itself is unreliable, so are probabilistic methods: unusual data can lead to spurious causal links, while nonmonotonic functional relationships between variables can prevent the detection of causal links. We describe a new heuristic method for inferring causality between two continuous variables, based on randomness and unimodality tests and making few assumptions about the data. We evaluate the method against probabilistic and additive noise algorithms on real and artificial datasets, and show that it performs competitively. PubDate: 2018-03-14 DOI: 10.1007/s10472-018-9575-0

Authors:Luís Cruz-Filipe; Graça Gaspar; Isabel Nunes; Peter Schneider-Kamp Abstract: In the database world, integrity constraints are essential to guarantee database integrity. The related problem of database repair deals with finding the best way to change a database so that it satisfies its integrity constraints. These two topics have been studied intensively since the 1980s. The formalism of active integrity constraints, proposed in 2004, aims at addressing them jointly, by providing a syntax whereby a particular subclass of integrity constraints can be specified together with preferred ways to repair inconsistency. In the last decade, several authors have proposed adaptations of the notion of integrity constraints to reasoning frameworks other than relational databases. In this article, we extend this line of work in two ways. First, we target multi-context systems, a general-purpose framework for combining heterogeneous reasoning systems, able to model most other reasoning frameworks, as we demonstrate. Second, we extend the notions of active integrity constraints and grounded repairs to this generalized setting. This way of including repair suggestions inside integrity constraints, subject to a validity check, enables us to define simple iterative algorithms to find all possible grounded repairs for an inconsistent multi-context system, avoiding the need to solve complex or undecidable problems. PubDate: 2018-03-14 DOI: 10.1007/s10472-018-9577-y

Authors:Ryan Berryhill; Andreas Veneris Abstract: In the modern hardware design cycle, correcting the design when verification reveals a state to be erroneously unreachable can be a time-consuming manual process. Recently-developed algorithms aid the engineer in finding the root cause of the failure in these cases. However, they exhaustively examine every design location to determine a set of possible root causes, potentially requiring substantial runtime. This work develops a novel approach that is applicable to practical diagnosis problems. In contrast to previous approaches, it considers only a portion of the design locations but still finds the complete solution set to the problem. The presented approach proceeds through a series of iterations, each considering a strategically-chosen subset of the design locations (a suspect set) to determine if they are root causes. The results of each iteration inform the choice of suspect set for the next iteration. By choosing the first iteration’s suspect set appropriately, the algorithm is able to find the complete solution set to the problem. Empirical results on industrial designs and standard benchmark designs demonstrate a 15x speedup compared to the previous approach, while considering only 18.7% of the design locations as suspects. PubDate: 2018-03-13 DOI: 10.1007/s10472-018-9578-x

Authors:Jan Triska; Vilem Vychodil Abstract: We deal with dependencies in object-attribute data recorded at separate points in time. The data is formalized by finitely many tables encoding the relationship between objects and attributes, and each table can be seen as a single formal context observed at a separate point in time. Given such data, we are interested in concise ways of characterizing all if-then dependencies between attributes that hold in the data and are preserved in all time points. In order to formalize the dependencies, we use particular if-then rules called attribute implications annotated by time points, which can be seen as particular formulas of linear temporal logic. We introduce non-redundant bases of dependencies from data as non-redundant sets entailing exactly all dependencies that hold in the data. In addition, we investigate minimality of bases as a stronger form of non-redundancy. For given data, we present a description of minimal bases using the notion of pseudo-intents generalized to the temporal setting. The present paper is a continuation of our previous paper on entailment of attribute implications annotated by time points. PubDate: 2018-03-06 DOI: 10.1007/s10472-018-9576-z
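The core requirement above, an attribute implication that holds in the data and is preserved in all time points, can be illustrated with a small check. This is a simplification of the paper's annotated implications, assuming each time-stamped context is just a dict from objects to attribute sets:

```python
def holds_everywhere(antecedent, consequent, contexts):
    """True iff the implication antecedent => consequent holds in every
    time-stamped formal context: each object that has all antecedent
    attributes also has all consequent attributes, at every time point."""
    A, B = set(antecedent), set(consequent)
    return all(
        B <= attrs
        for table in contexts          # one formal context per time point
        for attrs in table.values()    # attribute set of each object
        if A <= attrs
    )
```

For example, {a} => {b} holds in data where every object with attribute a also has b at every observed time point, and fails as soon as one time point contains a counterexample object.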

Authors:Veronica Dahl; Sergio Tessaris; Mariano De Sousa Bispo Abstract: In the Parsing-as-Constraint-Solving model of language processing, grammar syntax is described modularly through independent constraints among direct constituents of a phrase - constraints such as: “in verb phrases, a verb must precede its complements”, or “in noun phrases, a noun requires a determiner”. Parsing reduces to verifying the constraints relevant to an input phrase, but instead of the typical hierarchical (i.e., parse tree) representations of a successful parse (and also typical complete silence upon unsuccessful parses), the main result is a list of satisfied constraints, and if the input is not totally conforming, also a list of unsatisfied constraints. The latter can serve various purposes beyond plain parsing, such as guiding the correction of any imperfections found in the input, and we can still construct a parse tree if needed, as a side effect. While almost purely syntax-based, the Parsing-as-Constraint-Solving model lends itself well to accommodating interactions with other levels of analysis. These however have been little explored. In this position paper we discuss how to extend this model to incorporate semantic information, in particular from ontologies, and with particular guidance from unsatisfied constraints. This departs from more typical constraint-solving schemes, where failed constraints are simply listed and do not actively contribute to the parse. By giving failed constraints a more active role, we can arrive at more precise analyses and at more appropriate corrections of flawed input. Because even not totally conforming sentences can be more precisely parsed, we gain in expressivity with respect to both the classical, strictly stratified approach to NLP, and the less precise and less reliable statistically-based methods. PubDate: 2018-02-22 DOI: 10.1007/s10472-018-9573-2

Authors:Mehrnoosh Sadrzadeh; Dimitri Kartsaklis; Esma Balkır Abstract: Distributional semantic models provide vector representations for words by gathering co-occurrence frequencies from corpora of text. Compositional distributional models extend these from words to phrases and sentences. In categorical compositional distributional semantics, phrase and sentence representations are functions of their grammatical structure and representations of the words therein. In this setting, grammatical structures are formalised by morphisms of a compact closed category and meanings of words are formalised by objects of the same category. These can be instantiated in the form of vectors or density matrices. This paper concerns the applications of this model to phrase and sentence level entailment. We argue that entropy-based distances of vectors and density matrices provide a good candidate to measure word-level entailment, show the advantage of density matrices over vectors for word level entailments, and prove that these distances extend compositionally from words to phrases and sentences. We exemplify our theoretical constructions on real data and a toy entailment dataset and provide preliminary experimental evidence. PubDate: 2018-02-15 DOI: 10.1007/s10472-017-9570-x
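The entropy-based comparison of density matrices mentioned above can be sketched with the standard von Neumann entropy and quantum relative entropy. This is a generic illustration of those quantities for real symmetric density matrices, not the paper's exact entailment measure:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log rho), computed from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                     # 0 log 0 = 0 by convention
    return float(-(w * np.log(w)).sum())

def relative_entropy(rho, sigma):
    """Quantum relative entropy S(rho || sigma) = Tr rho (log rho - log sigma),
    for real symmetric density matrices; sigma is assumed to have full support."""
    wr, vr = np.linalg.eigh(rho)
    ws, vs = np.linalg.eigh(sigma)
    log_rho = vr @ np.diag(np.log(np.clip(wr, 1e-12, None))) @ vr.T
    log_sigma = vs @ np.diag(np.log(ws)) @ vs.T
    return float(np.trace(rho @ (log_rho - log_sigma)).real)
```

A pure state has zero entropy, the maximally mixed state on two dimensions has entropy ln 2, and the relative entropy of a state with itself is zero; such asymmetric distances are what makes entropy-based measures natural candidates for the directional relation of entailment.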

Authors:Christoph Beierle; Christian Eichhorn; Gabriele Kern-Isberner; Steven Kutsch Abstract: While the axiomatic system P is an important standard for plausible, nonmonotonic inferences from conditional knowledge bases, it is known to be too weak to solve benchmark problems like Irrelevance or Subclass Inheritance. Ordinal conditional functions provide a semantic base for system P and have often been used to design stronger inference relations, like Pearl’s system Z, or c-representations. While every c-representation shows excellent inference properties and, in particular, handles Irrelevance and Subclass Inheritance properly, it is still an open problem which c-representation is the best. In this paper, we consider the skeptical inference relation, called c-inference, that is obtained by taking all c-representations of a given knowledge base into account. We study properties of c-inference and show in particular that it preserves the properties of solving Irrelevance and Subclass Inheritance. Based on a characterization of c-representations as solutions of a Constraint Satisfaction Problem (CSP), we also model skeptical c-inference as a CSP and prove soundness and completeness of the modelling, ensuring that constraint solvers can be used for implementing c-inference. PubDate: 2018-02-01 DOI: 10.1007/s10472-017-9571-9
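Skeptical c-inference can be illustrated by brute force on a toy knowledge base. The sketch below uses the classic penguin example: it enumerates bounded integer impact vectors, keeps those whose induced ranking function accepts every conditional (the defining property of a c-representation), and checks whether a query conditional is accepted by all of them. The bounded enumeration is an illustration only, not the CSP modelling of the paper:

```python
from itertools import product

# Worlds over (p, b, f): penguin, bird, flies.
WORLDS = list(product([0, 1], repeat=3))

# Knowledge base as (antecedent, consequent) predicates over a world:
KB = [
    (lambda w: w[1], lambda w: w[2]),        # (f | b): birds fly
    (lambda w: w[0], lambda w: w[1]),        # (b | p): penguins are birds
    (lambda w: w[0], lambda w: not w[2]),    # (-f | p): penguins do not fly
]

def kappa(eta, w):
    """Rank of world w: sum of impacts of the conditionals w falsifies."""
    return sum(e for e, (a, c) in zip(eta, KB) if a(w) and not c(w))

def accepts(eta, antecedent, consequent):
    """A conditional is accepted iff its best verifying world is strictly
    more plausible (lower rank) than its best falsifying world."""
    ver = min(kappa(eta, w) for w in WORLDS if antecedent(w) and consequent(w))
    fal = min(kappa(eta, w) for w in WORLDS if antecedent(w) and not consequent(w))
    return ver < fal

def skeptical(antecedent, consequent, bound=4):
    """Does every enumerated c-representation (impacts in 0..bound-1)
    accept the query conditional?"""
    found = False
    for eta in product(range(bound), repeat=len(KB)):
        if all(accepts(eta, a, c) for a, c in KB):   # eta is a c-representation
            found = True
            if not accepts(eta, antecedent, consequent):
                return False
    return found
```

On this knowledge base, skeptical c-inference concludes that penguin-birds do not fly (Subclass Inheritance), and accordingly rejects the opposite conclusion.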

Authors:Arnaud Durand; Miika Hannula; Juha Kontinen; Arne Meier; Jonni Virtema Abstract: We define a variant of team semantics called multiteam semantics based on multisets and study the properties of various logics in this framework. In particular, we define natural probabilistic versions of inclusion and independence atoms and certain approximation operators motivated by approximate dependence atoms of Väänänen. PubDate: 2018-01-18 DOI: 10.1007/s10472-017-9568-4

Authors:Imen Zghidi; Brahim Hnich; Abdelwahad Rebaii Abstract: In this paper, we propose a novel notion of statistical consistency for single-stage Stochastic Constraint Satisfaction Problems (SCSPs) in which some of the random variables have an infinite support set. The essence of this novel notion of local consistency is the ability to make inferences in the presence of infinitely many scenarios in an uncertain environment. This inference is based on a restricted finite subset of scenarios with a certain confidence level and a threshold tolerance error. The confidence level is the probability that characterizes the extent to which our inference — based on a subset of scenarios — is correct. The threshold tolerance error is the error range that we can tolerate while making such an inference. We propose a novel statistical consistency enforcing algorithm based on sound statistical inference, and we experimentally show how to prune inconsistent values in the presence of an infinite set of scenarios. PubDate: 2018-01-11 DOI: 10.1007/s10472-018-9572-3
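A standard way to size the "restricted finite subset of scenarios" for a given confidence level and tolerance error is a Hoeffding-style sample bound. The following is a generic sketch of that idea, not the paper's consistency enforcing algorithm; the constraint and scenario generator in the usage example are invented for illustration:

```python
import math
import random

def required_samples(epsilon, delta):
    """Hoeffding bound: number of i.i.d. scenario samples needed so that the
    empirical satisfaction probability is within epsilon of the true one
    with confidence at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

def estimate_satisfaction(value, constraint, sample_scenario, epsilon, delta, seed=0):
    """Monte-Carlo estimate of P(constraint holds for `value`), drawing just
    enough scenarios for the requested tolerance and confidence."""
    rng = random.Random(seed)
    n = required_samples(epsilon, delta)
    hits = sum(constraint(value, sample_scenario(rng)) for _ in range(n))
    return hits / n
```

With a tolerance of 0.05 and confidence 0.95, the bound asks for 738 scenarios; a value whose estimated satisfaction probability plus the tolerance still falls below the required threshold could then be pruned with the stated confidence.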

Authors:Manuele Leonelli; Eva Riccomagno; Jim Q. Smith Pages: 273 - 313 Abstract: Influence diagrams provide a compact graphical representation of decision problems. Several algorithms for the quick computation of their associated expected utilities are available in the literature. However, often they rely on a full quantification of both probabilistic uncertainties and utility values. For problems where all random variables and decision spaces are finite and discrete, here we develop a symbolic way to calculate the expected utilities of influence diagrams that does not require a full numerical representation. Within this approach expected utilities correspond to families of polynomials. After characterizing their polynomial structure, we develop an efficient symbolic algorithm for the propagation of expected utilities through the diagram and provide an implementation of this algorithm using a computer algebra system. We then characterize many of the standard manipulations of influence diagrams as transformations of polynomials. We also generalize the decision analytic framework of these diagrams by defining asymmetries as operations over the expected utility polynomials. PubDate: 2017-12-01 DOI: 10.1007/s10472-017-9553-y Issue No:Vol. 81, No. 3-4 (2017)
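The correspondence between expected utilities and polynomials can be illustrated for a minimal diagram with one decision node and one chance node. The monomial-dictionary representation below is a drastic simplification of the paper's symbolic framework, meant only to show that the expected utility is a polynomial in probability and utility symbols:

```python
def symbolic_eu(decision, outcomes):
    """Expected utility of a one-decision, one-chance-node influence diagram
    as a formal polynomial: a map from monomials (a probability symbol paired
    with a utility symbol) to integer coefficients."""
    return {(f"p({x}|{decision})", f"U({decision},{x})"): 1 for x in outcomes}

def evaluate(poly, values):
    """Substitute numbers for the symbols to recover the numeric expected utility."""
    return sum(c * values[p] * values[u] for (p, u), c in poly.items())
```

For instance, with outcomes 0 and 1 the polynomial for decision a is p(0|a)·U(a,0) + p(1|a)·U(a,1); a full quantification is only needed at evaluation time, which is the point of the symbolic approach.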

Authors:C. E. Larson; N. Van Cleemput Pages: 315 - 327 Abstract: Discovery in mathematics is a prototypical intelligent behavior, and an early and continuing goal of artificial intelligence research. We present a heuristic for producing mathematical conjectures of a certain typical form and demonstrate its utility. Our program conjectures relations that hold between properties of objects (property-relation conjectures). These objects can be of a wide variety of types. The statements are true for all objects known to the program, and are the simplest statements which are true of all these objects. The examples here include new conjectures for the hamiltonicity of a graph, a well-studied property of graphs. While our motivation and experiments have been to produce mathematical conjectures—and to contribute to mathematical research—other kinds of interesting property-relation conjectures can be imagined, and this research may be more generally applicable to the development of intelligent machinery. PubDate: 2017-12-01 DOI: 10.1007/s10472-017-9559-5 Issue No:Vol. 81, No. 3-4 (2017)
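The property-relation pattern described above (the simplest statements true of all objects known to the program) can be sketched for the special case of inequality conjectures between numeric invariants; the actual program searches a far richer space of expressions and selects the simplest, so this is only an illustration of the pattern:

```python
def conjecture_bounds(objects, invariants):
    """Return every conjecture 'inv1 <= inv2' that holds for all known objects.
    objects: list of dicts mapping an invariant name to its value."""
    names = list(invariants)
    conjectures = []
    for a in names:
        for b in names:
            if a != b and all(o[a] <= o[b] for o in objects):
                conjectures.append(f"{a} <= {b}")
    return conjectures
```

Fed with a store of graphs and their computed invariants, such a routine emits statements like "radius <= diameter" that hold for every graph seen so far; a single new counterexample object silently retires a conjecture on the next run.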

Authors:Flavio Ferrarotti; Klaus-Dieter Schewe; Loredana Tec; Qing Wang Abstract: We develop a logic which enables reasoning about single steps of non-deterministic and parallel Abstract State Machines (ASMs). Our logic builds upon the unifying logic introduced by Nanchen and Stärk for reasoning about hierarchical (parallel) ASMs. Our main contribution is the handling of non-determinism within the logical formalism. We do this without sacrificing the completeness of the logic for statements about single steps of non-deterministic and parallel ASMs, such as invariants of rules, consistency conditions for rules, or step-by-step equivalence of rules. Moreover, we show that the proposed one-step logic can be easily extended to a multiple-step logic which enables reasoning about concurrent ASMs. PubDate: 2017-12-18 DOI: 10.1007/s10472-017-9569-3

Authors:Hiromi Narimatsu; Hiroyuki Kasai Abstract: Sequential data modeling and analysis have become indispensable tools for handling data such as time series, as ever larger amounts of sensed event data become available. These methods capture the sequential structure of data of interest, such as input-output relations and correlation among datasets. However, because most studies in this area are specialized or limited to their respective applications, rigorous requirement analysis of such models has not been undertaken from a general perspective. Therefore, we examine in particular the structure of sequential data, and identify the necessity of “state duration” and “state interval” of events for an efficient and rich representation of sequential data. Specifically addressing the hidden semi-Markov model (HSMM), which represents such state duration inside a model, we attempt to add the capability of representing a state interval of events to the HSMM. To this end, we propose two extended models: an interval state hidden semi-Markov model (IS-HSMM) to express the length of a state interval with a special state node designated as “interval state node”; and an interval length probability hidden semi-Markov model (ILP-HSMM), which represents the length of the state interval with a new probabilistic parameter “interval length probability.” Exhaustive simulations have revealed the superior performance of the proposed models in comparison with the HSMM. These proposed models are the first reported extensions of HMM to support state interval representation as well as state duration representation. PubDate: 2017-08-31 DOI: 10.1007/s10472-017-9561-y
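Explicit state durations, the feature that distinguishes HSMMs from plain HMMs and the starting point for the extensions above, can be sketched with a small sampler: on entering a state, a duration is drawn and the state persists for that many ticks before a transition fires. This illustrates the duration mechanism only, not the proposed IS-HSMM or ILP-HSMM:

```python
import random

def sample_hsmm_states(transitions, duration_sampler, start, steps, seed=0):
    """Sample a state sequence from a semi-Markov chain with explicit state
    durations: each visited state is emitted for a drawn number of ticks.
    transitions: state -> {next_state: probability}."""
    rng = random.Random(seed)
    state, seq = start, []
    while len(seq) < steps:
        d = duration_sampler(state, rng)          # explicit duration for this visit
        seq.extend([state] * d)
        nxt = transitions[state]
        state = rng.choices(list(nxt), weights=list(nxt.values()))[0]
    return seq[:steps]
```

With a constant duration of 3 and deterministic A-to-B transitions, the sampler produces runs of length 3 (A A A B B B ...); in a plain HMM the run lengths would instead be geometrically distributed, which is exactly the limitation that motivates duration modelling.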

Authors:Ignacio Montes; Sebastien Destercke Abstract: Within imprecise probability theory, the extreme points of convex probability sets have an important practical role (to perform inference on graphical models, to compute expectation bounds, …). This is especially true for sets presenting specific features that make them easy to manipulate in applications. This easiness is the reason why extreme points of such models (probability intervals, possibility distributions, …) have been well studied. Yet, imprecise cumulative distributions (a.k.a. p-boxes) constitute an important exception, as the characterization of their extreme points remains to be studied. This is what we do in this paper, where we characterize the maximal number of extreme points of a p-box, give a family of p-boxes that attains this number, and present an algorithm that computes the extreme points of a given p-box. To achieve all this, we also provide what we believe to be a new characterization of the extreme points of a belief function. PubDate: 2017-08-11 DOI: 10.1007/s10472-017-9562-x
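For a discrete p-box given by lower and upper CDF bounds on ordered points, candidate extreme points can be brute-forced by placing the CDF on one of its two bounds at every point and keeping the valid distributions. This naive enumeration coincides with the extreme points on simple examples, but it is not the characterization or the algorithm of the paper (the extreme-point structure in general is precisely what the paper works out):

```python
from itertools import product

def pbox_extreme_candidates(lower, upper):
    """Brute-force candidate extreme points of a discrete p-box.
    lower/upper: CDF bounds at the ordered points, both ending in 1.
    Tries every CDF sitting on a bound at each point; keeps valid pmfs."""
    n = len(lower)
    pmfs = []
    for cdf in product(*[{lower[i], upper[i]} for i in range(n)]):
        if any(cdf[i] > cdf[i + 1] for i in range(n - 1)):
            continue                              # a CDF must be non-decreasing
        pmf = [cdf[0]] + [cdf[i] - cdf[i - 1] for i in range(1, n)]
        if all(m >= -1e-12 for m in pmf):         # probabilities must be non-negative
            pmfs.append(tuple(round(m, 10) for m in pmf))
    return sorted(set(pmfs))
```

On three points with bounds [0.2, 0.5, 1] and [0.4, 0.8, 1], all four bound-hugging CDFs are valid, yielding four candidate extreme distributions; note the candidate count can grow as 2^n, which is why a proper characterization matters.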

Authors:Pavel Surynek Abstract: This paper deals with solving cooperative path finding (CPF) problems in a makespan-optimal way. A feasible solution to the CPF problem is a plan of moves for a set of mobile agents, where each agent has unique initial and goal positions. The abstraction adopted in CPF assumes that agents are discrete units that move over an undirected graph by traversing its edges. We focus specifically on makespan-optimal solutions to the CPF problem where the task is to generate solutions that are as short as possible in terms of the total number of time steps required for all agents to reach their goal positions. We demonstrate that reducing CPF to propositional satisfiability (SAT) represents a viable way to obtain makespan-optimal solutions. Several ways of encoding CPFs into propositional formulae are proposed and evaluated both theoretically and experimentally. Encodings based on the log and direct representations of decision variables are compared. The evaluation indicates that SAT-based solutions to CPF outperform the makespan-optimal versions of search-based CPF solvers such as OD+ID, CBS, and ICTS in highly constrained scenarios (i.e., environments that are densely occupied by agents and where interactions among the agents are frequent). Moreover, the experiments clearly show that CPF encodings based on the direct representation of variables can be solved faster, although they are less space-efficient than log encodings. PubDate: 2017-08-02 DOI: 10.1007/s10472-017-9560-z
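The direct and log representations of decision variables compared above can be illustrated for a single "position of an agent" variable over n vertices. The clause sets below are simplified sketches of the two representations, not the paper's full CPF encodings:

```python
import math

def direct_encoding(n_vertices):
    """Direct representation: one Boolean per vertex, plus at-least-one and
    pairwise at-most-one clauses forcing exactly one position. Clauses are
    lists of DIMACS-style signed literals."""
    variables = list(range(1, n_vertices + 1))
    clauses = [variables[:]]                              # at least one position
    clauses += [[-a, -b] for i, a in enumerate(variables)
                for b in variables[i + 1:]]               # at most one position
    return len(variables), clauses

def log_encoding(n_vertices):
    """Log representation: ceil(log2 n) Booleans hold the position's binary
    index; each unused code word above n-1 is forbidden by one clause."""
    bits = max(1, math.ceil(math.log2(n_vertices)))
    clauses = []
    for idx in range(n_vertices, 2 ** bits):
        clauses.append([-(b + 1) if (idx >> b) & 1 else (b + 1)
                        for b in range(bits)])            # forbid code word idx
    return bits, clauses
```

For 8 vertices the direct representation needs 8 variables and 29 clauses per position variable, while the log representation needs only 3 variables and, since 8 is a power of two, no blocking clauses; this is the space-versus-propagation trade-off the evaluation above refers to.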