Authors: Veronica Dahl; Sergio Tessaris; Mariano De Sousa Bispo
Abstract: In the Parsing-as-Constraint-Solving model of language processing, grammar syntax is described modularly through independent constraints among the direct constituents of a phrase, with constraints such as: “in verb phrases, a verb must precede its complements”, or “in noun phrases, a noun requires a determiner”. Parsing reduces to verifying the constraints relevant to an input phrase, but instead of the typical hierarchical (i.e., parse tree) representation of a successful parse (and the typical complete silence upon an unsuccessful one), the main result is a list of satisfied constraints and, if the input is not totally conforming, also a list of unsatisfied constraints. The latter can serve various purposes beyond plain parsing, such as guiding the correction of any imperfections found in the input, and we can still construct a parse tree if needed, as a side effect. While almost purely syntax-based, the Parsing-as-Constraint-Solving model lends itself well to accommodating interactions with other levels of analysis. These, however, have been little explored. In this position paper we discuss how to extend this model to incorporate semantic information, in particular from ontologies, with particular guidance from unsatisfied constraints. This departs from more typical constraint-solving schemes, where failed constraints are simply listed and do not actively contribute to the parse. By giving failed constraints a more active role, we can arrive at more precise analyses and at more appropriate corrections of flawed input. Because even not totally conforming sentences can be parsed more precisely, we gain in expressivity with respect to both the classical, strictly stratified approach to NLP and the less precise, less reliable statistically based methods.
PubDate: 2018-02-22
DOI: 10.1007/s10472-018-9573-2
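The core idea can be illustrated with a minimal sketch (not the authors' system; all constraint names and categories below are invented): constraints are named predicates over a flat phrase, and "parsing" returns the satisfied and unsatisfied constraints rather than only a tree.

```python
# Toy sketch of parsing as constraint solving: a phrase is a list of
# (word, category) pairs, and each grammar constraint is a named predicate.
# The result of a parse is the pair of satisfied / unsatisfied constraints.

def verb_precedes_complements(phrase):
    cats = [c for _, c in phrase]
    # Vacuously true when no verb or no complement is present.
    return "V" not in cats or "NP" not in cats or cats.index("V") < cats.index("NP")

def noun_has_determiner(phrase):
    cats = [c for _, c in phrase]
    return "N" not in cats or "Det" in cats

CONSTRAINTS = {
    "verb precedes complements": verb_precedes_complements,
    "noun requires determiner": noun_has_determiner,
}

def parse(phrase):
    """Return (satisfied, unsatisfied) constraint names for a phrase."""
    satisfied = [n for n, c in CONSTRAINTS.items() if c(phrase)]
    unsatisfied = [n for n, c in CONSTRAINTS.items() if not c(phrase)]
    return satisfied, unsatisfied

ok, bad = parse([("the", "Det"), ("cat", "N")])   # fully conforming phrase
```

A non-conforming phrase such as a bare noun still produces a useful result: its unsatisfied-constraints list points at the missing determiner, which is exactly the information a correction step would need.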

Authors: Mehrnoosh Sadrzadeh; Dimitri Kartsaklis; Esma Balkır
Abstract: Distributional semantic models provide vector representations for words by gathering co-occurrence frequencies from corpora of text. Compositional distributional models extend these from words to phrases and sentences. In categorical compositional distributional semantics, phrase and sentence representations are functions of their grammatical structure and of the representations of the words therein. In this setting, grammatical structures are formalised as morphisms of a compact closed category and word meanings as objects of the same category; these can be instantiated in the form of vectors or density matrices. This paper concerns the application of this model to phrase- and sentence-level entailment. We argue that entropy-based distances on vectors and density matrices are good candidates for measuring word-level entailment, show the advantage of density matrices over vectors for word-level entailment, and prove that these distances extend compositionally from words to phrases and sentences. We exemplify our theoretical constructions on real data and a toy entailment dataset, and provide preliminary experimental evidence.
PubDate: 2018-02-15
DOI: 10.1007/s10472-017-9570-x
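A much-simplified sketch of the entropy-based intuition (our paraphrase, not the paper's exact measure, and the toy vectors are invented): treating two normalised co-occurrence vectors as distributions, relative entropy gives an asymmetric score, and a narrower term whose contexts are a subset of a broader term's contexts diverges less toward the broader term than vice versa.

```python
import math

# Toy sketch: relative entropy KL(p || q) between normalised co-occurrence
# vectors as a graded, asymmetric entailment score. Small divergence from
# the narrower word to the broader word suggests entailment.

def kl(p, q, eps=1e-12):
    """Relative entropy KL(p || q), with smoothing for zero counts."""
    p = [x + eps for x in p]
    q = [x + eps for x in q]
    zp, zq = sum(p), sum(q)
    return sum((x / zp) * math.log((x / zp) / (y / zq)) for x, y in zip(p, q))

# Invented counts: 'dog' co-occurs with a subset of 'animal' contexts.
dog    = [5, 5, 0, 0]
animal = [3, 3, 3, 3]
score_up   = kl(dog, animal)   # narrower -> broader: small
score_down = kl(animal, dog)   # broader -> narrower: large
```

The asymmetry (`score_up` much smaller than `score_down`) is what makes entropy-based distances plausible for entailment, which is itself a directional relation.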

Authors: Christoph Beierle; Christian Eichhorn; Gabriele Kern-Isberner; Steven Kutsch
Abstract: While the axiomatic system P is an important standard for plausible, nonmonotonic inferences from conditional knowledge bases, it is known to be too weak to solve benchmark problems like Irrelevance or Subclass Inheritance. Ordinal conditional functions provide a semantic base for system P and have often been used to design stronger inference relations, like Pearl’s system Z, or c-representations. While each c-representation shows excellent inference properties and in particular properly handles Irrelevance and Subclass Inheritance, it is still an open problem which c-representation is the best. In this paper, we consider the skeptical inference relation, called c-inference, that is obtained by taking all c-representations of a given knowledge base into account. We study properties of c-inference and show in particular that it preserves the properties of solving Irrelevance and Subclass Inheritance. Based on a characterization of c-representations as solutions of a Constraint Satisfaction Problem (CSP), we also model skeptical c-inference as a CSP and prove soundness and completeness of the modelling, ensuring that constraint solvers can be used for implementing c-inference.
PubDate: 2018-02-01
DOI: 10.1007/s10472-017-9571-9
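The skeptical idea underlying c-inference can be sketched generically (this is not the paper's c-representation CSP; the tiny CSP and queries below are invented): enumerate all solutions of a CSP and accept a query only if it holds in every solution.

```python
from itertools import product

# Generic sketch of skeptical inference over a CSP: a query is skeptically
# entailed when it holds in every solution of the constraint problem.

def solutions(domains, constraints):
    """Brute-force enumeration of all satisfying assignments."""
    names = list(domains)
    for vals in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, vals))
        if all(c(assignment) for c in constraints):
            yield assignment

def skeptically_entailed(domains, constraints, query):
    sols = list(solutions(domains, constraints))
    return bool(sols) and all(query(s) for s in sols)

domains = {"x": [0, 1, 2], "y": [0, 1, 2]}
constraints = [lambda a: a["x"] < a["y"]]
always = skeptically_entailed(domains, constraints, lambda a: a["y"] >= 1)
sometimes = skeptically_entailed(domains, constraints, lambda a: a["x"] == 0)
```

In the paper's setting the solutions are the c-representations of a knowledge base rather than value assignments, but the quantification over all solutions is the same.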

Authors: Arnaud Durand; Miika Hannula; Juha Kontinen; Arne Meier; Jonni Virtema
Abstract: We define a variant of team semantics called multiteam semantics based on multisets and study the properties of various logics in this framework. In particular, we define natural probabilistic versions of inclusion and independence atoms and certain approximation operators motivated by approximate dependence atoms of Väänänen.
PubDate: 2018-01-18
DOI: 10.1007/s10472-017-9568-4
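One natural multiset reading of an inclusion atom can be sketched as follows (our paraphrase for illustration, not necessarily the paper's exact definition; the team data is invented): a multiteam is a multiset of assignments, and an inclusion atom from x to y holds when every value occurs at least as often under y as under x.

```python
from collections import Counter

# Illustrative multiset semantics: each row is one assignment, and rows
# may repeat. Inclusion x <= y compares value multiplicities.

def inclusion_holds(multiteam, x, y):
    cx = Counter(row[x] for row in multiteam)
    cy = Counter(row[y] for row in multiteam)
    return all(cy[v] >= n for v, n in cx.items())

team_ok = [{"x": 0, "y": 0}, {"x": 1, "y": 1},
           {"x": 0, "y": 0}, {"x": 1, "y": 1}]
team_bad = [{"x": 0, "y": 0}, {"x": 0, "y": 0},
            {"x": 1, "y": 1}, {"x": 1, "y": 0}]
holds = inclusion_holds(team_ok, "x", "y")    # multiplicities match
fails = inclusion_holds(team_bad, "x", "y")   # value 1 occurs twice under x, once under y
```

Replacing the counts by relative frequencies turns this into a probabilistic atom over the multiteam, which is the direction the paper develops.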

Authors: Imen Zghidi; Brahim Hnich; Abdelwahad Rebaii
Abstract: In this paper, we propose a novel notion of statistical consistency for single-stage Stochastic Constraint Satisfaction Problems (SCSPs) in which some of the random variables’ support sets are infinite. The essence of this novel notion of local consistency is the ability to make inferences in the presence of infinitely many scenarios in an uncertain environment. Such an inference is based on a restricted finite subset of scenarios, with a certain confidence level and a threshold tolerance error. The confidence level is the probability that characterizes the extent to which our inference, based on a subset of scenarios, is correct. The threshold tolerance error is the error range that we can tolerate while making such an inference. We propose a novel statistical consistency enforcing algorithm based on sound statistical inference, and experimentally show how to prune inconsistent values in the presence of an infinite set of scenarios.
PubDate: 2018-01-11
DOI: 10.1007/s10472-018-9572-3
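The confidence/tolerance idea can be sketched in a few lines (this is an illustrative sampling scheme, not the authors' algorithm; the normal-approximation bound and all parameters are our assumptions): sample finitely many scenarios and prune a value only when an upper confidence bound on its support probability falls below the tolerance.

```python
import math
import random

# Sketch: decide from a finite scenario sample whether a value can be
# pruned, at confidence ~1 - alpha with tolerance epsilon, using a
# normal-approximation upper confidence bound on the support probability.

def prune_value(indicator, n_scenarios, epsilon=0.01, seed=0):
    """indicator(rng) -> True if the value is supported in a sampled scenario.
    Prune when the upper confidence bound on P(supported) is below epsilon."""
    rng = random.Random(seed)
    hits = sum(indicator(rng) for _ in range(n_scenarios))
    p_hat = hits / n_scenarios
    z = 1.96  # ~97.5% standard-normal quantile, i.e. alpha = 0.05
    half = z * math.sqrt(p_hat * (1 - p_hat) / n_scenarios)
    return p_hat + half < epsilon

# A value supported in about half the scenarios must not be pruned;
# one that is almost never supported may be.
never = prune_value(lambda rng: rng.random() < 0.0001, 10000)
often = prune_value(lambda rng: rng.random() < 0.5, 10000)
```

The trade-off in the abstract is visible here: a tighter tolerance or higher confidence demands more sampled scenarios before any pruning decision becomes justified.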

Authors: Shufeng Kong; Sanjiang Li; Yongming Li; Zhiguo Long
Pages: 241-271
Abstract: The study of tractable subclasses of constraint satisfaction problems is a central topic in constraint solving. Tree convex constraints are extensions of the well-known row convex constraints. Just like the latter, every path-consistent tree convex constraint network is globally consistent. However, it is NP-complete to decide whether a tree convex constraint network has solutions. This paper studies and compares three subclasses of tree convex constraints, called chain-, path-, and tree-preserving constraints respectively. The class of tree-preserving constraints strictly contains the subclasses of path-preserving and arc-consistent chain-preserving constraints. We prove that, when enforcing strong path-consistency on a tree-preserving constraint network, the network remains tree-preserving at each step. This ensures the global consistency of consistent tree-preserving networks after enforcing strong path-consistency, and also guarantees the applicability of partial path-consistency algorithms to tree-preserving constraint networks, which are usually much more efficient than path-consistency algorithms for large sparse constraint networks. As an application, we show that the class of tree-preserving constraints is useful in solving the scene labelling problem.
PubDate: 2017-12-01
DOI: 10.1007/s10472-017-9552-z
Issue No: Vol. 81, No. 3-4 (2017)
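For readers unfamiliar with path consistency, here is a generic sketch of enforcing it on a binary network (a textbook-style illustration, not the paper's specialised algorithm; the three-variable example is invented): each relation is tightened by composition through every third variable until a fixed point.

```python
# Generic path-consistency enforcement: relations are sets of value pairs,
# and R(i,j) is intersected with the composition R(i,k) o R(k,j) for every
# third variable k, until nothing changes.

def compose(r1, r2):
    """Composition of two binary relations given as sets of pairs."""
    return {(a, c) for a, b in r1 for b2, c in r2 if b == b2}

def path_consistency(rel, n):
    """rel[(i, j)]: allowed pairs for variables i, j (complete network)."""
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                for k in range(n):
                    if k in (i, j):
                        continue
                    tightened = rel[(i, j)] & compose(rel[(i, k)], rel[(k, j)])
                    if tightened != rel[(i, j)]:
                        rel[(i, j)] = tightened
                        changed = True
    return rel

# x0 = x1, x1 = x2, but x0 != x2 over domain {0, 1}: inconsistent,
# so path consistency empties the relations.
eq, neq = {(0, 0), (1, 1)}, {(0, 1), (1, 0)}
net = {(0, 1): set(eq), (1, 0): set(eq),
       (1, 2): set(eq), (2, 1): set(eq),
       (0, 2): set(neq), (2, 0): set(neq)}
net = path_consistency(net, 3)
```

The paper's point is about what this propagation preserves: on tree-preserving networks, each tightening step keeps the network tree-preserving, which is what licenses the jump from path consistency to global consistency.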

Authors: Manuele Leonelli; Eva Riccomagno; Jim Q. Smith
Pages: 273-313
Abstract: Influence diagrams provide a compact graphical representation of decision problems. Several algorithms for the quick computation of their associated expected utilities are available in the literature. However, they often rely on a full quantification of both probabilistic uncertainties and utility values. For problems where all random variables and decision spaces are finite and discrete, here we develop a symbolic way to calculate the expected utilities of influence diagrams that does not require a full numerical representation. Within this approach expected utilities correspond to families of polynomials. After characterizing their polynomial structure, we develop an efficient symbolic algorithm for the propagation of expected utilities through the diagram and provide an implementation of this algorithm using a computer algebra system. We then characterize many of the standard manipulations of influence diagrams as transformations of polynomials. We also generalize the decision analytic framework of these diagrams by defining asymmetries as operations over the expected utility polynomials.
PubDate: 2017-12-01
DOI: 10.1007/s10472-017-9553-y
Issue No: Vol. 81, No. 3-4 (2017)
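The symbolic idea can be seen on a minimal diagram (our toy construction, not the authors' algorithm or data structures): for one decision and one chance node, the expected utility of each decision is a polynomial in the unspecified probabilities, with utilities as coefficients, and numbers can be plugged in later.

```python
# Minimal sketch: one decision d, one chance node with states 1..k.
# Expected utility of d is the polynomial sum_i u(d, i) * p_i, kept
# symbolically as a mapping {monomial: coefficient}.

def expected_utility_poly(utilities):
    """utilities[d][i]: utility of decision d under chance state i.
    Returns, per decision, a polynomial {monomial (tuple of symbols): coeff}."""
    polys = {}
    for d, us in utilities.items():
        polys[d] = {("p%d" % (i + 1),): u for i, u in enumerate(us)}
    return polys

def evaluate(poly, assignment):
    """Plug numeric probabilities into a symbolic polynomial."""
    total = 0.0
    for monomial, coeff in poly.items():
        term = coeff
        for sym in monomial:
            term *= assignment[sym]
        total += term
    return total

polys = expected_utility_poly({"accept": [10, -5], "reject": [0, 0]})
eu = evaluate(polys["accept"], {"p1": 0.7, "p2": 0.3})   # 10*0.7 - 5*0.3
```

Keeping the polynomial rather than a number is what allows diagram manipulations (and sensitivity questions about the unspecified probabilities) to be phrased as operations on the polynomials themselves.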

Authors: C. E. Larson; N. Van Cleemput
Pages: 315-327
Abstract: Discovery in mathematics is a prototypical intelligent behavior, and an early and continuing goal of artificial intelligence research. We present a heuristic for producing mathematical conjectures of a certain typical form and demonstrate its utility. Our program conjectures relations that hold between properties of objects (property-relation conjectures). These objects can be of a wide variety of types. The statements are true for all objects known to the program, and are the simplest statements which are true of all these objects. The examples here include new conjectures for the hamiltonicity of a graph, a well-studied property of graphs. While our motivation and experiments have been to produce mathematical conjectures—and to contribute to mathematical research—other kinds of interesting property-relation conjectures can be imagined, and this research may be more generally applicable to the development of intelligent machinery.
PubDate: 2017-12-01
DOI: 10.1007/s10472-017-9559-5
Issue No: Vol. 81, No. 3-4 (2017)
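A drastically simplified sketch of property-relation conjecturing (in the spirit of, but far simpler than, the authors' heuristic; the object store and candidate conditions are invented toy data): keep the candidate relations that hold for every known object, preferring the syntactically simplest.

```python
# Toy conjecturing loop: each known graph carries invariant values and a
# target property; a candidate condition survives if, on every known
# object where it holds, the target property also holds.

objects = [
    {"name": "K4",    "min_degree": 3, "order": 4, "hamiltonian": True},
    {"name": "C5",    "min_degree": 2, "order": 5, "hamiltonian": True},
    {"name": "star4", "min_degree": 1, "order": 5, "hamiltonian": False},
]

# Candidate sufficient conditions for hamiltonicity, simplest first.
candidates = [
    ("min_degree >= 1",       lambda g: g["min_degree"] >= 1),
    ("min_degree >= order/2", lambda g: g["min_degree"] >= g["order"] / 2),
]

def conjectures(objects, candidates):
    """Candidates c such that c(g) implies hamiltonicity on every known g."""
    out = []
    for name, cond in candidates:
        if all(g["hamiltonian"] for g in objects if cond(g)):
            out.append(name)
    return out
```

On this toy store the surviving conjecture happens to be Dirac's classical sufficient condition; the point of the heuristic is that such statements emerge from the data alone and are then tested against ever more objects.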

Authors: Vladimir Vovk; Dusko Pavlovic
Pages: 47-70
Abstract: We construct universal prediction systems in the spirit of Popper’s falsifiability and Kolmogorov complexity and randomness. These prediction systems do not depend on any statistical assumptions (but under the IID assumption they dominate, to within the usual accuracy, conformal prediction). Our constructions give rise to a theory of algorithmic complexity and randomness of time containing analogues of several notions and results of the classical theory of Kolmogorov complexity and randomness.
PubDate: 2017-10-01
DOI: 10.1007/s10472-017-9547-9
Issue No: Vol. 81, No. 1-2 (2017)
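As background for the comparison in the abstract, here is the conformal-prediction baseline in its simplest form (a standard textbook computation, with invented scores): under IID data, the p-value of a tentative label is the fraction of nonconformity scores at least as large as the new example's score.

```python
# Conformal prediction p-value: the rank of the new example's
# nonconformity score among all scores, as a fraction.

def conformal_pvalue(scores, new_score):
    """scores: nonconformity scores of the calibration examples."""
    n = len(scores) + 1
    return sum(1 for s in scores + [new_score] if s >= new_score) / n

p = conformal_pvalue([0.1, 0.2, 0.9, 0.4], 0.95)   # highly nonconforming
```

A prediction set at significance level eps keeps every label whose p-value exceeds eps; the paper's universal prediction systems are claimed to dominate this construction under IID while assuming nothing statistically.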

Authors: Ernst Ahlberg; Oscar Hammar; Claus Bendtsen; Lars Carlsson
Pages: 145-154
Abstract: We present two applications of conformal prediction relevant to drug discovery. The first application concerns the interpretation of predictions; the second, the selection of compounds to progress in a drug-discovery project setting.
PubDate: 2017-10-01
DOI: 10.1007/s10472-017-9550-1
Issue No: Vol. 81, No. 1-2 (2017)

Authors: Claus Bendtsen; Andrea Degasperi; Ernst Ahlberg; Lars Carlsson
Pages: 155-166
Abstract: The high cost of new medicines is hindering their development, and machine learning is therefore being used to avoid carrying out physical experiments. Here, we present a comparison between three different machine learning approaches in a classification setting where learning and prediction follow a teaching schedule to mimic the drug discovery process. The approaches are standard SVM classification, SVM-based multi-kernel classification, and SVM classification based on learning using privileged information. Our two main conclusions are derived using experimental in-vitro data and compound structure descriptors. The in-vitro data is assumed to i) be completely absent in the standard SVM setting, ii) be available at all times when applying multi-kernel learning, or iii) be available as privileged information during training only. The structure descriptors are always available. One conclusion is that multi-kernel learning has higher odds than standard SVM of producing higher accuracy. The second is that learning using privileged information does not have higher odds than standard SVM, although it may improve accuracy when the training sets are small.
PubDate: 2017-10-01
DOI: 10.1007/s10472-017-9541-2
Issue No: Vol. 81, No. 1-2 (2017)

Authors: Flavio Ferrarotti; Klaus-Dieter Schewe; Loredana Tec; Qing Wang
Abstract: We develop a logic which enables reasoning about single steps of non-deterministic and parallel Abstract State Machines (ASMs). Our logic builds upon the unifying logic introduced by Nanchen and Stärk for reasoning about hierarchical (parallel) ASMs. Our main contribution is the handling of non-determinism within the logical formalism. We do this without sacrificing the completeness of the logic for statements about single steps of non-deterministic and parallel ASMs, such as invariants of rules, consistency conditions for rules, or step-by-step equivalence of rules. Moreover, we show that the proposed one-step logic can be easily extended to a multiple-step logic which enables reasoning about concurrent ASMs.
PubDate: 2017-12-18
DOI: 10.1007/s10472-017-9569-3

Abstract: Answer Set Programming has become an increasingly popular formalism for declarative problem solving. Among the huge body of theoretical results, investigations of different equivalence notions between logic programs play a fundamental role for understanding modularity and optimization. While strong equivalence between two programs holds if they can be faithfully replaced by each other in any context (facts and rules), uniform equivalence amounts to equivalent behavior of programs under any set of facts. Both notions (as well as several variants thereof) have been extensively studied. However, the somewhat reverse notion of equivalence which holds if two programs are equivalent under the addition of any set of proper rules (i.e., all rules except facts) has not been investigated yet. In this paper, we close this gap and give a thorough study of this notion, which we call rule equivalence, and its parameterized version where we allow facts over a given restricted alphabet to appear in the context. This notion of equivalence is thus a relationship between two programs whose input is (partially) fixed but where additional proper rules might still be added. Such a notion might be helpful in the debugging of programs. We give full characterization results and a complexity analysis for the propositional case of rule equivalence and its relativized versions. Moreover, we address the problem of program simplification under rule equivalence. Finally, we show that rule equivalence is decidable in the non-ground case.
PubDate: 2017-11-10
DOI: 10.1007/s10472-017-9567-5

Authors: Wafa Jguirim; Wady Naanaa; Martin C. Cooper
Abstract: Finding a solution to a constraint satisfaction problem (CSP) is known to be an NP-hard task. Considerable effort has been spent on identifying tractable classes of CSP, in other words, classes of constraint satisfaction problems for which there are polynomial time recognition and resolution algorithms. In this article, we present a relational tractable class of binary CSP. Our key contribution is a new ternary operation that we name mjx. We first characterize mjx-closed relations, which leads to an optimal algorithm for recognizing such relations. To reduce space and time complexity, we define a new storage technique for these relations which reduces the complexity of establishing a form of strong directional path consistency, the consistency level that solves all instances of the proposed class (and, indeed, of all relational classes closed under a majority polymorphism).
PubDate: 2017-10-26
DOI: 10.1007/s10472-017-9566-6
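The mjx operation is specific to the paper; as a generic stand-in, the following sketch checks closure of a binary relation under the classical majority operation, which is the kind of polymorphism test the closing remark refers to (the example relations are invented).

```python
from itertools import product

# Closure test under the classical majority operation m(x, y, z), which
# returns the repeated value when two arguments agree (and x otherwise).
# A relation closed under a majority polymorphism is solvable by local
# consistency techniques.

def majority(x, y, z):
    return y if y == z else x

def closed_under_majority(relation):
    """relation: set of pairs. Apply majority coordinate-wise to all triples
    of tuples and check the result stays inside the relation."""
    for (a1, a2), (b1, b2), (c1, c2) in product(relation, repeat=3):
        if (majority(a1, b1, c1), majority(a2, b2, c2)) not in relation:
            return False
    return True

leq = closed_under_majority({(0, 0), (0, 1), (1, 1)})   # "<=" on {0,1}: closed
bad = closed_under_majority({(0, 0), (1, 1), (0, 2)})   # not closed
```

A brute-force test like this is cubic in the relation size; part of the paper's contribution is a recognition algorithm and storage scheme that avoid exactly this kind of naive enumeration.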

Authors: Cynthia Maushagen; Jörg Rothe
Abstract: Control by partition refers to situations where an election chair seeks to influence the outcome of an election by partitioning either the candidates or the voters into two groups, thus creating two first-round subelections that determine who will take part in a final round. The model of partition-of-voters control attacks is remotely related to “gerrymandering” (maliciously resizing election districts). While the complexity of control by partition has been studied thoroughly for many voting systems, no such results are known for the important veto voting system. We settle the complexity of control by partition for veto in a broad variety of models. In addition, by giving a counterexample we observe that a reduction from the literature (Chen et al. 2015) showing the parameterized complexity of control by adding candidates to plurality elections, parameterized by the number of voters, is technically flawed, and we show how this reduction can be adapted to make it correct.
PubDate: 2017-10-26
DOI: 10.1007/s10472-017-9565-7
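A brute-force sketch makes the attack concrete (one of several tie-handling models the paper distinguishes; here, ties eliminate, i.e. a subelection forwards its winner only if unique, and the five-voter profile is invented): the chair searches for a voter partition under which the preferred candidate becomes the unique overall winner.

```python
from itertools import combinations

# Constructive control by partition of voters for veto, ties-eliminate
# model: each group runs a veto subelection; a unique subelection winner
# advances to a final veto round held over the full voter set.

def veto_winners(cands, prefs):
    """prefs: rankings (best first); each voter vetoes their least
    preferred candidate among those still running."""
    vetoes = {c: 0 for c in cands}
    for pref in prefs:
        vetoes[max((c for c in pref if c in cands), key=pref.index)] += 1
    fewest = min(vetoes.values())
    return {c for c in cands if vetoes[c] == fewest}

def control_by_voter_partition(cands, prefs, p):
    """Does some partition of the voters make p the unique overall winner?"""
    n = len(prefs)
    for r in range(n + 1):
        for group in combinations(range(n), r):
            g1 = [prefs[i] for i in group]
            g2 = [prefs[i] for i in range(n) if i not in group]
            finalists = set()
            for g in (g1, g2):
                if g:
                    w = veto_winners(cands, g)
                    if len(w) == 1:          # ties eliminate
                        finalists |= w
            if finalists and veto_winners(finalists, prefs) == {p}:
                return True
    return False

# p loses the unpartitioned election, but a chair can partition the
# voters so that p's rivals tie each other away in one subelection.
prefs = [list("abp"), list("abp"), list("pab"), list("pba"), list("pba")]
no_control = veto_winners("abp", prefs)                 # b wins outright
success = control_by_voter_partition("abp", prefs, "p")
```

The exhaustive search over all 2^n partitions is exactly what complexity results aim to rule in or out: hardness proofs show no essentially better strategy exists for the chair, while polynomial-time results give shortcuts.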

Authors: Olivier Caelen
Abstract: We propose a way to infer distributions of any performance indicator computed from the confusion matrix. This allows us to evaluate the variability of an indicator and to assess the importance of an observed difference between two performance indicators. We will assume that the values in a confusion matrix are observations coming from a multinomial distribution. Our method is based on a Bayesian approach in which the unknown parameters of the multinomial probability function themselves are assumed to be generated from a random vector. We will show that these unknown parameters follow a Dirichlet distribution. Thanks to the Bayesian approach, we also benefit from an elegant way of injecting prior knowledge into the distributions. Experiments are done on real and synthetic data sets and assess our method’s ability to construct accurate distributions.
PubDate: 2017-09-11
DOI: 10.1007/s10472-017-9564-8
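The core computation can be sketched with the standard Dirichlet-multinomial conjugacy (the confusion matrix, flat prior, and sample size below are our invented illustration, not the paper's experiments): sampling cell probabilities from the posterior yields a distribution over any derived indicator, here accuracy.

```python
import random

# Sketch: confusion-matrix counts are multinomial, so with a Dirichlet
# prior the posterior over the cell probabilities is Dirichlet(counts +
# prior). Sampling from it induces a distribution over any indicator.

def dirichlet_sample(alphas, rng):
    """Sample from a Dirichlet distribution via normalised Gamma draws."""
    draws = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

def accuracy_posterior(confusion, prior=1.0, n_samples=5000, seed=0):
    """confusion: 2x2 [[TP, FN], [FP, TN]]. Returns sampled accuracies."""
    counts = [c for row in confusion for c in row]
    alphas = [c + prior for c in counts]
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        p = dirichlet_sample(alphas, rng)
        samples.append(p[0] + p[3])       # P(TP) + P(TN)
    return samples

acc = accuracy_posterior([[40, 10], [5, 45]])
mean_acc = sum(acc) / len(acc)   # near (40+1+45+1)/104, about 0.84
```

Comparing two classifiers then amounts to comparing two such sampled distributions rather than two point estimates, which is where the variability assessment in the abstract comes from.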

Authors: Hiromi Narimatsu; Hiroyuki Kasai
Abstract: Sequential data modeling and analysis have become indispensable because ever larger amounts of sensed event data, such as time-series data, have become available. These methods capture the sequential structure of data of interest, such as input-output relations and correlations among datasets. However, because most studies in this area are specialized or limited to their respective applications, rigorous requirement analysis of such models has not been undertaken from a general perspective. Therefore, we particularly examine the structure of sequential data, and extract the necessity of “state duration” and “state interval” of events for efficient and rich representation of sequential data. Specifically addressing the hidden semi-Markov model (HSMM), which represents such state durations inside a model, we attempt to add the capability of representing state intervals of events to HSMM. To this end, we propose two extended models: an interval state hidden semi-Markov model (IS-HSMM) to express the length of a state interval with a special state node designated as “interval state node”; and an interval length probability hidden semi-Markov model (ILP-HSMM), which represents the length of the state interval with a new probabilistic parameter, “interval length probability.” Exhaustive simulations have revealed superior performance of the proposed models in comparison with HSMM. These proposed models are the first reported extensions of HMM to support state interval representation as well as state duration representation.
PubDate: 2017-08-31
DOI: 10.1007/s10472-017-9561-y
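To see what "state duration" means in the baseline model the extensions build on, here is a generative sketch of an HSMM (our toy parameters, not the paper's models): each hidden state emits for an explicitly sampled duration before transitioning, unlike a plain HMM where durations are implicitly geometric.

```python
import random

# Generative HSMM sketch: two states with explicit duration choices and
# deterministic alternation (all parameters are invented toy values).

def sample_hsmm(n_steps, seed=0):
    rng = random.Random(seed)
    durations = {"A": [1, 2, 3], "B": [2, 4]}   # possible state durations
    emissions = {"A": "a", "B": "b"}
    out, state = [], "A"
    while len(out) < n_steps:
        d = rng.choice(durations[state])        # explicit state duration
        out.extend(emissions[state] * d)
        state = "B" if state == "A" else "A"
    return "".join(out[:n_steps])

seq = sample_hsmm(10)
```

The paper's IS-HSMM and ILP-HSMM additionally model the gap between occurrences of a state (the "state interval"), which this baseline cannot express: here, the time until a state recurs is fully determined by the other states' durations.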

Authors: Ignacio Montes; Sebastien Destercke
Abstract: Within imprecise probability theory, the extreme points of convex probability sets have an important practical role (to perform inference on graphical models, to compute expectation bounds, …). This is especially true for sets presenting specific features that make them easy to manipulate in applications. This easiness is the reason why the extreme points of such models (probability intervals, possibility distributions, …) have been well studied. Yet, imprecise cumulative distributions (a.k.a. p-boxes) constitute an important exception, as the characterization of their extreme points remains to be studied. This is what we do in this paper, where we characterize the maximal number of extreme points of a p-box, give a family of p-boxes that attains this number, and present an algorithm that computes the extreme points of a given p-box. To achieve all this, we also provide what we believe to be a new characterization of the extreme points of a belief function.
PubDate: 2017-08-11
DOI: 10.1007/s10472-017-9562-x
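For background, the classical extreme-point construction for belief functions (the Chateauneuf-Jaffray characterization, which the paper builds on; the tiny mass functions are invented examples) can be sketched directly: for each ordering of the frame, shift each focal mass to the element of its focal set that comes first in that ordering.

```python
from itertools import permutations

# Extreme points of the credal set of a belief function: one candidate
# distribution per ordering of the frame, obtained by allocating each
# focal mass m(A) to the first element of A under that ordering.

def extreme_points(frame, masses):
    """masses: dict from frozenset focal elements to mass. Returns the
    distinct extreme probability distributions as tuples over `frame`."""
    points = set()
    for order in permutations(frame):
        p = {x: 0.0 for x in frame}
        for focal, m in masses.items():
            first = min(focal, key=order.index)   # first element of A in the ordering
            p[first] += m
        points.add(tuple(round(p[x], 9) for x in frame))
    return points

# Vacuous belief on {a, b}: all mass on the whole frame; the extremes
# are the two degenerate distributions.
pts = extreme_points(("a", "b"), {frozenset({"a", "b"}): 1.0})
pts2 = extreme_points(("a", "b"),
                      {frozenset({"a"}): 0.5, frozenset({"a", "b"}): 0.5})
```

Enumerating all orderings is factorial in the frame size; the paper's contribution for p-boxes is precisely a tighter characterization and count of which of these candidates actually arise.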

Authors: Pavel Surynek
Abstract: This paper deals with solving cooperative path finding (CPF) problems in a makespan-optimal way. A feasible solution to the CPF problem lies in the moving of mobile agents, where each agent has unique initial and goal positions. The abstraction adopted in CPF assumes that agents are discrete units that move over an undirected graph by traversing its edges. We focus specifically on makespan-optimal solutions to the CPF problem, where the task is to generate solutions that are as short as possible in terms of the total number of time steps required for all agents to reach their goal positions. We demonstrate that reducing CPF to propositional satisfiability (SAT) represents a viable way to obtain makespan-optimal solutions. Several ways of encoding CPFs into propositional formulae are proposed and evaluated both theoretically and experimentally. Encodings based on the log and direct representations of decision variables are compared. The evaluation indicates that SAT-based solutions to CPF outperform the makespan-optimal versions of search-based CPF solvers such as OD+ID, CBS, and ICTS in highly constrained scenarios (i.e., environments that are densely occupied by agents and where interactions among the agents are frequent). Moreover, the experiments clearly show that CPF encodings based on the direct representation of variables can be solved faster, although they are less space-efficient than log encodings.
PubDate: 2017-08-02
DOI: 10.1007/s10472-017-9560-z
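The direct-versus-log trade-off mentioned at the end can be made concrete with a back-of-the-envelope count (our simplification: this counts only position variables and ignores the auxiliary variables and clauses a full encoding needs): the direct encoding uses one Boolean per (agent, vertex, time step), while the log encoding uses ceil(log2 |V|) bits per agent and step.

```python
import math

# Rough decision-variable counts for encoding CPF positions over T steps
# on a graph with |V| vertices and k agents.

def direct_vars(n_vertices, n_agents, timesteps):
    """One Boolean per (agent, vertex, step)."""
    return n_agents * n_vertices * timesteps

def log_vars(n_vertices, n_agents, timesteps):
    """ceil(log2 |V|) position bits per (agent, step)."""
    return n_agents * math.ceil(math.log2(n_vertices)) * timesteps

d = direct_vars(64, 10, 20)   # 64 vertices, 10 agents, 20 steps
l = log_vars(64, 10, 20)
```

The log encoding is exponentially more compact in the vertex count, which matches the paper's finding that direct encodings are less space-efficient; its other finding, that direct encodings nonetheless solve faster, is about clause structure and propagation rather than variable count.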