Journal Cover: Artificial Intelligence
  [SJR: 2.426]   [H-I: 115]
   Hybrid journal (may contain Open Access articles)
   ISSN (Print): 0004-3702
   Published by Elsevier
  • Online spatio-temporal matching in stochastic and dynamic domains
    • Authors: Meghna Lowalekar; Pradeep Varakantham; Patrick Jaillet
      Pages: 71 - 112
      Publication date: August 2018
      Source: Artificial Intelligence, Volume 261
      Abstract:
      Online spatio-temporal matching of servers/services to customers is a problem that arises at a large scale in many domains associated with shared transportation (e.g., taxis, ride sharing, super shuttles, etc.) and delivery services (e.g., food, equipment, clothing, home fuel, etc.). A key characteristic of these problems is that the matching of servers/services to customers in one stage has a direct impact on the matching in the next stage. For instance, it is efficient for a taxi to pick up its next customer near the drop-off point of the customer it served in the previous stage. Traditionally, greedy/myopic approaches have been adopted to address such large-scale online matching problems. While they provide solutions in a scalable manner, their myopic nature leaves significant room for improving the quality of the matching (as demonstrated in our experimental results). In this paper, we present a multi-stage stochastic optimization formulation that considers potential future demand scenarios (obtained from past data). We then provide an enhancement to solve large-scale problems more effectively and efficiently online. We also provide worst-case theoretical bounds on the performance of the different approaches. Finally, we demonstrate the significant improvement provided by our techniques over myopic approaches and two other multi-stage approaches from the literature (Approximate Dynamic Programming and a Hybrid Multi-Stage Stochastic optimization formulation) on three real-world taxi data sets.
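      As a concrete illustration of the abstract's point that one stage's matching constrains the next, here is a minimal Python sketch (not the authors' formulation): a myopic nearest-server matcher on 1-D positions, plus a two-stage cost in which assigned servers relocate to hypothetical drop-off points before the second stage. All positions and names are invented for illustration.

```python
def greedy_match(servers, customers):
    """Myopically assign each customer the nearest free server (1-D positions)."""
    free = set(range(len(servers)))
    assignment = {}
    for c, cpos in enumerate(customers):
        if not free:
            break
        s = min(free, key=lambda i: abs(servers[i] - cpos))
        assignment[c] = s
        free.discard(s)
    return assignment

def matching_cost(servers, customers, assignment):
    """Total pickup distance of a stage's matching."""
    return sum(abs(servers[s] - customers[c]) for c, s in assignment.items())

def two_stage_cost(servers, stage1, drops, stage2):
    """Two-stage toy: after stage 1, each assigned server moves to its
    customer's drop-off, so the stage-2 cost depends on the stage-1 matching."""
    a1 = greedy_match(servers, stage1)
    moved = list(servers)
    for c, s in a1.items():
        moved[s] = drops[c]
    a2 = greedy_match(moved, stage2)
    return matching_cost(servers, stage1, a1) + matching_cost(moved, stage2, a2)
```

      A lookahead method in the spirit of the paper would evaluate candidate stage-1 matchings against sampled future demand scenarios before committing; the sketch above only exposes the coupling that makes such lookahead pay off.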

      PubDate: 2018-05-16T20:45:23Z
      DOI: 10.1016/j.artint.2018.04.005
      Issue No: Vol. 261 (2018)
       
  • Symbolic perimeter abstraction heuristics for cost-optimal planning
    • Authors: Álvaro Torralba; Carlos Linares López; Daniel Borrajo
      Pages: 1 - 31
      Publication date: June 2018
      Source: Artificial Intelligence, Volume 259
      Abstract:
      In the context of heuristic search within automated planning, abstraction heuristics map the problem into an abstract instance and use the optimal solution cost in the abstract state space as an estimate for the real solution cost. Their flexibility in choosing different abstract mappings makes abstractions a powerful tool to obtain domain-independent heuristics. Different types of abstraction heuristics exist depending on how the mapping is defined, like Pattern Databases (PDBs) or Merge-and-Shrink (M&S). In this paper, we consider two variants of PDBs, symbolic and perimeter PDBs, combining them to take advantage of their synergy. Symbolic PDBs use decision diagrams in order to efficiently traverse the abstract state space. Perimeter PDBs derive more informed estimates by first constructing a perimeter around the goal and then using it to initialize the abstract search. We generalize this idea by considering a hierarchy of abstractions. Our algorithm starts by constructing a symbolic perimeter around the goal and, whenever continuing the search becomes unfeasible, it switches to a more abstracted state space. By delaying the use of an abstraction, the algorithm derives heuristics as informed as possible. Moreover, we prove that M&S abstractions with a linear merge strategy can be efficiently represented as decision diagrams, enabling the use of symbolic search with M&S abstractions as well as with PDBs. Our experimental evaluation shows that symbolic perimeter abstractions are competitive with other state-of-the-art heuristics.

      PubDate: 2018-04-10T21:43:17Z
      DOI: 10.1016/j.artint.2018.02.002
      Issue No: Vol. 259 (2018)
       
  • Arguing about informant credibility in open multi-agent systems
    • Authors: Sebastian Gottifredi; Luciano H. Tamargo; Alejandro J. García; Guillermo R. Simari
      Pages: 91 - 109
      Publication date: June 2018
      Source: Artificial Intelligence, Volume 259
      Abstract:
      This paper proposes the use of an argumentation framework with recursive attacks to address a trust model in a collaborative open multi-agent system. Our approach is focused on scenarios where agents share information about the credibility (informational trust) they have assigned to their peers. We will represent informants' credibility through credibility objects which will include not only trust information but also the informant source. This leads to a recursive setting where the reliability of certain credibility information depends on the credibility of other pieces of information that should be subject to the same analysis. Credibility objects are maintained in a credibility base which can have information in conflict. In this scenario, we will formally show that our proposal will produce a partially ordered credibility relation; such relation contains the information that can be justified by an argumentation process.

      PubDate: 2018-04-10T21:43:17Z
      DOI: 10.1016/j.artint.2018.03.001
      Issue No: Vol. 259 (2018)
       
  • The complexity and generality of learning answer set programs
    • Authors: Mark Law; Alessandra Russo; Krysia Broda
      Pages: 110 - 146
      Publication date: June 2018
      Source: Artificial Intelligence, Volume 259
      Abstract:
      Traditionally most of the work in the field of Inductive Logic Programming (ILP) has addressed the problem of learning Prolog programs. On the other hand, Answer Set Programming is increasingly being used as a powerful language for knowledge representation and reasoning, and is also gaining increasing attention in industry. Consequently, the research activity in ILP has widened to the area of Answer Set Programming, witnessing the proposal of several new learning frameworks that have extended ILP to learning answer set programs. In this paper, we investigate the theoretical properties of these existing frameworks for learning programs under the answer set semantics. Specifically, we present a detailed analysis of the computational complexity of each of these frameworks with respect to the two decision problems of deciding whether a hypothesis is a solution of a learning task and deciding whether a learning task has any solutions. We introduce a new notion of generality of a learning framework, which enables us to define a framework to be more general than another in terms of being able to distinguish one ASP hypothesis solution from a set of incorrect ASP programs. Based on this notion, we formally prove a generality relation over the set of existing frameworks for learning programs under answer set semantics. In particular, we show that our recently proposed framework, Context-dependent Learning from Ordered Answer Sets, is more general than brave induction, induction of stable models, and cautious induction, and maintains the same complexity as cautious induction, which has the highest complexity of these frameworks.

      PubDate: 2018-04-10T21:43:17Z
      DOI: 10.1016/j.artint.2018.03.005
      Issue No: Vol. 259 (2018)
       
  • Predicting citywide crowd flows using deep spatio-temporal residual
           networks
    • Authors: Junbo Zhang; Yu Zheng; Dekang Qi; Ruiyuan Li; Xiuwen Yi; Tianrui Li
      Pages: 147 - 166
      Publication date: June 2018
      Source: Artificial Intelligence, Volume 259
      Abstract:
      Forecasting the flow of crowds is of great importance to traffic management and public safety, and very challenging as it is affected by many complex factors, including spatial dependencies (nearby and distant), temporal dependencies (closeness, period, trend), and external conditions (e.g. weather and events). We propose a deep-learning-based approach, called ST-ResNet, to collectively forecast two types of crowd flows (i.e. inflow and outflow) in each and every region of a city. We design an end-to-end structure of ST-ResNet based on unique properties of spatio-temporal data. More specifically, we employ the residual neural network framework to model the temporal closeness, period, and trend properties of crowd traffic. For each property, we design a branch of residual convolutional units, each of which models the spatial properties of crowd traffic. ST-ResNet learns to dynamically aggregate the output of the three residual neural networks based on data, assigning different weights to different branches and regions. The aggregation is further combined with external factors, such as weather and day of the week, to predict the final traffic of crowds in each and every region. We have developed a real-time system based on Microsoft Azure Cloud, called UrbanFlow, providing crowd flow monitoring and forecasting in Guiyang, China. In addition, we present an extensive experimental evaluation using two types of crowd flows in Beijing and New York City (NYC), where ST-ResNet outperforms nine well-known baselines.
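      The "dynamic aggregation" of the three branch outputs described above can be sketched as a per-region (Hadamard-weighted) fusion followed by external factors and a tanh output. This is only an illustrative NumPy sketch: the grid size, random branch outputs, and external-factor tensor are hypothetical stand-ins, not the trained network.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 8, 8   # hypothetical city grid

# Stand-ins for the outputs of the three residual branches:
# closeness (Xc), period (Xp), trend (Xq); leading dim 2 = inflow/outflow.
Xc, Xp, Xq = (rng.standard_normal((2, H, W)) for _ in range(3))

# Learnable per-branch, per-region fusion weights ("different weights to
# different branches and regions"), applied elementwise.
Wc, Wp, Wq = (rng.standard_normal((2, H, W)) for _ in range(3))

X_res = Wc * Xc + Wp * Xp + Wq * Xq     # weighted aggregation of branches
X_ext = rng.standard_normal((2, H, W))  # external factors, mapped to grid shape
X_hat = np.tanh(X_res + X_ext)          # final prediction, bounded in [-1, 1]
```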

      PubDate: 2018-04-10T21:43:17Z
      DOI: 10.1016/j.artint.2018.03.002
      Issue No: Vol. 259 (2018)
       
  • Safe inductions and their applications in knowledge representation
    • Authors: Bart Bogaerts; Joost Vennekens; Marc Denecker
      Pages: 167 - 185
      Publication date: June 2018
      Source: Artificial Intelligence, Volume 259
      Abstract:
      In many knowledge representation formalisms, a constructive semantics is defined based on sequential applications of rules or of a semantic operator. These constructions often share the property that rule applications must be delayed until it is safe to do so: until it is known that the condition that triggers the rule will continue to hold. This intuition occurs for instance in the well-founded semantics of logic programs and in autoepistemic logic. In this paper, we formally define the safety criterion algebraically. We study properties of so-called safe inductions and apply our theory to logic programming and autoepistemic logic. For the latter, we show that safe inductions manage to capture the intended meaning of a class of theories on which all classical constructive semantics fail.

      PubDate: 2018-04-10T21:43:17Z
      DOI: 10.1016/j.artint.2018.03.008
      Issue No: Vol. 259 (2018)
       
  • Shielded base contraction
    • Authors: Marco Garapa; Eduardo Fermé; Maurício D.L. Reis
      Pages: 186 - 216
      Publication date: June 2018
      Source: Artificial Intelligence, Volume 259
      Abstract:
      The standard model in the logic of belief change was proposed by Alchourrón, Gärdenfors and Makinson in 1985 and is nowadays known as the AGM model. Two of the main shortcomings attributed to the AGM model of belief change are the (impractical) use of belief sets to represent belief states and the (unrealistic) acceptance of any new information. In this paper we study a class of operators, known as shielded base contractions, that addresses both of those issues. Indeed, on the one hand, these operators are defined on belief bases (rather than belief sets) and, on the other hand, they are constructed with the underlying idea that not all new information is accepted. We propose twenty different classes of shielded base contractions and obtain axiomatic characterizations for each of them. Additionally we thoroughly investigate the interrelations (in the sense of inclusion) among all those classes. More precisely, we analyse whether each of those classes is or is not (strictly) contained in each of the remaining ones.

      PubDate: 2018-04-10T21:43:17Z
      DOI: 10.1016/j.artint.2018.03.007
      Issue No: Vol. 259 (2018)
       
  • Forming k coalitions and facilitating relationships in social networks
    • Authors: Liat Sless; Noam Hazon; Sarit Kraus; Michael Wooldridge
      Pages: 217 - 245
      Publication date: June 2018
      Source: Artificial Intelligence, Volume 259
      Abstract:
      In this paper we relax two common assumptions that are made when studying coalition formation. The first is that any number of coalitions can be formed; the second is that any possible coalition can be formed. We study a model of coalition formation where the value depends on a social network and exactly k coalitions must be formed. Additionally, in this context we present a new problem for an organizer who would like to introduce members of the social network to each other in order to increase the social welfare or to stabilize a coalition structure. We show that, when the number of coalitions, k, is fixed and there are not many negative edges, it is possible to find the coalition structure that maximizes the social welfare in polynomial time. Furthermore, an organizer can efficiently find the optimal set of edges to add to the network, and we experimentally demonstrate the effectiveness of this approach. In addition, we show that in our setting even when k is fixed and there are not many negative edges, finding a member of the core is intractable. However, we provide a heuristic for efficiently finding a member of the core that also guarantees a social welfare within a factor of 1/2 of the optimal social welfare. We also show that checking whether a given coalition structure is a member of the core can be done in polynomial time. Finally, we consider the problem faced by an organizer who would like to add edges to the network in order to stabilize a specific coalition structure, i.e., to make it a member of the core; we show that this problem is intractable.

      PubDate: 2018-04-10T21:43:17Z
      DOI: 10.1016/j.artint.2018.03.004
      Issue No: Vol. 259 (2018)
       
  • Extracting mutual exclusion invariants from lifted temporal planning
           domains
    • Authors: Sara Bernardini; Fabio Fagnani; David E. Smith
      Pages: 1 - 65
      Publication date: May 2018
      Source: Artificial Intelligence, Volume 258
      Abstract:
      We present a technique for automatically extracting mutual exclusion invariants from temporal planning instances. It first identifies a set of invariant templates by inspecting the lifted representation of the domain and then checks these templates against properties that assure invariance. Our technique builds on other approaches to invariant synthesis presented in the literature but departs from their limited focus on instantaneous actions by addressing temporal domains. To deal with time, we formulate invariance conditions that account for the entire temporal structure of the actions and the possible concurrent interactions between them. As a result, we construct a more comprehensive technique than previous methods, which is able to find not only invariants for temporal domains but also a broader set of invariants for sequential domains. Our experimental results provide evidence that our domain analysis is effective at identifying a more extensive set of invariants, which results in the generation of fewer multi-valued state variables. We show that, in turn, this reduction in the number of variables reflects positively on the performance of the temporal planners that use a variable/value representation.

      PubDate: 2018-02-26T12:06:17Z
      DOI: 10.1016/j.artint.2018.01.004
      Issue No: Vol. 258 (2018)
       
  • Autonomous agents modelling other agents: A comprehensive survey and open
           problems
    • Authors: Stefano V. Albrecht; Peter Stone
      Pages: 66 - 95
      Publication date: May 2018
      Source: Artificial Intelligence, Volume 258
      Abstract:
      Much research in artificial intelligence is concerned with the development of autonomous agents that can interact effectively with other agents. An important aspect of such agents is the ability to reason about the behaviours of other agents, by constructing models which make predictions about various properties of interest (such as actions, goals, beliefs) of the modelled agents. A variety of modelling approaches now exist which vary widely in their methodology and underlying assumptions, catering to the needs of the different sub-communities within which they were developed and reflecting the different practical uses for which they are intended. The purpose of the present article is to provide a comprehensive survey of the salient modelling methods which can be found in the literature. The article concludes with a discussion of open problems which may form the basis for fruitful future research.

      PubDate: 2018-02-26T12:06:17Z
      DOI: 10.1016/j.artint.2018.01.002
      Issue No: Vol. 258 (2018)
       
  • Star-topology decoupled state space search
    • Authors: Daniel Gnad; Jörg Hoffmann
      Pages: 24 - 60
      Publication date: April 2018
      Source: Artificial Intelligence, Volume 257
      Abstract:
      State space search is a basic method for analyzing reachability in discrete transition systems. To tackle the state space explosion in large, compactly described transition systems, a wealth of techniques (e.g., partial-order reduction) has been developed that reduce the search space without affecting the existence of (optimal) solution paths. Focusing on classical AI planning, where the compact description is in terms of a vector of state variables, an initial state, a goal condition, and a set of actions, we add another technique, that we baptize star-topology decoupling, into this arsenal. A star topology partitions the state variables into components so that a single center component directly interacts with several leaf components, but the leaves interact only via the center. Many applications explicitly come with such structure; any classical planning task can be viewed in this way by selecting the center as a subset of state variables separating connected leaf components. Our key observation is that, given such a star topology, the leaves are conditionally independent given the center, in the sense that, given a fixed path of transitions by the center, the possible center-compliant paths are independent across the leaves. Our decoupled search hence branches over center transitions only, and maintains the center-compliant paths for each leaf separately. As we show, this method has exponential separations to all previous search reduction techniques, i.e., examples where it results in exponentially less effort. One can, in principle, prune duplicates in a way so that the decoupled state space can never be larger than the original one. Standard search algorithms remain applicable using simple transformations. Our experiments exhibit large improvements on standard AI planning benchmarks with a pronounced star topology.

      PubDate: 2018-02-05T17:01:06Z
      DOI: 10.1016/j.artint.2017.12.004
      Issue No: Vol. 257 (2018)
       
  • The complexity landscape of decompositional parameters for ILP
    • Authors: Robert Ganian; Sebastian Ordyniak
      Pages: 61 - 71
      Publication date: April 2018
      Source: Artificial Intelligence, Volume 257
      Abstract:
      Integer Linear Programming (ILP) can be seen as the archetypical problem for NP-complete optimization problems, and a wide range of problems in artificial intelligence are solved in practice via a translation to ILP. Despite its huge range of applications, only a few tractable fragments of ILP are known, probably the most prominent of which is based on the notion of total unimodularity. Using entirely different techniques, we identify new tractable fragments of ILP by studying structural parameterizations of the constraint matrix within the framework of parameterized complexity. In particular, we show that ILP is fixed-parameter tractable when parameterized by the treedepth of the constraint matrix and the maximum absolute value of any coefficient occurring in the ILP instance. Together with matching hardness results for the more general parameter treewidth, we give an overview of the complexity of ILP w.r.t. decompositional parameters defined on the constraint matrix.

      PubDate: 2018-02-05T17:01:06Z
      DOI: 10.1016/j.artint.2017.12.006
      Issue No: Vol. 257 (2018)
       
  • Enhancing context knowledge repositories with justifiable exceptions
    • Authors: Loris Bozzato; Thomas Eiter; Luciano Serafini
      Pages: 72 - 126
      Publication date: April 2018
      Source: Artificial Intelligence, Volume 257
      Abstract:
      Dealing with context dependent knowledge is a well-known area of study that roots in John McCarthy's seminal work. More recently, the Contextualized Knowledge Repository (CKR) framework has been conceived as a logic-based approach in which knowledge bases have a two layered structure, modeled by a global context and a set of local contexts. The global context not only contains the meta-knowledge defining the properties of local contexts, but also holds the global (context independent) object knowledge that is shared by all of the local contexts. In many practical cases, however, it is desirable to leave open the possibility of “overriding” the global object knowledge at the local level: in other words, it is interesting to recognize the pieces of knowledge that can admit exceptional instances in the local contexts that do not need to satisfy the general axiom. To address this need, we present in this paper an extension of CKR in which defeasible axioms can be included in the global context. The latter are verified in the local contexts only for the instances for which no exception to overriding exists, where exceptions require a justification in terms of facts that are provable from the knowledge base. We formally define this semantics and study some semantic and computational properties, where we characterize the complexity of the major reasoning tasks, among them satisfiability testing, instance checking, and conjunctive query answering. Furthermore, we present a translation of extended CKRs with knowledge bases in the Description Logic SROIQ-RL under the novel semantics to datalog programs under the stable model (answer set) semantics. We also present an implementation prototype and examine its scalability with respect to the size of the input CKR and the amount (level) of defeasibility in experiments.
      Finally, we compare our representation approach with some major formalisms for expressing defeasible knowledge in Description Logics and contextual knowledge representation. Our work adds to the body of results on using deductive database technology such as SQL and datalog in these areas, and provides an expressive formalism (in terms of intrinsic complexity) for exception handling by overriding.

      PubDate: 2018-02-05T17:01:06Z
      DOI: 10.1016/j.artint.2017.12.005
      Issue No: Vol. 257 (2018)
       
  • A general semi-structured formalism for computational argumentation:
           Definition, properties, and examples of application
    • Authors: Pietro Baroni; Massimiliano Giacomin; Beishui Liao
      Pages: 158 - 207
      Publication date: April 2018
      Source: Artificial Intelligence, Volume 257
      Abstract:
      In the field of computational argumentation several formalisms featuring different levels of abstraction and focusing on different aspects of the argumentation process have been developed. Their combined use, necessary to achieve a comprehensive formal coverage of the argumentation phenomenon, gives rise to a nontrivial interplay between different abstraction levels, so that counterintuitive or undesirable outcomes may result from the combination of formalisms which appear to be well-behaved when considered in isolation. To address this problem we introduce a semi-structured formalism for argumentation, called LAF-ensembles, capturing a set of essential features of structured arguments and define a class of set based argumentation frameworks appropriate to support a semantic assessment of arguments for LAF-ensembles. It is shown that, under suitable assumptions, the combination of a LAF-ensemble and of an appropriate argumentation framework is guaranteed to produce justification outcomes satisfying a set of essential requirements. The generality and usefulness of the proposed approach are demonstrated by illustrating its ability to capture as instances and enhance two structured argumentation formalisms from the literature, namely Vreeswijk's abstract argument systems and Modgil and Prakken's ASPIC+. In particular, a revised version of the latter formalism, properly dealing with generic contrariness and solving significant technical limitations of ASPIC+, is proposed.

      PubDate: 2018-04-10T21:43:17Z
      DOI: 10.1016/j.artint.2018.01.003
      Issue No: Vol. 257 (2018)
       
  • Incentive-based Search for Efficient Equilibria of the Public Goods Game
    • Authors: Vadim Levit; Zohar Komarovsky; Tal Grinshpoun; Amnon Meisels
      Publication date: Available online 1 May 2018
      Source: Artificial Intelligence
      Abstract:
      The “best-shot” public goods game is a network game, defined on a social network. The simple version of the public goods game (PGG) has a fixed utility for a player who has at least a single neighbor buying the good. Players in the general version of PGG have additional utility when multiple neighbors purchase the good. The general version of the public goods game is shown to be a potential game, establishing the convergence to a stable state (i.e., a pure Nash equilibrium – PNE) by best-response dynamics. One can think of best-response dynamics as a distributed algorithm that runs in a fixed order of players/agents and is guaranteed to converge to a PNE. A new distributed algorithm is proposed for finding PNEs with improved efficiency by the use of transfer of payoffs among players. For the simple version of PGG, it is shown that the proposed algorithm can stabilize an outcome that maximizes social welfare. For the general version of the game, the proposed procedure transforms any initial outcome into a stable solution at least as efficient as the initial outcome by using transfers. An extensive experimental evaluation on randomly generated PGGs demonstrates that whereas pure best-response dynamics converges on stable states that have lower efficiency than the initial outcome, the proposed procedure finds PNEs of higher efficiency.
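      The best-response dynamics described for the simple version of PGG can be sketched as follows: with a purchase cost strictly between 0 and 1, a player's best response is to buy the good iff no neighbor currently buys it, and updating players in a fixed order converges to a pure Nash equilibrium. The graph and cost below are illustrative assumptions; the general multi-neighbor utility and the payoff-transfer procedure are not reproduced here.

```python
def best_response_dynamics(adj, cost=0.5):
    """Simple best-shot PGG: utility 1 if you or a neighbor buys the good,
    minus `cost` (0 < cost < 1) if you buy it yourself. The best response is
    therefore to buy iff no neighbor currently buys. `adj` maps each player
    to its neighbor list."""
    n = len(adj)
    buys = [True] * n          # arbitrary initial outcome
    changed = True
    while changed:             # sweep players in a fixed order until stable
        changed = False
        for i in range(n):
            br = not any(buys[j] for j in adj[i])
            if buys[i] != br:
                buys[i] = br
                changed = True
    return buys                # a pure Nash equilibrium (PNE)
```

      In the simple version, the buyers in any such stable state form a maximal independent set of the network: no two buyers are adjacent, and every non-buyer has a buying neighbor.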

      PubDate: 2018-05-16T20:45:23Z
      DOI: 10.1016/j.artint.2018.04.004
       
  • Entropy-based Pruning for Learning Bayesian Networks using BIC
    • Authors: Cassio P. de Campos; Mauro Scanagatta; Giorgio Corani; Marco Zaffalon
      Publication date: Available online 18 April 2018
      Source: Artificial Intelligence
      Abstract:
      For decomposable score-based structure learning of Bayesian networks, existing approaches first compute a collection of candidate parent sets for each variable and then optimize over this collection by choosing one parent set for each variable without creating directed cycles while maximizing the total score. We target the task of constructing the collection of candidate parent sets when the score of choice is the Bayesian Information Criterion (BIC). We provide new non-trivial results that can be used to prune the search space of candidate parent sets of each node. We analyze how these new results relate to previous ideas in the literature both theoretically and empirically. We show in experiments with UCI data sets that gains can be significant. Since the new pruning rules are easy to implement and have low computational costs, they can be promptly integrated into all state-of-the-art methods for structure learning of Bayesian networks.
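      To make the setting concrete, here is a toy sketch of BIC scoring of candidate parent sets over binary variables, combined with the classic subset-dominance pruning rule (a candidate set can be dropped if some subset of it scores at least as well, since any DAG using the superset could use the subset instead). The paper's new entropy-based pruning rules are stronger and are not reproduced here; the data and variable names are invented.

```python
import math
from itertools import combinations
from collections import Counter

def bic_score(data, x, parents):
    """BIC of binary variable x given parent set `parents`;
    data is a list of tuples of 0/1 values indexed by variable."""
    n = len(data)
    joint = Counter(tuple(row[p] for p in parents) + (row[x],) for row in data)
    marg = Counter(tuple(row[p] for p in parents) for row in data)
    ll = sum(c * math.log(c / marg[k[:-1]]) for k, c in joint.items())
    num_params = (2 - 1) * (2 ** len(parents))   # (|X|-1) * prod of parent cards
    return ll - 0.5 * math.log(n) * num_params

def candidate_parent_sets(data, x, variables, max_size=2):
    """Score all parent sets up to max_size, then apply subset-dominance pruning."""
    others = [v for v in variables if v != x]
    scored = {ps: bic_score(data, x, ps)
              for k in range(max_size + 1)
              for ps in combinations(others, k)}
    keep = {}
    for ps, s in scored.items():
        dominated = any(scored[sub] >= s
                        for k in range(len(ps))
                        for sub in combinations(ps, k))
        if not dominated:
            keep[ps] = s
    return keep
```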

      PubDate: 2018-04-19T14:34:45Z
      DOI: 10.1016/j.artint.2018.04.002
       
  • Measuring inconsistency with constraints for propositional knowledge bases
    • Authors: Kedian Mu
      Publication date: June 2018
      Source: Artificial Intelligence, Volume 259
      Abstract:
      Measuring inconsistency has been considered as a necessary starting point to understand the nature of inconsistency in a knowledge base better. For practical applications, however, we often have to face some constraints on resolving inconsistency. In this paper, we propose a graph-based approach to measuring the inconsistency for a propositional knowledge base with one or both of two typical types of constraints on modifying formulas. Here the first type of constraint, called the hard constraint, describes a pair of sets of formulas such that all the formulas in the first set should be protected from being modified on the condition that all the formulas in the second set must be modified in order to restore the consistency of that base, while the second type, called the soft constraint, describes a set of pairs of formulas that are not allowed to be modified together. First, we use a bipartite graph to represent the relation between formulas and minimal inconsistent subsets of a knowledge base. Then we show that such a graph-based representation allows us to characterize the inconsistency with constraints in a concise way. Based on this characterization, we thus propose measures for the degree of inconsistency and for the responsibility of each formula for the inconsistency of a knowledge base with constraints, respectively. Finally, we show that these measures can be well explained based on Halpern and Pearl's causal model and Chockler and Halpern's notion of responsibility.
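      The bipartite formula/minimal-inconsistent-subset (MIS) view can be sketched as follows, using an unconstrained blame distribution in the spirit of Hunter and Konieczny's MinInc-based measures: each MIS spreads one unit of blame equally over its member formulas. This is a simplified illustration, not the paper's constrained measures, and the MIS sets here are given as input rather than computed from propositional formulas.

```python
from fractions import Fraction

def responsibility(formulas, mis_list):
    """Bipartite view: formulas on one side, MISes on the other; an edge joins
    a formula to each MIS containing it. Each MIS distributes a unit of blame
    equally over its members (a simplified MinInc-style measure)."""
    blame = {f: Fraction(0) for f in formulas}
    for mis in mis_list:
        for f in mis:
            blame[f] += Fraction(1, len(mis))
    return blame

def degree_of_inconsistency(mis_list, formulas):
    """A crude normalized MIS count; one of several graph-based options."""
    return Fraction(len(mis_list), max(1, len(formulas)))
```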

      PubDate: 2018-04-10T21:43:17Z
       
  • Deep Thinking, Where Machine Intelligence Ends and Human Creativity
           Begins, Garry Kasparov, Mig Greengard. John Murray, London (2017), ISBN:
           978-1-47365-350-4
    • Authors: Flash qFiasco
      Publication date: Available online 10 April 2018
      Source: Artificial Intelligence


      PubDate: 2018-04-10T21:43:17Z
       
  • Learning in the machine: Random backpropagation and the deep learning
           channel
    • Authors: Pierre Baldi; Peter Sadowski; Zhiqin Lu
      Publication date: Available online 3 April 2018
      Source: Artificial Intelligence
      Abstract:
      Random backpropagation (RBP) is a variant of the backpropagation algorithm for training neural networks, where the transposes of the forward matrices are replaced by fixed random matrices in the calculation of the weight updates. It is remarkable both because of its effectiveness, in spite of using random matrices to communicate error information, and because it completely removes the taxing requirement of maintaining symmetric weights in a physical neural system. To better understand random backpropagation, we first connect it to the notions of local learning and learning channels. Through this connection, we derive several alternatives to RBP, including skipped RBP (SRBP), adaptive RBP (ARBP), sparse RBP, and their combinations (e.g. ASRBP) and analyze their computational complexity. We then study their behavior through simulations using the MNIST and CIFAR-10 benchmark datasets. These simulations show that most of these variants work robustly, almost as well as backpropagation, and that multiplication by the derivatives of the activation functions is important. As a follow-up, we also study the lower end of the number of bits required to communicate error information over the learning channel. We then provide partial intuitive explanations for some of the remarkable properties of RBP and its variations. Finally, we prove several mathematical results, including the convergence to fixed points of linear chains of arbitrary length, the convergence to fixed points of linear autoencoders with decorrelated data, the long-term existence of solutions for linear systems with a single hidden layer and convergence in special cases, and the convergence to fixed points of non-linear chains, when the derivative of the activation functions is included.
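      The core substitution RBP makes, sending the output error back through a fixed random matrix B instead of the transpose of the forward matrix, can be sketched for a single tanh hidden layer. The sizes, learning rate, and data below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbp_step(x, y, W1, W2, B, lr=0.1):
    """One weight update with random backpropagation. Standard BP would
    compute the hidden error as W2.T @ e; RBP uses the fixed random B."""
    h = np.tanh(W1 @ x)             # hidden activity
    y_hat = W2 @ h                  # linear output
    e = y_hat - y                   # output error
    delta_h = (B @ e) * (1 - h**2)  # randomly backprojected error * tanh'
    W2 -= lr * np.outer(e, h)       # output layer: exact gradient step
    W1 -= lr * np.outer(delta_h, x) # hidden layer: random-feedback step
    return 0.5 * float(e @ e)       # squared-error loss on this sample

# Hypothetical sizes: 4 inputs, 8 hidden units, 2 outputs.
n_in, n_h, n_out = 4, 8, 2
W1 = 0.1 * rng.standard_normal((n_h, n_in))
W2 = 0.1 * rng.standard_normal((n_out, n_h))
B = 0.1 * rng.standard_normal((n_h, n_out))   # fixed random feedback matrix

x = rng.standard_normal(n_in)
y = np.array([1.0, -1.0])
losses = [rbp_step(x, y, W1, W2, B) for _ in range(200)]
```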

      PubDate: 2018-04-10T21:43:17Z
      DOI: 10.1016/j.artint.2018.03.003
       
  • Sequential plan recognition: An iterative approach to disambiguating
           between hypotheses
    • Authors: Reuth Mirsky; Roni Stern; Kobi Gal; Meir Kalech
      Abstract: Publication date: Available online 3 April 2018
      Source:Artificial Intelligence
      Author(s): Reuth Mirsky, Roni Stern, Kobi Gal, Meir Kalech
      Plan recognition algorithms output hypotheses about an agent's plans from its observed actions. Due to imperfect knowledge about the agent's behavior and the environment, it is often the case that there are multiple hypotheses about an agent's plans that are consistent with the observations, though only one of these hypotheses is correct. This paper addresses the problem of how to disambiguate between hypotheses during the recognition process, by querying the acting agent about whether a given plan is part of the correct hypothesis. The main contribution is a sound and complete process for reducing the set of possible hypotheses called Sequential Plan Recognition (SPR). SPR iteratively queries the user and revises the set of possible hypotheses according to the outcome of the query. Several policies are provided for choosing which plans to query the agent about. These policies address the problem of how to reduce the number of hypotheses during the recognition process using a minimal number of queries. The proposed policies include policies that use maximum likelihood and information gain measures. The paper provides a complexity analysis of the SPR process and the proposed query policies, and demonstrates the efficiency of SPR on two known domains from the literature, describing how performance and runtime are affected by features of the domain. Our results can inform the design of future plan recognition systems that interleave the recognition process with intelligent interventions of their users.
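As an illustration of the querying idea (an invented toy, not the paper's implementation), the sketch below represents each hypothesis as a set of plan fragments and queries the fragment whose membership count is closest to half the remaining hypotheses, a crude stand-in for the likelihood and information-gain policies:

```python
# Each hypothesis is a frozenset of plan fragments; the agent answers truthfully.
def spr(hypotheses, true_hypothesis):
    """Iteratively query plan fragments until one hypothesis remains.

    Assumes true_hypothesis is among the input hypotheses and that the
    hypotheses are pairwise distinct, so an informative query always exists.
    """
    queries = 0
    while len(hypotheses) > 1:
        fragments = set().union(*hypotheses)
        half = len(hypotheses) / 2
        # Pick a fragment that splits the hypothesis set as evenly as possible,
        # so either answer prunes as many hypotheses as possible.
        plan = min(
            (f for f in fragments
             if 0 < sum(f in h for h in hypotheses) < len(hypotheses)),
            key=lambda f: abs(sum(f in h for h in hypotheses) - half),
        )
        answer = plan in true_hypothesis          # query the acting agent
        hypotheses = [h for h in hypotheses if (plan in h) == answer]
        queries += 1
    return hypotheses[0], queries

hyps = [frozenset({"a", "b"}), frozenset({"a", "c"}),
        frozenset({"b", "c"}), frozenset({"d"})]
result, n_queries = spr(hyps, frozenset({"a", "c"}))
```

Here two queries always suffice to isolate the correct hypothesis from the four candidates, whichever even-split fragment is asked first.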

      PubDate: 2018-04-10T21:43:17Z
      DOI: 10.1016/j.artint.2018.03.006
       
  • Optimal Defense Against Election Control by Deleting Voter Groups
    • Authors: Yue Yin; Yevgeniy Vorobeychik; Bo An; Noam Hazon
      Abstract: Publication date: Available online 21 February 2018
      Source:Artificial Intelligence
      Author(s): Yue Yin, Yevgeniy Vorobeychik, Bo An, Noam Hazon
      Election control encompasses attempts by an external agent to alter the structure of an election in order to change its outcome. This problem is both a fundamental theoretical problem in social choice, and a major practical concern for democratic institutions. Consequently, this issue has received considerable attention, particularly as it pertains to different voting rules. In contrast, the problem of how election control can be prevented or deterred has been largely ignored. We introduce the problem of optimal defense against election control, including destructive and constructive control, where manipulation is allowed at the granularity of groups of voters (e.g., voting locations) through a denial-of-service attack, and the defender allocates limited protection resources to prevent control. We consider plurality voting, and show that it is computationally hard to prevent both types of control, though destructive control itself can be performed in polynomial time. For defense against destructive control, we present a double-oracle framework for computing an optimal prevention strategy. We show that both defender and attacker best response subproblems are NP-complete, and develop exact mixed-integer linear programming approaches for solving these, as well as fast heuristic methods. We then extend this general approach to develop effective algorithmic solutions for defense against constructive control. Finally, we generalize the model and algorithmic approaches to consider uncertainty about voter preferences. Experiments conducted on both synthetic and real data demonstrate that the proposed computational framework can scale to realistic problem instances.
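To make the attack side of the model concrete, here is a hypothetical toy instance (the candidates and groups are invented): destructive control deletes up to k voter groups to change the plurality winner. Brute force suffices at this size; the paper gives a polynomial-time algorithm for the general case.

```python
from itertools import combinations

# Hypothetical voter groups (e.g., voting locations) with plurality ballots.
groups = [["a", "a"], ["a", "b"], ["b", "b"], ["c"]]
k = 1                                  # attack budget: groups that may be deleted

def plurality(ballots):
    # Lexicographic tie-breaking among tied candidates.
    return max(sorted(set(ballots)), key=lambda c: ballots.count(c))

original = plurality([b for g in groups for b in g])
success = None
for r in range(1, k + 1):
    for combo in combinations(range(len(groups)), r):
        remaining = [b for i, g in enumerate(groups) if i not in combo for b in g]
        if remaining and plurality(remaining) != original:
            success = combo            # deleting these groups changes the winner
            break
    if success:
        break
```

On this instance "a" wins initially, but deleting the single group `["a", "a"]` hands the election to "b", which is exactly the kind of denial-of-service control the defender must guard against.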

      PubDate: 2018-02-26T12:06:17Z
      DOI: 10.1016/j.artint.2018.02.001
       
  • Parallelizing SMT solving: Lazy decomposition and conciliation
    • Authors: Xi Cheng; Min Zhou; Xiaoyu Song; Ming Gu; Jiaguang Sun
      Abstract: Publication date: Available online 31 January 2018
      Source:Artificial Intelligence
      Author(s): Xi Cheng, Min Zhou, Xiaoyu Song, Ming Gu, Jiaguang Sun
      Satisfiability Modulo Theories (SMT) is the satisfiability problem for first-order formulae with respect to background theories. SMT extends propositional satisfiability by introducing various underlying theories. To improve the efficiency of SMT solving, many efforts have been made on low-level algorithms but they generally cannot leverage the capability of parallel hardware. We propose a high-level and flexible framework, namely lazy decomposition and conciliation (LDC), to parallelize solving for quantifier-free SMT problems. Overall, an SMT problem is first decomposed into subproblems; then local reasoning inside each subproblem is conciliated with the global reasoning over the shared symbols across subproblems in parallel. LDC can be built on any existing solver without tuning its internal implementation, and is flexible as it is applicable to various underlying theories. We instantiate LDC in the theory of equality with uninterpreted functions, and implement a parallel solver PZ3 based on Z3. Experimental results on the QF_UF benchmarks from SMT-LIB as well as random problems show the potential of LDC, as (1) PZ3 generally outperforms Z3 in 4 out of 8 problem subcategories under various core configurations; (2) PZ3 usually achieves super-linear speed-up over Z3 on problems with sparse structures, which makes it possible to choose an appropriate solver from Z3 and PZ3 in advance according to the structure of the input problem; (3) compared to PCVC4, a state-of-the-art portfolio-based parallel SMT solver, PZ3 achieves speed-up on a larger portion of problems and has better overall speed-up ratio.

      PubDate: 2018-02-05T17:01:06Z
      DOI: 10.1016/j.artint.2018.01.001
       
  • Implicit, explicit and speculative knowledge
    • Authors: Hans van Ditmarsch; Tim French; Fernando R. Velázquez-Quesada; Yì N. Wáng
      Pages: 35 - 67
      Abstract: Publication date: March 2018
      Source:Artificial Intelligence, Volume 256
      Author(s): Hans van Ditmarsch, Tim French, Fernando R. Velázquez-Quesada, Yì N. Wáng
      We compare different epistemic notions in the presence of awareness of propositional variables: the logic of implicit knowledge (in which explicit knowledge is definable), the logic of explicit knowledge, and the logic of speculative knowledge. Speculative knowledge is a novel epistemic notion that permits reasoning about unawareness. These logics are interpreted on epistemic awareness models: these are multi-agent Kripke structures for propositional awareness (in each state an agent may only be aware of formulas containing occurrences of a subset of all propositional variables). Different notions of bisimulation are suitable for these logics. We provide correspondence between bisimulation and modal equivalence on image-finite models for these logics. Expressivity and axiomatizations are investigated for models without restrictions, and for models with equivalence relations for all agents (modeling knowledge) and awareness introspection (agents know what they are aware of). We show that the logic of speculative knowledge is as expressive as the logic of explicit knowledge, and the logic of implicit knowledge is more expressive than the two other logics. We also present expressivity results for more restricted languages. We then provide and compare axiomatizations for the three logics; the axiomatizations for speculative knowledge are novel. We compare our results to those for awareness achieved in artificial intelligence, computer science, philosophy, and economics.

      PubDate: 2017-12-12T13:51:16Z
      DOI: 10.1016/j.artint.2017.11.004
      Issue No: Vol. 256 (2017)
       
  • Reactive multi-context systems: Heterogeneous reasoning in dynamic
           environments
    • Authors: Gerhard Brewka; Stefan Ellmauthaler; Ricardo Gonçalves; Matthias Knorr; João Leite; Jörg Pührer
      Pages: 68 - 104
      Abstract: Publication date: March 2018
      Source:Artificial Intelligence, Volume 256
      Author(s): Gerhard Brewka, Stefan Ellmauthaler, Ricardo Gonçalves, Matthias Knorr, João Leite, Jörg Pührer
      Managed multi-context systems (mMCSs) allow for the integration of heterogeneous knowledge sources in a modular and very general way. They were, however, mainly designed for static scenarios and are therefore not well-suited for dynamic environments in which continuous reasoning over such heterogeneous knowledge with constantly arriving streams of data is necessary. In this paper, we introduce reactive multi-context systems (rMCSs), a framework for reactive reasoning in the presence of heterogeneous knowledge sources and data streams. We show that rMCSs are indeed well-suited for this purpose by illustrating how several typical problems arising in the context of stream reasoning can be handled using them, by showing how inconsistencies possibly occurring in the integration of multiple knowledge sources can be handled, and by arguing that the potential non-determinism of rMCSs can be avoided if needed by using an alternative, more skeptical well-founded semantics with beneficial computational properties. We also investigate the computational complexity of various reasoning problems related to rMCSs. Finally, we discuss related work, and show that rMCSs not only generalize mMCSs to dynamic settings, but also capture and extend relevant approaches to dynamics in knowledge representation and stream reasoning.

      PubDate: 2017-12-12T13:51:16Z
      DOI: 10.1016/j.artint.2017.11.007
      Issue No: Vol. 256 (2017)
       
  • Decentralized Reinforcement Learning of robot behaviors
    • Authors: David L. Leottau; Javier Ruiz-del-Solar; Robert Babuška
      Pages: 130 - 159
      Abstract: Publication date: March 2018
      Source:Artificial Intelligence, Volume 256
      Author(s): David L. Leottau, Javier Ruiz-del-Solar, Robert Babuška
      A multi-agent methodology is proposed for Decentralized Reinforcement Learning (DRL) of individual behaviors in problems where multi-dimensional action spaces are involved. When using this methodology, sub-tasks are learned in parallel by individual agents working toward a common goal. In addition to proposing this methodology, three specific multi-agent DRL approaches are considered: DRL-Independent, DRL Cooperative-Adaptive (CA), and DRL-Lenient. These approaches are validated and analyzed with an extensive empirical study using four different problems: 3D Mountain Car, SCARA Real-Time Trajectory Generation, Ball-Dribbling in humanoid soccer robotics, and Ball-Pushing using differential drive robots. The experimental validation provides evidence that DRL implementations show better performance and faster learning than their centralized counterparts, while using less computational resources. The DRL-Lenient and DRL-CA algorithms achieve the best final performance for the four tested problems, outperforming their DRL-Independent counterparts. Furthermore, the benefits of DRL-Lenient and DRL-CA are more noticeable when the problem complexity increases and the centralized scheme becomes intractable given the available computational resources and training time.
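A minimal sketch of the decentralization idea, under invented toy assumptions: a two-dimensional action space is split between two independent single-state Q-learners (in the spirit of the DRL-Independent scheme) that share one reward signal for their common goal.

```python
import random

random.seed(1)

# Toy common-goal task: reward 1 only when the joint action hits the target.
target = (1, 0)                      # the rewarded joint action
q = [[0.0, 0.0], [0.0, 0.0]]         # q[agent][action]: one learner per action dimension
alpha, eps = 0.2, 0.1                # learning rate and exploration rate (arbitrary)

def act(agent):
    # epsilon-greedy action selection for one agent's own dimension
    if random.random() < eps:
        return random.randrange(2)
    return max(range(2), key=lambda a: q[agent][a])

for _ in range(300):
    joint = (act(0), act(1))
    r = 1.0 if joint == target else 0.0
    for agent, a in enumerate(joint):    # each agent updates on the shared reward
        q[agent][a] += alpha * (r - q[agent][a])

greedy = (max(range(2), key=lambda a: q[0][a]),
          max(range(2), key=lambda a: q[1][a]))
```

Neither learner ever sees the other's action, yet the shared reward is enough for the greedy joint policy to settle on the target, which is the basic premise the paper's richer schemes (Lenient, CA) build on.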

      PubDate: 2017-12-27T12:47:01Z
      DOI: 10.1016/j.artint.2017.12.001
      Issue No: Vol. 256 (2017)
       
  • Fixpoint semantics for active integrity constraints
    • Authors: Bart Bogaerts; Luís Cruz-Filipe
      Pages: 43 - 70
      Abstract: Publication date: February 2018
      Source:Artificial Intelligence, Volume 255
      Author(s): Bart Bogaerts, Luís Cruz-Filipe
      Active integrity constraints (AICs) constitute a formalism to associate with a database not just the constraints it should adhere to, but also how to fix the database in case one or more of these constraints are violated. The intuitions regarding which repairs are “good” given such a description are closely related to intuitions that live in various areas of non-monotonic reasoning, such as logic programming and autoepistemic logic. In this paper, we apply approximation fixpoint theory, an abstract, algebraic framework designed to unify semantics of non-monotonic logics, to the field of AICs. This results in a new family of semantics for AICs. We study properties of our new semantics and relationships to existing semantics. In particular, we argue that two of the newly defined semantics stand out. Grounded repairs have a simple definition that is purely based on semantic principles that semantics for AICs should adhere to. And, as we show, they coincide with the intended interpretation of AICs on many examples. The second semantics of interest is the AFT-well-founded semantics: it is a computationally cheap semantics that provides upper and lower bounds for many other classes of repairs.

      PubDate: 2017-12-12T13:51:16Z
      DOI: 10.1016/j.artint.2017.11.003
      Issue No: Vol. 255 (2017)
       
  • A quality assuring, cost optimal multi-armed bandit mechanism for
           expertsourcing
    • Authors: Shweta Jain; Sujit Gujar; Satyanath Bhat; Onno Zoeter; Y. Narahari
      Pages: 44 - 63
      Abstract: Publication date: January 2018
      Source:Artificial Intelligence, Volume 254
      Author(s): Shweta Jain, Sujit Gujar, Satyanath Bhat, Onno Zoeter, Y. Narahari
      There are numerous situations when a service requester wishes to expertsource a series of identical but non-trivial tasks from a pool of experts so as to achieve an assured accuracy level for each task, in a cost optimal way. The experts available are typically heterogeneous with unknown but fixed qualities and different service costs. The service costs are usually private to the experts and the experts could be strategic about their costs. The problem is to select for each task an optimal subset of experts so that the outcome obtained after aggregating the opinions from the selected experts guarantees a target level of accuracy. The problem is a challenging one even in a non-strategic setting since the accuracy of an aggregated outcome depends on unknown qualities. We develop a novel multi-armed bandit (MAB) mechanism for solving this problem. First, we propose a framework, Assured Accuracy Bandit (AAB) framework, which leads to a MAB algorithm, Constrained Confidence Bound for Non-Strategic Setting (CCB-NS). We derive an upper bound on the number of time steps this algorithm chooses a sub-optimal set, which depends on the target accuracy and true qualities. A more challenging situation arises when the requester not only has to learn the qualities of the experts but has to elicit their true service costs as well. We modify the CCB-NS algorithm to obtain an adaptive exploration separated algorithm Constrained Confidence Bound for Strategic Setting (CCB-S). The CCB-S algorithm produces an ex-post monotone allocation rule that can then be transformed into an ex-post incentive compatible and ex-post individually rational mechanism. This mechanism learns the qualities of the experts and guarantees a given target accuracy level in a cost optimal way. We also provide a lower bound on the number of times any algorithm must select a sub-optimal set and we see that the lower bound matches our upper bound up to a constant factor. 
We provide insights on a practical implementation of this framework through an illustrative example and demonstrate the efficacy of our algorithms through simulations.
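The assured-accuracy idea can be sketched in a drastically simplified, non-strategic form (invented numbers; the paper selects subsets of experts and elicits costs): keep Hoeffding-style confidence bounds on each expert's quality, exploit the cheapest expert whose lower bound meets the target accuracy, and otherwise explore optimistically by upper bound.

```python
import math
import random

random.seed(7)

true_quality = [0.95, 0.6]   # probability of a correct answer, unknown to the requester
cost = [3.0, 1.0]
target = 0.8                 # required accuracy level per task

n = [0, 0]                   # pulls per expert
s = [0.0, 0.0]               # observed successes per expert

def bounds(i, t):
    mean = s[i] / n[i]
    rad = math.sqrt(math.log(t + 2) / (2 * n[i]))   # Hoeffding-style radius
    return mean - rad, mean + rad

def pull(i):
    n[i] += 1
    s[i] += 1.0 if random.random() < true_quality[i] else 0.0

for i in range(2):           # initialise: try every expert once
    pull(i)

for t in range(2000):
    assured = [i for i in range(2) if bounds(i, t)[0] >= target]
    if assured:
        i = min(assured, key=lambda j: cost[j])           # cheapest assured expert
    else:
        i = max(range(2), key=lambda j: bounds(j, t)[1])  # optimistic exploration
    pull(i)
```

After enough pulls the high-quality expert's lower confidence bound clears the target and it absorbs nearly all assignments, while the cheap low-quality expert is pulled only the logarithmic number of times the confidence bounds require.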

      PubDate: 2017-11-16T02:37:41Z
      DOI: 10.1016/j.artint.2017.10.001
      Issue No: Vol. 254 (2017)
       
  • Towards breaking more composition symmetries in partial symmetry breaking
    • Authors: Jimmy H.M. Lee; Zichen Zhu
      Pages: 51 - 82
      Abstract: Publication date: November 2017
      Source:Artificial Intelligence, Volume 252
      Author(s): Jimmy H.M. Lee, Zichen Zhu
      The paper proposes a dynamic method, Recursive Symmetry Breaking During Search (ReSBDS), for efficient partial symmetry breaking. We first demonstrate how Partial Symmetry Breaking During Search (ParSBDS) misses important pruning opportunities when given only a subset of symmetries to break. The investigation pinpoints the culprit and in turn suggests rectification. The main idea is to add extra symmetry breaking constraints during search recursively, to prune also symmetric nodes of some pruned subtrees. Thus, ReSBDS can break extra symmetry compositions, but is carefully designed to break only the ones that are easy to identify and inexpensive to break. We present theorems to guarantee the soundness and termination of our approach, and compare our method with popular static and dynamic methods. When the variable (value) heuristic is static, ReSBDS is also complete in eliminating all interchangeable variables (values) given only the generator symmetries. We further propose a light version of ReSBDS (LReSBDS), which has slightly weaker pruning power than ReSBDS but reduced overhead. We give a theoretical characterization of the soundness and termination of LReSBDS, and comparisons of pruning strength against other symmetry breaking methods including ReSBDS. Extensive experiments confirm the efficiency of ReSBDS and LReSBDS, when compared against state of the art methods.

      PubDate: 2017-09-26T05:22:05Z
      DOI: 10.1016/j.artint.2017.07.006
      Issue No: Vol. 252 (2017)
       
  • Forward bounding on pseudo-trees for DCOPs and ADCOPs
    • Authors: Omer Litov; Amnon Meisels
      Pages: 83 - 99
      Abstract: Publication date: November 2017
      Source:Artificial Intelligence, Volume 252
      Author(s): Omer Litov, Amnon Meisels
      Complete search algorithms for solving Distributed constraint optimization problems (DCOPs) can be divided into two groups: algorithms that use a pseudo tree and algorithms that do not use one. The best performing algorithms that do not use a pseudo tree use some form of forward bounding. To gain the benefits of both worlds, a new algorithm that incorporates a hybrid approach combining the two groups is presented. The proposed algorithm – Pseudo-Tree Forward-bounding (PT-FB) – is shown to perform very well. PT-FB is then extended to solve Asymmetric DCOPs (ADCOPs). Here again, its performance is better than that of its competitors. An extensive experimental evaluation of the performance of each of the proposed algorithms is presented.

      PubDate: 2017-09-26T05:22:05Z
      DOI: 10.1016/j.artint.2017.07.003
      Issue No: Vol. 252 (2017)
       
  • Belief Revision, Minimal Change and Relaxation: A General Framework based
           on Satisfaction Systems, and Applications to Description Logics
    • Authors: Marc Aiguier; Jamal Atif; Isabelle Bloch; Céline Hudelot
      Abstract: Publication date: Available online 17 December 2017
      Source:Artificial Intelligence
      Author(s): Marc Aiguier, Jamal Atif, Isabelle Bloch, Céline Hudelot
      Belief revision of knowledge bases represented by a set of sentences in a given logic has been extensively studied, but for specific logics, mainly propositional and, more recently, Horn and description logics. Here, we propose to generalize this operation from a model-theoretic point of view, by defining revision in the abstract model theory of satisfaction systems. In this framework, we generalize to any satisfaction system the characterization of the AGM postulates given by Katsuno and Mendelzon for propositional logic in terms of minimal change among interpretations. In this generalization, the constraint on syntax independence is partially relaxed. Moreover, we study how to define revision, satisfying these weakened AGM postulates, from relaxation notions that have been first introduced in description logics to define dissimilarity measures between concepts, and the consequence of which is to relax the set of models of the old belief until it becomes consistent with the new pieces of knowledge. We show how the proposed general framework can be instantiated in different logics such as propositional, first-order, description and Horn logics. In particular for description logics, we introduce several concrete relaxation operators tailored for the description logic ALC and its fragments EL and ELU, discuss their properties and provide some illustrative examples.

      PubDate: 2017-12-27T12:47:01Z
      DOI: 10.1016/j.artint.2017.12.002
       
  • Strong Temporal Planning with Uncontrollable Durations
    • Authors: Alessandro Cimatti; Minh Do; Andrea Micheli; Marco Roveri; David E. Smith
      Abstract: Publication date: Available online 5 December 2017
      Source:Artificial Intelligence
      Author(s): Alessandro Cimatti, Minh Do, Andrea Micheli, Marco Roveri, David E. Smith
      Planning in real world domains often involves modeling and reasoning about the duration of actions. Temporal planning allows such modeling and reasoning by looking for plans that specify start and end time points for each action. In many practical cases, however, the duration of actions may be uncertain and not under the full control of the executor. For example, a navigation task may take more or less time, depending on external conditions such as terrain or weather. In this paper, we tackle the problem of strong temporal planning with uncontrollable action durations (STPUD). For actions with uncontrollable durations, the planner is only allowed to choose the start of the actions, while the end is chosen, within known bounds, by the environment. A solution plan must be robust with respect to all uncontrollable action durations, and must achieve the goal on all executions, despite the choices of the environment. We propose two complementary techniques. First, we discuss a dedicated planning method, that generalizes the state-space temporal planning framework, leveraging SMT-based techniques for temporal networks under uncertainty. Second, we present a compilation-based method, that reduces any STPUD problem to an ordinary temporal planning problem. Moreover, we investigate a set of sufficient conditions to simplify domains by removing some of the uncontrollability. We implemented both our approaches, and we experimentally evaluated our techniques on a large number of instances. Our results demonstrate the practical applicability of the two techniques, which show complementary behavior.
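The robustness requirement for a single action has a one-line arithmetic core, shown here as an invented sketch (not the paper's SMT-based machinery): with an uncontrollable duration in [dur_min, dur_max] and a deadline, a start time is strong only if it works for the environment's worst choice of duration.

```python
# Strong controllability of one action against a deadline: the chosen start
# must satisfy start + d <= deadline for EVERY duration d the environment may
# pick in [dur_min, dur_max], i.e. for the worst case d = dur_max.
def is_strong(start, deadline, dur_min, dur_max):
    return start + dur_max <= deadline

def robust_latest_start(deadline, dur_min, dur_max):
    # dur_min is irrelevant to strong controllability of a single deadline,
    # only the worst-case (maximum) duration matters.
    return deadline - dur_max

# A navigation action taking between 2 and 6 time units, deadline 10:
latest = robust_latest_start(10, 2, 6)
```

Starting at time 4 succeeds on all executions, while starting at 5 would fail whenever the environment picks the maximal duration, which is exactly the distinction between a strong plan and a merely optimistic one.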

      PubDate: 2017-12-12T13:51:16Z
       
  • A Computational Framework for Conceptual Blending
    • Authors: Manfred Eppe; Ewen Maclean; Roberto Confalonieri; Oliver Kutz; Marco Schorlemmer; Enric Plaza; Kai-Uwe Kühnberger
      Abstract: Publication date: Available online 2 December 2017
      Source:Artificial Intelligence
      Author(s): Manfred Eppe, Ewen Maclean, Roberto Confalonieri, Oliver Kutz, Marco Schorlemmer, Enric Plaza, Kai-Uwe Kühnberger
      We present a computational framework for conceptual blending, a concept invention method that is advocated in cognitive science as a fundamental and uniquely human engine for creative thinking. Our framework treats a crucial part of the blending process, namely the generalisation of input concepts, as a search problem that is solved by means of modern answer set programming methods to find commonalities among input concepts. We also address the problem of pruning the space of possible blends by introducing metrics that capture most of the so-called optimality principles, described in the cognitive science literature as guidelines to produce meaningful and serendipitous blends. As a proof of concept, we demonstrate how our system invents novel concepts and theories in domains where creativity is crucial, namely mathematics and music.

      PubDate: 2017-12-12T13:51:16Z
       
  • Rational Deployment of Multiple Heuristics in Optimal State-Space Search
    • Authors: Erez Karpas; Oded Betzalel; Solomon Eyal Shimony; David Tolpin; Ariel Felner
      Abstract: Publication date: Available online 7 November 2017
      Source:Artificial Intelligence
      Author(s): Erez Karpas, Oded Betzalel, Solomon Eyal Shimony, David Tolpin, Ariel Felner
      The obvious way to use several admissible heuristics in searching for an optimal solution is to take their maximum. In this paper, we aim to reduce the time spent on computing heuristics within the context of A* and IDA*. We discuss Lazy A* and Lazy IDA*, variants of A* and IDA*, respectively, where heuristics are evaluated lazily: only when they are essential to a decision to be made in the search process. While these lazy algorithms outperform naive maximization, we can do even better by intelligently deciding when to compute the more expensive heuristic. We present a new rational metareasoning based scheme which decides whether to compute the more expensive heuristics at all, based on a myopic regret estimate. This scheme is used to create rational lazy A* and rational lazy IDA*. We also present different methods for estimating the parameters necessary for making such decisions. An empirical evaluation in several domains supports the theoretical results, and shows that the rational variants, rational lazy A* and rational lazy IDA*, are better than their non-rational counterparts.
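The lazy-evaluation idea can be sketched as follows (an illustrative reimplementation on an invented toy instance, not the authors' code): nodes enter the open list with only the cheap heuristic h1, and the expensive h2 is computed only when a node reaches the top of the queue, re-queueing it if max(h1, h2) raises its f-value.

```python
import heapq

def lazy_astar(start, goal, neighbors, h1, h2):
    """A* with two admissible heuristics, evaluating the expensive h2 lazily."""
    open_list = [(h1(start), start, 0, False)]     # (f, state, g, h2_done)
    best_g = {start: 0}
    h2_calls = 0
    while open_list:
        f, state, g, evaluated = heapq.heappop(open_list)
        if g > best_g.get(state, float("inf")):
            continue                               # stale queue entry
        if not evaluated and state != goal:
            h2_calls += 1
            new_f = g + max(h1(state), h2(state))  # lazy: h2 only at the queue head
            if new_f > f:
                heapq.heappush(open_list, (new_f, state, g, True))
                continue                           # re-queue with the tighter f
        if state == goal:
            return g, h2_calls
        for nxt, step_cost in neighbors(state):
            ng = g + step_cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_list, (ng + h1(nxt), nxt, ng, False))
    return None, h2_calls

# Toy instance: walk on the integer line from 0 to 5, states bounded to 0..7.
def neighbors(x):
    return [(x + d, 1) for d in (-1, 1) if 0 <= x + d <= 7]

h1 = lambda x: abs(5 - x) / 2      # cheap, weak admissible heuristic
h2 = lambda x: abs(5 - x)          # pretend-expensive, stronger heuristic
gcost, h2_calls = lazy_astar(0, 5, neighbors, h1, h2)
```

The optimal cost is still found, and h2 is charged only for nodes that actually surface in the queue; the rational variants go further by asking whether even those calls pay for themselves.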

      PubDate: 2017-11-09T02:30:24Z
      DOI: 10.1016/j.artint.2017.11.001
       
  • Fundamental properties of attack relations in structured argumentation
           with priorities
    • Authors: Phan Minh Dung; Phan Minh Thang
      Abstract: Publication date: Available online 7 November 2017
      Source:Artificial Intelligence
      Author(s): Phan Minh Dung, Phan Minh Thang
      Due to a proliferation and diversity of approaches to structured argumentation with prioritized rules, several simple and intuitive principles for characterization and evaluation of the proposed attack relations have recently been introduced in [23]. While the proposed principles and properties are helpful, they do not identify unique attack relations. Any user of structured argumentation still faces a fundamental problem of determining an appropriate attack relation for her/his application and further principles that could help in identifying such attack relation. We argue that a key purpose of introducing priorities between defeasible rules is to remove undesired attacks while keeping the set of removed attacks to a minimum. This intuitive idea could be viewed as a kind of minimal-removal-principle. We show in this paper that the minimal-removal-principle together with a new simple and intuitive property of inconsistency-resolving and previously proposed properties indeed characterize a unique attack relation referred to as the canonical attack relation. We show that canonical attack relations could be characterized in three distinct ways: as the supremum of a complete upper-semilattice of regular attack relations, or by removing the undesired attacks from the basic attack relations where the undesired attacks are captured by a least fixed point of an intuitive removal function, or as the normal attack relations introduced in an earlier paper for a class of well-prioritized knowledge bases. We start our study with a language consisting only of literals and two types of attack, rebut and undercut. We then demonstrate that our approach can easily be scaled up by showing that all key results still hold for general underlying logical languages and with the inclusion of assumptions. We apply our proposed approach to value-based argumentation and show that it also leads to the canonical semantics.

      PubDate: 2017-11-09T02:30:24Z
      DOI: 10.1016/j.artint.2017.11.002
       
  • Taking Account of the Actions of Others in Value-based Reasoning
    • Authors: Katie Atkinson; Trevor Bench-Capon
      Abstract: Publication date: Available online 28 September 2017
      Source:Artificial Intelligence
      Author(s): Katie Atkinson, Trevor Bench-Capon
      Practical reasoning, reasoning about what actions should be chosen, is highly dependent both on the individual values of the agent concerned and on what others choose to do. Hitherto, computational models of value-based argumentation for practical reasoning have required assumptions to be made about the beliefs and preferences of other agents. Here we present a new method for taking the actions of others into account that does not require these assumptions: the only beliefs and preferences considered are those of the agent engaged in the reasoning. Our new formalism draws on utility-based approaches and expresses the reasoning in the form of arguments and objections, to enable full integration with value-based practical reasoning. We illustrate our approach by showing how value-based reasoning is modelled in two scenarios used in experimental economics, the Ultimatum Game and the Prisoner's Dilemma, and we present an evaluation of our approach in terms of these experiments. The evaluation demonstrates that our model is able to reproduce computationally the results of ethnographic experiments, serving as an encouraging validation exercise.

      PubDate: 2017-10-04T05:01:40Z
      DOI: 10.1016/j.artint.2017.09.002
       
  • Resolving distributed knowledge
    • Authors: Thomas Ågotnes; Yì N. Wáng
      Abstract: Publication date: November 2017
      Source:Artificial Intelligence, Volume 252
      Author(s): Thomas Ågotnes, Yì N. Wáng
      In epistemic logic, a key formal theory for reasoning about knowledge in AI and other fields, different notions of group knowledge describe different ways in which knowledge can be associated with a group of agents. Distributed knowledge can be seen as the sum of the knowledge in a group; it is sometimes referred to as the potential knowledge of a group, or the joint knowledge they could obtain if they had unlimited means of communication. In epistemic logic, a formula of the form D_G φ is intended to express the fact that group G has distributed knowledge of φ, that the total information in the group can be used to infer φ. In this paper we show that this is not the same as φ necessarily being true after the members of the group actually share all their information with each other – perhaps contrary to intuitive ideas about what distributed knowledge is. We furthermore introduce a new operator R_G, such that R_G φ means that φ is true after the members of G have shared all their information with each other – after G's distributed knowledge has been resolved. The R_G operators are called resolution operators. We study logics with different combinations of resolution operators and operators for common and distributed knowledge. Of particular interest is the relationship between distributed and common knowledge. The main results are characterizations of expressive power, and sound and complete axiomatizations. We also study the relationship to public announcement logic.
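As a small illustration of D_G on an invented toy model (not from the paper): two agents' knowledge is given by equivalence relations on three worlds, and the group's distributed knowledge is evaluated over the intersection of the members' relations.

```python
# Kripke-model sketch on worlds {0, 1, 2}: each agent's knowledge relation is
# an equivalence relation, given here as a partition of the worlds.
worlds = {0, 1, 2}
partition = {
    "a": [{0, 1}, {2}],     # agent a cannot distinguish worlds 0 and 1
    "b": [{0, 2}, {1}],     # agent b cannot distinguish worlds 0 and 2
}

def related(agent, w, v):
    return any(w in cell and v in cell for cell in partition[agent])

def knows_distributed(group, w, prop):
    """D_G prop at w: prop holds at every world related to w by ALL members."""
    return all(prop(v) for v in worlds
               if all(related(agent, w, v) for agent in group))

p = lambda v: v == 0        # proposition p, true exactly at world 0

# Individually, neither agent knows p at world 0 ...
alone = (knows_distributed(["a"], 0, p), knows_distributed(["b"], 0, p))
# ... but the group has distributed knowledge of p: the intersection of the
# two relations connects world 0 only to itself.
together = knows_distributed(["a", "b"], 0, p)
```

The paper's point is that this static D_G operator is not the same as what holds after the agents actually communicate, which is what the R_G resolution operator captures.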

      PubDate: 2017-09-26T05:22:05Z
       
  • Distributed first order logic
    • Authors: Chiara Ghidini; Luciano Serafini
      Abstract: Publication date: Available online 30 August 2017
      Source:Artificial Intelligence
      Author(s): Chiara Ghidini, Luciano Serafini
      Distributed First Order Logic (DFOL) has been introduced more than ten years ago with the purpose of formalising distributed knowledge-based systems, where knowledge about heterogeneous domains is scattered into a set of interconnected modules. DFOL formalises the knowledge contained in each module by means of first-order theories, and the interconnections between modules by means of special inference rules called bridge rules. Despite their restricted form in the original DFOL formulation, bridge rules have influenced several works in the areas of heterogeneous knowledge integration, modular knowledge representation, and schema/ontology matching. This, in turn, has fostered extensions and modifications of the original DFOL that have never been systematically described and published. This paper tackles the lack of a comprehensive description of DFOL by providing a systematic account of a completely revised and extended version of the logic, together with a sound and complete axiomatisation of a general form of bridge rules based on Natural Deduction. The resulting DFOL framework is then proposed as a clear formal tool for the representation of and reasoning about distributed knowledge and bridge rules.
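A toy sketch of the bridge-rule idea (module contents and rules are invented, and real DFOL bridge rules connect first-order theories rather than fact sets): conclusions flow between modules by forward-chaining the rules to a fixpoint.

```python
# Each module holds a set of facts; a bridge rule (src, phi) -> (dst, psi)
# adds psi to module dst once module src contains phi.
modules = {1: {"rain"}, 2: {"car"}}
bridge_rules = [
    ((1, "rain"), (2, "wet_road")),       # knowledge crossing module boundaries
    ((2, "wet_road"), (2, "drive_slow")),
]

# Forward-chain the bridge rules until no rule adds anything new.
changed = True
while changed:
    changed = False
    for (src, phi), (dst, psi) in bridge_rules:
        if phi in modules[src] and psi not in modules[dst]:
            modules[dst].add(psi)
            changed = True
```

Module 2 ends up with consequences it could not derive alone, while module 1 is untouched, mirroring how bridge rules interconnect otherwise independent knowledge modules.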

      PubDate: 2017-09-02T00:31:24Z
      DOI: 10.1016/j.artint.2017.08.008
       
  • Iterative voting and acyclic games
    • Authors: Reshef Meir; Maria Polukarov; Jeffrey S. Rosenschein; Nicholas R. Jennings
      Abstract: Publication date: Available online 24 August 2017
      Source:Artificial Intelligence
      Author(s): Reshef Meir, Maria Polukarov, Jeffrey S. Rosenschein, Nicholas R. Jennings
      Multi-agent decision problems, in which independent agents have to agree on a joint plan of action or allocation of resources, are central to artificial intelligence. In such situations, agents' individual preferences over available alternatives may vary, and they may try to reconcile these differences by voting. We consider scenarios where voters cannot coordinate their actions, but are allowed to change their vote after observing the current outcome, as is often the case both in offline committees and in online voting. Specifically, we are interested in identifying conditions under which such iterative voting processes are guaranteed to converge to a Nash equilibrium state—that is, under which this process is acyclic. We classify convergence results based on the underlying assumptions about the agent scheduler (the order in which the agents take their actions) and the action scheduler (the actions available to the agents at each step). By so doing, we position iterative voting models within the general framework of acyclic games and game forms. In more detail, our main technical results provide a complete picture of conditions for acyclicity in several variations of Plurality voting. In particular, we show that (a) under the traditional lexicographic tie-breaking, the game converges from any state and for any order of agents, under a weak restriction on voters' actions; and that (b) Plurality with randomized tie-breaking is not guaranteed to converge under arbitrary agent schedulers, but there is always some path of better replies from any initial state of the game to a Nash equilibrium. We thus show a first separation between order-free acyclicity and weak acyclicity of game forms, thereby settling an open question from [Kukushkin 2011]. In addition, we refute another conjecture of Kukushkin regarding strongly acyclic voting rules, by proving the existence of strongly acyclic separable game forms.
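      The dynamics the abstract studies can be sketched in a few lines: voters take turns, and a voter moves only if changing her ballot yields an outcome she strictly prefers; the process stops at a Nash equilibrium. The small simulation below uses Plurality with lexicographic tie-breaking and a fixed round-robin agent scheduler; the profile and all names are illustrative assumptions, not the paper's examples or proofs.

```python
# Minimal simulation of iterative Plurality voting with lexicographic
# tie-breaking and better-reply dynamics under a round-robin agent
# scheduler.  The preference profile below is an invented example.

def winner(ballots):
    """Plurality winner; ties broken lexicographically."""
    tally = {}
    for b in ballots:
        tally[b] = tally.get(b, 0) + 1
    best = max(tally.values())
    return min(c for c, n in tally.items() if n == best)

def better_reply(i, ballots, prefs):
    """A ballot change for voter i that yields an outcome i strictly
    prefers (lower index in prefs[i] = more preferred), else None."""
    current = winner(ballots)
    for cand in prefs[i]:
        trial = ballots[:i] + [cand] + ballots[i + 1:]
        if prefs[i].index(winner(trial)) < prefs[i].index(current):
            return cand
    return None

def iterate(ballots, prefs, max_rounds=100):
    """Round-robin scheduler: let voters move one at a time until no
    voter has a better reply, i.e. a Nash equilibrium is reached."""
    ballots = list(ballots)
    for _ in range(max_rounds):
        moved = False
        for i in range(len(ballots)):
            reply = better_reply(i, ballots, prefs)
            if reply is not None:
                ballots[i] = reply
                moved = True
        if not moved:
            break
    return ballots

# Three voters over candidates 'a' < 'b' < 'c' (tie-break order).
prefs = [["c", "b", "a"], ["b", "c", "a"], ["a", "b", "c"]]
start = ["c", "b", "a"]          # everyone votes truthfully: winner is 'a'
final = iterate(start, prefs)
print(winner(final))             # prints "b": voter 0 compromised to 'b'
```

Here the truthful profile elects 'a' by tie-breaking, so voter 0 deviates to 'b', after which no voter can improve, matching the convergence behaviour the paper establishes for lexicographic tie-breaking.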

      PubDate: 2017-09-02T00:31:24Z
      DOI: 10.1016/j.artint.2017.08.002
       
  • Minimal sets on propositional formulae. Problems and reductions
    • Authors: Joao Marques-Silva; Mikoláš Janota; Carlos Mencía
      Abstract: Publication date: Available online 2 August 2017
      Source:Artificial Intelligence
      Author(s): Joao Marques-Silva, Mikoláš Janota, Carlos Mencía
      Boolean Satisfiability (SAT) is arguably the archetypical NP-complete decision problem. Progress in SAT solving algorithms has motivated an ever-increasing number of practical applications in recent years. However, many practical uses of SAT involve solving function as opposed to decision problems. Concrete examples include computing minimal unsatisfiable subsets, minimal correction subsets, prime implicates and implicants, minimal models, backbone literals, and autarkies, among several others. In most cases, solving a function problem requires a number of adaptive or non-adaptive calls to a SAT solver. Given the computational complexity of SAT, it is therefore important to develop algorithms that either require the smallest possible number of calls to the SAT solver, or that involve simpler instances. This paper addresses a number of representative function problems defined on Boolean formulae, and shows that all these function problems can be reduced to a generic problem of computing a minimal set subject to a monotone predicate. This problem is referred to as the Minimal Set over Monotone Predicate (MSMP) problem. This work provides new ways for solving well-known function problems, including prime implicates, minimal correction subsets, backbone literals, independent variables and autarkies, among several others. Moreover, this work also motivates the development of more efficient algorithms for the MSMP problem. Finally, the paper outlines a number of areas of future research related to extensions of the MSMP problem.
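      The generic problem the abstract describes can be illustrated with the standard deletion-based algorithm: try removing each element in turn, and keep a removal only if the monotone predicate still holds; the result is a minimal satisfying subset. In the sketch below the predicate is an ordinary Python callable standing in for the SAT-solver calls (one per element) an actual implementation would make; the example predicate is an invented toy.

```python
# Deletion-based algorithm for the Minimal Set over Monotone Predicate
# (MSMP) problem.  `pred` abstracts the solver oracle; it must be
# monotone: any superset of a satisfying set also satisfies it.

def minimal_set(elements, pred):
    """Shrink `elements` to a subset that satisfies `pred` but loses
    the property if any single remaining element is removed."""
    assert pred(set(elements)), "predicate must hold for the full set"
    current = set(elements)
    for e in list(elements):          # one oracle call per element
        if pred(current - {e}):       # e is not needed: drop it for good
            current.remove(e)
    return current

# Toy monotone predicate: "the subset sums to at least 10".
pred = lambda s: sum(s) >= 10
print(minimal_set([1, 2, 3, 4, 5, 6], pred))   # prints {5, 6}
```

Minimality here is subset-minimality, not minimum cardinality: every element of the returned set is necessary, but a smaller satisfying subset of the original elements may exist elsewhere.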

      PubDate: 2017-08-02T23:35:09Z
      DOI: 10.1016/j.artint.2017.07.005
       
 
 