Artificial Intelligence
   SJR: 2.426   H-Index: 115
   Hybrid journal (may contain Open Access articles)
   ISSN (Print): 0004-3702
   Published by Elsevier
  • Extracting mutual exclusion invariants from lifted temporal planning
    • Authors: Sara Bernardini; Fabio Fagnani; David E. Smith
      Pages: 1 - 65
      Abstract: Publication date: May 2018
      Source: Artificial Intelligence, Volume 258
      We present a technique for automatically extracting mutual exclusion invariants from temporal planning instances. It first identifies a set of invariant templates by inspecting the lifted representation of the domain and then checks these templates against properties that assure invariance. Our technique builds on other approaches to invariant synthesis presented in the literature but departs from their limited focus on instantaneous actions by addressing temporal domains. To deal with time, we formulate invariance conditions that account for the entire temporal structure of the actions and the possible concurrent interactions between them. As a result, we construct a more comprehensive technique than previous methods, which is able to find not only invariants for temporal domains but also a broader set of invariants for sequential domains. Our experimental results provide evidence that our domain analysis is effective at identifying a more extensive set of invariants, which results in the generation of fewer multi-valued state variables. We show that, in turn, this reduction in the number of variables reflects positively on the performance of the temporal planners that use a variable/value representation.

      PubDate: 2018-02-26T12:06:17Z
      DOI: 10.1016/j.artint.2018.01.004
      Issue No: Vol. 258 (2018)
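The non-temporal core of this kind of invariant synthesis can be illustrated with a small sketch. Everything below (the STRIPS-style action encoding and the names `move`, `paint`, `teleport_copy`) is hypothetical and far simpler than the paper's temporal analysis; it only shows the balance condition that makes a mutual-exclusion candidate an invariant for instantaneous actions:

```python
# Toy illustration of mutual-exclusion invariant checking for
# instantaneous (non-temporal) actions -- the simpler setting that the
# paper's temporal technique generalises. All names are hypothetical.

def preserves_invariant(candidate, actions):
    """A candidate invariant is a set of atoms of which at most one may
    be true in any reachable state. An action preserves it if it never
    increases the count: it must add at most one candidate atom, and
    delete at least as many candidate atoms as it adds."""
    for action in actions:
        added = candidate & action["add"]
        deleted = candidate & action["delete"]
        if len(added) > 1 or len(added) > len(deleted):
            return False
    return True

# A robot can be at only one location: moving adds one 'at' atom and
# deletes another, so the mutual-exclusion candidate is preserved.
move = {"add": {"at-B"}, "delete": {"at-A"}}
paint = {"add": {"painted"}, "delete": set()}  # unrelated atoms are fine

candidate = {"at-A", "at-B", "at-C"}
print(preserves_invariant(candidate, [move, paint]))  # True

# An action that adds a location without leaving one breaks mutex-ness.
teleport_copy = {"add": {"at-A", "at-B"}, "delete": set()}
print(preserves_invariant(candidate, [move, teleport_copy]))  # False
```

In the temporal setting of the paper, this per-action check must additionally account for the actions' temporal structure and possible concurrent interactions, which is exactly what the sketch omits.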
  • Autonomous agents modelling other agents: A comprehensive survey and open problems
    • Authors: Stefano V. Albrecht; Peter Stone
      Pages: 66 - 95
      Abstract: Publication date: May 2018
      Source: Artificial Intelligence, Volume 258
      Much research in artificial intelligence is concerned with the development of autonomous agents that can interact effectively with other agents. An important aspect of such agents is the ability to reason about the behaviours of other agents, by constructing models which make predictions about various properties of interest (such as actions, goals, beliefs) of the modelled agents. A variety of modelling approaches now exist which vary widely in their methodology and underlying assumptions, catering to the needs of the different sub-communities within which they were developed and reflecting the different practical uses for which they are intended. The purpose of the present article is to provide a comprehensive survey of the salient modelling methods which can be found in the literature. The article concludes with a discussion of open problems which may form the basis for fruitful future research.

      PubDate: 2018-02-26T12:06:17Z
      DOI: 10.1016/j.artint.2018.01.002
      Issue No: Vol. 258 (2018)
  • Star-topology decoupled state space search
    • Authors: Daniel Gnad; Jörg Hoffmann
      Pages: 24 - 60
      Abstract: Publication date: April 2018
      Source: Artificial Intelligence, Volume 257
      State space search is a basic method for analyzing reachability in discrete transition systems. To tackle the state space explosion in large, compactly described transition systems, a wealth of techniques (e.g., partial-order reduction) has been developed that reduce the search space without affecting the existence of (optimal) solution paths. Focusing on classical AI planning, where the compact description is in terms of a vector of state variables, an initial state, a goal condition, and a set of actions, we add another technique, which we baptize star-topology decoupling, to this arsenal. A star topology partitions the state variables into components so that a single center component directly interacts with several leaf components, but the leaves interact only via the center. Many applications explicitly come with such structure; any classical planning task can be viewed in this way by selecting the center as a subset of state variables separating connected leaf components. Our key observation is that, given such a star topology, the leaves are conditionally independent given the center, in the sense that, given a fixed path of transitions by the center, the possible center-compliant paths are independent across the leaves. Our decoupled search hence branches over center transitions only, and maintains the center-compliant paths for each leaf separately. As we show, this method has exponential separations from all previous search reduction techniques, i.e., there are examples where it results in exponentially less effort. One can, in principle, prune duplicates in such a way that the decoupled state space can never be larger than the original one. Standard search algorithms remain applicable using simple transformations. Our experiments exhibit large improvements on standard AI planning benchmarks with a pronounced star topology.

      PubDate: 2018-02-05T17:01:06Z
      DOI: 10.1016/j.artint.2017.12.004
      Issue No: Vol. 257 (2018)
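The conditional-independence observation in the abstract above can be mimicked in a toy breadth-first sketch (a hypothetical model, not the authors' planner): a decoupled node pairs one center state with a set of compliant states per leaf, and search branches over center transitions only:

```python
# Minimal sketch of the star-topology decoupling idea on a toy
# transition model. All names and the model itself are hypothetical.

from collections import deque

def decoupled_reachable(center_init, center_succ, leaves):
    """Enumerate decoupled nodes. center_succ(c) yields successor
    center states; `leaves` maps a leaf name to (init_state, step),
    where step(c, states) returns the leaf states reachable under
    center state c. Leaves never interact with each other, only with
    the center, so one set of compliant states per leaf suffices."""
    start = (center_init, tuple(sorted(
        (name, frozenset([init])) for name, (init, _) in leaves.items())))
    seen, queue = {start}, deque([start])
    while queue:
        center, leaf_sets = queue.popleft()
        yield center, dict(leaf_sets)
        for nxt in center_succ(center):
            new_sets = tuple(sorted(
                (name, frozenset(leaves[name][1](nxt, states)))
                for name, states in leaf_sets))
            node = (nxt, new_sets)
            if node not in seen:
                seen.add(node)
                queue.append(node)

succ = lambda c: [c + 1] if c < 2 else []    # center: 0 -> 1 -> 2
step = lambda c, states: set(states) | {c}   # leaf accumulates center values
nodes = list(decoupled_reachable(0, succ, {"L1": (0, step), "L2": (9, step)}))
print(len(nodes))  # 3 decoupled nodes -- one per center state
```

The point of the sketch: the two leaves are never multiplied out against each other, so a decoupled node count grows with the center's branching only, which is the source of the exponential separations the abstract mentions.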
  • The complexity landscape of decompositional parameters for ILP
    • Authors: Robert Ganian; Sebastian Ordyniak
      Pages: 61 - 71
      Abstract: Publication date: April 2018
      Source: Artificial Intelligence, Volume 257
      Integer Linear Programming (ILP) can be seen as the archetypical NP-complete optimization problem, and a wide range of problems in artificial intelligence are solved in practice via a translation to ILP. Despite its huge range of applications, only a few tractable fragments of ILP are known, probably the most prominent of which is based on the notion of total unimodularity. Using entirely different techniques, we identify new tractable fragments of ILP by studying structural parameterizations of the constraint matrix within the framework of parameterized complexity. In particular, we show that ILP is fixed-parameter tractable when parameterized by the treedepth of the constraint matrix and the maximum absolute value of any coefficient occurring in the ILP instance. Together with matching hardness results for the more general parameter treewidth, we give an overview of the complexity of ILP w.r.t. decompositional parameters defined on the constraint matrix.

      PubDate: 2018-02-05T17:01:06Z
      DOI: 10.1016/j.artint.2017.12.006
      Issue No: Vol. 257 (2018)
  • Enhancing context knowledge repositories with justifiable exceptions
    • Authors: Loris Bozzato; Thomas Eiter; Luciano Serafini
      Pages: 72 - 126
      Abstract: Publication date: April 2018
      Source: Artificial Intelligence, Volume 257
      Dealing with context-dependent knowledge is a well-known area of study that has its roots in John McCarthy's seminal work. More recently, the Contextualized Knowledge Repository (CKR) framework has been conceived as a logic-based approach in which knowledge bases have a two-layered structure, modeled by a global context and a set of local contexts. The global context not only contains the meta-knowledge defining the properties of local contexts, but also holds the global (context-independent) object knowledge that is shared by all of the local contexts. In many practical cases, however, it is desirable to leave open the possibility of “overriding” the global object knowledge at the local level: in other words, it is interesting to recognize the pieces of knowledge that can admit exceptional instances in the local contexts, instances that do not need to satisfy the general axiom. To address this need, we present in this paper an extension of CKR in which defeasible axioms can be included in the global context. The latter are verified in the local contexts only for the instances for which no exception to overriding exists, where exceptions require a justification in terms of facts that are provable from the knowledge base. We formally define this semantics and study its semantic and computational properties, characterizing the complexity of the major reasoning tasks, among them satisfiability testing, instance checking, and conjunctive query answering. Furthermore, we present a translation of extended CKRs with knowledge bases in the Description Logic SROIQ-RL under the novel semantics to datalog programs under the stable model (answer set) semantics. We also present an implementation prototype and examine in experiments its scalability with respect to the size of the input CKR and the amount (level) of defeasibility.
Finally, we compare our representation approach with some major formalisms for expressing defeasible knowledge in Description Logics and contextual knowledge representation. Our work adds to the body of results on using deductive database technology such as SQL and datalog in these areas, and provides an expressive formalism (in terms of intrinsic complexity) for exception handling by overriding.

      PubDate: 2018-02-05T17:01:06Z
      DOI: 10.1016/j.artint.2017.12.005
      Issue No: Vol. 257 (2018)
  • Optimal Defense Against Election Control by Deleting Voter Groups
    • Authors: Yue Yin; Yevgeniy Vorobeychik; Bo An; Noam Hazon
      Abstract: Publication date: Available online 21 February 2018
      Source: Artificial Intelligence
      Election control encompasses attempts by an external agent to alter the structure of an election in order to change its outcome. This problem is both a fundamental theoretical problem in social choice and a major practical concern for democratic institutions. Consequently, this issue has received considerable attention, particularly as it pertains to different voting rules. In contrast, the problem of how election control can be prevented or deterred has been largely ignored. We introduce the problem of optimal defense against election control, including destructive and constructive control, where manipulation is allowed at the granularity of groups of voters (e.g., voting locations) through a denial-of-service attack, and the defender allocates limited protection resources to prevent control. We consider plurality voting, and show that it is computationally hard to prevent both types of control, though destructive control itself can be performed in polynomial time. For defense against destructive control, we present a double-oracle framework for computing an optimal prevention strategy. We show that both the defender and attacker best-response subproblems are NP-complete, and develop exact mixed-integer linear programming approaches for solving them, as well as fast heuristic methods. We then extend this general approach to develop effective algorithmic solutions for defense against constructive control. Finally, we generalize the model and algorithmic approaches to account for uncertainty about voter preferences. Experiments conducted on both synthetic and real data demonstrate that the proposed computational framework can scale to realistic problem instances.

      PubDate: 2018-02-26T12:06:17Z
      DOI: 10.1016/j.artint.2018.02.001
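As a hedged illustration of why destructive control by itself is polynomial under plurality (the defender side, which is the hard part, is omitted): to dethrone the winner w it suffices, for each rival c, to greedily delete the at most k groups with the largest per-group margin contribution g[w] - g[c]. The group data and function names below are hypothetical:

```python
# Toy sketch: destructive control by deleting voter groups under
# plurality. Deleting a group removes ALL of its votes, so rival c
# overtakes winner w after deleting set S exactly when
#   totals[c] - totals[w] + sum over S of (g[w] - g[c]) > 0,
# which a greedy choice of the largest-margin groups maximizes.

def plurality_winner(groups):
    totals = {}
    for g in groups:
        for cand, votes in g.items():
            totals[cand] = totals.get(cand, 0) + votes
    return max(totals, key=totals.get), totals

def destructive_control(groups, k):
    """Return a set of <= k group indices whose deletion makes the
    plurality winner lose, or None if no rival can overtake it."""
    winner, totals = plurality_winner(groups)
    for c in totals:
        if c == winner:
            continue
        order = sorted(range(len(groups)),
                       key=lambda i: groups[i].get(winner, 0) - groups[i].get(c, 0),
                       reverse=True)
        chosen = [i for i in order[:k]
                  if groups[i].get(winner, 0) - groups[i].get(c, 0) > 0]
        gain = sum(groups[i].get(winner, 0) - groups[i].get(c, 0) for i in chosen)
        if totals[c] + gain > totals[winner]:
            return set(chosen)
    return None

# Three polling stations; deleting station 0 flips the election to 'b'.
stations = [{"a": 60, "b": 10}, {"a": 20, "b": 40}, {"a": 25, "b": 45}]
print(destructive_control(stations, k=1))  # {0}
```

The paper's actual contribution sits on top of such an attacker: the defender must allocate protection resources against it, which is where the NP-completeness and the double-oracle machinery come in.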
  • Parallelizing SMT solving: Lazy decomposition and conciliation
    • Authors: Xi Cheng; Min Zhou; Xiaoyu Song; Ming Gu; Jiaguang Sun
      Abstract: Publication date: Available online 31 January 2018
      Source: Artificial Intelligence
      Satisfiability Modulo Theories (SMT) is the satisfiability problem for first-order formulae with respect to background theories. SMT extends propositional satisfiability by introducing various underlying theories. To improve the efficiency of SMT solving, many efforts have been made on low-level algorithms, but these generally cannot leverage the capability of parallel hardware. We propose a high-level and flexible framework, namely lazy decomposition and conciliation (LDC), to parallelize solving for quantifier-free SMT problems. Overall, an SMT problem is first decomposed into subproblems; then local reasoning inside each subproblem is conciliated with the global reasoning over the shared symbols across subproblems in parallel. LDC can be built on any existing solver without tuning its internal implementation, and is flexible in that it is applicable to various underlying theories. We instantiate LDC in the theory of equality with uninterpreted functions, and implement a parallel solver PZ3 based on Z3. Experimental results on the QF_UF benchmarks from SMT-LIB as well as random problems show the potential of LDC: (1) PZ3 generally outperforms Z3 in 4 out of 8 problem subcategories under various core configurations; (2) PZ3 usually achieves super-linear speed-up over Z3 on problems with sparse structures, which makes it possible to choose an appropriate solver between Z3 and PZ3 in advance according to the structure of the input problem; (3) compared to PCVC4, a state-of-the-art portfolio-based parallel SMT solver, PZ3 achieves speed-up on a larger portion of problems and has a better overall speed-up ratio.

      PubDate: 2018-02-05T17:01:06Z
      DOI: 10.1016/j.artint.2018.01.001
  • Implicit, explicit and speculative knowledge
    • Authors: Hans van Ditmarsch; Tim French; Fernando R. Velázquez-Quesada; Yì N. Wáng
      Pages: 35 - 67
      Abstract: Publication date: March 2018
      Source: Artificial Intelligence, Volume 256
      We compare different epistemic notions in the presence of awareness of propositional variables: the logic of implicit knowledge (in which explicit knowledge is definable), the logic of explicit knowledge, and the logic of speculative knowledge. Speculative knowledge is a novel epistemic notion that permits reasoning about unawareness. These logics are interpreted on epistemic awareness models: these are multi-agent Kripke structures for propositional awareness (in each state an agent may only be aware of formulas containing occurrences of a subset of all propositional variables). Different notions of bisimulation are suitable for these logics. We provide correspondence between bisimulation and modal equivalence on image-finite models for these logics. Expressivity and axiomatizations are investigated for models without restrictions, and for models with equivalence relations for all agents (modeling knowledge) and awareness introspection (agents know what they are aware of). We show that the logic of speculative knowledge is as expressive as the logic of explicit knowledge, and the logic of implicit knowledge is more expressive than the two other logics. We also present expressivity results for more restricted languages. We then provide and compare axiomatizations for the three logics; the axiomatizations for speculative knowledge are novel. We compare our results to those for awareness achieved in artificial intelligence, computer science, philosophy, and economics.

      PubDate: 2017-12-12T13:51:16Z
      DOI: 10.1016/j.artint.2017.11.004
      Issue No: Vol. 256 (2017)
  • Reactive multi-context systems: Heterogeneous reasoning in dynamic environments
    • Authors: Gerhard Brewka; Stefan Ellmauthaler; Ricardo Gonçalves; Matthias Knorr; João Leite; Jörg Pührer
      Pages: 68 - 104
      Abstract: Publication date: March 2018
      Source: Artificial Intelligence, Volume 256
      Managed multi-context systems (mMCSs) allow for the integration of heterogeneous knowledge sources in a modular and very general way. They were, however, mainly designed for static scenarios and are therefore not well-suited for dynamic environments in which continuous reasoning over such heterogeneous knowledge with constantly arriving streams of data is necessary. In this paper, we introduce reactive multi-context systems (rMCSs), a framework for reactive reasoning in the presence of heterogeneous knowledge sources and data streams. We show that rMCSs are indeed well-suited for this purpose by illustrating how several typical problems arising in the context of stream reasoning can be handled using them, by showing how inconsistencies that may arise in the integration of multiple knowledge sources can be handled, and by arguing that the potential non-determinism of rMCSs can, if needed, be avoided by using an alternative, more skeptical well-founded semantics with beneficial computational properties. We also investigate the computational complexity of various reasoning problems related to rMCSs. Finally, we discuss related work, and show that rMCSs not only generalize mMCSs to dynamic settings, but also capture/extend relevant approaches w.r.t. dynamics in knowledge representation and stream reasoning.

      PubDate: 2017-12-12T13:51:16Z
      DOI: 10.1016/j.artint.2017.11.007
      Issue No: Vol. 256 (2017)
  • Decentralized Reinforcement Learning of robot behaviors
    • Authors: David L. Leottau; Javier Ruiz-del-Solar; Robert Babuška
      Pages: 130 - 159
      Abstract: Publication date: March 2018
      Source: Artificial Intelligence, Volume 256
      A multi-agent methodology is proposed for Decentralized Reinforcement Learning (DRL) of individual behaviors in problems where multi-dimensional action spaces are involved. When using this methodology, sub-tasks are learned in parallel by individual agents working toward a common goal. In addition to proposing this methodology, three specific multi-agent DRL approaches are considered: DRL-Independent, DRL Cooperative-Adaptive (CA), and DRL-Lenient. These approaches are validated and analyzed with an extensive empirical study using four different problems: 3D Mountain Car, SCARA Real-Time Trajectory Generation, Ball-Dribbling in humanoid soccer robotics, and Ball-Pushing using differential drive robots. The experimental validation provides evidence that DRL implementations achieve better performance and faster learning than their centralized counterparts, while using fewer computational resources. The DRL-Lenient and DRL-CA algorithms achieve the best final performance on the four tested problems, outperforming their DRL-Independent counterparts. Furthermore, the benefits of DRL-Lenient and DRL-CA are more noticeable as problem complexity increases and the centralized scheme becomes intractable given the available computational resources and training time.

      PubDate: 2017-12-27T12:47:01Z
      DOI: 10.1016/j.artint.2017.12.001
      Issue No: Vol. 256 (2017)
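The DRL-Independent baseline can be sketched on a hypothetical single-state coordination task: each agent keeps a Q-table over its own action dimension only, so the joint action space is never enumerated. This toy also hints at why the lenient and adaptive variants exist, since independent learners coordinate here only because the task is tiny:

```python
# Toy sketch of decentralized RL with independent learners
# (DRL-Independent in spirit; hypothetical task and parameters).
# Reward 1 is paid only when both agents pick action 1, yet each agent
# updates a private 2-entry Q-table from the shared global reward.

import random

def train_independent(episodes=2000, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]   # one per-agent Q-table, 2 actions each
    for _ in range(episodes):
        acts = [rng.randrange(2) if rng.random() < eps
                else max(range(2), key=lambda a: q[i][a])
                for i in range(2)]
        reward = 1.0 if acts == [1, 1] else 0.0   # shared global reward
        for i in range(2):                        # independent updates
            q[i][acts[i]] += alpha * (reward - q[i][acts[i]])
    return q

q = train_independent()
print([max(range(2), key=lambda a: q[i][a]) for i in range(2)])  # [1, 1]
```

Each agent's table has 2 entries instead of the 4 a centralized learner would need for the joint action; in the multi-dimensional action spaces of the abstract, this per-dimension factoring is the source of the reported savings.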
  • Fixpoint semantics for active integrity constraints
    • Authors: Bart Bogaerts; Luís Cruz-Filipe
      Pages: 43 - 70
      Abstract: Publication date: February 2018
      Source: Artificial Intelligence, Volume 255
      Active integrity constraints (AICs) constitute a formalism to associate with a database not just the constraints it should adhere to, but also how to fix the database in case one or more of these constraints are violated. The intuitions regarding which repairs are “good” given such a description are closely related to intuitions that live in various areas of non-monotonic reasoning, such as logic programming and autoepistemic logic. In this paper, we apply approximation fixpoint theory, an abstract, algebraic framework designed to unify semantics of non-monotonic logics, to the field of AICs. This results in a new family of semantics for AICs. We study properties of our new semantics and relationships to existing semantics. In particular, we argue that two of the newly defined semantics stand out. Grounded repairs have a simple definition that is purely based on semantic principles that semantics for AICs should adhere to. And, as we show, they coincide with the intended interpretation of AICs on many examples. The second semantics of interest is the AFT-well-founded semantics: it is a computationally cheap semantics that provides upper and lower bounds for many other classes of repairs.

      PubDate: 2017-12-12T13:51:16Z
      DOI: 10.1016/j.artint.2017.11.003
      Issue No: Vol. 255 (2017)
  • A quality assuring, cost optimal multi-armed bandit mechanism for expertsourcing
    • Authors: Shweta Jain; Sujit Gujar; Satyanath Bhat; Onno Zoeter; Y. Narahari
      Pages: 44 - 63
      Abstract: Publication date: January 2018
      Source: Artificial Intelligence, Volume 254
      There are numerous situations when a service requester wishes to expertsource a series of identical but non-trivial tasks from a pool of experts so as to achieve an assured accuracy level for each task, in a cost-optimal way. The available experts are typically heterogeneous, with unknown but fixed qualities and different service costs. The service costs are usually private to the experts, and the experts could be strategic about their costs. The problem is to select for each task an optimal subset of experts so that the outcome obtained by aggregating the opinions of the selected experts guarantees a target level of accuracy. The problem is challenging even in a non-strategic setting, since the accuracy of an aggregated outcome depends on the unknown qualities. We develop a novel multi-armed bandit (MAB) mechanism for solving this problem. First, we propose the Assured Accuracy Bandit (AAB) framework, which leads to a MAB algorithm, Constrained Confidence Bound for the Non-Strategic Setting (CCB-NS). We derive an upper bound on the number of time steps this algorithm chooses a sub-optimal set, which depends on the target accuracy and the true qualities. A more challenging situation arises when the requester not only has to learn the qualities of the experts but also has to elicit their true service costs. We modify the CCB-NS algorithm to obtain an adaptive, exploration-separated algorithm, Constrained Confidence Bound for the Strategic Setting (CCB-S). The CCB-S algorithm produces an ex-post monotone allocation rule that can then be transformed into an ex-post incentive compatible and ex-post individually rational mechanism. This mechanism learns the qualities of the experts and guarantees a given target accuracy level in a cost-optimal way. We also provide a lower bound on the number of times any algorithm must select a sub-optimal set, and this lower bound matches our upper bound up to a constant factor.
We provide insights on a practical implementation of this framework through an illustrative example and demonstrate the efficacy of our algorithms through simulations.

      PubDate: 2017-11-16T02:37:41Z
      DOI: 10.1016/j.artint.2017.10.001
      Issue No: Vol. 254 (2017)
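The flavor of the confidence-bound selection in CCB-NS can be sketched as follows. This is a loose, hypothetical simplification: the "at least one selected expert is correct" aggregation below stands in for the paper's refined majority-vote guarantee, and all numbers are made up:

```python
# Hypothetical sketch of confidence-bound expert selection: keep an
# optimistic (UCB) estimate of each expert's quality and greedily add
# experts, cheapest first, until the optimistic aggregated accuracy
# 1 - prod(1 - UCB_i) reaches the target.

import math

def ucb(mean, n, t):
    """Optimistic quality estimate, clipped to [.., 1.0]."""
    return min(1.0, mean + math.sqrt(2 * math.log(t + 1) / max(n, 1)))

def select_subset(stats, costs, target, t):
    """stats[i] = (empirical quality, #observations) of expert i.
    Returns the greedily chosen expert indices; falls back to all
    experts if the target is optimistically unreachable."""
    chosen, miss = [], 1.0
    for i in sorted(range(len(costs)), key=lambda i: costs[i]):
        mean, n = stats[i]
        chosen.append(i)
        miss *= 1.0 - ucb(mean, n, t)
        if 1.0 - miss >= target:
            return chosen
    return chosen

stats = [(0.9, 500), (0.6, 500), (0.55, 500)]   # (quality, #obs) per expert
costs = [5.0, 1.0, 1.0]
print(select_subset(stats, costs, target=0.9, t=100))  # [1, 2]
```

The two cheap experts are optimistically good enough together, so the expensive expert 0 is never hired; shrinking confidence intervals are what eventually make such greedy choices cost-optimal in the bandit analysis.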
  • Towards breaking more composition symmetries in partial symmetry breaking
    • Authors: Jimmy H.M. Lee; Zichen Zhu
      Pages: 51 - 82
      Abstract: Publication date: November 2017
      Source: Artificial Intelligence, Volume 252
      The paper proposes a dynamic method, Recursive Symmetry Breaking During Search (ReSBDS), for efficient partial symmetry breaking. We first demonstrate how Partial Symmetry Breaking During Search (ParSBDS) misses important pruning opportunities when given only a subset of symmetries to break. The investigation pinpoints the culprit and in turn suggests a rectification. The main idea is to add extra symmetry breaking constraints recursively during search, so as to also prune symmetric nodes of some pruned subtrees. Thus, ReSBDS can break extra symmetry compositions, but is carefully designed to break only those that are easy to identify and inexpensive to break. We present theorems to guarantee the soundness and termination of our approach, and compare our method with popular static and dynamic methods. When the variable (value) heuristic is static, ReSBDS is also complete in eliminating all interchangeable variables (values) given only the generator symmetries. We further propose a light version of ReSBDS (LReSBDS), which has slightly weaker pruning power than ReSBDS but reduced overhead. We give a theoretical characterization of the soundness and termination of LReSBDS, and compare its pruning strength against other symmetry breaking methods, including ReSBDS. Extensive experiments confirm the efficiency of ReSBDS and LReSBDS when compared against state-of-the-art methods.

      PubDate: 2017-09-26T05:22:05Z
      DOI: 10.1016/j.artint.2017.07.006
      Issue No: Vol. 252 (2017)
  • Forward bounding on pseudo-trees for DCOPs and ADCOPs
    • Authors: Omer Litov; Amnon Meisels
      Pages: 83 - 99
      Abstract: Publication date: November 2017
      Source: Artificial Intelligence, Volume 252
      Complete search algorithms for solving distributed constraint optimization problems (DCOPs) can be divided into two groups: algorithms that use a pseudo-tree and algorithms that do not. The best-performing algorithms that do not use a pseudo-tree rely on some form of forward bounding. To gain from both worlds, a new algorithm that incorporates a hybrid of the two approaches is presented. The proposed algorithm – Pseudo-Tree Forward-Bounding (PT-FB) – is shown to perform very well. PT-FB is then extended to solve asymmetric DCOPs (ADCOPs). Here again, its performance is better than that of its competitors. An extensive experimental evaluation of the performance of each of the proposed algorithms is presented.

      PubDate: 2017-09-26T05:22:05Z
      DOI: 10.1016/j.artint.2017.07.003
      Issue No: Vol. 252 (2017)
  • Belief Revision, Minimal Change and Relaxation: A General Framework based
           on Satisfaction Systems, and Applications to Description Logics
    • Authors: Marc Aiguier; Jamal Atif; Isabelle Bloch; Céline Hudelot
      Abstract: Publication date: Available online 17 December 2017
      Source: Artificial Intelligence
      Belief revision of knowledge bases represented by a set of sentences in a given logic has been extensively studied, but mainly for specific logics: propositional, and more recently Horn and description logics. Here, we propose to generalize this operation from a model-theoretic point of view, by defining revision in the abstract model theory of satisfaction systems. In this framework, we generalize to any satisfaction system the characterization of the AGM postulates given by Katsuno and Mendelzon for propositional logic in terms of minimal change among interpretations. In this generalization, the constraint on syntax independence is partially relaxed. Moreover, we study how to define revision satisfying these weakened AGM postulates from relaxation notions first introduced in description logics to define dissimilarity measures between concepts, whose effect is to relax the set of models of the old belief until it becomes consistent with the new pieces of knowledge. We show how the proposed general framework can be instantiated in different logics such as propositional, first-order, description and Horn logics. In particular, for description logics we introduce several concrete relaxation operators tailored for the description logic ALC and its fragments EL and ELU, discuss their properties, and provide some illustrative examples.

      PubDate: 2017-12-27T12:47:01Z
      DOI: 10.1016/j.artint.2017.12.002
  • Strong Temporal Planning with Uncontrollable Durations
    • Authors: Alessandro Cimatti; Minh Do; Andrea Micheli; Marco Roveri; David E. Smith
      Abstract: Publication date: Available online 5 December 2017
      Source: Artificial Intelligence
      Planning in real world domains often involves modeling and reasoning about the duration of actions. Temporal planning allows such modeling and reasoning by looking for plans that specify start and end time points for each action. In many practical cases, however, the duration of actions may be uncertain and not under the full control of the executor. For example, a navigation task may take more or less time, depending on external conditions such as terrain or weather. In this paper, we tackle the problem of strong temporal planning with uncontrollable action durations (STPUD). For actions with uncontrollable durations, the planner is only allowed to choose the start of the actions, while the end is chosen, within known bounds, by the environment. A solution plan must be robust with respect to all uncontrollable action durations, and must achieve the goal on all executions, despite the choices of the environment. We propose two complementary techniques. First, we discuss a dedicated planning method, that generalizes the state-space temporal planning framework, leveraging SMT-based techniques for temporal networks under uncertainty. Second, we present a compilation-based method, that reduces any STPUD problem to an ordinary temporal planning problem. Moreover, we investigate a set of sufficient conditions to simplify domains by removing some of the uncontrollability. We implemented both our approaches, and we experimentally evaluated our techniques on a large number of instances. Our results demonstrate the practical applicability of the two techniques, which show complementary behavior.

      PubDate: 2017-12-12T13:51:16Z
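A minimal sketch of the "strong" requirement, assuming a purely sequential plan (a much simpler special case than the paper's STPUD setting): a start-time assignment is robust only if the goal deadline holds for every duration the environment may choose, which for sequential composition means waiting out each action's upper bound:

```python
# Hypothetical toy, not the paper's algorithm: actions run one after
# another, each with an uncontrollable duration in [lb, ub]. A strong
# schedule fixes start times that work for ALL duration choices, so
# each action must be started only after the previous one is
# guaranteed to have finished, i.e., after its upper bound.

def strong_schedule(durations, deadline):
    """Return start times robust to every duration realization, or
    None if no such schedule meets the deadline. Note the lower
    bounds never matter for this sequential worst-case check."""
    t, starts = 0.0, []
    for lb, ub in durations:
        starts.append(t)
        t += ub          # wait out the worst case before the next start
    return starts if t <= deadline else None

print(strong_schedule([(2, 4), (1, 3)], deadline=8))  # [0.0, 4.0]
print(strong_schedule([(2, 4), (1, 3)], deadline=6))  # None
```

The paper's techniques are needed precisely because real plans are not sequential: with concurrency, robustness becomes a temporal-network controllability question rather than a simple sum of upper bounds.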
  • A Computational Framework for Conceptual Blending
    • Authors: Manfred Eppe; Ewen Maclean; Roberto Confalonieri; Oliver Kutz; Marco Schorlemmer; Enric Plaza; Kai-Uwe Kühnberger
      Abstract: Publication date: Available online 2 December 2017
      Source: Artificial Intelligence
      We present a computational framework for conceptual blending, a concept invention method that is advocated in cognitive science as a fundamental and uniquely human engine for creative thinking. Our framework treats a crucial part of the blending process, namely the generalisation of input concepts, as a search problem that is solved by means of modern answer set programming methods to find commonalities among input concepts. We also address the problem of pruning the space of possible blends by introducing metrics that capture most of the so-called optimality principles, described in the cognitive science literature as guidelines to produce meaningful and serendipitous blends. As a proof of concept, we demonstrate how our system invents novel concepts and theories in domains where creativity is crucial, namely mathematics and music.

      PubDate: 2017-12-12T13:51:16Z
  • Rational Deployment of Multiple Heuristics in Optimal State-Space Search
    • Authors: Erez Karpas; Oded Betzalel; Solomon Eyal Shimony; David Tolpin; Ariel Felner
      Abstract: Publication date: Available online 7 November 2017
      Source: Artificial Intelligence
      The obvious way to use several admissible heuristics in searching for an optimal solution is to take their maximum. In this paper, we aim to reduce the time spent on computing heuristics within the context of A* and IDA*. We discuss Lazy A* and Lazy IDA*, variants of A* and IDA*, respectively, where heuristics are evaluated lazily: only when they are essential to a decision to be made in the search process. While these lazy algorithms outperform naive maximization, we can do even better by intelligently deciding when to compute the more expensive heuristic. We present a new rational metareasoning based scheme which decides whether to compute the more expensive heuristics at all, based on a myopic regret estimate. This scheme is used to create rational lazy A* and rational lazy IDA*. We also present different methods for estimating the parameters necessary for making such decisions. An empirical evaluation in several domains supports the theoretical results, and shows that the rational variants, rational lazy A* and rational lazy IDA*, are better than their non-rational counterparts.

      PubDate: 2017-11-09T02:30:24Z
      DOI: 10.1016/j.artint.2017.11.001
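      The lazy-evaluation idea above can be illustrated with a short sketch (this is not the authors' implementation; `neighbors`, `h1` and `h2` are assumed user-supplied callables): the cheap heuristic h1 orders the open list, and the expensive h2 is only computed when a node is about to be expanded, re-queuing the node if h2 raises its f-value.

```python
import heapq

def lazy_astar(start, goal, neighbors, h1, h2):
    """Lazy A* sketch: h1 is cheap and orders the open list; the expensive
    h2 is evaluated only when a node reaches the top of the open list."""
    g = {start: 0}
    # heap entries: (f, h2_already_evaluated, state)
    open_list = [(h1(start), False, start)]
    closed = set()
    while open_list:
        f, h2_done, s = heapq.heappop(open_list)
        if s in closed:
            continue
        if not h2_done:                      # lazy evaluation point
            f2 = g[s] + max(h1(s), h2(s))
            if f2 > f:                       # h2 was more informed: re-queue
                heapq.heappush(open_list, (f2, True, s))
                continue
        if s == goal:
            return g[s]
        closed.add(s)
        for t, cost in neighbors(s):
            if t not in g or g[s] + cost < g[t]:
                g[t] = g[s] + cost
                heapq.heappush(open_list, (g[t] + h1(t), False, t))
    return None
```

      On graphs where h2 is informative but costly, this defers most h2 calls to nodes that actually reach the top of the open list, which is exactly the saving the lazy variants exploit.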
  • Fundamental properties of attack relations in structured argumentation
           with priorities
    • Authors: Phan Minh Dung; Phan Minh Thang
      Abstract: Publication date: Available online 7 November 2017
      Source:Artificial Intelligence
      Author(s): Phan Minh Dung, Phan Minh Thang
      Due to the proliferation and diversity of approaches to structured argumentation with prioritized rules, several simple and intuitive principles for the characterization and evaluation of proposed attack relations have recently been introduced in [23]. While the proposed principles and properties are helpful, they do not identify unique attack relations. Any user of structured argumentation still faces the fundamental problem of determining an appropriate attack relation for her/his application, and further principles that could help in identifying such an attack relation. We argue that a key purpose of introducing priorities between defeasible rules is to remove undesired attacks while keeping the set of removed attacks to a minimum. This intuitive idea could be viewed as a kind of minimal-removal principle. We show in this paper that the minimal-removal principle, together with a new simple and intuitive property of inconsistency-resolving and previously proposed properties, indeed characterizes a unique attack relation, referred to as the canonical attack relation. We show that canonical attack relations can be characterized in three distinct ways: as the supremum of a complete upper-semilattice of regular attack relations; by removing the undesired attacks from the basic attack relations, where the undesired attacks are captured by a least fixed point of an intuitive removal function; or as the normal attack relations introduced in an earlier paper for a class of well-prioritized knowledge bases. We start our study with a language consisting only of literals and two types of attack, rebut and undercut. We then show that our approach can easily be scaled up, by showing that all key results still hold for general underlying logical languages and the inclusion of assumptions. We apply our proposed approach to value-based argumentation and show that it also leads to the canonical semantics.

      PubDate: 2017-11-09T02:30:24Z
      DOI: 10.1016/j.artint.2017.11.002
  • Taking Account of the Actions of Others in Value-based Reasoning
    • Authors: Katie Atkinson; Trevor Bench-Capon
      Abstract: Publication date: Available online 28 September 2017
      Source:Artificial Intelligence
      Author(s): Katie Atkinson, Trevor Bench-Capon
      Practical reasoning, reasoning about what actions should be chosen, is highly dependent both on the individual values of the agent concerned and on what others choose to do. Hitherto, computational models of value-based argumentation for practical reasoning have required assumptions to be made about the beliefs and preferences of other agents. Here we present a new method for taking the actions of others into account that does not require these assumptions: the only beliefs and preferences considered are those of the agent engaged in the reasoning. Our new formalism draws on utility-based approaches and expresses the reasoning in the form of arguments and objections, to enable full integration with value-based practical reasoning. We illustrate our approach by showing how value-based reasoning is modelled in two scenarios used in experimental economics, the Ultimatum Game and the Prisoner's Dilemma, and we present an evaluation of our approach in terms of these experiments. The evaluation demonstrates that our model is able to reproduce computationally the results of ethnographic experiments, serving as an encouraging validation exercise.

      PubDate: 2017-10-04T05:01:40Z
      DOI: 10.1016/j.artint.2017.09.002
  • Resolving distributed knowledge
    • Authors: Thomas Ågotnes; Yì N. Wáng
      Abstract: Publication date: November 2017
      Source:Artificial Intelligence, Volume 252
      Author(s): Thomas Ågotnes, Yì N. Wáng
      In epistemic logic, a key formal theory for reasoning about knowledge in AI and other fields, different notions of group knowledge describe different ways in which knowledge can be associated with a group of agents. Distributed knowledge can be seen as the sum of the knowledge in a group; it is sometimes referred to as the potential knowledge of a group, or the joint knowledge its members could obtain if they had unlimited means of communication. In epistemic logic, a formula of the form D_G φ is intended to express the fact that group G has distributed knowledge of φ, i.e., that the total information in the group can be used to infer φ. In this paper we show that this is not the same as φ necessarily being true after the members of the group actually share all their information with each other – perhaps contrary to intuitive ideas about what distributed knowledge is. We furthermore introduce a new operator R_G, such that R_G φ means that φ is true after G have shared all their information with each other – after G's distributed knowledge has been resolved. The R_G operators are called resolution operators. We study logics with different combinations of resolution operators and operators for common and distributed knowledge. Of particular interest is the relationship between distributed and common knowledge. The main results are characterizations of expressive power, and sound and complete axiomatizations. We also study the relationship to public announcement logic.

      PubDate: 2017-09-26T05:22:05Z
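      The distinction can be made concrete in a toy finite Kripke model (an illustrative sketch, not the paper's formalism): accessibility relations are explicit sets of world pairs, D_G is evaluated over the intersection of the group members' relations, and resolving replaces each member's relation by that intersection.

```python
def d_knows(rel, group, prop, w):
    """D_G φ holds at world w iff φ (here a predicate `prop` on worlds)
    holds at every world reachable from w via the intersection of the
    group members' accessibility relations."""
    joint = set.intersection(*(rel[a] for a in group))
    return all(prop(v) for (u, v) in joint if u == w)

def resolve(rel, group):
    """R_G sketch: after the group shares all its information, every
    member's relation becomes the intersection (the resolved model)."""
    joint = set.intersection(*(rel[a] for a in group))
    return {a: (joint if a in group else rel[a]) for a in rel}
```

      With agent a unable to distinguish worlds 1 and 2 and agent b unable to distinguish 2 and 3, neither knows "we are in world 1" individually, but the group {a, b} has it as distributed knowledge, and resolving makes it ordinary knowledge for both.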
  • Intelligent agent supporting human-multi-robot team collaboration
    • Authors: Ariel Rosenfeld; Noa Agmon; Oleg Maksimov; Sarit Kraus
      Abstract: Publication date: Available online 31 August 2017
      Source:Artificial Intelligence
      Author(s): Ariel Rosenfeld, Noa Agmon, Oleg Maksimov, Sarit Kraus
      The number of multi-robot systems deployed in field applications has risen dramatically over the years. Nevertheless, supervising and operating multiple robots simultaneously is a difficult task for a single operator to execute. In this article we propose a novel approach for utilizing automated advising agents in assisting an operator to better manage a team of multiple robots in complex environments. We introduce an advice provision methodology and exemplify its implementation using automated advising agents in two real-world human-multi-robot team collaboration tasks: the Search And Rescue (SAR) and the warehouse operation tasks. Our intelligent advising agents were evaluated through extensive field trials, with over 150 human operators using both simulated and physical mobile robots, and showed a significant improvement in the team's performance.

      PubDate: 2017-09-02T00:31:24Z
      DOI: 10.1016/j.artint.2017.08.005
  • Distributed first order logic
    • Authors: Chiara Ghidini; Luciano Serafini
      Abstract: Publication date: Available online 30 August 2017
      Source:Artificial Intelligence
      Author(s): Chiara Ghidini, Luciano Serafini
      Distributed First Order Logic (DFOL) has been introduced more than ten years ago with the purpose of formalising distributed knowledge-based systems, where knowledge about heterogeneous domains is scattered into a set of interconnected modules. DFOL formalises the knowledge contained in each module by means of first-order theories, and the interconnections between modules by means of special inference rules called bridge rules. Despite their restricted form in the original DFOL formulation, bridge rules have influenced several works in the areas of heterogeneous knowledge integration, modular knowledge representation, and schema/ontology matching. This, in turn, has fostered extensions and modifications of the original DFOL that have never been systematically described and published. This paper tackles the lack of a comprehensive description of DFOL by providing a systematic account of a completely revised and extended version of the logic, together with a sound and complete axiomatisation of a general form of bridge rules based on Natural Deduction. The resulting DFOL framework is then proposed as a clear formal tool for the representation of and reasoning about distributed knowledge and bridge rules.

      PubDate: 2017-09-02T00:31:24Z
      DOI: 10.1016/j.artint.2017.08.008
  • The MADLA Planner: Multi-Agent Planning by Combination of Distributed and
           Local Heuristic Search
    • Authors: Michal Štolba; Antonín Komenda
      Abstract: Publication date: Available online 30 August 2017
      Source:Artificial Intelligence
      Author(s): Michal Štolba, Antonín Komenda
      Real-world applications often require cooperation of multiple independent entities. Classical planning is a well-established technique for solving various challenging problems such as logistic planning, factory process planning, military mission planning and high-level planning for robots. Multi-agent planning aims at solving similar problems in the presence of multiple independent entities (agents). Even though such entities might want to cooperate in order to fulfill a common goal, they may want to keep their internal information and processes private. In such a case, we talk about privacy-preserving multi-agent planning. So far, multi-agent planners based on heuristic search used either a local heuristic estimating the particular agent's local subproblem or a distributed heuristic estimating the global problem as a whole. In this paper, we present the Multi-Agent Distributed and Local Asynchronous (MADLA) Planner, running a novel variant of a distributed state-space forward-chaining multi-heuristic search which combines a local and a distributed heuristic in order to exploit the benefits of both. In particular, the planner uses two variants of the well-known Fast-Forward heuristic. We provide proofs of soundness and completeness of the search algorithm and show how much and what type of privacy it preserves. We also provide an improved privacy-preserving distribution scheme for the Fast-Forward heuristic. We experimentally compare the newly proposed multi-heuristic scheme and the two used heuristics separately. The results show that the proposed solution outperforms classical (single-heuristic) distributed search with either one of the heuristics used separately. In the detailed experimental analysis, we show the limits of the planner and of the heuristics used, based on particular properties of the benchmark domains.
In a comprehensive set of multi-agent planning domains and problems, we show that the MADLA Planner outperforms all contemporary state-of-the-art privacy-preserving multi-agent planners using a compatible planning model.

      PubDate: 2017-09-02T00:31:24Z
      DOI: 10.1016/j.artint.2017.08.007
  • Three-valued semantics for hybrid MKNF knowledge bases revisited
    • Authors: Fangfang Liu; Jia-Huai You
      Abstract: Publication date: Available online 24 August 2017
      Source:Artificial Intelligence
      Author(s): Fangfang Liu, Jia-Huai You
      Knorr et al. (2011) formulated a three-valued formalism for the logic of Minimal Knowledge and Negation as Failure (MKNF), and proposed a well-founded semantics for hybrid MKNF knowledge bases. The main results state that if a hybrid MKNF knowledge base has a three-valued MKNF model, its well-founded MKNF model exists, which is unique and can be computed by an alternating fixpoint construction. However, we show that these claims are erroneous. The goal of this paper is two-fold. First, we provide insight into why these claims fail to hold. This leads to a classification of hybrid MKNF knowledge bases into a hierarchy of four classes, of which the smallest one is what works for the well-founded semantics proposed by Knorr et al. After that, we characterize three-valued MKNF models into what we call stable partitions, which are similar to partial stable models in logic programming. The intuitive notion of stable partitions allows us to discover a new proof-theoretic method to determine whether a three-valued MKNF model exists for a given hybrid MKNF knowledge base. We can guess partial partitions and verify their stability by computing alternating fixpoints and by performing a consistency check. This algorithm can be applied to compute all three-valued MKNF models, as well as all two-valued ones originally formulated by Motik and Rosati for query answering. As a result, our work provides a uniform characterization of well-founded, two-valued, and all three-valued MKNF models, in terms of stable partitions and the alternating fixpoint construction.

      PubDate: 2017-09-02T00:31:24Z
      DOI: 10.1016/j.artint.2017.08.003
  • Iterative voting and acyclic games
    • Authors: Reshef Meir; Maria Polukarov; Jeffrey S. Rosenschein; Nicholas R. Jennings
      Abstract: Publication date: Available online 24 August 2017
      Source:Artificial Intelligence
      Author(s): Reshef Meir, Maria Polukarov, Jeffrey S. Rosenschein, Nicholas R. Jennings
      Multi-agent decision problems, in which independent agents have to agree on a joint plan of action or allocation of resources, are central to artificial intelligence. In such situations, agents' individual preferences over available alternatives may vary, and they may try to reconcile these differences by voting. We consider scenarios where voters cannot coordinate their actions, but are allowed to change their vote after observing the current outcome, as is often the case both in offline committees and in online voting. Specifically, we are interested in identifying conditions under which such iterative voting processes are guaranteed to converge to a Nash equilibrium state—that is, under which this process is acyclic. We classify convergence results based on the underlying assumptions about the agent scheduler (the order in which the agents take their actions) and the action scheduler (the actions available to the agents at each step). By so doing, we position iterative voting models within the general framework of acyclic games and game forms. In more detail, our main technical results provide a complete picture of conditions for acyclicity in several variations of Plurality voting. In particular, we show that (a) under the traditional lexicographic tie-breaking, the game converges from any state and for any order of agents, under a weak restriction on voters' actions; and that (b) Plurality with randomized tie-breaking is not guaranteed to converge under arbitrary agent schedulers, but there is always some path of better replies from any initial state of the game to a Nash equilibrium. We thus show a first separation between order-free acyclicity and weak acyclicity of game forms, thereby settling an open question from [Kukushkin 2011]. In addition, we refute another conjecture of Kukushkin regarding strongly acyclic voting rules, by proving the existence of strongly acyclic separable game forms.

      PubDate: 2017-09-02T00:31:24Z
      DOI: 10.1016/j.artint.2017.08.002
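      The Plurality dynamics studied above can be sketched in a few lines (an illustrative toy, not the paper's formal model): voters start truthful and, one at a time, switch to any better reply under plurality with lexicographic tie-breaking; the loop stops when no voter wants to move, i.e. at a Nash equilibrium. Note that termination of such naive better-reply loops is only guaranteed under the restrictions the paper identifies.

```python
def plurality_winner(votes, candidates):
    """Plurality with lexicographic tie-breaking (alphabetically first
    candidate wins ties)."""
    scores = {c: votes.count(c) for c in candidates}
    return min(candidates, key=lambda c: (-scores[c], c))

def iterative_voting(prefs, candidates):
    """Let voters change their vote one at a time to a better reply
    until no voter can improve the outcome for themselves."""
    votes = [p[0] for p in prefs]            # start from truthful votes
    changed = True
    while changed:
        changed = False
        for i, pref in enumerate(prefs):
            current = plurality_winner(votes, candidates)
            for c in candidates:             # try each alternative vote
                trial = votes[:i] + [c] + votes[i + 1:]
                w = plurality_winner(trial, candidates)
                if pref.index(w) < pref.index(current):
                    votes[i] = c             # better reply: adopt it
                    changed = True
                    break
    return votes, plurality_winner(votes, candidates)
```

      For example, with preferences a>b>c, b>c>a and c>b>a, the truthful profile elects a by tie-breaking, after which the b-voter deviates to c, making c the stable winner.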
  • Common Sense, The Turing Test, and The Quest for Real AI, Hector Levesque.
           The MIT Press (2017), 142
    • Authors: David Israel (Emeritus)
      Abstract: Publication date: Available online 23 August 2017
      Source:Artificial Intelligence
      Author(s): David Israel (Emeritus)

      PubDate: 2017-09-02T00:31:24Z
      DOI: 10.1016/j.artint.2017.08.004
  • MM: A bidirectional search algorithm that is guaranteed to meet in the middle
    • Authors: Robert C. Holte; Ariel Felner; Guni Sharon; Nathan R. Sturtevant; Jingwei Chen
      Abstract: Publication date: Available online 22 August 2017
      Source:Artificial Intelligence
      Author(s): Robert C. Holte, Ariel Felner, Guni Sharon, Nathan R. Sturtevant, Jingwei Chen
      Bidirectional search algorithms interleave two separate searches, a normal search forward from the start state, and a search backward from the goal. It is well known that adding a heuristic to unidirectional search dramatically reduces the search effort. By contrast, despite decades of research, bidirectional heuristic search has not yet had a major impact. Additionally, no comprehensive theory was ever devised to understand the nature of bidirectional heuristic search. In this paper we aim to close this gap. We first present MM, a novel bidirectional heuristic search algorithm. Unlike previous bidirectional heuristic search algorithms, MM's forward and backward searches are guaranteed to “meet in the middle”, i.e. never expand a node beyond the solution midpoint. Based on this unique attribute we present a novel framework for comparing MM, A*, and their brute-force variants. We do this by dividing the entire state space into disjoint regions based on their distance from the start and goal. This allows us to perform a comparison of these algorithms on a per region basis and identify conditions favoring each algorithm. Finally, we present experimental results that support our theoretical analysis.

      PubDate: 2017-09-02T00:31:24Z
      DOI: 10.1016/j.artint.2017.05.004
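      MM's defining ingredient is its priority function pr(n) = max(g(n) + h(n), 2g(n)), which prevents either frontier from expanding a node with g beyond half the solution cost. The sketch below (a simplified illustration with a conservative stopping test, not the paper's full algorithm) runs the two searches and always expands from the side with the lower minimum priority; `neighbors`, `h_f`, `h_b` are assumed callables.

```python
import heapq

def mm_search(start, goal, neighbors, h_f, h_b):
    """Simplified MM sketch: two best-first searches ordered by
    pr(n) = max(g(n) + h(n), 2 g(n)); U tracks the best solution found
    where the frontiers meet, and we stop once U <= min priority."""
    g = {0: {start: 0}, 1: {goal: 0}}        # 0 = forward, 1 = backward
    h = {0: h_f, 1: h_b}
    opens = {d: [(max(h[d](s), 0), s)] for d, s in ((0, start), (1, goal))}
    U = float('inf')
    while opens[0] and opens[1]:
        d = 0 if opens[0][0][0] <= opens[1][0][0] else 1
        pr, s = heapq.heappop(opens[d])
        if U <= pr:                          # cannot improve: U is optimal
            return U
        for t, c in neighbors(s):
            gt = g[d][s] + c
            if gt < g[d].get(t, float('inf')):
                g[d][t] = gt
                heapq.heappush(opens[d], (max(gt + h[d](t), 2 * gt), t))
                if t in g[1 - d]:            # the two frontiers meet
                    U = min(U, gt + g[1 - d][t])
    return U
```

      The paper's actual stopping condition is tighter (it also bounds U against the frontier f- and g-minima), but the priority function above is what yields the meet-in-the-middle guarantee.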
  • Generalized possibilistic logic: Foundations and applications to
           qualitative reasoning about uncertainty
    • Authors: Didier Dubois; Henri Prade; Steven Schockaert
      Abstract: Publication date: Available online 9 August 2017
      Source:Artificial Intelligence
      Author(s): Didier Dubois, Henri Prade, Steven Schockaert
      This paper introduces generalized possibilistic logic (GPL), a logic for epistemic reasoning based on possibility theory. Formulas in GPL correspond to propositional combinations of assertions such as “it is certain to degree λ that the propositional formula α is true”. As its name suggests, the logic generalizes possibilistic logic (PL), which at the syntactic level only allows conjunctions of the aforementioned type of assertions. At the semantic level, PL can only encode sets of epistemic states encompassed by a single least informed one, whereas GPL can encode any set of epistemic states. This feature makes GPL particularly suitable for reasoning about what an agent knows about the beliefs of another agent, e.g., allowing the former to draw conclusions about what the other agent does not know. We introduce an axiomatization for GPL and show its soundness and completeness w.r.t. possibilistic semantics. Subsequently, we highlight the usefulness of GPL as a powerful unifying framework for various knowledge representation formalisms. Among others, we show how comparative uncertainty and ignorance can be modelled in GPL. We also exhibit a close connection between GPL and various existing formalisms, including possibilistic logic with partially ordered formulas, a logic of conditional assertions in the style of Kraus, Lehmann and Magidor, answer set programming and a fragment of the logic of minimal belief and negation as failure. Finally, we analyse the computational complexity of reasoning in GPL, identifying decision problems at the first, second, third and fourth level of the polynomial hierarchy.

      PubDate: 2017-09-02T00:31:24Z
      DOI: 10.1016/j.artint.2017.08.001
  • Minimal sets on propositional formulae. Problems and reductions
    • Authors: Joao Marques-Silva; Mikoláš Janota; Carlos Mencía
      Abstract: Publication date: Available online 2 August 2017
      Source:Artificial Intelligence
      Author(s): Joao Marques-Silva, Mikoláš Janota, Carlos Mencía
      Boolean Satisfiability (SAT) is arguably the archetypical NP-complete decision problem. Progress in SAT solving algorithms has motivated an ever-increasing number of practical applications in recent years. However, many practical uses of SAT involve solving function as opposed to decision problems. Concrete examples include computing minimal unsatisfiable subsets, minimal correction subsets, prime implicates and implicants, minimal models, backbone literals, and autarkies, among several others. In most cases, solving a function problem requires a number of adaptive or non-adaptive calls to a SAT solver. Given the computational complexity of SAT, it is therefore important to develop algorithms that either require the smallest possible number of calls to the SAT solver, or that involve simpler instances. This paper addresses a number of representative function problems defined on Boolean formulae, and shows that all these function problems can be reduced to a generic problem of computing a minimal set subject to a monotone predicate. This problem is referred to as the Minimal Set over Monotone Predicate (MSMP) problem. This work provides new ways for solving well-known function problems, including prime implicates, minimal correction subsets, backbone literals, independent variables and autarkies, among several others. Moreover, this work also motivates the development of more efficient algorithms for the MSMP problem. Finally, the paper outlines a number of areas of future research related to extensions of the MSMP problem.

      PubDate: 2017-08-02T23:35:09Z
      DOI: 10.1016/j.artint.2017.07.005
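      A minimal illustration of the MSMP abstraction (a generic deletion-based sketch, not the paper's optimized algorithms): given a monotone predicate p that holds on the full set, repeatedly drop any element whose removal keeps p true; what remains is subset-minimal. In the SAT instantiations, p would be, e.g., "this clause subset is unsatisfiable", with each test being one solver call.

```python
def minimal_set(elements, p):
    """Deletion-based MSMP sketch: assuming p(elements) holds and p is
    monotone (true on S implies true on supersets of S), drop each
    element whose removal keeps p true. The result is subset-minimal,
    using exactly one predicate test per element."""
    m = set(elements)
    for e in sorted(m):                      # fixed order for determinism
        if p(m - {e}):
            m.remove(e)
    return m
```

      With p(S) = "sum(S) >= 10" over {3, 4, 5, 6}, the loop drops 3 and 4 and returns the minimal set {5, 6}.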
  • Belief revision and projection in the epistemic situation calculus
    • Authors: Christoph Schwering; Gerhard Lakemeyer; Maurice Pagnucco
      Abstract: Publication date: Available online 31 July 2017
      Source:Artificial Intelligence
      Author(s): Christoph Schwering, Gerhard Lakemeyer, Maurice Pagnucco
      This article considers defeasible beliefs in dynamic settings. In particular, we examine the belief projection problem: what is believed after performing an action and/or receiving new information? The approach is based on an epistemic variant of Reiter's situation calculus, where actions not only have physical effects but may also provide new information to the agent. The preferential belief structure is initially determined using conditional statements. New information is then incorporated using two popular belief revision schemes, namely natural and lexicographic revision. The projection problem is solved twofold in this formalism: by goal regression and by knowledge base progression.

      PubDate: 2017-08-02T23:35:09Z
      DOI: 10.1016/j.artint.2017.07.004
  • Automatically Improving Constraint Models in Savile Row
    • Authors: Peter Nightingale; Özgür Akgün; Ian P. Gent; Christopher Jefferson; Ian Miguel; Patrick Spracklen
      Abstract: Publication date: Available online 13 July 2017
      Source:Artificial Intelligence
      Author(s): Peter Nightingale, Özgür Akgün, Ian P. Gent, Christopher Jefferson, Ian Miguel, Patrick Spracklen
      When solving a combinatorial problem using Constraint Programming (CP) or Satisfiability (SAT), modelling and formulation are vital and difficult tasks. Even an expert human may explore many alternatives in modelling a single problem. We make a number of contributions in the automated modelling and reformulation of constraint models. We study a range of automated reformulation techniques, finding combinations of techniques which perform particularly well together. We introduce and describe in detail a new algorithm, X-CSE, to perform Associative-Commutative Common Subexpression Elimination (AC-CSE) in constraint problems, significantly improving existing CSE techniques for associative and commutative operators such as +. We demonstrate that these reformulation techniques can be integrated in a single automated constraint modelling tool, called Savile Row, whose architecture we describe. We use Savile Row as an experimental testbed to evaluate each reformulation on a set of 50 problem classes, with 596 instances in total. Our recommended reformulations remain worthwhile even when their overheads are included, especially on harder instances where solver time dominates. With a SAT solver we observed a geometric mean of 2.15 times speedup compared to a straightforward tailored model without recommended reformulations. Using a CP solver, we obtained a geometric mean of 5.96 times speedup for instances taking over 10 seconds to solve.

      PubDate: 2017-07-23T12:45:36Z
      DOI: 10.1016/j.artint.2017.07.001
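      The core idea behind AC-CSE can be conveyed with a much-simplified sketch (this is not X-CSE itself; sums are modelled as plain lists of variable names, and the scoring is deliberately naive): because + is associative and commutative, any pair of terms shared by several sum constraints can be factored into a fresh auxiliary variable, shrinking the constraints and exposing propagation.

```python
from collections import Counter
from itertools import combinations

def ac_cse_step(sums):
    """One step of a toy AC-CSE: find the pair of terms shared by the
    most + constraints and factor it out into a fresh auxiliary
    variable. A real implementation (like X-CSE) iterates such steps
    and handles larger subexpressions and other AC operators."""
    pairs = Counter()
    for s in sums:
        for p in combinations(sorted(set(s)), 2):
            pairs[p] += 1
    if not pairs:
        return sums, None
    (a, b), n = pairs.most_common(1)[0]
    if n < 2:
        return sums, None                    # nothing worth factoring
    aux = f"aux_{a}_{b}"                     # hypothetical fresh variable
    out = []
    for s in sums:
        if a in s and b in s:
            out.append([x for x in s if x not in (a, b)] + [aux])
        else:
            out.append(list(s))
    return out, (aux, (a, b))
```

      For the constraints x+y+z, x+y+w and x+z, one step factors a shared pair (e.g. x+y) into an auxiliary, after which two constraints mention the common subexpression only once.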
  • Latent tree models for hierarchical topic detection
    • Authors: Peixian Chen; Nevin L. Zhang; Tengfei Liu; Leonard K.M. Poon; Zhourong Chen; Farhan Khawar
      Abstract: Publication date: Available online 29 June 2017
      Source:Artificial Intelligence
      Author(s): Peixian Chen, Nevin L. Zhang, Tengfei Liu, Leonard K.M. Poon, Zhourong Chen, Farhan Khawar
      We present a novel method for hierarchical topic detection where topics are obtained by clustering documents in multiple ways. Specifically, we model document collections using a class of graphical models called hierarchical latent tree models (HLTMs). The variables at the bottom level of an HLTM are observed binary variables that represent the presence/absence of words in a document. The variables at other levels are binary latent variables that represent word co-occurrence patterns or co-occurrences of such patterns. Each latent variable gives a soft partition of the documents, and document clusters in the partitions are interpreted as topics. Latent variables at high levels of the hierarchy capture long-range word co-occurrence patterns and hence give thematically more general topics, while those at low levels of the hierarchy capture short-range word co-occurrence patterns and give thematically more specific topics. In comparison with LDA-based methods, a key advantage of the new method is that it represents co-occurrence patterns explicitly using model structures. Extensive empirical results show that the new method significantly outperforms the LDA-based methods in terms of model quality and meaningfulness of topics and topic hierarchies.

      PubDate: 2017-07-02T13:12:00Z
      DOI: 10.1016/j.artint.2017.06.004
  • Impossibility in Belief Merging
    • Authors: Amílcar Mata Díaz; Ramón Pino Pérez
      Abstract: Publication date: Available online 29 June 2017
      Source:Artificial Intelligence
      Author(s): Amílcar Mata Díaz, Ramón Pino Pérez
      With the aim of studying social properties of belief merging and having a better understanding of impossibility, we extend in three ways the framework of logic-based merging introduced by Konieczny and Pino Pérez. First, at the level of representation of the information, we pass from belief bases to complex epistemic states. Second, the profiles are represented as functions from finite societies to the set of epistemic states (a sort of vectors) and not as multisets of epistemic states. Third, we extend the set of rational postulates in order to consider the epistemic versions of the classical postulates of social choice theory: standard domain, Pareto property, independence of irrelevant alternatives and absence of dictator. These epistemic versions of social postulates are given, essentially, in terms of the finite propositional logic. We state some representation theorems for these operators. These extensions and representation theorems allow us to establish an epistemic and very general version of Arrow's impossibility theorem. One of the interesting features of our result is that it holds for different representations of epistemic states; for instance, conditionals, ordinal conditional functions and, of course, total preorders.

      PubDate: 2017-07-02T13:12:00Z
      DOI: 10.1016/j.artint.2017.06.003
  • Understanding the Complexity of Axiom Pinpointing in Lightweight
           Description Logics
    • Authors: Rafael Peñaloza; Barış Sertkaya
      Abstract: Publication date: Available online 17 June 2017
      Source:Artificial Intelligence
      Author(s): Rafael Peñaloza, Barış Sertkaya
      Lightweight description logics are knowledge representation formalisms characterised by the low complexity of their standard reasoning tasks. They have been successfully employed for constructing large ontologies that model domain knowledge in several different practical applications. In order to maintain these ontologies, it is often necessary to detect the axioms that cause a given consequence. This task is commonly known as axiom pinpointing. In this paper, we provide a thorough analysis of the complexity of several decision, counting, and enumeration problems associated with axiom pinpointing in lightweight description logics.

      PubDate: 2017-06-22T03:33:40Z
      DOI: 10.1016/j.artint.2017.06.002
  • A Progression Semantics for First-Order Logic Programs
    • Authors: Yi Zhou; Yan Zhang
      Abstract: Publication date: Available online 9 June 2017
      Source:Artificial Intelligence
      Author(s): Yi Zhou, Yan Zhang
      In this paper, we propose a progression semantics for first-order normal logic programs, and show that it is equivalent to the well-known stable model (answer set) semantics. The progressional definition provides new insights into Answer Set Programming (ASP), for instance its relationships to Datalog, First-Order Logic (FOL) and Satisfiability Modulo Theories (SMT). As an example, we extend the notion of boundedness in Datalog to ASP, and show that it coincides with the notions of recursion-freeness and loop-freeness under program equivalence. In addition, we prove that boundedness precisely captures first-order definability for normal logic programs on arbitrary structures. Finally, we show that the progressional definition suggests an alternative translation from ASP to SMT, which yields a new way of implementing first-order ASP.

      PubDate: 2017-06-12T01:05:42Z
      DOI: 10.1016/j.artint.2017.06.001
  • Discovering visual concept structure with sparse and incomplete tags
    • Authors: Jingya Wang; Xiatian Zhu; Shaogang Gong
      Abstract: Publication date: Available online 31 May 2017
      Source:Artificial Intelligence
      Author(s): Jingya Wang, Xiatian Zhu, Shaogang Gong
      Discovering automatically the semantic structure of tagged visual data (e.g. web videos and images) is important for visual data analysis and interpretation, enabling machine intelligence to effectively process the fast-growing amount of multimedia data. However, this is non-trivial due to the need to jointly learn underlying correlations between heterogeneous visual and tag data. The task is made more challenging by inherently sparse and incomplete tags. In this work, we develop a method for modelling the inherent visual data concept structures based on a novel Hierarchical-Multi-Label Random Forest model capable of correlating structured visual and tag information so as to more accurately interpret the visual semantics, e.g. disclosing meaningful visual groups with similar high-level concepts, and recovering missing tags for individual visual data samples. Specifically, our model exploits hierarchically structured tags of different semantic abstractness and multiple tag statistical correlations, in addition to modelling visual and tag interactions. As a result, our model is able to discover more accurate semantic correlations between textual tags and visual features, and thus provides a favourable interpretation of visual semantics even with highly sparse and incomplete tags. We demonstrate the advantages of our proposed approach in two fundamental applications, visual data clustering and missing tag completion, on benchmark video (i.e. TRECVID MED 2011) and image (i.e. NUS-WIDE) datasets.

      PubDate: 2017-06-02T09:00:03Z
      DOI: 10.1016/j.artint.2017.05.002
  • Low-rank decomposition meets kernel learning: A generalized Nyström method
    • Authors: Liang Lan; Kai Zhang; Hancheng Ge; Wei Cheng; Jun Liu; Andreas Rauber; Xiao-Li Li; Jun Wang; Hongyuan Zha
      Abstract: Publication date: Available online 15 May 2017
      Source:Artificial Intelligence
      Author(s): Liang Lan, Kai Zhang, Hancheng Ge, Wei Cheng, Jun Liu, Andreas Rauber, Xiao-Li Li, Jun Wang, Hongyuan Zha
      Low-rank matrix decomposition and kernel learning are two useful techniques in building advanced learning systems. Low-rank decomposition can greatly reduce the computational cost of manipulating large kernel matrices. However, existing approaches are mostly unsupervised and do not incorporate side information such as class labels, making the decomposition less effective for a specific learning task. On the other hand, kernel learning techniques aim at constructing kernel matrices whose structure is well aligned with the learning target, which improves the generalization performance of kernel methods. However, most kernel learning approaches are computationally very expensive. To obtain the advantages of both techniques and address their limitations, in this paper we propose a novel kernel low-rank decomposition formulation called the generalized Nyström method. Our approach inherits the linear time and space complexity via matrix decomposition, while at the same time fully exploits (partial) label information in computing task-dependent decomposition. In addition, the resultant low-rank factors can generalize to arbitrary new samples, rendering great flexibility in inductive learning scenarios. We further extend the algorithm to a multiple kernel learning setup. The experimental results on semi-supervised classification demonstrate the usefulness of the proposed method.

      PubDate: 2017-05-18T08:47:00Z
      DOI: 10.1016/j.artint.2017.05.001
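      For context, the classical (unsupervised) Nyström baseline that the paper generalizes can be sketched as follows: sample m landmark points, form the n×m column block C and the m×m landmark block W of the kernel matrix, and approximate K ≈ C W⁺ Cᵀ in O(nm²) rather than O(n²) space and time. The RBF kernel and function names below are illustrative, not the paper's API.

```python
import numpy as np

def nystrom(X, kernel, m, rng=None):
    """Classical Nystrom low-rank kernel approximation: sample m landmark
    columns C and the corresponding m x m block W, giving K ~ C W^+ C^T.
    (The paper's generalized method additionally uses label information
    for a task-dependent decomposition; this is only the baseline.)"""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X), size=m, replace=False)
    C = kernel(X, X[idx])                    # n x m column block of K
    W = C[idx]                               # m x m landmark block
    return C, np.linalg.pinv(W)              # K ~ C @ W_pinv @ C.T

def rbf(A, B, gamma=1.0):
    """Gaussian RBF kernel between row-sets A and B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)
```

      When m = n (all points are landmarks), the Moore-Penrose identity W W⁺ W = W makes the approximation exact, which is a convenient sanity check.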
  • Thick Set Inversion
    • Authors: B. Desrochers; L. Jaulin
      Abstract: Publication date: Available online 20 April 2017
      Source:Artificial Intelligence
      Author(s): B. Desrochers, L. Jaulin
      This paper deals with the set inversion problem X = f^{-1}(Y) in the case where the function f: R^n → R^m and the set Y are both uncertain. The uncertainty is treated in the form of intervals. More precisely, for all x, f(x) is inside the box [f](x), and the uncertain set Y is bracketed between an inner set Y^⊂ and an outer set Y^⊃. The introduction of new tools such as thick intervals and thick boxes allows us to propose an efficient algorithm that handles the uncertainty of sets in an elegant and efficient manner. Some elementary test cases that cannot be handled easily and properly by existing methods show the efficiency of the approach.

      PubDate: 2017-04-26T11:43:34Z
      DOI: 10.1016/j.artint.2017.04.004
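      The underlying paver is in the SIVIA family; a one-dimensional thin-set sketch is shown below (the thick-set machinery of the paper, which additionally brackets Y between Y^⊂ and Y^⊃, is not reproduced here). An interval extension `f_range` encloses f over a box; each box is classified as inside f^{-1}(Y), outside, or bisected until it is thinner than eps.

```python
def sivia(box, f_range, Y, eps):
    """SIVIA-style 1-D interval set inversion sketch. `f_range` maps an
    interval (lo, hi) to an interval enclosure of f over it; Y = (y_lo,
    y_hi). Returns boxes proved inside f^-1(Y) and undecided boundary
    boxes thinner than eps; everything else is proved outside."""
    inside, boundary = [], []
    stack = [box]
    while stack:
        lo, hi = stack.pop()
        flo, fhi = f_range((lo, hi))
        if flo >= Y[0] and fhi <= Y[1]:
            inside.append((lo, hi))          # box certainly in f^-1(Y)
        elif fhi < Y[0] or flo > Y[1]:
            pass                             # box certainly outside
        elif hi - lo < eps:
            boundary.append((lo, hi))        # undecided thin box
        else:
            mid = (lo + hi) / 2
            stack += [(lo, mid), (mid, hi)]  # bisect and recurse
    return inside, boundary
```

      For f(x) = x² and Y = [0, 1] on the box [-2, 2], the paver provably encloses f^{-1}(Y) = [-1, 1] between the inside boxes and thin boundary slivers near ±1.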