Artificial Intelligence
  [SJR: 2.426]   [H-I: 115]   [146 followers]
   Hybrid journal (can contain Open Access articles)
   ISSN (Print) 0004-3702
   Published by Elsevier Homepage  [3089 journals]
  • A quality assuring, cost optimal multi-armed bandit mechanism for
           expertsourcing
    • Authors: Shweta Jain; Sujit Gujar; Satyanath Bhat; Onno Zoeter; Y. Narahari
      Pages: 44 - 63
      Abstract: Publication date: January 2018
      Source:Artificial Intelligence, Volume 254
      Author(s): Shweta Jain, Sujit Gujar, Satyanath Bhat, Onno Zoeter, Y. Narahari
      There are numerous situations when a service requester wishes to expertsource a series of identical but non-trivial tasks from a pool of experts so as to achieve an assured accuracy level for each task, in a cost optimal way. The experts available are typically heterogeneous with unknown but fixed qualities and different service costs. The service costs are usually private to the experts and the experts could be strategic about their costs. The problem is to select for each task an optimal subset of experts so that the outcome obtained after aggregating the opinions from the selected experts guarantees a target level of accuracy. The problem is a challenging one even in a non-strategic setting since the accuracy of an aggregated outcome depends on unknown qualities. We develop a novel multi-armed bandit (MAB) mechanism for solving this problem. First, we propose a framework, Assured Accuracy Bandit (AAB) framework, which leads to a MAB algorithm, Constrained Confidence Bound for Non-Strategic Setting (CCB-NS). We derive an upper bound on the number of time steps this algorithm chooses a sub-optimal set, which depends on the target accuracy and true qualities. A more challenging situation arises when the requester not only has to learn the qualities of the experts but has to elicit their true service costs as well. We modify the CCB-NS algorithm to obtain an adaptive exploration separated algorithm Constrained Confidence Bound for Strategic Setting (CCB-S). The CCB-S algorithm produces an ex-post monotone allocation rule that can then be transformed into an ex-post incentive compatible and ex-post individually rational mechanism. This mechanism learns the qualities of the experts and guarantees a given target accuracy level in a cost optimal way. We also provide a lower bound on the number of times any algorithm must select a sub-optimal set and we see that the lower bound matches our upper bound up to a constant factor. 
We provide insights on a practical implementation of this framework through an illustrative example and demonstrate the efficacy of our algorithms through simulations.

      PubDate: 2017-11-16T02:37:41Z
      DOI: 10.1016/j.artint.2017.10.001
      Issue No: Vol. 254 (2017)
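The exploration step of CCB-NS is not spelled out in the abstract, but the flavor of a quality-assuring bandit selection can be sketched. This is a toy illustration, not the paper's algorithm: the `ucb_quality` estimate, the greedy cheapest-first subset rule, and the `1 - prod(1 - q)` stand-in for aggregated accuracy are all simplifying assumptions.

```python
import math

def ucb_quality(successes, pulls, t):
    """Optimistic (upper-confidence) estimate of an expert's quality,
    clamped to [0, 1]."""
    if pulls == 0:
        return 1.0  # untried experts look maximally good: forces exploration
    return min(1.0, successes / pulls + math.sqrt(2 * math.log(t) / pulls))

def cheapest_assured_subset(experts, target, t):
    """Greedily add low-cost experts until the optimistic estimate of the
    aggregated accuracy reaches the target accuracy level."""
    chosen, acc = [], 0.0
    for e in sorted(experts, key=lambda x: x["cost"]):
        chosen.append(e)
        # crude stand-in for the paper's majority-vote accuracy:
        # probability that at least one chosen expert is correct
        acc = 1.0 - math.prod(
            1.0 - ucb_quality(x["s"], x["n"], t) for x in chosen)
        if acc >= target:
            break
    return chosen, acc
```

The optimism in `ucb_quality` is what lets the learner bound the number of rounds in which a sub-optimal subset is chosen.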
       
  • Towards breaking more composition symmetries in partial symmetry breaking
    • Authors: Jimmy H.M. Lee; Zichen Zhu
      Pages: 51 - 82
      Abstract: Publication date: November 2017
      Source:Artificial Intelligence, Volume 252
      Author(s): Jimmy H.M. Lee, Zichen Zhu
      The paper proposes a dynamic method, Recursive Symmetry Breaking During Search (ReSBDS), for efficient partial symmetry breaking. We first demonstrate how Partial Symmetry Breaking During Search (ParSBDS) misses important pruning opportunities when given only a subset of symmetries to break. The investigation pinpoints the culprit and in turn suggests a rectification. The main idea is to add extra symmetry breaking constraints recursively during search, so as to also prune symmetric nodes of some pruned subtrees. Thus, ReSBDS can break extra symmetry compositions, but is carefully designed to break only those that are easy to identify and inexpensive to break. We present theorems to guarantee the soundness and termination of our approach, and compare our method with popular static and dynamic methods. When the variable (value) heuristic is static, ReSBDS is also complete in eliminating all interchangeable variables (values) given only the generator symmetries. We further propose a light version of ReSBDS (LReSBDS), which has slightly weaker pruning power than ReSBDS but a reduced overhead. We give a theoretical characterization of the soundness and termination of LReSBDS, and compare its pruning strength against other symmetry breaking methods, including ReSBDS. Extensive experiments confirm the efficiency of ReSBDS and LReSBDS when compared against state-of-the-art methods.

      PubDate: 2017-09-26T05:22:05Z
      DOI: 10.1016/j.artint.2017.07.006
      Issue No: Vol. 252 (2017)
       
  • Forward bounding on pseudo-trees for DCOPs and ADCOPs
    • Authors: Omer Litov; Amnon Meisels
      Pages: 83 - 99
      Abstract: Publication date: November 2017
      Source:Artificial Intelligence, Volume 252
      Author(s): Omer Litov, Amnon Meisels
      Complete search algorithms for solving Distributed Constraint Optimization Problems (DCOPs) can be divided into two groups: algorithms that use a pseudo-tree and algorithms that do not. The best-performing algorithms that do not use a pseudo-tree employ some form of forward bounding. To gain the benefits of both worlds, a new algorithm incorporating a hybrid approach of the two groups is presented. The proposed algorithm – Pseudo-Tree Forward-Bounding (PT-FB) – is shown to perform very well. PT-FB is then extended to solve Asymmetric DCOPs (ADCOPs). Here again, its performance is better than that of its competitors. An extensive experimental evaluation of the performance of each of the proposed algorithms is presented.

      PubDate: 2017-09-26T05:22:05Z
      DOI: 10.1016/j.artint.2017.07.003
      Issue No: Vol. 252 (2017)
       
  • Implicit, explicit and speculative knowledge
    • Abstract: Publication date: March 2018
      Source:Artificial Intelligence, Volume 256
      Author(s): Hans van Ditmarsch, Tim French, Fernando R. Velázquez-Quesada, Yì N. Wáng
      We compare different epistemic notions in the presence of awareness of propositional variables: the logic of implicit knowledge (in which explicit knowledge is definable), the logic of explicit knowledge, and the logic of speculative knowledge. Speculative knowledge is a novel epistemic notion that permits reasoning about unawareness. These logics are interpreted on epistemic awareness models: these are multi-agent Kripke structures for propositional awareness (in each state an agent may only be aware of formulas containing occurrences of a subset of all propositional variables). Different notions of bisimulation are suitable for these logics. We provide correspondence between bisimulation and modal equivalence on image-finite models for these logics. Expressivity and axiomatizations are investigated for models without restrictions, and for models with equivalence relations for all agents (modeling knowledge) and awareness introspection (agents know what they are aware of). We show that the logic of speculative knowledge is as expressive as the logic of explicit knowledge, and the logic of implicit knowledge is more expressive than the two other logics. We also present expressivity results for more restricted languages. We then provide and compare axiomatizations for the three logics; the axiomatizations for speculative knowledge are novel. We compare our results to those for awareness achieved in artificial intelligence, computer science, philosophy, and economics.

      PubDate: 2017-12-12T13:51:16Z
       
  • Reactive multi-context systems: Heterogeneous reasoning in dynamic
           environments
    • Abstract: Publication date: March 2018
      Source:Artificial Intelligence, Volume 256
      Author(s): Gerhard Brewka, Stefan Ellmauthaler, Ricardo Gonçalves, Matthias Knorr, João Leite, Jörg Pührer
      Managed multi-context systems (mMCSs) allow for the integration of heterogeneous knowledge sources in a modular and very general way. They were, however, mainly designed for static scenarios and are therefore not well-suited for dynamic environments in which continuous reasoning over such heterogeneous knowledge with constantly arriving streams of data is necessary. In this paper, we introduce reactive multi-context systems (rMCSs), a framework for reactive reasoning in the presence of heterogeneous knowledge sources and data streams. We show that rMCSs are indeed well-suited for this purpose by illustrating how several typical problems arising in the context of stream reasoning can be handled using them, by showing how inconsistencies possibly occurring in the integration of multiple knowledge sources can be handled, and by arguing that the potential non-determinism of rMCSs can be avoided if needed by using instead an alternative, more skeptical well-founded semantics with beneficial computational properties. We also investigate the computational complexity of various reasoning problems related to rMCSs. Finally, we discuss related work, and show that rMCSs not only generalize mMCSs to dynamic settings, but also capture/extend relevant approaches w.r.t. dynamics in knowledge representation and stream reasoning.

      PubDate: 2017-12-12T13:51:16Z
       
  • Fixpoint semantics for active integrity constraints
    • Abstract: Publication date: February 2018
      Source:Artificial Intelligence, Volume 255
      Author(s): Bart Bogaerts, Luís Cruz-Filipe
      Active integrity constraints (AICs) constitute a formalism to associate with a database not just the constraints it should adhere to, but also how to fix the database in case one or more of these constraints are violated. The intuitions regarding which repairs are “good” given such a description are closely related to intuitions that live in various areas of non-monotonic reasoning, such as logic programming and autoepistemic logic. In this paper, we apply approximation fixpoint theory, an abstract, algebraic framework designed to unify semantics of non-monotonic logics, to the field of AICs. This results in a new family of semantics for AICs. We study properties of our new semantics and relationships to existing semantics. In particular, we argue that two of the newly defined semantics stand out. Grounded repairs have a simple definition that is purely based on semantic principles that semantics for AICs should adhere to. And, as we show, they coincide with the intended interpretation of AICs on many examples. The second semantics of interest is the AFT-well-founded semantics: it is a computationally cheap semantics that provides upper and lower bounds for many other classes of repairs.

      PubDate: 2017-12-12T13:51:16Z
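As a minimal illustration of the AIC idea (not of the paper's grounded or AFT-well-founded semantics), an active integrity constraint pairs a violation check with an update action, and a naive repair loop applies actions until no constraint fires. The predicate names and the first-violated strategy below are invented for the example.

```python
def violated(db, aics):
    """Return the active integrity constraints whose body holds in db."""
    return [a for a in aics if a["body"](db)]

def naive_repair(db, aics, max_steps=10):
    """Apply the update action of the first violated AIC until the
    database satisfies every constraint (or give up)."""
    db = set(db)
    for _ in range(max_steps):
        bad = violated(db, aics)
        if not bad:
            return db
        bad[0]["fix"](db)
    return None  # no repair found within the step bound

# hypothetical constraint: every employee must have a desk
needs_desk = {
    "body": lambda db: "emp_ann" in db and "desk_ann" not in db,
    "fix": lambda db: db.add("desk_ann"),
}
```

The paper's semantics refine exactly the question this loop dodges: which of the many reachable fixpoints count as "good" repairs.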
       
  • Strong Temporal Planning with Uncontrollable Durations
    • Abstract: Publication date: Available online 5 December 2017
      Source:Artificial Intelligence
      Author(s): Alessandro Cimatti, Minh Do, Andrea Micheli, Marco Roveri, David E. Smith
      Planning in real world domains often involves modeling and reasoning about the duration of actions. Temporal planning allows such modeling and reasoning by looking for plans that specify start and end time points for each action. In many practical cases, however, the duration of actions may be uncertain and not under the full control of the executor. For example, a navigation task may take more or less time, depending on external conditions such as terrain or weather. In this paper, we tackle the problem of strong temporal planning with uncontrollable action durations (STPUD). For actions with uncontrollable durations, the planner is only allowed to choose the start of the actions, while the end is chosen, within known bounds, by the environment. A solution plan must be robust with respect to all uncontrollable action durations, and must achieve the goal on all executions, despite the choices of the environment. We propose two complementary techniques. First, we discuss a dedicated planning method, that generalizes the state-space temporal planning framework, leveraging SMT-based techniques for temporal networks under uncertainty. Second, we present a compilation-based method, that reduces any STPUD problem to an ordinary temporal planning problem. Moreover, we investigate a set of sufficient conditions to simplify domains by removing some of the uncontrollability. We implemented both our approaches, and we experimentally evaluated our techniques on a large number of instances. Our results demonstrate the practical applicability of the two techniques, which show complementary behavior.

      PubDate: 2017-12-12T13:51:16Z
       
  • A Computational Framework for Conceptual Blending
    • Abstract: Publication date: Available online 2 December 2017
      Source:Artificial Intelligence
      Author(s): Manfred Eppe, Ewen Maclean, Roberto Confalonieri, Oliver Kutz, Marco Schorlemmer, Enric Plaza, Kai-Uwe Kühnberger
      We present a computational framework for conceptual blending, a concept invention method that is advocated in cognitive science as a fundamental and uniquely human engine for creative thinking. Our framework treats a crucial part of the blending process, namely the generalisation of input concepts, as a search problem that is solved by means of modern answer set programming methods to find commonalities among input concepts. We also address the problem of pruning the space of possible blends by introducing metrics that capture most of the so-called optimality principles, described in the cognitive science literature as guidelines to produce meaningful and serendipitous blends. As a proof of concept, we demonstrate how our system invents novel concepts and theories in domains where creativity is crucial, namely mathematics and music.

      PubDate: 2017-12-12T13:51:16Z
       
  • Rational Deployment of Multiple Heuristics in Optimal State-Space Search
    • Authors: Erez Karpas; Oded Betzalel; Solomon Eyal Shimony; David Tolpin; Ariel Felner
      Abstract: Publication date: Available online 7 November 2017
      Source:Artificial Intelligence
      Author(s): Erez Karpas, Oded Betzalel, Solomon Eyal Shimony, David Tolpin, Ariel Felner
      The obvious way to use several admissible heuristics in searching for an optimal solution is to take their maximum. In this paper, we aim to reduce the time spent on computing heuristics within the context of A* and IDA*. We discuss Lazy A* and Lazy IDA*, variants of A* and IDA*, respectively, where heuristics are evaluated lazily: only when they are essential to a decision to be made in the search process. While these lazy algorithms outperform naive maximization, we can do even better by intelligently deciding when to compute the more expensive heuristic. We present a new rational metareasoning based scheme which decides whether to compute the more expensive heuristics at all, based on a myopic regret estimate. This scheme is used to create rational lazy A* and rational lazy IDA*. We also present different methods for estimating the parameters necessary for making such decisions. An empirical evaluation in several domains supports the theoretical results, and shows that the rational variants, rational lazy A* and rational lazy IDA*, are better than their non-rational counterparts.

      PubDate: 2017-11-09T02:30:24Z
      DOI: 10.1016/j.artint.2017.11.001
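The lazy-evaluation idea can be sketched independently of the paper's metareasoning layer: nodes enter the open list with only the cheap heuristic, and the costly heuristic is evaluated the first time a node surfaces. A toy sketch under unit edge costs; the re-queueing rule and tuple layout are assumptions of this sketch, not the paper's pseudocode.

```python
import heapq

def lazy_astar(start, goal, succ, h_cheap, h_costly):
    """Lazy A* sketch: the costly heuristic is computed only when a node
    first reaches the top of the open list (unit edge costs assumed)."""
    open_ = [(h_cheap(start), 0, start, False)]  # (f, g, node, costly_done)
    best_g = {start: 0}
    while open_:
        f, g, node, costly_done = heapq.heappop(open_)
        if g > best_g.get(node, float("inf")):
            continue  # stale entry
        if not costly_done:
            h2 = h_costly(node)
            if h2 > f - g:  # costly heuristic raised the estimate: re-queue
                heapq.heappush(open_, (g + h2, g, node, True))
                continue
        if node == goal:
            return g
        for nxt in succ(node):
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_, (ng + h_cheap(nxt), ng, nxt, False))
    return None
```

Nodes that are pushed but never re-surface skip the costly evaluation entirely, which is where the savings over naive maximization come from.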
       
  • Fundamental properties of attack relations in structured argumentation
           with priorities
    • Authors: Phan Minh Dung; Phan Minh Thang
      Abstract: Publication date: Available online 7 November 2017
      Source:Artificial Intelligence
      Author(s): Phan Minh Dung, Phan Minh Thang
      Due to a proliferation and diversity of approaches to structured argumentation with prioritized rules, several simple and intuitive principles for the characterization and evaluation of the proposed attack relations have recently been introduced in [23]. While the proposed principles and properties are helpful, they do not identify unique attack relations. Any user of structured argumentation still faces a fundamental problem of determining an appropriate attack relation for her/his application and further principles that could help in identifying such an attack relation. We argue that a key purpose of introducing priorities between defeasible rules is to remove undesired attacks while keeping the set of removed attacks to a minimum. This intuitive idea could be viewed as a kind of minimal-removal principle. We show in this paper that the minimal-removal principle, together with a new simple and intuitive property of inconsistency-resolving and previously proposed properties, indeed characterizes a unique attack relation, referred to as the canonical attack relation. We show that canonical attack relations can be characterized in three distinct ways: as the supremum of a complete upper-semilattice of regular attack relations; by removing the undesired attacks from the basic attack relations, where the undesired attacks are captured by a least fixed point of an intuitive removal function; or as the normal attack relations introduced in an earlier paper for a class of well-prioritized knowledge bases. We start our study with a language consisting only of literals and two types of attacks, rebut and undercut. We then show that our approach can easily be scaled up, by showing that all key results still hold for general underlying logical languages and the inclusion of assumptions. We apply our proposed approach to value-based argumentation and show that it also leads to the canonical semantics.

      PubDate: 2017-11-09T02:30:24Z
      DOI: 10.1016/j.artint.2017.11.002
       
  • Taking Account of the Actions of Others in Value-based Reasoning
    • Authors: Katie Atkinson; Trevor Bench-Capon
      Abstract: Publication date: Available online 28 September 2017
      Source:Artificial Intelligence
      Author(s): Katie Atkinson, Trevor Bench-Capon
      Practical reasoning, reasoning about what actions should be chosen, is highly dependent both on the individual values of the agent concerned and on what others choose to do. Hitherto, computational models of value-based argumentation for practical reasoning have required assumptions to be made about the beliefs and preferences of other agents. Here we present a new method for taking the actions of others into account that does not require these assumptions: the only beliefs and preferences considered are those of the agent engaged in the reasoning. Our new formalism draws on utility-based approaches and expresses the reasoning in the form of arguments and objections, to enable full integration with value-based practical reasoning. We illustrate our approach by showing how value-based reasoning is modelled in two scenarios used in experimental economics, the Ultimatum Game and the Prisoner's Dilemma, and we present an evaluation of our approach in terms of these experiments. The evaluation demonstrates that our model is able to reproduce computationally the results of ethnographic experiments, serving as an encouraging validation exercise.

      PubDate: 2017-10-04T05:01:40Z
      DOI: 10.1016/j.artint.2017.09.002
       
  • Resolving distributed knowledge
    • Authors: Thomas Ågotnes; Yì N. Wáng
      Abstract: Publication date: November 2017
      Source:Artificial Intelligence, Volume 252
      Author(s): Thomas Ågotnes, Yì N. Wáng
      In epistemic logic, a key formal theory for reasoning about knowledge in AI and other fields, different notions of group knowledge describe different ways in which knowledge can be associated with a group of agents. Distributed knowledge can be seen as the sum of the knowledge in a group; it is sometimes referred to as the potential knowledge of a group, or the joint knowledge they could obtain if they had unlimited means of communication. In epistemic logic, a formula of the form D_G φ is intended to express the fact that group G has distributed knowledge of φ, that is, that the total information in the group can be used to infer φ. In this paper we show that this is not the same as φ necessarily being true after the members of the group actually share all their information with each other – perhaps contrary to intuitive ideas about what distributed knowledge is. We furthermore introduce a new operator R_G, such that R_G φ means that φ is true after the members of G have shared all their information with each other – after G's distributed knowledge has been resolved. The R_G operators are called resolution operators. We study logics with different combinations of resolution operators and operators for common and distributed knowledge. Of particular interest is the relationship between distributed and common knowledge. The main results are characterizations of expressive power, and sound and complete axiomatizations. We also study the relationship to public announcement logic.

      PubDate: 2017-09-26T05:22:05Z
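On finite Kripke models the contrast the paper draws is easy to compute: distributed knowledge uses the intersection of the agents' accessibility relations, while resolving replaces each agent's relation with that intersection. A small sketch (the encoding of relations as sets of world pairs is my own, not the paper's):

```python
def dist_relation(relations):
    """Accessibility relation for distributed knowledge: the
    intersection of all agents' relations."""
    rels = iter(relations.values())
    result = set(next(rels))
    for r in rels:
        result &= set(r)
    return result

def knows(relation, w, prop):
    """`prop` is known at world w iff it holds at every world
    accessible from w."""
    return all(prop(v) for (u, v) in relation if u == w)

def resolve(relations):
    """Model after the group shares everything (the R_G update, sketched):
    every agent's relation collapses to the distributed one."""
    d = dist_relation(relations)
    return {agent: d for agent in relations}
```

In the test below, neither agent knows p alone, but the group has distributed knowledge of it, and after resolution each agent knows it individually.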
       
  • Intelligent agent supporting human-multi-robot team collaboration
    • Authors: Ariel Rosenfeld; Noa Agmon; Oleg Maksimov; Sarit Kraus
      Abstract: Publication date: Available online 31 August 2017
      Source:Artificial Intelligence
      Author(s): Ariel Rosenfeld, Noa Agmon, Oleg Maksimov, Sarit Kraus
      The number of multi-robot systems deployed in field applications has risen dramatically over the years. Nevertheless, supervising and operating multiple robots simultaneously is a difficult task for a single operator to execute. In this article we propose a novel approach for utilizing automated advising agents in assisting an operator to better manage a team of multiple robots in complex environments. We introduce an advice provision methodology and exemplify its implementation using automated advising agents in two real-world human-multi-robot team collaboration tasks: the Search And Rescue (SAR) and the warehouse operation tasks. Our intelligent advising agents were evaluated through extensive field trials, with over 150 human operators using both simulated and physical mobile robots, and showed a significant improvement in the team's performance.

      PubDate: 2017-09-02T00:31:24Z
      DOI: 10.1016/j.artint.2017.08.005
       
  • Distributed first order logic
    • Authors: Chiara Ghidini; Luciano Serafini
      Abstract: Publication date: Available online 30 August 2017
      Source:Artificial Intelligence
      Author(s): Chiara Ghidini, Luciano Serafini
      Distributed First Order Logic (DFOL) was introduced more than ten years ago with the purpose of formalising distributed knowledge-based systems, where knowledge about heterogeneous domains is scattered across a set of interconnected modules. DFOL formalises the knowledge contained in each module by means of first-order theories, and the interconnections between modules by means of special inference rules called bridge rules. Despite their restricted form in the original DFOL formulation, bridge rules have influenced several works in the areas of heterogeneous knowledge integration, modular knowledge representation, and schema/ontology matching. This, in turn, has fostered extensions and modifications of the original DFOL that have never been systematically described and published. This paper tackles the lack of a comprehensive description of DFOL by providing a systematic account of a completely revised and extended version of the logic, together with a sound and complete axiomatisation of a general form of bridge rules based on Natural Deduction. The resulting DFOL framework is then proposed as a clear formal tool for the representation of and reasoning about distributed knowledge and bridge rules.

      PubDate: 2017-09-02T00:31:24Z
      DOI: 10.1016/j.artint.2017.08.008
       
  • The MADLA Planner: Multi-Agent Planning by Combination of Distributed and
           Local Heuristic Search
    • Authors: Michal Štolba; Antonín Komenda
      Abstract: Publication date: Available online 30 August 2017
      Source:Artificial Intelligence
      Author(s): Michal Štolba, Antonín Komenda
      Real-world applications often require the cooperation of multiple independent entities. Classical planning is a well-established technique for solving various challenging problems such as logistics planning, factory process planning, military mission planning and high-level planning for robots. Multi-agent planning aims at solving similar problems in the presence of multiple independent entities (agents). Even though such entities might want to cooperate in order to fulfill a common goal, they may want to keep their internal information and processes private. In such cases, we talk about privacy-preserving multi-agent planning. So far, multi-agent planners based on heuristic search have used either a local heuristic estimating the particular agent's local subproblem or a distributed heuristic estimating the global problem as a whole. In this paper, we present the Multi-Agent Distributed and Local Asynchronous (MADLA) Planner, running a novel variant of a distributed state-space forward-chaining multi-heuristic search which uses a local and a distributed heuristic together in order to combine their benefits. In particular, the planner uses two variants of the well-known Fast-Forward heuristic. We provide proofs of the soundness and completeness of the search algorithm and show how much, and what type of, privacy it preserves. We also provide an improved privacy-preserving distribution scheme for the Fast-Forward heuristic. We experimentally compare the newly proposed multi-heuristic scheme and the two heuristics used separately. The results show that the proposed solution outperforms classical (single-heuristic) distributed search with either one of the heuristics used on its own. In a detailed experimental analysis, we show the limits of the planner and of the heuristics used, based on particular properties of the benchmark domains.
On a comprehensive set of multi-agent planning domains and problems, we show that the MADLA Planner outperforms all contemporary state-of-the-art privacy-preserving multi-agent planners using a compatible planning model.

      PubDate: 2017-09-02T00:31:24Z
      DOI: 10.1016/j.artint.2017.08.007
       
  • Three-valued semantics for hybrid MKNF knowledge bases revisited
    • Authors: Fangfang Liu; Jia-Huai You
      Abstract: Publication date: Available online 24 August 2017
      Source:Artificial Intelligence
      Author(s): Fangfang Liu, Jia-Huai You
      Knorr et al. (2011) formulated a three-valued formalism for the logic of Minimal Knowledge and Negation as Failure (MKNF), and proposed a well-founded semantics for hybrid MKNF knowledge bases. The main results state that if a hybrid MKNF knowledge base has a three-valued MKNF model, its well-founded MKNF model exists, which is unique and can be computed by an alternating fixpoint construction. However, we show that these claims are erroneous. The goal of this paper is two-fold. First, we provide insight into why these claims fail to hold. This leads to a classification of hybrid MKNF knowledge bases into a hierarchy of four classes, of which the smallest one is what works for the well-founded semantics proposed by Knorr et al. After that, we characterize three-valued MKNF models into what we call stable partitions, which are similar to partial stable models in logic programming. The intuitive notion of stable partitions allows us to discover a new proof-theoretic method to determine whether a three-valued MKNF model exists for a given hybrid MKNF knowledge base. We can guess partial partitions and verify their stability by computing alternating fixpoints and by performing a consistency check. This algorithm can be applied to compute all three-valued MKNF models, as well as all two-valued ones originally formulated by Motik and Rosati for query answering. As a result, our work provides a uniform characterization of well-founded, two-valued, and all three-valued MKNF models, in terms of stable partitions and the alternating fixpoint construction.

      PubDate: 2017-09-02T00:31:24Z
      DOI: 10.1016/j.artint.2017.08.003
       
  • Iterative voting and acyclic games
    • Authors: Reshef Meir; Maria Polukarov; Jeffrey S. Rosenschein; Nicholas R. Jennings
      Abstract: Publication date: Available online 24 August 2017
      Source:Artificial Intelligence
      Author(s): Reshef Meir, Maria Polukarov, Jeffrey S. Rosenschein, Nicholas R. Jennings
      Multi-agent decision problems, in which independent agents have to agree on a joint plan of action or allocation of resources, are central to artificial intelligence. In such situations, agents' individual preferences over available alternatives may vary, and they may try to reconcile these differences by voting. We consider scenarios where voters cannot coordinate their actions, but are allowed to change their vote after observing the current outcome, as is often the case both in offline committees and in online voting. Specifically, we are interested in identifying conditions under which such iterative voting processes are guaranteed to converge to a Nash equilibrium state—that is, under which this process is acyclic. We classify convergence results based on the underlying assumptions about the agent scheduler (the order in which the agents take their actions) and the action scheduler (the actions available to the agents at each step). By so doing, we position iterative voting models within the general framework of acyclic games and game forms. In more detail, our main technical results provide a complete picture of conditions for acyclicity in several variations of Plurality voting. In particular, we show that (a) under the traditional lexicographic tie-breaking, the game converges from any state and for any order of agents, under a weak restriction on voters' actions; and that (b) Plurality with randomized tie-breaking is not guaranteed to converge under arbitrary agent schedulers, but there is always some path of better replies from any initial state of the game to a Nash equilibrium. We thus show a first separation between order-free acyclicity and weak acyclicity of game forms, thereby settling an open question from [Kukushkin 2011]. In addition, we refute another conjecture of Kukushkin regarding strongly acyclic voting rules, by proving the existence of strongly acyclic separable game forms.

      PubDate: 2017-09-02T00:31:24Z
      DOI: 10.1016/j.artint.2017.08.002
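The Plurality dynamics studied here are easy to simulate: voters start truthful and, in a fixed order, switch to any ballot whose outcome they strictly prefer, until nobody wants to move. A sketch with lexicographic tie-breaking; the round-robin agent scheduler and better-reply action scheduler are one particular choice among those the paper classifies.

```python
from collections import Counter

def plurality_winner(ballots, candidates):
    """Plurality outcome with lexicographic tie-breaking."""
    tally = Counter(ballots)
    return min(candidates, key=lambda c: (-tally[c], c))

def iterate_to_equilibrium(prefs, candidates, max_rounds=100):
    """Round-robin better-reply dynamics: each voter in turn switches to
    a ballot whose outcome they strictly prefer; stop at a Nash
    equilibrium (no voter wants to move)."""
    ballots = [p[0] for p in prefs]  # start from truthful ballots
    for _ in range(max_rounds):
        moved = False
        for i, pref in enumerate(prefs):
            current = plurality_winner(ballots, candidates)
            for c in candidates:
                if c == ballots[i]:
                    continue
                trial = ballots[:]
                trial[i] = c
                w = plurality_winner(trial, candidates)
                if pref.index(w) < pref.index(current):  # strict improvement
                    ballots[i] = c
                    moved = True
                    break
        if not moved:
            return ballots, plurality_winner(ballots, candidates)
    return None  # no convergence within the round bound
```

In the test below the third voter abandons a doomed favourite and the process settles on a compromise winner, illustrating the convergence the paper proves for this tie-breaking rule.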
       
  • Common Sense, The Turing Test, and The Quest for Real AI, Hector Levesque.
           The MIT Press (2017), 142
    • Authors: David Israel (Emeritus)
      Abstract: Publication date: Available online 23 August 2017
      Source:Artificial Intelligence
      Author(s): David Israel (Emeritus)


      PubDate: 2017-09-02T00:31:24Z
      DOI: 10.1016/j.artint.2017.08.004
       
  • MM: A bidirectional search algorithm that is guaranteed to meet in the
           middle
    • Authors: Robert C. Holte; Ariel Felner; Guni Sharon; Nathan R. Sturtevant; Jingwei Chen
      Abstract: Publication date: Available online 22 August 2017
      Source:Artificial Intelligence
      Author(s): Robert C. Holte, Ariel Felner, Guni Sharon, Nathan R. Sturtevant, Jingwei Chen
      Bidirectional search algorithms interleave two separate searches, a normal search forward from the start state, and a search backward from the goal. It is well known that adding a heuristic to unidirectional search dramatically reduces the search effort. By contrast, despite decades of research, bidirectional heuristic search has not yet had a major impact. Additionally, no comprehensive theory was ever devised to understand the nature of bidirectional heuristic search. In this paper we aim to close this gap. We first present MM, a novel bidirectional heuristic search algorithm. Unlike previous bidirectional heuristic search algorithms, MM's forward and backward searches are guaranteed to “meet in the middle”, i.e. never expand a node beyond the solution midpoint. Based on this unique attribute we present a novel framework for comparing MM, A*, and their brute-force variants. We do this by dividing the entire state space into disjoint regions based on their distance from the start and goal. This allows us to perform a comparison of these algorithms on a per region basis and identify conditions favoring each algorithm. Finally, we present experimental results that support our theoretical analysis.

      PubDate: 2017-09-02T00:31:24Z
      DOI: 10.1016/j.artint.2017.05.004
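
The "meet in the middle" guarantee comes from MM's priority function pr(n) = max(g(n) + h(n), 2g(n)): the 2g(n) term keeps either frontier from expanding a node past half the solution cost. Below is a minimal sketch of that priority scheme with a simplified stopping test (the paper's termination condition combines several tighter lower bounds); the graph encoding and heuristics are illustrative assumptions.

```python
import heapq

def mm(neighbors, h_f, h_b, start, goal):
    """Sketch of MM bidirectional search: priority pr(n) = max(g + h, 2g).
    neighbors(n) yields (successor, edge_cost); h_f/h_b are admissible
    forward/backward heuristics. Returns the optimal solution cost."""
    INF = float("inf")
    opens = ([(max(h_f(start), 0), 0, start)],   # forward open list
             [(max(h_b(goal), 0), 0, goal)])     # backward open list
    g = ({start: 0}, {goal: 0})
    U = INF                                      # best solution found so far
    while opens[0] and opens[1]:
        pr = (opens[0][0][0], opens[1][0][0])
        if U <= max(pr):                         # simplified stop test
            return U
        d = 0 if pr[0] <= pr[1] else 1           # expand the cheaper side
        _, gn, n = heapq.heappop(opens[d])
        if gn > g[d][n]:
            continue                             # stale queue entry
        h = h_f if d == 0 else h_b
        for m, cost in neighbors(n):
            gm = gn + cost
            if gm < g[d].get(m, INF):
                g[d][m] = gm
                heapq.heappush(opens[d], (max(gm + h(m), 2 * gm), gm, m))
                if m in g[1 - d]:                # frontiers meet
                    U = min(U, gm + g[1 - d][m])
    return U
```

With h = 0 this degenerates to the brute-force variant (MM0) the paper also analyses.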
       
  • Generalized possibilistic logic: Foundations and applications to
           qualitative reasoning about uncertainty
    • Authors: Didier Dubois; Henri Prade; Steven Schockaert
      Abstract: Publication date: Available online 9 August 2017
      Source:Artificial Intelligence
      Author(s): Didier Dubois, Henri Prade, Steven Schockaert
      This paper introduces generalized possibilistic logic (GPL), a logic for epistemic reasoning based on possibility theory. Formulas in GPL correspond to propositional combinations of assertions such as “it is certain to degree λ that the propositional formula α is true”. As its name suggests, the logic generalizes possibilistic logic (PL), which at the syntactic level only allows conjunctions of the aforementioned type of assertions. At the semantic level, PL can only encode sets of epistemic states encompassed by a single least informed one, whereas GPL can encode any set of epistemic states. This feature makes GPL particularly suitable for reasoning about what an agent knows about the beliefs of another agent, e.g., allowing the former to draw conclusions about what the other agent does not know. We introduce an axiomatization for GPL and show its soundness and completeness w.r.t. possibilistic semantics. Subsequently, we highlight the usefulness of GPL as a powerful unifying framework for various knowledge representation formalisms. Among others, we show how comparative uncertainty and ignorance can be modelled in GPL. We also exhibit a close connection between GPL and various existing formalisms, including possibilistic logic with partially ordered formulas, a logic of conditional assertions in the style of Kraus, Lehmann and Magidor, answer set programming and a fragment of the logic of minimal belief and negation as failure. Finally, we analyse the computational complexity of reasoning in GPL, identifying decision problems at the first, second, third and fourth level of the polynomial hierarchy.

      PubDate: 2017-09-02T00:31:24Z
      DOI: 10.1016/j.artint.2017.08.001
       
  • Minimal sets on propositional formulae. Problems and reductions
    • Authors: Joao Marques-Silva; Mikoláš Janota; Carlos Mencía
      Abstract: Publication date: Available online 2 August 2017
      Source:Artificial Intelligence
      Author(s): Joao Marques-Silva, Mikoláš Janota, Carlos Mencía
      Boolean Satisfiability (SAT) is arguably the archetypical NP-complete decision problem. Progress in SAT solving algorithms has motivated an ever increasing number of practical applications in recent years. However, many practical uses of SAT involve solving function as opposed to decision problems. Concrete examples include computing minimal unsatisfiable subsets, minimal correction subsets, prime implicates and implicants, minimal models, backbone literals, and autarkies, among several others. In most cases, solving a function problem requires a number of adaptive or non-adaptive calls to a SAT solver. Given the computational complexity of SAT, it is therefore important to develop algorithms that either require the smallest possible number of calls to the SAT solver, or that involve simpler instances. This paper addresses a number of representative function problems defined on Boolean formulae, and shows that all these function problems can be reduced to a generic problem of computing a minimal set subject to a monotone predicate. This problem is referred to as the Minimal Set over Monotone Predicate (MSMP) problem. This work provides new ways for solving well-known function problems, including prime implicates, minimal correction subsets, backbone literals, independent variables and autarkies, among several others. Moreover, this work also motivates the development of more efficient algorithms for the MSMP problem. Finally the paper outlines a number of areas of future research related with extensions of the MSMP problem.

      PubDate: 2017-08-02T23:35:09Z
      DOI: 10.1016/j.artint.2017.07.005
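
The simplest algorithm for the MSMP problem is the deletion-based one: starting from a set on which the monotone predicate holds, drop every element whose removal preserves the predicate. A minimal sketch (here `pred` stands for any monotone test, e.g. "this subset of clauses is unsatisfiable", which would be answered by a SAT-solver call):

```python
def minimal_set(elements, pred):
    """Deletion-based algorithm for Minimal Set over Monotone Predicate:
    given pred monotone (pred(A) and A <= B imply pred(B)) with
    pred(elements) true, return a subset-minimal S with pred(S) true."""
    s = set(elements)
    for e in list(s):            # try to drop each element once
        if pred(s - {e}):        # predicate survives without e
            s.discard(e)
    return s
```

Monotonicity guarantees that one linear pass suffices: once an element cannot be dropped, later removals never make it droppable.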
       
  • Belief revision and projection in the epistemic situation calculus
    • Authors: Christoph Schwering; Gerhard Lakemeyer; Maurice Pagnucco
      Abstract: Publication date: Available online 31 July 2017
      Source:Artificial Intelligence
      Author(s): Christoph Schwering, Gerhard Lakemeyer, Maurice Pagnucco
      This article considers defeasible beliefs in dynamic settings. In particular, we examine the belief projection problem: what is believed after performing an action and/or receiving new information? The approach is based on an epistemic variant of Reiter's situation calculus, where actions not only have physical effects but may also provide new information to the agent. The preferential belief structure is initially determined using conditional statements. New information is then incorporated using two popular belief revision schemes, namely natural and lexicographic revision. The projection problem is solved twofold in this formalism: by goal regression and by knowledge base progression.

      PubDate: 2017-08-02T23:35:09Z
      DOI: 10.1016/j.artint.2017.07.004
       
  • Automatically Improving Constraint Models in Savile Row
    • Authors: Peter Nightingale; Özgür Akgün; Ian P. Gent; Christopher Jefferson; Ian Miguel; Patrick Spracklen
      Abstract: Publication date: Available online 13 July 2017
      Source:Artificial Intelligence
      Author(s): Peter Nightingale, Özgür Akgün, Ian P. Gent, Christopher Jefferson, Ian Miguel, Patrick Spracklen
      When solving a combinatorial problem using Constraint Programming (CP) or Satisfiability (SAT), modelling and formulation are vital and difficult tasks. Even an expert human may explore many alternatives in modelling a single problem. We make a number of contributions in the automated modelling and reformulation of constraint models. We study a range of automated reformulation techniques, finding combinations of techniques which perform particularly well together. We introduce and describe in detail a new algorithm, X-CSE, to perform Associative-Commutative Common Subexpression Elimination (AC-CSE) in constraint problems, significantly improving existing CSE techniques for associative and commutative operators such as +. We demonstrate that these reformulation techniques can be integrated in a single automated constraint modelling tool, called Savile Row, whose architecture we describe. We use Savile Row as an experimental testbed to evaluate each reformulation on a set of 50 problem classes, with 596 instances in total. Our recommended reformulations are well worthwhile even including overheads, especially on harder instances where solver time dominates. With a SAT solver we observed a geometric mean of 2.15 times speedup compared to a straightforward tailored model without recommended reformulations. Using a CP solver, we obtained a geometric mean of 5.96 times speedup for instances taking over 10 seconds to solve.

      PubDate: 2017-07-23T12:45:36Z
      DOI: 10.1016/j.artint.2017.07.001
       
  • Latent tree models for hierarchical topic detection
    • Authors: Peixian Chen; Nevin L. Zhang; Tengfei Liu; Leonard K.M. Poon; Zhourong Chen; Farhan Khawar
      Abstract: Publication date: Available online 29 June 2017
      Source:Artificial Intelligence
      Author(s): Peixian Chen, Nevin L. Zhang, Tengfei Liu, Leonard K.M. Poon, Zhourong Chen, Farhan Khawar
      We present a novel method for hierarchical topic detection where topics are obtained by clustering documents in multiple ways. Specifically, we model document collections using a class of graphical models called hierarchical latent tree models (HLTMs). The variables at the bottom level of an HLTM are observed binary variables that represent the presence/absence of words in a document. The variables at other levels are binary latent variables that represent word co-occurrence patterns or co-occurrences of such patterns. Each latent variable gives a soft partition of the documents, and document clusters in the partitions are interpreted as topics. Latent variables at high levels of the hierarchy capture long-range word co-occurrence patterns and hence give thematically more general topics, while those at low levels of the hierarchy capture short-range word co-occurrence patterns and give thematically more specific topics. In comparison with LDA-based methods, a key advantage of the new method is that it represents co-occurrence patterns explicitly using model structures. Extensive empirical results show that the new method significantly outperforms the LDA-based methods in terms of model quality and meaningfulness of topics and topic hierarchies.

      PubDate: 2017-07-02T13:12:00Z
      DOI: 10.1016/j.artint.2017.06.004
       
  • Impossibility in Belief Merging
    • Authors: Amílcar Mata Díaz; Ramón Pino Pérez
      Abstract: Publication date: Available online 29 June 2017
      Source:Artificial Intelligence
      Author(s): Amílcar Mata Díaz, Ramón Pino Pérez
      With the aim of studying social properties of belief merging and having a better understanding of impossibility, we extend in three ways the framework of logic-based merging introduced by Konieczny and Pino Pérez. First, at the level of representation of the information, we pass from belief bases to complex epistemic states. Second, the profiles are represented as functions of finite societies to the set of epistemic states (a sort of vectors) and not as multisets of epistemic states. Third, we extend the set of rational postulates in order to consider the epistemic versions of the classical postulates of social choice theory: standard domain, Pareto property, independence of irrelevant alternatives and absence of dictator. These epistemic versions of social postulates are given, essentially, in terms of the finite propositional logic. We state some representation theorems for these operators. These extensions and representation theorems allow us to establish an epistemic and very general version of Arrow's impossibility theorem. One of the interesting features of our result is that it holds for different representations of epistemic states, for instance conditionals, ordinal conditional functions and, of course, total preorders.

      PubDate: 2017-07-02T13:12:00Z
      DOI: 10.1016/j.artint.2017.06.003
       
  • Understanding the Complexity of Axiom Pinpointing in Lightweight
           Description Logics
    • Authors: Rafael Peñaloza; Barış Sertkaya
      Abstract: Publication date: Available online 17 June 2017
      Source:Artificial Intelligence
      Author(s): Rafael Peñaloza, Barış Sertkaya
      Lightweight description logics are knowledge representation formalisms characterised by the low complexity of their standard reasoning tasks. They have been successfully employed for constructing large ontologies that model domain knowledge in several different practical applications. In order to maintain these ontologies, it is often necessary to detect the axioms that cause a given consequence. This task is commonly known as axiom pinpointing. In this paper, we provide a thorough analysis of the complexity of several decision, counting, and enumeration problems associated with axiom pinpointing in lightweight description logics.

      PubDate: 2017-06-22T03:33:40Z
      DOI: 10.1016/j.artint.2017.06.002
       
  • A Progression Semantics for First-Order Logic Programs
    • Authors: Yi Zhou; Yan Zhang
      Abstract: Publication date: Available online 9 June 2017
      Source:Artificial Intelligence
      Author(s): Yi Zhou, Yan Zhang
      In this paper, we propose a progression semantics for first-order normal logic programs, and show that it is equivalent to the well-known stable model (answer set) semantics. The progressional definition sheds new insights into Answer Set Programming (ASP), for instance, its relationships to Datalog, First-Order Logic (FOL) and Satisfiability Modulo Theories (SMT). As an example, we extend the notion of boundedness in Datalog for ASP, and show that it coincides with the notions of recursion-freeness and loop-freeness under program equivalence. In addition, we prove that boundedness precisely captures first-order definability for normal logic programs on arbitrary structures. Finally, we show that the progressional definition suggests an alternative translation from ASP to SMT, which yields a new way of implementing first-order ASP.

      PubDate: 2017-06-12T01:05:42Z
      DOI: 10.1016/j.artint.2017.06.001
       
  • Discovering visual concept structure with sparse and incomplete tags
    • Authors: Jingya Wang; Xiatian Zhu; Shaogang Gong
      Abstract: Publication date: Available online 31 May 2017
      Source:Artificial Intelligence
      Author(s): Jingya Wang, Xiatian Zhu, Shaogang Gong
      Discovering automatically the semantic structure of tagged visual data (e.g. web videos and images) is important for visual data analysis and interpretation, enabling machine intelligence to effectively process the fast-growing amount of multi-media data. However, this is non-trivial due to the need for jointly learning underlying correlations between heterogeneous visual and tag data. The task is made more challenging by inherently sparse and incomplete tags. In this work, we develop a method for modelling the inherent visual data concept structures based on a novel Hierarchical-Multi-Label Random Forest model capable of correlating structured visual and tag information so as to more accurately interpret the visual semantics, e.g. disclosing meaningful visual groups with similar high-level concepts, and recovering missing tags for individual visual data samples. Specifically, our model exploits hierarchically structured tags of different semantic abstractness and multiple tag statistical correlations in addition to modelling visual and tag interactions. As a result, our model is able to discover more accurate semantic correlation between textual tags and visual features and, finally, to provide favourable visual semantics interpretation even with highly sparse and incomplete tags. We demonstrate the advantages of our proposed approach in two fundamental applications, visual data clustering and missing tag completion, on benchmarking video (i.e. TRECVID MED 2011) and image (i.e. NUS-WIDE) datasets.

      PubDate: 2017-06-02T09:00:03Z
      DOI: 10.1016/j.artint.2017.05.002
       
  • Low-rank decomposition meets kernel learning: A generalized Nyström
           method
    • Authors: Liang Lan; Kai Zhang; Hancheng Ge; Wei Cheng; Jun Liu; Andreas Rauber; Xiao-Li Li; Jun Wang; Hongyuan Zha
      Abstract: Publication date: Available online 15 May 2017
      Source:Artificial Intelligence
      Author(s): Liang Lan, Kai Zhang, Hancheng Ge, Wei Cheng, Jun Liu, Andreas Rauber, Xiao-Li Li, Jun Wang, Hongyuan Zha
      Low-rank matrix decomposition and kernel learning are two useful techniques in building advanced learning systems. Low-rank decomposition can greatly reduce the computational cost of manipulating large kernel matrices. However, existing approaches are mostly unsupervised and do not incorporate side information such as class labels, making the decomposition less effective for a specific learning task. On the other hand, kernel learning techniques aim at constructing kernel matrices whose structure is well aligned with the learning target, which improves the generalization performance of kernel methods. However, most kernel learning approaches are computationally very expensive. To obtain the advantages of both techniques and address their limitations, in this paper we propose a novel kernel low-rank decomposition formulation called the generalized Nyström method. Our approach inherits the linear time and space complexity via matrix decomposition, while at the same time fully exploits (partial) label information in computing task-dependent decomposition. In addition, the resultant low-rank factors can generalize to arbitrary new samples, rendering great flexibility in inductive learning scenarios. We further extend the algorithm to a multiple kernel learning setup. The experimental results on semi-supervised classification demonstrate the usefulness of the proposed method.

      PubDate: 2017-05-18T08:47:00Z
      DOI: 10.1016/j.artint.2017.05.001
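
For context, the classical (unsupervised) Nyström method that the paper generalizes approximates an n×n kernel matrix from m sampled columns as K ≈ C W⁺ Cᵀ, where C holds the sampled columns and W the intersection block. A minimal NumPy sketch with an RBF kernel (illustrative parameters; the paper's label-aware, generalized decomposition is not shown):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """RBF kernel matrix between row sets A (n x d) and B (m x d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, landmarks, gamma=0.5):
    """Classical Nystrom low-rank approximation K ~ C @ pinv(W) @ C.T
    from the m landmark columns indexed by `landmarks`."""
    C = rbf_kernel(X, X[landmarks], gamma)              # n x m columns
    W = rbf_kernel(X[landmarks], X[landmarks], gamma)   # m x m block
    return C @ np.linalg.pinv(W) @ C.T
```

When all points are landmarks the approximation is exact (K K⁺ K = K for a PSD kernel matrix), which is a handy sanity check.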
       
  • Strategy-proof School Choice Mechanisms with Minimum Quotas and Initial
           Endowments
    • Authors: Naoto Hamada; Chia-Ling Hsu; Ryoji Kurata; Takamasa Suzuki; Suguru Ueda; Makoto Yokoo
      Abstract: Publication date: Available online 28 April 2017
      Source:Artificial Intelligence
      Author(s): Naoto Hamada, Chia-Ling Hsu, Ryoji Kurata, Takamasa Suzuki, Suguru Ueda, Makoto Yokoo
      We consider a school choice program where minimum quotas are imposed for each school, i.e., a school must be assigned at least a certain number of students to operate. We require that the obtained matching must respect the initial endowments, i.e., each student must be assigned to a school that is at least as good as her initial endowment school. Although minimum quotas are relevant in school choice programs and strategy-proofness is important to many policymakers, few existing mechanisms simultaneously achieve both. One difficulty is that no strategy-proof mechanism exists that is both efficient and fair under the presence of minimum quotas. Furthermore, existing mechanisms require that all students consider all schools acceptable to obtain a feasible matching that respects minimum quotas. This assumption is unrealistic in a school choice program. We consider the environment where a student considers her initial endowment school acceptable and the initial endowments satisfy all the minimum quotas. We develop two strategy-proof mechanisms. One mechanism, which we call the Top Trading Cycles among Representatives with Supplementary Seats (TTCR-SS), is based on the Top Trading Cycles (TTC) mechanism and is significantly extended to handle the supplementary seats of schools while respecting minimum quotas. TTCR-SS is Pareto efficient. The other mechanism, which we call Priority List-based Deferred Acceptance with Minimum Quotas (PLDA-MQ), is based on the Deferred Acceptance (DA) mechanism. PLDA-MQ is fair, satisfies a concept called Priority List-based (PL-) stability, and obtains the student-optimal matching within all PL-stable matchings. Our simulation results show that our new mechanisms are significantly better than simple extensions of the existing mechanisms.

      PubDate: 2017-05-02T21:41:18Z
      DOI: 10.1016/j.artint.2017.04.006
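
TTCR-SS builds on the classic Top Trading Cycles mechanism of Shapley and Scarf, which is strategy-proof and Pareto efficient in the one-seat-per-school housing-market setting. A sketch of that classic building block on hypothetical data (the paper's handling of minimum quotas and supplementary seats is not shown):

```python
def top_trading_cycles(prefs, endow):
    """Classic Shapley-Scarf TTC: prefs maps each student to an ordered
    list of schools, endow maps each student to her endowment school
    (one seat each). Returns the final student -> school assignment."""
    owner = {school: s for s, school in endow.items()}
    remaining = set(prefs)
    assignment = {}
    while remaining:
        # each remaining student points at the owner of her best remaining school
        point = {s: owner[next(c for c in prefs[s] if owner[c] in remaining)]
                 for s in remaining}
        walk, s = [], next(iter(remaining))
        while s not in walk:               # follow pointers until a cycle closes
            walk.append(s)
            s = point[s]
        cycle = walk[walk.index(s):]
        for t in cycle:                    # trade endowments along the cycle
            assignment[t] = endow[point[t]]
        remaining -= set(cycle)
    return assignment
```

Every student in a cycle receives the endowment of the student she points at, so each round removes at least one cycle and the mechanism terminates.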
       
  • Constrained coalition formation on valuation structures: Formal framework,
           applications, and islands of tractability
    • Authors: Gianluigi Greco; Antonella Guzzo
      Abstract: Publication date: Available online 21 April 2017
      Source:Artificial Intelligence
      Author(s): Gianluigi Greco, Antonella Guzzo
      Coalition structure generation is the problem of partitioning the agents of a given environment into disjoint and exhaustive coalitions so that the whole available worth is maximized. While this problem has been classically studied in settings where all coalitions are allowed to form, it has been recently reconsidered in the literature moving from the observation that environments often forbid the formation of certain coalitions. By following this latter perspective, a model for coalition structure generation is proposed where constraints of two different kinds can be expressed simultaneously. Indeed, the model is based on the concept of valuation structure, which consists of a set of pivotal agents that are pairwise incompatible, plus an interaction graph prescribing that a coalition C can form only if the subgraph induced over the nodes/agents in C is connected. It is shown that valuation structures can be used to model a number of relevant problems arising in real-world application domains. Then, the complexity of coalition structure generation over valuation structures is studied, by assuming that the functions associating each coalition with its worth are given as input according to some compact encoding—rather than explicitly listing all exponentially-many associations. In particular, islands of tractability are identified based on the topological properties of the underlying interaction graphs and on suitable algebraic properties of the given worth functions. Finally, stability issues over valuation structures are studied too, by considering the core as the prototypical solution concept.

      PubDate: 2017-04-26T11:43:34Z
      DOI: 10.1016/j.artint.2017.04.005
       
  • Thick Set Inversion
    • Authors: B. Desrochers; L. Jaulin
      Abstract: Publication date: Available online 20 April 2017
      Source:Artificial Intelligence
      Author(s): B. Desrochers, L. Jaulin
      This paper deals with the set inversion problem X = f⁻¹(Y) in the case where the function f : ℝⁿ → ℝᵐ and the set Y are both uncertain. The uncertainty is treated under the form of intervals. More precisely, for all x, f(x) is inside the box [f](x), and the uncertain set Y is bracketed between an inner set Y⊂ and an outer set Y⊃. The introduction of new tools such as thick intervals and thick boxes will allow us to propose an efficient algorithm that handles the uncertainty of sets in an elegant and efficient manner. Some elementary test-cases that cannot be handled easily and properly by existing methods show the efficiency of the approach.

      PubDate: 2017-04-26T11:43:34Z
      DOI: 10.1016/j.artint.2017.04.004
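
The classical (non-thick) counterpart of this problem is solved by SIVIA-style interval bisection: a box is kept if its image under an inclusion function lies inside Y, discarded if the image misses Y, and bisected otherwise. A one-dimensional sketch of that classical baseline (the paper's thick intervals layer uncertainty in f and Y on top of this):

```python
def sivia(f_incl, box, y, eps):
    """1-D SIVIA set inversion: f_incl(a, b) returns an interval enclosing
    f([a, b]); y = (y_lo, y_hi) is the target set. Returns boxes proved
    inside f^-1(y) and undecided boundary boxes of width < eps."""
    inside, boundary = [], []
    stack = [box]
    while stack:
        a, b = stack.pop()
        lo, hi = f_incl(a, b)
        if y[0] <= lo and hi <= y[1]:
            inside.append((a, b))          # image entirely inside y
        elif hi < y[0] or lo > y[1]:
            pass                           # image misses y: discard
        elif b - a < eps:
            boundary.append((a, b))        # too small to decide
        else:
            m = (a + b) / 2                # bisect and recurse
            stack += [(a, m), (m, b)]
    return inside, boundary
```

For example, inverting f(x) = x² over Y = [1, 4] recovers the two components [-2, -1] and [1, 2] up to eps-thin boundary boxes.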
       
  • From model checking to equilibrium checking: Reactive modules for rational
           verification
    • Authors: Julian Gutierrez; Paul Harrenstein; Michael Wooldridge
      Abstract: Publication date: Available online 12 April 2017
      Source:Artificial Intelligence
      Author(s): Julian Gutierrez, Paul Harrenstein, Michael Wooldridge
      Model checking is the best-known and most successful approach to formally verifying that systems satisfy specifications, expressed as temporal logic formulae. In this article, we develop the theory of equilibrium checking, a related but distinct problem. Equilibrium checking is relevant for multi-agent systems in which system components (agents) are assumed to be acting rationally in pursuit of delegated goals, and is concerned with understanding what temporal properties hold of such systems under the assumption that agents select strategies in equilibrium. The formal framework we use to study this problem assumes agents are modelled using Reactive Modules, a system modelling language that is used in a range of practical model checking systems. Each agent (or player) in a Reactive Modules game is specified as a nondeterministic guarded command program, and each player's goal is specified with a temporal logic formula that the player desires to see satisfied. A strategy for a player in a Reactive Modules game defines how that player selects enabled guarded commands for execution over successive rounds of the game. For this general setting, we investigate games in which players have goals specified in Linear Temporal Logic (in which case it is assumed that players choose deterministic strategies) and in Computation Tree Logic (in which case players select nondeterministic strategies). For each of these cases, after formally defining the game setting, we characterise the complexity of a range of problems relating to Nash equilibria (e.g., the computation or the verification of existence of a Nash equilibrium or checking whether a given temporal formula is satisfied on some Nash equilibrium). We then go on to show how the model we present can be used to encode, for example, games in which the choices available to players are specified using STRIPS planning operators.

      PubDate: 2017-04-19T11:34:46Z
      DOI: 10.1016/j.artint.2017.04.003
       
  • On the Logical Properties of the Nonmonotonic Description Logic DLN
    • Authors: P.A. Bonatti; L. Sauro
      Abstract: Publication date: Available online 7 April 2017
      Source:Artificial Intelligence
      Author(s): P.A. Bonatti, L. Sauro
      DLN is a recent nonmonotonic description logic, designed for satisfying independently proposed knowledge engineering requirements, and for removing some recurrent drawbacks of traditional nonmonotonic semantics. In this paper we study the logical properties of DLN and illustrate some of the relationships between the KLM postulates and the characteristic features of DLN, including its novel way of dealing with unresolved conflicts between defeasible axioms. Moreover, we fix a problem affecting the original semantics of DLN and accordingly adapt the reduction from DLN inferences to classical inferences. Throughout the paper, we use various versions of the KLM postulates to deepen the comparison with related work, and illustrate the different tradeoffs between opposite requirements adopted by each approach.

      PubDate: 2017-04-12T13:56:02Z
      DOI: 10.1016/j.artint.2017.04.001
       
  • On the Complexity of the Partner Units Decision Problem
    • Authors: Erich Christian Teppan
      Abstract: Publication date: Available online 6 April 2017
      Source:Artificial Intelligence
      Author(s): Erich Christian Teppan
      The partner units problem is an acknowledged hard benchmark problem for the logic programming community with various industrial application fields like CCTV surveillance or railway safety systems. Whereas many complexity results exist for the optimization version of the problem, complexity for the decision variant, which from a practical point of view is more important, is widely unknown. In this article we show that the partner units decision problem is NP-complete in general and also for various subproblems of industrial importance.

      PubDate: 2017-04-12T13:56:02Z
      DOI: 10.1016/j.artint.2017.04.002
       
  • Commonsense reasoning about containers using radically incomplete
           information
    • Authors: Ernest Davis; Gary Marcus; Noah Frazier-Logue
      Abstract: Publication date: Available online 4 April 2017
      Source:Artificial Intelligence
      Author(s): Ernest Davis, Gary Marcus, Noah Frazier-Logue
      In physical reasoning, humans are often able to carry out useful reasoning based on radically incomplete information. One physical domain that is ubiquitous both in everyday interactions and in many kinds of scientific applications, where reasoning from incomplete information is very common, is the interaction of containers and their contents. We have developed a preliminary knowledge base for qualitative reasoning about containers, expressed in a sorted first-order language of time, geometry, objects, histories, and actions. We have demonstrated that the knowledge suffices to justify a number of commonsense physical inferences, based on very incomplete knowledge.

      PubDate: 2017-04-05T13:11:40Z
      DOI: 10.1016/j.artint.2017.03.004
       
  • How many diagnoses do we need?
    • Authors: Roni Stern; Meir Kalech; Shelly Rogov; Alexander Feldman
      Abstract: Publication date: Available online 8 March 2017
      Source:Artificial Intelligence
      Author(s): Roni Stern, Meir Kalech, Shelly Rogov, Alexander Feldman
      A known limitation of many diagnosis algorithms is that the number of diagnoses they return can be very large. This is both time consuming and not very helpful from the perspective of a human operator: presenting hundreds of diagnoses to a human operator (charged with repairing the system) is meaningless. In various settings, including decision support for a human operator and automated troubleshooting processes, it is sufficient to be able to answer a basic diagnostic question: is a given component faulty? We propose a way to aggregate an arbitrarily large set of diagnoses to return an estimate of the likelihood of a given component to be faulty. The resulting mapping of components to their likelihood of being faulty is called the system's health state. We propose two metrics for evaluating the accuracy of a health state and show that an accurate health state can be found without finding all diagnoses. An empirical study explores the question of how many diagnoses are needed to obtain an accurate enough health state, and an online stopping criterion is proposed.

      PubDate: 2017-03-10T01:41:18Z
      DOI: 10.1016/j.artint.2017.03.002
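
As an illustration of the aggregation idea, one natural estimator (assumed details, not necessarily the authors' exact definition) weights each diagnosis by the product of the prior fault probabilities of its components, normalizes the weights, and scores each component by the total weight of the diagnoses containing it:

```python
def health_state(diagnoses, prior):
    """Aggregate a list of diagnoses (each a set of faulty components)
    into a component -> fault-likelihood map. Weights are products of
    per-component priors (an illustrative assumption), normalized over
    the diagnoses seen so far."""
    weights = []
    for d in diagnoses:
        w = 1.0
        for c in d:
            w *= prior[c]       # product of prior fault probabilities
        weights.append(w)
    z = sum(weights)
    comps = {c for d in diagnoses for c in d}
    return {c: sum(w for d, w in zip(diagnoses, weights) if c in d) / z
            for c in comps}
```

Feeding this estimator more diagnoses refines the health state, which is exactly the setting where an online stopping criterion becomes useful.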
       
  • Efficiency and complexity of price competition among single-product
           vendors
    • Authors: Ioannis Caragiannis; Xenophon Chatzigeorgiou; Panagiotis Kanellopoulos; George A. Krimpas; Nikos Protopapas; Alexandros A. Voudouris
      Abstract: Publication date: Available online 8 March 2017
      Source:Artificial Intelligence
      Author(s): Ioannis Caragiannis, Xenophon Chatzigeorgiou, Panagiotis Kanellopoulos, George A. Krimpas, Nikos Protopapas, Alexandros A. Voudouris
      Motivated by recent progress on pricing in the AI literature, we study marketplaces that contain multiple vendors offering identical or similar products and unit-demand buyers with different valuations on these vendors. The objective of each vendor is to set the price of its product to a fixed value so that its profit is maximized. The profit depends on the vendor's price itself and the total volume of buyers that find the particular price more attractive than the price of the vendor's competitors. We model the behaviour of buyers and vendors as a two-stage full-information game and study a series of questions related to the existence, efficiency (price of anarchy) and computational complexity of equilibria in this game. To overcome situations where equilibria do not exist or exist but are highly inefficient, we consider the scenario where some of the vendors are subsidized in order to keep prices low and buyers highly satisfied.

      PubDate: 2017-03-10T01:41:18Z
      DOI: 10.1016/j.artint.2017.03.001
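
Small discretized instances of such full-information games can be probed with a brute-force check for pure Nash equilibria; the generic sketch below (not the paper's analysis) tests every strategy profile against all unilateral deviations:

```python
import itertools

def pure_nash(strategies, payoff):
    """Enumerate pure Nash equilibria of a finite game. strategies[i] is
    player i's strategy set; payoff(i, profile) is i's payoff. A profile
    is an equilibrium iff no player gains by a unilateral deviation."""
    eqs = []
    for prof in itertools.product(*strategies):
        if all(payoff(i, prof) >= payoff(i, prof[:i] + (s,) + prof[i + 1:])
               for i in range(len(strategies)) for s in strategies[i]):
            eqs.append(prof)
    return eqs
```

An empty result on a given instance mirrors the non-existence situations the abstract mentions, where subsidies are introduced to restore equilibria.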
       
  • Characterizing causal action theories and their implementations in answer
           set programming
    • Authors: Haodi Zhang; Fangzhen Lin
      Abstract: Publication date: Available online 2 March 2017
      Source:Artificial Intelligence
      Author(s): Haodi Zhang, Fangzhen Lin
      We consider a simple language for writing causal action theories, and postulate several properties for the state transition models of these theories. We then consider some possible embeddings of these causal action theories in some other action formalisms, and their implementations in logic programs with answer set semantics. In particular, we propose to consider what we call permissible translations from these causal action theories to logic programs. We identify two sets of properties, and prove that for each set, there is only one permissible translation, under strong equivalence, that can satisfy all properties in the set. We also show that these two sets of conditions are minimal in that removing any condition from each of them will result in multiple permissible mappings. Furthermore, as it turns out, for one set, the unique permissible translation is essentially the same as Balduccini and Gelfond's translation from Gelfond and Lifschitz's action language B to logic programs. For the other, it is essentially the same as Lifschitz and Turner's translation from the action language C to logic programs. This work provides a new perspective on understanding, evaluating and comparing action languages by using sets of properties instead of examples. The results in this paper provide a characterization of two representative action languages B and C in terms of permissible mappings from our causal action theories to logic programs. It will be interesting to see if other action languages can be similarly characterized, and whether new action formalisms can be defined using different sets of properties.

      PubDate: 2017-03-04T14:40:36Z
      DOI: 10.1016/j.artint.2017.02.008
       
 
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
 

JournalTOCs © 2009-2016