Followed Journals
American Economic Journal: Applied Economics
American Journal of Occupational Therapy
Anaesthesia
Analytical Biochemistry
Animal Behaviour
Archives and Museum Informatics
Australian Library Journal, The
Aviation Week and Space Technology
Behavioural and Cognitive Psychotherapy
Bibliothek Forschung und Praxis
Biochemistry
BJOG: An International Journal of Obstetrics and Gynaecology
British Journal of Educational Studies
British Journal of Nursing
Canadian Society of Forensic Science Journal
Cataloging
Common Market Law Review
Comparative Political Studies
Crime Prevention and Community Safety
Diabetes
Ecology
European Journal of Pharmaceutical Sciences
IEEE Aerospace and Electronic Systems Magazine
IEEE Software
IEEE Spectrum
Information Processing
International Journal of Digital Curation
JGR Space Physics
Journal of Applied Mechanics
Journal of Composite Materials
Journal of Guidance, Control, and Dynamics
Journal of Information
Journal of Police and Criminal Psychology
Journal of Political Economy
Journal of the Medical Library Association
Library
Machine Design
Natural Hazards
Nature Reviews: Molecular Cell Biology
Personality and Social Psychology Bulletin
Physical Review Letters
PLoS Computational Biology
Police Practice and Research: An International Journal
Policing: An International Journal of Police Strategies
Proceedings of the National Academy of Sciences
Propellants, Explosives, Pyrotechnics
Research Library Issues
The Cambridge Law Journal
The Journal of Bone and Joint Surgery
The Physics Teacher
The Police Journal
The Reference Librarian
Theory, Culture
Artificial Intelligence
Journal Prestige (SJR): 0.88
Citation Impact (CiteScore): 4
Number of Followers: 246  
 
  Hybrid journal (it can contain Open Access articles)
ISSN (Print) 0004-3702 - ISSN (Online) 0004-3702
Published by Elsevier
  • How do fairness definitions fare? Testing public attitudes towards three algorithmic definitions of fairness in loan allocations
    • Abstract: Publication date: Available online 20 February 2020. Source: Artificial Intelligence. Author(s): Nripsuta Ani Saxena, Karen Huang, Evan DeFilippis, Goran Radanovic, David C. Parkes, Yang Liu. What is the best way to define algorithmic fairness? While many definitions of fairness have been proposed in the computer science literature, there is no clear agreement over a particular definition. In this work, we investigate ordinary people's perceptions of three of these fairness definitions. Across three online experiments, we test which definitions people perceive to be the fairest in the context of loan decisions, and whether fairness perceptions change with the addition of sensitive information (i.e., race or gender of the loan applicants). Overall, one definition (calibrated fairness) tends to be more preferred than the others, and the results also provide support for the principle of affirmative action.
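    As a rough, hypothetical illustration of what group-level fairness checks on loan decisions can look like, the sketch below computes approval rates (demographic parity) and repayment rates among approved applicants (per-group calibration) on made-up data; these are standard textbook criteria, not necessarily the three definitions tested in the paper.

# Hypothetical sketch: two standard group-fairness checks on toy loan data.
# Demographic parity compares approval rates across groups; calibration
# compares repayment rates among approved applicants per group.
from collections import defaultdict

# (group, approved, repaid) -- made-up records, purely illustrative
records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 1), ("B", 0, 0), ("B", 0, 1), ("B", 1, 1),
]

by_group = defaultdict(list)
for group, approved, repaid in records:
    by_group[group].append((approved, repaid))

for group, rows in by_group.items():
    approval_rate = sum(a for a, _ in rows) / len(rows)        # demographic parity
    repaid_if_approved = [r for a, r in rows if a == 1]
    calibration = sum(repaid_if_approved) / max(len(repaid_if_approved), 1)
    print(f"group {group}: approval rate {approval_rate:.2f}, "
          f"repayment rate among approved {calibration:.2f}")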
       
  • Autoepistemic equilibrium logic and epistemic specifications
    • Abstract: Publication date: Available online 19 February 2020. Source: Artificial Intelligence. Author(s): Luis Fariñas del Cerro, Andreas Herzig, Ezgi Iraz Su. Epistemic specifications extend disjunctive answer-set programs by an epistemic modal operator that may occur in the body of rules. Their semantics is in terms of world views, which are sets of answer sets, and the idea is that the epistemic modal operator quantifies over these answer sets. Several such semantics were proposed in the literature. We here propose a new semantics that is based on the logic of here-and-there: we add epistemic modal operators to its language and define epistemic here-and-there models. We then successively define epistemic equilibrium models and autoepistemic equilibrium models. The former are obtained from epistemic here-and-there models in exactly the same way as Pearce's equilibrium models are obtained from here-and-there models, viz. by minimising truth; they provide an epistemic extension of equilibrium logic. The latter are obtained from the former by maximising the set of epistemic possibilities, and they provide a new semantics for Gelfond's epistemic specifications. For both semantics we establish a strong equivalence result: we characterise strong equivalence of two epistemic programs by means of logical equivalence in epistemic here-and-there logic. We finally compare our approach to the existing semantics of epistemic specifications and discuss which formalisms provide more intuitive results by pointing out some formal properties a semantics proposal should satisfy.
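    For background on the construction the abstract builds on, the standard (non-epistemic) here-and-there satisfaction clauses and Pearce's equilibrium models can be summarised as follows; this is textbook material, not the epistemic extension introduced in the paper.

% Standard here-and-there (HT) semantics and Pearce's equilibrium models
% (background only; the paper adds epistemic modal operators on top of this).
\begin{align*}
&\langle H,T\rangle \models p &&\text{iff } p \in H \quad (H \subseteq T)\\
&\langle H,T\rangle \models \varphi \wedge \psi &&\text{iff } \langle H,T\rangle \models \varphi \text{ and } \langle H,T\rangle \models \psi\\
&\langle H,T\rangle \models \varphi \vee \psi &&\text{iff } \langle H,T\rangle \models \varphi \text{ or } \langle H,T\rangle \models \psi\\
&\langle H,T\rangle \models \varphi \rightarrow \psi &&\text{iff } (\langle H,T\rangle \not\models \varphi \text{ or } \langle H,T\rangle \models \psi) \text{ and } T \models \varphi \rightarrow \psi \text{ classically}\\
&T \text{ is an equilibrium model of } \varphi &&\text{iff } \langle T,T\rangle \models \varphi \text{ and no } H \subsetneq T \text{ satisfies } \langle H,T\rangle \models \varphi.
\end{align*}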
       
  • Automated construction of bounded-loss imperfect-recall abstractions in extensive-form games
    • Abstract: Publication date: Available online 14 February 2020. Source: Artificial Intelligence. Author(s): Jiří Čermák, Viliam Lisý, Branislav Bošanský. Extensive-form games (EFGs) model finite sequential interactions between players. The amount of memory required to represent these games is the main bottleneck of algorithms for computing optimal strategies, and the size of these strategies is often impractical for real-world applications. A common approach to tackle the memory bottleneck is to use information abstraction that removes parts of the information available to players, thus reducing the number of decision points in the game. However, existing information-abstraction techniques are either specific to a particular domain, do not provide any quality guarantees, or are applicable only to very small subclasses of EFGs. We present domain-independent abstraction methods for creating imperfect-recall abstractions in extensive-form games that allow computing strategies that are (near) optimal in the original game. To this end, we introduce two novel algorithms, FPIRA and CFR+IRA, based on fictitious play and counterfactual regret minimization. These algorithms can start with an arbitrary domain-specific, or the coarsest possible, abstraction of the original game. The algorithms iteratively detect the missing information they require for computing a strategy for the abstract game that is (near) optimal in the original game. This information is then included back into the abstract game. Moreover, our algorithms are able to exploit imperfect-recall abstractions that allow players to forget even the history of their own actions. However, the algorithms require traversing the complete unabstracted game tree. We experimentally show that our algorithms can closely approximate the Nash equilibrium of large games using abstractions with as little as 0.9% of the information sets of the original game. Moreover, the results suggest that memory savings increase with the increasing size of the original games.
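    The paper's FPIRA and CFR+IRA algorithms build on fictitious play and counterfactual regret minimization. The toy sketch below shows only the regret-matching self-play that underlies CFR-style methods, on a single rock-paper-scissors matrix game; it is purely illustrative and not the abstraction-refining algorithms themselves.

# Toy regret-matching self-play on rock-paper-scissors: the basic building
# block behind counterfactual regret minimization (CFR). Illustrative only;
# this is not the FPIRA or CFR+IRA algorithm from the paper.
ACTIONS = 3  # rock, paper, scissors
# PAYOFF[i][j]: payoff to the row player for action i against action j
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

def strategy_from_regrets(regrets):
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

regrets = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]   # bias player 0 so the dynamics are visible
strategy_sums = [[0.0] * ACTIONS for _ in range(2)]

for _ in range(10000):
    strategies = [strategy_from_regrets(r) for r in regrets]
    for p in range(2):
        opponent = strategies[1 - p]
        # expected payoff of each pure action against the opponent's current strategy
        action_values = [sum(opponent[j] * PAYOFF[a][j] for j in range(ACTIONS))
                         for a in range(ACTIONS)]
        node_value = sum(strategies[p][a] * action_values[a] for a in range(ACTIONS))
        for a in range(ACTIONS):
            regrets[p][a] += action_values[a] - node_value     # regret update
            strategy_sums[p][a] += strategies[p][a]

average = [s / sum(strategy_sums[0]) for s in strategy_sums[0]]
print("player 0 average strategy:", [round(x, 3) for x in average])  # roughly [1/3, 1/3, 1/3]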
       
  • Robust Learning with Imperfect Privileged Information
    • Abstract: Publication date: Available online 12 February 2020. Source: Artificial Intelligence. Author(s): Xue Li, Bo Du, Chang Xu, Yipeng Zhang, Lefei Zhang, Dacheng Tao. In the learning using privileged information (LUPI) paradigm, example data cannot always be clean, and the gathered privileged information can be imperfect in practice. Here, imperfect privileged information can refer to auxiliary information that is not always accurate or is perturbed by noise, or alternatively to incomplete privileged information, where privileged information is available only for part of the training data. Because of the lack of clear strategies for handling noise in example data and imperfect privileged information, existing LUPI methods may encounter serious issues. Accordingly, in this paper we propose a Robust SVM+ method to tackle imperfect data in LUPI. In order to make the SVM+ model robust to noise in example data and privileged information, Robust SVM+ maximizes the lower bound of the perturbations that may influence the judgement, based on a rigorous theoretical analysis. Moreover, in order to deal with incomplete privileged information, we use the available privileged information to approximate the missing privileged information of the training data. The optimization problem of the proposed method can be efficiently solved by employing a two-step alternating optimization strategy, based on iteratively deploying off-the-shelf quadratic programming solvers and the alternating direction method of multipliers (ADMM) technique. Comprehensive experiments on real-world datasets demonstrate the effectiveness of the proposed Robust SVM+ method in handling imperfect privileged information.
       
  • Rethinking epistemic logic with belief bases
    • Abstract: Publication date: Available online 10 February 2020. Source: Artificial Intelligence. Author(s): Emiliano Lorini. We introduce a new semantics for a family of logics of explicit and implicit belief based on the concept of multi-agent belief base. Differently from standard semantics for epistemic logic, in which the notions of possible world and doxastic/epistemic alternative are primitive, in our semantics they are non-primitive but are computed from the concept of belief base. We provide complete axiomatizations and prove decidability for our logics via finite model arguments. Furthermore, we provide polynomial embeddings of our logics into Fagin & Halpern's logic of general awareness and establish complexity results via the embeddings. We also present variants of the logics incorporating different forms of epistemic introspection for explicit and/or implicit belief and provide complexity results for some of these variants. Finally, we present a number of dynamic extensions of the static framework by informative actions of both public and private type, including public announcement, belief base expansion and forgetting. We illustrate the application potential of the logical framework with the aid of a concrete example taken from the domain of conversational agents.
       
  • Regression and Progression in Stochastic Domains
    • Abstract: Publication date: Available online 30 January 2020. Source: Artificial Intelligence. Author(s): Vaishak Belle, Hector J. Levesque. Reasoning about degrees of belief in uncertain dynamic worlds is fundamental to many applications, such as robotics and planning, where actions modify state properties and sensors provide measurements, both of which are prone to noise. With the exception of limited cases such as Gaussian processes over linear phenomena, belief state evolution can be complex and hard to reason with in a general way, especially when the agent has to deal with categorical assertions, incomplete information such as disjunctive knowledge, as well as probabilistic knowledge. Among the many approaches for reasoning about degrees of belief in the presence of noisy sensing and acting, the logical account proposed by Bacchus, Halpern, and Levesque is perhaps the most expressive, allowing for such belief states to be expressed naturally as constraints. While that proposal is powerful, the task of how to plan effectively is not addressed. In fact, at a more fundamental level, the task of projection, that of reasoning about beliefs effectively after acting and sensing, is left entirely open. To aid planning algorithms, we study the projection problem in this work. In the reasoning about actions literature, there are two main solutions to projection: regression and progression. Both of these have proven enormously useful for the design of logical agents, essentially paving the way for cognitive robotics. Roughly, regression reduces a query about the future to a query about the initial state. Progression, on the other hand, changes the initial state according to the effects of each action and then checks whether the formula holds in the updated state. In this work, we show how both of these generalize in the presence of degrees of belief, noisy acting and sensing. Our results allow for both discrete and continuous probability distributions to be used in the specification of beliefs and dynamics.
       
  • Swarm Intelligence for Self-Organized Clustering
    • Abstract: Publication date: Available online 28 January 2020. Source: Artificial Intelligence. Author(s): Michael C. Thrun, Alfred Ultsch. Algorithms implementing populations of agents which interact with one another and sense their environment may exhibit emergent behavior such as self-organization and swarm intelligence. Here a swarm system, called Databionic swarm (DBS), is introduced which is able to adapt itself to structures of high-dimensional data characterized by distance and/or density-based structures in the data space. By exploiting the interrelations of swarm intelligence, self-organization and emergence, DBS serves as an alternative approach to the optimization of a global objective function in the task of clustering. The swarm omits the usage of a global objective function and is parameter-free because it searches for the Nash equilibrium during its annealing process. To our knowledge, DBS is the first swarm combining these approaches, and its clustering can outperform common clustering methods such as K-means, PAM, single linkage, spectral clustering, model-based clustering, and Ward if no prior knowledge about the data is available. A central problem in clustering is the correct estimation of the number of clusters. This is addressed by a DBS visualization called the topographic map, which allows assessing the number of clusters. It is known that all clustering algorithms construct clusters, no matter whether the data set contains clusters or not. In contrast to most other clustering algorithms, the topographic map identifies that clustering of the data is meaningless if the data contains no (natural) clusters. The performance of DBS is demonstrated on a set of benchmark data constructed to pose difficult clustering problems and in two real-world applications.
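    DBS itself is not sketched here; as a quick, hypothetical illustration of the density-based cluster structure that centroid methods miss (one of the settings in which the abstract claims an advantage), the snippet below compares k-means with a density-based method on scikit-learn's two-moons data.

# Hypothetical illustration (not DBS): density-based cluster structure that a
# centroid method like k-means misses but a density-based method recovers.
from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_moons
from sklearn.metrics import adjusted_rand_score

X, y = make_moons(n_samples=500, noise=0.05, random_state=0)

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
dbscan_labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)

print("k-means ARI:", round(adjusted_rand_score(y, kmeans_labels), 2))  # well below 1
print("DBSCAN  ARI:", round(adjusted_rand_score(y, dbscan_labels), 2))  # close to 1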
       
  • Ethical approaches and autonomous systems
    • Abstract: Publication date: Available online 21 January 2020. Source: Artificial Intelligence. Author(s): T.J.M. Bench-Capon. In this paper we consider how the three main approaches to ethics – deontology, consequentialism and virtue ethics – relate to the implementation of ethical agents. We provide a description of each approach and how agents might be implemented by designers following the different approaches. Although there are numerous examples of agents implemented within the consequentialist and deontological approaches, this is not so for virtue ethics. We therefore propose a novel means of implementing agents within the virtue ethics approach. It is seen that each approach has its own particular strengths and weaknesses when considered as the basis for implementing ethical agents, and that the different approaches are appropriate to different kinds of system.
       
  • Epistemic graphs for representing and reasoning with positive and negative influences of arguments
    • Abstract: Publication date: Available online 13 January 2020. Source: Artificial Intelligence. Author(s): Anthony Hunter, Sylwia Polberg, Matthias Thimm. This paper introduces epistemic graphs as a generalization of the epistemic approach to probabilistic argumentation. In these graphs, an argument can be believed or disbelieved up to a given degree, thus providing a more fine-grained alternative to the standard Dung approaches when it comes to determining the status of a given argument. Furthermore, the flexibility of the epistemic approach allows us both to model the rationale behind the existing semantics and to deviate from them completely when required. Epistemic graphs can model both attack and support as well as relations that are neither support nor attack. The way other arguments influence a given argument is expressed by epistemic constraints that can restrict the belief we have in an argument with a varying degree of specificity. The fact that we can specify the rules under which arguments should be evaluated and can include constraints between unrelated arguments permits the framework to be more context-sensitive. It also allows for better modelling of imperfect agents, which can be important in multi-agent applications.
       
  • Story embedding: Learning distributed representations of stories based on character networks
    • Abstract: Publication date: Available online 13 January 2020. Source: Artificial Intelligence. Author(s): O-Joun Lee, Jason J. Jung. This study aims to learn representations of stories in narrative works (i.e., creative works that contain stories) using fixed-length vectors. Vector representations of stories enable us to compare narrative works regardless of their media or formats. To computationally represent stories, we focus on social networks among characters (character networks). We assume that the structural features of the character networks reflect the characteristics of stories. By extending substructure-based graph embedding models, we propose models to learn distributed representations of character networks in stories. The proposed models consist of three parts: (i) discovering substructures of character networks, (ii) embedding each substructure (Char2Vec), and (iii) learning vector representations of each character network (Story2Vec). We find substructures around each character at multiple scales based on proximity between characters. We suppose that a character's substructures signify its ‘social roles'. Subsequently, a Char2Vec model is designed to embed a social role based on co-occurring social roles. Since character networks are dynamic social networks that temporally evolve, we use temporal changes and adjacency of social roles to determine their co-occurrence. Finally, Story2Vec models predict occurrences of social roles in each story for embedding the story. To predict the occurrences, we apply two approaches: (i) considering temporal changes in social roles, as in the Char2Vec model, and (ii) focusing on the final social roles of each character. We call the embedding model with the first approach ‘flow-oriented Story2Vec'; it can reflect the context and flow of stories if the dynamics of character networks is well understood. The second approach, based on the final states of social roles, emphasizes the denouement of stories, which is an overview of the static structure of the character networks; we call this model ‘denouement-oriented Story2Vec'. In addition, we suggest ‘unified Story2Vec' as a combination of these two models. We evaluate the quality of the vector representations generated by the proposed embedding models using real-world movies.
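    As a rough, hypothetical illustration of the character networks the Char2Vec/Story2Vec pipeline starts from, the sketch below builds a co-occurrence graph from made-up scene-level character lists with networkx; the authors' extraction and weighting scheme will differ.

# Hypothetical sketch: a character co-occurrence network of the kind the
# Char2Vec/Story2Vec pipeline starts from. The scenes below are made up.
from itertools import combinations
import networkx as nx

scenes = [
    ["Alice", "Bob"],
    ["Alice", "Bob", "Carol"],
    ["Bob", "Carol"],
    ["Alice", "Dan"],
]

G = nx.Graph()
for scene in scenes:
    for a, b in combinations(sorted(set(scene)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1          # edge weight = number of shared scenes
        else:
            G.add_edge(a, b, weight=1)

# crude structural features per character (a stand-in for the paper's "social roles")
for node in sorted(G.nodes):
    strength = sum(d["weight"] for _, _, d in G.edges(node, data=True))
    print(node, "degree:", G.degree(node), "weighted degree:", strength)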
       
  • Synchronous bidirectional inference for neural sequence generation
    • Abstract: Publication date: Available online 8 January 2020. Source: Artificial Intelligence. Author(s): Jiajun Zhang, Long Zhou, Yang Zhao, Chengqing Zong. In sequence-to-sequence generation tasks (e.g. machine translation and abstractive summarization), inference is generally performed in a left-to-right manner to produce the result token by token. Neural approaches such as LSTM and self-attention networks are now able to make full use of all the predicted history hypotheses from the left side during inference, but cannot at the same time access any future (right-side) information, and they usually generate unbalanced outputs (e.g. left parts are much more accurate than right ones in Chinese-English translation). In this work, we propose a synchronous bidirectional inference model to generate outputs using both left-to-right and right-to-left decoding simultaneously and interactively. First, we introduce a novel beam search algorithm that facilitates synchronous bidirectional decoding. Then, we present the core approach, which enables left-to-right and right-to-left decoding to interact with each other so as to utilize both the history and future predictions simultaneously during inference. We apply the proposed model to both LSTM and self-attention networks. Furthermore, we propose a novel fine-tuning based parameter optimization algorithm in addition to the simple two-pass strategy. Extensive experiments on machine translation and abstractive summarization demonstrate that our synchronous bidirectional inference model achieves remarkable improvements over strong baselines.
       
  • Definability for model counting
    • Abstract: Publication date: Available online 7 January 2020. Source: Artificial Intelligence. Author(s): Jean-Marie Lagniez, Emmanuel Lonca, Pierre Marquis. We define and evaluate a new preprocessing technique for propositional model counting. This technique leverages definability, i.e., the ability to determine that some gates are implied by the input formula Σ. Such gates can be exploited to simplify Σ without modifying its number of models. Unlike previous techniques based on gate detection and replacement, gates do not need to be made explicit in our approach. Our preprocessing technique thus consists of two phases: computing a bipartition 〈I,O〉 of the variables of Σ, where the variables from O are defined in Σ in terms of I, then eliminating some variables of O in Σ. Our experiments show the computational benefits which can be achieved by taking advantage of our preprocessing technique for model counting.
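    The key invariant behind the preprocessing is that a variable defined in terms of the others (a gate) can be eliminated without changing the model count. The brute-force toy below checks that invariant on a hypothetical formula with one gate; it is not the authors' preprocessor.

# Toy check of the invariant behind the preprocessing: forgetting a variable
# that is defined in terms of the others (a gate) leaves the model count
# unchanged. Brute force over assignments; not the authors' preprocessor.
from itertools import product

def count_models(variables, formula):
    return sum(1 for bits in product([False, True], repeat=len(variables))
               if formula(dict(zip(variables, bits))))

# Hypothetical Sigma over x, y, g with the gate g defined as (x and y), plus (x or y).
def sigma(m):
    return (m["g"] == (m["x"] and m["y"])) and (m["x"] or m["y"])

# The same constraint with the defined variable g substituted away.
def sigma_without_g(m):
    return m["x"] or m["y"]

print(count_models(["x", "y", "g"], sigma))        # 3
print(count_models(["x", "y"], sigma_without_g))   # 3 -- same count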
       
 
 

