
Publisher: IBM   (Total: 1 journal)

IBM Journal of Research and Development
Journal Prestige (SJR): 0.275
Citation Impact (CiteScore): 1
Number of Followers: 18  
 
  Hybrid Journal (it can contain Open Access articles)
ISSN (Print): 0018-8646
Published by IBM
  • Preface: AI Ethics
    • Pages: 1 - 1
      PubDate: July-Sept. 1 2019
      Issue No: Vol. 63, No. 4/5 (2019)
       
  • Using multi-armed bandits to learn ethical priorities for online AI systems
    • Authors: A. Balakrishnan; D. Bouneffouf; N. Mattei; F. Rossi
      Pages: 1:1 - 1:13
      Abstract: AI systems that learn through reward feedback about the actions they take are deployed in domains that have significant impact on our daily life. However, in many cases the online rewards should not be the only guiding criteria, as there are additional constraints and/or priorities imposed by regulations, values, preferences, or ethical principles. We detail a novel online agent that learns a set of behavioral constraints by observation and uses these learned constraints when making decisions in an online setting, while still being reactive to reward feedback. We propose a novel extension to the contextual multi-armed bandit setting and provide a new algorithm called Behavior Constrained Thompson Sampling (BCTS) that allows for online learning while obeying exogenous constraints. Our agent learns a constrained policy that implements observed behavioral constraints demonstrated by a teacher agent, and uses this constrained policy to guide its online exploration and exploitation. We characterize the upper bound on the expected regret of BCTS and provide a case study with real-world data in two application domains. Our experiments show that the designed agent is able to act within the set of behavior constraints without significantly degrading its overall reward performance.
      PubDate: July-Sept. 1 2019
      Issue No: Vol. 63, No. 4/5 (2019)
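
The behavior-constrained bandit described above can be pictured with a minimal Thompson-sampling sketch. This is not the paper's BCTS algorithm: the fixed set of allowed arms stands in for the constraints that BCTS learns from a teacher agent, and the Beta/Bernoulli reward model, arm count, and horizon are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

n_arms = 5
horizon = 2000

# Stand-in for the learned behavioral constraints: in the paper these are
# inferred from a teacher agent's demonstrations; here arms 3 and 4 are
# simply declared off-limits.
allowed = np.array([True, True, True, False, False])

# Beta(1, 1) priors over each arm's Bernoulli reward probability.
successes = np.ones(n_arms)
failures = np.ones(n_arms)

true_reward_prob = np.array([0.30, 0.55, 0.45, 0.90, 0.80])  # toy environment

total_reward = 0.0
for t in range(horizon):
    # Thompson sampling: draw a plausible reward rate for every arm ...
    samples = rng.beta(successes, failures)
    # ... but restrict the choice to arms the constraints permit.
    samples[~allowed] = -np.inf
    arm = int(np.argmax(samples))

    reward = float(rng.random() < true_reward_prob[arm])
    successes[arm] += reward
    failures[arm] += 1.0 - reward
    total_reward += reward

print(f"average reward over {horizon} rounds: {total_reward / horizon:.3f}")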
       
  • Teaching AI agents ethical values using reinforcement learning and policy orchestration
    • Authors: R. Noothigattu; D. Bouneffouf; N. Mattei; R. Chandra; P. Madan; K. R. Varshney; M. Campbell; M. Singh; F. Rossi
      Pages: 2:1 - 2:9
      Abstract: Autonomous cyber-physical agents play an increasingly large role in our lives. To ensure that they behave in ways aligned with the values of society, we must develop techniques that allow these agents to not only maximize their reward in an environment, but also to learn and follow the implicit constraints of society. We detail a novel approach that uses inverse reinforcement learning to learn a set of unspecified constraints from demonstrations and reinforcement learning to learn to maximize environmental rewards. A contextual-bandit-based orchestrator then picks between the two policies: constraint-based and environment reward-based. The contextual bandit orchestrator allows the agent to mix policies in novel ways, taking the best actions from either a reward-maximizing or constrained policy. In addition, the orchestrator is transparent on which policy is being employed at each time step. We test our algorithms using Pac-Man and show that the agent is able to learn to act optimally, act within the demonstrated constraints, and mix these two functions in complex ways.
      PubDate: July-Sept. 1 2019
      Issue No: Vol. 63, No. 4/5 (2019)
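
The policy-orchestration idea above (a contextual bandit arbitrating, per time step, between a reward-maximizing policy and a constraint-respecting policy learned from demonstrations) can be sketched roughly as follows. The two stub policies, the toy environment, and the epsilon-greedy arbiter are placeholders, not the paper's IRL-based components or its Pac-Man setup.

import random

random.seed(1)

def reward_policy(state):
    # Placeholder for a policy trained purely on environment reward.
    return "aggressive_action"

def constrained_policy(state):
    # Placeholder for a policy learned from demonstrations (IRL in the paper).
    return "safe_action"

def environment_step(state, action):
    # Toy environment: the safe action earns slightly less reward on average.
    return random.gauss(1.0 if action == "aggressive_action" else 0.8, 0.1)

policies = [reward_policy, constrained_policy]
value = [0.0, 0.0]   # running return estimate per policy
counts = [0, 0]
epsilon = 0.1
state = None

for t in range(1000):
    # Orchestrator: epsilon-greedy choice between the two policies; the
    # choice itself is recorded, so it is transparent which policy acted.
    if random.random() < epsilon:
        k = random.randrange(2)
    else:
        k = max(range(2), key=lambda i: value[i])

    action = policies[k](state)
    r = environment_step(state, action)

    counts[k] += 1
    value[k] += (r - value[k]) / counts[k]  # incremental mean update

print("times each policy was used:", counts)
print("estimated value per policy:", [round(v, 3) for v in value])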
       
  • Fairness GAN: Generating datasets with fairness properties using a generative adversarial network
    • Authors: P. Sattigeri; S. C. Hoffman; V. Chenthamarakshan; K. R. Varshney
      Pages: 3:1 - 3:9
      Abstract: We introduce the Fairness GAN (generative adversarial network), an approach for generating a dataset that is plausibly similar to a given multimedia dataset, but is more fair with respect to protected attributes in decision making. We propose a novel auxiliary classifier GAN that strives for demographic parity or equality of opportunity and show empirical results on several datasets, including the CelebFaces Attributes (CelebA) dataset, the Quick, Draw! dataset, and a dataset of soccer player images and the offenses for which they were called. The proposed formulation is well suited to absorbing unlabeled data; we leverage this to augment the soccer dataset with the much larger CelebA dataset. The methodology tends to improve demographic parity and equality of opportunity while generating plausible images.
      PubDate: July-Sept. 1 2019
      Issue No: Vol. 63, No. 4/5 (2019)
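
The fairness criteria the Fairness GAN targets, demographic parity and equality of opportunity, can be stated concretely as group differences in prediction rates. The sketch below only computes those two quantities on a made-up sample; it is not the GAN itself.

import numpy as np

def demographic_parity_difference(y_pred, group):
    # P(y_hat = 1 | group = 0) - P(y_hat = 1 | group = 1) for binary predictions.
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_difference(y_pred, y_true, group):
    # Difference in true-positive rates between the two groups.
    y_pred, y_true, group = map(np.asarray, (y_pred, y_true, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(0) - tpr(1)

# Made-up predictions for 8 samples; `group` is the protected attribute.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
y_true = [1, 0, 1, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_difference(y_pred, group))        # 0.75 - 0.25 = 0.5
print(equal_opportunity_difference(y_pred, y_true, group)) # 1.0 - 0.5 = 0.5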
       
  • AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias
    • Authors: R. K. E. Bellamy; K. Dey; M. Hind; S. C. Hoffman; S. Houde; K. Kannan; P. Lohia; J. Martino; S. Mehta; A. Mojsilović; S. Nagar; K. Natesan Ramamurthy; J. Richards; D. Saha; P. Sattigeri; M. Singh; K. R. Varshney; Y. Zhang
      Pages: 4:1 - 4:15
      Abstract: Fairness is an increasingly important concern as machine learning models are used to support decision making in high-stakes applications such as mortgage lending, hiring, and prison sentencing. This article introduces a new open-source Python toolkit for algorithmic fairness, AI Fairness 360 (AIF360), released under an Apache v2.0 license (https://github.com/ibm/aif360). The main objectives of this toolkit are to help facilitate the transition of fairness research algorithms for use in an industrial setting and to provide a common framework for fairness researchers to share and evaluate algorithms. The package includes a comprehensive set of fairness metrics for datasets and models, explanations for these metrics, and algorithms to mitigate bias in datasets and models. It also includes an interactive Web experience that provides a gentle introduction to the concepts and capabilities for line-of-business users, and it enables researchers and developers to extend the toolkit with their new algorithms and improvements and to use it for performance benchmarking. A built-in testing infrastructure maintains code quality.
      PubDate: July-Sept. 1 2019
      Issue No: Vol. 63, No. 4/5 (2019)
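
Since the abstract points to the released package (https://github.com/ibm/aif360), a typical usage pattern looks roughly like the sketch below: load a dataset with a protected attribute, compute a dataset-level fairness metric, and apply one of the mitigation algorithms. The toy data frame and group definitions are invented, and exact class and argument names may differ across toolkit versions.

# pip install aif360
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Tiny invented dataset: `sex` is the protected attribute, `hired` the label.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 1, 1, 1, 1, 0],
    "score": [0.2, 0.8, 0.5, 0.9, 0.7, 0.4, 0.6, 0.3],
    "hired": [0, 1, 0, 1, 1, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Dataset-level fairness metric before mitigation.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("disparate impact before:", metric.disparate_impact())

# One of the toolkit's pre-processing mitigation algorithms: reweigh instances.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("disparate impact after: ", metric_transf.disparate_impact())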
       
  • Rating AI systems for bias to promote trustable applications
    • Authors: B. Srivastava; F. Rossi
      Pages: 5:1 - 5:9
      Abstract: New decision-support systems are being built using AI services that draw insights from a large corpus of data and incorporate those insights in human-in-the-loop decision environments. They promise to transform businesses, such as health care, with better, affordable, and timely decisions. However, it would be unreasonable to expect people to trust AI systems out of the box if they have been shown to exhibit discrimination across a variety of data usages: unstructured text, structured data, or images. Thus, AI systems come with certain risks, such as failing to recognize people or objects, introducing errors in their output, and leading to unintended harm. In response, we propose ratings as a way to communicate bias risk and methods to rate AI services for bias in a black-box fashion without accessing the services' training data. Our method is designed not only to work on single services, but also on compositions of services, which is how complex AI applications are built. Thus, the proposed method can be used to rate a composite application, like a chatbot, for the severity of its bias by rating its constituent services and then composing the rating, rather than rating the whole system.
      PubDate: July-Sept. 1 2019
      Issue No: Vol. 63, No. 4/5 (2019)
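
One way to picture the black-box rating idea above is to probe a service with matched input pairs that differ only in a protected attribute, turn the observed disparity into a rating, and compose ratings across constituent services. Everything in the sketch below (the probe, the three-level scale, the worst-constituent composition rule, the toy services) is a hypothetical illustration rather than the authors' method.

from statistics import mean

def probe_bias(service, paired_inputs):
    # Black-box probe: send matched input pairs that differ only in a
    # protected attribute and measure how often the outputs disagree.
    # `service` is any callable; no access to its training data is needed.
    flips = [service(a) != service(b) for a, b in paired_inputs]
    return mean(flips)  # 0.0 = no observed disparity, 1.0 = always differs

def to_rating(score):
    # Hypothetical three-level scale for communicating bias risk.
    return "low" if score < 0.05 else "medium" if score < 0.20 else "high"

def rate_composition(constituent_ratings):
    # Assumption: a composite application inherits its worst constituent's rating.
    order = {"low": 0, "medium": 1, "high": 2}
    return max(constituent_ratings, key=lambda s: order[s])

# Toy constituent service of a hypothetical chatbot: an obviously biased
# sentiment classifier that keys on the sentence's subject pronoun.
sentiment = lambda text: "positive" if text.startswith("he ") else "negative"
pairs = [("she is a great engineer", "he is a great engineer"),
         ("she writes reliable code", "he writes reliable code")]

sentiment_rating = to_rating(probe_bias(sentiment, pairs))
print("sentiment service rating:", sentiment_rating)  # high
print("composite chatbot rating:", rate_composition([sentiment_rating, "low"]))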
       
  • FactSheets: Increasing trust in AI services through supplier's declarations of conformity
    • Authors: M. Arnold; R. K. E. Bellamy; M. Hind; S. Houde; S. Mehta; A. Mojsilović; R. Nair; K. Natesan Ramamurthy; A. Olteanu; D. Piorkowski; D. Reimer; J. Richards; J. Tsay; K. R. Varshney
      Pages: 6:1 - 6:13
      Abstract: Accuracy is an important concern for suppliers of artificial intelligence (AI) services, but considerations beyond accuracy, such as safety (which includes fairness and explainability), security, and provenance, are also critical elements to engender consumers’ trust in a service. Many industries use transparent, standardized, but often not legally required documents called supplier's declarations of conformity (SDoCs) to describe the lineage of a product along with the safety and performance testing it has undergone. SDoCs may be considered multidimensional fact sheets that capture and quantify various aspects of the product and its development to make it worthy of consumers’ trust. In this article, inspired by this practice, we propose FactSheets to help increase trust in AI services. We envision such documents to contain purpose, performance, safety, security, and provenance information to be completed by AI service providers for examination by consumers. We suggest a comprehensive set of declaration items tailored to AI in the Appendix of this article.
      PubDate: July-Sept. 1 2019
      Issue No: Vol. 63, No. 4/5 (2019)
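
A FactSheet is essentially a structured declaration completed by the service provider. As a hedged illustration only, such a declaration might be represented as a small structured document like the one below; the field names and values are approximations, and the article's Appendix defines the actual declaration items.

import json

# Illustrative FactSheet skeleton; not the article's item list.
factsheet = {
    "service": "example-text-classifier",
    "purpose": "Route customer support tickets by topic.",
    "intended_domain": "Customer support; not intended for medical or legal use.",
    "training_data": "Internal ticket corpus, 2016-2018, provenance tracked.",
    "performance": {"accuracy": 0.91, "test_set": "held-out tickets from 2019"},
    "safety": {
        "fairness_checked": True,
        "explainability": "Per-prediction feature attributions available.",
    },
    "security": "Served behind an authenticated API; inputs are not retained.",
    "maintenance": "Retrained quarterly; data drift monitored weekly.",
}

print(json.dumps(factsheet, indent=2))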
       
  • An instrument to evaluate the maturity of bias governance capability in artificial intelligence projects
    • Authors: D. L. Coates; A. Martin
      Pages: 7:1 - 7:15
      Abstract: Artificial intelligence (AI) promises unprecedented contributions to both business and society, attracting a surge of interest from many organizations. However, there is evidence that bias is already prevalent in AI datasets and algorithms, which, albeit unintended, is considered to be unethical, suboptimal, unsustainable, and challenging to manage. It is believed that the governance of data and algorithmic bias must be deeply embedded in the values, mindsets, and procedures of AI software development teams, but currently there is a paucity of actionable mechanisms to help. In this paper, we describe a maturity framework based on ethical principles and best practices, which can be used to evaluate an organization's capability to govern bias. We also design, construct, validate, and test an original instrument for operationalizing the framework, which considers both technical and organizational aspects. The instrument has been developed and validated through a two-phase study involving field experts and academics. The framework and instrument are presented for ongoing evolution and utilization.
      PubDate: July-Sept. 1 2019
      Issue No: Vol. 63, No. 4/5 (2019)
       
  • Bridging the gap: Social work insights for ethical algorithmic decision-making in human services
    • Authors: M. Y. Rodriguez; D. DePanfilis; P. Lanier
      Pages: 8:1 - 8:8
      Abstract: Artificial intelligence (AI), when combined with statistical techniques such as predictive analytics, has been increasingly applied in high-stakes decision-making systems seeking to predict and/or classify the risk of clients experiencing negative outcomes while receiving services. One such system is child welfare, where the disproportionate involvement of marginalized and vulnerable children and families raises ethical concerns about building fair and equitable models. One central issue in this debate is the over-representation of risk factors in algorithmic inputs and outputs, as well as the concomitant over-reliance on predicting risk. Would models perform better across groups if variables represented risk and protective factors associated with outcomes of interest? In addition, would models be more equitable across groups if they predicted alternative service outcomes? Using a risk-and-resilience framework applied in the field of social work, and the child welfare system as an illustrative example, this article explores a strengths-based approach to predictive model building. We define risk and protective factors, and then identify and illustrate how protective factors perform in a model trained to predict an alternative outcome of child welfare service involvement: the unsubstantiation of an allegation of maltreatment.
      PubDate: July-Sept. 1 2019
      Issue No: Vol. 63, No. 4/5 (2019)
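
To make the strengths-based modeling idea above concrete, the sketch below fits a simple classifier on hypothetical risk and protective factors to predict an alternative outcome (unsubstantiation). The feature names, coefficients, and synthetic data are entirely made up and are not drawn from any child-welfare dataset or from the article.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical features: two risk factors and two protective factors.
feature_names = ["prior_reports", "housing_instability",
                 "social_support", "service_engagement"]
X = np.column_stack([
    rng.poisson(1.0, n),        # prior_reports (risk)
    rng.integers(0, 2, n),      # housing_instability (risk)
    rng.integers(0, 2, n),      # social_support (protective)
    rng.integers(0, 2, n),      # service_engagement (protective)
])

# Synthetic "unsubstantiation" outcome: protective factors raise its odds,
# risk factors lower them. Purely illustrative.
logit = -0.2 - 0.6 * X[:, 0] - 0.8 * X[:, 1] + 0.9 * X[:, 2] + 0.7 * X[:, 3]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>20}: {coef:+.2f}")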
       
  • HR analytics and ethics
    • Authors: K. Simbeck
      Pages: 9:1 - 9:12
      Abstract: The systematic application of analytical methods on human resources (HR)-related (big) data is referred to as HR analytics or people analytics. Typical problems in HR analytics include the estimation of churn rates, the identification of knowledge and skill in an organization, and the prediction of success on a job. HR analytics, as opposed to the simple use of key performance indicators, is a growing field of interest because of the rapid growth of volume, velocity, and variety of HR data, driven by the digitalization of work processes. Personnel files used to be kept in steel lockers; they are now stored in company systems, along with data from hiring processes, employee satisfaction surveys, e-mails, and process data. With the growing prevalence of HR analytics, a discussion around its ethics needs to occur. The objective of this paper is to discuss the ethical implications of the application of sophisticated analytical methods to questions in HR management. This paper builds on previous literature in algorithmic fairness that focuses on technical options to identify, measure, and reduce discrimination in data analysis. This paper applies to HR analytics the ethical frameworks discussed in other fields including medicine, robotics, learning analytics, and coaching.
      PubDate: July-Sept. 1 2019
      Issue No: Vol. 63, No. 4/5 (2019)
       
 
 