Subjects -> COMPUTER SCIENCE (Total: 2313 journals)
    - ANIMATION AND SIMULATION (33 journals)
    - ARTIFICIAL INTELLIGENCE (133 journals)
    - AUTOMATION AND ROBOTICS (116 journals)
    - CLOUD COMPUTING AND NETWORKS (75 journals)
    - COMPUTER ARCHITECTURE (11 journals)
    - COMPUTER ENGINEERING (12 journals)
    - COMPUTER GAMES (23 journals)
    - COMPUTER PROGRAMMING (25 journals)
    - COMPUTER SCIENCE (1305 journals)
    - COMPUTER SECURITY (59 journals)
    - DATA BASE MANAGEMENT (21 journals)
    - DATA MINING (50 journals)
    - E-BUSINESS (21 journals)
    - E-LEARNING (30 journals)
    - ELECTRONIC DATA PROCESSING (23 journals)
    - IMAGE AND VIDEO PROCESSING (42 journals)
    - INFORMATION SYSTEMS (109 journals)
    - INTERNET (111 journals)
    - SOCIAL WEB (61 journals)
    - SOFTWARE (43 journals)
    - THEORY OF COMPUTING (10 journals)

COMPUTER SCIENCE (1305 journals)

Showing 201 - 400 of 872 Journals sorted alphabetically
Computational Ecology and Software     Open Access   (Followers: 9)
Computational Economics     Hybrid Journal   (Followers: 12)
Computational Geosciences     Hybrid Journal   (Followers: 17)
Computational Linguistics     Open Access   (Followers: 23)
Computational Management Science     Hybrid Journal  
Computational Mathematics and Modeling     Hybrid Journal   (Followers: 8)
Computational Mechanics     Hybrid Journal   (Followers: 11)
Computational Methods and Function Theory     Hybrid Journal  
Computational Molecular Bioscience     Open Access   (Followers: 1)
Computational Optimization and Applications     Hybrid Journal   (Followers: 9)
Computational Particle Mechanics     Hybrid Journal   (Followers: 1)
Computational Science and Techniques     Open Access  
Computational Statistics     Hybrid Journal   (Followers: 15)
Computational Statistics & Data Analysis     Hybrid Journal   (Followers: 35)
Computational Toxicology     Hybrid Journal  
Computer     Full-text available via subscription   (Followers: 141)
Computer Aided Surgery     Open Access   (Followers: 5)
Computer Applications in Engineering Education     Hybrid Journal   (Followers: 6)
Computer Communications     Hybrid Journal   (Followers: 19)
Computer Engineering and Applications Journal     Open Access   (Followers: 8)
Computer Journal     Hybrid Journal   (Followers: 7)
Computer Methods in Applied Mechanics and Engineering     Hybrid Journal   (Followers: 25)
Computer Methods in Biomechanics and Biomedical Engineering     Hybrid Journal   (Followers: 10)
Computer Methods in Biomechanics and Biomedical Engineering : Imaging & Visualization     Hybrid Journal  
Computer Music Journal     Hybrid Journal   (Followers: 18)
Computer Physics Communications     Hybrid Journal   (Followers: 9)
Computer Science - Research and Development     Hybrid Journal   (Followers: 7)
Computer Science and Engineering     Open Access   (Followers: 15)
Computer Science and Information Technology     Open Access   (Followers: 12)
Computer Science Education     Hybrid Journal   (Followers: 15)
Computer Science Journal     Open Access   (Followers: 20)
Computer Science Review     Hybrid Journal   (Followers: 12)
Computer Standards & Interfaces     Hybrid Journal   (Followers: 3)
Computer Supported Cooperative Work (CSCW)     Hybrid Journal   (Followers: 8)
Computer-aided Civil and Infrastructure Engineering     Hybrid Journal   (Followers: 9)
Computer-Aided Design and Applications     Hybrid Journal   (Followers: 6)
Computers     Open Access   (Followers: 2)
Computers & Chemical Engineering     Hybrid Journal   (Followers: 12)
Computers & Education     Hybrid Journal   (Followers: 92)
Computers & Electrical Engineering     Hybrid Journal   (Followers: 8)
Computers & Geosciences     Hybrid Journal   (Followers: 30)
Computers & Mathematics with Applications     Full-text available via subscription   (Followers: 9)
Computers & Structures     Hybrid Journal   (Followers: 43)
Computers & Education Open     Open Access   (Followers: 2)
Computers & Industrial Engineering     Hybrid Journal   (Followers: 6)
Computers and Composition     Hybrid Journal   (Followers: 11)
Computers and Education: Artificial Intelligence     Open Access   (Followers: 3)
Computers and Electronics in Agriculture     Hybrid Journal   (Followers: 7)
Computers and Geotechnics     Hybrid Journal   (Followers: 12)
Computers in Biology and Medicine     Hybrid Journal   (Followers: 11)
Computers in Entertainment     Hybrid Journal  
Computers in Human Behavior Reports     Open Access  
Computers in Industry     Hybrid Journal   (Followers: 7)
Computers in the Schools     Hybrid Journal   (Followers: 8)
Computers, Environment and Urban Systems     Hybrid Journal   (Followers: 11)
Computerworld Magazine     Free   (Followers: 2)
Computing     Hybrid Journal   (Followers: 2)
Computing and Software for Big Science     Hybrid Journal   (Followers: 1)
Computing and Visualization in Science     Hybrid Journal   (Followers: 6)
Computing in Science & Engineering     Full-text available via subscription   (Followers: 31)
Computing Reviews     Full-text available via subscription   (Followers: 1)
Concurrency and Computation: Practice & Experience     Hybrid Journal  
Connection Science     Hybrid Journal  
Control Engineering Practice     Hybrid Journal   (Followers: 46)
Cryptologia     Hybrid Journal   (Followers: 3)
CSI Transactions on ICT     Hybrid Journal   (Followers: 2)
Cuadernos de Documentación Multimedia     Open Access  
Current Science     Open Access   (Followers: 116)
Cyber-Physical Systems     Hybrid Journal  
Cyberspace : Jurnal Pendidikan Teknologi Informasi     Open Access  
DAIMI Report Series     Open Access  
Data     Open Access   (Followers: 4)
Data & Policy     Open Access   (Followers: 3)
Data Science and Engineering     Open Access   (Followers: 6)
Data Technologies and Applications     Hybrid Journal   (Followers: 210)
Data-Centric Engineering     Open Access  
Datenbank-Spektrum     Hybrid Journal   (Followers: 1)
Datenschutz und Datensicherheit - DuD     Hybrid Journal  
Decision Analytics     Open Access   (Followers: 3)
Decision Support Systems     Hybrid Journal   (Followers: 13)
Design Journal : An International Journal for All Aspects of Design     Hybrid Journal   (Followers: 33)
Digital Biomarkers     Open Access   (Followers: 1)
Digital Chemical Engineering     Open Access  
Digital Chinese Medicine     Open Access  
Digital Creativity     Hybrid Journal   (Followers: 11)
Digital Experiences in Mathematics Education     Hybrid Journal   (Followers: 3)
Digital Finance : Smart Data Analytics, Investment Innovation, and Financial Technology     Hybrid Journal   (Followers: 3)
Digital Geography and Society     Open Access  
Digital Government : Research and Practice     Open Access   (Followers: 1)
Digital Health     Open Access   (Followers: 10)
Digital Journalism     Hybrid Journal   (Followers: 7)
Digital Medicine     Open Access   (Followers: 3)
Digital Platform: Information Technologies in Sociocultural Sphere     Open Access   (Followers: 1)
Digital Policy, Regulation and Governance     Hybrid Journal   (Followers: 2)
Digital War     Hybrid Journal   (Followers: 1)
Digitale Welt : Das Wirtschaftsmagazin zur Digitalisierung     Hybrid Journal  
Digitális Bölcsészet / Digital Humanities     Open Access   (Followers: 2)
Disaster Prevention and Management     Hybrid Journal   (Followers: 30)
Discours     Open Access   (Followers: 1)
Discourse & Communication     Hybrid Journal   (Followers: 26)
Discover Internet of Things     Open Access   (Followers: 2)
Discrete and Continuous Models and Applied Computational Science     Open Access  
Discrete Event Dynamic Systems     Hybrid Journal   (Followers: 3)
Discrete Mathematics & Theoretical Computer Science     Open Access   (Followers: 1)
Discrete Optimization     Full-text available via subscription   (Followers: 7)
Displays     Hybrid Journal  
Distributed and Parallel Databases     Hybrid Journal   (Followers: 2)
e-learning and education (eleed)     Open Access   (Followers: 39)
Ecological Indicators     Hybrid Journal   (Followers: 22)
Ecological Informatics     Hybrid Journal   (Followers: 3)
Ecological Management & Restoration     Hybrid Journal   (Followers: 15)
Ecosystems     Hybrid Journal   (Followers: 32)
Edu Komputika Journal     Open Access   (Followers: 1)
Education and Information Technologies     Hybrid Journal   (Followers: 53)
Educational Philosophy and Theory     Hybrid Journal   (Followers: 10)
Educational Psychology in Practice: theory, research and practice in educational psychology     Hybrid Journal   (Followers: 13)
Educational Research and Evaluation: An International Journal on Theory and Practice     Hybrid Journal   (Followers: 7)
Educational Theory     Hybrid Journal   (Followers: 9)
Egyptian Informatics Journal     Open Access   (Followers: 5)
Electronic Commerce Research and Applications     Hybrid Journal   (Followers: 5)
Electronic Design     Partially Free   (Followers: 125)
Electronic Letters on Computer Vision and Image Analysis     Open Access   (Followers: 10)
Elektron     Open Access  
Empirical Software Engineering     Hybrid Journal   (Followers: 8)
Energy for Sustainable Development     Hybrid Journal   (Followers: 13)
Engineering & Technology     Hybrid Journal   (Followers: 22)
Engineering Applications of Computational Fluid Mechanics     Open Access   (Followers: 23)
Engineering Computations     Hybrid Journal   (Followers: 3)
Engineering Economist, The     Hybrid Journal   (Followers: 4)
Engineering Optimization     Hybrid Journal   (Followers: 19)
Engineering With Computers     Hybrid Journal   (Followers: 5)
Enterprise Information Systems     Hybrid Journal   (Followers: 2)
Entertainment Computing     Hybrid Journal   (Followers: 2)
Environmental and Ecological Statistics     Hybrid Journal   (Followers: 7)
Environmental Communication: A Journal of Nature and Culture     Hybrid Journal   (Followers: 16)
EPJ Data Science     Open Access   (Followers: 10)
ESAIM: Control Optimisation and Calculus of Variations     Open Access   (Followers: 2)
Ethics and Information Technology     Hybrid Journal   (Followers: 64)
eTransportation     Open Access   (Followers: 1)
EURO Journal on Computational Optimization     Open Access   (Followers: 5)
EuroCALL Review     Open Access  
European Food Research and Technology     Hybrid Journal   (Followers: 8)
European Journal of Combinatorics     Full-text available via subscription   (Followers: 3)
European Journal of Computational Mechanics     Hybrid Journal   (Followers: 1)
European Journal of Information Systems     Hybrid Journal   (Followers: 85)
European Journal of Law and Technology     Open Access   (Followers: 18)
European Journal of Political Theory     Hybrid Journal   (Followers: 27)
Evolutionary Computation     Hybrid Journal   (Followers: 11)
Fibreculture Journal     Open Access   (Followers: 9)
Finite Fields and Their Applications     Full-text available via subscription   (Followers: 5)
Fixed Point Theory and Applications     Open Access  
Focus on Catalysts     Full-text available via subscription  
Focus on Pigments     Full-text available via subscription   (Followers: 3)
Focus on Powder Coatings     Full-text available via subscription   (Followers: 5)
Forensic Science International: Digital Investigation     Full-text available via subscription   (Followers: 317)
Formal Aspects of Computing     Hybrid Journal   (Followers: 3)
Formal Methods in System Design     Hybrid Journal   (Followers: 6)
Forschung     Hybrid Journal   (Followers: 1)
Foundations and Trends® in Communications and Information Theory     Full-text available via subscription   (Followers: 6)
Foundations and Trends® in Databases     Full-text available via subscription   (Followers: 2)
Foundations and Trends® in Human-Computer Interaction     Full-text available via subscription   (Followers: 5)
Foundations and Trends® in Information Retrieval     Full-text available via subscription   (Followers: 30)
Foundations and Trends® in Networking     Full-text available via subscription   (Followers: 1)
Foundations and Trends® in Signal Processing     Full-text available via subscription   (Followers: 7)
Foundations and Trends® in Theoretical Computer Science     Full-text available via subscription   (Followers: 1)
Foundations of Computational Mathematics     Hybrid Journal  
Foundations of Computing and Decision Sciences     Open Access  
Frontiers in Computational Neuroscience     Open Access   (Followers: 23)
Frontiers in Computer Science     Open Access   (Followers: 1)
Frontiers in Digital Health     Open Access   (Followers: 4)
Frontiers in Digital Humanities     Open Access   (Followers: 7)
Frontiers in ICT     Open Access  
Frontiers in Neuromorphic Engineering     Open Access   (Followers: 2)
Frontiers in Research Metrics and Analytics     Open Access   (Followers: 4)
Frontiers of Computer Science in China     Hybrid Journal   (Followers: 2)
Frontiers of Environmental Science & Engineering     Hybrid Journal   (Followers: 3)
Frontiers of Information Technology & Electronic Engineering     Hybrid Journal  
Fuel Cells Bulletin     Full-text available via subscription   (Followers: 9)
Functional Analysis and Its Applications     Hybrid Journal   (Followers: 3)
Future Computing and Informatics Journal     Open Access  
Future Generation Computer Systems     Hybrid Journal   (Followers: 2)
Geo-spatial Information Science     Open Access   (Followers: 7)
Geoforum Perspektiv     Open Access  
GeoInformatica     Hybrid Journal   (Followers: 7)
Geoinformatics FCE CTU     Open Access   (Followers: 8)
GetMobile : Mobile Computing and Communications     Full-text available via subscription   (Followers: 1)
Government Information Quarterly     Hybrid Journal   (Followers: 28)
Granular Computing     Hybrid Journal  
Graphics and Visual Computing     Open Access  
Grey Room     Hybrid Journal   (Followers: 16)
Group Dynamics : Theory, Research, and Practice     Full-text available via subscription   (Followers: 15)
Groups, Complexity, Cryptology     Open Access   (Followers: 2)
HardwareX     Open Access  
Harvard Data Science Review     Open Access   (Followers: 3)
Health Services Management Research     Hybrid Journal   (Followers: 16)
Healthcare Technology Letters     Open Access  
High Frequency     Hybrid Journal  
High-Confidence Computing     Open Access   (Followers: 1)
Home Cultures     Full-text available via subscription   (Followers: 7)
Home Health Care Management & Practice     Hybrid Journal   (Followers: 1)

Ethics and Information Technology
Journal Prestige (SJR): 0.512
Citation Impact (CiteScore): 2
Number of Followers: 64
Hybrid Journal (it can contain Open Access articles)
ISSN (Print) 1388-1957 - ISSN (Online) 1572-8439
Published by Springer-Verlag  [2469 journals]
  • Putting explainable AI in context: institutional explanations for medical AI

      Abstract: There is a current debate about whether, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest the high accuracy and reliability of the systems is sufficient for providing epistemically justified beliefs without the need for explaining each individual decision. But, as we show, both solutions have limitations—and it is unclear whether either addresses the epistemic worries of the medical professionals using these systems. We argue these systems do require an explanation, but an institutional explanation. These types of explanations provide the reasons why the medical professional should rely on the system in practice—that is, they focus on trying to address the epistemic concerns of those using the system in specific contexts and specific occasions. But ensuring that these institutional explanations are fit for purpose means ensuring the institutions designing and deploying these systems are transparent about the assumptions baked into the system. This requires coordination with experts and end-users concerning how it will function in the field, the metrics used to evaluate its accuracy, and the procedures for auditing the system to prevent biases and failures from going unaddressed. We contend this broader explanation is necessary for either post hoc explanations or accuracy scores to be epistemically meaningful to the medical professional, making it possible for them to rely on these systems as effective and useful tools in their practices.
      PubDate: 2022-05-06
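      The post hoc, per-decision explanations debated above are typically produced by model-agnostic attribution methods. As a purely illustrative aside (not drawn from the paper), the sketch below shows one simple variant: each feature of a single case is replaced by its training-set mean and the resulting change in the model's predicted probability is reported as that feature's contribution. The data, feature names, and model are hypothetical.

```python
# Illustrative sketch of a perturbation-based post hoc explanation for a
# single prediction. Data, feature names, and model are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["age", "blood_pressure", "lab_marker"]          # made up
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def explain_case(model, X_train, case):
    """Attribute a single prediction by replacing each feature with its
    training-set mean and measuring the change in predicted probability."""
    baseline = model.predict_proba(case.reshape(1, -1))[0, 1]
    contributions = {}
    for j, name in enumerate(feature_names):
        perturbed = case.copy()
        perturbed[j] = X_train[:, j].mean()
        p = model.predict_proba(perturbed.reshape(1, -1))[0, 1]
        contributions[name] = baseline - p    # >0: feature pushed the score up
    return baseline, contributions

score, attributions = explain_case(model, X, X[0])
print(f"predicted probability: {score:.2f}")
for name, delta in sorted(attributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:15s} {delta:+.3f}")
```

      Whether such a per-decision attribution actually addresses the epistemic worries of practitioners is precisely what the paper disputes.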
       
  • Disability, fairness, and algorithmic bias in AI recruitment

      Abstract: While rapid advances in artificial intelligence (AI) hiring tools promise to transform the workplace, these algorithms risk exacerbating existing biases against marginalized groups. In light of these ethical issues, AI vendors have sought to translate normative concepts such as fairness into measurable, mathematical criteria that can be optimized for. However, questions of disability and access often are omitted from these ongoing discussions about algorithmic bias. In this paper, I argue that the multiplicity of different kinds and intensities of people’s disabilities and the fluid, contextual ways in which they manifest point to the limits of algorithmic fairness initiatives. In particular, existing de-biasing measures tend to flatten variance within and among disabled people and abstract away information in ways that reinforce pathologization. While fair machine learning methods can help mitigate certain disparities, I argue that fairness alone is insufficient to secure accessible, inclusive AI. I then outline a disability justice approach, which provides a framework for centering disabled people’s experiences and attending to the structures and norms that underpin algorithmic bias.
      PubDate: 2022-04-19
       
  • Epistemo-ethical constraints on AI-human decision making for diagnostic purposes

      Abstract: This paper approaches the interaction of a health professional with an AI system for diagnostic purposes as a hybrid decision making process and conceptualizes epistemo-ethical constraints on this process. We argue for the importance of the understanding of the underlying machine epistemology in order to raise awareness of and facilitate realistic expectations from AI as a decision support system, both among healthcare professionals and the potential benefiters (patients). Understanding the epistemic abilities and limitations of such systems is essential if we are to integrate AI into the decision making processes in a way that takes into account its applicability boundaries. This will help to mitigate potential harm due to misjudgments and, as a result, to raise trust in the AI system—understood here as a belief in its reliability. We aim at a minimal requirement for AI meta-explanation which should distinguish machine epistemic processes from similar processes in human epistemology in order to avoid confusion and error in judgment and application. An informed approach to the integration of AI systems into the decision making for diagnostic purposes is crucial given its high impact on health and well-being of patients.
      PubDate: 2022-04-19
       
  • The video gamer’s dilemmas

      Abstract: The gamer’s dilemma offers three plausible but jointly inconsistent premises: (1) Virtual murder in video games is morally permissible. (2) Virtual paedophilia in video games is not morally permissible. (3) There is no morally relevant difference between virtual murder and virtual paedophilia in video games. In this paper I argue that the gamer’s dilemma can be understood as one of three distinct dilemmas, depending on how we understand two key ideas in Morgan Luck’s (2009) original formulation. The two ideas are those of (1) occurring in a video game and (2) being a virtual instance of murder or paedophilia. Depending on the weight placed on the gaming context, the dilemma is either about in-game acts or virtual acts. And depending on the type of virtual acts we have in mind, the dilemma is either about virtual representations or virtual partial reproductions of murder and paedophilia. This gives us three dilemmas worth resolving: a gaming dilemma, a representation dilemma, and a simulation dilemma. I argue that these dilemmas are about different issues, apply to different cases, and are susceptible to different solutions. I also consider how different participants in the debate have interpreted the dilemma in one or more of these three ways.
      PubDate: 2022-04-06
       
  • Relative explainability and double standards in medical decision-making

      Abstract: The increased presence of medical AI in clinical use raises the ethical question which standard of explainability is required for an acceptable and responsible implementation of AI-based applications in medical contexts. In this paper, we elaborate on the emerging debate surrounding the standards of explainability for medical AI. For this, we first distinguish several goods explainability is usually considered to contribute to the use of AI in general, and medical AI in specific. Second, we propose to understand the value of explainability relative to other available norms of explainable decision-making. Third, in pointing out that we usually accept heuristics and uses of bounded rationality for medical decision-making by physicians, we argue that the explainability of medical decisions should not be measured against an idealized diagnostic process, but according to practical considerations. We conclude, fourth, to resolve the issue of explainability-standards by relocating the issue to the AI’s certifiability and interpretability.
      PubDate: 2022-04-05
       
  • Algorithmic decision-making employing profiling: will trade secrecy protection render the right to explanation toothless?

      Abstract: Algorithmic decision-making based on profiling may significantly affect people’s destinies. As a rule, however, explanations for such decisions are lacking. What are the chances for a “right to explanation” to be realized soon? After an exploration of the regulatory efforts that are currently pushing for such a right it is concluded that, at the moment, the GDPR stands out as the main force to be reckoned with. In cases of profiling, data subjects are granted the right to receive meaningful information about the functionality of the system in use; for fully automated profiling decisions even an explanation has to be given. However, the trade secrets and intellectual property rights (IPRs) involved must be respected as well. These conflicting rights must be balanced against each other; what will be the outcome? Looking back to 1995, when a similar kind of balancing had been decreed in Europe concerning the right of access (DPD), Wachter et al. (2017) find that according to judicial opinion only generalities of the algorithm had to be disclosed, not specific details. This hardly augurs well for a future right of access let alone to explanation. Thereupon the landscape of IPRs for machine learning (ML) is analysed. Spurred by new USPTO guidelines that clarify when inventions are eligible to be patented, the number of patent applications in the US related to ML in general, and to “predictive analytics” in particular, has soared since 2010—and Europe has followed. I conjecture that in such a climate of intensified protection of intellectual property, companies may legitimately claim that the more their application combines several ML assets that, in addition, are useful in multiple sectors, the more value is at stake when confronted with a call for explanation by data subjects. Consequently, the right to explanation may be severely crippled.
      PubDate: 2022-04-05
       
  • Reflection machines: increasing meaningful human control over Decision Support Systems

      Abstract: Rapid developments in Artificial Intelligence are leading to an increasing human reliance on machine decision making. Even in collaborative efforts with Decision Support Systems (DSSs), where a human expert is expected to make the final decisions, it can be hard to keep the expert actively involved throughout the decision process. DSSs suggest their own solutions and thus invite passive decision making. To keep humans actively ‘on’ the decision-making loop and counter overreliance on machines, we propose a ‘reflection machine’ (RM). This system asks users questions about their decision strategy and thereby prompts them to evaluate their own decisions critically. We discuss what forms RMs can take and present a proof-of-concept implementation of an RM that can produce feedback on users’ decisions in the medical and law domains. We show that the prototype requires very little domain knowledge to create reasonably intelligent critiquing questions. With this prototype, we demonstrate the technical feasibility to develop RMs and hope to pave the way for future research into their effectiveness and value.
      PubDate: 2022-03-21
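      As a hypothetical illustration of the idea (not the authors' prototype), a minimal reflection machine can be a rule-based layer that inspects the DSS suggestion and the user's decision and returns critiquing questions; the names, rules, and example values below are assumptions.

```python
# Hypothetical minimal sketch of a "reflection machine": given a decision
# support system's suggestion and the user's chosen decision, return a few
# critiquing questions that prompt the user to re-examine their reasoning.
from dataclasses import dataclass, field

@dataclass
class Case:
    findings: list[str]                      # observations entered by the user
    dss_suggestion: str                      # what the support system proposes
    user_decision: str                       # what the human expert decided
    considered_alternatives: list[str] = field(default_factory=list)

def reflection_questions(case: Case) -> list[str]:
    questions = []
    if case.user_decision == case.dss_suggestion:
        questions.append(
            "You accepted the system's suggestion. Which finding would have to "
            "be absent for you to reject it?")
    else:
        questions.append(
            f"You deviated from the suggestion '{case.dss_suggestion}'. "
            "Which findings outweigh it, and why?")
    if not case.considered_alternatives:
        questions.append("Name one alternative you ruled out and the evidence that ruled it out.")
    if len(case.findings) < 3:
        questions.append("Is the available evidence sufficient, or should more data be collected first?")
    return questions

# Usage example with made-up values
case = Case(findings=["fever", "elevated CRP"],
            dss_suggestion="bacterial infection",
            user_decision="bacterial infection")
for q in reflection_questions(case):
    print("-", q)
```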
       
  • Wisdom in the digital age: a conceptual and practical framework for understanding and cultivating cyber-wisdom

      Abstract: The internet presents not just opportunities but also risks that range, to name a few, from online abuse and misinformation to the polarisation of public debate. Given the increasingly digital nature of our societies, these risks make it essential for users to learn how to wisely use digital technologies as part of a more holistic approach to promoting human flourishing. However, insofar as they are exacerbated by both the affordances and the political economy of the internet, this article argues that a new understanding of wisdom that is germane to the digital age is needed. As a result, we propose a framework for conceptualising what we call cyber-wisdom, and how this can be cultivated via formal education, in ways that are grounded in neo-Aristotelian virtue ethics and that build on three prominent existing models of wisdom. The framework, according to which cyber-wisdom is crucial to navigating online risks and opportunities through the deployment of character virtues necessary for flourishing online, suggests that cyber-wisdom consists of four components: cyber-wisdom literacy, cyber-wisdom reasoning, cyber-wisdom self-reflection, cyber-wisdom motivation. Unlike the models on which it builds, the framework accounts for the specificity of the digital age and is both conceptual and practical. On the one hand, each component has conceptual implications for what it means to be wise in the digital age. On the other hand, informed by character education literature and practice, it has practical implications for how to cultivate cyber-wisdom in the classroom through teaching methods that match its different components.
      PubDate: 2022-03-04
      DOI: 10.1007/s10676-022-09640-3
       
  • Ethical implications of fairness interventions: what might be hidden behind engineering choices?

      Abstract: The importance of fairness in machine learning models is widely acknowledged, and ongoing academic debate revolves around how to determine the appropriate fairness definition, and how to tackle the trade-off between fairness and model performance. In this paper we argue that besides these concerns, there can be ethical implications behind seemingly purely technical choices in fairness interventions in a typical model development pipeline. As an example we show that the technical choice between in-processing and post-processing is not necessarily value-free and may have serious implications in terms of who will be affected by the specific fairness intervention. The paper reveals how assessing the technical choices in terms of their ethical consequences can contribute to the design of fair models and to the related societal discussions.
      PubDate: 2022-02-28
      DOI: 10.1007/s10676-022-09636-z
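      The in-processing versus post-processing distinction the abstract turns on can be made concrete. The sketch below, a hypothetical example with synthetic scores and group labels, shows only the post-processing side: group-specific thresholds are chosen so that selection rates match a target, whereas an in-processing intervention would instead add a fairness term to the training loss. Which of the two is chosen determines who is affected, which is the paper's point.

```python
# Hypothetical sketch of a post-processing fairness intervention: choose
# group-specific thresholds so both groups are selected at the same rate.
# (An in-processing intervention would instead change the training loss.)
import numpy as np

rng = np.random.default_rng(1)
scores = rng.uniform(size=1000)            # scores from some trained model
group = rng.integers(0, 2, size=1000)      # synthetic group membership
scores[group == 1] *= 0.8                  # group 1 is systematically scored lower

target_rate = 0.3                          # desired selection rate per group

def group_threshold(scores_g, rate):
    """Threshold above which `rate` of this group is selected."""
    return np.quantile(scores_g, 1 - rate)

thresholds = {g: group_threshold(scores[group == g], target_rate) for g in (0, 1)}
thr = np.where(group == 0, thresholds[0], thresholds[1])
selected = scores >= thr

for g in (0, 1):
    print(f"group {g}: threshold {thresholds[g]:.2f}, "
          f"selection rate {selected[group == g].mean():.2f}")
```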
       
  • Positive risk balance: a comprehensive framework to ensure vehicle safety

      Abstract: The introduction of automated vehicles promises an increase in traffic safety. Prior to their launch, proof of the anticipated reduction, in the sense of a positive risk balance compared with human driving performance, is required by various stakeholders such as the European Union Commission, the German Ethics Commission, and ISO TR 4804. To meet this requirement and to generate acceptance by the public and the regulatory authorities, a qualitative Risk-Benefit framework has been defined. This framework is based on literature research on approaches applied in other disciplines. This report depicts the framework, adapted from the pharmaceutical sector and called PROACT-URL, which serves as a structured procedure to demonstrate a positive risk balance in an understandable and transparent manner. The qualitative framework needs to be turned into quantitative methods before it can be applied. Therefore, two steps of the framework are discussed in more detail: first, the definition of adequate development thresholds that are required at an early stage of development; second, the simulation-based assessment to prove the positive risk balance prior to market introduction.
      PubDate: 2022-02-28
      DOI: 10.1007/s10676-022-09625-2
       
  • Explanatory pragmatism: a context-sensitive framework for explainable medical AI

      Abstract: Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we seek to address in this paper. We outline a framework, called Explanatory Pragmatism, which we argue has three attractive features. First, it allows us to conceptualise explainability in explicitly context-, audience- and purpose-relative terms, while retaining a unified underlying definition of explainability. Second, it makes visible any normative disagreements that may underpin conflicting claims about explainability regarding the purposes for which explanations are sought. Third, it allows us to distinguish several dimensions of AI explainability. We illustrate this framework by applying it to a case study involving a machine learning model for predicting whether patients suffering disorders of consciousness were likely to recover consciousness.
      PubDate: 2022-02-28
      DOI: 10.1007/s10676-022-09632-3
       
  • The Bitcoin protocol as a system of power

      Abstract: In this study, I use the Critical Realism perspective of power to explain how the Bitcoin protocol operates as a system of power. I trace the ideological underpinnings of the protocol in the Cypherpunk movement to consider how notions of power shaped the protocol. The protocol by design encompasses structures, namely Proof of Work and Trustlessness, that reproduce asymmetrical constraints on the entities that comprise it. These constraining structures generate constraining mechanisms, those of cost effectiveness and deanonymisation, which further restrict participating entities’ ‘power to act’, reinforcing others’ ‘power over’ them. In doing so, I illustrate that the Bitcoin protocol, rather than decentralising and distributing power across a network of numerous anonymous, trustless peers, has instead shifted it from the traditional actors (e.g., state, regulators) to newly emergent ones.
      PubDate: 2022-02-28
      DOI: 10.1007/s10676-022-09626-1
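      For readers unfamiliar with the Proof of Work structure the abstract analyses, the toy sketch below shows the core mechanism: a miner searches for a nonce whose hash of the block data meets a difficulty target, which is cheap to verify but costly to produce. This is an illustration only, not Bitcoin's actual consensus code.

```python
# Toy illustration of Proof of Work: find a nonce such that the SHA-256 hash
# of the block data starts with `difficulty` zero hex digits. Simplified
# sketch only; not Bitcoin's actual implementation.
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Return the first nonce whose hash meets the difficulty target."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("prev_hash|tx1,tx2,tx3", difficulty=4)
print(f"nonce={nonce}  hash={digest}")
# Each extra zero digit multiplies the expected work by 16, while verification
# stays a single hash: the cost asymmetry behind the constraining mechanisms
# the paper describes.
```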
       
  • Ethical responsibility and computational design: bespoke surgical tools as an instructive case study

      Abstract: Computational design uses artificial intelligence (AI) to optimise designs towards user-determined goals. When combined with 3D printing, it is possible to develop and construct physical products in a wide range of geometries and materials, encapsulating a range of functionality, with minimal input from human designers. One potential application is the development of bespoke surgical tools, whereby computational design optimises a tool’s morphology for a specific patient’s anatomy and the requirements of the surgical procedure to improve surgical outcomes. This emerging application of AI and 3D printing provides an opportunity to examine whether new technologies affect the ethical responsibilities of those operating in high-consequence domains such as healthcare. This research draws on stakeholder interviews to identify how a range of different professions involved in the design, production, and adoption of computationally designed surgical tools identify and attribute responsibility within the different stages of a computationally designed tool’s development and deployment. Those interviewed included surgeons and radiologists, fabricators experienced with 3D printing, computational designers, healthcare regulators, bioethicists, and patient advocates. Based on our findings, we identify additional responsibilities that surround the process of creating and using these tools. Additionally, the responsibilities of most professional stakeholders are not limited to individual stages of the tool design and deployment process, and the close collaboration between stakeholders at various stages of the process suggests that collective ethical responsibility may be appropriate in these cases. The role responsibilities of the stakeholders involved in developing the process to create computationally designed tools also change as the technology moves from research and development (R&D) to approved use.
      PubDate: 2022-02-23
      DOI: 10.1007/s10676-022-09641-2
       
  • A Capability Approach to worker dignity under Algorithmic Management

      Abstract: This paper proposes a conceptual framework to study and evaluate the impact of ‘Algorithmic Management’ (AM) on worker dignity. While the literature on AM addresses many concerns that relate to the dignity of workers, a shared understanding of what worker dignity means, and a framework to study it, in the context of software algorithms at work is lacking. We advance a conceptual framework based on a Capability Approach (CA) as a route to understanding worker dignity under AM. This paper contributes to the existing AM literature which currently is mainly focused on exploitation and violations of dignity and its protection. By using a CA, we expand this focus and can evaluate the possibility that AM might also enable and promote dignity. We conclude that our CA-based conceptual framework provides a valuable means to study AM and then discuss avenues for future research into the complex relationship between worker dignity and AM systems.
      PubDate: 2022-02-03
      DOI: 10.1007/s10676-022-09637-y
       
  • The emergence of “truth machines”?: Artificial intelligence approaches to lie detection

      Abstract: This article analyzes emerging artificial intelligence (AI)-enhanced lie detection systems from ethical and human resource (HR) management perspectives. I show how these AI enhancements transform lie detection, followed by analyses as to how the changes can lead to moral problems. Specifically, I examine how these applications of AI introduce human rights issues of fairness, mental privacy, and bias and outline the implications of these changes for HR management. The changes that AI is making to lie detection are altering the roles of human test administrators and human subjects, adding machine learning-based AI agents to the situation and establishing invasive data collection processes as well as introducing certain biases in results. I project that the potentials for pervasive and continuous lie detection initiatives (“truth machines”) are substantial, displacing human-centered efforts to establish trust and foster integrity in organizations. I argue that if it is possible for HR managers to do so, they should cease using technologically-based lie detection systems entirely and work to foster trust and accountability on a human scale. However, if these AI-enhanced technologies are put into place by organizations by law, agency mandate, or other compulsory measures, care should be taken that the impacts of the technologies on human rights and wellbeing are considered. The article explores how AI can displace the human agent in some aspects of lie detection and credibility assessment scenarios, expanding the prospects for inscrutable, “black box” processes and novel physiological constructs (such as “biomarkers of deceit”) that may increase the potential for such human rights concerns as fairness, mental privacy, and bias. Employee interactions with autonomous lie detection systems rather than with human beings who administer specific tests can reframe organizational processes and rules concerning the assessment of personal honesty and integrity. The dystopian projection of organizational life in which analyses and judgments of the honesty of one’s utterances are made automatically and in conjunction with one’s personal profile provides unsettling prospects for the autonomy of self-representation.
      PubDate: 2022-01-24
      DOI: 10.1007/s10676-022-09621-6
       
  • A sociotechnical perspective for the future of AI: narratives, inequalities, and human control

      Abstract: Different people have different perceptions about artificial intelligence (AI). It is extremely important to bring together all the alternative frames of thinking—from the various communities of developers, researchers, business leaders, policymakers, and citizens—to properly start acknowledging AI. This article highlights the ‘fruitful collaboration’ that sociology and AI could develop in both social and technical terms. We discuss how biases and unfairness are among the major challenges to be addressed in such a sociotechnical perspective. First, as intelligent machines reveal their nature of ‘magnifying glasses’ in the automation of existing inequalities, we show how the AI technical community is calling for transparency and explainability, accountability and contestability. Not to be considered as panaceas, they all contribute to ensuring human control in novel practices that include requirement, design and development methodologies for a fairer AI. Second, we elaborate on the mounting attention for technological narratives as technology is recognized as a social practice within a specific institutional context. Not only do narratives reflect organizing visions for society, but they also are a tangible sign of the traditional lines of social, economic, and political inequalities. We conclude with a call for a diverse approach within the AI community and a richer knowledge about narratives as they help in better addressing future technical developments, public debate, and policy. AI practice is interdisciplinary by nature and it will benefit from a socio-technical perspective.
      PubDate: 2022-01-24
      DOI: 10.1007/s10676-022-09624-3
       
  • Rethinking explainability: toward a postphenomenology of black-box artificial intelligence in medicine

      Abstract: In recent years, increasingly advanced artificial intelligence (AI), and in particular machine learning, has shown great promise as a tool in various healthcare contexts. Yet as machine learning in medicine has become more useful and more widely adopted, concerns have arisen about the “black-box” nature of some of these AI models, or the inability to understand—and explain—the inner workings of the technology. Some critics argue that AI algorithms must be explainable to be responsibly used in the clinical encounter, while supporters of AI dismiss the importance of explainability and instead highlight the many benefits the application of this technology could have for medicine. However, this dichotomy fails to consider the particular ways in which machine learning technologies mediate relations in the clinical encounter, and in doing so, makes explainability more of a problem than it actually is. We argue that postphenomenology is a highly useful theoretical lens through which to examine black-box AI, because it helps us better understand the particular mediating effects this type of technology brings to clinical encounters and moves beyond the explainability stalemate. Using a postphenomenological approach, we argue that explainability is more of a concern for physicians than it is for patients, and that a lack of explainability does not introduce a novel concern to the physician–patient encounter. Explainability is just one feature of technological mediation and need not be the central concern on which the use of black-box AI hinges.
      PubDate: 2022-01-24
      DOI: 10.1007/s10676-022-09631-4
       
  • Deny, dismiss and downplay: developers’ attitudes towards risk and their role in risk creation in the field of healthcare-AI

      Abstract: Developers are often the engine behind the creation and implementation of new technologies, including in the artificial intelligence surge that is currently underway. In many cases these new technologies introduce significant risk to affected stakeholders; risks that can be reduced and mitigated by such a dominant party. This is fully recognized by texts that analyze risks in the current AI transformation, which suggest voluntary adoption of ethical standards and imposing ethical standards via regulation and oversight as tools to compel developers to reduce such risks. However, what these texts usually sidestep is the question of how aware developers are of the risks they are creating with these new AI technologies, and what their attitudes are towards such risks. This paper seeks to rectify this gap in research by analyzing an ongoing case study. Focusing on six Israeli AI startups in the field of radiology, I carry out a content analysis of their online material in order to examine these companies’ stances towards the potential threat their automated tools pose to patient safety and to the work-standing of healthcare professionals. Results show that these developers are aware of the risks their AI products pose, but tend to deny their own role in the technological transformation and dismiss or downplay the risks to stakeholders. I conclude by tying these findings back to current risk-reduction recommendations with regards to advanced AI technologies, and suggest which of them hold more promise in light of developers’ attitudes.
      PubDate: 2022-01-24
      DOI: 10.1007/s10676-022-09627-0
       
  • Weapons of moral construction? On the value of fairness in algorithmic decision-making

      Abstract: Fairness is one of the most prominent values in the Ethics and Artificial Intelligence (AI) debate and, specifically, in the discussion on algorithmic decision-making (ADM). However, while the need for fairness in ADM is widely acknowledged, the very concept of fairness has not been sufficiently explored so far. Our paper aims to fill this gap and claims that an ethically informed re-definition of fairness is needed to adequately investigate fairness in ADM. To achieve our goal, after an introductory section aimed at clarifying the aim and structure of the paper, in section “Fairness in algorithmic decision-making” we provide an overview of the state of the art of the discussion on fairness in ADM and show its shortcomings; in section “Fairness as an ethical value”, we pursue an ethical inquiry into the concept of fairness, drawing insights from accounts of fairness developed in moral philosophy, and define fairness as an ethical value. In particular, we argue that fairness is articulated in a distributive and socio-relational dimension; it comprises three main components: fair equality of opportunity, equal right to justification, and fair equality of relationship; these components are grounded in the need to respect persons both as persons and as particular individuals. In section “Fairness in algorithmic decision-making revised”, we analyze the implications of our redefinition of fairness as an ethical value on the discussion of fairness in ADM and show that each component of fairness has profound effects on the criteria that ADM ought to meet. Finally, in section “Concluding remarks”, we sketch some broader implications and conclude.
      PubDate: 2022-01-24
      DOI: 10.1007/s10676-022-09622-5
       
  • Instilling moral value alignment by means of multi-objective reinforcement learning

      Abstract: AI research is being challenged with ensuring that autonomous agents learn to behave ethically, namely in alignment with moral values. Here, we propose a novel way of tackling the value alignment problem as a two-step process. The first step consists in formalising moral values and value aligned behaviour based on philosophical foundations. Our formalisation is compatible with the framework of (Multi-Objective) Reinforcement Learning, to ease the handling of an agent’s individual and ethical objectives. The second step consists in designing an environment wherein an agent learns to behave ethically while pursuing its individual objective. We leverage our theoretical results to introduce an algorithm that automates our two-step approach. In the cases where value-aligned behaviour is possible, our algorithm produces a learning environment for the agent wherein it will learn a value-aligned behaviour.
      PubDate: 2022-01-24
      DOI: 10.1007/s10676-022-09635-0
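      The two objectives the abstract separates (an individual goal and an ethical one) are often combined in multi-objective reinforcement learning by scalarising the reward vector, with the ethical component weighted heavily enough to dominate. The toy Q-learning sketch below illustrates that idea on a made-up 2x3 grid with one "harm" cell; the environment, weights, and rewards are assumptions for illustration, not the authors' algorithm.

```python
# Toy sketch: linear scalarisation of an individual objective (reach the goal)
# and an ethical objective (avoid the harm cell) in tabular Q-learning on a
# made-up 2x3 grid. Environment, rewards, and weights are assumptions.
import numpy as np

ROWS, COLS = 2, 3
START, GOAL, HARM = (0, 0), (0, 2), (0, 1)     # harm lies on the direct path
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
W_ETHICAL = 10.0                               # ethical violations dominate

def step(state, action):
    r, c = state
    dr, dc = action
    nxt = (min(max(r + dr, 0), ROWS - 1), min(max(c + dc, 0), COLS - 1))
    r_individual = 1.0 if nxt == GOAL else -0.01     # small step cost
    r_ethical = -1.0 if nxt == HARM else 0.0
    return nxt, r_individual + W_ETHICAL * r_ethical, nxt == GOAL

rng = np.random.default_rng(0)
Q = {(r, c): np.zeros(len(ACTIONS)) for r in range(ROWS) for c in range(COLS)}
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(3000):                          # epsilon-greedy Q-learning
    s, done = START, False
    while not done:
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        s2, reward, done = step(s, ACTIONS[a])
        Q[s][a] += alpha * (reward + gamma * (0.0 if done else Q[s2].max()) - Q[s][a])
        s = s2

# Greedy rollout: with the ethical term included, the agent detours through
# row 1 instead of crossing the harm cell (0, 1).
s, path = START, [START]
for _ in range(10):
    s, _, done = step(s, ACTIONS[int(Q[s].argmax())])
    path.append(s)
    if done:
        break
print(path)
```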
       
 