Electronic Markets
Journal Prestige (SJR): 0.834
Citation Impact (citeScore): 3
Number of Followers: 6  
 
  Hybrid journal (may contain Open Access articles)
ISSN (Print) 1019-6781 - ISSN (Online) 1422-8890
Published by Springer-Verlag
  • Electronic Markets on AI and standardization

      PubDate: 2023-01-26
       
  • Users taking the blame? How service failure, recovery, and robot
           design affect user attributions and retention

      Abstract: Firms use robots to deliver an ever-expanding range of services. However, as service failures are common, service recovery actions are necessary to prevent user churn. This research further suggests that firms need to know how to design service robots that avoid alienating users in case of service failures. Robust evidence across two experiments demonstrates that users attribute successful service outcomes internally, while robot-induced service failures are blamed on the firm (and not the robot), confirming the well-known self-serving bias. While this external attributional shift occurs regardless of the robot design (i.e., it is the same for warm vs. competent robots), the findings imply that service recovery minimizes the undesirable external shift and that this effect is particularly pronounced for warm robots. For practitioners, this implies prioritizing service robots with a warm design to maximize user retention across all types of service outcome (i.e., success, failure, and failure with recovery). For theory, this work demonstrates that attribution represents a meaningful mechanism to explain the proposed relationships.
      PubDate: 2023-01-19
       
  • On the potentials of quantum computing – An interview with Heike
           Riel from IBM Research

      Abstract: In this interview, Dr. Heike Riel, a leading scientist and Fellow at IBM Research, reports on the current state of research in the field of quantum computing. Building on the distinction between gateable quantum computers and quantum annealers, the interview sheds light on how research has evolved on gateable quantum computers, which are the path pursued by IBM. These gateable quantum computers are described with their current status as well as the improvements and challenges regarding speed, scale, and quality. All three parameters are important for increasing the performance of these universal quantum computers and for leveraging their potential compared to classical computers. In particular, they may solve complex mathematical problems that appear in numerous applications in science and business. Among the examples mentioned are optimization problems that tend to scale exponentially with the number of parameters, for example, in the material and natural sciences, or simulation problems in the financial industry and in manufacturing. The interview concludes with a critical assessment of possible risks and expectations for the future.
      PubDate: 2023-01-04
       
  • Applying XAI to an AI-based system for candidate management to mitigate
           bias and discrimination in hiring

      Abstract: Assuming that potential biases of Artificial Intelligence (AI)-based systems can be identified and controlled for (e.g., by providing high-quality training data), employing such systems to augment human resource (HR) decision makers in candidate selection provides an opportunity to make selection processes more objective. However, as the final hiring decision is likely to remain with humans, prevalent human biases could still cause discrimination. This work investigates the impact of an AI-based system’s candidate recommendations on humans’ hiring decisions and how this relation could be moderated by an Explainable AI (XAI) approach. We used a self-developed platform and conducted an online experiment with 194 participants. Our quantitative and qualitative findings suggest that the recommendations of an AI-based system can reduce discrimination against older and female candidates but appear to cause fewer selections of foreign-race candidates. Contrary to our expectations, the same XAI approach moderated these effects differently depending on the context.
      PubDate: 2022-12-20
       
  • Global reconstruction of language models with linguistic rules –
           Explainable AI for online consumer reviews

      Abstract: Analyzing textual data by means of AI models has been recognized as highly relevant in information systems research and practice, since a vast amount of data on eCommerce platforms, review portals, or social media is given in textual form. Here, language models such as BERT, which are deep learning AI models, constitute a breakthrough and achieve leading-edge results in many applications of text analytics, such as sentiment analysis in online consumer reviews. However, these language models are “black boxes”: It is unclear how they arrive at their predictions. Yet, applications of language models, for instance, in eCommerce require checks and justifications by means of global reconstruction of their predictions, since the decisions based thereon can have large impacts or are even mandatory due to regulations such as the GDPR. To this end, we propose a novel XAI approach for global reconstruction of language model predictions for token-level classifications (e.g., aspect term detection) by means of linguistic rules based on NLP building blocks (e.g., part-of-speech); see the sketch after this entry for the basic idea. The approach is analyzed on different datasets of online consumer reviews and NLP tasks. Since our approach allows for different setups, we are also the first to analyze the trade-off between comprehensibility and fidelity of global reconstructions of language model predictions. With respect to this trade-off, we find that our approach indeed allows for balanced setups for global reconstructions of BERT’s predictions. Thus, our approach paves the way for a thorough understanding of language model predictions in text analytics. In practice, our approach can assist businesses in their decision-making and supports compliance with regulatory requirements.
      PubDate: 2022-12-13
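
      A minimal sketch of the rule-based global reconstruction idea above, under strong simplifying assumptions: the tokens, part-of-speech tags, and black-box labels are hypothetical toy data, and one rule per POS tag stands in for the richer linguistic rules of the paper. This is not the authors' code; it only illustrates the fidelity/comprehensibility trade-off the abstract mentions.

        # Toy global surrogate: mimic a black-box token classifier with POS rules.
        from collections import Counter

        # Hypothetical review tokens: (token, POS tag, black-box label),
        # where label 1 marks a predicted aspect term.
        tokens = [
            ("battery", "NOUN", 1), ("life", "NOUN", 1), ("is", "VERB", 0),
            ("great", "ADJ", 0), ("screen", "NOUN", 1), ("looks", "VERB", 0),
            ("dim", "ADJ", 0), ("the", "DET", 0), ("camera", "NOUN", 1),
        ]

        # One rule per POS tag: predict the label the black box assigns most often.
        by_pos = {}
        for _, pos, label in tokens:
            by_pos.setdefault(pos, Counter())[label] += 1
        rules = {pos: c.most_common(1)[0][0] for pos, c in by_pos.items()}

        # Fidelity: share of black-box predictions the rules reproduce exactly.
        # Comprehensibility is proxied by the number of rules needed.
        hits = sum(rules[pos] == label for _, pos, label in tokens)
        print("rules:", rules)
        print(f"fidelity: {hits / len(tokens):.2f} with {len(rules)} rules")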
       
  • Designing a feature selection method based on explainable artificial
           intelligence

      Abstract: Nowadays, artificial intelligence (AI) systems make predictions in numerous high-stakes domains, including credit-risk assessment and medical diagnostics. Consequently, AI systems increasingly affect humans, yet many state-of-the-art systems lack transparency and thus deny the individual’s “right to explanation”. As a remedy, researchers and practitioners have developed explainable AI, which provides reasoning on how AI systems infer individual predictions. However, with recent legal initiatives demanding comprehensive explainability throughout the (development of an) AI system, we argue that the pre-processing stage has been unjustifiably neglected and should receive greater attention in current efforts to establish explainability. In this paper, we focus on introducing explainability to an integral part of the pre-processing stage: feature selection (see the sketch after this entry). Specifically, we build upon design science research to develop a design framework for explainable feature selection. We instantiate the design framework in a running software artifact and evaluate it in two focus group sessions. Our artifact helps organizations to persuasively justify feature selection to stakeholders and, thus, comply with upcoming AI legislation. We further provide researchers and practitioners with a design framework consisting of meta-requirements and design principles for explainable feature selection.
      PubDate: 2022-12-12
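
      As a hedged illustration of explainable feature selection (not the paper's artifact), the sketch below uses permutation importance so that every keep/drop decision carries reportable evidence; the data, model, and cut-off are hypothetical placeholders.

        # Feature selection with an explanation attached to each decision.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=500, n_features=8,
                                   n_informative=3, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

        # Permutation importance: mean accuracy drop when a feature is shuffled.
        res = permutation_importance(model, X_te, y_te, n_repeats=20,
                                     random_state=0)
        THRESHOLD = 0.01  # hypothetical cut-off an organization would document
        for i, (m, s) in enumerate(zip(res.importances_mean,
                                       res.importances_std)):
            verdict = "keep" if m > THRESHOLD else "drop"
            print(f"feature {i}: importance {m:+.3f} +/- {s:.3f} -> {verdict}")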
       
  • A nascent design theory for explainable intelligent systems

      Abstract: Due to computational advances in the past decades, so-called intelligent systems can learn from increasingly complex data, analyze situations, and support users in their decision-making. However, in practice, the complexity of these intelligent systems leaves users hardly able to comprehend the inherent decision logic of the underlying machine learning model. As a result, the adoption of this technology, especially for high-stakes scenarios, is hampered. In this context, explainable artificial intelligence offers numerous starting points for making the inherent logic explainable to people. While research establishes the necessity of incorporating explainable artificial intelligence into intelligent systems, there is still a lack of knowledge about how to socio-technically design these systems to address acceptance barriers among different user groups. In response, we have derived and evaluated a nascent design theory for explainable intelligent systems based on a structured literature review, two qualitative expert studies, a real-world use case application, and quantitative research. Our design theory includes design requirements, design principles, and design features covering the topics of global explainability, local explainability, personalized interface design, and psychological/emotional factors.
      PubDate: 2022-12-12
       
  • Explainable and responsible artificial intelligence

      PubDate: 2022-11-29
       
  • Standardization for platform ecosystems

      PubDate: 2022-11-28
       
  • Trust in artificial intelligence: From a Foundational Trust Framework to
           emerging research opportunities

      Abstract: With the rise of artificial intelligence (AI), the issue of trust in AI emerges as a paramount societal concern. Despite increased attention from researchers, the topic remains fragmented without a common conceptual and theoretical foundation. To facilitate systematic research on this topic, we develop a Foundational Trust Framework to provide a conceptual, theoretical, and methodological foundation for trust research in general. The framework positions trust in general, and trust in AI specifically, as a problem of interaction among systems, and applies systems thinking and general systems theory to trust and trust in AI. The Foundational Trust Framework is then used to gain a deeper understanding of the nature of trust in AI. From this, a research agenda emerges that proposes significant questions to facilitate further advances in empirical, theoretical, and design research on trust in AI.
      PubDate: 2022-11-28
       
  • Understanding the process of meanings, materials, and competencies in
           adoption of mobile banking

      Abstract: COVID-19 has changed the way people live, bank, shop, and work by moving them toward digitalization. It has also driven the trend toward a cashless society, and this change has taken place in an increasingly uncertain and fearful environment. This study explores the social practice of mobile banking (MB) adoption during the global COVID-19 pandemic. Data were collected from banking customers and managers using online customer reviews, semi-structured interviews, and focus groups to develop an in-depth understanding of the subjective realities of their use of MB. This approach also ensured that social distancing practices were maintained during interviews conducted during the COVID-19 outbreak. Analysis of the data suggests that social media, social circles, family members, and teams of customer service agents play an important role in developing the social practice of MB. The study culminates in the presentation of the social practice of MB adoption (SPOTA) framework, which is based on extended social practice theory in the context of MB adoption. The study discusses the practical implications of the findings for systems developers. The many expectations that people with and without disabilities have of MB are discussed, and the findings could be used to improve the accessibility and habitual practice of MB adoption.
      PubDate: 2022-11-28
       
  • Smart cities and smart governance models for future cities

      PubDate: 2022-11-28
       
  • Decision support for efficient XAI services - A morphological analysis,
           business model archetypes, and a decision tree

      Abstract: The black-box nature of Artificial Intelligence (AI) models and their associated explainability limitations create a major adoption barrier. Explainable Artificial Intelligence (XAI) aims to make AI models more transparent to address this challenge. Researchers and practitioners apply XAI services to explore relationships in data, improve AI methods, justify AI decisions, and control AI technologies, with the goal of improving knowledge about AI and addressing user needs. The market volume of XAI services has grown significantly. As a result, trustworthiness, reliability, transferability, fairness, and accessibility are required capabilities of XAI for a range of relevant stakeholders, including managers, regulators, users of XAI models, developers, and consumers. We contribute to theory and practice by deducing XAI archetypes and developing a user-centric decision support framework to identify the XAI services most suitable for the requirements of relevant stakeholders (a simplified sketch of such a decision tree follows this entry). Our decision tree is founded on a literature-based morphological box and a classification of real-world XAI services. Finally, we discuss archetypal business models of XAI services and exemplary use cases.
      PubDate: 2022-11-23
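
      The following is a minimal, hypothetical rendering of a user-centric decision tree for choosing an XAI service; the questions, routing, and archetype names are illustrative placeholders and do not reproduce the authors' morphological box or classification.

        # Walk a small yes/no decision tree from stakeholder answers to an archetype.
        DECISION_TREE = {
            "question": "Do you need explanations for individual decisions?",
            "yes": {
                "question": "Must explanations be readable by lay users?",
                "yes": "archetype: local, example-based explanation service",
                "no": "archetype: local feature-attribution service",
            },
            "no": "archetype: global surrogate / model-inspection service",
        }

        def recommend(node, answers):
            """Follow pre-collected yes/no answers until a leaf is reached."""
            while isinstance(node, dict):
                node = node[answers[node["question"]]]
            return node

        answers = {
            "Do you need explanations for individual decisions?": "yes",
            "Must explanations be readable by lay users?": "no",
        }
        print(recommend(DECISION_TREE, answers))  # -> local feature-attribution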
       
  • Is trust in artificial intelligence systems related to user
           personality? Review of empirical evidence and future research
           directions

      Abstract: Artificial intelligence (AI) refers to technologies which support the execution of tasks normally requiring human intelligence (e.g., visual perception, speech recognition, or decision-making). Examples of AI systems are chatbots, robots, or autonomous vehicles, all of which have become an important phenomenon in the economy and society. Determining which AI system to trust and which not to trust is critical, because such systems carry out tasks autonomously and influence human decision-making. This growing importance of trust in AI systems has paralleled another trend: the increasing understanding that user personality is related to trust, thereby affecting the acceptance and adoption of AI systems. We developed a framework of user personality and trust in AI systems which distinguishes universal personality traits (e.g., Big Five), specific personality traits (e.g., propensity to trust), general behavioral tendencies (e.g., trust in a specific AI system), and specific behaviors (e.g., adherence to the recommendation of an AI system in a decision-making context). Based on this framework, we reviewed the scientific literature. We analyzed N = 58 empirical studies published in various scientific disciplines and developed a “big picture” view, revealing significant relationships between personality traits and trust in AI systems. However, our review also shows several unexplored research areas. In particular, we found that prescriptive knowledge about how to design trustworthy AI systems as a function of user personality lags far behind descriptive knowledge about the use and trust effects of AI systems. Based on these findings, we discuss possible directions for future research, including adaptive systems as a focus of future design science research.
      PubDate: 2022-11-23
       
  • Understanding the adoption of mask-supply information platforms during
           the COVID-19 pandemic

      Abstract: Since late 2019, coronavirus disease 2019 (COVID-19) has led to a significant increase in the demand for medical resources. To publish data on face mask supplies, the Taiwanese government collaborated with program developers to construct a mask-supply information transitional platform (MITP). To understand the adoption of the MITP, this study proposes a research model that integrates the health belief model (HBM) and the IS/IT continuance model to examine the factors affecting intention to use an MITP. Survey data collected from 524 respondents indicated that (1) intention to use an MITP is directly influenced by the perceived threat of COVID-19 and beliefs toward using the MITP; (2) cues to action directly influence the perceived threat of COVID-19; and (3) perceived ease of use of the MITP is a significant determinant of its perceived usefulness. These results provide practical guidelines for health authorities and governments to develop health information systems and strategies to control pandemics.
      PubDate: 2022-11-12
      DOI: 10.1007/s12525-022-00602-7
       
  • Artificial intelligence and machine learning

      Abstract: Within the last decade, the application of “artificial intelligence” and “machine learning” has become popular across multiple disciplines, especially in information systems. The two terms are still used inconsistently in academia and industry, sometimes as synonyms, sometimes with different meanings. With this work, we try to clarify the relationship between these concepts. We review the relevant literature and develop a conceptual framework to specify the role of machine learning in building (artificial) intelligent agents. Additionally, we propose a consistent typology for AI-based information systems. We contribute to a deeper understanding of the nature of both concepts and to more terminological clarity and guidance, as a starting point for interdisciplinary discussions and future research.
      PubDate: 2022-11-09
      DOI: 10.1007/s12525-022-00598-0
       
  • Explainable product backorder prediction exploiting CNN: Introducing
           explainable models in businesses

      Abstract: Due to expected positive impacts on business, the application of artificial intelligence has increased widely. The decision-making procedures of these models are often complex and not easily understandable to a company’s stakeholders, i.e., the people who have to follow up on recommendations or try to understand the automated decisions of a system. This opaqueness and black-box nature might hinder adoption, as users struggle to make sense of and trust the predictions of AI models. Recent research on eXplainable Artificial Intelligence (XAI) has focused mainly on explaining the models to AI experts with the purpose of debugging and improving the performance of the models. In this article, we explore how such systems could be made explainable to the stakeholders. To do so, we propose a new convolutional neural network (CNN)-based explainable predictive model for product backorder prediction in inventory management. Backorders are orders that customers place for products that are currently not in stock. The company then takes the risk of producing or acquiring the backordered products while, in the meantime, customers can cancel their orders if this takes too long, leaving the company with unsold items in its inventory. Hence, for their strategic inventory management, companies need to make decisions based on assumptions. Our argument is that these tasks can be improved by offering explanations for AI recommendations. Hence, our research investigates how such explanations could be provided, employing Shapley additive explanations to explain the overall model’s priorities in decision-making. Besides that, we introduce locally interpretable surrogate models that can explain any individual prediction of a model (see the sketch after this entry for the surrogate idea). The experimental results demonstrate effectiveness in predicting backorders in terms of standard evaluation metrics and outperform known related works with an AUC of 0.9489. Our approach demonstrates how current limitations of predictive technologies can be addressed in the business domain.
      PubDate: 2022-11-09
      DOI: 10.1007/s12525-022-00599-z
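
      A simplified sketch of the locally interpretable surrogate idea from the abstract, under assumed inputs: a gradient-boosting classifier stands in for the article's CNN, and synthetic data replaces the backorder dataset. A linear model fitted to the black box's outputs on perturbations around one instance yields per-feature local weights for that single prediction.

        # Explain one prediction via a local linear surrogate of the black box.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.linear_model import Ridge

        X, y = make_classification(n_samples=1000, n_features=6, random_state=1)
        black_box = GradientBoostingClassifier(random_state=1).fit(X, y)

        x0 = X[0]                                   # instance to explain
        rng = np.random.default_rng(1)
        Z = x0 + rng.normal(scale=0.3, size=(500, x0.size))  # local neighborhood
        p = black_box.predict_proba(Z)[:, 1]        # black-box "backorder" scores

        surrogate = Ridge(alpha=1.0).fit(Z, p)      # interpretable local model
        for i, w in enumerate(surrogate.coef_):
            print(f"feature {i}: local weight {w:+.3f}")
        print("black-box score for x0:",
              round(black_box.predict_proba(x0.reshape(1, -1))[0, 1], 3))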
       
  • User trust in artificial intelligence: A comprehensive conceptual
           framework

      Abstract: This paper provides a systematic literature review of studies published between January 2015 and January 2022 on user trust in artificial intelligence (AI) that have been conducted from different perspectives. Such a review and analysis leads to the identification of the various components, influencing factors, and outcomes of users’ trust in AI. Based on the findings, a comprehensive conceptual framework is proposed for a better understanding of users’ trust in AI. This framework can further be tested and validated in various contexts to enhance our knowledge of users’ trust in AI. This study also identifies potential future research avenues. From a practical perspective, it helps AI-supported service providers comprehend the concept of user trust from different perspectives. The findings highlight the importance of building trust based on different facets to facilitate positive cognitive, affective, and behavioral changes among users.
      PubDate: 2022-11-04
      DOI: 10.1007/s12525-022-00592-6
       
  • The effect of transparency and trust on intelligent system acceptance:
           Evidence from a user-based study

      Abstract: Contemporary decision support systems increasingly rely on artificial intelligence technology, such as machine learning algorithms, to form intelligent systems. These systems have human-like decision capacity for selected applications, based on a decision rationale which cannot be looked up conveniently and constitutes a black box. As a consequence, acceptance by end-users remains somewhat hesitant. While a lack of transparency has been said to hinder trust and foster aversion towards these systems, studies that connect user trust to transparency and, subsequently, acceptance are scarce. In response, our research is concerned with the development of a theoretical model that explains end-user acceptance of intelligent systems. We utilize the unified theory of acceptance and use of technology as well as explanation theory and related theories on initial trust and user trust in information systems. The proposed model is tested in an industrial maintenance workplace scenario using maintenance experts as participants to represent the user group. Results show that acceptance is performance-driven at first sight. However, transparency plays an important indirect role in regulating trust and the perception of performance.
      PubDate: 2022-10-24
      DOI: 10.1007/s12525-022-00593-5
       
  • Calming the customers by AI: Investigating the role of chatbot acting-cute
           strategies in soothing negative customer emotions

      Abstract: Although intelligent chatbots have been widely used in online customer service settings in modern e-business, scholars still have little understanding of the chatbot strategies implemented in product or service failure contexts. Aiming at this gap, this study explored whether, how, and when two chatbot acting-cute strategies (i.e., the whimsical chatbot strategy and the kindchenschema chatbot strategy) could soothe negative customer emotions when a product or service failure happens. Using two experimental studies, the results demonstrated that both the whimsical chatbot strategy and the kindchenschema (baby schema) chatbot strategy could placate negative customer emotions via two mechanisms. When product or service failure severity is high, the soothing effects of both strategies weaken, although the kindchenschema chatbot strategy weakens less. The whimsical chatbot strategy is suitable for customers with high technology anxiety, while the kindchenschema chatbot strategy is suitable for those with low technology anxiety. The whimsical chatbot strategy was more effective with male customers than with female customers, while the kindchenschema chatbot strategy had the opposite effect. Finally, theoretical and managerial implications are discussed.
      PubDate: 2022-10-24
      DOI: 10.1007/s12525-022-00596-2
       
 