Advances in Human-Computer Interaction
[SJR: 0.233] [H-Index: 5]
Open Access journal
ISSN (Print) 1687-5893 - ISSN (Online) 1687-5907
Published by Hindawi Publishing Corporation
- Dynamic Arm Gesture Recognition Using Spherical Angle Features and Hidden Markov Models
Abstract: We introduce a vision-based arm gesture recognition (AGR) system using Kinect. The AGR system learns a discrete Hidden Markov Model (HMM), an effective probabilistic graphical model for gesture recognition, from the dynamic pose of the arm joints provided by the Kinect API. Because Kinect’s viewpoint and the subject’s arm length can substantially affect the estimated 3D pose of each joint, it is difficult to recognize gestures reliably from these features. The proposed system therefore performs a feature transformation that converts the 3D Cartesian coordinates of each joint into the 2D spherical angles of the corresponding arm part, yielding view-invariant and more discriminative features. We confirmed the high recognition performance of the proposed AGR system through experiments with two different datasets.
PubDate: Mon, 16 Nov 2015 13:55:14 +0000
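The core of the AGR pipeline above is the Cartesian-to-spherical feature transformation. The paper's exact conventions are not given in the abstract, so the following is a minimal sketch under assumed axis conventions (y up, azimuth in the x-z plane); normalizing by the joint-offset length is what removes the dependence on arm length:

```python
import math

def spherical_angles(parent, child):
    """Map the 3D offset between two arm joints (e.g., shoulder -> elbow)
    to a pair of spherical angles (theta, phi).

    theta: inclination from the vertical (y) axis, in radians.
    phi:   azimuth in the x-z plane, in radians.
    The angles are invariant to arm length because the offset is normalized.
    Axis conventions here are assumptions, not the paper's definitions.
    """
    dx = child[0] - parent[0]
    dy = child[1] - parent[1]
    dz = child[2] - parent[2]
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    if r == 0:
        raise ValueError("coincident joints")
    theta = math.acos(dy / r)   # inclination from vertical
    phi = math.atan2(dz, dx)    # azimuth in the horizontal plane
    return theta, phi
```

A sequence of such angle pairs per arm segment would then be quantized and fed to the discrete HMM.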
- Vibrotactile Stimulation as an Instructor for Mimicry-Based Physical Exercise
Abstract: The present aim was to investigate the functionality of vibrotactile stimulation in mimicry-based behavioral regulation during physical exercise. Vibrotactile stimuli communicated instructions from an instructor to an exerciser to perform lower extremity movements. A wireless prototype was tested first in controlled laboratory conditions (Study 1), followed by a user study (Study 2) conducted in a group exercise situation for elderly participants with a new version of the system with improved construction and extended functionality. The results of Study 1 showed that vibrotactile instructions were successful in both supplementing and substituting visual knee lift instructions. Vibrotactile stimuli were accurately recognized, and exercise with the device received affirmative ratings. Interestingly, tactile stimulation appeared to stabilize the acceleration magnitude of the knee lifts in comparison to visual instructions. In Study 2, the user experience of the system was rated as mainly positive by both the exercisers and their instructors. For example, exercise with vibrotactile instructions was experienced as more motivating than a conventional exercise session. Together the results indicate that tactile instructions could make it easier for people who have difficulty following visual and auditory instructions to take part in mimicry-based group training. Both studies also revealed development areas, primarily related to a slight delay in triggering the vibrotactile stimulation.
PubDate: Tue, 27 Oct 2015 11:53:23 +0000
- NFC-Based User Interface for Smart Environments
Abstract: The physical support of a home automation system, combined with a simplified user-system interaction modality, may allow people affected by motor impairments or limitations, such as elderly and disabled people, to live safely and comfortably at home, improving their autonomy and facilitating the execution of daily life tasks. The proposed solution takes advantage of Near Field Communication (NFC) technology, which is simple and intuitive to use, to enable advanced user interaction. The user can perform normal daily activities, such as lifting a gate or closing a window, through a device enabled to read NFC tags containing the commands for the home automation system. A passive Smart Panel is implemented, composed of multiple properly programmed NFC tags, to enable the execution of both individual commands and so-called scenarios. The work compares several versions of the proposed Smart Panel, which differ in tag interrogation, composition of single commands, number of tags, and dynamic user interaction model, at parity of the number of commands to issue. The main conclusions drawn from the experimental results concern the effective adoption of NFC in smart assistive environments.
PubDate: Wed, 26 Aug 2015 11:54:25 +0000
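The Smart Panel concept above amounts to a lookup from tag identifiers to one or more home-automation commands. The tag IDs, command names, and scenario grouping below are illustrative assumptions, not the paper's actual encoding:

```python
# Hypothetical mapping from NFC tag identifiers to home-automation commands.
# A tag holding several commands implements a "scenario" in the paper's sense.
TAG_COMMANDS = {
    "04:A1": ["open_gate"],
    "04:B2": ["close_window"],
    "04:C3": ["close_window", "lower_blinds", "lights_off"],  # a scenario tag
}

def on_tag_read(tag_id, executor):
    """Dispatch the command(s) stored behind a tag to the automation system.

    `executor` is any callable that accepts a single command string.
    Returns the list of commands issued (empty for an unknown tag).
    """
    commands = TAG_COMMANDS.get(tag_id)
    if commands is None:
        return []
    for cmd in commands:
        executor(cmd)
    return commands
```

Reading the scenario tag would thus issue all of its commands in one touch, which is the interaction the Smart Panel is designed to simplify.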
- Should I Stop Thinking About It: A Computational Exploration of
Reappraisal Based Emotion Regulation
Abstract: Agent-based simulation of people’s behaviors and minds has become increasingly popular in recent years. It provides a research platform to simulate and compare alternative psychological and social theories, as well as to create virtual characters that can interact with people or among each other to provide pedagogical or entertainment effects. In this paper, we investigate computationally modeling people’s coping behaviors, in particular in relation to depression, in decision-theoretic agents. Recent studies have suggested that depression can result from failed emotion regulation under limited cognitive resources. In this work, we demonstrate how reappraisal can fail under high levels of stress and limited cognitive resources using an agent-based simulation. Further, we explored the effectiveness of reappraisal under different conditions. Our experiments suggest that for people who are more likely to recall positive memories, it is more beneficial to think about the recalled events from multiple perspectives. However, for people who are more likely to recall negative memories, the better strategy is to not evaluate the recalled events against multiple goals.
PubDate: Wed, 12 Aug 2015 07:54:54 +0000
- WozARd: A Wizard of Oz Method for Wearable Augmented Reality
Interaction—A Pilot Study
Abstract: Head-mounted displays and other wearable devices open up for innovative types of interaction for wearable augmented reality (AR). However, to design and evaluate these new types of AR user interfaces, it is essential to quickly simulate undeveloped components of the system and collect feedback from potential users early in the design process. One way of doing this is the wizard of Oz (WOZ) method. The basic idea behind WOZ is to create the illusion of a working system by having a human operator perform some or all of the system’s functions. WozARd is a WOZ method developed for wearable AR interaction. The presented pilot study was an initial investigation of the capability of the WozARd method to simulate an AR city tour. Qualitative and quantitative data were collected from 21 participants performing a simulated AR city tour. The data analysis focused on seven categories that can have an impact on how the WozARd method is perceived by participants: precision, relevance, responsiveness, technical stability, visual fidelity, general user experience, and human-operator performance. Overall, the results indicate that the participants perceived the simulated AR city tour as a relatively realistic experience despite a certain degree of technical instability and human-operator mistakes.
PubDate: Wed, 10 Jun 2015 13:45:39 +0000
- Design and Validation of an Attention Model of Web Page Users
Abstract: In this paper, we propose a model to predict the locations of the most attended pictorial information on a web page and the attention sequence of the information. We propose to divide the content of a web page into conceptually coherent units or objects, based on a survey of more than 100 web pages. The proposed model takes into account three characteristics of an image object (chromatic contrast, size, and position) and computes a numerical value, the attention factor. We can predict from the attention factor values the image objects most likely to draw attention and the sequence in which attention will be drawn. We have carried out empirical studies to both develop and determine the efficacy of the proposed model. The study results revealed a prediction accuracy of about 80% for a set of artificially designed web pages and about 60% for a set of real web pages sampled from the Internet. The performance was found to be better (in terms of prediction accuracy) than the visual saliency model, a popular model to predict human attention on an image.
PubDate: Sat, 28 Feb 2015 09:59:22 +0000
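The abstract does not state how the three object characteristics are combined into the attention factor; a simple weighted-sum sketch with assumed weights illustrates the idea of scoring objects and deriving an attention sequence from the scores:

```python
def attention_factor(obj, weights=(0.5, 0.3, 0.2)):
    """Combine chromatic contrast, size, and position into a single score.

    `obj` is a dict with values normalized to [0, 1]:
      contrast  - chromatic contrast against the surround
      size      - area relative to the page
      position  - closeness to the top-left reading origin
    The linear weighting and the weight values are illustrative assumptions;
    the paper's actual combination rule is not given in the abstract.
    """
    w_c, w_s, w_p = weights
    return w_c * obj["contrast"] + w_s * obj["size"] + w_p * obj["position"]

def attention_sequence(objects):
    """Predicted order in which image objects draw attention (highest first)."""
    return sorted(objects, key=attention_factor, reverse=True)
```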
- CaRo 2.0: An Interactive System for Expressive Music Rendering
Abstract: In several application contexts in the multimedia field (educational, extreme gaming), interaction with the user requires that the system be able to render music expressively. Expressiveness is the added value of a performance and is part of the reason that music is interesting to listen to. Understanding and modeling expressive content communication is important for many engineering applications in information technology (e.g., Music Information Retrieval, as well as several applications in the affective computing field). In this paper, we present an original approach to modify the expressive content of a performance in a gradual way, applying a smooth morphing among performances with different expressive content in order to adapt the audio expressive character to the user’s desires. The system won the final stage of Rencon 2011, a performance RENdering CONtest: a research project that organizes contests for computer systems generating expressive musical performances.
PubDate: Mon, 02 Feb 2015 09:01:28 +0000
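The gradual expressive morphing described above can be illustrated as interpolation between two sets of expressive parameters. The parameter names and the linear interpolation rule are assumptions for illustration, not CaRo's actual expressive space:

```python
def morph(perf_a, perf_b, alpha):
    """Linearly interpolate two expressive performance parameter sets
    (e.g., tempo and loudness deviations). Moving alpha smoothly from 0
    (performance A) to 1 (performance B) produces a gradual morph of the
    expressive character. Parameter names are hypothetical examples.
    """
    return {k: (1 - alpha) * perf_a[k] + alpha * perf_b[k] for k in perf_a}
```

In an interactive setting, `alpha` would be driven continuously by the user's control input so the rendering drifts between expressive intentions rather than jumping.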
- Dimensions of Situatedness for Digital Public Displays
Abstract: Public displays are often strongly situated signs deeply embedded in their physical, social, and cultural setting. Understanding how a display is coupled with ongoing situations, that is, its level of situatedness, provides a key element for the interpretation of the displays themselves, but also for the interpretation of place, its situated practices, and its social context. Most digital displays, however, do not achieve the same sense of situatedness that seems so natural in their nondigital counterparts. This paper investigates people’s perception of situatedness when considering the connection between public displays and their context. We have collected over 300 photos of displays and conducted a set of analysis tasks involving focus groups and structured interviews with 15 participants. The contribution is a consolidated list of situatedness dimensions that should provide a valuable resource for reasoning about situatedness in digital displays and for informing the design and development of display systems.
PubDate: Mon, 22 Dec 2014 00:10:04 +0000
- The Interplay between Usability and Aesthetics: More Evidence for the
“What Is Usable Is Beautiful” Notion
Abstract: In light of inconsistent findings on the interplay between usability and aesthetics, the current paper aimed to further examine the effect of these variables on perceived qualities of a mobile phone prototype. An experiment with four versions of the prototype varying on two factors, (1) usability (high versus low) and (2) aesthetics (high versus low), was conducted with perceived usability and perceived beauty, as well as hedonic experience and the system’s appeal, as dependent variables. Participants were instructed to complete four typical tasks with the prototype before assessing its quality. Results showed that the mobile phone’s aesthetics does not affect its perceived usability, either directly or indirectly. Instead, results revealed an effect of usability on perceived beauty, which supports the “what is usable is beautiful” notion instead of “what is beautiful is usable.” Furthermore, effects of aesthetics and of usability on hedonic experience in terms of endowing identity and appeal were found, indicating that both instrumental (usability) and noninstrumental (beauty) qualities contribute to a positive user experience.
PubDate: Tue, 25 Nov 2014 14:46:56 +0000
- Large Display Interaction via Multiple Acceleration Curves and Multifinger
Abstract: Large high-resolution displays combine high pixel density with ample physical dimensions. The combination of these factors creates a multiscale workspace where interactive targeting of on-screen objects requires both high speed for distant targets and high accuracy for small targets. Modern operating systems support implicit dynamic control-display gain adjustment (i.e., a pointer acceleration curve) that helps to maintain both speed and accuracy. However, large high-resolution displays require a broader range of control-display gains than a single acceleration curve can usably enable. Some interaction techniques attempt to solve the problem by utilizing multiple explicit modes of interaction, where different modes provide different levels of pointer precision. Here, we investigate the alternative hypothesis of using a single mode of interaction for continuous pointing that enables both (1) standard implicit granularity control via an acceleration curve and (2) explicit switching between multiple acceleration curves in an efficient and dynamic way. We evaluate a sample solution that augments standard touchpad accelerated pointer manipulation with multitouch capability, where the choice of acceleration curve dynamically changes depending on the number of fingers in contact with the touchpad. Specifically, users can dynamically switch among three different acceleration curves by using one, two, or three fingers on the touchpad.
PubDate: Tue, 25 Nov 2014 00:00:00 +0000
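The single-mode switching scheme above can be sketched as a table of acceleration curves indexed by finger count. Only the one/two/three-finger selection follows the abstract; the specific gain functions below are assumptions:

```python
# Illustrative acceleration curves: each maps raw pointer speed to a
# control-display gain. The three-curve-by-finger-count selection follows
# the abstract; these particular gain functions are hypothetical.
CURVES = {
    1: lambda speed: 1.0 + 0.05 * speed,  # precise: low gain for small targets
    2: lambda speed: 2.0 + 0.20 * speed,  # standard desktop-like behavior
    3: lambda speed: 4.0 + 0.60 * speed,  # coarse: high gain for distant targets
}

def displayed_delta(raw_delta, speed, fingers):
    """Scale a raw touchpad displacement by the curve chosen via finger count.

    Unknown finger counts fall back to the precise (one-finger) curve.
    """
    gain = CURVES.get(fingers, CURVES[1])(speed)
    return raw_delta * gain
```

Because the curve choice is read on every motion sample, the user can re-grade pointer granularity mid-gesture simply by adding or lifting fingers, which is what makes the technique a single continuous mode.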
- A Study of Correlations among Image Resolution, Reaction Time, and Extent
of Motion in Remote Motor Interactions
Abstract: Motor interaction in virtual sculpting, dance trainings, and physiological rehabilitation requires close virtual proximity of users, which may be hindered by low resolution of images and system latency. This paper reports on the results of our investigation aiming to explore the pros and cons of using ultrahigh 4K resolution displays (4096 × 2160 pixels) in remote motor interaction. 4K displays are able to overcome the problem of visible pixels and they are able to show more accurate image details on the level of textures, shadows, and reflections. It was our assumption that such image details can not only satisfy visual comfort of the users, but also provide detailed visual cues and improve the reaction time of users in motor interaction. To validate this hypothesis, we explored the relationships between the reaction time of subjects responding to a series of action-reaction type of games and resolution of the image used in an experiment. The results of our experiment showed that the subjects’ reaction time is significantly shorter in 4K images than in HD or VGA images in motor interaction with small motion envelope.
PubDate: Mon, 17 Nov 2014 00:00:00 +0000
- Orchestrating End-User Perspectives in the Software Release Process: An
Integrated Release Management Framework
Abstract: Software bugs discovered by end-users are inevitable consequences of a vendor’s lack of testing. While they frequently result in costly system failures, one way to detect and prevent them is to engage the customer in acceptance testing during the release process. Yet, there is a considerable lack of empirical studies examining release management from end-users’ perspective. To address this gap, we propose and empirically test a release framework that positions the customer release manager in the center of the release process. Using a participatory action research strategy, a twenty-seven-month study was conducted to evaluate and improve the effectiveness of the framework through seven major and 39 minor releases.
PubDate: Sun, 16 Nov 2014 07:53:00 +0000
- PaperCAD: A System for Interrogating CAD Drawings Using Small Mobile
Computing Devices Combined with Interactive Paper
Abstract: Smartphones have become indispensable computational tools. However, some tasks can be difficult to perform on a smartphone because these devices have small displays. Here, we explore methods for augmenting the display of a smartphone, or other PDA, using interactive paper. Specifically, we present a prototype interface that enables a user to interactively interrogate technical drawings using an Anoto-based smartpen and a PDA. Our software system, called PaperCAD, enables users to query geometric information from CAD drawings printed on Anoto dot-patterned paper. For example, the user can measure a distance by drawing a dimension arrow. The system provides output to the user via a smartpen’s audio speaker and the dynamic video display of a PDA. The user can select either verbose or concise audio feedback, and the PDA displays a video image of the portion of the drawing near the pen tip. The project entails advances in the interpretation of pen input, such as a method that uses contextual information to interpret ambiguous dimensions and a technique that uses a hidden Markov model to correct interpretation errors in handwritten equations. Results of a user study suggest that our user interface design and interpretation techniques are effective and that users are highly satisfied with the system.
PubDate: Thu, 13 Nov 2014 06:48:13 +0000
- Encoding Theory of Mind in Character Design for Pedagogical Interactive Narrative
Abstract: Computer-aided interactive narrative allows people to participate actively in a dynamically unfolding story, by playing a character or by exerting directorial control. Because of its potential for providing interesting stories as well as allowing user interaction, interactive narrative has been recognized as a promising tool for providing both education and entertainment. This paper discusses the challenges in creating interactive narratives for pedagogical applications and how the challenges can be addressed by using agent-based technologies. We argue that a rich model of characters and in particular a Theory of Mind capacity are needed. The character architecture in the Thespian framework for interactive narrative is presented as an example of how decision-theoretic agents can be used for encoding Theory of Mind and for creating pedagogical interactive narratives.
PubDate: Thu, 23 Oct 2014 09:52:28 +0000
- The Role of Verbal and Nonverbal Communication in a Two-Person,
Cooperative Manipulation Task
Abstract: Motivated by the differences between human and robot teams, we investigated the role of verbal communication between human teammates as they work together to move a large object to a series of target locations. Only one member of the group was told the target sequence by the experimenters, while the second teammate had no target knowledge. The two experimental conditions we compared were haptic-verbal (teammates are allowed to talk) and haptic only (no talking allowed). The team’s trajectory was recorded and evaluated. In addition, participants completed a NASA TLX-style postexperimental survey which gauges workload along 6 different dimensions. In our initial experiment we found no significant difference in performance when verbal communication was added. In a follow-up experiment, using a different manipulation task, we did find that the addition of verbal communication significantly improved performance and reduced the perceived workload. In both experiments, for the haptic-only condition, we found that a remarkable number of groups independently improvised common haptic communication protocols (CHIPs). We speculate that such protocols can be substituted for verbal communication and that the performance difference between verbal and nonverbal communication may be related to how easy it is to distinguish the CHIPs from motions required for task completion.
PubDate: Thu, 07 Aug 2014 06:56:36 +0000
- A Proactive Approach of Robotic Framework for Making Eye Contact with Humans
Abstract: Making eye contact is one of the most important prerequisites for humans to initiate a conversation with others. However, it is not an easy task for a robot to make eye contact with a human if they are not facing each other initially or if the human is intensely engaged in his/her task. If the robot would like to start communication with a particular person, it should turn its gaze to that person and make eye contact with him/her. However, such a turning action alone is not enough to establish eye contact in all cases. Therefore, the robot should perform stronger actions in some situations so that it can attract the target person before meeting his/her gaze. In this paper, we propose a conceptual model of eye contact for social robots consisting of two phases: capturing attention and ensuring the attention capture. Evaluation experiments with human participants reveal the effectiveness of the proposed model in four viewing situations, namely, central field of view, near peripheral field of view, far peripheral field of view, and out of field of view.
PubDate: Wed, 23 Jul 2014 07:26:22 +0000
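The two-phase model (capturing attention, then ensuring the capture) over the four viewing situations might be sketched as follows; the concrete action names and the gaze check are hypothetical, since the abstract names the phases but not the behaviors:

```python
# Phase one: the strength of the attention-capturing action depends on
# where the robot falls in the person's visual field. Actions are
# illustrative stand-ins for the paper's actual robot behaviors.
ACTIONS = {
    "central": "turn_head",          # a subtle cue can suffice
    "near_peripheral": "turn_head",
    "far_peripheral": "wave_arm",    # stronger visual cue needed
    "out_of_view": "make_sound",     # nonvisual cue required
}

def capture_attention(field_of_view):
    """Phase one: choose an action to attract the target person."""
    return ACTIONS[field_of_view]

def ensure_capture(person_gaze_on_robot):
    """Phase two: confirm the capture succeeded before meeting the
    person's gaze; otherwise escalate to a stronger action."""
    return "meet_gaze" if person_gaze_on_robot else "escalate_action"
```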
- A Large-Scale Quantitative Survey of the German Geocaching Community in 2007
Abstract: We present a large-scale quantitative contextual survey of the geocaching community in Germany, one of the world’s largest geocaching communities. We investigate the features, attitudes, interests, and motivations that characterise the German geocachers. Two anonymous surveys were carried out on this issue in the year 2007: a large-scale general study based on web questionnaires and a more targeted study, which aimed at a comprehensive set of revealed geocaches of a certain region. The sample sizes of study 1 (the general study) and study 2 (the regional study) provide a representative basis to ground previous qualitative research in this domain. In addition, we investigated the geocachers’ usage of technology in combination with traditional paper-based media. This knowledge can be used to reflect on past and future trends within the geocaching community.
PubDate: Thu, 26 Jun 2014 06:59:42 +0000
- Using Noninvasive Brain Measurement to Explore the Psychological Effects
of Computer Malfunctions on Users during Human-Computer Interactions
Abstract: In today’s technologically driven world, there is a need to better understand the ways that common computer malfunctions affect computer users. These malfunctions may have measurable influences on computer users’ cognitive, emotional, and behavioral responses. An experiment was conducted in which participants performed a series of web search tasks while wearing functional near-infrared spectroscopy (fNIRS) and galvanic skin response sensors. Two computer malfunctions were introduced during the sessions, which had the potential to influence correlates of user trust and suspicion. Surveys were given after each session to measure users’ perceived emotional state, cognitive load, and perceived trust. Results suggest that fNIRS can be used to measure the different cognitive and emotional responses associated with computer malfunctions. These cognitive and emotional changes were correlated with users’ self-reported levels of suspicion and trust, and they in turn suggest future work that further explores the capability of fNIRS for the measurement of user experience during human-computer interactions.
PubDate: Wed, 30 Apr 2014 09:05:13 +0000
- Frame-Based Facial Expression Recognition Using Geometrical Features
Abstract: To make human-computer interaction (HCI) as good as human-human interaction, an efficient approach for human emotion recognition is required. These emotions could be fused from several modalities such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address the frame-based perception of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness) with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; such knowledge is gained through human intervention and is not available in real scenarios. Additionally, we provide a method to investigate the performance of geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we found that using eight facial points we can achieve the state-of-the-art recognition rate, whereas the previous state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by errors in facial point localization, especially for expressions with subtle facial deformations.
PubDate: Wed, 16 Apr 2014 08:05:05 +0000
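A frame-based geometry approach of the kind described derives features directly from landmark positions in a single frame, with no neutral-expression baseline. The landmark names and three features below are illustrative stand-ins for the paper's eight-point feature set, which the abstract does not enumerate:

```python
import math

def geometric_features(points):
    """Toy geometric features from a few facial landmarks.

    `points` maps landmark names to (x, y) image coordinates. Distances
    between landmark pairs (mouth width/height, eyebrow-to-eye gap) are
    typical geometry-based features; the specific pairs here are assumptions.
    """
    def dist(a, b):
        return math.hypot(points[a][0] - points[b][0],
                          points[a][1] - points[b][1])
    return {
        "mouth_width": dist("mouth_left", "mouth_right"),
        "mouth_height": dist("mouth_top", "mouth_bottom"),
        "brow_raise": dist("brow_left", "eye_left"),
    }
```

Because every feature is computed within the current frame, no person-specific neutral reference is needed; robustness can then be probed by perturbing the landmark coordinates to simulate localization error.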
- An Intelligent Framework for Website Usability
Abstract: With the major advances of the Internet over the past few years, websites have come to play a central role in modern marketing programs. However, simply owning a website is not enough for a business to prosper on the Web. Indeed, it is the level of usability of a website that determines whether a user stays or abandons it for a competing one. It is therefore crucial to understand the importance of usability on the Web and, consequently, the need for its evaluation. Nonetheless, a number of obstacles prevent software organizations from successfully applying sound website usability evaluation strategies in practice. Automating such evaluation is therefore highly beneficial: it not only assists designers in creating more usable websites but also enhances Internet users’ experience on the Web and increases their level of satisfaction. To address this problem, an Intelligent Usability Evaluation (IUE) tool is proposed that automates the usability evaluation process by employing the Heuristic Evaluation technique in an intelligent manner through the adoption of several research-based AI methods. Experimental results show a high correlation between the tool and human annotators when identifying the considered usability violations.
PubDate: Mon, 14 Apr 2014 09:18:06 +0000
- Interaction Tasks and Controls for Public Display Applications
Abstract: Public displays are becoming increasingly interactive and a broad range of interaction mechanisms can now be used to create multiple forms of interaction. However, the lack of interaction abstractions forces each developer to create specific approaches for dealing with interaction, preventing users from building consistent expectations on how to interact across different display systems. There is a clear analogy with the early days of the graphical user interface, when a similar problem was addressed with the emergence of high-level interaction abstractions that provided consistent interaction experiences to users and shielded developers from low-level details. This work takes a first step in that same direction by uncovering interaction abstractions that may lead to the emergence of interaction controls for applications in public displays. We identify a new set of interaction tasks focused on the specificities of public displays; we characterise interaction controls that may enable those interaction tasks to be integrated into applications; we create a mapping between the high-level abstractions provided by the interaction tasks and the concrete interaction mechanisms that can be implemented by those displays. Together, these contributions constitute a step towards the emergence of programming toolkits with widgets that developers could incorporate into their public display applications.
PubDate: Thu, 10 Apr 2014 11:10:59 +0000
- A Hierarchical Probabilistic Framework for Recognizing Learners’
Interaction Experience Trends and Emotions
Abstract: We seek to model the users’ experience within an interactive learning environment. More precisely, we are interested in assessing the relationship between learners’ emotional reactions and three trends in the interaction experience, namely, flow: the optimal interaction (a perfect immersion within the task), stuck: the nonoptimal interaction (a difficulty to maintain focused attention), and off-task: the noninteraction (a dropout from the task). We propose a hierarchical probabilistic framework using a dynamic Bayesian network to model this relationship and to simultaneously recognize the probability of experiencing each trend as well as the emotional responses occurring subsequently. The framework combines three modality diagnostic variables that sense the learner’s experience including physiology, behavior, and performance, predictive variables that represent the current context and the learner’s profile, and a dynamic structure that tracks the evolution of the learner’s experience. An experimental study, with a specifically designed protocol for eliciting the targeted experiences, was conducted to validate our approach. Results revealed that multiple concurrent emotions can be associated with the experiences of flow, stuck, and off-task and that the same trend can be expressed differently from one individual to another. The evaluation of the framework showed promising results in predicting learners’ experience trends and emotional responses.
PubDate: Thu, 10 Apr 2014 08:04:46 +0000
- Pointing Devices for Wearable Computers
Abstract: We present a survey of pointing devices for wearable computers, which are body-mounted devices that users can access at any time. Since traditional pointing devices (i.e., mouse, touchpad, and trackpoint) were designed to be used on a steady and flat surface, they are inappropriate for wearable computers. Just as the advent of laptops resulted in the development of the touchpad and trackpoint, the emergence of wearable computers is leading to the development of pointing devices designed for them. However, unlike laptops, since wearable computers are operated from different body positions under different environmental conditions for different uses, researchers have developed a variety of innovative pointing devices for wearable computers characterized by their sensing mechanism, control mechanism, and form factor. We survey a representative set of pointing devices for wearable computers using an “adaptation of traditional devices” versus “new devices” dichotomy and study devices according to their control and sensing mechanisms and form factor. The objective of this paper is to showcase a variety of pointing devices developed for wearable computers and bring structure to the design space for wearable pointing devices. We conclude that a de facto pointing device for wearable computers, unlike laptops, is not likely to emerge.
PubDate: Mon, 24 Mar 2014 11:01:08 +0000
- Users Behavior in Location-Aware Services: Digital Natives versus Digital Immigrants
Abstract: Location-aware services may expose users to privacy risks, as they usually attach the user’s location to the generated contents. Different studies have focused on privacy in location-aware services, but the results are often conflicting. Our hypothesis is that users are not fully aware of the features of the location-aware scenario and that this lack of knowledge affects the results. Hence, in this paper we present a different approach: the analysis is conducted on two different groups of users (digital natives and digital immigrants) and is divided into two steps: (i) understanding users’ knowledge of a location-aware scenario and (ii) investigating users’ opinion toward location-aware services after showing them an example of an effective location-aware service able to extract personal and sensitive information from contents publicly available in social media platforms. The analysis reveals that there is a relation between users’ knowledge and users’ concerns toward privacy in location-aware services and that digital natives are more interested in the location-aware scenario than digital immigrants. The analysis also discloses that users’ concerns toward these services may be alleviated if the services ask for users’ authorization and provide benefits to users. Other interesting findings allow us to draw guidelines that might be helpful in developing effective location-aware services.
PubDate: Wed, 19 Mar 2014 12:48:55 +000
- User-Centric Design for Mathematical Web Services
Abstract: A web service is programmatically available application logic exposed over the Internet, and web services have attracted much attention in recent years with the rapid development of e-commerce. Very few web services exist in the field of mathematics. The aim of this paper is to seamlessly provide user-centric mathematical web services to the service requester. In particular, this paper focuses on mathematical web services for propositional logic and set theory, which come under discrete mathematics. A sophisticated user interface with a virtual keyboard is created for accessing the web services. Experimental results show that the web services and the created user interface are efficient and practical.
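As a rough illustration of the kind of computation such a propositional-logic service might expose, the sketch below builds a truth table for a formula. The function and variable names are invented for illustration and are not the paper’s actual service API.

```python
from itertools import product

def truth_table(variables, formula):
    """Evaluate `formula` (a function of boolean args) over all assignments."""
    rows = []
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))       # one truth assignment
        rows.append((env, formula(**env)))       # formula value under it
    return rows

# The implication p -> q, expressed as (not p) or q: it is false
# only in the row where p is true and q is false.
table = truth_table(["p", "q"], lambda p, q: (not p) or q)
```

A service front end would parse a user-entered formula into such a callable before tabulating it; the enumeration itself is the same.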
PubDate: Thu, 06 Mar 2014 13:11:32 +000
- Designing of a Personality Based Emotional Decision Model for Generating
Various Emotional Behavior of Social Robots
Abstract: All humans feel emotions, but individuals express their emotions differently because each has a different personality. We design an emotional decision model that focuses on the personality of the individual. The personality-based emotional decision model consists of four linear dynamic systems, viz. a reactive dynamic system, an internal dynamic system, an emotional dynamic system, and a behavior dynamic system. Each dynamic system calculates output values that reflect the personality, which is encoded in its system, input, and output matrices. These responses are combined into the final emotional behavior through the behavior dynamic system, as in humans. The final emotional behavior includes multiple emotional values, so the social robot can show various emotional expressions. We perform experiments using a cyber robot system to verify that the personality-based emotional decision model generates various emotions according to the personality.
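A minimal sketch of one such linear dynamic system, assuming the standard state-space form x′ = Ax + Bu, y = Cx′; all matrix values and names here are invented for illustration, not taken from the paper.

```python
import numpy as np

def step(A, B, C, x, u):
    """One update of a linear dynamic system: returns (next state, output)."""
    x_next = A @ x + B @ u      # personality-shaped state evolution
    y = C @ x_next              # emotional output read from the state
    return x_next, y

# An illustrative "calm" personality: emotional state decays toward neutral.
A = np.array([[0.5, 0.0],
              [0.0, 0.5]])     # system matrix (state transition)
B = np.eye(2)                  # input matrix (external stimulus coupling)
C = np.eye(2)                  # output matrix (expressed emotion)

x = np.zeros(2)                        # internal emotional state
stimulus = np.array([1.0, 0.0])        # e.g., a positive event
x, y = step(A, B, C, x, stimulus)
```

Different personalities would plug in different A, B, and C matrices, so the same stimulus yields different emotional trajectories.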
PubDate: Sun, 05 Jan 2014 11:55:30 +000
- Effects of a Social Robot's Autonomy and Group Orientation on Human Decision-Making
Abstract: Social attributes of intelligent robots are important for human-robot systems. This paper investigates influences of robot autonomy (i.e., high versus low) and group orientation (i.e., ingroup versus outgroup) on a human decision-making process. We conducted a laboratory experiment with 48 college students and tested the hypotheses with MANCOVA. We find that a robot with high autonomy has greater influence on human decisions than a robot with low autonomy. No significant effect is found on group orientation or on the interaction between group orientation and autonomy level. The results provide implications for social robot design.
PubDate: Thu, 19 Dec 2013 11:09:47 +000
- Blind Sailors’ Spatial Representation Using an On-Board Force
Feedback Arm: Two Case Studies
Abstract: Using a vocal, auditory, and haptic application designed for maritime navigation, blind sailors are able to set up and manage their voyages. However, how best to present information remains a crucial issue for better understanding spatial cognition and improving navigation without vision. In this study, we asked two participants to use SeaTouch on board and to manage the ship's heading during navigation in order to follow a predefined itinerary. Two conditions were tested. In the first, blind sailors consulted the updated ship position on the virtual map presented in an allocentric frame of reference (i.e., facing north). In the second, they used the force feedback device in an egocentric frame of reference (i.e., facing the ship's heading). Spatial performance tended to show that the egocentric condition was better for controlling the course during displacement, whereas the allocentric condition was more efficient for building a mental representation and remembering it after the navigation task.
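The difference between the two frames of reference comes down to an angular offset: an allocentric (north-up) bearing becomes egocentric once the ship's heading is subtracted. The sketch below is illustrative only, not the SeaTouch implementation; the function name is invented.

```python
def allocentric_to_egocentric(bearing_deg, heading_deg):
    """Bearing of a waypoint relative to the ship's bow, in [-180, 180).

    bearing_deg: allocentric (north-referenced) bearing to the waypoint.
    heading_deg: the ship's current compass heading.
    """
    return (bearing_deg - heading_deg + 180.0) % 360.0 - 180.0

# A waypoint due east (090 deg) while steering 045 deg lies 45 deg to starboard.
print(allocentric_to_egocentric(90.0, 45.0))   # 45.0
```

Negative results mean the target lies to port, positive to starboard, which maps naturally onto a haptic cue delivered relative to the sailor's body.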
PubDate: Thu, 05 Dec 2013 18:07:58 +000
- Computer Breakdown as a Stress Factor during Task Completion under Time
Pressure: Identifying Gender Differences Based on Skin Conductance
Abstract: In today’s society, as computers, the Internet, and mobile phones pervade almost every corner of life, the impact of Information and Communication Technologies (ICT) on humans is dramatic. The use of ICT, however, may also have a negative side. Human interaction with technology may lead to notable stress perceptions, a phenomenon referred to as technostress. An investigation of the literature reveals that computer users’ gender has largely been ignored in technostress research, treating users as “gender-neutral.” To close this significant research gap, we conducted a laboratory experiment in which we investigated users’ physiological reactions to the malfunctioning of technology. Based on theories which hold that men, in contrast to women, are more sensitive to “achievement stress,” we predicted that male users would exhibit higher stress levels than women when a system breakdown occurred during a human-computer interaction task under time pressure, compared to a breakdown situation without time pressure. Using skin conductance as a stress indicator, the hypothesis was confirmed. Thus, this study shows that user gender is crucial to better understanding the influence of stress factors such as computer malfunctions on physiological stress reactions.
PubDate: Wed, 23 Oct 2013 08:04:08 +000
- Enhanced Cognitive Walkthrough: Development of the Cognitive Walkthrough
Method to Better Predict, Identify, and Present Usability Problems
Abstract: To avoid use errors when handling medical equipment, it is important to develop products with a high degree of usability. This can be achieved by performing usability evaluations in the product development process to detect and mitigate potential usability problems. A commonly used method is the cognitive walkthrough (CW), but this method shows three weaknesses: a poor high-level perspective, insufficient categorisation of detected usability problems, and difficulty in overviewing the analytical results. This paper presents a further development of CW with the aim of overcoming its weaknesses. The new method is called the enhanced cognitive walkthrough (ECW). ECW is a proactive analytical method for the analysis of potential usability problems. The ECW method has been employed to evaluate user interface designs of medical equipment such as home-care ventilators, infusion pumps, dialysis machines, and insulin pumps. The method has proved capable of identifying several potential usability problems in these designs.
PubDate: Wed, 09 Oct 2013 13:23:00 +000