Advances in Human-Computer Interaction
Open Access journal
ISSN (Print) 1687-5893 - ISSN (Online) 1687-5907
Published by Hindawi Publishing Corporation [358 journals] [SJR: 0.343] [H-I: 5]
- The Role of Verbal and Nonverbal Communication in a Two-Person,
Cooperative Manipulation Task
Abstract: Motivated by the differences between human and robot teams, we investigated the role of verbal communication between human teammates as they work together to move a large object to a series of target locations. Only one member of the group was told the target sequence by the experimenters, while the second teammate had no target knowledge. The two experimental conditions we compared were haptic-verbal (teammates are allowed to talk) and haptic-only (no talking allowed). The team’s trajectory was recorded and evaluated. In addition, participants completed a NASA TLX-style postexperimental survey, which gauges workload along six dimensions. In our initial experiment we found no significant difference in performance when verbal communication was added. In a follow-up experiment, using a different manipulation task, we did find that the addition of verbal communication significantly improved performance and reduced the perceived workload. In both experiments, for the haptic-only condition, we found that a remarkable number of groups independently improvised common haptic communication protocols (CHIPs). We speculate that such protocols can be substituted for verbal communication and that the performance difference between verbal and nonverbal communication may be related to how easily the CHIPs can be distinguished from the motions required for task completion.
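The NASA TLX-style survey mentioned above combines ratings along six workload dimensions into a single index. A minimal sketch of that computation, assuming the standard TLX dimension names; the example ratings and the option of per-dimension weights are invented for illustration:

```python
# Standard NASA TLX dimensions; ratings are on a 0-100 scale.
TLX_DIMENSIONS = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def tlx_score(ratings, weights=None):
    """Weighted TLX workload score; with no weights this is the unweighted 'raw TLX'."""
    if weights is None:
        weights = {d: 1 for d in TLX_DIMENSIONS}
    total_weight = sum(weights[d] for d in TLX_DIMENSIONS)
    return sum(ratings[d] * weights[d] for d in TLX_DIMENSIONS) / total_weight

# Invented example ratings for one participant.
ratings = {"mental": 70, "physical": 40, "temporal": 55,
           "performance": 30, "effort": 60, "frustration": 45}
print(tlx_score(ratings))  # raw TLX: the mean of the six ratings -> 50.0
```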
PubDate: Thu, 07 Aug 2014 06:56:36 +000
- A Proactive Approach of Robotic Framework for Making Eye Contact with a Human
Abstract: Making eye contact is one of the most important prerequisites for humans to initiate a conversation with others. However, it is not an easy task for a robot to make eye contact with a human if the two are not facing each other initially or if the human is intensely engaged in a task. If the robot would like to start communicating with a particular person, it should turn its gaze to that person and make eye contact with him/her. However, such a turning action alone is not enough to establish eye contact in all cases. Therefore, the robot should perform stronger actions in some situations so that it can attract the target person before meeting his/her gaze. In this paper, we propose a conceptual model of eye contact for social robots consisting of two phases: capturing attention and ensuring that attention has been captured. Evaluation experiments with human participants reveal the effectiveness of the proposed model in four viewing situations, namely, central field of view, near peripheral field of view, far peripheral field of view, and out of field of view.
PubDate: Wed, 23 Jul 2014 07:26:22 +000
- A Large-Scale Quantitative Survey of the German Geocaching Community in 2007
Abstract: We present a large-scale quantitative contextual survey of the geocaching community in Germany, one of the world’s largest geocaching communities. We investigate the features, attitudes, interests, and motivations that characterise German geocachers. Two anonymous surveys were carried out on this issue in 2007: a large-scale quantitative general study based on web questionnaires and a more targeted study, which aimed at a comprehensive set of the geocaches hidden in a certain region. The sample sizes of the two studies (study 1: general study; study 2: regional study) provide a representative basis to ground previous qualitative research in this domain. In addition, we investigated the geocachers’ use of technology in combination with traditional paper-based media. This knowledge can be used to reflect on past and future trends within the geocaching community.
PubDate: Thu, 26 Jun 2014 06:59:42 +000
- Using Noninvasive Brain Measurement to Explore the Psychological Effects
of Computer Malfunctions on Users during Human-Computer Interactions
Abstract: In today’s technologically driven world, there is a need to better understand the ways that common computer malfunctions affect computer users. These malfunctions may have measurable influences on computer users’ cognitive, emotional, and behavioral responses. An experiment was conducted in which participants performed a series of web search tasks while wearing functional near-infrared spectroscopy (fNIRS) and galvanic skin response sensors. Two computer malfunctions were introduced during the sessions, each with the potential to influence correlates of user trust and suspicion. Surveys were given after each session to measure users’ perceived emotional state, cognitive load, and perceived trust. Results suggest that fNIRS can be used to measure the different cognitive and emotional responses associated with computer malfunctions. These cognitive and emotional changes were correlated with users’ self-reported levels of suspicion and trust, and they in turn suggest future work that further explores the capability of fNIRS for measuring user experience during human-computer interactions.
PubDate: Wed, 30 Apr 2014 09:05:13 +000
- Frame-Based Facial Expression Recognition Using Geometrical Features
Abstract: To make human-computer interaction (HCI) as good as human-human interaction, an efficient approach to human emotion recognition is required. These emotions could be inferred by fusing several modalities, such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address the frame-based recognition of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness) with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; such knowledge is gained through human intervention and is not available in real scenarios. Additionally, we provide a method to investigate the performance of geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we found that using eight facial points we can achieve the state-of-the-art recognition rate, whereas the existing state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by errors in facial point localization, especially for expressions with subtle facial deformations.
PubDate: Wed, 16 Apr 2014 08:05:05 +000
- An Intelligent Framework for Website Usability
Abstract: With the major advances of the Internet over the past few years, websites have come to play a central role in modern marketing programs. However, simply owning a website is not enough for a business to prosper on the Web. Indeed, it is the level of usability of a website that determines whether a user stays or abandons it for a competing one. It is therefore crucial to understand the importance of usability on the Web and, consequently, the need for its evaluation. Nonetheless, a number of obstacles prevent software organizations from successfully applying sound website usability evaluation strategies in practice. Automating the evaluation process is therefore extremely beneficial: it not only assists designers in creating more usable websites but also enhances Internet users’ experience on the Web and increases their level of satisfaction. As a means of addressing this problem, an Intelligent Usability Evaluation (IUE) tool is proposed that automates the usability evaluation process by employing a heuristic evaluation technique in an intelligent manner through the adoption of several research-based AI methods. Experimental results show a high correlation between the tool and human annotators when identifying the considered usability violations.
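Automated heuristic evaluation of the kind the IUE tool performs can be illustrated, in a much simplified form, by mechanically scanning a page for rule violations. The toy checker below is not the IUE tool; the two rules (images without alt text, links with no link text) are common accessibility heuristics chosen purely for illustration.

```python
from html.parser import HTMLParser

class UsabilityChecker(HTMLParser):
    """Toy heuristic evaluator: records two mechanically checkable violations."""
    def __init__(self):
        super().__init__()
        self.violations = []
        self._in_a = False
        self._a_text = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.violations.append("img without alt text")
        if tag == "a":
            self._in_a, self._a_text = True, ""

    def handle_data(self, data):
        if self._in_a:                 # accumulate the visible link text
            self._a_text += data

    def handle_endtag(self, tag):
        if tag == "a":
            if not self._a_text.strip():
                self.violations.append("link without link text")
            self._in_a = False

checker = UsabilityChecker()
checker.feed('<a href="/x"></a><img src="logo.png"><img src="ok.png" alt="Logo">')
print(checker.violations)  # -> ['link without link text', 'img without alt text']
```

A real tool would, of course, cover far more heuristics and weigh the severity of each finding.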
PubDate: Mon, 14 Apr 2014 09:18:06 +000
- Interaction Tasks and Controls for Public Display Applications
Abstract: Public displays are becoming increasingly interactive and a broad range of interaction mechanisms can now be used to create multiple forms of interaction. However, the lack of interaction abstractions forces each developer to create specific approaches for dealing with interaction, preventing users from building consistent expectations on how to interact across different display systems. There is a clear analogy with the early days of the graphical user interface, when a similar problem was addressed with the emergence of high-level interaction abstractions that provided consistent interaction experiences to users and shielded developers from low-level details. This work takes a first step in that same direction by uncovering interaction abstractions that may lead to the emergence of interaction controls for applications in public displays. We identify a new set of interaction tasks focused on the specificities of public displays; we characterise interaction controls that may enable those interaction tasks to be integrated into applications; we create a mapping between the high-level abstractions provided by the interaction tasks and the concrete interaction mechanisms that can be implemented by those displays. Together, these contributions constitute a step towards the emergence of programming toolkits with widgets that developers could incorporate into their public display applications.
PubDate: Thu, 10 Apr 2014 11:10:59 +000
- A Hierarchical Probabilistic Framework for Recognizing Learners’
Interaction Experience Trends and Emotions
Abstract: We seek to model the users’ experience within an interactive learning environment. More precisely, we are interested in assessing the relationship between learners’ emotional reactions and three trends in the interaction experience, namely, flow: the optimal interaction (a perfect immersion within the task), stuck: the nonoptimal interaction (a difficulty in maintaining focused attention), and off-task: the noninteraction (a dropout from the task). We propose a hierarchical probabilistic framework using a dynamic Bayesian network to model this relationship and to simultaneously recognize the probability of experiencing each trend as well as the emotional responses occurring subsequently. The framework combines diagnostic variables from three modalities that sense the learner’s experience (physiology, behavior, and performance), predictive variables that represent the current context and the learner’s profile, and a dynamic structure that tracks the evolution of the learner’s experience. An experimental study, with a protocol specifically designed to elicit the targeted experiences, was conducted to validate our approach. Results revealed that multiple concurrent emotions can be associated with the experiences of flow, stuck, and off-task and that the same trend can be expressed differently from one individual to another. The evaluation of the framework showed promising results in predicting learners’ experience trends and emotional responses.
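The dynamic structure described can be illustrated with the simplest kind of dynamic Bayesian network: a hidden Markov model whose hidden state is the current trend (flow, stuck, or off-task), updated by forward filtering as evidence arrives. This is a sketch only, not the authors' framework; the observation alphabet, transition matrix, and emission probabilities are all invented.

```python
STATES = ["flow", "stuck", "off-task"]

TRANSITION = {   # P(next trend | current trend), invented values
    "flow":     {"flow": 0.8,  "stuck": 0.15, "off-task": 0.05},
    "stuck":    {"flow": 0.2,  "stuck": 0.6,  "off-task": 0.2},
    "off-task": {"flow": 0.1,  "stuck": 0.2,  "off-task": 0.7},
}
EMISSION = {     # P(observed engagement level | trend), invented values
    "flow":     {"high": 0.7,  "medium": 0.25, "low": 0.05},
    "stuck":    {"high": 0.2,  "medium": 0.5,  "low": 0.3},
    "off-task": {"high": 0.05, "medium": 0.15, "low": 0.8},
}

def forward_step(belief, observation):
    """One forward-filtering step: predict via TRANSITION, weight by EMISSION, normalize."""
    predicted = {s: sum(belief[p] * TRANSITION[p][s] for p in STATES) for s in STATES}
    weighted = {s: predicted[s] * EMISSION[s][observation] for s in STATES}
    z = sum(weighted.values())
    return {s: weighted[s] / z for s in STATES}

belief = {s: 1 / 3 for s in STATES}          # start with a uniform belief
for obs in ["high", "high", "low"]:          # a short invented evidence stream
    belief = forward_step(belief, obs)
print(belief)                                # posterior over the three trends
```

The real framework adds emotion nodes and predictive (context and profile) variables on top of this temporal backbone.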
PubDate: Thu, 10 Apr 2014 08:04:46 +000
- Pointing Devices for Wearable Computers
Abstract: We present a survey of pointing devices for wearable computers, which are body-mounted devices that users can access at any time. Since traditional pointing devices (i.e., mouse, touchpad, and trackpoint) were designed to be used on a steady and flat surface, they are inappropriate for wearable computers. Just as the advent of laptops resulted in the development of the touchpad and trackpoint, the emergence of wearable computers is leading to the development of pointing devices designed for them. However, unlike laptops, wearable computers are operated from different body positions under different environmental conditions for different uses, so researchers have developed a variety of innovative pointing devices for wearable computers characterized by their sensing mechanism, control mechanism, and form factor. We survey a representative set of pointing devices for wearable computers using an “adaptation of traditional devices” versus “new devices” dichotomy and study devices according to their control and sensing mechanisms and form factor. The objective of this paper is to showcase a variety of pointing devices developed for wearable computers and bring structure to the design space for wearable pointing devices. We conclude that a de facto pointing device for wearable computers, unlike laptops, is not likely to emerge.
PubDate: Mon, 24 Mar 2014 11:01:08 +000
- Users Behavior in Location-Aware Services: Digital Natives versus Digital Immigrants
Abstract: Location-aware services may expose users to privacy risks, as they usually attach the user’s location to the generated contents. Different studies have focused on privacy in location-aware services, but the results are often conflicting. Our hypothesis is that users are not fully aware of the features of the location-aware scenario and that this lack of knowledge affects the results. Hence, in this paper we present a different approach: the analysis is conducted on two different groups of users (digital natives and digital immigrants) and is divided into two steps: (i) understanding users’ knowledge of a location-aware scenario and (ii) investigating users’ opinions toward location-aware services after showing them an example of an effective location-aware service able to extract personal and sensitive information from contents publicly available on social media platforms. The analysis reveals that there is a relation between users’ knowledge and users’ privacy concerns in location-aware services, and that digital natives are more interested in the location-aware scenario than digital immigrants. The analysis also discloses that users’ concerns toward these services may be alleviated if the services ask for users’ authorization and provide benefits to users. Other interesting findings allow us to draw guidelines that might be helpful in developing effective location-aware services.
PubDate: Wed, 19 Mar 2014 12:48:55 +000
- User-Centric Design for Mathematical Web Services
Abstract: A web service is programmatically available application logic exposed over the Internet; web services have attracted much attention in recent years with the rapid development of e-commerce. Very few web services exist in the field of mathematics. The aim of this paper is to seamlessly provide user-centric mathematical web services to the service requester. In particular, this paper focuses on mathematical web services for propositional logic and set theory, which come under discrete mathematics. A sophisticated user interface with a virtual keyboard was created for accessing the web services. Experimental results show that the web services and the created user interface are efficient and practical.
PubDate: Thu, 06 Mar 2014 13:11:32 +000
- Designing of a Personality Based Emotional Decision Model for Generating
Various Emotional Behavior of Social Robots
Abstract: All humans feel emotions, but individuals express their emotions differently because each has a different personality. We design an emotional decision model that focuses on the personality of individuals. The personality-based emotional decision model is designed with four linear dynamic systems, viz. a reactive dynamic system, an internal dynamic system, an emotional dynamic system, and a behavior dynamic system. Each dynamic system calculates output values that reflect the personality, which is encoded in the system, input, and output matrices. These responses are reflected in the final emotional behavior through the behavior dynamic system, as in humans. The final emotional behavior includes multiple emotional values, so a social robot can show various emotional expressions. We perform experiments using a cyber robot system to verify that the personality-based emotional decision model generates various emotions according to the personality.
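A discrete-time linear dynamic system of the kind described updates a state with x_{k+1} = A x_k + B u_k and reads out behavior with y_k = C x_k, so a personality can be expressed through the entries of A, B, and C. The sketch below is illustrative only, not the authors' model: the "reactivity" trait, the matrices, and the (joy, sadness) state interpretation are all invented.

```python
def step(x, u, A, B):
    """One state update x_{k+1} = A x_k + B u_k for a 2-dimensional state and input."""
    return [sum(A[i][j] * x[j] for j in range(2)) +
            sum(B[i][j] * u[j] for j in range(2)) for i in range(2)]

def output(x, C):
    """Readout y_k = C x_k: map the internal state to expressed emotional intensities."""
    return [sum(C[i][j] * x[j] for j in range(2)) for i in range(2)]

reactivity = 0.8                             # hypothetical personality trait in [0, 1]
A = [[0.9, 0.0], [0.0, 0.9]]                 # emotional state decays toward neutral
B = [[reactivity, 0.0], [0.0, reactivity]]   # trait scales how strongly stimuli drive the state
C = [[1.0, 0.0], [0.0, 1.0]]                 # identity readout for simplicity

x = [0.0, 0.0]                               # state interpreted as (joy, sadness) levels
for _ in range(3):
    x = step(x, [1.0, 0.0], A, B)            # repeated "joy" stimulus
print(output(x, C))                          # joy accumulates; sadness stays at zero
```

A higher `reactivity` makes the same stimulus produce a stronger emotional response, which is the kind of personality-dependent variation the model is after.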
PubDate: Sun, 05 Jan 2014 11:55:30 +000
- Effects of a Social Robot's Autonomy and Group Orientation on Human Decision-Making
Abstract: Social attributes of intelligent robots are important for human-robot systems. This paper investigates influences of robot autonomy (i.e., high versus low) and group orientation (i.e., ingroup versus outgroup) on a human decision-making process. We conducted a laboratory experiment with 48 college students and tested the hypotheses with MANCOVA. We find that a robot with high autonomy has greater influence on human decisions than a robot with low autonomy. No significant effect is found on group orientation or on the interaction between group orientation and autonomy level. The results provide implications for social robot design.
PubDate: Thu, 19 Dec 2013 11:09:47 +000
- Blind Sailors’ Spatial Representation Using an On-Board Force
Feedback Arm: Two Case Studies
Abstract: Using a vocal, auditory, and haptic application designed for maritime navigation, blind sailors are able to set up and manage their voyages. However, how best to present information remains a crucial issue for better understanding spatial cognition and improving navigation without vision. In this study, we asked two participants to use SeaTouch on board and manage the ship’s heading during navigation in order to follow a predefined itinerary. Two conditions were tested. In the first, blind sailors consulted the updated ship positions on the virtual map presented in an allocentric frame of reference (i.e., facing north). In the second, they used the force-feedback device in an egocentric frame of reference (i.e., facing the ship’s heading). Spatial performance tended to show that the egocentric condition was better for controlling the course during displacement, whereas the allocentric condition was more efficient for building a mental representation and remembering it after the navigation task.
PubDate: Thu, 05 Dec 2013 18:07:58 +000
- Computer Breakdown as a Stress Factor during Task Completion under Time
Pressure: Identifying Gender Differences Based on Skin Conductance
Abstract: In today’s society, as computers, the Internet, and mobile phones pervade almost every corner of life, the impact of Information and Communication Technologies (ICT) on humans is dramatic. The use of ICT, however, may also have a negative side. Human interaction with technology may lead to notable stress perceptions, a phenomenon referred to as technostress. An investigation of the literature reveals that computer users’ gender has largely been ignored in technostress research, treating users as “gender-neutral.” To close this significant research gap, we conducted a laboratory experiment in which we investigated users’ physiological reactions to the malfunctioning of technology. Based on theories which explain that men, in contrast to women, are more sensitive to “achievement stress,” we predicted that male users would exhibit higher levels of stress than women when a system breaks down during the execution of a human-computer interaction task under time pressure, compared to a breakdown situation without time pressure. Using skin conductance as a stress indicator, the hypothesis was confirmed. Thus, this study shows that user gender is crucial to better understanding the influence of stress factors such as computer malfunctions on physiological stress reactions.
PubDate: Wed, 23 Oct 2013 08:04:08 +000
- Enhanced Cognitive Walkthrough: Development of the Cognitive Walkthrough
Method to Better Predict, Identify, and Present Usability Problems
Abstract: To avoid use errors when handling medical equipment, it is important to develop products with a high degree of usability. This can be achieved by performing usability evaluations in the product development process to detect and mitigate potential usability problems. A commonly used method is cognitive walkthrough (CW), but this method shows three weaknesses: poor high-level perspective, insufficient categorisation of detected usability problems, and difficulties in overviewing the analytical results. This paper presents a further development of CW with the aim of overcoming its weaknesses. The new method is called enhanced cognitive walkthrough (ECW). ECW is a proactive analytical method for analysis of potential usability problems. The ECW method has been employed to evaluate user interface designs of medical equipment such as home-care ventilators, infusion pumps, dialysis machines, and insulin pumps. The method has proved capable of identifying several potential use problems in designs.
PubDate: Wed, 09 Oct 2013 13:23:00 +000
- Virtual/Real Transfer in a Large-Scale Environment: Impact of Active
Navigation as a Function of the Viewpoint Displacement Effect and Recall
Abstract: The purpose of this study was to examine the effect of navigation mode (passive versus active) on the virtual/real transfer of spatial learning, according to viewpoint displacement (ground: 1.75 m versus aerial: 4 m) and as a function of the recall tasks used. We hypothesized that active navigation during learning can enhance performance when a route strategy is favored by an egocentric match between learning (ground-level viewpoint) and recall (egocentric frame-based tasks). Sixty-four subjects (32 men and 32 women) participated in the experiment. Spatial learning consisted of route learning in a virtual district (four conditions: passive/ground, passive/aerial, active/ground, or active/aerial), evaluated by three tasks: wayfinding, sketch-mapping, and picture-sorting. In the wayfinding task, subjects who were assigned the ground-level viewpoint in the virtual environment (VE) performed better than those with the aerial-level viewpoint, especially in combination with active navigation. In the sketch-mapping task, aerial-level learning in the VE resulted in better performance than the ground-level condition, while active navigation was only beneficial in the ground-level condition. The best performance in the picture-sorting task was obtained with the ground-level viewpoint, especially with active navigation. This study confirmed the expected result that the benefit of active navigation is linked with egocentric frame-based situations.
PubDate: Tue, 24 Sep 2013 10:56:06 +000
- Development of Estimating Equation of Machine Operational Skill by
Utilizing Eye Movement Measurement and Analysis of Stress and Fatigue
Abstract: Toward establishing a skill evaluation method for human support systems, the development of an equation for estimating machine operational skill is presented. Factors of eye movement, such as the frequency, velocity, and moving distance of saccades, were computed using the developed eye gaze measurement system, and eye movement features were determined from these factors. The estimating equation was derived through an outlier test (to eliminate nonstandard data) and a principal component analysis (to find dominant components). Using a cooperative carrying task (cc-task) simulator, the eye movement and operational data of machine operators were recorded, and the effectiveness of the derived estimating equation was investigated. As a result, it was confirmed that the estimating equation corresponded strongly to actual skill levels. In addition, the effects of internal conditions such as fatigue and stress on the estimating equation were analyzed using heart rate (HR) and the coefficient of variation of the R-R interval. Correlation analysis between these biosignal indexes and the estimating equation of operational skill found that the equation reflected the effects of stress and fatigue, although it could still estimate the skill level adequately.
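An estimating equation of this general shape is a linear combination of standardized eye-movement features, with weights such as those a principal component analysis would produce. The feature names, statistics, and coefficients below are invented placeholders, not the derived equation from the paper.

```python
# Invented eye-movement features and placeholder population statistics.
FEATURES = ["saccade_frequency", "saccade_velocity", "moving_distance"]
MEANS   = {"saccade_frequency": 2.0, "saccade_velocity": 150.0, "moving_distance": 80.0}
STDS    = {"saccade_frequency": 0.5, "saccade_velocity": 30.0,  "moving_distance": 20.0}
WEIGHTS = {"saccade_frequency": 0.6, "saccade_velocity": -0.5,  "moving_distance": 0.4}

def skill_estimate(sample):
    """Estimated skill = weighted sum of z-scored features (PCA-style loading vector)."""
    return sum(WEIGHTS[f] * (sample[f] - MEANS[f]) / STDS[f] for f in FEATURES)

# One invented operator's measurements.
operator = {"saccade_frequency": 3.0, "saccade_velocity": 120.0, "moving_distance": 60.0}
print(round(skill_estimate(operator), 3))  # -> 1.3
```

In the actual method, the weights come from the first principal components of the recorded feature data after outlier removal, rather than being set by hand.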
PubDate: Thu, 01 Aug 2013 13:31:13 +000
- Virtual Sectioning and Haptic Exploration of Volumetric Shapes in the
Absence of Visual Feedback
Abstract: A reduced behavior for the exploration of volumetric data, based on the virtual sectioning concept, was compared with free scanning using the StickGrip linkage-free haptic device. Profiles of the virtual surface were simulated through penholder displacements in relation to the pen tip of the stylus. One or two geometric shapes (cylinder, trapezoidal prism, ball, and torus) or their halves and a ripple surface were explored in the absence of visual feedback. In free scanning, the person physically moved the stylus. In parallel scanning, cross-sectional profiles were generated automatically, starting from the location indicated by the stylus. Analysis of the performance of 18 subjects demonstrated that the new haptic visualization and exploration technique allowed them to create accurate mental images and to recognize and identify virtual shapes. The mean error rate was about 2.5% in the free scanning mode, and 1.9% and 1.5% in the parallel scanning mode at playback velocities of 28 mm/s and 42 mm/s, respectively. All participants agreed that the haptic visualization of the 3D virtual surface presented as cross-sectional slices of the workspace was robust and easy to use. The method was developed for the visualization of spatially distributed data collected by sensors.
PubDate: Tue, 16 Jul 2013 11:50:04 +000
- Designing Interactive Applications to Support Novel Activities
Abstract: R&D in media-related technologies, including multimedia, information retrieval, computer vision, and the semantic web, is experimenting with a variety of computational tools that, if sufficiently matured, could support many novel activities that are not practiced today. Interactive technology demonstration systems, typically produced at the end of such projects, show great potential for taking advantage of technological possibilities. These demo systems, or “demonstrators,” are, even if crude or far-fetched, a significant manifestation of the technologists’ visions in transforming emerging technologies into novel usage scenarios and applications. In this paper, we reflect on the design processes and crucial design decisions made while designing some successful web-based interactive demonstrators developed by the authors. We identify methodological issues in applying today’s requirement-driven usability engineering methods to designing this type of novel application and call for a clearer distinction between designing mainstream applications and designing novel applications. More solution-oriented approaches leveraging design thinking are required, and more pragmatic evaluation criteria are needed that assess the role of the system in exploiting technological possibilities to provoke further brainstorming and discussion. Such an approach will support more efficient channelling of the technology-to-application transformation, which is becoming increasingly crucial in today’s context of rich technological possibilities.
PubDate: Sat, 15 Jun 2013 14:36:54 +000
- Using Brain Waves to Control Computers and Machines
PubDate: Tue, 11 Jun 2013 12:19:38 +000
- Controlling Assistive Machines in Paralysis Using Brain Waves and Other Biosignals
Abstract: The extent to which humans can interact with machines has been significantly enhanced through the inclusion of speech, gestures, and eye movements. However, these communication channels depend on a functional motor system. As many people suffer from severe damage to the motor system, resulting in paralysis and an inability to communicate, the development of brain-machine interfaces (BMI) that translate electric or metabolic brain activity into control signals for external devices promises to overcome this dependence. People with complete paralysis can learn to use their brain waves to control prosthetic devices or exoskeletons. However, the information transfer rates of currently available noninvasive BMI systems are still very limited and do not allow versatile control of and interaction with assistive machines. Thus, using brain waves in combination with other biosignals might significantly enhance the ability of people with a compromised motor system to interact with assistive machines. Here, we give an overview of the current state of assistive, noninvasive BMI research and propose integrating brain waves and other biosignals for improved control and applicability of assistive machines in paralysis. Besides introducing an example of such a system, potential future developments are discussed.
PubDate: Tue, 28 May 2013 17:45:27 +000
- Towards Brain-Computer Interface Control of a 6-Degree-of-Freedom Robotic
Arm Using Dry EEG Electrodes
Abstract: Introduction. Development of a robotic arm that can be operated using an exoskeletal position sensing harness as well as a dry electrode brain-computer interface headset. Design priorities comprise an intuitive and immersive user interface, fast and smooth movement, portability, and cost minimization. Materials and Methods. A robotic arm prototype capable of moving along 6 degrees of freedom has been developed, along with an exoskeletal position sensing harness which was used to control it. Commercially available dry electrode BCI headsets were evaluated. A particular headset model has been selected and is currently being integrated into the hybrid system. Results and Discussion. The combined arm-harness system has been successfully tested and met its design targets for speed, smooth movement, and immersive control. Initial tests verify that an operator using the system can perform pick and place tasks following a rather short learning curve. Further evaluation experiments are planned for the integrated BCI-harness hybrid setup. Conclusions. It is possible to design a portable robotic arm interface comparable in size, dexterity, speed, and fluidity to the human arm at relatively low cost. The combined system achieved its design goals for intuitive and immersive robotic control and is currently being further developed into a hybrid BCI system for comparative experiments.
PubDate: Tue, 07 May 2013 17:37:51 +000
- A Comparison of Field-Based and Lab-Based Experiments to Evaluate User
Experience of Personalised Mobile Devices
Abstract: There is a growing debate in the literature regarding the tradeoffs between lab and field evaluation of mobile devices. This paper presents a comparison of field-based and lab-based experiments to evaluate the user experience of personalised mobile devices at large sports events. A lab experiment is recommended when the testing focus is on the user interface and application-oriented usability issues. However, the results suggest that a field experiment is more suitable for investigating a wider range of factors affecting the overall acceptability of the designed mobile service, including system functions and the effects of the actual usage context. Where open and relaxed communication is important (e.g., where participant groups are naturally reticent to communicate), this is more readily promoted by a field study.
PubDate: Thu, 11 Apr 2013 14:11:21 +000
- A Review of Mobile Robotic Telepresence
Abstract: Mobile robotic telepresence (MRP) systems incorporate video conferencing equipment onto mobile robot devices which can be steered from remote locations. These systems, which are primarily used to promote social interaction between people, are becoming increasingly popular within application domains such as health care, independent living for the elderly, and office environments. In this paper, an overview of the various systems, application areas, and challenges found in the literature concerning mobile robotic telepresence is provided. The survey also proposes a set of terms for the field, as there is currently a lack of standard terminology for the different concepts related to MRP systems. Further, this paper provides an outlook on the various research directions for developing and enhancing mobile robotic telepresence systems per se, as well as for evaluating the interaction in laboratory and field settings. Finally, the survey outlines a number of design implications for the future of mobile robotic telepresence systems for social interaction.
PubDate: Sun, 07 Apr 2013 16:18:49 +000
- Text Entry by Gazing and Smiling
Abstract: Face Interface is a wearable prototype that combines voluntary gaze direction and facial activations for pointing at and selecting objects on a computer screen, respectively. The aim was to investigate the functionality of the prototype for entering text. First, three on-screen keyboard layout designs were developed and tested to find a layout more suitable for text entry with the prototype than the traditional QWERTY layout. The task was to enter one word ten times with each of the layouts by pointing at letters with gaze and selecting them by smiling. Subjective ratings showed that a layout with large keys on the edge and small keys near the center of the keyboard was rated as the most enjoyable, clearest, and most functional. Second, using this layout, the aim of the second experiment was to compare entering text with Face Interface to entering text with a mouse. The results showed that the text entry rate was 20 characters per minute (cpm) for Face Interface and 27 cpm for the mouse. For Face Interface, the keystrokes-per-character (KSPC) value was 1.1 and the minimum string distance (MSD) error rate was 0.12. These values compare especially well with other similar techniques.
PubDate: Thu, 04 Apr 2013 09:10:27 +000
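The measures reported in the abstract above (entry rate in cpm, KSPC, and MSD error rate) are standard text-entry evaluation metrics. A minimal sketch of how they are typically computed follows; the function names and example values are illustrative, not taken from the paper:

```python
def msd(presented: str, transcribed: str) -> int:
    """Minimum string distance: the Levenshtein edit distance between
    the presented text and what the participant actually transcribed."""
    m, n = len(presented), len(transcribed)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if presented[i - 1] == transcribed[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def entry_rate_cpm(chars_transcribed: int, seconds: float) -> float:
    """Entry rate in characters per minute."""
    return chars_transcribed / (seconds / 60.0)

def kspc(keystrokes: int, chars_transcribed: int) -> float:
    """Keystrokes per character: selections made per character produced."""
    return keystrokes / chars_transcribed

def msd_error_rate(presented: str, transcribed: str) -> float:
    """MSD error rate: edit distance normalised by the longer string."""
    return msd(presented, transcribed) / max(len(presented), len(transcribed))
```

For example, transcribing 100 characters in 5 minutes gives `entry_rate_cpm(100, 300.0) == 20.0`, matching the reported Face Interface rate, and 110 selections for those 100 characters gives a KSPC of 1.1.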
- User Assessment in Serious Games and Technology-Enhanced Learning
PubDate: Sun, 03 Mar 2013 11:06:38 +000
- Assessment in and of Serious Games: An Overview
Abstract: There is a consensus that serious games have significant potential as a tool for instruction. However, their effectiveness in terms of learning outcomes is still understudied, mainly due to the complexity involved in assessing intangible measures. A systematic approach—based on established principles and guidelines—is necessary to enhance the design of serious games, and many studies lack a rigorous assessment. An important aspect in the evaluation of serious games, as with other educational tools, is user performance assessment. This is an important area of exploration because serious games are intended to evaluate learning progress as well as outcomes. This also emphasizes the importance of providing appropriate feedback to the player. Moreover, performance assessment enables adaptivity and personalization to meet individual needs in various aspects, such as learning styles, information provision rates, feedback, and so forth. This paper first reviews related literature regarding the educational effectiveness of serious games. It then discusses how to assess the learning impact of serious games and methods for competence and skill assessment. Finally, it suggests two major directions for future research: characterization of the player’s activity and better integration of assessment in games.
PubDate: Thu, 28 Feb 2013 14:38:43 +000
- A Review of Hybrid Brain-Computer Interface Systems
Abstract: An increasing number of research activities and different types of studies in brain-computer interface (BCI) systems show the potential of this young research area. Research teams have studied features of different data acquisition techniques, brain activity patterns, feature extraction techniques, classification methods, and many other aspects of BCI systems. However, conventional BCIs have not become fully practical, due to their limited accuracy and reliability, low information transfer rates, and limited user acceptability. A new approach to creating a more reliable BCI that takes advantage of each system is to combine two or more BCI systems with different brain activity patterns or different input signal sources. This type of BCI, called a hybrid BCI, may reduce the disadvantages of each conventional BCI system. In addition, hybrid BCIs may enable more applications and possibly increase accuracy and the information transfer rate. However, the types of BCIs and their combinations should be considered carefully. In this paper, after introducing several types of BCIs and their combinations, we review and discuss hybrid BCIs, different possibilities for combining them, and their advantages and disadvantages.
PubDate: Mon, 25 Feb 2013 14:12:55 +000
- Improving Interactions between a Power-Assist Robot System and Its Human
User in Horizontal Transfer of Objects Using a Novel Adaptive Control
Abstract: Power assist systems are usually used for rehabilitation, healthcare, and similar applications. This paper emphasizes the use of power assist systems for object transfer and thus introduces a novel power-assist application. However, the interactions between such systems and their human users are usually not satisfactory because human features are not included in the control design. In this paper, we present the development of a 1-DOF power assist system for the horizontal transfer of objects. We included human features such as weight perception in the system dynamics and control. We then simulated object transfer with the system using MATLAB/Simulink and (i) determined the optimum maneuverability conditions for object transfer, (ii) determined psychophysical relationships between actual and perceived weights, and (iii) analyzed load forces and motion features. We used these findings to design a novel adaptive control scheme to improve the interactions between the user and the system. We implemented the novel control by simulating the system again with it; subject evaluations showed that the novel control reduced excessive load forces and accelerations and thus improved human-system interaction in terms of maneuverability, safety, and so forth. Finally, we propose using these findings to develop power assist systems for manipulating heavy objects in industry, which may improve interactions between the systems and their users.
PubDate: Mon, 31 Dec 2012 13:02:17 +000