The paper concerns the accuracy of emotion recognition from facial expressions. As several off-the-shelf solutions are available on the market today, this study aims at a practical evaluation of selected solutions in order to give potential buyers some insight into what they might expect. Two solutions were compared: FaceReader by Noldus and Xpress Engine by QuantumLab. The evaluation revealed that recognition accuracy differs for photo and video input data; solutions should therefore be matched to the specificity of the application domain.
This paper concerns how intelligent systems should be designed to make adequate, valuable and natural affective interventions. The article proposes a process for choosing an affective intervention model for an intelligent system. The process consists of 10 activities that allow for step-by-step design of an affective feedback loop and take into account the following factors: expected and desired emotional states, characteristics of the system, and available input channels for emotion recognition. The proposed method is supported with three examples of intelligent systems that have used the process to design or describe affective interventions. Two of them are case studies of affective video games that were designed with the proposed approach and evaluated against their non-affective versions. The third example described in the article is the affective tutoring system Gerda, which exemplifies a more complex and sophisticated intervention model.
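The core of such a design is the affective feedback loop itself. The following is a minimal sketch, assuming only the general idea described above (recognize the current emotional state, compare it with the desired state, and intervene if they differ); the state names and interventions are illustrative, not taken from the paper.

```python
# Minimal affective feedback loop sketch. DESIRED_STATE and the
# intervention mapping are hypothetical examples, not the paper's model.

DESIRED_STATE = "engagement"

# Hypothetical mapping from an undesired recognized state to an intervention.
INTERVENTIONS = {
    "boredom": "increase difficulty",
    "frustration": "offer a hint",
}

def affective_feedback(recognized_state: str) -> str:
    """One pass of the loop: intervene only when the recognized state
    differs from the desired one and a matching intervention exists."""
    if recognized_state == DESIRED_STATE:
        return "no intervention"
    return INTERVENTIONS.get(recognized_state, "no intervention")

print(affective_feedback("frustration"))  # -> offer a hint
```

In a real system, the recognized state would come from the available input channels for emotion recognition, and the intervention table would follow from the design activities of the proposed process.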
In this paper the idea of affect-aware video games is presented. A brief review of automatic multimodal recognition of facial expressions and emotions is given. The first results of emotion recognition using depth data, as well as a prototype affect-aware video game, are presented.
In this paper the problem of emotion recognition using physiological signals is presented. First, the problems with acquiring physiological signals related to specific human emotions are described. It is not trivial to elicit real emotions or to choose stimuli that always, and for all people, elicit the same emotion. Different kinds of physiological signals for emotion recognition are also considered, and a set of the most helpful biosignals is chosen. An experiment is described that was performed to verify the possibility of eliciting real emotions using specially prepared multimedia presentations, as well as to find the physiological signals that are most correlated with human emotions. The experiment was useful for detecting and identifying many problems and helping to find their solutions. The results of this research can be used to create affect-aware applications, for instance video games, that will be able to react to the user's emotions.
In this paper a set of comprehensive evaluation criteria for affect-annotated databases is proposed. These criteria can be used to assess the quality of a database at the stage of its creation, as well as to evaluate and compare existing databases. The usefulness of these criteria is demonstrated on several databases selected from the affective computing domain. The databases contain different kinds of data: video or still images presenting facial expressions, speech recordings, and affect-annotated words.
Affect-aware video games can respond to a game player's emotions, which seems to make them more attractive to users. For such games it is therefore necessary to create a model of the player's emotions, in order to know which emotions the application should react to. The paper describes different models of emotions. A questionnaire and an experiment for video game players are presented, and some results of the tests are shown. Finally, a model of the game player that describes his or her emotions is proposed.
The chapter concerns the representation and modeling of emotional states for software systems that deal with human affect. A review of emotion representation models is provided, including discrete, dimensional and componential models. The paper also analyzes the emotion models used in diverse types of affect-aware applications: games, mood trackers and tutoring systems. The analysis is supported with two design cases. The study revealed which models are most intensively used in affect-aware applications and identified the main challenge of mapping between the models.
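The mapping challenge can be illustrated with a toy sketch of two of the reviewed model families, a discrete label model and a dimensional valence-arousal model, and a naive nearest-anchor mapping between them. The label set and coordinates below are illustrative assumptions, not the chapter's data.

```python
# Toy sketch of a dimensional (valence-arousal) emotion representation
# and a naive mapping to discrete labels. Anchor coordinates are
# illustrative assumptions only.
from dataclasses import dataclass
import math

@dataclass
class DimensionalState:
    valence: float  # -1.0 (negative) .. 1.0 (positive)
    arousal: float  # -1.0 (calm) .. 1.0 (excited)

# Assumed anchor points for a few discrete labels in valence-arousal space.
ANCHORS = {
    "joy": DimensionalState(0.8, 0.5),
    "anger": DimensionalState(-0.6, 0.7),
    "sadness": DimensionalState(-0.7, -0.4),
    "calm": DimensionalState(0.4, -0.6),
}

def to_discrete(state: DimensionalState) -> str:
    """Map a dimensional state to the nearest discrete label (Euclidean)."""
    return min(
        ANCHORS,
        key=lambda label: math.hypot(
            state.valence - ANCHORS[label].valence,
            state.arousal - ANCHORS[label].arousal,
        ),
    )

print(to_discrete(DimensionalState(0.7, 0.4)))  # -> joy
```

Even this toy version shows why the mapping is a challenge: the result depends entirely on where the anchors are placed, and the reverse mapping (label to coordinates) loses intensity information.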
The paper proposes a set of research scenarios to be applied in four domains: software engineering, website customization, education and gaming. The goal of applying the scenarios is to assess the possibility of using emotion recognition methods in these areas. The paper also points out the problems of defining the sets of emotions to be recognized in different applications, representing the defined emotional states, and gathering the data and training. Some of the scenarios consider possible reactions of an affect-aware system and its impact on users.
In this paper a novel application of multimodal emotion recognition algorithms in software engineering is described. Several application scenarios are proposed concerning program usability testing and software process improvement, and a set of emotional states relevant in that application area is identified. A multimodal emotion recognition method that integrates video and depth channels, physiological signals and input-device usage patterns is proposed, and some preliminary results on learning set creation are described.
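One common way to integrate such channels is decision-level (late) fusion; the following is a hedged sketch of that general technique, not the paper's method. The channel names match the modalities listed above, but the emotion labels, per-channel scores and weights are invented for illustration.

```python
# Sketch of decision-level (late) fusion: each modality produces its own
# probability distribution over emotion labels, and the distributions are
# combined by a weighted average. All numbers are illustrative.

# Per-channel probability distributions over emotion labels
# (e.g. produced by separate classifiers for each modality).
channel_scores = {
    "video":      {"frustration": 0.5, "boredom": 0.2, "engagement": 0.3},
    "depth":      {"frustration": 0.4, "boredom": 0.3, "engagement": 0.3},
    "biosignals": {"frustration": 0.6, "boredom": 0.1, "engagement": 0.3},
    "keyboard":   {"frustration": 0.3, "boredom": 0.4, "engagement": 0.3},
}

# Assumed reliability weights per channel; in practice these would be
# learned from a labeled training set such as the one described above.
weights = {"video": 0.4, "depth": 0.2, "biosignals": 0.3, "keyboard": 0.1}

def fuse(scores, weights):
    """Weighted average of per-channel distributions (late fusion)."""
    fused = {}
    for channel, dist in scores.items():
        for label, p in dist.items():
            fused[label] = fused.get(label, 0.0) + weights[channel] * p
    return fused

fused = fuse(channel_scores, weights)
print(max(fused, key=fused.get))  # -> frustration
```

Late fusion keeps each modality's classifier independent, which makes it easy to drop a channel (e.g. when a sensor is unavailable) by renormalizing the remaining weights.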
In this paper the problem of hand-drawn flowchart recognition is presented. Two approaches to this problem are described: on-line and off-line. The concept of FCE, a system for recognizing and understanding freehand-drawn on-line flowcharts on desktop computers and mobile devices, is presented. The first experiments with the FCE system and plans for future work are also described.