
3.3 Emotions in Software Engineering

Emotion Recognition in Software Engineering has been gaining traction as a profitable new sector. First defined by Rosalind Picard, Affective Computing is computing that influences, relates to, or arises from emotions. It is a multidisciplinary research field bridging computer science, psychology and cognitive science (Bota et al., 2019). This research field “investigates how technology enables human affect recognition, how emotions and sentiment can be simulated to embed emotional intelligence in interfaces, and how software systems can be designed to support emotion” (Novielli & Serebrenik, 2019). Affective computing exists in several manifestations: computer vision based applications like facial expression recognition, natural language processing applications like text sentiment analysis, audio analysis such as tone of voice recognition, and psychophysiological sensing.

In 2019 the Institute of Electrical and Electronics Engineers (IEEE) published a special issue of IEEE Software on Sentiment and Emotion in Software Engineering in response to the growing interest in the subject within the field. A range of software engineering companies have started to consider emotions in their work. Some, such as Microsoft, apply emotion analysis to internal processes like the hiring and support of neurodiverse developers. Others use emotions to enhance productivity through emotional awareness (Novielli & Serebrenik, 2019).

Physical manifestations of emotions such as facial expressions, while easy to collect, “present low reliability since they depend on the user's social environment, cultural background (if he is alone or in a group setting), their personality, mood, and can be easily faked, becoming compromised” (Bota et al., 2019, p. 140991). These constraints are less applicable to physiological indicators of emotional state. Heart rate and perspiration, among others, are not easily falsified and offer a more authentic assessment of a subject’s emotional state (Shu et al., cited by Bota et al., 2019). These physiological signals are almost always collected in lab settings in response to a media stimulus, and machine learning is applied to aid in the signal processing and classification of the biosignals. Fig. 34 shows this typical workflow.
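To make the workflow in Fig. 34 concrete, the following is a minimal sketch of such a pipeline in Python: a raw biosignal is filtered, reduced to a handful of features, and passed to a previously trained classifier. The function names, the 128 Hz sampling rate, the 1 Hz filter cutoff, the feature choices and the two emotion labels are illustrative assumptions, not the method used by any of the cited studies.

```python
# Sketch of the Fig. 34 workflow: preprocess -> extract features -> classify.
# Assumes a pre-recorded galvanic skin response (GSR) signal and a scikit-learn
# style classifier trained elsewhere; all constants here are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # assumed sampling rate in Hz


def preprocess(raw_signal):
    """Low-pass filter the raw biosignal to remove high-frequency noise."""
    b, a = butter(4, 1.0 / (FS / 2), btype="low")  # 1 Hz cutoff (assumption)
    return filtfilt(b, a, raw_signal)


def extract_features(signal):
    """Reduce a signal window to simple statistical features."""
    return np.array([signal.mean(), signal.std(), signal.min(), signal.max()])


def classify_emotion(raw_signal, trained_model, labels=("calm", "stressed")):
    """Run the full pipeline and return a predicted emotion label."""
    features = extract_features(preprocess(raw_signal)).reshape(1, -1)
    return labels[int(trained_model.predict(features)[0])]
```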

These sensors commonly include electromyography (EMG), heart rate variability, galvanic skin conductance, and electroencephalography (EEG) (Yoon & Wise, 2014). Most often the use of psychophysiological sensors is performed in a lab setting where subjects are purposefully exposed to different media to induce certain emotions. This allows researchers to control the stimulus media as well as environmental factors. The sensor data is cleaned, processed and classified using machine learning algorithms trained against existing databases such as DEAP (A Database for Emotion Analysis using Physiological Signals), which contains data from several sensor types recorded from 32 volunteers while they watched 40 one-minute-long music videos. Few studies have attempted to collect datasets outside of a lab setting in everyday circumstances (Bota et al., 2019). While data is collected in real time, it must be processed in order to classify the sensor signals into emotional labels. There exist some real-time data classification systems such as the iMotions platform, the PLUX Biosignals platform and the wearable device Empatica. However, at the time of this writing, access to these real-time platforms was prohibitively expensive, making them less accessible. The field of Affective Computing is still very young, and “scientific evidence of emotion remains one of the most challenging yet important issues in affective engineering” (Yoon & Wise, 2014, p. 222).
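The training step described above can be sketched as follows: given a table of per-trial features extracted from DEAP-style recordings and a binary valence label per trial, a classifier is fitted and cross-validated. The file name deap_features.csv and its column layout are hypothetical (the actual DEAP distribution requires registration and is stored in a different format), and the choice of a support vector machine is only one of many plausible classifiers.

```python
# Hedged sketch of training an emotion classifier on pre-extracted features.
# One row per trial; last column is an assumed binary valence label
# (1 = high valence, 0 = low valence).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

data = np.loadtxt("deap_features.csv", delimiter=",")  # hypothetical file
X, y = data[:, :-1], data[:, -1].astype(int)

# Standardize features, then classify with a support vector machine.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```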


Fig. 34 Schematic representation of a machine learning process for emotion recognition (Bota et al., 2019)

Emotion classification, whether based on biosignals, computer vision or natural language processing, requires artificial intelligence to parse whatever data is being collected. This comes with its own limitations and issues. A recent study, “Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements,” warns that current digital methods for extracting emotion from data are not always fully accurate. The study was conducted by a team of experts in neuroscience, computer science and psychological science who reviewed over 1,000 studies on facial movements and emotions and determined that the current technology is not yet reliable enough and carries a lot of bias (Baron, 2019).

Furthermore, the AI Now 2019 report warns that affect recognition using AI, which is “rapidly being commercialized for a wide range of purposes—from attempts to identify the perfect employee to assessing patient pain to tracking which students are being attentive in class… is built on markedly shaky foundations” (Crawford et al., 2019, p. 50). According to the report there is little evidence that current affect-recognition products have scientific validity (Crawford et al., 2019). Citing a study out of Berkeley, the report notes that “in order to detect emotions with accuracy and high agreement requires context beyond the face and body” (Chen & Whitney, cited by Crawford et al., 2019). However, the report also underscores the growing popularity of this field: “the emotion-detection and -recognition market was worth $12 billion in 2018, and by one enthusiastic estimate, the industry is projected to grow to over $90 billion by 2024” (Crawford et al., 2019, p. 50).

Despite its current limitations, it is clear that Affective Computing is an inevitable development in the field of Software Engineering and product design. Though we cannot directly start implementing these processes, it is pertinent that architects start to imagine the implications of affective computing in the built environment.

Fig. 37 Empatica Embrace Wearable (Empatica, 2020)

Fig. 36 Emotiv Products (Emotiv, 2020)
