State of the Art

3.3 Emotions in Software Engineering

Emotion Recognition in Software Engineering
has been gaining traction as a profitable new sector. Affective Computing, first defined by Rosalind Picard, encompasses computing that relates to, arises from, or deliberately influences emotions. It is a multidisciplinary research field bridging computer science, psychology, and cognitive science (Bota et al., 2019). This research field “investigates how technology enables human affect recognition, how emotions and sentiment can be simulated to embed emotional intelligence in interfaces, and how software systems can be designed to support emotion” (Novielli & Serebrenik, 2019). Affective computing exists in several manifestations: computer-vision-based applications such as facial expression recognition, natural language processing applications such as text sentiment analysis, audio analysis such as tone-of-voice recognition, and psychophysiological sensing.

In 2019, the Institute of Electrical and Electronics Engineers (IEEE) published “Sentiment and Emotion in Software Engineering” as a special issue of IEEE Software in response to the growing interest in the subject. A range of software engineering companies have started to consider emotions in their work. Microsoft, for example, applies emotion analysis in internal processes such as the hiring and support of neurodiverse developers; others use emotional awareness to enhance productivity (Novielli & Serebrenik, 2019).

Physical manifestations of emotion such as facial expressions, while easy to collect, “present low reliability since they depend on the user's social environment, cultural background (if he is alone or in a group setting), their personality, mood, and can be easily faked, becoming compromised” (Bota et al., 2019, p. 140991). These constraints are far less applicable to physiological indicators of emotional state.
Heart rate and perspiration, among other signals, are not easily falsified and provide a more authentic assessment of a subject’s emotional state (Shu et al., as cited in Bota et al., 2019). These physiological signals are almost always collected in laboratory settings in response to a media stimulus, with machine learning applied to aid in the signal processing and classification of the biosignals. Figure 34 shows this typical workflow.
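The acquisition–preprocessing–feature-extraction–classification workflow described above can be illustrated with a minimal sketch. All signal values, class labels, and centroids below are synthetic assumptions for illustration only, not real physiological data or the specific pipeline of any study cited here; a nearest-centroid rule stands in for whatever classifier a given study trains.

```python
# Minimal sketch of a typical biosignal classification workflow:
# raw signal -> noise filtering -> feature extraction -> classification.
from statistics import mean, variance

def moving_average(signal, window=3):
    """Simple low-pass filter to suppress high-frequency noise."""
    return [mean(signal[max(0, i - window + 1):i + 1])
            for i in range(len(signal))]

def extract_features(signal):
    """Summarise a filtered signal as (mean level, variability)."""
    return (mean(signal), variance(signal))

def nearest_centroid(features, centroids):
    """Assign the label whose centroid is closest in Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Hypothetical class centroids, as if learned from labelled lab recordings.
centroids = {"calm": (70.0, 2.0), "stressed": (95.0, 25.0)}

# Synthetic heart-rate trace (beats per minute) recorded during a stimulus.
raw_hr = [92, 98, 96, 101, 94, 99, 103, 97]
filtered = moving_average(raw_hr)
label = nearest_centroid(extract_features(filtered), centroids)
print(label)  # prints "stressed"
```

In practice, each stage is far richer (band-pass filtering, dozens of time- and frequency-domain features, trained classifiers), but the staged structure of the pipeline is the same.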