International Research Journal of Engineering and Technology (IRJET) | e-ISSN: 2395-0056 | p-ISSN: 2395-0072
Volume: 06 Issue: 05 | May 2019 | www.irjet.net
Facial Emotion Detection using Convolutional Neural Network

Rohit Jadhav 1, Jayesh Bhuke 2, Nita Patil 3

1,2 Student, Department of Computer Engineering, Datta Meghe College of Engineering, Airoli, Navi Mumbai
3 Assistant Professor, Department of Computer Engineering, Datta Meghe College of Engineering, Airoli, Navi Mumbai
Abstract - Humans can easily identify emotions using their senses, whereas computer vision seeks to imitate human vision by analysing a digital image as input. For humans, detecting an emotion is not a difficult task. Emotion can also be detected through voice, for example by measuring parameters such as tone, pitch, pace and volume to detect 'stress'; detecting emotion in digital images purely by analysing the image is a comparatively novel approach. In this paper, we design a convolutional neural network (CNN) model that classifies an input image into one of seven emotions: angry, disgust, fear, happy, sad, surprise and neutral. CNNs can efficiently and accurately extract semantic information from faces in an automated manner. We also apply data augmentation techniques in order to mitigate overfitting and underfitting. The FER-2013 dataset from the Kaggle competition is used to evaluate the designed CNN model. The results show that the model performs better when it has more images to learn from. The proposed model achieves an accuracy of 65.34%.

Key Words: Convolutional Neural Networks; overfitting; underfitting.

1. INTRODUCTION

In recent years, the interaction between humans and computers has been constantly evolving towards the goal of natural interaction. The most expressive way for humans to convey emotion is through facial expressions. Humans need little or no effort to detect and interpret faces and facial expressions in a scene. Still, developing an automated system that accomplishes this task remains quite difficult. Several related problems are involved: classifying expressions (e.g., into emotion categories), detecting image regions that contain faces, and extracting facial-expression information. A system that performs these operations accurately in the real world would be an important step towards human-like interaction between people and machines.

Human communication conveys important information not only about intent but also about desires and emotions. In particular, the importance of automatically recognizing emotions from human speech and other communication cues has grown with the increasing role of spoken-language and gesture interfaces in human-computer interaction and computer-mediated applications [8].

When machines are able to appreciate their surroundings, some sort of machine perception has been developed [1]. Humans use their senses to gain insights about their environment [5][3]. Nowadays, machines have several ways to capture their environment, and suitable algorithms allow them to generate machine perception. In recent years, the use of Deep Learning algorithms has proven very successful in this regard [6][4][2]. For instance, Jeremy Howard showed in his 2014 TEDxBrussels talk [7] how computers trained using deep learning techniques were able to achieve some remarkable tasks, including learning Chinese, recognizing objects in images and assisting in medical diagnosis. In this paper, we apply deep learning methods and techniques to facial emotion detection. Convolutional Neural Networks have been an effective and common way to solve problems such as this one. A CNN suits the task better than a Recurrent Neural Network (RNN) because a CNN learns to recognize components of an image (e.g., lines and curves), whereas an RNN learns to recognize patterns across time. We also use loss functions together with data augmentation techniques in order to train the model on all features of the input images.

We use the FER-2013 dataset from the competition held on Kaggle in 2013, which was won by the RBM team [9]. The winning accuracy was 69.7% on the public test set and 71.1% on the private test set.

The remainder of this paper is organised as follows. Section 2 concisely summarizes related work on facial emotion recognition. Section 3 illustrates the proposed CNN model. Experiments and results are presented in section 4, and the conclusion in section 5.

2. RELATED WORK

Approaches to facial emotion recognition differ mainly in whether the face is divided into separate action units or processed further as a whole. Within both approaches, two different methods can be used, namely
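For illustration, the FER-2013 CSV distributed through the Kaggle competition stores each sample as an emotion index (0-6), a string of space-separated grayscale pixel values for a 48x48 face, and a usage split. A minimal parsing sketch follows; the helper name `parse_fer2013` is our own choice for illustration, not part of any library.

```python
import csv
import io

# Emotion indices as used by FER-2013 (0..6)
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def parse_fer2013(fileobj, size=48):
    """Yield (label, image) pairs from a FER-2013-style CSV.

    Each image is returned as a size x size list of integer pixel values.
    """
    for row in csv.DictReader(fileobj):
        label = int(row["emotion"])
        pixels = [int(p) for p in row["pixels"].split()]
        # Reshape the flat pixel string into size rows of size columns
        image = [pixels[r * size:(r + 1) * size] for r in range(size)]
        yield EMOTIONS[label], image

# Tiny synthetic example with a 2x2 "image" (real FER-2013 faces are 48x48)
sample = "emotion,pixels,Usage\n3,10 20 30 40,Training\n"
label, img = next(parse_fer2013(io.StringIO(sample), size=2))
# label is "Happy" (index 3); img is [[10, 20], [30, 40]]
```

In practice the `Usage` column ("Training", "PublicTest", "PrivateTest") is used to reproduce the competition's public/private split mentioned above.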
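As noted in the introduction, the core mechanism a CNN relies on is sliding small kernels over the image so that each filter responds to local components such as lines and edges. The following framework-free sketch shows one such convolution step on toy values (not taken from the paper's model); a trained CNN learns these kernel weights rather than fixing them by hand.

```python
def conv2d_valid(image, kernel):
    """Slide kernel over image (valid mode, stride 1) and return the feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Dot product of the kernel with the image patch at (i, j)
            s = 0.0
            for a in range(kh):
                for b in range(kw):
                    s += image[i + a][j + b] * kernel[a][b]
            row.append(s)
        out.append(row)
    return out

# Toy 4x4 image: dark left half, bright right half (a vertical edge)
img = [[0, 0, 1, 1] for _ in range(4)]
# Hand-picked 2x2 vertical-edge kernel
k = [[-1, 1],
     [-1, 1]]
fm = conv2d_valid(img, k)
# Each row of fm is [0.0, 2.0, 0.0]: the filter fires only where the edge lies
```

Stacking many such learned filters, interleaved with pooling and nonlinearities, is what lets the CNN described in this paper map a 48x48 face to one of the seven emotion classes.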
© 2019, IRJET | Impact Factor value: 7.211 | ISO 9001:2008 Certified Journal | Page 1077