Facial Emotion Recognition: A Survey


The most successful example of a face detection algorithm is the cascade classifier. It was first presented by Paul Viola and Michael Jones in their 2001 paper [1], "Rapid Object Detection using a Boosted Cascade of Simple Features". This technique is still in use and has been very successful. They laid the groundwork for real-time face detection by introducing a novel image representation known as the integral image and building a boosted cascade of weak Haar-like feature classifiers.


2. Face Detection

Abstract: This paper describes different techniques in Facial Emotion Recognition. The face is a complex three-dimensional visual model, and developing a computational solution for face recognition is a complicated task. Face recognition poses a challenging problem and as such has acquired a great deal of interest over the previous few years due to its applications in diverse domains. The first step in face recognition is face detection. Face detection used with face recognition strategies has a huge variety of applications, one of which is emotion detection. In face detection we discuss the Viola-Jones algorithm, the Multi-Task Cascaded Convolutional Neural Network (MTCNN) and R-CNN. Further, in face recognition we cover four well-known holistic (linear) approaches, viz. Eigenfaces, Fisherfaces, Marginfaces and Laplacianfaces. In the final stage (emotion detection) we cover methods based on CNNs and Histograms of Oriented Gradients (HOG), such as the AlexNet CNN, the Affdex CNN and the HOG-ESRs algorithm. We also discuss image datasets such as CK+, JAFFE and FER2013.

However, there are difficulties in face detection due to issues related to pose, expression, position and orientation, skin color and pixel values, the presence of facial hair, varying lighting conditions and different image resolutions.

Given a still image or a video sequence of a scene, the task is to verify one person in the scene using a stored dataset of facial images. Available facial information such as hair, age, gender, facial expression and speech can be used to enhance the search and recognition of a person.

Keywords: Face detection, Face Recognition, Emotion detection, FER, Deep learning, Survey

Shriraj Chinchwade1, Prasad Gaikwad2, Niraj Garudkar3, Sumedh Jadhav4, Prof. Mrs. Sonal Gore5
1,2,3,4 Student (UG), Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, India.
5 Professor, Department of Computer Engineering, Pimpri Chinchwad College of Engineering, Pune, India.

Knowledge-based, feature-based, template-matching and appearance-based methods are some of the types of face detection. Feature-based algorithms have been used widely for many years.

Face detection is an object detection technique in the computer vision domain. This technology is used to find and identify human faces in digital image or video input. It requires a number of mathematical algorithms, pattern recognition and image processing to achieve the desired result. The various steps in the algorithm include looking for a candidate face area and then performing additional tests to confirm a human face. The algorithms are then trained on large datasets to ensure high precision.

2.1.1 The Viola-Jones Algorithm

A number of deep learning methods have been developed as possible solutions to the above problems.

This algorithm defines a box and searches for a face inside that box. Moving in small steps, it evaluates a large number of boxes for Haar-like features.

Facial expression is one of the most powerful, natural and universal signals for people to convey their emotional states and intentions.

FER is a process in which an algorithm detects the type of emotion that the user is expressing. These emotions can be happy, sad, neutral, etc. There are different methods to recognize which type of emotion it is; an emotion can be recognized, for example, by movements of the hands or by the displacement of the eyebrows. Emotion plays a very vital role in psychological aspects. Effective emotion recognition techniques will be very useful in Human-Computer Interaction and affective computing.

The solution to the problem needs segmentation of faces (face detection) and feature extraction from the facial area, for recognition or verification problems. The input to the system is an unknown face, and the system reports a matching identity from a database of known people.

1. Introduction

2.1 Methods of Face Detection


Emotion recognition can be applied in many areas. Emotion detection will help to improve products and services by monitoring customer behavior. In the psychological aspect, emotions play a vital role, so this technique helps to handle such cases.

Faster R-CNN [5] has achieved impressive results on various object detection benchmarks.

In recent years, researchers have applied Faster R-CNN, which is one of the state-of-the-art methods for generic object detection, and have achieved promising results.

Fast R-CNN for object detection was presented by Ross Girshick in his 2015 paper "Fast R-CNN". This paper suggested a fast region-based approach [4]. Fast R-CNN builds on previous work to efficiently classify object proposals using deep convolutional networks. Compared to previous work, Fast R-CNN [4] uses several new methods to improve training and testing speed while also increasing detection accuracy.

The Haar-like features and data from all of those boxes combined help the algorithm to detect where the face is.
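As a concrete illustration of this cascade-based sliding-window search, the sketch below uses the pretrained frontal-face Haar cascade shipped with OpenCV; the image file name and detection parameters are illustrative assumptions, not values from the surveyed work.

```python
# Minimal Viola-Jones style detection sketch using OpenCV's bundled Haar cascade.
# File names and parameters are illustrative, not taken from the surveyed work.
import cv2

# Load the pretrained frontal-face cascade distributed with opencv-python.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("group_photo.jpg")               # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)      # cascades operate on grayscale

# Scan windows at multiple scales; every hit is returned as (x, y, width, height).
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_marked.jpg", image)
```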

Fig. 2 Three-stage multi-task deep convolutional network

This approach was proposed by Kaipeng Zhang et al. in their paper 'Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks' [3].

These techniques are centered on how a computer can automatically learn the pattern of the target object. The main process focuses on the Convolutional Neural Network (CNN) and the object proposal mechanism.

They proposed a new framework with three stages of face detection and simultaneous landmark localization. In the first stage, it quickly produces candidate windows through a shallow CNN. After that, the second network refines the windows, rejecting a large number of non-face windows through a more complex CNN. Lastly, it uses an even more powerful CNN to refine the results and output the facial landmark positions.
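A minimal sketch of how such a three-stage detector can be used in practice is given below, based on the open-source `mtcnn` Python package; the package API and the image file name are assumptions and are not part of the original paper [3].

```python
# Sketch: three-stage MTCNN face detection via the open-source `mtcnn` package
# (pip install mtcnn). The package API is an assumption, not from the paper [3].
import cv2
from mtcnn import MTCNN

detector = MTCNN()                                   # builds the P-Net, R-Net and O-Net stages
image = cv2.cvtColor(cv2.imread("portrait.jpg"), cv2.COLOR_BGR2RGB)  # hypothetical image

# Each detection carries a bounding box, a confidence score and the five
# facial landmarks produced by the final, most powerful stage.
for face in detector.detect_faces(image):
    x, y, w, h = face["box"]
    print(face["confidence"], face["keypoints"])     # eyes, nose and mouth corners
```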

Faster R-CNN for Face Detection:

Multi-Task Cascaded Convolutional Neural Network:


Faster R-CNN for Object Detection:

Faster R-CNN [5] takes less computation time than both R-CNN [2] and Fast R-CNN [4], as it uses another convolutional network, the Region Proposal Network (RPN), to produce region proposals.

This paper, published by Huaizu Jiang and Erik Learned-Miller [6], reports higher accuracy results on the FDDB and IJB-A benchmarks when Faster R-CNN is trained on large-scale WIDER face data.

Although the Viola-Jones framework is still very successful and commonly used in real-time face detection applications, it has its limitations. For example, it may not work if the face is covered with a mask or scarf, or if the face is not properly aligned the algorithm may fail to detect it. Other algorithms, like region-based convolutional neural networks (R-CNN) [2], have been developed to overcome these problems.

This technique achieves higher accuracy than the other techniques on the FDDB and WIDER FACE benchmarks for face detection.

Fig. 1 Haar-like features for the Viola-Jones algorithm

Shaoqing Ren, Kaiming He, Ross Girshick and Jian Sun, in their paper "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" [5], proposed a detection pipeline that uses an RPN as the region proposal algorithm and Fast R-CNN as the detector network.
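As an illustration of this RPN-plus-detector pipeline, the following sketch runs the pretrained Faster R-CNN model available in torchvision; the score threshold and file name are arbitrary choices, and the weight-loading argument may differ between torchvision versions.

```python
# Illustrative sketch: a pretrained Faster R-CNN (RPN + Fast R-CNN head) from
# torchvision used for generic object detection. Threshold and file name are arbitrary.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()                                         # inference mode

image = to_tensor(Image.open("street.jpg").convert("RGB"))   # hypothetical image
with torch.no_grad():
    prediction = model([image])[0]                   # dict with boxes, labels, scores

keep = prediction["scores"] > 0.8                    # arbitrary confidence threshold
print(prediction["boxes"][keep], prediction["labels"][keep])
```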

2.1.2 Deep Learning Methods

The original paper, "Rich feature hierarchies for accurate object detection and semantic segmentation" [2], describes one of the first uses of CNNs in 'R-CNN', which had higher detection performance than other popular methods at the time.

One of the most widely used deep learning methods is called the "Multi-Task Cascaded Convolutional Neural Network" (MTCNN), built from 3 stages of CNNs.

Fast R-CNN for Object Detection:

R-CNN for Object Detection:

Tests conducted on numerous subjects in different environments show that this approach has limitations.

Linear Transformation (Holistic) Approaches

A set of eigenfaces can be generated by performing a mathematical operation referred to as Principal Component Analysis (PCA) on a large set of portrait images of different human faces. This technique was developed by Sirovich and Kirby (1987). Informally, eigenfaces may be thought of as a collection of "standardized face ingredients" derived from statistical analysis of many images of human faces.

In 1997, Belhumeur introduced the Fisherface method for face recognition. When Eigenfaces were combined with Linear Discriminant Analysis (LDA), new sets of vectors were formed, known as "Fisherfaces". Fisherfaces exceeded the earlier results, but still have some drawbacks. [12]

This approach is sensitive to variations in light source, size and head orientation; nevertheless, the methodology showed excellent classification of faces (>85% success rate). A noisy image or a partially occluded face would lower recognition performance. [11]
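To make the eigenface and fisherface constructions above concrete, here is a minimal sketch using scikit-learn; the data files, image shapes and the number of retained components are placeholder assumptions rather than settings from the cited works.

```python
# Sketch: eigenfaces via PCA, and fisherfaces via LDA applied on the PCA-reduced
# space. X holds flattened face images (one row per image), y the person labels;
# file names and the component count are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X = np.load("faces_flattened.npy")        # shape (n_samples, height * width), hypothetical file
y = np.load("identities.npy")             # shape (n_samples,), hypothetical file

# Eigenfaces: principal components of the face image set.
pca = PCA(n_components=100, whiten=True).fit(X)
eigenface_features = pca.transform(X)

# Fisherfaces: LDA on the PCA space maximises between-class (person) separation.
lda = LinearDiscriminantAnalysis().fit(eigenface_features, y)
fisherface_features = lda.transform(eigenface_features)
print(eigenface_features.shape, fisherface_features.shape)
```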

3.3 Laplacianfaces

Marginface is a technique that uses the well-known Average Neighborhood Margin Maximization (ANMM) algorithm. It was developed by Fei Wang (2009). The images are mapped into a face subspace for analysis. Completely different from PCA and LDA, which effectively see only the global Euclidean structure of the face space, ANMM aims at discriminating face images of different people based on local information. Moreover, for every face image, it pulls the neighboring images of the same person towards it as close as possible, while at the same time pushing the neighboring images of different people away from it as far as possible. [14]

Fig. 3 Outline of a face recognition system

• Human-Computer Interaction (HCI)

• Person Recognition/Identification

Face recognition builds on computer vision, a discipline of AI, and uses methods from image processing, machine learning and deep learning. When a face is detected by a face detection algorithm, it is then recognized/identified by the face recognition system. Its output is whether the face is known or unknown. FR (Face Recognition) techniques can be divided into three categories: 1) local approaches, 2) holistic approaches, 3) hybrid approaches. The most promising studies in the face recognition field involve state-of-the-art techniques and approaches, viz. Marginface, Eigenface, Fisherface, Support Vector Machines (SVM), Artificial Neural Networks (ANN), Principal Component Analysis (PCA), Independent Component Analysis (ICA), Elastic Bunch Graph Matching, Gabor Wavelets, Hidden Markov Models and the 3D Morphable Model. [10]

3.1 Eigenfaces

3.4 Marginfaces

3.2 Fisherfaces

This technique uses the output of the Eigenface algorithm as a source dataset. The goal of the Laplacianface approach is to find a transformation that preserves local information, in contrast to PCA, which preserves the global structure of the image space. In other words, Eigenfaces created with Locality Preserving Projections (LPP) are Laplacianfaces. The method was developed by Xiaofei He et al. (2005). It creates a face subspace which explicitly considers the face manifold structure, has better discriminating power than PCA, and reduces the dimension of the image space. [13]


• Security and Surveillance Camera Systems

Various applications of FR:

3. Face Recognition



In recent years, facial emotion recognition (FER) has become a prevailing research topic because it can be applied in numerous areas. The present FER approaches include handcrafted-feature-based methods (HCF) and deep learning methods (DL). HCF methods rely on how well the manual feature extractor performs. The manually extracted features are also exposed to bias because they depend on the researcher's prior knowledge of the domain. In contrast, DL methods, particularly the Convolutional Neural Network (CNN), are good at image classification. The downside of DL methods is that they need extensive data to train and to perform recognition efficiently. Hence, a deep learning technique based on transfer learning of a pre-trained AlexNet architecture was proposed for FER [21].

The authors performed full-model fine-tuning of AlexNet, which was previously trained on the ImageNet dataset, using emotion datasets. The proposed model was trained and tested on two widely used facial expression datasets, namely the Extended Cohn-Kanade (CK+) dataset and the FER dataset. The proposed framework outperforms the existing state-of-the-art methods in facial emotion recognition, achieving accuracies of 99.44% and 70.52% for the CK+ dataset and the FER dataset, respectively.
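A rough sketch of this kind of AlexNet transfer learning is shown below using torchvision; the optimizer, learning rate and data pipeline are illustrative assumptions and not the exact training setup of [21].

```python
# Sketch: fine-tuning an ImageNet-pretrained AlexNet for 7 emotion classes.
# Hyperparameters and the data loader are illustrative, not the setup of [21].
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.alexnet(pretrained=True)      # ImageNet weights
model.classifier[6] = nn.Linear(4096, 7)                  # replace the final layer: 7 emotions

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_one_epoch(loader):
    # `loader` is assumed to yield batches of 224x224 RGB face crops and labels.
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```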

Commercial Affdex CNN


4.1.1 Using CNN

We classified the methods of facial emotion detection into 2 major types, i.e. using Convolutional Neural Networks (CNN) and using Histograms of Oriented Gradients (HOG).


AlexNet CNN

Fig. 4 Different Face Detection Methods

The Affdex CNN solution is a commercial solution. In the commercial space, Affectiva is a company that develops emotion measurement technology and offers it as a service for other companies to use in their products. Their core product is the AFFDEX algorithm, which is used in Affectiva's AFFDEX SDK, available for purchase on their website. It is mainly meant for market research, but it is also used in other environments, such as the automotive industry, to monitor the emotions and reactions of a driver.


In this study, we aimed to understand CNN- and HOG-based methods in order to find the most optimal and affordable method for facial emotion detection. Further, we have classified these two major types into their subtypes, which essentially are the methods widely used for facial emotion detection on different occasions. We have compared 3 deep learning approaches based on convolutional neural networks (CNN) and 2 standard approaches for classification of Histogram of Oriented Gradients (HOG) features: 1) AlexNet CNN, 2) the commercial Affdex CNN solution, 3) a custom-made FER CNN, 4) a Support Vector Machine (SVM) on HOG features, 5) a Multilayer Perceptron (MLP) artificial neural network on HOG features.

Table 1 Average and extreme false rejection and false acceptance rates [15]

3.5


4. Emotion Detection

4.1 Methods of Facial Emotion Detection

Emotion recognition has applications in varied fields like medicine (rehabilitation, therapy, counseling, etc.), e-learning, entertainment, emotion monitoring, marketing and law. Different algorithms for emotion recognition involve feature extraction and classification based on physiological signals, facial expressions and body movements. Nowadays, deep learning is a technique used in numerous computer-related applications and studies; while it is mostly applied to content-based image retrieval, there is still room for improvement in using it for computer vision applications.

From the results we can say that the four well-known techniques of linear transformations, within the context of the face verification task, are precise, with approximately 92% accuracy. [15]

Table 2 The confusion matrix for various emotion classes

Target class (rows) vs. predicted class (columns):

Target    | Angry | Disgust | Fear | Happy | Sad  | Surprise | Neutral | Accuracy (%)
Angry     | 0.58  | 0.01    | 0.11 | 0.03  | 0.14 | 0.02     | 0.11    | 58
Disgust   | 0.24  | 0.64    | 0.04 | 0.04  | 0.01 | 0.02     | 0.01    | 64
Fear      | 0.13  | 0.01    | 0.42 | 0.05  | 0.18 | 0.12     | 0.09    | 42
Happy     | 0.02  | 0.00    | 0.02 | 0.87  | 0.03 | 0.02     | 0.04    | 87
Sad       | 0.11  | 0.01    | 0.09 | 0.05  | 0.51 | 0.01     | 0.22    | 51
Surprise  | 0.02  | 0.00    | 0.13 | 0.06  | 0.02 | 0.74     | 0.03    | 74
Neutral   | 0.04  | 0.01    | 0.05 | 0.06  | 0.13 | 0.02     | 0.69    | 69

Overall Accuracy = (Number of correct predictions / Total number of predictions) × 100% = ((286 + 35 + 223 + 766 + 300 + 309 + 431) / 3589) × 100% = 65.48%
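The same overall accuracy can be reproduced directly from the per-class correct-prediction counts quoted above; the short snippet below only restates that arithmetic.

```python
# Overall accuracy from the per-class correct-prediction counts reported above.
import numpy as np

correct_per_class = np.array([286, 35, 223, 766, 300, 309, 431])   # counts from the paper
total_predictions = 3589

overall_accuracy = correct_per_class.sum() / total_predictions * 100
print(f"{overall_accuracy:.2f}%")                                   # prints 65.48%
```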

Also, AFFDEX has a high accuracy in detecting "key emotions", more precisely in the "high 90th percentile". [16]

The training dataset is a CSV file consisting of images given as pixel values (where each pixel varies from 0 to 255) along with a label indicating the emotion of the person in the image. The first 32,298 images are used to train the model. The remaining 3,589 images are used for testing. The resulting confusion matrix is shown in Table 2.
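A minimal sketch of loading such a pixel-string CSV is shown below with pandas; the file name and the column names ("emotion", "pixels") follow the public FER2013 release and are assumptions rather than details given in the paper.

```python
# Sketch: turning FER2013-style CSV rows (a space-separated pixel string plus an
# emotion label) into 48x48 grayscale arrays. Column names assume the public release.
import numpy as np
import pandas as pd

df = pd.read_csv("fer2013.csv")                      # hypothetical path to the dataset

def row_to_image(pixel_string):
    # 48*48 = 2304 space-separated values in the 0-255 range, reshaped to an image.
    return np.array(pixel_string.split(), dtype=np.uint8).reshape(48, 48)

images = np.stack([row_to_image(p) for p in df["pixels"]])
labels = df["emotion"].to_numpy()                    # integer labels for the 7 emotions
print(images.shape, labels.shape)
```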


The trained model was used to test against 3,589 images for emotion classification in real time, and its results are reported in [22].



4.1.2 Using HOG


HOG with SVM classifier:

Fig. 5 Steps in HOG with SVM

1. In a 2014 paper entitled "Facial Expression Recognition Based on the Facial Components Detection and HOG Features", an effective approach to facial emotion recognition problems was suggested. The proposed plan includes three tasks. The first task was to find the face and get the facial components. The second task used HOG to encode those parts. The final task was to train the SVM classifier. The proposed method was tested on the JAFFE database and the Extended Cohn-Kanade database. The average classification rates on the two datasets reached 94.3% and 88.7%, respectively. [24] A minimal sketch of this kind of pipeline is given below.
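The sketch uses OpenCV, scikit-image and scikit-learn for the three steps (locate the face, encode it with HOG, train an SVM); the crop size, HOG parameters and data files are illustrative assumptions, not the settings of [24].

```python
# Sketch of a HOG + SVM expression pipeline: detect and crop the face, encode the
# crop with HOG, then train an SVM. Parameters and file names are illustrative.
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_hog(gray_image):
    # Step 1: locate the face (assumes at least one detection); step 2: HOG-encode the crop.
    x, y, w, h = face_cascade.detectMultiScale(gray_image)[0]
    face = cv2.resize(gray_image[y:y + h, x:x + w], (64, 64))
    return hog(face, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Placeholder data: grayscale expression images and their emotion labels.
images = np.load("expression_images.npy")            # shape (n, H, W), hypothetical file
labels = np.load("expression_labels.npy")            # shape (n,), hypothetical file

# Step 3: train the SVM classifier on the HOG descriptors.
features = np.array([face_hog(img) for img in images])
classifier = SVC(kernel="linear").fit(features, labels)
```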



Custom Made FER CNN

HOG is a descriptor that is used in the detection of objects. It counts occurrences of gradient orientations in localized portions of an image. It is also useful for understanding the changes in facial muscles by means of edge analysis.
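The toy snippet below illustrates what the descriptor produces: one gradient-orientation histogram per small cell of the image, optionally rendered as a visualization; the cell and block sizes are arbitrary example values.

```python
# Tiny HOG illustration: gradient-orientation histograms over 8x8-pixel cells of a
# face crop; `visualize=True` also returns an image of the oriented cells.
import numpy as np
from skimage.feature import hog

face = np.random.randint(0, 256, (48, 48), dtype=np.uint8)   # stand-in for a face crop
features, hog_image = hog(face, orientations=9, pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2), visualize=True)
print(features.shape)    # one 9-bin histogram per cell, block-normalised and flattened
```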

As the AFFDEX SDK is a commercial solution, it is not easily accessible and has a price point of 25,000 USD [17] (visited on Aug. 31, 2019). Therefore, this SDK is not the ideal solution for this research project and other solutions must be considered.

HOG feature implementation process:



Custom-made FER CNN solutions are solutions that are based on FER (Facial Emotion Recognition) datasets. These solutions are basically the methods used in models which are trained on FER datasets. The facial expression detection subsystem uses the FER2013 [23] dataset, which is an open-source dataset consisting of 48x48-pixel grayscale images. This dataset has 35,887 images, with 7 emotions labeled as (1) happy, (2) neutral, (3) angry, (4) sad, (5) surprise, (6) disgust, (7) fear. In this dataset, 32,298 images are used for training and the remaining 3,589 are used for testing.
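For illustration, a small CNN of the kind such custom FER solutions train on 48x48 grayscale FER2013 images might look like the sketch below; the architecture is a generic example and not the exact model used in [22] or [23].

```python
# Illustrative small CNN for 48x48 grayscale inputs and 7 emotion classes.
# Generic example architecture; not the exact model of the cited papers.
import torch.nn as nn

class SmallFERCNN(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 6 * 6, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):            # x: (batch, 1, 48, 48) grayscale tensors
        return self.classifier(self.features(x))
```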

In Table 2, the confusion matrix for the various emotions is shown. The matrix shows the comparison of actual values with the predicted values for each emotion. The matrix shows that angry and sad are confused with a probability of 0.14. Similarly, surprise and fear are also confused, with a probability of 0.13. The diagonal values give us the accuracy of predicting the various emotions. The highest accuracy is that of happiness, with an accuracy of 87%. The least accurately detected emotion is fear, with an accuracy of 42%.

This database was collected automatically using the Google Image Search API. All images are resized to 48×48 pixels after removing incorrectly labeled frames and cropping unnecessary areas. FER2013 contains 28,709 training images, 3,589 validation images and 3,589 test images with seven labels (anger, disgust, fear, happiness, sadness, surprise and neutral). [29]


2. A similar method was proposed in a 2019 paper titled "Robust face recognition system using HOG features and SVM classifier". The authors used 500 images of 10 persons with ages ranging from 20 to 30 years. Images were captured in different lighting conditions. For training, 50 images of every person were collected, and testing was performed on 2 different images of each person; the accuracy of this experiment came to around 92%. [25]

Among all the datasets, CK+ gives the best results in many cases.

After analyzing many techniques of facial emotion detection, we came to the conclusion that the AlexNet CNN solution is the best way to do facial emotion detection compared to the others on the basis of accuracy and pricing. After studying various techniques using Histograms of Oriented Gradients with different classifiers and combinations of HOG with other methods, the HOG-ESRs algorithm provides greater accuracy with increased robustness compared to previously available methods.

[3] K. Zhang, Z. Zhang, Z. Li and Y. Qiao, "Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks," in IEEE Signal Processing Letters, vol. 23, no. 10, pp. 1499-1503, Oct. 2016, doi: 10.1109/LSP.2016.2603342.

Face detection methods have evolved over time. The Viola-Jones algorithm is one of the most popular face detection algorithms, with high accuracy. A number of deep learning approaches have been developed to overcome its problems and increase the accuracy of face detection. In our study, we found the Faster R-CNN approach to be the most accurate algorithm, with state-of-the-art results.

5.3 FER2013

5. Image Datasets

5.2 JAFFE

There are many publicly available datasets that contain the basic expressions/emotions and that are widely used in many of the reviewed papers for evaluating machine learning and deep learning algorithms.

5.1 CK+


The Extended Cohn-Kanade (CK+) dataset is a widely used laboratory-controlled dataset for testing FER systems. CK+ contains 593 video sequences from 123 subjects (different people). Sequences vary in length from 10 to 60 frames and show a change from a neutral facial expression to a peak expression. Among these videos, 327 sequences from 118 subjects are labeled with seven basic expression labels (anger, contempt, disgust, fear, happiness, sadness and surprise). [27]

References:

[2] R. Girshick, J. Donahue, T. Darrell and J. Malik, "Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation," 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580-587, doi: 10.1109/CVPR.2014.81.

6. Conclusions

We can say that the four well-known techniques of linear transformations in holistic face recognition systems are almost 92% accurate, with Fisherface giving more accurate results in most cases.

Datasets play an important role in FER systems; having sufficient labeled training data that includes several variations of populations and environments is very important for the design of emotion recognition systems.

Another efficient method was proposed in a 2021 paper titled "HOG-ESRs Face Emotion Recognition Algorithm Based on HOG Features and ESRs Method". The authors combined HOG features with ensembles with shared representations (ESRs). Combining ESRs with HOG improved the accuracy and robustness of the algorithm. The results of experiments on the FER2013 dataset showed that the new algorithm can efficiently extract features and decrease the residual generalization error. The accuracy of the HOG-ESRs method on the CK+, JAFFE and FER2013 datasets was on average around 88.3%, 87.9% and 89.3%, respectively. [26]

HOG-ESRs algorithm:

[1] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), 2001, pp. I-I, doi: 10.1109/CVPR.2001.990517.

The JAFFE (Japanese Female Facial Expression) database is a laboratory-controlled database containing 213 samples of various expressions from 10 Japanese females. Each person has 3-4 images for each of the six basic expressions (anger, disgust, fear, happiness, sadness and surprise) and one image with a neutral (normal) expression. [28]


[4] R. Girshick, "Fast R-CNN," 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1440-1448, doi: 10.1109/ICCV.2015.169.

[6] H. Jiang and E. Learned-Miller, "Face Detection with the Faster R-CNN," 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), 2017, pp. 650-657, doi: 10.1109/FG.2017.82.


[8] G. M. Zafaruddin and H. S. Fadewar, "Face recognition: A holistic approach review," 2014 International Conference on Contemporary Computing and Informatics (IC3I), 2014, pp. 175-178, doi: 10.1109/IC3I.2014.7019610.

[21] Sarmela A P Raja Sekaran, Chin Poo Lee, Kian Ming Lim: Facial Emotion Recognition Using Transfer Learning of AlexNet, IEEE, 2021.


[27] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar and I. Matthews, "The Extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression," in Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on, IEEE, 2010, pp. 94-101.


[5] S. Ren, K. He, R. Girshick and J. Sun, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149, 1 June 2017, doi: 10.1109/TPAMI.2016.2577031.

[11] M. A. Turk and A. P. Pentland, "Face recognition using eigenfaces," Proceedings, 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1991, pp. 586-591, doi: 10.1109/CVPR.1991.139758.


[7] Pandya, Jigar M., Devang Rathod and Jigna J. Jadav, "A Survey of Face Recognition Approach," International Journal of Engineering Research and Applications (IJERA) 3.1 (2013): 632-635.

[17] Affectiva. (2019). "Pricing – Affectiva Developer Portal," [Online]. Available: https://developer.affectiva.com/pricing/ (visited on Aug. 31, 2019).

[18] Marc Franzen, Michael Stephan Gresser, Tobias Müller, Prof. Dr. Sebastian Mauser: Developing emotion recognition for video conference software to support people with autism, 2020.

[19] Vedat Tümen, Ömer Faruk Söylemez, Burhan Ergen: Facial emotion recognition on a dataset using CNN, IEEE, Nov 2017.

[22] Supreet U Sugur, Sai Prasad K, Prajwal Kulkarni, Shreyas Deshpande: Emotion Recognition using Convolution Neural Network, 2019.

[23] Goodfellow I. J. et al. (2013) Challenges in Representation Learning: A Report on Three Machine Learning Contests. In: Lee M., Hirose A., Hou Z.G., Kil R.M. (eds) Neural Information Processing, ICONIP 2013, vol. 8228. Springer, Berlin, Heidelberg.

[26] Yuanchang Zhong, Lili Sun, Chenhao Ge and Huilian Fan: HOG-ESRs Face Emotion Recognition Algorithm Based on HOG Feature and ESRs Method, MDPI, 2021.


[13] He, Xiaofei & Yan, Shuicheng & Hu, Yuxiao & Niyogi, Partha. (2005). Face Recognition Using Laplacianfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27, 328-340, doi: 10.1109/TPAMI.2005.55.

[14] Fei Wang, Xin Wang, Daoqiang Zhang, Changshui Zhang, Tao Li. marginFace: A novel face recognition method by average neighborhood margin maximization. Pattern Recognition. 2009 Nov 1; 42(11): 2863-75.

[15] Smiatacz, Maciej. (2013). Eigenfaces, Fisherfaces, Laplacianfaces, Marginfaces – How to Face the Face Verification Task. Advances in Intelligent Systems and Computing, 226, doi: 10.1007/978-3-319-00969-8_18.


[20] Aneta Kartali, Miloš Roglić, Marko Barjaktarović, Milica Đurić-Jovičić, Milica M. Janković: Real-time Algorithms for Facial Emotion Recognition: A Comparison of Different Approaches, IEEE, 2018.

[12] Mustamin Anggo, La Arapu. (2018) Face Recognition Using Fisherface Method. IOP Conf. Series: Journal of Physics: Conf. Series 1028 (2018) 012119, https://doi.org/10.1088/1742-6596/1028/1/012119.

[10] Kortli Y, Jridi M, Falou AA, Atri M. Face Recognition Systems: A Survey. Sensors (Basel). 2020 Jan 7; 20(2): 342. doi: 10.3390/s20020342. PMID: 31936089; PMCID: PMC7013584.

[24] Junkai Chen, Zenghai Chen, Zheru Chi and Hong Fu: Facial Expression Recognition Based on Facial Components Detection and HOG Features, International Workshops on Electrical and Computer Engineering, 2014.

[25] Amrendra Pratap Singh, Ankit Kumar: Robust face recognition system using HOG features and SVM classifier, IJISA, 2019.

[9] Sasan Karamizadeh, Shahidan M. Abdullah and Mazdak Zamani. An Overview of Holistic Face Recognition. International Journal of Research in Computer and Communication Technology, Vol 2, Issue 9, September 2013.

[16] Affectiva. (2019). "Determining Accuracy," [Online]. Available: https://developer.affectiva.com/determiningaccuracy/ (visited on Sep. 23, 2019).


[28] M. Lyons, S. Akamatsu, M. Kamachi and J. Gyoba, "Coding facial expressions with Gabor wavelets," in Automatic Face and Gesture Recognition, 1998, Proceedings, Third IEEE International Conference on, IEEE, 1998, pp. 200-205.

[29] I. J. Goodfellow, D. Erhan, P. L. Carrier, A. Courville, M. Mirza, B. Hamner, W. Cukierski, Y. Tang, D. Thaler, D.-H. Lee et al., "Challenges in representation learning: A report on three machine learning contests," in International Conference on Neural Information Processing, Springer, 2013, pp. 117-124.
