International Research Journal of Engineering and Technology (IRJET)

Volume: 09 Issue: 05 | May 2022 www.irjet.net
e ISSN: 2395 0056
p ISSN: 2395 0072
1Department of ECE, Vidya Vardhaka College of Engineering, Mysuru, India
2Professor, Dept. of ECE, Vidya Vardhaka College of Engineering, Mysuru, India
***
Abstract - Video tracking and processing have become increasingly important in modern technologies. The data they produce can be utilized for a variety of research projects or to represent a specific outcome on a given system, and a variety of methods are available for processing and manipulating it. This paper presents a painting application built with the OpenCV package, the MediaPipe library, the Python programming language, and a CNN model, which is a machine-learning tool. The application uses OpenCV to process real-time webcam data: it follows the fingertip and lets the user write by moving the hand, which makes drawing or writing simple, while the CNN model makes gesture detection easier.
Key Words: OpenCV, Python, MediaPipe, CNN
Nowadays, human-machine interaction is mainly done through the mouse, keyboard, remote control, touch screen and other means of direct contact, while human-to-human communication relies on more natural and intuitive contactless means such as sound and physical movement. Communicating by natural, intuitive, contactless means is generally considered flexible and effective; therefore, many researchers have tried to make machines identify intentions and other information through contactless channels like those humans use, such as sounds, facial expressions, body movements, and gestures. Among these, gestures are one of the most important parts of human language and play a very important role in human communication. They are considered the easiest means of communication between humans and computers. Gesture recognition has many applications, including sign language recognition, robotics, and more. Gesture-recognition methods can be broadly classified into two categories based on the devices used to record gestures: wearable-sensor-based methods and optical-camera-based methods. The devices used in wearable sensing methods affect the naturalness of user interactions, and they are also expensive. In optical-camera-based methods, an optical camera records a set of images to capture gesture movements at a distance, and gestures are recognized by analyzing the visual information extracted from the captured images.
Hence, they are also known as computer-vision-based methods. Human-Computer Interaction (HCI) technology is widely used, and OpenCV itself has played a role in the evolution of computer vision by allowing thousands of people to do more productive work within vision, with an emphasis on real time. It helps readers implement computer-vision algorithms by providing many coded working examples to get started, allowing them to quickly do interesting things in computer vision. It gives an intuitive understanding of how the algorithms work, guides readers in the design and debugging of vision applications, and makes the formal descriptions of computer-vision and machine-learning algorithms in other texts easier to understand and remember. Gesture-recognition methods such as pattern matching or finite-state machines are often used, and high classification rates can be achieved with them; however, only specific gestures can be recognized this way. Recent computer-vision techniques are easier, more natural, and less expensive. The convex-hull detection algorithm recognizes finger gestures on the hull and can identify every fingertip position of a human hand; it can capture more gesture information and has potential benefits.
Video tracking and processing have become extremely important in current technology. In this work, the fingertip is first detected and motions are recognised with the aid of OpenCV, MediaPipe, a CNN, and Python. The resulting data may be utilised for a variety of research projects or to convey a certain outcome on a system. The application allows users to sketch by moving their hands, which makes drawing both simple and complex objects fun. For the deaf and hard of hearing, this technology can be a valuable tool for communication.
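MediaPipe's hand module returns 21 landmarks per hand with normalised coordinates, and indices 4, 8, 12, 16 and 20 correspond to the five fingertips. The helper below is a sketch of our own (the names are illustrative, not MediaPipe API) that converts a fingertip landmark to pixel coordinates for drawing.

```python
# MediaPipe hand-landmark indices of the five fingertips (fixed by the library)
FINGERTIP_IDS = {"thumb": 4, "index": 8, "middle": 12, "ring": 16, "pinky": 20}


def fingertip_px(landmarks, frame_w, frame_h, finger="index"):
    """Convert a normalised MediaPipe landmark (x, y in [0, 1]) to pixel
    coordinates in a frame of the given width and height."""
    lm = landmarks[FINGERTIP_IDS[finger]]
    return int(lm.x * frame_w), int(lm.y * frame_h)
```

With MediaPipe installed, `landmarks` would come from `hands.process(rgb_frame).multi_hand_landmarks[0].landmark`; only the indexing convention above is fixed by the library.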
Fotini Patrona, Ioannis Mademlis and Ioannis Pitas [1] surveyed hand-gesture languages for camera-equipped Unmanned Aerial Vehicles (UAVs, or drones). Such drones have revolutionized various application sectors, and their steadily increasing cognitive autonomy is paving the path towards the robotization of everyday life. During a mission, dynamic cooperation of UAVs with human collaborators is common, which has motivated a number of techniques for high-level UAV-operator interaction. Hand gestures are an excellent way to make long-range drone control easier, and they have given rise to new gesture languages for visual communication between operators and autonomous UAVs. This study examines all of the prevailing languages, in addition to the important gesture-recognition datasets for training machine-learning
2022, IRJET | Impact Factor value: 7.529 | ISO 9001:2008 Certified
Deepthi B Kuravatti1, Deepthi V B1, Kulsum N1, M Darshan1, Sahana M S2
models, whether pre-existing or generated for this purpose. All styles of gesture language are covered, with the aim of more effective and easier-to-use UAVs. A drawback is that the cameras used are costly, and any damage to the camera means the UAV cannot detect the gesture language. The field of UAV-oriented gesture languages and datasets is investigated in this paper. Surprisingly, the languages that exist were either built for manned aerial vehicles or have a very narrow range of applications, while the relevant datasets that are currently available are task-specific and impractical, to say the least. As a result, the need for a more comprehensive language was shown, one created for high-level communication. It may be used as a foundation for any camera-equipped UAV handling job, while also allowing smooth and natural future extension to many more specific scenarios and profiles tailored to particular application domains and/or additional UAV equipment. The proposed language and dataset were each successfully validated, with experimental evaluation relying on modern Deep Neural Network-based approaches.
Ayush Tripathi, Arnab Kumar Mondal, Lalan Kumar and Prathosh A. P. [2] addressed airwriting recognition, the problem of detecting letters written in free space with finger movement. It is essentially a subset of gesture recognition in which the gesture vocabulary corresponds to the letters of a chosen language. Their method trains in two stages: in the first, an encoder with a projection head is trained with a contrastive loss; in the second, the encoder weights are frozen and a classification head is trained. Experiments on a publicly available dataset, as well as on a dataset acquired in their lab using a separate device, show the efficacy of the proposed strategy. Experiments were performed in both supervised and unsupervised settings and compared against several state-of-the-art domain-adaptation techniques. Airwriting can give users a quick, low-effort input method for Human-Computer Interaction applications, without requiring them to hold an additional physical device, which can be cumbersome. A multivariate time series, recorded using the 3-axis accelerometer and gyroscope of an Inertial Measurement Unit (IMU) placed on the wrist of the dominant hand of a human subject while writing uppercase English alphabets, forms the input to the method. The second stage is the classifier C, a mapping from the latent space produced by the encoder A to one of the alphabets. After discarding the projection head at the end of Stage 1 of the training process, the overall model has the same number of parameters at inference time as an ordinary classifier trained with Cross-Entropy Loss. Several architectures for the encoder block, such as a 1D-CNN, are explored, with ablations over the hyperparameters of each.
The ReLU activation function is used in the projection head, a single layer of 128 neurons, and the Cross-Entropy Loss is used to train the classification network. The impact of changing the projection-head size and the temperature parameter is shown. In this paper, accelerometer and gyroscope inputs from a wrist-worn motion sensor are used in a supervised contrastive-loss-based framework for airwriting identification. On a publicly available dataset, the proposed technique outperforms the state of the art, while also improving recognition accuracy on a dataset from unseen users. Future research will examine specific bias-reduction strategies for dataset- and user-specific biases.
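The supervised contrastive objective used in [2] can be illustrated in NumPy. This is a didactic sketch under our own simplifications (single view per sample, whole batch at once); the paper's actual training uses deep encoders and mini-batch augmentation.

```python
import numpy as np


def supcon_loss(z, labels, temperature=0.1):
    """Supervised contrastive loss: pull embeddings with the same label
    together and push different-label ones apart, after L2-normalising z."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature                  # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)               # a sample is not its own positive
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    labels = np.asarray(labels)
    total, n = 0.0, len(labels)
    for i in range(n):
        pos = labels == labels[i]
        pos[i] = False                           # exclude the anchor itself
        if pos.any():
            total += -log_prob[i, pos].mean()    # average over positives
    return total / n
```

A batch whose embeddings cluster by label yields a smaller loss than the same embeddings with shuffled labels, which is the pressure that shapes the encoder in Stage 1.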
Nimisha K P and Agnes Jacob [3] reviewed sign language (SL) recognition. SL is a nonverbal art form that lets us express our thoughts and feelings; for the deaf community it is the primary mode of communication, typically through numerous hand gestures. The predominant strategies for sign language recognition (SLR) are (i) image-based and (ii) sensor-based. The key advantage of the sensor-based technique is its short response time, and such systems are particularly accurate; however, they use expensive sensors that are not affordable for many deaf people. The main steps involved are image acquisition, image pre-processing, feature extraction, sign classification and sign translation. Different techniques based on vision and on data gloves may be used to recognize sign language, and the majority of systems do not incorporate facial features. Feature extraction in vision-based approaches uses numerous techniques such as YOLO, CNN, PCA, and others; the pre-trained model is the most modern and fastest of these, and also the best, because it employs large amounts of data, which contributes to the high accuracy that is the SLR's main goal. SVM, ANN, and CNN classifiers are used in the classification stage, and all of these techniques are notably accurate. Despite all the proposed methodologies and trials that have been examined, a near-perfect tool for sign translation remains a long way off.
P. Sharma and N. Sharma [4] observed that in the current situation most communication is done through vocal sounds and body-language gestures. Vocal sounds are crucial in communication, but they can also be distracting. With the passage of time, many body-language expressions take on greater significance, and in a few cases body language has had a significant impact; communication between deaf and dumb individuals is one example where it plays a vital role. A method is defined in this paper for recognizing gesture and posture, which is a
skill that can be learned. This work's application is in the domain of HCI (Human-Computer Interaction) based systems. Principal Component Analysis (PCA) and Singular Value Decomposition (SVD) methods are utilized to extract features from input photos, and a neural network is trained to identify motions using these characteristics. The neural network generates confusion matrices, which are used to calculate the results; these are formalised in a tabular form that displays the percentage of observed gestures and postures that are correctly recognised. In any body image, the proposed methodology recognised distinct postures and motions, and the results are more precise than prior efforts. However, the background in the suggested technique is homogeneous, and just a few types of motions and postures are detected. In summary, a novel method of recognizing gestures and body poses using the SVD-PCA approach and a feed-forward artificial neural network is defined and implemented.
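The PCA/SVD feature extraction used in [4] can be sketched in a few NumPy lines. This is a hypothetical helper of ours (the function name and the choice of k are assumptions); in a full system, the resulting feature vectors would be fed to the feed-forward network.

```python
import numpy as np


def pca_features(images, k=4):
    """Flatten images, centre them, and project onto the top-k principal
    components obtained from the SVD of the data matrix."""
    X = images.reshape(len(images), -1).astype(float)
    X = X - X.mean(axis=0)            # centre each pixel dimension
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T               # (n_samples, k) feature vectors
```

Working with the SVD of the centred data matrix avoids forming the covariance matrix explicitly, which matters when each flattened image has thousands of pixels.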
B. Nandwana, S. Tazi, S. Trivedi, D. Kumar and S. K. Vipparthi [5] surveyed hand-gesture recognition systems, which are gaining popularity these days due to the ease with which humans and machines can interact. The goal of hand-gesture development is to improve communication between humans and computers for the purpose of transmitting information. Their paper provides an overview of current hand-gesture recognition technology, both static and dynamic, and catalogues the methods for hand-gesture recognition that have been employed in various research papers. In a gesture-recognition method, a camera observes the human body movements and communicates the data to a computer that uses the gestures as input to control gadgets or applications. The survey looks at the basics of hand-gesture recognition and finds that, in comparison to vision-based technologies and glove-based methods, the Kinect sensor is most commonly employed. In comparison to static hand gestures, dynamic hand-gesture recognition necessitates more computation. In a number of computer applications, hand gestures provide an interesting interaction domain.
J. Yashas and G. Shivakumar [6] give a review of the literature on hand-gesture recognition (HGR). Now that the best feasible data-collection methods, such as cameras, wrist sensors, and hand gloves, have been achieved, data collection is no longer a worry; the focus now is on extracting features from the available data and improving feature extraction using algorithms. These techniques have also been investigated, and in recent papers a problem affecting one approach to feature extraction has been solved by combining it with another. Recent publications in HGR discuss the integration of multi-modal information such as skeletal-joint information, depth, and RGB pictures as input entities, with problems related to one modality being solved by another. Data acquisition can be performed using a
camera or hand-worn sensors. Sensors like the Microsoft Kinect, Leap Motion controller, ASUS Xtion, and others can help solve the data-collection problem to the best of their abilities. Now the focus of study is on data learning, data similarity, and the set of algorithms best suited to the type of data being obtained. Deep learning, the Convolutional Neural Network (CNN), the Adapted Convolutional Neural Network (ADCNN), and the Recurrent Neural Network (RNN) are some of the learning methods employed.
Y. Lu, C. Guo, X. Dai and F. Y. Wang [7] noted that machine learning on fine-art paintings has recently received a lot of interest. Painting image captioning is critical for painting study, yet it is rarely explored. Because paintings contain abstract expressions and there are no annotated datasets, painting captioning becomes a data-hungry task and presents greater difficulties than captioning photographic images. This work attempts to generate content descriptors for paintings in an innovative way, following two steps: the model is trained on generated virtual paintings to solve the data-hungry problem in image captioning, yielding meaningful improvements, and the advances are demonstrated by evaluating the technique on an annotated small-scale painting-captioning dataset. BLEU [18] (BLEU-n implies calculating the BLEU score with n-gram tokenizing), CIDEr [19], METEOR [20], ROUGE-L [21], and SPICE [22] are some of the standard image-captioning metrics used to evaluate the model. Table I compares the results of the models on a real painting dataset to a baseline model trained just on the MSCOCO dataset; there the CNN+LSTM model is compared to the baseline model and the ground truth, with the positive and negative statements highlighted in bold. The model improves on the baseline's erroneous descriptions in areas such as object recognition, scene recognition, and object relationships. This research takes a fresh approach to the challenge of fine-art painting image captioning: virtual artworks were created to train the painting-captioning model. In future work, more complex picture-captioning systems and more annotated paintings can be used to assess the approach's performance.
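BLEU, one of the metrics cited above, is built from clipped (modified) n-gram precisions. The snippet below shows only that core step on tokenised sentences; it is a simplified illustration, not the full BLEU with brevity penalty and multiple references.

```python
from collections import Counter


def modified_precision(candidate, reference, n=1):
    """Clipped n-gram precision: each candidate n-gram is credited at most
    as many times as it appears in the reference."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    clipped = sum(min(count, ref[g]) for g, count in cand.items())
    return clipped / max(1, sum(cand.values()))
```

The clipping is what prevents a degenerate caption like "the the the the" from scoring a perfect unigram precision against a reference that contains "the" only once.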
M. Ranawat, M. Rajadhyaksha, N. Lakhani and R. Shankarmani [8] proposed a technology that does not need any extra devices to perform mouse activities. The webcam follows the user's hands, recognising specified movements and executing the relevant mouse operations. The system was created in Python with the help of OpenCV and PyAutoGUI. Background settings, the impact of illuminance variations, and skin colour were all tested separately by the researchers. The methodology involves the following steps: image capturing and preprocessing, hand detection, gesture recognition, and event triggering. The application includes simple movements like moving up and down, opening and closing applications, left and right click, and double click using the hands. The background does not affect the hand tracking, and skin colour and hand size have no effect either. There are some exceptions: for instance, the gesture 'Paper', when held for a long time, acts as a double click.
To overcome the above problem, we create two different frames, one for gesture recognition and one for writing. The implementation of this application has been successful. In low-light situations, the conversion of colour space from RGB to HSV has improved accuracy. Because face detection and subsequent face subtraction have been performed, the presence of the user's face in the backdrop has no effect on the results. The accuracy of the CNN model in recognising gestures greatly improved after a custom dataset with predefined motions was created.
Pranavi Srungavarapu, Eswar Pavan Maganti, Srilekha Sakhamuri, Sai Pavan Kalyan Veerada and Anuradha Chinta [9] developed a Virtual Sketch using OpenCV. One can draw with the virtual sketch simply by capturing the movement of a coloured pen with a webcam; the marker is usually a single-coloured item near the tip of the finger. Python and OpenCV are used, along with colour tracking and detection: the colour marker is identified and a mask is created. The steps are frame detection and conversion from BGR colour to HSV, creating the canvas, setting the trackbar values to locate the coloured marker's mask, pre-processing the mask, storing drawing points in arrays, and drawing the points held in the arrays on the frames and canvas. A separate writing space/frame is created using frame subtraction. Since a blue pointer is used for writing, any other blue colour in the frame is also detected, which reduces the accuracy; in our work we use MediaPipe for hand tracking, so colour does not affect the model. This application provides the user with an interactive environment in which to draw whatever he wants using a colour palette. As a result, we can deduce that the Virtual Sketch was created using the NumPy library and OpenCV, which include several libraries and algorithms that make the interfaces more alive while in use.
Gangadhara Rao Kommu [10] proposed an efficient tool for online teaching using OpenCV. The primary purpose of computer vision is to recognise and identify distinct objects of various sizes, shapes, and locations. The illumination and viewpoint of the item are two important challenges in computer vision; yet several experiments on detecting and recognising objects have demonstrated a high level of accuracy and precision in these tasks. The proposed work allows the user to track the movement of any coloured object of his or her choice in order to facilitate object detection online, and the user can also select which colours should be shown. The camera is triggered when the app is run, allowing the user to sketch in the air simply by waving the tracker object; the drawing is visible on the white frame at the same time. The instructor can draw with any of the colours given and can also clear the screen if necessary. The application is built using OpenCV and Python computer-vision techniques, and a separate writing space/frame is created using frame subtraction. A limitation is that the font size cannot be changed; in our work we use OpenCV to enhance the hand tracking and add font-size selection. We created a free-hand painting program that detects the user's pointer finger
using OpenCV. Colourful lines can be drawn anywhere the user wants, and the brush can be changed.
S. U. Saoji, Nistha Dua, Akash Kumar C and Bharat Phogat [11] describe gestural recognition, the method of identifying and interpreting a continuous stream of movements based on a set of input data. Gestures are non-verbal information used to improve computer language comprehension, and a computer-vision program analyzes the various gestures much as human vision does. The project takes advantage of this by concentrating on the development of a motion-to-text converter that might be used as software for intelligent wearable gadgets, allowing users to write from the air. The system employs computer vision to trace the path of the finger, allowing for writing in the air, and the created text can be utilised for a variety of applications, including sending messages and e-mails. For the deaf, it will be a strong and efficient way of communicating. The methods are as follows: writing-hand pose detection, hand-region segmentation, hand-centroid localization, fingertip tracking, colour tracking of the object at the fingertip, contour detection of the mask of the coloured object, and drawing the line using the position of the contour. Frame subtraction and edge enhancement are performed. This model uses a blue tip for writing; in our work, blue-tip tracking is dropped in favour of hand tracking. Sending messages from the generated text is possible. This technology has the ability to put standard writing techniques to the test: it eliminates the need to type notes on a mobile phone by giving an easy on-the-go method to do so, and it will also be very useful in assisting persons with disabilities to communicate more freely. The technology is easy to use even for senior individuals or those who have difficulty using keyboards, and it can soon be used to operate IoT devices, extending its capability. It is also feasible to draw in the air, making this solution fantastic software for smart wearables and allowing individuals to interact with the digital world more effectively. Text can be brought to life via augmented reality. The accuracy and speed of fingertip recognition can be improved with upcoming object-detection techniques like YOLOv3, and artificial-intelligence breakthroughs will improve the efficiency of air-writing in the future.
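Hand-centroid localization, one of the steps listed above, is simply the first image moment of the binary hand mask (m10/m00, m01/m00). A minimal NumPy version, with a helper name of our own:

```python
import numpy as np


def hand_centroid(mask):
    """Centroid of a binary mask, returned as (x, y); None if the mask
    contains no foreground pixels (no hand in the frame)."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```

In a full pipeline the centroid anchors the search window for the fingertip, which is usually the mask point farthest from (or highest above) this centre.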
Yash Patil, Mihir Paun, Deep Paun, Karunesh Singh and Vishal Kisan Borate [12] present Virtual Painting with OpenCV using Python. Video monitoring and processing have become more and more crucial in current technologies; the resulting data may be applied to a variety of research initiatives or to represent a particular outcome on a given system, and a variety of techniques for data processing and manipulation are available to achieve the desired result. Their painting program was made to construct a picture using the OpenCV package and the Python programming language, a leading machine-learning toolchain. This paint-like Python application uses the OpenCV library to process real-time webcam data. It tracks an object of interest (in this case, a bottle cap) and allows the user to draw by moving the object, making the drawing of simple subjects both
amusing and challenging. The first step in spotting an object of interest is to segment it: the technique of isolating an input picture (in this example, an object-of-interest picture) into sections with defined bounds. A successful recognition technique starts with a good segmentation technique, which in turn leads to a good feature-extraction process; the OpenCV modules are used for the picture processing. Implementation method: for this machine-learning application, the Python programming language was used together with the OpenCV library to create the code.
Prajakta Vidhate, Revati Khadse and Saina Rasal [13] present a virtual paint application using a hand-gesture recognition system. Gesture recognition is a modern branch of technology. On a Windows PC, they develop a simple user interface for communicating with MS Paint: the user's hand motions are analyzed using the camera to deliver commands to the MS Paint utility. To accomplish fast and consistent gesture detection in real time without any distance limits, a webcam is employed to extract the hand movements. Gesture, webcam, hand, algorithm, sense, paint, human, and control are some of the index terms. The project intends to build a virtual paint application that allows users to draw in the air while the machine recognizes their motions. First, it counts the number of fingers by locating the convexity defects between adjacent fingers of the hand. Background subtraction (BS) is a well-known method for generating a foreground mask using static cameras (i.e., a binary image comprising the pixels belonging to moving objects in the scene). As the name implies, BS constructs the foreground mask by subtracting the current frame from a background model that contains the static part of the image or, more broadly, everything that can be labelled background given the parameters of the scene. The hand image is extracted from the backdrop using hand segmentation, and contours, the curves that join all the continuous points along a boundary having the same intensity, are found. Gesture recognition lets you operate the virtual paint application without having to use a remote control or even touch the laptop's screen; for example, if we move our palm to the right side we can draw a rectangle, and if we move it to the left side we can draw a circle. The authors thus propose a solution for a virtual paint application that uses hand-recognition technology.
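In its simplest form, the background subtraction described in [13] reduces to a thresholded absolute difference between the current frame and the background model; the threshold below is an assumed value.

```python
import numpy as np


def foreground_mask(frame, background, thresh=25):
    """Mark pixels whose intensity differs from the background model by
    more than `thresh` as foreground (255), the rest as background (0)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return np.where(diff > thresh, 255, 0).astype(np.uint8)
```

Casting to a signed type before subtracting avoids the uint8 wrap-around that would otherwise corrupt the difference image; OpenCV's `cv2.absdiff` does the equivalent saturating operation.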
Vijay Kumar Sharma, Vimal Kumar, Md. Iqbal, Sachin Tawara and Vishal Jayaswal [14] propose a way of controlling the cursor's position without the need for any electronic equipment, while actions such as clicking and dragging objects are performed using several hand gestures. The tool that has
been proposed controls the pointer by detecting the human hand and shifting the pointer in the direction of the hand's movement. It supports the basic features of the mouse, including left-clicking, dragging and cursor movement.
Abhilash S S, Lisho Thomas, Naveen Wilson and Chaithanya C [15] propose a new camera-vision-based cursor-control system that uses hand movements extracted from a webcam with a colour-detection algorithm. The system allows the user to move the computer cursor with their fingers; clicking the left mouse button and dragging are performed, with coloured caps or ribbons worn on the hands, through a variety of hand movements. Moreover, it performs simultaneous file transfer between two networked systems. The suggested system is based only on low-cost components: a high-resolution webcam capable of tracking and acting as a sensor, and two-dimensional coloured tags held in the hands of the users. Python and OpenCV are used to implement the system. The most natural and easiest way of communication is hand-gesture communication. The camera output can be checked on the monitor, and colour detection is used to acquire information about the shape and position of the gesture. Python server programming is used to implement the file-transfer technique.
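Before a virtual-mouse system such as [14] or [15] can move the cursor, the tracked fingertip must be mapped from camera coordinates to screen coordinates. A sketch as a mirrored linear scaling; the mirroring and the resolutions here are assumptions of ours, not details from the cited papers.

```python
def to_screen(x, y, cam=(640, 480), screen=(1920, 1080)):
    """Scale a camera-space point to screen space, mirroring horizontally
    so that moving the hand right moves the cursor right."""
    cw, ch = cam
    sw, sh = screen
    sx = sw - 1 - int(x * sw / cw)   # horizontal mirror
    sy = int(y * sh / ch)
    return sx, sy
```

The result would then be handed to an automation library, e.g. `pyautogui.moveTo(sx, sy)`; in practice the point is usually also smoothed over a few frames to suppress tracking jitter.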
Gesture recognition allows for the most natural human-machine connection and is crucial for the development of new human-computer interaction modes, making it possible for humans to interact with machines in a more natural manner. It has a wide range of applications, including sign language recognition for deaf and dumb people, robot control, and so on. The image-processing capabilities of OpenCV have been presented, and the ultimate objective is to develop a computer-vision machine-learning application that people can readily engage with. Man-Machine Interaction (MMI) is the relationship between a human and a computer or, more specifically, a machine. Because the machine is useless without proper human use, there are two main characteristics to consider when designing an HCI system: functionality and usability. The extent and breadth to which the system can operate and execute defined user purposes efficiently is referred to as system usability, while the system functionality refers to the range of operations or services delivered to users by the system.
[1] Fotini Patrona, Ioannis Mademlis and Ioannis Pitas, “An Overview of Hand Gesture Languages for Autonomous UAV Handling,” Aerial Robotic Systems Physically Interacting with the Environment (AIRPHARO), 2021, Conference Paper, Publisher: IEEE.
[2] Ayush Tripathi, Arnab Kumar Mondal, Lalan Kumar and Prathosh A. P., “SCLAiR: Supervised Contrastive Learning for User and Device Independent Airwriting Recognition,” IEEE Sensors Letters, 2021, Early Access Article, Publisher: IEEE.
[3] Nimisha K P and Agnes Jacob, “Brief Review of the Recent Trends in Sign Language Recognition,” International Conference on Communication and Signal Processing (ICCSP), 2020, Conference Paper, Publisher: IEEE.
[4] P. Sharma and N. Sharma, “Gesture Recognition System,” 2019 4th International Conference on Internet of Things: Smart Innovation and Usages (IoT-SIU), 2019, pp. 1-3, doi: 10.1109/IoT-SIU.2019.8777487.
[5] B. Nandwana, S. Tazi, S. Trivedi, D. Kumar and S. K. Vipparthi, “A survey paper on hand gesture recognition,” 7th International Conference on Communication Systems and Network Technologies (CSNT), 2017, pp. 147-152, doi: 10.1109/CSNT.2017.8418527.
[6] J. Yashas and G. Shivakumar, “Hand Gesture Recognition,” International Conference on Applied Machine Learning (ICAML), 2019, pp. 3-8, doi: 10.1109/ICAML48257.2019.00009.
[7] Y. Lu, C. Guo, X. Dai and F. Y. Wang, “Image Captioning on Fine Art Paintings via Virtual Paintings,” IEEE 1st International Conference on Digital Twins and Parallel Intelligence (DTPI), 2021, pp. 156-159, doi: 10.1109/DTPI52967.2021.9540081.
[8] M. Ranawat, M. Rajadhyaksha, N. Lakhani and R. Shankarmani, “Hand Gesture Recognition Based Virtual Mouse Events,” 2021 2nd International Conference for Emerging Technology (INCET), 2021, pp. 1-4, doi: 10.1109/INCET51464.2021.9456388.
[9] Pranavi Srungavarapu, Eswar Pavan Maganti, Srilekha Sakhamuri, Sai Pavan Kalyan Veerada and Anuradha Chinta, “Virtual Sketch using OpenCV,” International Journal of Innovative Technology and Exploring Engineering (IJITEE), ISSN: 2278-3075 (Online), Volume-10, Issue-8, June 2021.
[10] Gangadhara Rao Kommu, “An Efficient Tool For Online Teaching Using OpenCV,” International Journal of Creative Research Thoughts (IJCRT), Volume 9, Issue 6, June 2021, ISSN: 2320-2882.
[11] S. U. Saoji, Nistha Dua, Akash Kumar C and Bharat Phogat, “Basic Paint Window Application via Webcam Using OpenCV and Numpy in Python,” Journal of Interdisciplinary Cycle Research, August 2021, Volume XIII, Issue VII, pp. 1680-1686, ISSN: 0022-1945.
[12] Yash Patil, Mihir Paun, Deep Paun, Karunesh Singh and Vishal Kisan Borate, “Virtual Painting with OpenCV using Python,” First International Conference on Computer Engineering, International Journal of Scientific Research in Science and Technology, Print ISSN: 2395-6011, Online ISSN: 2395-602X, Volume 5, Issue 8, November-December 2020.
[13] Prajakta Vidhate, Revati Khadse and Saina Rasal, “Virtual paint application by hand gesture recognition system,” International Journal of Technical Research and Applications, e-ISSN: 2320-8163, Volume 7, Issue 3 (March-April 2019), pp. 36-39.
[14] Vijay Kumar Sharma, Vimal Kumar, Md. Iqbal, Sachin Tawara and Vishal Jayaswal, “Virtual Mouse Control using Hand Class Gesture,” Meerut Institute of Engineering & Technology (MIET), Department of Computer Science, Master of Technology, December 2020.
[15] Abhilash S S, Lisho Thomas, Naveen Wilson and Chaithanya C, “Virtual Mouse using Hand Gesture,” International Research Journal of Engineering and Technology (IRJET), p-ISSN: 2395-0072, e-ISSN: 2395-0056, Volume 05, Issue 04, April 2018.
Deepthi B Kuravatti is pursuing the fourth year of engineering in Electronics and Communication Engineering at Vidya Vardhaka College of Engineering, Mysuru. Her areas of interest are Web Development, Python and Machine Learning.
Deepthi V B is pursuing the fourth year of engineering in Electronics and Communication Engineering at Vidya Vardhaka College of Engineering, Mysuru. Her areas of interest are SQL and Data Science.
Kulsum N is pursuing the fourth year of engineering in Electronics and Communication Engineering at Vidya Vardhaka College of Engineering, Mysuru. Her areas of interest are IoT and Machine Learning.
M Darshan is pursuing the fourth year of engineering in Electronics and Communication Engineering at Vidya Vardhaka College of Engineering, Mysuru. His areas of interest are Embedded Systems and Machine Learning.
Sahana M S is currently working as an Assistant Professor in the Department of Electronics and Communication Engineering, Vidya Vardhaka College of Engineering, Mysuru. She has more than 5 years of teaching experience. Her areas of interest include Image Processing and Speech Processing.