VIRTUAL MOUSE USING OPENCV
BALA HARSHITHAA B1, LEELAVATHI R2
1Student, Department of Information Technology, Bannari Amman Institute of Technology, Erode, Tamil Nadu, India

2Assistant Professor, Department of Information Technology, Bannari Amman Institute of Technology, Erode, Tamil Nadu, India
Abstract - Human-computer interaction relies heavily on hand gesture recognition. There are many new technological developments taking place: for example, biometric authentication, which is widely used in smartphones, and hand gesture recognition, which is a contemporary method of human-computer interaction. As an alternative to the current methods for controlling cursor motion, which involve physically pressing buttons or moving a physical computer mouse, this research proposes the use of a real-time camera for human-computer interaction. The system can perform all the activities of a standard computer mouse, controlling many mouse events with the help of a camera and computer vision technology to detect finger-movement gestures on the computer's window, allowing users to control the entire system with just their fingers. It includes a user interface through which the user can enable tracking of hand movements and generate coloured coordinates to detect the hand. The real-time images that the virtual mouse colour-recognition application continuously gathers undergo a variety of filters and conversions. The application then employs an image-processing strategy to extract the focused coloured coordinates from the transformed frames. Finger-detection techniques for rapid camera access make the system more approachable. This technology decreases the need for a real mouse, saving time and effort.
Key Words: OpenCV, hand gestures, virtual mouse, image processing.
1. INTRODUCTION
A mouse is referred to as a pointing device in computer nomenclature; it detects two-dimensional movement relative to a surface. This movement is converted into pointer motion on a display, making it possible to control the GUI of a computer platform. Over the years there have been several distinct types of mouse, such as the mechanical mouse, which operates by means of a sturdy rubber ball that rotates when the mouse is moved. Later, an LED sensor was developed to replace the rubber ball: a mouse with a hard exterior that could detect movement on the table and transfer the data to a computer for processing.
The laser mouse was subsequently launched in 2004 to solve the shortcomings of the optical mouse, which include difficulty tracking highly reflective surfaces, and to improve motion accuracy for the tiniest hand movements. Despite its accuracy, the mouse nevertheless has limitations in terms of both its physical and its technical capabilities.
For instance, a laptop mouse is a consumable piece of hardware, since it eventually needs to be replaced, either because the mouse buttons have worn out and now produce worthless clicks or because the laptop no longer recognises the mouse at all. Interactions between people and computers are crucial, and despite the limitations of laptop technology, the field is expanding.
Given the development of touch-screen mobile devices, the market is starting to demand the adoption of the same technology across all platforms, including desktop machines. Although desktop PCs with touch displays already exist, their price may be too high. An alternative to touchscreens is a virtual human-computer interface that uses a webcam or other image-capturing device instead of a physical mouse or keyboard. The movements a person makes to engage with the webcam can be translated into pointer motion by a software program, much like a real mouse.
In the developed system, the camera takes pictures, stores them in a temporary location, analyses the frames, and then detects the pre-programmed hand gesture actions.
This type of virtual mouse system is typically developed in the Python programming language, and computer vision projects also employ the OpenCV package. The system makes use of well-known Python libraries such as MediaPipe for real-time tracking of hand and finger movements. The PyAutoGUI package is used to follow the finger as it moves across the screen and to perform actions such as left- and right-clicking, dragging, and scrolling. The system also uses NumPy for calculations, Pycaw for audio-related functionality such as controlling the system volume, Tkinter for the UI, and ScreenControl for adjusting the screen brightness.
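As a concrete illustration of how a detected fingertip position becomes a cursor position, the sketch below maps camera-frame coordinates to screen coordinates with NumPy. The frame and screen sizes are assumed example values, not ones given in the paper; in the real system they would come from the webcam and the display resolution.

```python
import numpy as np

def to_screen(x, y, frame_size=(640, 480), screen_size=(1920, 1080)):
    """Map a fingertip position in the camera frame to screen coordinates.

    frame_size and screen_size are illustrative assumptions; the result
    would typically be passed to something like pyautogui.moveTo().
    """
    fw, fh = frame_size
    sw, sh = screen_size
    sx = np.interp(x, [0, fw], [0, sw])   # linear interpolation on x
    sy = np.interp(y, [0, fh], [0, sh])   # linear interpolation on y
    return int(sx), int(sy)

# A fingertip at the centre of a 640x480 frame lands at the screen centre.
print(to_screen(320, 240))  # -> (960, 540)
```

In practice the mapped position is usually smoothed over a few frames before the cursor is moved, so that small detection jitter does not make the pointer shake.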
2. LITERATURE SURVEY
The following literature helped in developing the proposed work with greater performance and in implementing it in a way that overcomes the disadvantages of the existing systems.

[1] Using an IR camera and an IR pen that serves as a virtual marker, Ashish Mehtar and Ramanath Nayak created the Virtual Mouse on March 12, 2015. It also makes use of hardware. They proposed a product that strives to be a virtual marker with the added benefit of incorporating the capabilities of a mouse. The existing virtual marker is altered to serve as both a mouse pointer and a marker, giving it access to all of the mouse's features. In their article, they suggested a hardware implementation of a virtual mouse that improves performance by making the existing "Virtual Marker" very responsive in real time.
[2] The Hand Gesture-Virtual Mouse for Human Computer Interaction by Sherin Mohammed and V. H. Preetha in 2018 utilised OpenCV and Python and required the deployment of two expensive cameras. Their study focuses on the development of hand gestures in 3-D space for human-computer interface systems, using two cameras in position. The screen coordinate system is approximated and mapped to the pointing hand gesture. Various other hand movements fulfil the remaining functions of the virtual mouse: in addition to pointing at the screen, they employ gestures such as selecting a folder or an object.
[3] Kollipara Sai Varun, I. Puneeth, and Dr. T. Prem Jacob's Virtual Mouse Implementation Using OpenCV (2019) uses a webcam or built-in camera to record hand motions and recognise hand tips using computer vision. The system's algorithm utilises machine learning. Their solution is built on technology that makes use of hand motions: hand movements are recorded with a camera, hand landmark key points are detected, and depending on the gesture recognised, the mouse cursor executes a variety of actions. Users can operate the mouse in a number of different ways without the need for any hardware or sensors; all that is needed is a hand and a pair of fingers. It is also cost-efficient and user-friendly.
[4] Kabid Shibly, Samrat Dey, Aminul Islam, and Shahriar Showrav used HSI technology in the Design and Development of a Hand Gesture-Based Virtual Mouse on December 19, 2019. The lighting condition was a barrier, and their suggested mouse system overcomes this restriction. In their study, they suggest integrating computer vision and hand movements to create a virtual mouse system based on HCI. Gestures recorded by a webcam or built-in camera are subjected to colour segmentation and detection processing. With coloured caps on their fingertips, the user is able to control certain computer cursor operations. A user can scroll up or down using various hand motions, as well as perform left, right, and double clicks. The system uses a webcam or a built-in camera to capture frames, then processes the frames to make them trackable. After that, it identifies various user motions and executes the mouse function.
[5] Using computer vision to operate the mouse cursor, the Virtual Mouse Using Object Tracking research by Monali Shetty, Christina Daniel, Manthan Bhatkar, and Ofrin Lopez in 2020 captures hand gestures from a camera using an HSV colour-recognition method. Their goal is to develop an object-tracking application that interacts with the system. Their approach, which uses hand gestures recorded from a webcam with an HSV colour-detection method, is a computer-vision-based mouse cursor control system. Using coloured caps or tapes that the computer's webcam tracks, their technology enables users to move the system pointer and carry out mouse actions such as left, right, and double clicks with various hand motions. The system is implemented in Python with the OpenCV package using real-time computer vision, and the output from the camera is shown on the monitor.
[6] Vantukal Reddy, Thumma Dhyanchand, Galla Vamsi Krishna, and Satish Maheshwaram's Virtual Mouse Control Using Colored Fingertips and Hand Gesture Recognition, published in 2020, is a study in the field of human-computer interaction that uses a virtual mouse with fingertip detection and hand-motion tracking based on images in a live video. They suggest employing hand-motion recognition and fingertip identification to control a virtual mouse. Two techniques are used to track the fingers: hand gesture detection and the use of coloured caps. The approach consists of three basic steps: tracking hand gestures, finger detection via colour identification, and cursor implementation. In their investigation, a convex hull is formed around the detected contour to produce hand-gesture tracking, and hand features are retrieved from the area ratio of the produced hull and contour.
[7] Hand recognition and segmentation with a Microsoft Kinect Sensor version 2 depth image is made possible by Dinh-Son Tran, Hyung-Jeong Yang, Ngoc-Huynh Ho, Soo-Hyung Kim, and Guee-Sang Lee's Real-Time Virtual Mouse System Using RGB-D Pictures and Fingertip Detection. Utilising fingertip sensing and RGB-D pictures, they propose a novel virtual mouse technique. Using detailed skeleton-joint information images from a Microsoft Kinect Sensor version 2, the hand region of interest and the palm's centre are first retrieved and then translated into a binary image. A border-tracing method is then used to extract and describe the hands' outlines. Based on the coordinates of the hand contour, the K-cosine method is used to determine the location of the fingertip. Lastly, the mouse cursor is controlled via a virtual screen by mapping the fingertip location to RGB pictures. Using a single CPU and Kinect V2, the device detects fingertips in real time at 30 frames per second on a desktop computer.
[8] Zhenzhou Wang's Robust Segmentation of the Image by Fusing the SDD Clustering Results from Separate Colour Spaces employs SDD clustering as a novel colour-segmentation technique that performs better than traditional clustering techniques. By combining the slope difference distribution (SDD) clustering outcomes in several colour spaces, the colour image is segmented. The segmentation method is intended to be two-label segmentation for simplicity, while multiple-label segmentation can be readily generalised from it. The proposed method is compared numerically and qualitatively with the most recent colour image segmentation techniques, and experimental findings confirmed the efficacy of the suggested strategy.
3. PROPOSED SYSTEM

People aspire to live in a world where technological equipment may be controlled and interacted with remotely, without the need for peripheral devices such as remote controls or keyboards. Without a doubt, a virtual mouse will soon take the place of a physical one; it not only provides ease but also saves money. The existing approach has the drawback of accurately detecting colours only on plain backgrounds, owing to its use of colour-detection technology. Additionally, the accuracy of the right-click function has slightly decreased, and the model has some trouble clicking and dragging to select text. In terms of daily use, the interactions are also not very user-friendly, and ambiguity arises when the three various interaction techniques are combined while recognising gesture movements. A physical mouse requires specialised hardware and a surface to operate. Physical mice struggle to adjust to unique circumstances and function differently depending on the environment. The mouse has only a small set of capabilities, even in typical operational scenarios, and every wireless mouse has a limited lifespan.
The calibration phase and the recognition phase are the two main stages that make up the colour-recognition process. The calibration phase's objective is to teach the system how to interpret the hue and saturation values for the colours that users have selected. These values and other configurations may be stored by the system, if necessary, in text files that will be used in the later recognition phase. Depending on the parameters recorded during calibration, the system then starts to capture frames and look for colour data as the recognition portion of the process begins.
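The calibration step described above, storing user-selected HSV bounds in a text file for the recognition phase, might be sketched as follows. The file format and helper names are assumptions for illustration, not the paper's actual implementation.

```python
def save_calibration(path, lower, upper):
    """Write lower/upper HSV bounds (H, S, V triples) to a text file,
    one bound per line, so the recognition phase can reload them."""
    with open(path, "w") as f:
        f.write(" ".join(map(str, lower)) + "\n")
        f.write(" ".join(map(str, upper)) + "\n")

def load_calibration(path):
    """Read the calibrated bounds back for the recognition phase."""
    with open(path) as f:
        lower = tuple(int(v) for v in f.readline().split())
        upper = tuple(int(v) for v in f.readline().split())
    return lower, upper

# Example: a rough green band in HSV (arbitrary demonstration values).
save_calibration("hsv_calib.txt", (30, 80, 80), (90, 255, 255))
print(load_calibration("hsv_calib.txt"))  # -> ((30, 80, 80), (90, 255, 255))
```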
4. APPLICATION DESIGN
Upon startup, the software displays a console window; users must make a selection from the primary menu, because each option affects the program's capabilities differently. The second option gives the user the ability to pick and change the colours they desire, in order to achieve greater accuracy and efficacy at a later stage in the recognition phase. Additionally, the third option allows the user to modify the program's settings, including the camera to be used as well as the size of the capture window.
When the default option is chosen, the application launches several processes, establishes the appropriate variables, and displays a number of windows, including one that shows the real-time captured frame and others that display the HSV track bars and the binary thresholds of individual shades. To increase colour-recognition accuracy, users are invited to adjust the provided HSV track bars. However, users must have a fundamental understanding of the HSV colour model in order to edit the track bars effectively; otherwise, incorrect adjustment could cause the entire recognition stage to malfunction.
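The binary thresholding controlled by those HSV track bars amounts to keeping exactly the pixels whose H, S, and V values all fall inside the chosen bounds. A minimal NumPy sketch of that masking step is shown below; in the real application this would typically be done with `cv2.inRange` on a frame converted via `cv2.cvtColor`, and the bounds here are arbitrary example values.

```python
import numpy as np

def hsv_mask(hsv_image, lower, upper):
    """Binary mask: 255 where every channel lies within [lower, upper],
    0 elsewhere (the same behaviour as cv2.inRange)."""
    lower = np.array(lower)
    upper = np.array(upper)
    inside = (hsv_image >= lower) & (hsv_image <= upper)
    return np.where(inside.all(axis=-1), 255, 0).astype(np.uint8)

# A tiny 1x2 "image": one pixel inside the example green band, one outside.
hsv = np.array([[[60, 200, 200], [0, 50, 50]]], dtype=np.uint8)
mask = hsv_mask(hsv, (30, 80, 80), (90, 255, 255))
print(mask.tolist())  # -> [[255, 0]]
```

The centroid of the white region of this mask is what the application tracks from frame to frame as the coloured coordinate.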
The UI and AI are combined in this application. The user first opens the .exe file that contains the application. When the file is opened, a user interface with a TRACK option appears. When the user clicks that button, the backend application runs, and the logic inside allows access to the camera and the finger-tracking actions.
5. MODULES USED
5.1. OpenCV


OpenCV is a software library for machine learning and computer vision. A standard infrastructure for computer vision applications was created with OpenCV in order to speed up the incorporation of artificial intelligence into products. Since OpenCV is an Apache 2-licensed product, it is simple for businesses to use and alter the code. It also offers GPU acceleration features for real-time operation. OpenCV is utilised in many different fields, including mobile robotics, facial and gesture recognition systems, and 2D and 3D feature toolkits.
5.2. MediaPipe
MediaPipe is a cross-platform framework developed by Google, mostly employed for the construction of multimodal pipelines in machine learning. Since the MediaPipe framework is built around streaming data, it is most helpful for the cross-platform design process. The framework is multimodal, which means it can be used for motion tracking, object detection, face detection, face mesh, iris detection, hand detection, and hair segmentation. MediaPipe is a fine choice for developers when creating, analysing, and designing system performance in the form of graphs, and it has been utilised for constructing a variety of cross-platform applications and systems. Here it is used for streaming the video and for gesture recognition.
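MediaPipe's hand model returns 21 landmarks per detected hand. A common way to derive a gesture from them, used by many virtual-mouse implementations although the paper does not spell out its exact rule, is to call a finger "up" when its tip landmark lies above its middle (PIP) joint. The pure-Python sketch below applies that rule to (x, y) coordinates; the landmark indices follow MediaPipe's convention, and the thumb is ignored for simplicity.

```python
# MediaPipe landmark indices: (tip, pip) pairs for index, middle, ring, pinky.
FINGERS = [(8, 6), (12, 10), (16, 14), (20, 18)]

def fingers_up(landmarks):
    """Count raised fingers. `landmarks` is a list of 21 (x, y) tuples in
    image coordinates; y grows downward, so a raised fingertip has a
    SMALLER y than its PIP joint."""
    return sum(1 for tip, pip in FINGERS
               if landmarks[tip][1] < landmarks[pip][1])

# Synthetic hand: index finger raised, everything else level with the palm.
hand = [(0.5, 0.5)] * 21
hand[8], hand[6] = (0.5, 0.2), (0.5, 0.4)    # index: tip above PIP -> up
hand[12], hand[10] = (0.5, 0.5), (0.5, 0.4)  # middle: tip below PIP -> down
print(fingers_up(hand))  # -> 1
```

In the real pipeline the `landmarks` list would come from `results.multi_hand_landmarks` after feeding an RGB frame to MediaPipe's hand-tracking solution.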
5.3. Numpy
NumPy is an all-purpose toolkit for dealing with arrays. It provides a multidimensional array object with exceptional performance and the ability to interact with these arrays. It is the essential Python package for scientific computing, and it is open source. It has a number of characteristics, including the following crucial ones: tools for merging C/C++ and Fortran code, an N-dimensional array object, sophisticated (broadcasting) functions, and useful Fourier transform, random number, and linear algebra capabilities.
5.4. Pycaw
Pycaw is the Python Core Audio Windows library, used here for audio-related functionality such as controlling the system volume.
5.5. TKinter
The Tkinter package is the standard Python interface to the Tk GUI toolkit. The majority of Unix systems, including macOS, as well as Windows machines, support Tkinter. The combination of Python and Tkinter makes it quick and simple to develop GUI apps, and Tkinter provides an effective object-oriented interface to the Tk GUI toolkit.
5.6. PyAutoGUI
PyAutoGUI is an automation library for Python that supports keyboard and mouse control. In other words, it makes it easier to automate keyboard and mouse actions in order to establish interaction with another program from a Python script. A handful of the features it offers: it can move the mouse and click in the windows of other applications; it can send keystrokes to other programs (for example, filling out a form or entering a search term into a browser); it can take screenshots and, given an image, locate it on the screen; it can find an application window and move, resize, maximise, or minimise it; and it can display message and alert boxes.
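Tying the modules together, a recognised finger pattern is typically dispatched to the PyAutoGUI call that performs it. The gesture-to-action mapping below is a hypothetical example, not the paper's actual mapping; the classification itself is plain Python, with the corresponding PyAutoGUI calls shown in comments so the sketch runs without a display.

```python
def classify(index_up, middle_up):
    """Map the state of the index and middle fingers to a mouse action.
    This particular mapping is an illustrative assumption."""
    if index_up and middle_up:
        return "move"         # pyautogui.moveTo(x, y)
    if index_up:
        return "left_click"   # pyautogui.click()
    if middle_up:
        return "right_click"  # pyautogui.click(button="right")
    return "idle"             # no gesture recognised this frame

print(classify(True, True))   # -> move
print(classify(True, False))  # -> left_click
```

Keeping the classification separate from the PyAutoGUI side effects also makes the gesture logic straightforward to unit-test.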
6. RESULT ANALYSIS
The proposed system was successfully implemented and the following results were achieved.
A basic and responsive mouse operating system was obtained, and very delicate interactions were performed. The interaction accuracy was scaled up by decreasing the present sensitivity level, and mouse movements were handled even more precisely in dark spaces. The system uses the concept of a history of images, transforming a real-time collection of image frames and processing them to greatly improve the accuracy of cursor movements and to fix malfunctions caused by low video quality and lighting problems. A single camera is used for hand-gesture detection, the use of coloured fingertips is eliminated, and the system can adapt to different backgrounds.
The accuracy achieved for cursor movements was 97.4% and that for left-click recognition was 96.2%. Right-click recognition reached 94.3%, and scrolling up and down was analysed to be 94.7% accurate.

It also proved to be a cost-effective, efficient, and user-friendly system that can be implemented very easily.
7. CONCLUSION

This work concludes that, by utilising computer vision techniques such as OpenCV, colour-variation approaches, and the generation of mouse movement through specific packages, the mouse can be driven via coordinates. This can make systems and numerous other applications easier to use. As a result, OpenCV offers its users a variety of models that are easily accessible and will simplify their lives.

As an extension to this work, it would be optimal to study various hardware approaches that would produce more precise hand detection and to conduct research into advanced computational methods for image processing. This study not only illustrated the various gesture activities that users might perform, but also showed the potential for streamlining user interactions with hardware devices and personal computers.

REFERENCES

[1] Cheng-Yuan Shih, Chien-Cheng Lee, and Bor-Shenn Jeng, "Fingertip-writing Alphanumeric Character Recognition for Vision-based Human Computer Interaction," in International Conference on Broadband, Wireless Computing, Communication and Applications, 978-0-7695-4, 2010.

[2] Hemanta Saik, Abhik Banerjee, Koustuvmoni Bharadwaj, and Abhirup Ghosh, "Mouse Control using a Web Camera based on Colour Detection," International Journal of Computer Trends and Technology (IJCTT), Volume 9.

[3] Ronak Joshi, Rashmi Adatkar, et al. (2017), "Virtual Mouse," Imperial Journal of Interdisciplinary Research (IJIR), Vol. 3, Issue 4.

[4] Pritpal Singh, Tanjot Sethi, B. B. V. L. Deepak, and Meta Dev Prasad Murthy (2015), "Real-Time Object Detection and Tracking Using Color Feature and Motion," International Conference on Communication and Signal Processing.

[5] D. S. Suresh and I. V. Bhavana, "Virtual Mouse Implementation using Color Pointer Detection."

[6] S. W. Lee, D. Kang, S. Huh, and M. C. Roh, "A virtual mouse interface with a two-layered Bayesian network."

[7] D. Lee, D.-H. Liou, and C.-C. Hsieh, "A real time hand gesture recognition system using motion history image," in Proceedings of the 2nd International Conference on Signal Processing Systems, July 2010.

[8] S. U. Dudhane, "Cursor system using hand gesture recognition," IJARCCE.

[9] Rafiqul Zaman Khan and Noor Adnan Ibraheem, "Survey on Various Gesture Recognition Technologies and Techniques," International Journal of Computer Applications, Volume 50, No. 7, July 2012.

[10] Ágnes Sándor, Anna De Liddo, et al. (2012), "Contested Collective Intelligence: Rationale, Technologies, and a Human-Machine Annotation," Computer Supported Cooperative Work, Volume 21, Issues 4-5, pp. 417-448.