CMPE 537 Computer Vision
Fall 2005 Term Project Proposal
Real Time Hand Pose Estimation
Işık Barış Fidaner 2005702532
Başar Uğur 2005702906
Problem

Since computers were invented, the interaction between human beings and machines has come a long way. In the old days, people mostly pushed levers and buttons while watching colored lights and pointer dials. With computers, more sophisticated input/output devices were developed: first the keyboard, then the mouse, each providing a smoother transition layer between the human mind and the computer processor. For this layer to become even thinner, there are two options: either human beings start interacting like computers, or computers start interacting like human beings. We prefer the second option.

Human beings interact with the world mostly through their hands. It is said that the hands are an extension of the brain, and about a quarter of the motor cortex is known to be devoted to the muscles of the hands. The hands are among our most important means of interacting with objects and the environment, and the first tool of the production processes that make us human.
Figure 1. Homunculus: areas of the motor cortex that control different parts of the body
Solution

Hand movements such as hitting, grasping, holding, and dropping are our basic interactions with the physical world. We believe they can also become the basic mechanisms of human interaction with virtual physical worlds. This possibility depends on technologies for processing the information carried by hand movements. Haptic devices are one option; processing visual data is another. We choose the second, because the widespread use of webcams makes it increasingly portable, even though it is computationally harder.

Real-time recognition of hand movements requires a sequence of processing steps:

1. Capture the image data from the webcam, frame by frame.
2. Process the image so that noise is removed and the image is enhanced.
3. Detect edges and locate specific areas on the hand. To make this step easier, a specially colored glove will be used, composed of six colors: three on one side of the hand and three on the other. At this stage, the computer knows which areas of the hand are visible and their spatial relationships (sketched below).
4. Use this information to estimate the pose and behavior of the hand (also sketched below).
5. After the movement is estimated and validated, animate a 3D graphical model of the hand accordingly.

At first glance, the graphics environment will thus act as a mirror of the user's hand. This system must be tested thoroughly before moving on. Once the mirror illusion is achieved, the next step will be placing the hand in a virtual physical world. If nothing goes wrong, the user will finally be able to experience interaction with a virtual physical environment.
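As a minimal sketch of steps 1 through 3, the Python fragment below captures webcam frames with OpenCV, smooths them, and locates each glove color by simple HSV thresholding. The color names and HSV ranges are hypothetical placeholders, not measurements of the actual glove; they would have to be calibrated once the glove is made.

import cv2
import numpy as np

# Hypothetical HSV ranges for three of the six glove colors (one side of
# the hand); real values must be calibrated against the actual glove.
GLOVE_COLORS = {
    "red":   ((0, 120, 70),   (10, 255, 255)),
    "green": ((40, 80, 60),   (80, 255, 255)),
    "blue":  ((100, 120, 60), (130, 255, 255)),
}

def find_patch_centroids(frame_bgr):
    """Return the pixel centroid of each glove patch visible in the frame."""
    # Step 2: smooth the frame to suppress sensor noise before thresholding.
    blurred = cv2.GaussianBlur(frame_bgr, (5, 5), 0)
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    centroids = {}
    for name, (lo, hi) in GLOVE_COLORS.items():
        # Step 3: isolate one colored area and take its center of mass.
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        m = cv2.moments(mask)
        if m["m00"] > 0:  # the patch is visible from this viewpoint
            centroids[name] = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    return centroids

cap = cv2.VideoCapture(0)  # Step 1: capture frames from the webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    print(find_patch_centroids(frame))
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) == 27:  # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()

Thresholding in HSV rather than RGB makes the segmentation less sensitive to lighting changes, which matters for an uncalibrated consumer webcam.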
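Step 4 can start from something very simple. The sketch below assumes the three palm-side and three back-side glove colors are known in advance, so that the set of visible patches indicates which side of the hand faces the camera, and the axis between two fixed patches gives a coarse in-plane rotation. The patch names and the palm/back color assignment are assumptions for illustration only, not the final design.

import math

# Assumed color assignment: which three patches mark the palm side and
# which mark the back of the hand (hypothetical names for illustration).
PALM_SET = {"red", "green", "blue"}
BACK_SET = {"cyan", "magenta", "yellow"}

def estimate_hand_state(centroids):
    """Guess which side faces the camera and a rough in-plane rotation."""
    visible = set(centroids)
    facing = "palm" if len(visible & PALM_SET) >= len(visible & BACK_SET) else "back"
    angle = None
    # If two reference patches are visible, the axis between them gives a
    # coarse in-plane rotation of the hand (degrees, image coordinates).
    if "red" in centroids and "blue" in centroids:
        (x1, y1), (x2, y2) = centroids["red"], centroids["blue"]
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
    return facing, angle

A full pose estimate would of course need more than this, but even such a coarse state (visible side plus rotation) is enough to drive a first mirror test of the 3D hand model in step 5.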