Vicon Standard 2014

Page 47


Our Vicon T160 cameras did a great job. 16 million pixels, 120 fps, electronic freeze frame shutter. We couldn’t have done it without them.

BY Rémi Brun, MOCAPLAB FOUNDER & CEO

My PhD subject at that time was “How to transform Speech into Tactile Signals for the Deaf”. This research experience led me to learn in more depth about the deaf world as a whole, and also to investigate how scientists were studying speech at that time (speech analysis, recognition and synthesis). Coincidentally, when I started motion capture back in 1993, one of my first mocap sessions consisted of capturing Sign Language with optical mocap at the CNRS research institution. It was an exciting experience, but the results were not successful enough to take further. Since then, it has been clear to me that Sign Language would become a very special branch of the motion capture world.

Not only is this area technically difficult for many reasons, but as a mocap specialist facing movement every day over the last 20 years, I was particularly fascinated by the extremely subtle and rich movements that convey this means of communication to its full extent. As deaf people explain very clearly, Sign Language is as rich and as complex as our spoken language. It has its own structure, syntax and grammar. It is not a “word by word” translation of a spoken language, nor is it a type of miming. It is a real language.

The technical challenge itself is tricky. In addition to capturing fingers, one has to first make sure that the facial expressions are recorded and transferred to a mesh properly, as it is important to remember that Sign Language communication focuses as much on the face as on the fingers. And alongside the facial expressions, the eye gaze direction is also a very important cue. So before tackling the finger issue, we had to make sure that our in-house tools were ready to record the face and eyes, and were capable of transferring the subtleties on to a mesh that would not scare deaf viewers. This was a challenge in itself!

When it comes to motion capturing hands, we focus on the 10 fingers, moving very rapidly in the air, in all configurations and positions around the body; making a lot of contact with one another, sometimes with impact, and also making contact with the body or the face. It’s a real and complex ‘ballet’ with five limbs, 22 DOF for each hand, and a thumb that has its own very particular way of rolling. The challenge for us was to work out the right recording and synchronization solutions, as well as the processing tools to drive the right skeleton. And last but not least, we had to define how to rebuild the fingers’ mesh to ensure accurate final mesh surface positioning, in order to enable good quality contact between the two hands.

The results so far have been very rewarding, and the feedback from deaf viewers has been extremely positive. It is important to bear in mind that, for historical reasons, the deaf community is generally not at ease with scientific research and avatars, and usually remains very sceptical about the results. Despite the very low quality of our mesh and rendering, the feedback remains very promising. In fact, we deliberately kept ourselves as far away as possible from photorealism, to avoid comments and reactions on the modeling/rendering, allowing the viewer to focus primarily on the animation side. Of course we will try photorealism too, which you can see in the video links on the right.

In addition to this, we have also had some unexpected and interesting comments from deaf viewers. For example, some viewers personally knew the signer behind the mocapped movements driving the avatar, and recognized her immediately through her signing. This was also the case when the 3D model of the ‘avatar’
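To give a sense of where the 22 degrees of freedom per hand come from, the sketch below tallies a joint-by-joint DOF breakdown. The joint names and per-joint counts follow a common biomechanical convention and are an assumption for illustration, not Mocaplab’s actual rig; DOF counts for hands vary between conventions, and here the thumb’s base joint carries an extra axis for its characteristic roll.

```python
# Illustrative sketch only: joint names and per-joint DOF counts follow a
# common biomechanical convention, not any specific mocap rig.
FINGER_JOINTS = {"MCP": 2, "PIP": 1, "DIP": 1}  # 4 DOF per non-thumb finger
THUMB_JOINTS = {"CMC": 3, "MCP": 2, "IP": 1}    # CMC gets 3 DOF, including the thumb's roll

def hand_dof() -> int:
    """Total degrees of freedom for one hand under this convention."""
    fingers = 4 * sum(FINGER_JOINTS.values())   # index, middle, ring, little
    thumb = sum(THUMB_JOINTS.values())
    return fingers + thumb

print(hand_dof())  # -> 22
```

With four 4-DOF fingers and a 6-DOF thumb, the total matches the 22 DOF per hand mentioned above; a ten-fingered signing sequence is therefore a 44-dimensional joint signal before the body, face and gaze are even considered.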

was simply made of cubes for the body and fingers, with no facial animation! By way of comparison, it is just like hearing people recognizing someone by their voice on the telephone.

Other interesting comments were based on the ‘accent’. The signer we used was signing in ASL (American Sign Language), yet her native sign language was New Zealand Sign Language (which is very close to British Sign Language). When her avatar signed ASL, deaf viewers could sometimes guess her origins from the ‘accent’ in the recorded movements!

So the whole experience has been incredible. We’ve worked hard, we’ve pushed boundaries, and we’ve learned a great deal about the power, the richness and the subtleties of human movement. I am sure the path will continue, as this is just the first step. We are currently finishing a research project to create a tool for Sign Language that would be like “Pro Tools” for sound engineers: a tool box to analyze the Sign Language signal in all its aspects, as scientists did for speech 30 years ago.

Discover More
Mocaplab’s ASL signing avatar:
A photorealistic animation project from Mocaplab:

