OPENGL BASED VISUAL TOOL FOR FACIAL ANIMATION


by Işık Barış Fidaner and Başar Uğur

Submitted to the Department of Computer Engineering in partial fulfilment of the requirements for the degree of Bachelor of Science in Computer Engineering

Boğaziçi University, June 2005


Contents

Preface
Abstract
Acronyms and Symbols
1. Introduction
2. Studies on Facial Animation
   2.1 Related areas
3. Purpose
4. Tools and platforms
   4.1 OpenGL
   4.2 GLUT
   4.3 Microsoft Visual C++ 6.0
5. Standards
   5.1 VRML
   5.2 FAP
6. Geometric Structure
7. Animation Structure
   7.1 Animator
   7.2 Modes
   7.3 FAP Animation
8. Interface Structure
9. Summary
Appendix A
Appendix B
Appendix C
Appendix D
References
References not cited



Preface

Although this seems to be one semester's work, it is the sum of four years' experience at the Department of Computer Engineering at Boğaziçi University. The department has usually (though not always) provided us with flexible time, ample space and many opportunities to focus on our project. Başar would like to thank Barış, and vice versa, for sharing time, space, food and coffee (but not cigarettes).



Project Name : OpenGL Based Visual Tool for Facial Animation

Project Team : Işık Barış Fidaner – Başar Uğur

Term : 2004/2005 – II. Semester

Keywords : Face animation, MPEG-4, FAP, FAPU, face model, VRML, VRML Reader, Winged-Edge Table

Summary :

MPEG-4 is one of the most widely used technologies in the growing field of 3D modelling and animation, and it defines a standard set of parameters for facial animation. Our aim was to develop a program for geometrically editing and interpreting 3D face models and animating them according to the MPEG-4 FAP standard. We eventually developed such a program on OpenGL and GLUT, called Face Edit. Face Edit has the following features: an interface with a menu, movable dialogs and buttons; reading 3D face models from VRML files and keeping them in a Winged-Edge Structure; displaying the model from different angles with changeable view modes; modifying the 3D face model by selecting areas on it; setting MPEG-4 FP areas on the model, guided by pictures describing the standard; and undoing or redoing selections and modifications. When saving a 3D face model to a VRML file, Face Edit inserts a special code for the FP areas as comments. After the FP areas are defined, the FAPUs are automatically calculated from them. A FAP sequence can then be loaded and animated using the user-set FP areas and FAPUs.



Acronyms and Symbols

3D      Three-dimensional
AU      Angular unit (FAPU)
ENS     Eye-nose separation (FAPU)
ES      Eye separation (FAPU)
FAP     Facial Animation Parameter
FAPU    Facial Animation Parameter Unit
FDP     Facial Definition Parameters
FP      Feature Point
GLUT    (Open) Graphics Library Utility Toolkit
MNS     Mouth-nose separation (FAPU)
MPEG    Moving Picture Experts Group
MW      Mouth width (FAPU)
OpenGL  Open Graphics Library
VRML    Virtual Reality Modelling Language



1 Introduction

The face is the part of the body with the strongest effect on human perception. Its every little movement is captured by the brain and interpreted. We are born with sophisticated hardware that detects faces and their movements and keeps a huge record of them. Face recognition is accordingly a well-known problem in computer science; our concern, however, is not recognition but creation. You may recall the first music video to appear on MTV Europe, Dire Straits' "Money For Nothing" (Figure 1.1). Interestingly, it contained a low-polygon 3D animation that conveyed a little sense of reality and a large sense of a strange mechanical environment trying to be a friendly animation. It was a friendly animation, but it intuitively gave off an idea of incompleteness. More than incompleteness, it marked a clear beginning for 3D animation, and especially for faces in that new world.

Figure 1.1. Dire Straits’ “Money For Nothing” video

At the beginning, faces were like these. Afterwards, however, 3D faces became an area to work hard on. Considering 3D graphics, and faces in particular, in their historical context within game development, we may recall Half-Life, developed by Valve and published by Sierra. It was a first-person shooter that employed most of the 3D rendering techniques of its time and was popular for creating a convincing environment for the player. Half-Life became popular (especially with its add-on network game, Counter-Strike) both through its widespread playability and through the successful application of its graphics techniques, which helped the team form a conceptual environment that could also be seen as a new form of art. Last year Half-Life 2 appeared, and it still holds the fire, with its every minor detail leading you to speculate or simply congratulate (Figure 1.2). One of these details is the importance given to facial animation. As we mentioned about the effect of the face on human perception, Half-Life 2 owes part of its success to the attention paid to this subject. By following the newest techniques for rendering more realistic faces, the developers engage the player and deepen the illusion of the game. There are comparative examples, showing at least the team's improvement over the years, which encourage us to focus our attention on this specific aspect of the well-known game concept.


Figure 1.2. Half-Life 2

As graphics cards have grown at an exponential pace, with very high polygon rendering capabilities, the flexibility of increasing the geometric detail of a face has also increased. Apart from high-polygon rendering, many other techniques, such as texture, bump and displacement mapping, are used to make the experience of a generated 3D face ever more real. Today's technologies differ in how they produce this experience, but the results show that we are not so far away from the real, virtually speaking, of course (Figure 1.3). What probably remains is their standardization in a universally usable context.



Figure 1.3. Current technology of face modelling

The results can be quite charming. The first step is choosing the area to work on. Our aim was to dig into the dimensions of different face models and come up with a toolkit that enables users to work on the geometry of faces at the level of vertices, edges and faces. Starting from that point, we looked for a 3D graphics standard that would let us input and output 3D mesh objects, especially those forming faces. The standard we agreed on is VRML: a simple and generic file format for reading and writing 3D worlds, widely used and imported/exported by many popular 3D graphics programs, which makes our resulting files portable. For the viewing engine, we decided to use the 3D engine we had already developed for the Introduction to Computer Graphics course project, improving and extending it to fit the new system. We then agreed on an interface that lets users select vertices and faces, label them, and edit them in 3D space. From there, the whole idea could lead to generating animations from the facial animation standards, because once the vertices are properly labelled, little work remains to form a simple facial animation.

Our final results were satisfactory. We developed the engine and cleaned it of bugs, built the robust winged-edge table structure and used it in numerous ways in the program, constructed a fairly generic interactive menu environment, and finally created a facial animation based on the MPEG-4 facial animation standards.



2 Studies on Facial Animation

The development of facial animation as an area of study has paralleled improvements in multimedia and networking technologies and the increasing need for general concepts and standards for facial animation. Virtual humanoids are widely used, especially in games and entertainment, but until recently there was no standardization of these applications. Applying face animation raises several problems that a general standard can solve. For example, which part of the face are you going to move, and in which direction? How can you create expressions related to human feelings such as desire, fear or anger?

MPEG-4 is the first international multimedia standard that includes facial animation. MPEG-4 defines standards for natural as well as synthetic sound and video, and it also covers 3D graphics, including facial and body animation of virtual humans. The facial animation parameters (FAPs) include not only low-level parameters but also high-level parameters such as visemes and expressions. MPEG-4 Binary Format for Scenes (BIFS) is the file standard for transmitting this data through a communication network [1]. MPEG-4 represents audiovisual content by dividing it into its "media objects", then describes "compound media objects" by composing these objects into a scene. These descriptions are transferred over network lines, and at the end the user can interact with the scene. Throughout these processes, standard, well-specified "virtual characters" are formed by composing information from facial and body animation parameters [2].

There are two types of facial animation parameters. A low-level parameter corresponds to a small movement of one muscle: blinking an eye, raising a corner of the lips, nodding the head, and so on. High-level parameters are either visemes, which are the visual counterparts of phonemes, or emotional expressions. Visemes are used for talking heads: the speech text is divided into its sounds (phonemes), every phoneme has a corresponding viseme, and a talking head is thus an animation composed of the sequence of visemes extracted from the text or audio. Psychologists say that humans show six types of expressions: anger, disgust, fear, surprise, sadness and joy (Figure 2.1).

Figure 2.1. Six basic emotions

Figure 2.2. Greta's eye

Still, there are several problems and


unanswered questions in this area, partly because animating human beings is a very hard task and can probably never be completely solved.

Firstly, the growing use of facial animation increases the need for a general framework for it. There are several studies aimed at producing such a framework, which must include a portable MPEG-4 compatible facial animation player, a program for producing face models and their animations, and related tools compatible with many platforms [3]. Some software has in fact been developed, but it is far from forming a general framework. For example, "Greta" is a face animation engine for a single model in the form of a young woman [4] (Figures 2.2, 2.3). The Web Facial Expression Editor (WFEE) is a production tool for animating faces by delivering deformations over the web [5].

Figure 2.3. Greta's smile

The second problem is closely related to 3D motion capture (Figures 2.4, 2.5): capturing the movements of a speaking person's face, either from video or by other means, and producing the corresponding facial animation parameters. Some studies use web interfaces to capture user movements (Real-Time Animation And Motion Capture In Web Human Director (WHD)), whereas others concentrate on capturing movements from sequences of images with different algorithms. In some algorithms the model of the speaker is used for tracking [6]; in others, the model is also left as an unknown in the mathematical equation of face movements [7]. Some studies concentrate on extracting visemes from audio data [8].

Thirdly, MPEG-4 includes high-level parameters for facial expressions and visemes in speech, but it has no high-level parameters for body animation. There are studies toward a standard for complex bodily behaviors, such as the Bodily Animation Script (BAS) [9].
Figure 2.4. Motion Capturing

As its problems are solved, facial animation is opening the way to many possibilities. The most important element in a virtual environment is the human being: the characters make the environment meaningful, and facial movements are the natural medium of interpersonal interaction. Studies on facial animation generally aim at creating more realistic, or more meaningful, virtual environments and the characters living in them. In these virtual environments there is a model representing the user, the avatar, and there are


other synthetic characters, directed by natural or artificial intelligence. Some studies focus on the interaction of the different characters in the environment [10].

2.1 Related areas

We used the Winged-Edge Table data structure in our program [11]. This structure makes it easier to handle operations involving neighborhood relations in 3D geometry. These neighborhood relations can be used to define areas on the face geometry, which is what we tried to accomplish. In a face model, we think the curvature at a point is an important variable for automatically determining certain areas on the model. Curvature is a mathematical property of a surface at a point, related to the radius of the sphere tangent to the surface there [12]. We did not use curvature, however, and left it as possible future work.
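Had we pursued it, a common discrete stand-in for curvature at a mesh vertex is the angle deficit. The sketch below is a hypothetical illustration of this idea, not code from Face Edit:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Hypothetical illustration (not Face Edit code): the discrete Gaussian
// curvature of a mesh vertex via the angle deficit 2*pi minus the sum of
// the angles the incident triangles make at that vertex.
struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// v is the vertex; ring holds, for each incident triangle, its two other
// corner points.
double angleDeficit(Vec3 v, const std::vector<std::pair<Vec3, Vec3>>& ring) {
    const double kPi = 3.14159265358979323846;
    double sum = 0.0;
    for (const auto& t : ring) {
        Vec3 e1 = sub(t.first, v), e2 = sub(t.second, v);
        double denom = std::sqrt(dot(e1, e1) * dot(e2, e2));
        sum += std::acos(dot(e1, e2) / denom);  // angle at v in this triangle
    }
    return 2.0 * kPi - sum;  // ~0 on a flat region, larger on a sharp bump
}
```

On a face model, vertices with a large deficit (tip of the nose, chin) could then be candidates for automatic area seeds.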

Figure 2.5. Real-time speech capturing



3 Purpose

Our main purpose is to develop a visual tool on OpenGL for facial animation. The program is expected to read 3D mesh files, edit them and write them back in the form they were read. As the tool is intended for facial animation, it should let users label points and areas accordingly, in a standard context. In Face Edit 1.0, we accomplished some of our initial purposes, such as:

1. Reading and writing VRML files,
2. Using the Winged-Edge Structure to keep the model,
3. Selecting areas by clicking on the model,
4. Modifying the model by applying translations/rotations to the selected area.

Our purposes when we finished the first version of the program were:

1. Using a menu (to open files, save, undo, redo, etc.),
2. Using a freehand tool to select an area of irregular shape,
3. Using a magic wand to select an area by its curvature,
4. Naming selected areas according to the MPEG-4 facial animation standards (such as "nose", "lips", "cheeks" and so on),
5. Defining a simple property by assigning an operation to a single area,
6. Defining a composite property by combining simple properties as a weighted sum,
7. Saving the property information in a separate format linked to the VRML file,
8. Viewing the mesh structure with its saved properties in a predetermined key-frame flow to experience the animation effect.



4 Tools and platforms

4.1 OpenGL

OpenGL is the premier environment for developing portable, interactive 2D and 3D graphics applications. Since its introduction in 1992, OpenGL has become the industry's most widely used and supported 2D and 3D graphics application programming interface (API), bringing thousands of applications to a wide variety of computer platforms. OpenGL fosters innovation and speeds application development by incorporating a broad set of rendering, texture mapping, special effects, and other powerful visualization functions. Developers can leverage the power of OpenGL across all popular desktop and workstation platforms, ensuring wide application deployment.

4.2 GLUT

GLUT is the OpenGL Utility Toolkit, a window-system-independent toolkit for writing OpenGL programs. It implements a simple windowing API for OpenGL and makes it considerably easier to learn about and explore OpenGL programming. GLUT provides a portable API, so a single OpenGL program can work across all PC and workstation OS platforms. It is designed for constructing small to medium-sized OpenGL programs. The GLUT library has C, C++ (same as C), FORTRAN and Ada programming bindings. The GLUT source code distribution is portable to nearly all OpenGL implementations and platforms. The current version is 3.7; additional releases of the library are not anticipated.

4.3 Microsoft Visual C++ 6.0

Visual C++ 6.0 is a powerful C++ tool for creating high-performance applications. Nearly all world-class software, ranging from the leading web browsers to mission-critical corporate applications, is built using the Visual C++ development system.



5 Standards

5.1 VRML

VRML is a file format for describing interactive 3D objects and worlds. VRML is designed to be used on the Internet, intranets, and local client systems. VRML is also intended to be a universal interchange format for integrated 3D graphics and multimedia. VRML may be used in a variety of application areas such as engineering and scientific visualization, multimedia presentations, entertainment and educational titles, web pages, and shared virtual worlds.

Our program makes partial use of VRML. As we are interested in human face objects, we need mesh data. After examining the complex VRML specification and comparing it with the files that we needed to input/output, we came up with a new and simple subset of our own:

DEF <name> Separator          -> the compact area that will be determined
{
    Coordinate3               -> actual 3D coordinates of the points in the 3D world
    {
        point [x1 y1 z1, x2 y2 z2, x3 y3 z3, ...]
    }
    Texture2                  -> texture image file name
    {
        filename "<name>"
    }
    TextureCoordinate2        -> 2D coordinates of texture points in the image
    {
        point [tx1 ty1, tx2 ty2, tx3 ty3, ...]
    }
    IndexedFaceSet            -> a face set given by the indices specified before
    {
        coordIndex [index1_1, index1_2, ..., index1_N, -1,
                    index2_1, index2_2, ..., index2_N, -1, ...]
        textureCoordIndex [txIndex1_1, txIndex1_2, ..., txIndex1_N, -1,
                           txIndex2_1, txIndex2_2, ..., txIndex2_N, -1, ...]
    }
}
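For illustration, a single textured triangle expressed in this subset might look as follows (the object name, file name and coordinates are invented for the example):

```vrml
DEF face Separator {
    Coordinate3 {
        point [ 0.0 0.0 0.0,
                1.0 0.0 0.0,
                0.0 1.0 0.0 ]
    }
    Texture2 {
        filename "skin.tga"
    }
    TextureCoordinate2 {
        point [ 0.0 0.0,
                1.0 0.0,
                0.0 1.0 ]
    }
    IndexedFaceSet {
        coordIndex        [ 0, 1, 2, -1 ]
        textureCoordIndex [ 0, 1, 2, -1 ]
    }
}
```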

5.2 FAP

FAPs are the set of parameters defined by MPEG-4 to allow the animation of synthetic face models. MPEG-4 defines 68 FAPs: two high-level FAPs (visemes and expressions) and 66 low-level FAPs. With the exception of the FAPs that control head rotation, eyeball rotation and the like, each low-level FAP indicates the translation of the corresponding feature point, with respect to its position in the neutral face, along one of the coordinate axes. Briefly, FAPs move feature points by FAP Units (FAPUs). The standard reference figures and tables for feature points and FAPUs are included in Appendix C.
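To make the FAPU mechanism concrete, the sketch below (our own illustration, not the program's code) derives the eye-separation unit from two feature points and converts a FAP value into a model-space displacement; in MPEG-4 the distance-based FAPUs are defined as key facial distances divided by 1024:

```cpp
#include <cmath>

// Sketch of FAPU derivation from feature points (names are ours, not from
// the program): a distance-based FAPU is a key facial distance / 1024, and
// a low-level FAP value is expressed in multiples of its FAPU.
struct P3 { double x, y, z; };

double dist(P3 a, P3 b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Eye separation FAPU (ES): distance between the eye centres / 1024.
double esFAPU(P3 leftEye, P3 rightEye) {
    return dist(leftEye, rightEye) / 1024.0;
}

// A FAP value of e.g. +100 in ES units maps to a displacement of
// 100 * esFAPU(...) along the FAP's positive direction.
double displacement(int fapValue, double fapu) {
    return fapValue * fapu;
}
```

This is why the FAPUs must be computed (from the user-set FP areas) before any FAP sequence can be animated.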



6 Geometric Structure

The geometric structure of a model consists of the points, edges and surfaces read from the VRML file. Every point has three coordinates in 3D space. Every surface has either three or four corners; only triangles and quads are allowed, and larger surfaces are divided into triangles. Every edge keeps the necessary information about its endpoints and its two surfaces (the wings of the edge). These elements are interconnected with the Winged-Edge Data Structure, in which nearly all neighborhood information is stored in the edges: the points and surfaces only need to know one neighboring edge, while the edges hold more.

The Point class has the following geometrical member variables:

X, Y, Z: The three coordinates that give the position of the point relative to the model origin. Points can be added, subtracted and multiplied by a scalar just like vectors, and dot products can be taken; every relevant operator is overloaded.
Edge: One of the neighboring edges.
NeighboringEdges(): Finds and returns the full list of neighboring edges.

The Surface class has the following geometrical member variables:

Corners[]: Array of points that determine the corner points of the surface.
Edge: One of the neighboring edges.

The Edge class has the following additional member variables:

Top, Bottom: The top and bottom points of the edge; an edge is directed from its bottom point to its top point.
Left, Right: The left and right "wings" of the edge: the two surfaces at the left and right of the edge as directed from bottom to top. Either may be NULL if fewer wings exist.
LeftForward, LeftBackward: If the edges of the left wing are traversed counter-clockwise starting from this edge, LeftForward is the next edge and LeftBackward the previous one.
RightForward, RightBackward: If the edges of the right wing are traversed counter-clockwise starting from this edge, RightForward is the next edge and RightBackward the previous one (Figure 6.1).

Figure 6.1. Winged-Edge Data Structure (labels: top point, bottom point, left and right wing surfaces, left/right forward and backward edges)
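A minimal sketch of these records in C++ might look as follows (our own illustration; field names follow the text, but the program's actual classes differ):

```cpp
#include <vector>

// Minimal sketch of the winged-edge records described above.
struct Edge;

struct Point {
    double x, y, z;
    Edge* edge;                   // one incident edge
};

struct Surface {
    std::vector<Point*> corners;  // 3 or 4 corner points
    Edge* edge;                   // one bounding edge
};

struct Edge {
    Point *top, *bottom;          // the edge is directed from bottom to top
    Surface *left, *right;        // the two "wings"; NULL on a border edge
    Edge *leftForward, *leftBackward;    // CCW neighbours on the left wing
    Edge *rightForward, *rightBackward;  // CCW neighbours on the right wing
};

// Walk the boundary of a surface using only the forward links, starting
// from the surface's stored edge -- the core winged-edge traversal.
std::vector<Edge*> surfaceEdges(Surface* s) {
    std::vector<Edge*> out;
    Edge* e = s->edge;
    do {
        out.push_back(e);
        e = (e->left == s) ? e->leftForward : e->rightForward;
    } while (e != s->edge);
    return out;
}
```

The same forward/backward links also let a program enumerate all edges around a point, which is what operations like NeighboringEdges() rely on.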



7 Animation Structure

This is the program's graphics engine, which handles camera-like actions during user interaction, just as in windowed applications. Since double buffering is used, frames are formed immediately according to the many different actions taken by the user and then displayed, which yields an experience of continuity.

7.1 Animator

This is the engine class for camera and spotlight actions. The camera defines the angle and position from which we look at the object; it can be translated and rotated without losing the face object's centering on the screen. The spotlight is defined only by the x and y angles of the point light source relative to the object's axes, so it remains on a sphere centered on the object.
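The spotlight placement can be pictured as a point on a sphere around the model. The function below is our own hypothetical illustration of this two-angle parameterisation, not the Animator's actual code:

```cpp
#include <cmath>

// Hypothetical sketch of the spotlight placement described above: the light
// stays on a sphere of the given radius around the model, parameterised by
// two angles in degrees (elevation and azimuth).
struct LightPos { double x, y, z; };

LightPos spotlightPosition(double angleXDeg, double angleYDeg, double radius) {
    const double d2r = 3.14159265358979323846 / 180.0;
    double ax = angleXDeg * d2r, ay = angleYDeg * d2r;
    return { radius * std::cos(ax) * std::sin(ay),   // horizontal swing
             radius * std::sin(ax),                  // height on the sphere
             radius * std::cos(ax) * std::cos(ay) }; // toward the viewer at 0,0
}
```

Because only the two angles change, the light's distance from the model is constant, which matches the behaviour described above.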

7.2 Modes

The system has a number of modes, predefined constants that determine which screens can be shown and which actions can be taken. The modes are:

Intro: The opening mode of the program, showing the authors' animation with the program logo.

Viewing: The ready-and-steady state of the system. In this mode the user can look at the model by moving the camera and the spotlight, interact with the menus, or select areas, which increases the alpha and beta values. Alphas are per-vertex values indicating how much each vertex is included in the currently selected region; betas are the momentary per-vertex values to be subtracted from the alphas when a deselection is made. Viewing also has sub-modes, boolean variables checked where needed:

verticesVisible: whether vertices are represented (by a diamond around them)
bordersVisible: whether the borders of a selected area or a defined FP area are shown
spotlightVisible: whether the spotlight is shown
areasVisible: whether defined FP areas are shown
slowMotion: whether the animation is played in slow motion
wireFrame: whether only the edges are drawn

Editing: In this mode the user can only modify the vertices specified by the selection made in Viewing mode. The transformations of the vertices are


handled by their class functions.

Animate FAP: The animation mode. Pre-generated FAP sequence files are read into the system and played in this mode. To enter it, the FAP file to be read and all the FAP Units must have been specified.

7.3 FAP Animation

A FAP animation is produced by modifying the areas whose most affective (or most affected) vertices are labelled as feature points. This area modification is driven, at each frame, by the FAPs that affect the relevant feature point; the amount of movement is calculated from the frame's FAP value and that FAP's FAPU. Given a well-formed sequence, a continuous animation results. A simple FAP class, following the FAP standard, was written for these animations:

class FAP
{
public:
    bool bidirectional;
    int direction;
    int fapUnit;
    int areaIndex1, areaIndex2;
};

bidirectional determines whether the FAP is bidirectional or unidirectional; direction is the integer giving the positive motion direction; fapUnit is the index into the fapUnits array; areaIndex1 is the index of the area that this FAP affects. The extra areaIndex2 exists for the sake of FAP #47, which affects two feature points, unlike the other FAPs, which affect only one; it is again the index of an area that the FAP affects.

The frames[FRAMES_MAX][68] array keeps all 68 FAP values for a specified number of frames. It is used to modify the FP areas just before they are shown in the sequence. For each frame (i.e., each set of updated vertex positions) one drawing is done, and as the frames follow one another, a convincing animation is achieved.



8 Interface Structure

The user interacts with the program through a user interface consisting of a main menu and several types of dialogs, implemented by the Menu and Dialog object classes. Menu is designed for a single use, as the main menu of the program. The Menu class includes the following member variables and methods:

X, Y: Coordinates of the menu on the screen.
Items[]: Names and numbers of the menu items.
AddItem(): Adds a new menu item and associates it with an operation.
Draw(): Draws the menu into the 3D space.

Dialogs are more complicated. A dialog box can be a message dialog, an input dialog, or a list of items for the user to pick from. The Dialog class includes:

X, Y: Coordinates of the dialog on the screen.
Message: The message shown in the dialog.
Type: The type this dialog belongs to.
Visibility: Used for fading animations when opening and closing.
Texture: Used to show pictures on dialogs.
PickList: List of items to be picked by the user.
AddPick(): Adds a new pick and associates it with an operation.
Draw(): Draws the dialog into the 3D space.
Move(): Moves the dialog when dragged with the mouse.
KeyPress(): Receives keys pressed by the user; necessary for input dialogs.
Open(): Opens a new dialog with specified initial values.
AddMessage(): Displays a list of messages.
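As an illustration of how an AddItem-style method can bind a menu item to an operation, here is a small hypothetical sketch (it uses std::function, which the original Visual C++ 6.0 code could not; the real program's internals differ):

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Hypothetical sketch of the Menu interface described above; only the
// item/operation binding is illustrated, not the drawing code.
class Menu {
public:
    int x = 0, y = 0;  // screen position of the menu
    void AddItem(const std::string& name, std::function<void()> op) {
        items.push_back({name, std::move(op)});
    }
    void Pick(std::size_t i) {  // invoke the picked item's operation
        if (i < items.size()) items[i].op();
    }
    std::size_t Size() const { return items.size(); }
private:
    struct Item { std::string name; std::function<void()> op; };
    std::vector<Item> items;
};
```

A Draw() method would then only need to iterate over the items and render their names at the menu's screen position.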



9 Summary

After finishing Face Edit 1.0, we set the new aims stated in the Purpose section. In the final version some of these aims were reached and some were not. For example, we did not implement the "magic wand" and "freehand selection" tools, because making the program MPEG-4 compatible was more crucial, and we focused on that issue in negotiation with our advisor. To conclude, we have a program with the following features:

1. A menu and dialog interface with movable windows and buttons,
2. Reading a 3D face model from VRML files and keeping it in the Winged-Edge Structure,
3. Displaying and lighting the model from different angles,
4. Modifying the 3D face model by selecting areas,
5. Setting MPEG-4 FP areas on the 3D face model, guided by pictures describing the FP standard,
6. Undoing and redoing selections and modifications,
7. Saving the 3D face model into a VRML file, inserting special code for the FP areas as comments,
8. Automatic calculation of FAPUs from the FP definitions,
9. Animating a FAP sequence using the FP areas and FAPUs,
10. Changeable view settings.



Appendix A: Face Edit CD

\Documents                          Directory of project documents
    \Images                         Some images used in documents
    \faceeditv10.doc                Project Progress Report and User Manual 1.0
    \finalReport.doc                Project Final Report
    \presentation.ppt               Presentation for Project Proposal
    \proposal.doc                   Project Proposal

\Source                             Directory of project source code
    \Classes                        Project classes
        \Animator.cpp               Animator class methods
        \Animator.h                 Animator class definition
        \Geometri.cpp               Geometrical definitions
        \Geometri.h                 Geometrical methods, operators
        \Maske.cpp                  Maske class methods
        \maske.h                    Maske class definition
        \Menu.cpp                   Menu, Dialog class methods
        \Menu.h                     Menu, Dialog class definitions
        \tga.cpp                    TGA methods
        \tga.h                      TGA definitions
    \Debug                          Compile directory
    \FAP                            FAP files
    \Texture                        Texture files
    \Wrl                            VRML files
    \glu.dll;glu32.dll;glut32.dll   GL files
    \resource.h
    \runme.bat                      Run this file to start Face Edit
    \StdAfx.cpp
    \StdAfx.h
    \yuzlesme.cpp                   Main source file
    \yuzlesme.dsp                   Visual C++ project file
    \yuzlesme.dsw                   Visual C++ workspace file
    \yuzlesme.exe                   Executable (requires wrl file as parameter)
    \yuzlesme.ncb
    \yuzlesme.opt
    \yuzlesme.plg

\Standalone
    \FAP                            FAP files
    \Texture                        Texture files
    \Wrl                            VRML files
    \glu.dll;glu32.dll;glut32.dll   GL files
    \runme.bat                      Run this file to start Face Edit
    \yuzlesme.exe                   Executable (requires wrl file as parameter)



Appendix B: Some of the Program Code

yuzlesme.cpp

#include <GL/glut.h>
#include <GL/glu.h>
#include "stdafx.h"
#include "classes\\Maske.h"
#include "conio.h"
#include "windows.h"

// Keyboard control array
unsigned char keys[256];

// Mouse control variables
int lmousex=-1, lmousey=-1, mmousex=-1, mmousey=-1, rmousex=-1, rmousey=-1;
int lmouseinitx=-1, lmouseinity=-1, mmouseinitx=-1, mmouseinity=-1,
    rmouseinitx=-1, rmouseinity=-1;
bool mouselb=false, mouserb=false, mousemb=false;

bool draggedLMB(int x,int y);
bool draggedRMB(int x,int y);
bool draggedMMB(int x,int y);

Maske *m;

void displayCB(void);
void keyCB(unsigned char key,int x,int y);
void keyupCB(unsigned char key,int x,int y);
void mouseCB(int button,int state,int x,int y);
void motionCB(int x,int y);
void passivemotionCB(int x,int y);
void timerCB(int value);

int main(int argc, char* argv[])
{
    if(argc<2)
    {
        printf("Enter wrl file as parameter\nEx: yuzlesme wrl\\nefertiti.wrl");
        exit(1);
    }
    int win;
    m=new Maske();
    glutInit(&argc,argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(WINDOW_WIDTH,WINDOW_HEIGHT);
    win=glutCreateWindow("Grafik Programı");
    glutFullScreen();
    m->readFrom(argv[1]);
    m->drawinit();
    glutDisplayFunc(displayCB);
    glutKeyboardFunc(keyCB);
    glutKeyboardUpFunc(keyupCB);
    glutMouseFunc(mouseCB);
    glutMotionFunc(motionCB);
    glutPassiveMotionFunc(passivemotionCB);
    glutTimerFunc(0,timerCB,1);
    glutMainLoop();
    return 0;
}

void displayCB(void)
{
    m->draw(false);
}

// Detects that a key was pressed on the keyboard


void keyCB(unsigned char key,int x,int y)
{
    m->keyPress(key);
    keys[key]=true;
    if(keys[27])
    {
        exit(0);
    }
}

// Detects that a key was released on the keyboard
void keyupCB(unsigned char key,int x,int y)
{
    keys[key]=false;
    if(key==KEY_SPACE) m->releaseSpace();
    if(key=='a' || key=='A') m->releaseShift();
    if(key=='s' || key=='S') m->releaseAlt();
}

bool draggedLMB(int x, int y)
{
    return (abs(lmouseinitx+lmouseinity-x-y) > 2);
}

bool draggedRMB(int x, int y)
{
    return (abs(rmouseinitx+rmouseinity-x-y) > 2);
}

bool draggedMMB(int x, int y)
{
    return (abs(mmouseinitx+mmouseinity-x-y) > 2);
}

// Detects that a mouse button was pressed or released
void mouseCB(int button,int state,int x,int y)
{
    if(state==GLUT_UP)
    {
        if(button==GLUT_LEFT_BUTTON)
        {
            mouselb=false;
            if(draggedLMB(x,y))
            {
                lmousex=-1; lmousey=-1;
                lmouseinitx=-1; lmouseinity=-1;
                m->dragreleaseLMB(x,y);
            }
            else
            {
                lmousex=-1; lmousey=-1;
                lmouseinitx=-1; lmouseinity=-1;
                m->clickreleaseLMB(x,y);
            }
        }
        else if(button==GLUT_RIGHT_BUTTON)
        {
            mouserb=false;
            if(draggedRMB(x,y))
            {
                rmousex=-1; rmousey=-1;
                rmouseinitx=-1;



                rmouseinity=-1;
                m->dragreleaseRMB(x,y);
            }
            else
            {
                rmousex=-1; rmousey=-1;
                rmouseinitx=-1; rmouseinity=-1;
                m->clickreleaseRMB(x,y);
            }
        }
        else if(button==GLUT_MIDDLE_BUTTON)
        {
            mousemb=false;
            if(draggedMMB(x,y))
            {
                mmousex=-1; mmousey=-1;
                mmouseinitx=-1; mmouseinity=-1;
                m->dragreleaseMMB(x,y);
            }
            else
            {
                mmousex=-1; mmousey=-1;
                mmouseinitx=-1; mmouseinity=-1;
                m->clickreleaseMMB(x,y);
            }
        }
    }
    else if(state==GLUT_DOWN)
    {
        if(button==GLUT_LEFT_BUTTON)
        {
            lmousex=lmouseinitx=x;
            lmousey=lmouseinity=y;
            mouselb=true;
            m->clickLMB(x,y);
        }
        else if(button==GLUT_RIGHT_BUTTON)
        {
            rmousex=rmouseinitx=x;
            rmousey=rmouseinity=y;
            mouserb=true;
            m->clickRMB(x,y);
        }
        else if(button==GLUT_MIDDLE_BUTTON)
        {
            mmousex=mmouseinitx=x;
            mmousey=mmouseinity=y;
            mousemb=true;
            m->clickMMB(x,y);
        }
    }
}

// Detects mouse motion while one of the buttons is held down
void motionCB(int x,int y)
{
    if(mouselb)
    {
        m->dragLMB(x,y,lmousex-x,lmousey-y);
        lmousex=x;
        lmousey=y;
    }
    if(mouserb)



    {
        m->dragRMB(rmousex-x,rmousey-y);
        rmousex=x;
        rmousey=y;
    }
    if(mousemb)
    {
        m->dragMMB(mmousex-x,mmousey-y);
        mmousex=x;
        mmousey=y;
    }
}

// Detects mouse motion while no button is pressed
void passivemotionCB(int x,int y)
{
    m->dragPassive(x,y);
}

// Fires at regular time intervals
void timerCB(int value)
{
    if(value==1)
    {
        displayCB();
        glutTimerFunc(11,timerCB,1);
    }
}



animator.h

#include <stdio.h>
#include <math.h>
#include <time.h>
#include <GL/glut.h>
#include <GL/glu.h>

// Trigonometric constants and macros
#define PI 3.141592654
#define RAD2DEG (180.0f/PI)
#define DEG2RAD (PI/180.0f)
#define SIN(x) sin((x)*DEG2RAD)
#define COS(x) cos((x)*DEG2RAD)
#define ATAN(x) atan((x)*DEG2RAD)

// Animator object that moves the model
class Animator
{
public:
    double lightanglex,lightangley;  // Angles locating the spot light
    double modellookx,modellooky;    // Current angles of the model
    double modelz,modely;            // Distance and height of the model

    // Functions that position the model and the light in graphics space
    void placeModel();
    void placeLight();

    // Functions that report the model's orientation and the positions of its points
    void getModelOrientation(double &rotx,double &roty,double &rotz);
    void getModelPosition(double &tx,double &ty,double &tz);

    // Functions that move and reset the position of the spot light
    void ResetLight();
    void RotateLightLeft(double amount);
    void RotateLightRight(double amount);
    void RotateLightUp(double amount);
    void RotateLightDown(double amount);

    // Functions that change the model's position and orientation
    void RotateModelLeft(double amount);
    void RotateModelRight(double amount);
    void RotateModelUp(double amount);
    void RotateModelDown(double amount);
    void MoveModelNearer(double amount);
    void MoveModelFarther(double amount);
    void MoveModelUp(double amount);
    void MoveModelDown(double amount);

    // Functions that reset the model's position and orientation
    void ResetModelVerticalOrientation();
    void ResetModelOrientation();
    void ResetModelPosition();
    void ResetModelHeight();
    void ResetModel();

    // Constructor that sets the initial values
    Animator();
};



geometri.h

#include <iostream>
#include <math.h>

#define KOSEMAX 10      // Maximum number of corners a face can have
#define KESISENMAX 20   // Maximum number of edges/faces that can meet at a point
#define AREAMAX 100     // Number of FP areas

// Trigonometric constants and macros
#define PI 3.141592654
#define RAD2DEG (180.0f/PI)
#define DEG2RAD (PI/180.0f)
#define SIN(x) sin((x)*DEG2RAD)
#define COS(x) cos((x)*DEG2RAD)
#define ATAN(x) atan((x)*DEG2RAD)

class Nokta;
class Kenar;
class Yuzey;

// Constants and macros that determine the type of a border line
#define BD_ALPHA -1
#define BD_AREA(i) i
#define BORDER_VAR(n,x) (((x)==BD_ALPHA)? (n)->alpha: (n)->area[(x)])

// Building blocks of the model: the point, face and edge objects

// Class that holds the points
class Nokta
{
public:
    double x,y,z;          // Coordinates of the point within the model
    double tx,ty;          // Texture coordinates of the point
    double alpha;          // Alpha value of the point, used when selecting the green area
    double beta;           // Beta value of the point, used when adding/removing blue to/from the green area
    double area[AREAMAX];  // Area values of the point, keeping the record of the FP areas
    int areaMaxIndex;      // Index of the FP area the point belongs to the most
    Kenar *kenar;          // Following the winged-edge structure, each point keeps one of its edges

    // Constructors of the point
    Nokta(double a,double b,double c,Kenar *d);
    Nokta(Nokta*);
    Nokta();

    // Functions related to the FP areas the point belongs to
    double areaMax();
    void setAreaMaxIndex();
    void setAreaMaxIndex(int newMaxIndex);

    // Function that assigns another point to this point
    void assign(Nokta*n);

    // Function that gives the point's neighbour along the border of certain areas
    Nokta *sinirKomsusu(int bd);

    // Returns the edges that meet at the point
    void kesisenKenarlar(Kenar *kk[],int &nkk);

    // Returns the far end points of the edges that meet at the point
    void kesisenKenarUclari(Nokta *kn[],int &nkn);
    void kesisenKenarUclari(Nokta *kn[],int &nkn,Kenar *kk[],int &nkk);

    // Returns the faces that meet at the point
    void kesisenYuzeyler(Yuzey *ky[],int &nky);

    // Functions that translate and rotate the point relative to the model
    void translate(double x,double y,double z);
    void rotatex(double t);
    void rotatey(double t);
    void rotatez(double t);
};

// Class that holds the edges
class Kenar
{
public:
    // Winged-edge variables
    Nokta *alti,*ustu;         // Lower and upper points of the edge
    Yuzey *sagi,*solu;         // Faces on the right and left sides of the edge
    Kenar *sagGeri,*sagIleri;  // Edges behind and ahead of this edge on its right
    Kenar *solGeri,*solIleri;  // and left faces (in counterclockwise order)

    // Constructor of the edge
    Kenar(Nokta *alt,Nokta *ust);
};

// Class that holds the faces
class Yuzey
{
public:
    Nokta *koseler[KOSEMAX];  // Corner points of the face
    int koseSayisi;           // Number of corners of the face
    Nokta *normal;            // Normal vector of the face
    Kenar *kenar;             // Following the winged-edge structure, the face keeps one of its edges

    // Constructor of the face
    Yuzey(Nokta *yeni_koseler[],int yeni_koseSayisi);

    // Function that gives the FP area the face's points belong to
    int areaMaxIndex();
};



maske.h

#include <GL/glut.h>
#include <GL/glu.h>
#include <string>
#include <conio.h>
#include "Animator.h"
#include "Geometri.h"
#include "Menu.h"
#include "tga.h"

using std::string;

#define FILES_MAX 40
#define STR_MAX 40

#define DEFAULT_FONT GLUT_BITMAP_8_BY_13

#define DONE 0
#define NOT_FOUND -1
#define SCALE_NORM 6
#define NOKTASAYI 50000
#define KENARSAYI 50000
#define YUZEYSAYI 50000
#define BUFFERMAX 16
#define TEXTURE_MAX 50
#define WINDOW_WIDTH 1024
#define WINDOW_HEIGHT 768
#define SPOT_ALPHA_MIN 0.3
#define SPOT_ALPHA_MAX 11.0
#define SPOT_ALPHA_SPD 0.1

#define NORMALCOLOR NORMAL_R,NORMAL_G,NORMAL_B //11.0f,0.8f,0.5f
#define VERTEXCOLOR NORMAL_R,NORMAL_G,NORMAL_B

#define FULLALPHACOLOR NORMAL_R,\
                       ALPHA_G,\
                       NORMAL_B
#define HALFALPHACOLOR NORMAL_R,\
                       NORMAL_G+(ALPHA_G-NORMAL_G)*0.5,\
                       NORMAL_B
#define HALFAREACOLOR NORMAL_R+(AREA_R-NORMAL_R)*0.5,\
                      NORMAL_G,\
                      NORMAL_B

#define NORMAL_R 0.5
#define NORMAL_G 0.37
#define NORMAL_B 0.13
#define AREA_R 0.8
#define AREA_G 0.0
#define AREA_B 0.0
#define ALPHA_G 0.8
#define BETA_B 0.8

#define COLOR(n,x) NORMAL_R+ (AREA_R-NORMAL_R)*(x),\
                   NORMAL_G+(ALPHA_G-NORMAL_G)*(n)->alpha +(AREA_G-(NORMAL_G+(ALPHA_G-NORMAL_G)*(n)->alpha))*(x),\
                   NORMAL_B+ (BETA_B-NORMAL_B)*(n)->beta +(AREA_B-(NORMAL_B+(BETA_B-NORMAL_B)*(n)->beta))*(x)
#define AREACOLOR(x) NORMAL_R+ (AREA_R-NORMAL_R)*(x),\
                     NORMAL_G,\
                     NORMAL_B
#define FULLAREACOLOR AREA_R,\
                      NORMAL_G,\
                      NORMAL_B
#define ALPHACOLOR(n) NORMAL_R,\
                      NORMAL_G+(ALPHA_G-NORMAL_G)*(n)->alpha,\
                      NORMAL_B
#define ALPHABETACOLOR(n) NORMAL_R,\
                          NORMAL_G+(ALPHA_G-NORMAL_G)*(n)->alpha,\
                          NORMAL_B+ (BETA_B-NORMAL_B)*(n)->beta

#define EDGECOLOR 0.0f,0.5f,0.6f

#define INTRO 0
#define VIEWING 1
#define EDITING 2
#define ANIMFAP 3

#define NORMAL 0
#define ADD 1
#define SUBTRACT 2

#define KEY_SPACE 32
#define KEY_SHIFT 16
#define KEY_MENU 18
#define KEY_BACKSPACE 8

#define TR_UP 1
#define TR_DOWN 2
#define TR_LEFT 3
#define TR_RIGHT 4
#define TR_FWD 5
#define TR_BWD 6

#define UNITMAX 10

#define DIR_UP 0
#define DIR_DOWN 1
#define DIR_LEFT 2
#define DIR_RIGHT 3
#define DIR_FORWARD 4
#define DIR_GROWING 5

#define FRAMES_MAX 500

class FAP
{
public:
    bool bidirectional;
    int direction;
    int fapUnit;
    int areaIndex1,areaIndex2;
};

class Maske
{
private:
    FAP faps[68];
    int mode,selectMode,inputMode;
    int activeItemID;
    bool mouseOverActiveItem,movingDialog;
    bool verticesVisible,bordersVisible,spotlightVisible,areasVisible;
    bool slowMotion,wireFrame;
    int textureIndex[TEXTURE_MAX];
    int textureFaceCount[TEXTURE_MAX];
    int frameCount;
    double frames[FRAMES_MAX][68];
    string sampleFileName;
    string areaNames[AREAMAX];
    double fapUnits[UNITMAX];
    long int animtime;
    int noktaSayisi,kenarSayisi,yuzeySayisi;
    Nokta *noktalar[NOKTASAYI];
    Kenar *kenarlar[KENARSAYI];
    Yuzey *yuzeyler[YUZEYSAYI];
    Nokta *tutulanNoktalar[BUFFERMAX][NOKTASAYI];
    int currentState;
    int alphadepth,betadepth;
    void yuzeyleriTersCevir(int l,int r);
    void fillWingedEdgeTable();
    double scalingFactor, spotLightAlpha;
    void yuzeyNormali(Nokta *,Yuzey*);
    bool spotLightOn;
    Yuzey *seciliYuzey;
    Nokta *seciliNokta;
    int selectedArea;
    GLuint texture[TEXTURE_MAX];
    double textureYOverX[TEXTURE_MAX];
public:
    Animator anim;
    void readFaps();
    void specifyFapUnits();
    bool allFapUnitsReady();
    void selectArea(int index);
    bool isAreaSet(int index);
    void setArea(string areaName);
    int areaIndex(string name);
    Nokta *areaCenter(int index);
    void readAreaNames();
    void picklistRefresh();
    void butunKoseIndex(Yuzey*,int bk[],int &nbk);
    void yuzeyNormalleriniYenile();
    void animateFAP();
    void showAnimFAP();
    void applyFAP(int frame);
    void alphizeArea(int areaIndex);
    void writeTo(string filename);
    int readFrom(string filename);
    int readFromFAP(string filename);
    bool endsWith(string read, string word);
    void init();
    void initFile();
    void noktaEkle(double x, double y, double z);
    void yuzeyEkle(Nokta *koseler[], int koseSayisi);
    Kenar *kenarEkle(Nokta *,Nokta *,bool&);
    void yuzeyCiz(Yuzey *y);
    void drawinit();
    int yuzeyIndisi(Yuzey *y);
    int noktaIndisi(Nokta *n);
    double gaussianAlpha(int ad,int d);
    double decayAlpha(int d);
    void applyBeta();
    void resetAlphaDepth();
    void incrementAlphaDepth();
    void decrementAlphaDepth();
    void decrementBetaDepth();
    void resetBetaDepth();
    void incrementBetaDepth();
    void calcAlpha();
    void addAlpha(Nokta *n,double value,int depth);
    void calcBeta();
    void addBeta(Nokta *n,double value,int depth);
    void resetFapUnits();
    void doOperation(int index);
    void writeBitmapString(string);
    void showIntro();
    void draw(bool);
    void drawSpotLight();
    void drawVertices(bool);
    void drawLines();
    void drawFaces();
    void drawBorder(int);
    void editRotateRight(double t);
    void editRotateDown(double t);
    void editMoveFarther(double t);
    void editMoveDown(double t);
    void editMoveRight(double t);
    void keyPress(unsigned char key);
    void clickLMB(int x,int y);
    void clickMMB(int x,int y);
    void clickRMB(int x,int y);
    void dragLMB(int x,int y,int dx,int dy);
    void dragRMB(int dx,int dy);
    void dragMMB(int dx,int dy);
    void dragreleaseMMB(int x,int y);
    void clickreleaseMMB(int x,int y);
    void dragreleaseRMB(int x,int y);
    void clickreleaseRMB(int x,int y);
    void dragreleaseLMB(int x,int y);
    void clickreleaseLMB(int x,int y);
    int pick(int,int,int&);
    void dragPassive(int x,int y);
    void pressW();
    void pressPlus();
    void pressMinus();
    void releaseSpace();
    void pressSpace();
    void releaseShift();
    void pressShift();
    void releaseAlt();
    void pressAlt();
    void saveModelState();
    void undoModelState();
    void redoModelState();
    Menu *mainMenu;
    Dialog *mainDialog;
    bool fileExists(string filename);
    bool validKey(unsigned char key);
    bool acceptKeys();
    string doubleToString(double dNumber, int precision);
    void showMessage(string str);
    void drawMainMenu();
    string fileList[FILES_MAX];
    int fileCount;
    void listFiles(string directory,string extension,string files[],string prev,int count);
    void drawFileList();
    void createMainMenu();
    void destructMainMenu();
    Maske();
    ~Maske();
};



menu.h

#include <iostream>
#include <GL/glut.h>
#include <GL/glu.h>

using namespace std;

// Font constant used throughout
#define DEFAULT_FONT GLUT_BITMAP_8_BY_13

// Keyboard key constants
#define KEY_SPACE 32
#define KEY_SHIFT 16
#define KEY_MENU 18
#define KEY_BACKSPACE 8

// Shadow distance of the texts
#define SH_DIST 1/92.5

// Limit values for menu items and dialog options
#define MENU_ITEM_MAX 10
#define PICKLIST_MAX 90
#define MSGLIST_MAX 10
#define SHOWLIST_MAX 10

// Dimensions of the menu items and their borders
#define MI_WIDTH 1.0f
#define MI_HEIGHT 0.3f
#define MI_BORDER 0.05f

// Top and left margin values inside menu items
#define LEFT_MARGIN 0.2f
#define TOP_MARGIN 0.18f

// Top margin inside the input dialog
#define DIN_TOP_MARGIN 0.05f

// Color constants used in the menus
#define BD_INACTIVE 0.4f,0.0f,0.0f  // Border color
#define BG_INACTIVE 0.0f,0.3f,0.2f  // Background color
#define TX_INACTIVE 1.0f,1.0f,1.0f  // Text color
#define SH_INACTIVE 0.0f,0.0f,0.0f  // Shadow color
#define TD_INACTIVE 0.0f,0.6f,0.4f  // 3D color
#define BX_INACTIVE 0.8f,0.0f,0.0f  // Textfield 3D color
#define BD_ACTIVE 0.3f,0.0f,0.0f
#define BG_ACTIVE 0.5f,0.0f,0.0f
#define TX_ACTIVE 0.9f,0.8f,0.2f
#define SH_ACTIVE 0.0f,0.0f,0.0f
#define TD_ACTIVE 0.0f,0.6f,0.4f

// Dimensions of the dialogs and their borders
#define DLG_WIDTH 3.0f
#define DLG_HEIGHT 0.3f
#define DLG_BORDER 0.03f

// Dimensions of the dialog input box
#define DIN_HEIGHT 0.25f
#define DIN_WIDTH 2.0f

// Dialog types
#define DT_MESSAGE 0
#define DT_INPUT 1
#define DT_PICKLIST 2

// Dialog object class
class Dialog
{
public:
    string message;     // Text shown in the dialog
    int type;           // Type of the dialog
    int id;             // Number of the dialog (used for picking)
    double x,y;         // Coordinates of the dialog
    int cursorCount;    // Counter used for blinking the cursor
    double visibility;  // Visibility of the dialog

    GLuint texture;     // Image drawn in the dialog (if any)
    double yOverX;      // Height-to-width ratio of the image

    // Whether the dialog is open
    bool isOpen;

    // Function that draws the dialog on the screen
    void draw(int activeItemID, int name);

    // Function that moves the dialog relatively
    void move(double dx, double dy);

    // Function that receives the pressed keys
    void keyPress(unsigned char key);

    // Functions that open the dialog in different ways
    void open(int t,GLuint tex,double,string c,int op);
    void open(int t,GLuint tex,double,string c,int op,double nx,double ny);
    void open(int t,string c,int op,double nx,double ny);
    void open(int type,string c,int operation);

    // Function that closes the dialog
    void close();

    // Constructor and destructor of the dialog
    Dialog();
    ~Dialog();

    // For dialogs of the picklist type
    int picklistCount;              // Number of the dialog options
    string picklist[PICKLIST_MAX];  // Names of the dialog options
    int pickID[PICKLIST_MAX];       // Numbers of the dialog options (pick)
    bool pickActive[PICKLIST_MAX];  // Whether the dialog options are checked

    // Functions that add an option to the dialog
    void addPick(string pstr,int id);
    void addPick(string pstr,int id,bool active);

    // For dialogs of the input type
    string input;  // User input

    // For dialogs of the message type
    int msgListCount;             // Number of the dialog messages
    string msgList[MSGLIST_MAX];  // Contents of the dialog messages

    // Function that adds a message to the dialog
    void addMessage(string str);
};

// Menu object class
class Menu
{
public:
    double x,y;     // Screen coordinates of the menu
    int itemCount;  // Number of the menu's items

    string itemCaption[MENU_ITEM_MAX];  // Names of the items
    int itemID[MENU_ITEM_MAX];          // Numbers of the items (pick)

    bool isOpen;  // Whether the menu is open

    // Function used to add an item to the menu
    void addItem(string c,int id);

    // Function that draws the menu
    void draw(int activeItemID);

    // Functions that open and close the menu
    void open();
    void close();

    // Constructor and destructor of the menu
    Menu(double x,double y);
    ~Menu();
};



Appendix C: Standards

#  | FAP name              | FAP description                                                                    | Units | Uni-/Bidir | Pos Motion      | Grp | FDP Subgrp Num
1  | viseme                | Set of values determining the mixture of two visemes for this frame (e.g. pbm, fv, th) | na | na | na | 1 | na
2  | expression            | A set of values determining the mixture of two facial expressions                  | na    | na | na              | 1  | na
3  | open_jaw              | Vertical jaw displacement (does not affect mouth opening)                          | MNS   | U  | down            | 2  | 1
4  | lower_t_midlip        | Vertical top middle inner lip displacement                                         | MNS   | B  | down            | 2  | 2
5  | raise_b_midlip        | Vertical bottom middle inner lip displacement                                      | MNS   | B  | up              | 2  | 3
6  | stretch_l_cornerlip   | Horizontal displacement of left inner lip corner                                   | MW    | B  | left            | 2  | 4
7  | stretch_r_cornerlip   | Horizontal displacement of right inner lip corner                                  | MW    | B  | right           | 2  | 5
8  | lower_t_lip_lm        | Vertical displacement of midpoint between left corner and middle of top inner lip  | MNS   | B  | down            | 2  | 6
9  | lower_t_lip_rm        | Vertical displacement of midpoint between right corner and middle of top inner lip | MNS   | B  | down            | 2  | 7
10 | raise_b_lip_lm        | Vertical displacement of midpoint between left corner and middle of bottom inner lip | MNS | B  | up              | 2  | 8
11 | raise_b_lip_rm        | Vertical displacement of midpoint between right corner and middle of bottom inner lip | MNS | B | up              | 2  | 9
12 | raise_l_cornerlip     | Vertical displacement of left inner lip corner                                     | MNS   | B  | up              | 2  | 4
13 | raise_r_cornerlip     | Vertical displacement of right inner lip corner                                    | MNS   | B  | up              | 2  | 5
14 | thrust_jaw            | Depth displacement of jaw                                                          | MNS   | U  | forward         | 2  | 1
15 | shift_jaw             | Side to side displacement of jaw                                                   | MW    | B  | right           | 2  | 1
16 | push_b_lip            | Depth displacement of bottom middle lip                                            | MNS   | B  | forward         | 2  | 3
17 | push_t_lip            | Depth displacement of top middle lip                                               | MNS   | B  | forward         | 2  | 2
18 | depress_chin          | Upward and compressing movement of the chin (like in sadness)                      | MNS   | B  | up              | 2  | 10
19 | close_t_l_eyelid      | Vertical displacement of top left eyelid                                           | IRISD | B  | down            | 3  | 1
20 | close_t_r_eyelid      | Vertical displacement of top right eyelid                                          | IRISD | B  | down            | 3  | 2
21 | close_b_l_eyelid      | Vertical displacement of bottom left eyelid                                        | IRISD | B  | up              | 3  | 3
22 | close_b_r_eyelid      | Vertical displacement of bottom right eyelid                                       | IRISD | B  | up              | 3  | 4
23 | yaw_l_eyeball         | Horizontal orientation of left eyeball                                             | AU    | B  | left            | 3  | 5
24 | yaw_r_eyeball         | Horizontal orientation of right eyeball                                            | AU    | B  | left            | 3  | 6
25 | pitch_l_eyeball       | Vertical orientation of left eyeball                                               | AU    | B  | down            | 3  | 5
26 | pitch_r_eyeball       | Vertical orientation of right eyeball                                              | AU    | B  | down            | 3  | 6
27 | thrust_l_eyeball      | Depth displacement of left eyeball                                                 | ES    | B  | forward         | 3  | 5
28 | thrust_r_eyeball      | Depth displacement of right eyeball                                                | ES    | B  | forward         | 3  | 6
29 | dilate_l_pupil        | Dilation of left pupil                                                             | IRISD | B  | growing         | 3  | 5
30 | dilate_r_pupil        | Dilation of right pupil                                                            | IRISD | B  | growing         | 3  | 6
31 | raise_l_i_eyebrow     | Vertical displacement of left inner eyebrow                                        | ENS   | B  | up              | 4  | 1
32 | raise_r_i_eyebrow     | Vertical displacement of right inner eyebrow                                       | ENS   | B  | up              | 4  | 2
33 | raise_l_m_eyebrow     | Vertical displacement of left middle eyebrow                                       | ENS   | B  | up              | 4  | 3
34 | raise_r_m_eyebrow     | Vertical displacement of right middle eyebrow                                      | ENS   | B  | up              | 4  | 4
35 | raise_l_o_eyebrow     | Vertical displacement of left outer eyebrow                                        | ENS   | B  | up              | 4  | 5
36 | raise_r_o_eyebrow     | Vertical displacement of right outer eyebrow                                       | ENS   | B  | up              | 4  | 6
37 | squeeze_l_eyebrow     | Horizontal displacement of left eyebrow                                            | ES    | B  | right           | 4  | 1
38 | squeeze_r_eyebrow     | Horizontal displacement of right eyebrow                                           | ES    | B  | left            | 4  | 2
39 | puff_l_cheek          | Horizontal displacement of left cheek                                              | ES    | B  | left            | 5  | 1
40 | puff_r_cheek          | Horizontal displacement of right cheek                                             | ES    | B  | right           | 5  | 2
41 | lift_l_cheek          | Vertical displacement of left cheek                                                | ENS   | U  | up              | 5  | 3
42 | lift_r_cheek          | Vertical displacement of right cheek                                               | ENS   | U  | up              | 5  | 4
43 | shift_tongue_tip      | Horizontal displacement of tongue tip                                              | MW    | B  | right           | 6  | 1
44 | raise_tongue_tip      | Vertical displacement of tongue tip                                                | MNS   | B  | up              | 6  | 1
45 | thrust_tongue_tip     | Depth displacement of tongue tip                                                   | MW    | B  | forward         | 6  | 1
46 | raise_tongue          | Vertical displacement of tongue                                                    | MNS   | B  | up              | 6  | 2
47 | tongue_roll           | Rolling of the tongue into U shape                                                 | AU    | U  | concave upward  | 6  | 3, 4
48 | head_pitch            | Head pitch angle from top of spine                                                 | AU    | B  | down            | 7  | 1
49 | head_yaw              | Head yaw angle from top of spine                                                   | AU    | B  | left            | 7  | 1
50 | head_roll             | Head roll angle from top of spine                                                  | AU    | B  | right           | 7  | 1
51 | lower_t_midlip_o      | Vertical top middle outer lip displacement                                         | MNS   | B  | down            | 8  | 1
52 | raise_b_midlip_o      | Vertical bottom middle outer lip displacement                                      | MNS   | B  | up              | 8  | 2
53 | stretch_l_cornerlip_o | Horizontal displacement of left outer lip corner                                   | MW    | B  | left            | 8  | 3
54 | stretch_r_cornerlip_o | Horizontal displacement of right outer lip corner                                  | MW    | B  | right           | 8  | 4
55 | lower_t_lip_lm_o      | Vertical displacement of midpoint between left corner and middle of top outer lip  | MNS   | B  | down            | 8  | 5
56 | lower_t_lip_rm_o      | Vertical displacement of midpoint between right corner and middle of top outer lip | MNS   | B  | down            | 8  | 6
57 | raise_b_lip_lm_o      | Vertical displacement of midpoint between left corner and middle of bottom outer lip | MNS | B  | up              | 8  | 7
58 | raise_b_lip_rm_o      | Vertical displacement of midpoint between right corner and middle of bottom outer lip | MNS | B | up              | 8  | 8
59 | raise_l_cornerlip_o   | Vertical displacement of left outer lip corner                                     | MNS   | B  | up              | 8  | 3
60 | raise_r_cornerlip_o   | Vertical displacement of right outer lip corner                                    | MNS   | B  | up              | 8  | 4
61 | stretch_l_nose        | Horizontal displacement of left side of nose                                       | ENS   | B  | left            | 9  | 1
62 | stretch_r_nose        | Horizontal displacement of right side of nose                                      | ENS   | B  | right           | 9  | 2
63 | raise_nose            | Vertical displacement of nose tip                                                  | ENS   | B  | up              | 9  | 3
64 | bend_nose             | Horizontal displacement of nose tip                                                | ENS   | B  | right           | 9  | 3
65 | raise_l_ear           | Vertical displacement of left ear                                                  | ENS   | B  | up              | 10 | 1
66 | raise_r_ear           | Vertical displacement of right ear                                                 | ENS   | B  | up              | 10 | 2
67 | pull_l_ear            | Horizontal displacement of left ear                                                | ENS   | B  | left            | 10 | 3
68 | pull_r_ear            | Horizontal displacement of right ear                                               | ENS   | B  | right           | 10 | 4

Table 1. Facial Animation Parameters Description

FAPU  | Description                                                                                  | Value
IRISD | Iris diameter (by definition equal to the distance between upper and lower eyelid) in neutral face | IRISD = IRISD0 / 1024
ES    | Eye separation                                                                               | ES = ES0 / 1024
ENS   | Eye-nose separation                                                                          | ENS = ENS0 / 1024
MNS   | Mouth-nose separation                                                                        | MNS = MNS0 / 1024
MW    | Mouth-width separation                                                                       | MW = MW0 / 1024
AU    | Angular unit                                                                                 | AU = 10^-5 rad

Table 2. Facial Animation Parameters Units



Feature Points





Appendix D: Screenshots from Face Edit



The user manually determines the Feature Points and the areas around them to animate.

The FAPU units can be set automatically once the FP areas have been determined in the model.



You can use the original models as well as models created by modifying them, such as this ape derived from the Nefertiti model.

This is another modification of the Nefertiti model. You can edit and save many different variations of the same model.



References

1. Ostermann, J., "Animation of Synthetic Faces in MPEG-4," Computer Animation, pp. 49-51, June 1998.
2. Thalmann, D., Vexo, F., "MPEG-4 Character Animation."
3. Pandzic, I. S., Ahlberg, J., Wzorek, M., Rudol, P., Mosmondor, M., "Faces Everywhere: Towards Ubiquitous Production and Delivery of Face Animation," in Proceedings of the 2nd International Conference on Mobile and Ubiquitous Multimedia (MUM03), Norrköping, Sweden.
4. Pasquariello, S., Pelachaud, C., "Greta: A Simple Facial Animation Engine."
5. Adamo-Villani, N., Chourasia, A., Cory, C., "Production Interface for Web-Deliverable Realistic Interactive 3D Facial Animation."
6. Odisio, M., Bailly, G., "Shape and Appearance Models of Talking Faces for Model-Based Tracking."
7. Shan, Y., Liu, Z., Zhang, Z., "Model-Based Bundle Adjustment with Application to Face Modeling," Microsoft Research.
8. Hong, P., Wen, Z., Huang, T., "Real-Time Speech-Driven Face Animation."
9. Guye-Vuilleme, A., Thalmann, D., "Specifying MPEG-4 Body Behaviors."
10. Pelachaud, C., Bilvi, M., "Computational Model of Believable Conversational Agents."
11. Winged-Edge Data Structure, http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/model/wingede.html
12. Garimella, R. V., Swartz, B. K., "Curvature Estimation for Unstructured Triangulations of Surfaces."

44


References Not Cited

Pandzic, I. S., "Facial Animation Framework for the Web and Mobile Platforms."
Escher, M., Goto, T., Kshirsagar, S., Zanardi, C., Magnenat Thalmann, N., "User Interactive MPEG-4 Compatible Facial Animation System."

45

