
## Tracking Facial Motion (1994)

Venue: In Proceedings of the Workshop on Motion of Nonrigid and Articulated Objects

Citations: 38 (10 self)

### Citations

500 | Finite element procedures in engineering analysis - Bathe - 1982
Citation Context ...the internal energy due to its elastic properties, and D is a (3n × 3n) damping matrix. Vector R is a (3n × 1) applied load vector, characterizing the force actuations of the muscles (see [8, 1] for additional details). By defining each of the triangles on the polygonal mesh of a face as an isoparametric triangular shell element (shown in Figure 2), we can calculate the mass, stiffness and ...
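The mass, damping, and stiffness matrices mentioned above enter the standard second-order FEM dynamics M x'' + D x' + K x = R (the form covered in Bathe [8]). A minimal sketch, with toy stand-in matrices rather than the paper's face model, integrating such a system with semi-implicit Euler:

```python
import numpy as np

# Minimal sketch (NOT the paper's face model): integrate the standard
# second-order FEM dynamics  M x'' + D x' + K x = R  with small
# stand-in matrices, using semi-implicit (symplectic) Euler.
n = 3                              # degrees of freedom (toy)
M = np.eye(n)                      # mass matrix (lumped, for simplicity)
D = 2.0 * np.eye(n)                # damping matrix
K = np.diag([4.0, 9.0, 16.0])      # stiffness matrix
R = np.array([1.0, 0.0, 0.0])      # applied (muscle) load vector

x = np.zeros(n)                    # nodal displacements
v = np.zeros(n)                    # nodal velocities
dt = 0.01
for _ in range(2000):              # 20 simulated seconds
    a = np.linalg.solve(M, R - D @ v - K @ x)   # M a = R - D v - K x
    v += dt * a
    x += dt * v

# For a constant load, the displacement settles at the static
# solution K^{-1} R.
```

In a real shell-element model, M, D, and K would be assembled from per-element contributions over the polygonal face mesh; here they are diagonal only to keep the sketch self-contained.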

330 | Facial action coding system - Ekman, Friesen, et al. - 2002
Citation Context ...real-time tracking of facial articulations is the sheer complexity of human facial movement. To represent facial motion using a low-order model, many systems define independent geometric (i.e., FACS [6]) and physical [5, 7, 17] parameters for modeling facial motion. The combinations of these parameters (mostly called "Action Units") result in a large set of possible facial expressions. The level of...

178 | Space-time gestures - Darrell, Pentland - 1993
Citation Context ...ese previous systems by removing the need for surface markings and hand-initialization. We describe a tool for real-time facial tracking, using spatio-temporal normalized correlation measurements [4] from video which are interpreted using a physically-based facial modeling system [7]. The principal difficulty in real-time tracking of facial articulations is the sheer complexity of human facial mo...
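The normalized correlation score referenced above ([4]) compares an image patch against a template in a way that is invariant to local brightness and contrast changes. A minimal sketch with toy data (not the paper's spatio-temporal implementation):

```python
import numpy as np

# Minimal sketch of normalized correlation: mean-center both patch
# and template, then take the cosine of the angle between them.
# This is invariant to affine brightness changes (positive gain).
def normalized_correlation(patch, template):
    p = patch.ravel().astype(float)
    t = template.ravel().astype(float)
    p = p - p.mean()
    t = t - t.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(t)
    return float(p @ t / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
template = rng.random((8, 8))
# A brightness/contrast-shifted copy of the template still scores ~1.0:
score = normalized_correlation(2.0 * template + 5.0, template)
```

A tracker would evaluate this score for templates centered at many candidate positions (and, in the spatio-temporal case, over short image sequences) and keep the best match.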

153 | 3-d motion estimation in model-based facial image coding - Li, Roivainen, et al. - 1993
Citation Context ...the expressive articulations of a face an important problem in computer vision and computer graphics. Consequently, several researchers have begun to develop methods for tracking of facial expression [10, 11, 16, 18]. These efforts, while exciting and important, have had limitations such as requiring makeup, and hand-initialization of the facial model. In this paper we improve on these previous systems by removing...

131 | Introduction to Random Signal Analysis and Kalman Filtering - Brown - 1983
Citation Context ...tem. This framework uses a continuous time Kalman filter (CTKF) which allows us to estimate the uncorrupted state vector, and produces an optimal least-squares estimate under quite general conditions [2]. The CTKF for the above system is established by the following formulation: $\dot{\hat{X}} = A\hat{X} + L(Y - C\hat{X})$, (2) where $\hat{X}$ is the linear least squares estimate of the state X, which are the motor controls...
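The observer form in equation (2) drives the estimate by its own model dynamics plus a gain L times the innovation Y − C X̂. A minimal numerical sketch, with toy stand-in matrices and a fixed gain rather than the paper's facial dynamics and the Kalman gain machinery of [2]:

```python
import numpy as np

# Minimal sketch of the observer update  x_hat' = A x_hat + L (y - C x_hat),
# integrated with forward Euler. A, C, L and the simulated "true" system
# are toy stand-ins, not the paper's facial-motion model.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])       # toy state dynamics
C = np.array([[1.0, 0.0]])         # we observe only the first state
L = np.array([[2.0], [1.0]])       # fixed observer gain (A - LC stable)

dt = 0.001
x = np.array([1.0, 0.0])           # true state
x_hat = np.zeros(2)                # estimate starts at zero
for _ in range(20000):             # 20 simulated seconds
    y = C @ x                                        # noiseless measurement
    x_hat = x_hat + dt * (A @ x_hat + L @ (y - C @ x_hat))
    x = x + dt * (A @ x)                             # true dynamics

err = np.linalg.norm(x - x_hat)    # estimation error decays over time
```

The estimation error evolves as e' = (A − LC)e, so any gain that makes A − LC stable drives the estimate toward the true state; a Kalman filter chooses L optimally from the noise statistics.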

81 | A vision system for observing and extracting facial action parameters - Essa, Pentland - 1994
Citation Context ...n. We describe a tool for real-time facial tracking, using spatio-temporal normalized correlation measurements [4] from video which are interpreted using a physically-based facial modeling system [7]. The principal difficulty in real-time tracking of facial articulations is the sheer complexity of human facial movement. To represent facial motion using a low-order model, many systems define indep...

38 | Interactive graphics for plastic surgery: A task level analysis and implementation - Pieper, Rosen, et al. - 1992
Citation Context ...h large differences in thickness of neighboring elements are not suitable for convergence [1]. Models for muscles are attached to this physical model of the facial tissue, based on the work of Pieper [13] and Waters [17]. 2.1 Visually extracted Facial Expressions The method of Essa and Pentland [7] provides us with a detailed physical model and also a way of observing and extracting the "action units"...

16 | Lip reading by optical flow - Mase, Pentland - 1990
Citation Context ...rs of a three dimensional wireframe face model, and reproduce facial expression. Requirement of facial markings for successful tracking is a significant limitation of these systems. Mase and Pentland [11, 12] introduced a method to track facial action units using optical flow. Haibo Li, Pertti Roivainen and Robert Forchheimer [10] propose a feedback control loop between vision measurements and a facial mo...

15 | Recognition of facial expressions from optical flow - Mase - 1991
Citation Context ...the expressive articulations of a face an important problem in computer vision and computer graphics. Consequently, several researchers have begun to develop methods for tracking of facial expression [10, 11, 16, 18]. These efforts, while exciting and important, have had limitations such as requiring makeup, and hand-initialization of the facial model. In this paper we improve on these previous systems by removing...

14 | Physically-based modeling for graphics and vision - Essa, Sclaroff, et al. - 1993

9 | Final Report to - Ekman, Huang, et al.
Citation Context ...g of facial articulations is the sheer complexity of human facial movement. To represent facial motion using a low-order model, many systems define independent geometric (i.e., FACS [6]) and physical [5, 7, 17] parameters for modeling facial motion. The combinations of these parameters (mostly called "Action Units") result in a large set of possible facial expressions. The level of detail of facial motion ...

7 | Correlation and Interpolation Networks for Real-time Expression Analysis/Synthesis - Darrell, Essa, et al. - 1995
Citation Context ...r observations, using the Radial Basis Function (RBF) method [15] with linear basis functions. The details of using this interpolation method for real-time expression analysis and synthesis appear in [3]. The RBF training process associates the set of view scores with the facial state, e.g., the motor control parameters for the corresponding expression. If we train views using the entire face as a te...
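RBF interpolation with linear basis functions φ(r) = r, as described above, solves a linear system over pairwise distances between training inputs; evaluation is then a weighted sum of distances to those inputs. A minimal sketch with random stand-in data (not the paper's actual view scores or motor-control parameters):

```python
import numpy as np

# Minimal sketch of RBF interpolation with linear basis functions
# phi(r) = r: associate "view score" vectors with "facial state"
# vectors. The data here are random stand-ins.
rng = np.random.default_rng(1)
scores = rng.random((10, 4))    # 10 training views, 4 correlation scores each
states = rng.random((10, 3))    # corresponding motor-control parameters

def rbf_fit(X, Y):
    # Interpolation matrix: pairwise distances between training inputs.
    Phi = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W = np.linalg.solve(Phi, Y)     # weights so that Phi @ W = Y
    return W

def rbf_eval(x, X, W):
    phi = np.linalg.norm(X - x, axis=-1)    # distances to training inputs
    return phi @ W

W = rbf_fit(scores, states)
# By construction, the interpolant reproduces the training pairs exactly:
recovered = rbf_eval(scores[0], scores, W)
```

At runtime, each new frame's view scores would be fed through `rbf_eval` to produce the motor-control parameters driving the physically-based model.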

3 | VActor animation system - Glenn - 1993
Citation Context ...s parameters via an interpolation process, resulting in a real-time facial tracking system. 1.1 Previous Work There have been several attempts to track facial expressions over time. The VActor system [9], for instance, uses physical probes or infrared markers to measure movement of the face. Another method, which has been used to produce computer animations, is that of Williams et al. [18]. In this a...