Facial Capture - The Process
I did not create this process from scratch; I based it on a technique developed by two previous Savannah College of Art and Design students while I was there for my undergraduate degree. The great thing about this process is that it can be done with minimal cost and minimal equipment.
The camera was a regular digital camera, held in front of the actor's face.
Step 1: Camera Head Rig
Some type of camera rig needs to be built that can hold the camera directly in front of the actor's face. This will help to stabilize the footage later.

I built my rig from painter sticks, a protective face-shield helmet, and foam core. Previous students have made the camera rig out of cardboard boxes. This process can also be done with a GoPro camera mounted on a helmet.
Joint / marker placement on the model and markers on the actor.
Step 2: Marker Placement
There are no strict guidelines for where the markers need to be placed. I used simple eyeliner for the markers, but reflective markers can also be used with a black light.

Markers on the actor should coincide with the joints placed on the CG character model.
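To keep that correspondence straight, it can help to write the mapping down before shooting. Below is a minimal sketch of such a map as a Python dictionary; the marker and joint names are hypothetical placeholders (this process does not dictate any naming), and the same kind of map is reused when parenting the locators in Step 6.

```python
# Hypothetical marker-to-joint map; replace these placeholder names with the
# markers you drew on the actor and the joints in your own character rig.
MARKER_TO_JOINT = {
    "brow_L": "jnt_brow_L",
    "brow_R": "jnt_brow_R",
    "cheek_L": "jnt_cheek_L",
    "cheek_R": "jnt_cheek_R",
    "mouth_corner_L": "jnt_mouthCorner_L",
    "mouth_corner_R": "jnt_mouthCorner_R",
    "jaw": "jnt_jaw",
}
```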
Screenshot of how the file should look when stabilizing the footage.
Step 3: Stabilize Footage
I used Adobe After Effects to stabilize my footage.
Create a mask around the actor's face. (Scrub through the footage to make sure the mask never occludes any markers.)
Remember to feather the edges of the mask. The mask reduces the amount of data the footage needs to process and helps the markers track cleanly in the next step.
The actor needs to start each shot with a neutral, calm expression (like returning to a T-pose).
To stabilize: Effect > Distort > Warp Stabilizer

Export as a JPEG Sequence. (Make sure your Format Options are set to the highest quality.)
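If After Effects is not available, a rough equivalent of this stabilize-and-export step can be scripted. The sketch below uses OpenCV in Python to track corners between frames, cancel the accumulated motion relative to the first frame, and write out a high-quality JPEG sequence. This is only an assumed alternative to the Warp Stabilizer workflow above, not part of the original process, and the file names are placeholders.

```python
import cv2
import numpy as np

# Rough scripted alternative to the Warp Stabilizer + JPEG export step.
# Tracks corners from frame to frame, accumulates the camera motion, and
# warps every frame back toward the first frame. File names are placeholders.
cap = cv2.VideoCapture("face_take01.mp4")
ok, first = cap.read()
if not ok:
    raise SystemExit("could not read the footage")
height, width = first.shape[:2]
prev_gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)

back_to_first = np.eye(3)  # maps current-frame coords back to the first frame
cv2.imwrite("stab.0000.jpg", first, [cv2.IMWRITE_JPEG_QUALITY, 100])

index = 1
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Follow strong corners from the previous frame into the current one.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=10)
    if pts_prev is not None:
        pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts_prev, None)
        good_prev = pts_prev[status.flatten() == 1]
        good_curr = pts_curr[status.flatten() == 1]
        if len(good_prev) >= 3:
            # Similarity transform taking the previous frame onto this frame.
            step, _ = cv2.estimateAffinePartial2D(good_prev, good_curr)
            if step is not None:
                step3 = np.vstack([step, [0.0, 0.0, 1.0]])
                # Undo this frame's motion on top of everything undone so far.
                back_to_first = back_to_first @ np.linalg.inv(step3)

    stabilized = cv2.warpAffine(frame, back_to_first[:2], (width, height))
    cv2.imwrite("stab.%04d.jpg" % index, stabilized, [cv2.IMWRITE_JPEG_QUALITY, 100])

    prev_gray = gray
    index += 1

cap.release()
```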
Things to look out for when using the converter
Step 5: Import Tracks
For this process, you will need to download a script.
Survey Solver Tracking Converter:
Follow the instructions that come with the script to install it. Once you click on the solver icon, a new dialog box should appear.
Set the Import Type to Movimento, then locate the .rz2 file you previously created.
Set the Export Type to Maya Locator / 2D.
In the Settings Tab, make sure your Plate Resolution and Focal Length match the camera you used to capture the original footage.
For Image Plane, locate the JPEG Sequence you used to track your markers.
Once all of your settings are configured, click Convert Data.
The result will initially appear as a closed group in the Outliner. Open the group and make the locators visible by selecting all of them and pressing Shift + S.
Eyes are not captured in this process. They will still need to be animated.
Step 6: Parent Locators
The converter creates a group called SS_Scene. The locators will not be visible right away; open the Outliner and make them visible (Shift + S).
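If the locators still refuse to appear, they can also be revealed from Maya's Script Editor. The snippet below is a minimal sketch that assumes every locator sits directly under the SS_Scene group; only the group name comes from the converter, everything else is an assumption.

```python
# Reveal the converter's locators from the Script Editor instead of the
# keyboard shortcut. Assumes the locators are direct children of SS_Scene.
import maya.cmds as cmds

locators = cmds.listRelatives("SS_Scene", children=True, fullPath=True) or []
if locators:
    cmds.showHidden(locators)  # equivalent of Display > Show > Show Selection
```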

Next, you need to parent the locators to the corresponding joints on your model. The order in which you select the locator and joint is crucial.

Select the locator first, then Shift + Left Mouse Button the joint last. Once the two are selected, you can parent the locator to the joint: go to the Constrain menu > Parent (click on the options box). The regular default parent settings are fine.
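With a full set of markers this gets tedious by hand, so here is a minimal scripted sketch of the same step. It loops over the hypothetical marker-to-joint map from Step 2 and creates a default parent constraint for each pair, with the locator as the first (driving) object and the joint last, exactly as described above. The locator and joint names, and the "SS_Scene|" naming convention, are placeholder assumptions.

```python
# Parent-constrain each locator (selected first) to its matching joint
# (selected last), mirroring the manual Constrain > Parent step with
# default options. Names below are hypothetical placeholders.
import maya.cmds as cmds

MARKER_TO_JOINT = {
    "brow_L": "jnt_brow_L",
    "mouth_corner_L": "jnt_mouthCorner_L",
    # ...one entry for every marker / joint pair
}

for marker, joint in MARKER_TO_JOINT.items():
    locator = "SS_Scene|" + marker  # assumed naming convention
    if cmds.objExists(locator) and cmds.objExists(joint):
        cmds.parentConstraint(locator, joint)  # default options, as in the dialog
```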

Once all of the locators are parented to the corresponding joints, your facial capture is complete.