Choreography creation

From Valve Developer Community
Revision as of 11:01, 10 August 2006 by TomEdwards (talk | contribs) (Created, let's hope I finish this one day...)

This tutorial covers the creation of choreographed scenes (or simply scenes) in Source engine games or mods.


Episode One introduced version two of Source's powerful facial animation system. A scene is a set of instructions, stored in a .VCD file, that dictates scripted or semi-scripted behaviour for NPCs: speech, lip syncing, facial expressions, and full-body and blendable animations. By combining some or all of those components, we create a choreographed scene.
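For orientation, a stripped-down .VCD might look like the sketch below. The actor, channel and sound names are hypothetical, and real scenes are authored in FacePoser rather than by hand, so treat this only as an illustration of the file's nested actor/channel/event structure:

```
// Choreo version 1
actor "alyx"
{
	channel "audio"
	{
		event speak "greeting"
		{
			time 0.000000 2.500000
			param "npc_alyx.hello_there"
		}
	}
}
```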
Choreography should be used for all but the very simplest of NPC speech. A scene defines when the speech takes place, which is vital for synchronising expression and animation data with dialogue. Speech files should be 16-bit, 44kHz mono .WAV files.
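As a sanity check before authoring, the speech format requirement above can be verified programmatically. A minimal sketch using Python's standard wave module (the file path in the usage comment is hypothetical):

```python
import wave

def is_valid_speech_wav(path):
    """Check that a .WAV file is 16-bit, 44.1 kHz mono, as Source expects."""
    with wave.open(path, "rb") as w:
        return (w.getnchannels() == 1          # mono
                and w.getsampwidth() == 2      # 2 bytes per sample = 16-bit
                and w.getframerate() == 44100) # 44.1 kHz sample rate

# Example: is_valid_speech_wav("sound/npc/alyx/hello.wav")
```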
Lip synch
Lip synch data is stored in the raw .WAV speech file rather than the choreography VCD for portability, reliability and localisation reasons, but is still created with FacePoser. While FacePoser can automatically extract lip synch data (Phonemes), better results can usually be attained through hand-tweaking.
Facial expressions
Much effort has been put into the Source engine's facial animation technology, and the result has been a flexible, portable, slider-based system. Facial expressions (Flex Animations) are created in FacePoser and stored either directly in a scene's .VCD, or reusably in an external .TXT file (Expressions).
Full body animations
FacePoser is used to control the animations of a scene's NPCs. Animations can be seamlessly blended, have their intensity altered, and even have their playback speed changed at any time (Gestures), or take complete and uncompromised control for their duration (Sequences).
Blendable animations
While implemented identically to full-body animations, blendable animations (Blend Gestures) are small movements that can be layered above Gestures to add more depth to an animation without losing flexibility.
An example of a Blend Gesture is Dr. Breen's laughing animation from the climax of Half-Life 2, b_bg_laugh. Viewed in HLMV, it is no more than the upper body jolting slightly.

Before you start

FacePoser playing back a complex Half-Life 2 scene.

There are several limitations you should be aware of before you begin creating choreography:

  • You are a slave to your voice actor's talents. Unless you are mixing it up a little and having recorded dialogue follow choreographed animations, or making a scene without speech, you will always be following his or her cues and delivery. If your written and/or recorded dialogue is bland, you will have a hard time creating choreography that isn't either ridiculous or bland itself. If you find spotting 'good' dialogue difficult, it's because you've never tried with anything bad…
  • You are limited to your digital actors' animations. While FacePoser can manipulate Gestures with aplomb, it cannot create new ones. Valve's stock actors in particular suffer from limited animation sets: their libraries have been designed around what Valve used, not what third-party choreographers might need. Prominent actors like Alyx will probably have what you are looking for, but minor or generic characters will present problems without custom animation work. Until and unless Valve provide a wider set of stock animations, serious choreographers should learn animation skills or join a mod team.

Tutorial structure

This tutorial will cover all aspects of choreography creation, from the drafting of basic structure to its implementation in a gameplay environment. There is a menu in the top right of each page for navigation, but this page also includes a complete tutorial map for quick reference: