Response System

The Response System is used to decide what line of speech (and/or animation) an NPC (or player) should use when they want to say something, or whether to say anything at all.

Most Response System users are meant to speak in response to events unfolding in the game. Speech can be triggered from the code or from inputs in the map. Triggering speech involves a response concept, which corresponds to responses in the system. Concepts are categories of speech tied to a specific kind of event, like danger being noticed (TLK_DANGER) or an enemy being seen (TLK_STARTCOMBAT). While a concept can be used alone for basic and simple dialogue, speech can also be triggered with various criteria, which describe things related to the concept and/or the current state of the speaker and the world around them. This allows for complex dialogue trees that use different lines under different criteria.

For example, npc_alyx in HL2's episodic series speaks the TLK_PLAYER_KILLED_NPC concept when the player kills an enemy NPC. This concept has additional criteria for conditions related to the NPC or the way it was killed. If the player killed the NPC with a headshot, the response system can pick a response that requires that criterion, which usually results in Alyx complimenting the player's shot or remarking that it was a headshot.

The Response System normally uses scripts in the scripts/talker directory. When it receives a concept and its criteria, it searches through the response scripts for a rule that matches its concept and criteria. When it finds one, it uses that rule's response or selection of responses. A response can be a soundscript, a sentence, or even an instanced choreographed scene.

Despite the advantages, most NPCs do not use this system by default. For example, npc_pigeon only uses soundscripts by default and does not use the Response System.

Here's a quick list of places where the Response System is used:

  • In Half-Life 2, most player companion NPCs (citizens, Alyx, etc.) use the Response System for NPC speech.
  • In the Left 4 Dead series, all of the survivors use the Response System to respond to the players' actions and events in the world.
  • In Team Fortress 2, all classes use the Response System for voice commands and taunt responses.
  • In Portal 2, Atlas and P-body (the robots in co-op mode) both use the Response System for taunt gestures.
  • In Counter-Strike: Global Offensive, all player characters use the Response System and keep track of which responses have already been used.
Code:The Response System is normally used on classes derived from CBaseFlex with the CAI_ExpresserHost<> template class, which is automatically implemented on CAI_BaseActor. All NPCs and players already derive from CBaseFlex, so you'd usually just want to use CAI_ExpresserHost, but the Response System can be used in a more limited way on any entity using the DispatchResponse() function, which is how env_speaker works. GetResponseSystem() can be used to make concepts search a specific response system tree and must be overridden if you are planning on using DispatchResponse() without CAI_ExpresserHost<>.

Purpose

The Response System was created so that NPCs, players, etc. use a unified and robust system for managing speech. It not only allows developers and writers to create complex dialogue trees, but also allows modders to easily modify or add speech without writing code or modifying a BSP file, while still having complete control over how and when speech is used.

This system is best used for speech that comes out of the AI system, since those lines will be spoken many times throughout a game. (e.g. Citizens say something when they reload their weapon. A lot of citizens will reload their weapons during the course of the game.)

Here's an example of a complex dialogue tree from a NPC utilizing the Response System:

  • I just killed an enemy.
    • Did I use a shotgun?
      • "Eat shotgun!"
    • Was the enemy really close?
      • "You got too close!"
      • "Thanks for getting close!"
    • Was the enemy a headcrab?
      • "You're not getting MY head!"
      • "I hate headcrabs!"

You could also use combinations of criteria.

  • Did I use a shotgun against an npc_headcrab really close to me?
    • "You got too close to my shotgun, head crab!"

You can also control individual lines so they aren't repeated for a set amount of time, or never repeat at all.

Structure

Note:"Criterion" is the singular form of "criteria".

The Response System is made up of four core pieces: Concepts, Criteria, Rules, and Response Groups. The way these are used is as follows:

  1. An NPC requests a line of speech for a speech concept.
    • For example, let's say our NPC requests a line of speech for the TLK_SHOT concept. An NPC speaks this concept whenever they're shot by an enemy.
  2. The NPC gathers a bunch of criteria reflecting the NPC's current state and other relevant data about the state of the world. Many concepts also have modifiers, criteria unique to specific concepts.
    • So when our NPC requests a line for the TLK_SHOT concept, they may pass it in with modifiers that include the type of enemy that shot them, the amount of damage they took, etc. General criteria related to the state of the NPC and the state of the world around them is appended when the response is about to be dispatched, which may include the NPC's current health, etc.
  3. The concept and criteria are passed into the NPC's response system, which is normally a single global instance.
  4. The Response System searches through its big list of rules.
    • Each rule contains a list of criteria which is tested against the criteria passed in by the NPC. The concept is treated as a high-level criterion in this phase.
    • Each rule gets a score based upon how many of the criteria are true. If at least one criterion is marked as required and isn't satisfied, the rule will never be chosen. Most criteria are set to be required, but criteria that aren't required simply boost the rule's score.
      • So, in our NPC-being-shot example, there might be a couple of rules used to decide which line to speak. One rule might contain criteria that test to see if our NPC is very close to the enemy who shot him. Another rule might test to see if it was a specific type of enemy that shot him (such as a Combine Soldier). Another rule might test to see if our NPC is nearly dead after the shot (perhaps <25% of their health is left).
  5. The Response System scores all of the rules in its list, and chooses the one that scores highest.
  6. The chosen rule then specifies a response group; a script-form sketch of one such rule appears after this list.
  7. A response group is simply a list of possible responses, each of which might be a line of speech and/or animation. One response is selected based on the settings of the response group and the individual responses. When a valid response is chosen, it is given back to the NPC to play.
    • In our example, a response group for the rule that checks to see if our NPC's health is <25% might have several ways of responding to TLK_SHOT. For example, the group might contain a list of lines like "One more shot like that, and I'm done for!", "Oh man, I'm in trouble!", or "I need medical attention over here!"
    • Another rule for TLK_SHOT could check if the enemy shooting them is a Combine soldier, and point to a response group with lines like "That Combine soldier's got me pinned down!" or "Help me with this soldier!". Another rule could check for the enemy being a combine gunship, and point to a group with "That gunship is kicking my butt!" and "Someone help me take down that gunship before it kills me!"
  8. If no rule matches the given criteria, or the chosen response group doesn't repeat itself and has been exhausted, the NPC doesn't say anything.
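
For instance, the "nearly dead" rule from the walkthrough above might look like this in script form. This is only a hypothetical sketch (the names, the threshold, and the scene paths are invented here); the exact syntax is detailed in the sections below.

// Matches only when the speaker is trying to speak TLK_SHOT.
criterion "ConceptShot" "concept" "TLK_SHOT" required
// Matches only when less than 25% of the speaker's health is left.
criterion "IsNearlyDead" "healthfrac" "<0.25" required

rule CitizenShotNearlyDead
{
   criteria     ConceptShot IsNearlyDead
   response     CitizenShotNearlyDead
}

response CitizenShotNearlyDead
{
   scene "scenes/mymod/onemoreshot.vcd"
   scene "scenes/mymod/introuble.vcd"
}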

Concepts

A concept is a string that represents the high-level reason for the character's speech attempt. There is a set of concepts defined inside the code which are invoked automatically, but a concept is technically just a special, high-level criterion that characters keep track of. Concepts aren't hard-wired to anything, and you can freely create your own in the response files. You can invoke concepts from the code or with an actor's DispatchResponse or SpeakResponseConcept input.

Here's a list of some of the predefined NPC concepts used in Half-Life 2:

TLK_HELLO		When I first meet the player.
TLK_IDLE		When I have been idle for a while.
TLK_USE		When the player tries to +USE me.
TLK_PLPUSH		When the player pushes me away.
TLK_STARE		When the player stares at me for a while.
TLK_DANGER		When I sense something nearby that's dangerous (e.g. a grenade).
TLK_WOUND		When I've taken damage.
TLK_HIDEANDRELOAD	When I decide to hide and reload my gun.
TLK_PLYR_PHYSATK	When I've been hit by an object thrown by the player.

Not all NPCs speak all concepts, and not all NPCs speak concepts under the same circumstances. See list of response concepts for a full list.
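
Since a concept is just a string matched by a criterion, a custom concept needs nothing more than a rule that matches it. A hypothetical sketch (the concept name, rule, group, and scene path are all invented for illustration; see the following sections for the syntax):

// Matches a custom concept for congratulating the player on solving a puzzle.
criterion "ConceptPuzzleSolved" "concept" "TLK_PUZZLE_SOLVED" required weight 5

rule CitizenPuzzleSolved
{
   criteria     ConceptPuzzleSolved
   response     CitizenPuzzleSolved
}

response CitizenPuzzleSolved
{
   scene "scenes/mymod/puzzle_praise01.vcd"
}

A map could then make an actor speak this concept by firing its DispatchResponse or SpeakResponseConcept input with TLK_PUZZLE_SOLVED as the parameter.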

Criteria Set

A criteria set is a collection of conditions containing data related to the speaker's current state and the circumstances of the concept at the moment a speech attempt is made. It can be thought of as a set of KeyValues. Here is an example of a criteria set created by an npc_alyx who is trying to speak because she just killed her current enemy:

concept                = TLK_ENEMY_DEAD          The concept the NPC is trying to speak.
map                    = d3_c17_07               The name of the current map.
classname              = npc_alyx                The classname of the speaking NPC.
name                   = alyx                    The targetname of the speaking NPC.
health                 = 75                      The health of the speaking NPC.
healthfrac             = 0.9375                  The health of the speaking NPC, as a fraction of the NPC's max. (npc_alyx's max health is 80 by default)
skill.cfg              = 1                       The current skill level.
timesinceseenplayer    = 0.090000                The amount of time since the speaking NPC has seen the player.
distancetoenemy        = 312.639679              The distance from the speaking NPC to its current enemy.
activity               = ACT_RUN                 The animation activity the speaking NPC is running.
npcstate               = [NPCState::Combat]      The AI state of the speaking NPC.
enemy                  = npc_combine_s           The classname of the speaking NPC's current enemy.
speed                  = 79.235                  The movement speed of the speaking NPC.
weapon                 = weapon_alyxgun          The current weapon being held by the speaking NPC.
distancetoplayer       = 211.240692              The distance from the speaking NPC to the player.
seeplayer              = 1                       Whether or not the speaking NPC can see the player.
seenbyplayer           = 0                       Whether or not the speaking NPC is within the player's view.
readiness              = agitated                The readiness level of the speaking NPC.
playerhealth           = 100                     The player's current health.
playerhealthfrac       = 1.000                   The player's current health, as a fraction of the player's max.
playerweapon           = weapon_shotgun          The current weapon being held by the player.
playeractivity         = ACT_WALK                The animation activity the player is running.
playerspeed            = 0.000                   The movement speed of the player.

This concept does not have any modifiers by default. All criteria in the above list are general and gathered for each concept.

Criteria such as the ones in the list above can be checked by a rule's list of criteria, and used to make decisions about which response group to use for the desired concept. For instance:

  • The enemy criterion could be used to pick the right response to the TLK_ENEMY_DEAD concept. Instead of making a general statement, Alyx could say "I took care of that soldier!" or "I took care of that headcrab!".
  • The healthfrac field could be used to choose a "Phew, that was close!" line if her health was <20% when she finished off her enemy.
  • The distancetoenemy field could be used to choose different lines for when she killed her enemy at long or short range, e.g. "Those guys are scary when they get that close!" or "It's not easy hitting 'em at that range."

Even though the criteria listed above are general and not concept-specific, the criteria will always vary under different circumstances and might not always be available (i.e. NPCs that aren't in combat won't have enemy or distancetoenemy criteria). Additionally, mapmakers can append extra criteria to specific NPCs, or to all NPCs in the game. See #Response contexts for more info.

Rule Criteria

Rules have a list of criteria that are tested against the character's criteria set. When a rule is scored, each criterion is checked against the given data, and the rule receives points for criteria that successfully match. The amount of points a criterion earns for the rule is determined by the criterion's weight.

Criteria are defined inside the script files (see below). The following format is used:

criterion <criterion name> <key to check> <desired value> <optional: weight X> <optional: required>

The parameters are as follows:

criterion name
The name of the criterion. Must not match an existing criterion.
key to check
The key within the character's criteria set that this criterion will check.
desired value
The desired value of the key within the criteria set. This can take multiple forms:
  • Numeric values: "0", "1", or "100".
  • Inverse Numeric values: "!=0" would match if the value is not equal to 0.
  • String value: "npc_alyx", "weapon_alyxgun", or "npc_combine_s".
  • Enumerated value: "[NPCState::Combat]".
  • Ranges:
    • ">0" : Match if the value is greater than 0.
    • "<=0.5" : Match if the value is less than, or equal to, 0.5.
    • ">10,<=50" : Match if the value is greater than 10 and less than, or equal to, 50.
    • ">0,<[NPCState::Alert] : Match if the value is greater than 0 and less then the value of enumeration for NPCState::Alert.
Note:Does not support wildcards by default.
weight X
An optional parameter, where X is the number of points this criterion is worth if it matches. If unspecified, a criterion is worth 1 point by default.
required
An optional parameter that states that this criterion is required for rules containing it to be used at all. If a required criterion does not match successfully, rules containing it score 0 and are immediately skipped. Most criteria use this parameter.

Some examples from Half-Life 2:

  • This defines a criterion named PlayerNear, which checks to make sure the player is within 500 units of the speaking NPC.
criterion "PlayerNear" "distancetoplayer" "<500" required
  • This defines a criterion named IsCitizen, which checks to make sure the speaking NPC is a npc_citizen.
criterion "IsCitizen" "classname" "npc_citizen" "required"
  • This defines a criterion named IsMap_d3_c17_12, which checks to make sure the game is currently on d3_c17_12.bsp. Useful for making all the citizens in one map say different lines than other maps.
criterion "IsMap_d3_c17_12" "map" "d3_c17_12" "required"
  • This defines a criterion named IsBob, which checks to make sure the speaking NPC has a targetname of "bob". This is a unique citizen in the game, and this criterion makes it easy to have him say unique lines.
criterion "IsBob" "targetname" "bob" required
Tip:A concept is a criterion which almost always has a weight of 5.
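
Applied to the npc_alyx criteria set shown earlier, the checks suggested in that section could be written as follows (the criterion names and thresholds here are hypothetical):

// The current enemy is a Combine soldier.
criterion "EnemyWasSoldier" "enemy" "npc_combine_s" required
// Less than 20% of the speaker's health is left.
criterion "WasNearlyKilled" "healthfrac" "<0.2" required
// The enemy is within 256 units, or beyond 1024 units, of the speaker.
criterion "EnemyWasClose" "distancetoenemy" "<256" required
criterion "EnemyWasFar" "distancetoenemy" ">1024" required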

Rules

A rule contains a list of criteria and at least one response group. The rule receives points for each of the criteria that successfully matches the speaker's criteria set. The highest scoring rule will direct to one of its response groups, which is used to determine the exact speech the NPC will use. Rules are defined inside the script files (see below). The following format is used:

rule <rule name>
{
   criteria <criterion name 1> [optional: <criterion name 2> <criterion name 3> etc.]
   response <response group name> [optional: <response group name 2> etc.]
   [optional: matchonce]
   [optional: applyContext <data>]
}

The parameters are as follows:

  • rule name : The name of the rule. Must not match an existing rule.
  • criteria : The list of criteria the rule should score with.
  • response : The list of response groups that should be chosen if this rule scores the highest.
  • matchonce : An optional parameter which, if specified, causes this rule to be deactivated once it has been chosen once.
  • applyContext : An optional parameter which applies a response context. See #Response contexts for more information.

For example, the following text defines a rule called CitizenTalkStare. ConceptTalkStare is a criterion that checks to make sure the concept the speaking NPC wants to speak is "TLK_STARE". IsCitizen is a criterion that checks to make sure the speaking NPC is a citizen. NPCIdle is a criterion that checks to make sure the NPC's state is "NPCState::Idle". If this rule scores highest, the response group that will be used is CitizenTalkStare.

rule CitizenTalkStare
{
   criteria     ConceptTalkStare IsCitizen NPCIdle
   response     CitizenTalkStare
}

Note that the rule name and the response group name can be identical, because rule names need only be unique amongst rules, and response group names need only be unique amongst groups.
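
As a sketch of the optional parameters in use (this assumes the criteria and a BobGreeting response group are defined elsewhere in the scripts; the context key is invented, and applyContext is covered under #Response contexts):

rule CitizenGreetBob
{
   criteria     ConceptTalkHello IsCitizen IsBob
   response     BobGreeting
   matchonce                       // this rule can only ever be chosen once
   applyContext "greeted_bob:1"    // sets a context key on the speaker when chosen
}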

Response Groups

A response group contains a set of possible responses, along with some optional data that defines how the responses should be used. When a response group is chosen by a rule, a response is chosen from the list and given back to the speaker to use. Response groups are defined inside the script files (see below). The following format is used:

response <response name>
{
   [optional: permitrepeats]
   [optional: sequential]	  
   [optional: norepeat]		  

   <response type> <response> <optional: ...>
   <response type> <response> <optional: ...>
   <response type> <response> <optional: ...>
}

The response group parameters are as follows:

  • permitrepeats : If specified, responses in this group are allowed to repeat. If unspecified, the default behavior is to use all responses in the list before repeating any.
  • sequential : If specified, responses will be used in the order they're listed in the group. If unspecified, the default behavior is to randomly choose responses from the list.
  • norepeat : If specified, once all responses in the list have been played, the response group will be disabled. Any rules that choose this response group will return no response to the speaker.

The response group then lists as many responses as desired, with each response being one of the following types:

  • speak : The response is a soundscript (see Soundscripts) or raw audio file.
  • sentence : The response is a sentence name from sentences.txt.
  • scene : The response is a .vcd file. See Choreography Implementation for more information.
  • response : The response is a reference to another response group.
  • print : The response is some text that should be printed at the speaker's position in developer 1 (used for placeholder responses).

The response is the actual response or path for the response type. If the response type is "scene", this is the .vcd file. If the response type is "speak", this is the name of the soundscript entry or raw file path. Each response supports a variety of optional parameters.

  • Post-response delay parameters:
    • nodelay : After the response has finished, the speaker is allowed to speak again immediately.
    • defaultdelay : After the response has finished, the speaker won't be allowed to speak for a random amount of time between 2.8 and 3.2 seconds.
    • delay X : After the response has finished, the speaker won't be allowed to speak for X seconds. X can also be a range, e.g. "5.5,7.5"
  • weapondelay X : When the response starts, the speaker will not fire their weapon for X seconds. Only available on HL2 NPC allies by default.
  • speakonce : Prevents the response from being used more than once.
  • odds X : If specified, then when this response is chosen, there is a chance the speaker will say nothing instead of saying the response. The odds value is a percentage (0-100) chance of cancelling the response, meaning an odds of 25 gives a 25% chance of saying nothing.
  • respeakdelay X : If specified, this response may only be used if the concept has not been spoken within the last X seconds. X can also be a range, e.g. "5.5,7.5"
  • soundlevel : If specified, this soundlevel should be used for the response instead of the default SNDLVL_TALKING.
  • displayfirst : If specified, this response should be used first (ignores the weight parameter).
  • displaylast : If specified, this response should be used last (ignores the weight parameter).
  • weight : If specified, used to weight the selection of the responses in the list. By default, all responses have a weight of 1. Note that responses aren't repeated until all of the other responses have been chosen, so once all of the high-weight responses have been used, the system will only pick from the lower-weight ones. This can be counteracted with permitrepeats.
  • noscene : Stops the Response System from creating an auto-generated scene for a speak response.
  • stop_on_nonidle : When the response is spoken, stops the scene when the NPC enters a non-idle state. Only works in the HL2 episodic series on scene responses.
  • predelay : When the response is chosen, it won't actually be spoken until X seconds have passed. Only works on scene responses. X can also be a range, e.g. "5.5,7.5"

For example, the following response group is used by citizens to respond to the TLK_STARE concept. Citizens will use the responses in the list in the order they're listed (due to the sequential parameter). Each response, when chosen, stops the NPC from talking for a random amount of time between 10 & 20 seconds.

response "CitizenTalkStare"
{
   sequential
   scene "scenes/npc/$gender01/doingsomething.vcd" delay "10,20"
   scene "scenes/npc/$gender01/getgoingsoon.vcd"  delay "10,20"
   scene "scenes/npc/$gender01/waitingsomebody.vcd"  delay "10,20"
}
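
Here is another, purely hypothetical group exercising a few of the other parameters described above (the scene paths are placeholders):

response "CitizenShotLines"
{
   norepeat    // disable this group once every line has been used

   // Common lines, three times as likely to be picked as the rare one.
   scene "scenes/mymod/shot_common01.vcd" weight 3 delay "8,12"
   scene "scenes/mymod/shot_common02.vcd" weight 3 delay "8,12"
   // A rare line: subject to the odds roll described above, and never used twice.
   scene "scenes/mymod/shot_rare.vcd" odds 25 speakonce
}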

Script files

The file scripts/talker/response_rules.txt is the base script that contains all the criteria, rules, and response groups used by the Response Rules System. It can also include other files using the #include keyword, which allows you to cleanly divide the rules up according to NPC, map, and so on. Note that some entities, like env_speaker, specify their own script files containing a subset of criteria, rules, and response groups for the entity to use. See scripts/talker/terminal_pa.txt for an example used by the terminal announcement at the train station.
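
A base file that splits its rules across several included files might look like this (the file names beyond response_rules.txt itself are hypothetical):

// scripts/talker/response_rules.txt
#include "npc_citizen.txt"
#include "npc_alyx.txt"
#include "mymod_custom.txt"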

Response contexts

To do: Unique section on response contexts

Advanced response rules usage

There are several powerful ways that the ResponseContext keyfield and the AddContext input can be used to make NPCs more responsive to the changing state of the world they're in. Both the ResponseContext keyfield and the AddContext input take a string parameter, in the following format: key:value;key:value;key:value;...

It can be used as follows:

Firing custom response events with speakresponseconcept
Absolutely anything can be passed to an NPC as a response concept, as long as it is found somewhere in the script files. You aren't limited to the AI's predefined TLK_* concepts.
For example, you might want NPCs from an onlooking group to speak congratulatory response concepts when a player successfully solves part of a puzzle.
Setting custom KeyValues on an NPC via the ResponseContext keyfield
The ResponseContext keyfield is a field inside all NPCs that allows you to set arbitrary KeyValues that will be passed in to the Response Rules system whenever that NPC requests lines. The KeyValues that you set will be appended to the Response Data passed into the Response Rules.
For example, you may have a single citizen who you want to have a custom way of saying Hello! to the player. You could set that citizen's ResponseContext keyfield to custom_hello_guy:1, and then define a new criterion that checks to see if the custom_hello_guy key is set to 1. Then you would make a new rule for the TLK_HELLO concept that also contains your new criterion, and this rule would score higher when the custom guy tries to speak TLK_HELLO, as sketched below.
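
In script form, that custom greeting might look like this (the criterion, rule, and group names, and the scene path, are hypothetical; ConceptTalkHello is assumed to match the TLK_HELLO concept):

// The custom_hello_guy context key set on the citizen shows up in his criteria set.
criterion "IsCustomHelloGuy" "custom_hello_guy" "1" required

rule CitizenCustomHello
{
   criteria     ConceptTalkHello IsCitizen IsCustomHelloGuy
   response     CitizenCustomHello
}

response CitizenCustomHello
{
   scene "scenes/mymod/customhello.vcd"
}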
Firing the AddContext input on NPCs
The AddContext input on an NPC allows you to dynamically set arbitrary KeyValues that will be passed in to the Response Rules system whenever that NPC requests lines. The KeyValues that you set will be appended to the Response Data passed into the Response Rules.
For example, you might want a Citizen to say something differently if the player has moved a cup sitting on a nearby table. You would connect the OnPhysGunPickup output of the cup to the AddContext input on the Citizen, with a parameter of something like player_pickedup_cup:1. Then you could define a new criterion that checks to see if the player_pickedup_cup key is set to 1, and use that criterion in a rule that chooses an appropriate line.
Firing the AddContext input on the World
If you fire the AddContext input on the worldspawn entity, the KeyValues you specify will be passed into the Response Rules system every time any NPC requests lines on that map. This makes the worldspawn entity a valuable storage place for data that all NPCs on the map should use when choosing lines.
For example, you might want all the Citizens in a town to choose different lines if the mayor has died. You would connect the OnDeath output of the mayor NPC to the AddContext input of the worldspawn entity, with a parameter of something like mayor_is_dead:1. Then you could define a new criterion that checks that the mayor_is_dead key is set to 1, and have Citizen rules that use it to choose different lines after the mayor has died.
Firing the AddContext input on the player
If you fire the AddContext input on the player entity (using the !player targetname), the KeyValues will be passed to the Response System every time any NPC requests lines. In addition, these KeyValues will persist over level transitions, just like the player. This makes the player entity a valuable storage place for data that NPCs in later maps should use when choosing lines.
Note:The word player will be prefixed to every key passed to !player.
For example, you might want a Citizen to say something differently if the player killed a certain Combine Soldier in a previous map. You would connect the OnDeath output of the Combine Soldier to the AddContext input of the player entity, with a parameter of something like that_soldier_dead:1. Then you could define a new criterion that checks that the playerthat_soldier_dead key is set to 1, and have Citizen rules that use it to choose different lines. The playerthat_soldier_dead key would be set to 1 for all NPCs that speak throughout the rest of the game.
Changing existing KeyValues in the Response Data
If you fire the AddContext input on an entity, and pass in a Key name that already has a Value in that entity, it'll simply overwrite the Value for the Key.

Debugging

To do: sv_debugresponses

Notes

  • To use the full Response Rules system, an NPC should derive from the CAI_BaseActor class (see the code note at the top of this article for more limited alternatives).
  • See the scripts/talker/npc_*.txt files for examples of specific NPC response rules.
  • The scripts/talker/response_rules.txt is the manifest for the response rules system. If you add new response rule script files for your new NPC, make sure you #include it at the end of the scripts/talker/response_rules.txt file.

See also