Response System
Introduction
In HL2, NPC speech and actions are driven by "Concepts." The base script file for the Response System is /scripts/talker/response_rules.txt; it contains the criterion, rule, and response definitions.
When a concept is dispatched, the Response System checks each rule against the current criteria set and gives it a numeric score based on the rule's set of criteria. The system then picks the rule with the highest score and dispatches the response that rule points to. A minimal end-to-end script is sketched below.
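To orient yourself, here is a minimal, hypothetical script fragment showing the three pieces working together. TLK_HELLO is one of the standard concepts declared in dlls/ai_playerally.h; the criterion, rule, and response names and the sound entries are made up for illustration and would need to exist in your own scripts and sounds.txt.

criterion "ConceptHello" "concept" "TLK_HELLO" required

response "HelloResponses"
{
	speak "MyNPC.Hello01"	// hypothetical sounds.txt entry
	speak "MyNPC.Hello02"	// hypothetical sounds.txt entry
}

rule "HelloRule"
{
	criteria ConceptHello
	response HelloResponses
}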
Concept
A Concept is a high-level state that the code is trying to convey, such as "say hello" or "say you're mad."
- Code: dlls/ai_playerally.h
- Entity input: "DispatchResponse"
Enumeration
An enumeration declares an enumerated type so that comparisons can be matched against the string versions of the type's values instead of raw numbers.
enumeration <enumerationname> { "key1" "value1" "key2" "value2" ...etc. }
The code and criteria refer to enumerations with square brackets and a double colon separator, e.g.: [enumerationname::key1]
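For example, an NPC state enumeration along the lines of the one in the stock response_rules.txt might look like this (the values shown are illustrative and must match the engine's NPC state constants):

enumeration "NPCState"
{
	"None"		"0"
	"Idle"		"1"
	"Alert"		"2"
	"Combat"	"3"
}

A criterion can then compare against [NPCState::Idle] instead of the raw number 1.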
Criterion
A criterion is a match condition. If it doesn't match, its score is 0; if it does match, its score is the criterion's weight value. Example criteria are shown after the parameter list below.
criterion <criterionname> <matchkey> <matchvalue> weight nnn required
- matchkey ; one of:
  - Concept
  - Map
  - Classname
  - Name
  - Health
  - HealthFrac
  - PlayerHealth
  - Player
  - PlayerWeapon
  - PlayerActivity
  - PlayerSpeed
  - NPCState
  - distancetoplayer
  - seeplayer
  - seenbyplayer
  - timesincecombat
  - shotloc
  - enemy
  - gordon_precriminal
  - attacking_with_weapon
  - hurt_by_fire
  - numselected
  - useradio
  - commandpoint_dist_to_npc
  - commandpoint_dist_to_player
  - numjoining
  - reinforcement
- matchvalue ; for example:
  - "0" ; numeric match to value 0
  - "1" ; numeric match to value 1
  - "weapon_smg1" ; string match to the string weapon_smg1
  - "[npcstate::idle]" ; match an enumeration value by looking up its numeric value
  - ">0" ; match if greater than zero
  - ">10,<=50" ; match if greater than ten and less than or equal to 50
  - ">0,<[npcstate::alert]" ; match if greater than zero and less than the value of the enumeration for alert
  - "!=0" ; match if not equal to zero
- weight = floating point weighting for score assuming criteria match (default value 1.0)
- required: if a rule has one or more criteria with the required flag set, then if any such criteria fail, the entire rule receives a score of zero
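A few illustrative criterion definitions in the pattern the stock scripts use (the names and thresholds here are hypothetical; TLK_PLHURT is assumed to be a valid concept):

criterion "ConceptPlayerHurt" "concept"          "TLK_PLHURT"       required
criterion "IsCitizen"         "classname"        "npc_citizen"      required
criterion "PlayerNearby"      "distancetoplayer" "<360"             weight 5
criterion "IsIdle"            "npcstate"         "[NPCState::Idle]" required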
Context
Map-placed entities can specify up to three "context" keypairs. They take the form:
"key:value" (key, single colon separator, value)
When an entity with such context keypairs is asked to dispatch a response, the keypairs are added to the criteria set passed to the rule system. This lets map-placed entities and triggers specify their own context keypairs and hook them up to response rules for map-specific, appropriate responses. An example follows the list below.
- World
- NPC
- Property
  - ResponseContext(string) : "Response Contexts" : "" : "Response system context(s) for this entity. Format should be: 'key:value,key2:value2,etc'. When this entity speaks, the list of keys & values will be passed to the response rules system."
- Input
  - AddContext(string) : "Adds a context to this entity's list of response contexts. The format should be 'key:value'."
  - RemoveContext(string) : "Remove a context from this entity's list of response contexts. The name should match the 'key' of a previously added context."
  - ClearContext(void) : "Removes all contexts in this entity's list of response contexts."
- Property
  - env_speaker
  - ai_speechfilter
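A sketch of how a map-placed context can drive a rule. The entity, context key, and all names here are hypothetical; the point is that the context key becomes an ordinary matchkey in the criteria set.

// In Hammer, on an npc_citizen named citizen_01:
//   ResponseContext  "mood:nervous"

// In the rule script:
criterion "IsNervous" "mood" "nervous" required

rule "NervousHello"
{
	criteria ConceptHello IsNervous
	response NervousHelloLines
}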
Rule
A rule consists of one or more criteria and a response. The final score for a rule is the sum of the scores of all of its criteria.
rule <rulename>
{
	criteria name1 [name2 name3 etc.]
	response responsegroupname [responsegroupname2 etc.]
	[matchonce]                                      ; optional parameter
	[ <matchkey> <matchvalue> weight nnn required ]  ; optional inline criteria
}
- matchonce (off by default): means that the rule is deactivated after the first time it is matched
Note: additional "unnamed" criteria can be specified inline in the rule using the same syntax as a criterion definition, except that the criterion keyword and the criterion name are omitted (as in the example below).
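An illustrative rule combining named criteria with an inline criterion (all names here are hypothetical):

rule "CitizenHelloNearby"
{
	criteria ConceptHello IsCitizen
	"distancetoplayer" "<500" weight 2
	response HelloResponses
	matchonce
}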
Response
A response specifies a response to issue. A response group consists of a weighted set of options and can recursively reference other response groups. A complete example group is shown after the parameter list below.
Single line:
response <responsegroupname> [nodelay | defaultdelay | delay interval] [speakonce] [noscene] [odds nnn] [respeakdelay interval] [soundlevel "SNDLVL_xxx"] responsetype parameters
Multiple lines:
response <responsegroupname>
{
	[permitrepeats]    ; optional parameter; by default we visit all responses in the group before repeating any
	[sequential]       ; optional parameter; by default we randomly choose responses, but with this we walk through the list starting at the first and going to the last
	[norepeat]         ; once we've run through all of the entries, disable the response group

	responsetype1 parameters1 [nodelay | defaultdelay | delay interval] [speakonce] [odds nnn] [respeakdelay interval] [soundlevel "SNDLVL_xxx"] [displayfirst] [displaylast] weight nnn
	responsetype2 parameters2 [nodelay | defaultdelay | delay interval] [speakonce] [odds nnn] [respeakdelay interval] [soundlevel "SNDLVL_xxx"] [displayfirst] [displaylast] weight nnn
	etc.
}
- responsetype ; one of:
  - speak ; it's an entry in sounds.txt
  - sentence ; it's a sentence name from sentences.txt
  - scene ; it's a .vcd file
  - response ; it's a reference to another response group by name
  - print ; print the text in developer 2 (for placeholder responses)
- nodelay = an additional delay of 0 after speaking
- defaultdelay = an additional delay of 2.8 to 3.2 seconds after speaking
- delay interval = an additional delay based on a random sample from the interval after speaking
- speakonce = don't use this response more than one time (default off)
- noscene = For an NPC, play the sound immediately using EmitSound, don't play it through the scene system. Good for playing sounds on dying or dead NPCs.
- odds = if this response is selected, if odds < 100, then there is a chance that nothing will be said (default 100)
- respeakdelay = don't use this response again for at least this long (default 0)
- soundlevel = use this soundlevel for the speak/sentence (default SNDLVL_TALKING)
- weight = if there are multiple responses, this is a selection weighting so that certain responses are favored over others in the group (default 1)
- displayfirst/displaylast : this should be the first/last item selected (ignores weight)
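An illustrative multi-line response group (the group name, sound entries, and scene path are hypothetical and would need to exist in sounds.txt and on disk):

response "HelloResponses"
{
	speak "MyNPC.Hello01" respeakdelay 30 weight 2
	speak "MyNPC.Hello02" respeakdelay 30
	scene "scenes/mynpc/hello_wave.vcd" odds 50
}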
Add your new NPC to the Response System
- Derive your NPC from NPC_Talker
- Write a new rule script for your NPC
- See npc_xxx.txt for examples
- Include the new script in /scripts/talker/response_rules.txt
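The include is assumed to follow the same pattern as the existing entries in response_rules.txt (the filename here is hypothetical):

#include "npc_mynewnpc.txt"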