HLALYX:Response rules.txt

From Valve Developer Community
Revision as of 12:19, 20 July 2020 by Dynacorp (updated to keep in sync with the latest version of Half-Life: Alyx as of July 20th, 2020)
// This is the base rule script file for the AI response system for expressive AIs that speak based on certain "Concepts"
// You can think of a concept as a high-level state that the code is trying to convey, such as saying hello or saying you're mad, etc.
//
// The format of this file is that there are five main types of commands:
// 1) #include "filename"	// This just causes the included scriptfile to be parsed and added to the database
// 2) enumeration:  this declares an enumerated type so that comparisons can be matched against the string versions of the type
// 3) response:  this specifies a response to issue.  A response consists of a weighted set of options and can recursively reference
//    other responses by name
// 4) criterion:  This is a match condition
// 5) rule:  a rule consists of one or more criteria and a response
//
// In general, the system is presented with a criteria set, which is a set of key value pairs generated by the game code and
//  various entity I/O and keyfields.  For instance, the following criteria set was created in a map with a train terminal 
// "speaker" entity wishing to fire random station announcements
//               concept = 'train_speaker' (weight 5.000000)		; the high level concept for the search request
//                   map = 'terminal_pa'							; the name of the map
//             classname = 'speaker'								; the classname and name of the "speaking" entity
//                  name = 'terminal_pa'
//                health = '10'										; the absolute health of the speaking entity
//            healthfrac = '0.000'									; the health fraction (health/maxhealth) of the speaking entity
//          playerhealth = '100'									; similar data related to the current player:
//      playerhealthfrac = '1.000'
//          playerweapon = 'none'									; the name of the weapon the player is carrying
//        playeractivity = 'ACT_WALK'								; animating activity of the player
//           playerspeed = '0.000'									; how fast the player is moving
//
// Based on such a criteria set, the system checks each rule against the set.  To do this, each criterion of the rule is
//  given a numeric score as follows:
// score = 0 if the criterion doesn't match, or criterion weight * keyvalue weight if it does match
// The final score for a rule is the sum of all of the scores of its criteria.  The best rule is the one with the highest
//  score.  Once a best rule is selected, then a response is looked up based on the response definitions and the engine is
//  asked to dispatch that response.
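//
//  For example (hypothetical numbers): a rule with two criteria, where the first matches with
//   criterion weight 2.0 against a key of weight 5.0 (score 2.0 * 5.0 = 10.0) and the second
//   fails to match (score 0), gets a total score of 10.0; another rule scoring 12.0 would win.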
//
// The specific syntax for the various keywords is as follows:
//
// ENUMERATIONS:
//
// enumeration <enumerationname>
// {
//		"key1" "value1"
//		"key2" "value2"
//		...etc.
// }
//	The code and criteria refer to enumerations with square brackets and a double colon separator, e.g.:
//  [enumerationname::key1]
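//
//  For illustration, a hypothetical enumeration and a criterion that references it
//   (the names here are placeholders, not shipped content):
//
//  enumeration "ExampleState"
//  {
//		"Calm"		"0"
//		"Worried"	"1"
//  }
//
//  criterion "IsWorried" "examplestate" "[ExampleState::Worried]" weight 1.0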
//
//
// RESPONSES:
//
// Single line: 
// response <responsegroupname> [nodelay | defaultdelay | delay interval ] [speakonce] [noscene] [odds nnn] [respeakdelay interval] [soundlevel "SNDLVL_xxx"] responsetype parameters
// Multiple lines:
// response <responsegroupname>
// {
//		[permitrepeats]   ; optional parameter, by default we visit all responses in group before repeating any
//		[sequential]	  ; optional parameter, by default we randomly choose responses, but with this we walk through the list starting at the first and going to the last
//		[norepeat]		  ; Once we've run through all of the entries, disable the response group
//		responsetype1 parameters1 [nodelay | defaultdelay | delay interval ] [speakonce] [odds nnn] [respeakdelay interval] [soundlevel "SNDLVL_xxx"] [displayfirst] [displaylast] weight nnn
//		responsetype2 parameters2 [nodelay | defaultdelay | delay interval ] [speakonce] [odds nnn] [respeakdelay interval] [soundlevel "SNDLVL_xxx"] [displayfirst] [displaylast] weight nnn
//		etc.
// }
// Where: 
//   interval		= "startnumber,endnumber" or "number" (e.g., "2.8,3.2" or "3.2")
//   responsetype	= one of:
//     speak		; it's an entry in sounds.txt
//     sentence		; it's a sentence name from sentences.txt
//     scene		; it's a .vcd file
//     response		; it's a reference to another response group by name
//     print      ; print the text in developer 2 (for placeholder responses)
//   nodelay		= an additional delay of 0 after speaking
//   defaultdelay	= an additional delay of 2.8 to 3.2 seconds after speaking
//   delay			= an additional delay based on a random sample from the interval after speaking. Format is "delay <interval>".
//	 predelay		= delay before speaking the response. Format is "predelay <interval>", e.g. 'predelay 3' or 'predelay "1.2,3.0"'
//   speakonce		= don't use this response more than one time (default off)
//	 noscene		= For an NPC, play the sound immediately using EmitSound, don't play it through the scene system. Good for playing sounds on dying or dead NPCs.
//   odds			= if this response is selected and odds < 100, then there is a chance that nothing will be said (default 100)
//	 respeakdelay	= don't use this response again for at least this long (default 0)
//   soundlevel		= use this soundlevel for the speak/sentence (default SNDLVL_TALKING)
//   weight			= if there are multiple responses, this is a selection weighting so that certain responses are favored over others in the group (default 1)
//   displayfirst/displaylast : this should be the first/last item selected (ignores weight)
//	 fire			= fire an entity IO output. Parameters should be <targetname> <inputname> <delay>. example: "fire trigger_at_train enable 0"
//	 then			= queue up a speech request after this response has finished being spoken. Format is as follows:
//				then <speaker> <concept> <criteria> <delay>
//			Example:
//				scene "scenes/act2/quarantine_entrance/alyx_russell_first_dead_zombie" then self TLK_PLAYER_MAP_TALK step:3 1
//			<speaker> can be: 
//				self	= The speaker of the original scene.
//				subject	= Searches for an entity with a targetname matching the value of the "Subject" criteria key.
//				from	= Searches for an entity with a targetname matching the value of the "From" criteria key.
//				any		= Finds the highest value expresser within 1800 units of the speaker, and makes them speak.
//				all		= Makes all expressers try to speak.
//				Otherwise, if it's anything else it just tries to find an entity with a matching targetname.
//	thensimple		= queue up a speech request after this response has finished being spoken. Format is as follows:
//				thensimple <delay> <single response>
//			Example:
//				scene "scenes/act2/quarantine_entrance/alyx_russell_first_dead_zombie" thensimple 1 speak "followupline_05"
//			<single response> is any standard single line response entry (i.e. speak / scene / etc). See "responsetype" entry above.
//			Note that they can be chained together:
//				speak "vo.combine.officer.announceattack_cover_01" thensimple 1 speak "vo.combine.officer.announceattack_cover_02" thensimple 1 speak "vo.combine.officer.announceattack_cover_03"
//
// CRITERIA:
//
// criterion <criterionname> <matchkey> <matchvalue> weight nnn required
// Where:
//  matchkey matches one of the keys in the criteria set as shown above
//  matchvalue is a string or number value or a range, the following are all valid:
//  "0"						; numeric match to value 0
//  "1"						; numeric match to value 1
//   "weapon_smg1"			; string match to weapon_smg1 string
//   "[npcstate::idle]"		; match enumeration by looking up numeric value
//   ">0"					; match if greater than zero
//   ">10,<=50"				; match if greater than ten and less than or equal to 50
//   ">0,<[npcstate::alert]"	; match if greater than zer and les then value of enumeration for alert
//   "!=0"					; match if not equal to zero
// weight = floating point weighting for score assuming criteria match (default value 1.0)
// required:  if a rule has one or more criteria with the required flag set, then if any such criteria
//  fail, the entire rule receives a score of zero
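//
//  For illustration, hypothetical criteria matching keys from the sample criteria set near
//   the top of this file (names and thresholds are placeholders):
//
//  criterion "ConceptTrainSpeaker" "concept" "train_speaker" required
//  criterion "PlayerNearDeath" "playerhealthfrac" "<=0.25" weight 2.0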
//
// RULE:
//
// rule <rulename>
// {
//    criteria name1 [name2 name3 etc.]
//    response responsegroupname [responsegroupname2 etc.]
//    [matchonce]					; optional parameter
//	  [ <matchkey> <matchvalue> weight nnn required ]
// }
// Where:
// criteria just lists one or more criterion names from above, and response lists one or more of the response
// names from above (usually just one)
// matchonce (off by default): means that the rule is deactivated after the first time it is matched
// Note that additional "unnamed" criteria can be specified inline in the rule using the same syntax
// as for defining a criterion, except for the criterion keyword and the criterion name keys
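//
//	For illustration, a hypothetical rule tying the placeholder criteria and response group
//	 from the examples above together with an inline criterion:
//
//	rule "ExampleTrainAnnounce"
//	{
//		criteria ConceptTrainSpeaker
//		response ExampleAnnouncements
//		matchonce
//		"map" "terminal_pa" weight 1.0 required
//	}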
//
// Interaction with entity I/O system
// CBaseEntity contains an inputfunc called "DispatchResponse" which accepts a string which is a concept name
//  Thus, a game entity can fire this input on another entity with a concept string, and a criteria set will
//  be generated and searched against the entity's current response system rule set.
// Right now only the speaker entity and NPC_Talker derived NPCs have any response rules loaded
// In addition, map placed entities have up to three "context" keypairs that can be specified.
// They take the form:  "key:value" (key, single colon separator, value)
// When an entity with any such context keypairs is asked to dispatch a response, the keypairs are added to the
//  criteria set passed to the rule system.  Thus, map placed entities and triggers can specify their
//  own context keypairs and these can be hooked up to response rules to do map-specific and appropriate
//  responses
// In addition, entity I/O can be used to add, remove and clear any such context keypairs via the
//   AddContext, RemoveContext, and ClearContext input functions.
// AddContext takes a keypair of the "key:value" format, while RemoveContext takes just the "key"
// ClearContext removes all context keypairs
// The game .dll code can enumerate context keypairs and change them via code based methods
//
// The player and the world have their context added with the string player or world as a prefix, e.g.:
//  "playerkey:value" or "worldkey:value" to differentiate world/player context from the context of the
//  responding entity.
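//
//  For illustration, a hypothetical map hookup (entity names and concept are placeholders):
//   a trigger's output adds a context keypair to an NPC, then dispatches a concept, and a
//   rule matches the added context with an inline criterion:
//
//	OnStartTouch -> npc_guide -> AddContext -> "room:generator"
//	OnStartTouch -> npc_guide -> DispatchResponse -> "TLK_EXAMPLE"
//
//   ...and in a rule:   "room" "generator" required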
//
// 
// CONCEPT PRIORITIES
//
// conceptpriority <concept name> <priority>
//
// Used to specify priorities of speech concepts. By default, concepts all have a priority of 0.
// When attempting to speak, if an NPC is already speaking, the new speech will be allowed to interrupt
// the current speech if its priority is higher than that of the current speech.
// 
//	Setting the conceptpriority of a concept to "nopriority" will make it exempt from the priority
//	system. It'll never interrupt any existing speech, nor will it be noted as the current priority 
//	when the concept is spoken.
//
// Examples:
//		conceptpriority COMBINE_RADIO_ON			nopriority			// Radio on/off sounds shouldn't affect/be affected by priority
//		conceptpriority COMBINE_RADIO_OFF			nopriority
//		conceptpriority COMBINESOLDIER_PAIN			-1					// Make pain sounds interruptible by all other default speech
//
//

// Base script
enumeration "NPCState"
{
	"None"		"0"
	"Idle"		"1"
	"Alert"		"2"
	"Combat"	"3"
	"Scripted"	"4"
	"PlayDead"	"5"
	"Dead"		"6"
}

response "NullResponse"


// Talker Manifests
// criteria have to be at the top as they're loaded first
#include "talker/npc_combine_criterion.txt"
#include "talker/npc_combine_charger.txt"
#include "talker/npc_combine_grunt.txt"
#include "talker/npc_combine_officer.txt"
#include "talker/npc_combine_suppressor.txt"
#include "talker/npc_combine_choreo.txt"
//#include "talker/npc_combine.txt"
#include "talker/player.txt"