If all I want to do is to have a Combine console go: "Local protection team units report on 603 unlawful entry in progress." using the samples ''reporton.wav'', ''unlawfulentry603.wav'' and ''inprogress.wav'' after each other, is a scripted_sentence really obsolete for this? Should I learn Faceposer when there are absolutely no gestures involved, and all I want is some scripted radio chatter? Why? --[[User:Andreasen|Andreasen]] 22:28, 31 Mar 2006 (PST)
:Feel free to use ''OnEndSentence''... when using this, don't forget to check that the NPC exists before you do this, though, or else your map will get stuck... anyway, making a [[VCD]] file for this would accomplish the same thing—'''[[User:Ts2do|ts2do]]''' 22:32, 31 Mar 2006 (PST)
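For illustration, the ''OnEndSentence'' chaining ts2do describes might look like this as VMF-style entity connections (the entity names are made up for this sketch; the connection format is ''target,input,parameter,delay,times to fire''):

```
// Hypothetical chain of scripted_sentence entities, each playing one sample
// and triggering the next one when it finishes ("-1" = fire any number of times).
// radio_part1 plays reporton.wav, radio_part2 plays unlawfulentry603.wav, etc.
"OnEndSentence" "radio_part2,BeginSentence,,0,-1"   // connection on radio_part1
"OnEndSentence" "radio_part3,BeginSentence,,0,-1"   // connection on radio_part2
```

If the speaking NPC dies mid-chain, the next ''BeginSentence'' never fires, which is why ts2do suggests checking that the NPC still exists first.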
::''OnEndSentence''? Sure, it would get the samples timed, but it would be easier to edit ''sentences.txt'' and have just one ''scripted_sentence'' play the whole sentence, which is how I imagine it's supposed to be used. Anyway, I've found some off-wiki documentation for Faceposer now. This wiki really needs some (finished) docs on Faceposer. --[[User:Andreasen|Andreasen]] 23:10, 31 Mar 2006 (PST)
:::Feel free to paste the URL for the docs you found :P [[User:Jupix|Jupix]] 23:14, 31 Mar 2006 (PST)
I've checked some scripts now, packed into ''source engine.gcf/hl2/scripts/'' as they are, and unless I'm very mistaken, sure, Faceposer can do the job of a scripted_sentence: you could probably make a scene with 0 actors, 0 phonetics, 0 whatever, just load a single sentence into the scene, and have a [[logic_choreographed_scene]] execute the entire scene. ...but instead of this bulky way, you can use this entity to just load the sentence into the game directly. ...because you can't load raw samples into Faceposer as sounds. All the sounds that are available are complete sentences - single-sample sentences or multiple-sample sentences - and unless you want to go with entire standard game phrases like "Oh my god, Freeman's over there and he's on his way over that bridge and on to the marshmallow factory to get the rocket rifle stashed behind that green dumpster!", you are going to have to edit that old familiar ''sentences.txt'' file whether you use a choreographed scene or not. ...so no-no-no, scripted_sentence is not obsolete at all - not when it comes to ''unseen'' dialogs. --[[User:Andreasen|Andreasen]] 03:42, 1 Apr 2006 (PST)
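As a rough sketch of the direct approach described above (all names here are hypothetical, and the keyvalues are given as I understand the HL2 FGD, so treat this as an illustration rather than a verified setup):

```
// A scripted_sentence that plays one custom sentence from sentences.txt
// through a named NPC. Trigger it by sending it the BeginSentence input.
{
"classname" "scripted_sentence"
"targetname" "radio_603"
"m_iszSentence" "C17_UNLAWFUL603"   // hypothetical custom sentence name
"m_iszEntity" "combine_console"     // targetname of the speaking entity
"delay" "0"
}
```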
:Reading some more scripts, I'm not too sure about what I've just stated. ''Sentences.txt'' seems to deal with automatic NPC behavior ''only'', and all other samples in HL2 seem to have been converted to sounds and put into a scene file, even if the actor cannot be seen (''if'' that is the case when Alyx is communicating with Kleiner in the Nova Prospekt teleport scene). Sorry about any confusion, but after all, it was 1 April. --[[User:Andreasen|Andreasen]] 00:38, 2 Apr 2006 (PST)
There used to be a lengthy discussion here, but this is a summary conclusion (so you don't have to waste your time reading through all of my confusion):

If you want to make a sentence out of radio chatter, and don't want to use a whole scene when there are no actors present or the only actors are Combine (who don't have any visible lips to lip-sync anyway), or don't want to go through the process of turning all of the samples into soundscripts so that Faceposer can even find them, you can achieve this by adding the (raw) sample sequence you wish to play to the ''sentences.txt'' file (normally located in the ''source engine.gcf'' file under ''hl2/scripts/'', so I guess you'll have to make your own ''sentences.txt'' file) and then using an [[ambient_generic]] to play this sentence instead of a normal sample. (See the flare ambush at the end of Nova Prospekt for an example of disembodied radio chatter.)

Here is some off-wiki documentation on Faceposer, Jupix:

[http://www.hl2world.com/wiki/index.php/Creating_your_first_Faceposer_scene Creating Your First Faceposer Scene]

--[[User:Andreasen|Andreasen]] 01:31, 2 Apr 2006 (PST)
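As an illustration of the ''sentences.txt'' approach above (the sentence name and wave paths here are made up; waves are listed without the ''.wav'' extension, relative to the ''sound/'' folder):

```
// scripts/sentences.txt (custom file in your mod)
// Plays reporton.wav, unlawfulentry603.wav and inprogress.wav in sequence.
C17_UNLAWFUL603 radiochatter/reporton radiochatter/unlawfulentry603 radiochatter/inprogress
```

As I understand it, the ''ambient_generic'' then plays this sentence by using its name prefixed with ''!'' (i.e. ''!C17_UNLAWFUL603'') as the Sound Name, instead of a .wav path.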