Lip sync in Sam & Max
Hi there,
Now, some people may complain about the older technology used in the episodic Sam & Max games. They may complain that there's not enough polys or fancy textures and effects. I say if there's anything that truly needs improvement, it's the lip sync.
Don't get me wrong, it's not like it's horrible or anything, but with Sam & Max having such a heavy focus on dialogue, it makes sense for anything related to dialogue to be the best it can be. The acting is great, the scripts are great, the automated lip sync... not so great. Many times characters completely miss syllables or don't seem to change their mouth shapes quickly enough to follow the dialogue. It kinda spoils the look when other games have such great lip sync. Have a look at the new Chariots of the Dogs trailer; Bosco misses a large amount of his speech.
Perhaps Telltale could spend some time improving their automated lip sync technology? Or maybe they could license Valve's? Their automated lip sync is awesome, not just for realistic characters, but as anyone who's played Team Fortress 2 would know, cartoony characters too.
Comments
I'm using the Logitech Z-5500 speakers; perhaps they are too powerful for your games :P (I'd like to see Sam & Max in 5.1, that'd be very interesting...)
But like ShaggE said, it's better than what was in HTR.
I admit it would be kind of cool if Telltale brought in one of those lip-sync programs that scan the actor's voice and move the lips to exactly match, but it's by no means necessary in my opinion.
Not necessary: in HtR, the lip sync is completely bad; you don't even try to make the link between the voice and the image. In TT's games it's quite good, but not good enough, so you really get the impression it's not finished. That said, I like the way the lips move in HtR (it's very old-fashioned-cartoon-like), and TT's is too. Then again, I don't focus on the lips much, since I read the subtitles.
The Uncanny Valley.
See the "Heavy Rain" tech demo for a good example of it. Gave me the chills the first time I watched it.
I heard that the developers of that tech demo were under high pressure to get it finished before E3, so they didn't have time to polish it any more. Which is too bad, because it was already pretty good.
On-topic: I agree, lip syncing is probably one of the things that needs improvement most. Mostly because talking characters are always facing the screen, so you notice it more than in games where they might as well be facing away from you.
Perhaps for season 3?
But it would render the idea of episodic Sam & Max practically impossible.
I'd rather have the engine development team work on better memory management, for instance, or fix some of the outstanding problems on Vista.
See for yourself.
Yeah, I'd say it's pretty good.
With the Source SDK it's as simple as typing in a phrase: the program extracts the phonemes and automatically does the syncing near-perfectly. It might take some tweaking sometimes, as in some cases it can be glitchy.
It's actually cool because it makes its best guess at where the particular words go based on the soundwaves in the actual recorded audio.
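To make that concrete, here's a rough Python sketch of what a pipeline like that does. This is not Valve's actual code; the phoneme table, the fake dictionary, and the timing format are all invented for illustration. The idea: look up each word's phonemes, spread them across the word's timing (which a real tool would guess from the soundwaves), and map each one to a mouth shape.

# Hypothetical sketch, not Valve's actual code; all names invented.

# Tiny phoneme-to-viseme table: many phonemes share one mouth shape.
PHONEME_TO_VISEME = {
    "AA": "open", "IY": "smile", "UW": "pucker",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "teeth", "V": "teeth",
}

def phonemes_for(word):
    # Stand-in for a real pronunciation dictionary lookup (e.g. CMUdict).
    fake_dict = {"hello": ["HH", "AH", "L", "OW"], "max": ["M", "AE", "K", "S"]}
    return fake_dict.get(word.lower(), [])

def build_viseme_track(words_with_times):
    # words_with_times: (word, start_sec, end_sec) tuples -- the timings
    # are the "best guess from the soundwaves" step a real tool performs.
    track = []
    for word, start, end in words_with_times:
        phones = phonemes_for(word)
        if not phones:
            continue
        step = (end - start) / len(phones)  # spread phonemes evenly
        for i, ph in enumerate(phones):
            track.append((start + i * step, PHONEME_TO_VISEME.get(ph, "neutral")))
    return track

print(build_viseme_track([("hello", 0.0, 0.4), ("max", 0.5, 0.9)]))

A real implementation analyzes the recorded audio itself rather than trusting a dictionary, but the phoneme-to-viseme lookup at the core is the same basic idea.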
It's a wonder you guys were able to completely close his mouth, too, at the end of 101. :D
I was under the impression the software used pre-set mouth poses chosen by the animators, i.e. if the sound is like this, use this pose.
Technology, as illustrated in this thread, has moved past the Nightmare Before Christmas lip sync process.
And in case you didn't get that, that's how they did the lip sync in the film. They had a pre-set collection of mouths (or in Jack's case, heads) that the animators picked through a computer program based on the phonetics of the script. Interesting program, but I'm glad we are far from that method.
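In code terms, that replacement-heads method is basically a lookup table with no blending at all. A hypothetical sketch (the pose names, phonetic tags, and Frame class are all made up for illustration):

from dataclasses import dataclass

# The animators' pre-set collection: one sculpted mouth per sound group.
MOUTH_LIBRARY = {
    "AH": "mouth_open_wide",
    "EE": "mouth_smile",
    "OO": "mouth_pucker",
    "MBP": "mouth_closed",
    "FV": "mouth_teeth_on_lip",
}

@dataclass
class Frame:
    time: float
    sound: str  # phonetic tag assigned from the script

def pick_mouths(frames):
    # Hard swap per frame, like stop-motion replacement heads -- no blending.
    return [(f.time, MOUTH_LIBRARY.get(f.sound, "mouth_rest")) for f in frames]

print(pick_mouths([Frame(0.00, "MBP"), Frame(0.08, "AH"), Frame(0.16, "EE")]))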
Now, you would think that characters like Superball or Abe would be easier for the lip sync to read properly, but I guess their rigging isn't as complicated as a Half-Life 2 character's face.
...a wonder indeed.
That is true, but it blends less than optimally. As I said, we have plans to improve on it.
1) Faster reaction time - Some characters completely miss phonemes because they're currently in the process of forming a different phoneme.
2) Faster mouths - The characters' mouths seem to move unnaturally slowly. For someone like Max, who talks reasonably fast, it looks a little odd.
Basically, characters need to hit their phonemes faster and more accurately (see the toy sketch below for why blend speed matters).
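Here's the toy sketch: if the mouth takes longer to blend into a pose than the phoneme lasts, the pose never fully lands, so short sounds barely register. All durations below are invented, just to show the shape of the problem.

# Toy illustration (all numbers invented) of why slow blending misses phonemes.

def coverage(phoneme_duration, blend_time):
    # Fraction of the target mouth pose actually reached before the next
    # phoneme arrives, assuming a simple linear blend toward the pose.
    return min(1.0, phoneme_duration / blend_time)

phonemes = [("M", 0.06), ("AE", 0.12), ("K", 0.05), ("S", 0.10)]  # "Max"

for blend_time in (0.15, 0.05):  # slow mouth vs. fast mouth
    print(f"blend time {blend_time:.2f}s:")
    for ph, dur in phonemes:
        reached = coverage(dur, blend_time)
        note = "  <- barely registers" if reached < 0.5 else ""
        print(f"  {ph}: reaches {reached:.0%} of its pose{note}")

With a 0.15 s blend, the short M and K in "Max" only reach under half of their poses; drop the blend to 0.05 s and every phoneme lands fully.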
That's a give-and-take situation. I can run Half-Life 2 on my computer, even though it's a POS, because it meets the minimum video requirements. The unfortunate thing is that it isn't powerful enough to keep the lip sync from messing up when there's too much going on on the screen at once, mostly because the game has an insane amount of detail.
With Sam & Max, if you look at their opening and ending cut scenes for any episode this season or last, the lips tend to hit their marks rather nicely. At least they do on my end. For someone running on the bare minimum, the lip sync in those cut scenes is probably just as "bad" as the lip sync in-game.
In other words, they can improve it so that it hits the phonemes faster, but it all depends on whether YOUR computer can keep up.
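I don't know how Telltale's engine actually schedules any of this, but here's a toy model of that claim: at a low frame rate, short mouth shapes can fall between rendered frames and never appear on screen at all. Numbers are invented.

import math

# Toy model (invented numbers): short visemes fall between frames on a
# struggling machine and never get displayed.

def visemes_shown(track, frame_interval):
    # A viseme counts as "shown" only if some rendered frame lands inside
    # its time window; assume each viseme lasts until the next one starts.
    shown = []
    for i, (start, name) in enumerate(track):
        end = track[i + 1][0] if i + 1 < len(track) else start + 0.1
        first_frame = math.ceil(start / frame_interval) * frame_interval
        if first_frame < end:
            shown.append(name)
    return shown

track = [(0.00, "closed"), (0.05, "open"), (0.09, "smile"), (0.20, "pucker")]
print(visemes_shown(track, 1 / 60))  # smooth machine: all four shapes appear
print(visemes_shown(track, 1 / 10))  # struggling machine: "open" gets dropped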
I didn't know Ms. Angelou was an animator!
Yeah, but they still have to export the animation sequence from Maya into the game, because not everyone who buys the episode will have Maya. Furthermore, they said in last season's audio commentary that lip syncing in Maya is a pain in the ass, as the dialogue can get so long and complicated that it's better to just use the automated program to save time. That's why, if you know what to look for, you can tell when the Maya-animated sequence ends and when the TTTool takes over.
They've gotten better at that blend this season from my end, but there were several parts in the opening and closing of 203 where the cut scenes would skip in the animation because of processing issues. And that's DURING the parts I could tell were animated in Maya.