I have a question regarding putting live action into a CG scene (ala *cough* SpyKids).
I've dabbled in Max's 'camera tracker' feature and placing CG over a 2D background plate, but what about placing something like a green screened subject in a 3D scene with z-ordering and such?
It seems all the info I can find is dealing with CG over a live action background.
Thanks in advance,
It sounds like you're looking for info on "virtual sets" - the top players in the high end are ORAD and VIZRT for broadcast work while there are a number of manufacturers of smaller software packages for post use. If you do a quick search on "virtual set software" you'll get over a thousand hits.
Note that the hard part is usually getting enough accurate data about the object you're trying to insert into the 3D space. Real time virtual set packages have connections to sensors on the camera to get accurate positioning and lens information - not trivial to do well.
James North writes:
>I've dabbled in Max's 'camera tracker' feature and placing CG over a 2D background plate, but what about placing something like a green screened subject in a 3D scene with z-ordering and such?
I just did a ton of this type work for "Sky Captain and the World of Tomorrow."
For those of you not familiar with the project, the entire movie was shot on bluescreen, and all of the environments and sets were added digitally. The post production took a number of years to complete.
I only worked on a few complex sequences. For these we used a matchmoving system (such as RealViz MatchMover Professional, the Voodoo camera tracker, or 2d3 Boujou) to derive the 3D camera data from the tracking markers in the bluescreen footage.
The camera data was then imported into a 3d program such as Maya where the background scene was constructed. Rendered files were then handed off to the compositors for final integration with the bluescreen material.
There were a few shots I composited in combustion that were complex 3D scenes but did not include any rendered files from our 3D department.
combustion, shake and nuke each have native 3d environments. I imported the 3d camera data directly into a 3d camera in combustion.
I then imported Photoshop matte paintings which retained their individual layers according to depth. These layers were then positioned along the Z axis at their appropriate depth, and viewed through the 3d camera.
I then eyeballed the z distance of each layer until the motion of the matte painting against the bluescreen felt right.
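For anyone curious about the arithmetic behind that eyeballing, here is a minimal Python sketch of the multi-plane parallax idea, using a simple pinhole-camera model. The `screen_x` helper and all the numbers are made up for illustration; the real packages do this through their 3D camera, but the principle is the same: layers placed farther along Z shift less on screen as the camera translates.

```python
# Minimal multi-plane parallax sketch (illustrative pinhole model,
# not any package's actual implementation).

def screen_x(world_x, layer_z, cam_x, focal=50.0):
    """Project a point on a layer at depth layer_z through a pinhole
    camera translated to cam_x along the X axis."""
    return focal * (world_x - cam_x) / layer_z

# Three matte-painting layers at increasing depth (arbitrary units).
layers = {"foreground": 100.0, "midground": 400.0, "background": 1600.0}

# Translate the camera 10 units and measure each layer's apparent
# shift on screen: nearer layers move more, which is exactly the cue
# you tune when eyeballing the Z distance of each layer.
for name, z in layers.items():
    shift = screen_x(0.0, z, 10.0) - screen_x(0.0, z, 0.0)
    print(f"{name}: apparent shift = {shift:.2f}")
```

Adjusting a layer's Z in the compositor is effectively adjusting the denominator here until the parallax against the plate "feels right."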
I've done similar work in nuke and shake, and the techniques carry over directly. The camera in each of these packages emulates the rectilinear distortion apparent in very wide angle lenses.
If you're just playing around, you don't even have to use matchmoving software; you can eyeball the camera move directly in the 2D package. I usually create a grid in Photoshop, lay it over the bluescreen sequence, and then repeatedly manipulate the 3D camera until the grid looks like a part of the plate.
Granted, this takes longer, but in some instances it becomes necessary, because there are not enough tracking points visible in the plate, or they become obstructed or lost to motion blur.
Take care, and best of luck!!
Rachel Dunn wrote:
>combustion, shake and nuke each have native 3d environments.
That is not correct. Shake does not have a 3D environment. After Effects does, however, and you can import a Maya camera directly into it, provided the keyframes for the camera are baked (every frame has a keyframe for 3D position and rotation) - as they are if the camera is reconstructed in a 3D tracking program, like Boujou, and exported directly as a Maya ASCII (.ma) file.
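To make "baked" concrete: it just means one explicit key per frame instead of a sparse curve that the package interpolates on the fly. A small hedged sketch in Python, where the `eval_curve` callable is a hypothetical stand-in for whatever interpolation the animation package or tracker uses internally:

```python
# Sketch of keyframe baking: sample a channel at every whole frame
# so downstream programs need no knowledge of the interpolation.

def bake(eval_curve, first_frame, last_frame):
    """Return one explicit key per frame over [first_frame, last_frame]."""
    return {f: eval_curve(f) for f in range(first_frame, last_frame + 1)}

# Example: a linearly interpolated translate-X channel keyed at frames 1 and 5.
tx = bake(lambda f: (f - 1) * 2.5, 1, 5)
print(tx)  # one key per frame: {1: 0.0, 2: 2.5, 3: 5.0, 4: 7.5, 5: 10.0}
```

A 3D tracker's solved camera is already per-frame data, which is why its exports import cleanly: there is no curve left to misinterpret.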
The information about combustion and Nuke is correct.
IATSE Local 600
Michael Most wrote:
>That is not correct. Shake does not have a 3d environment.
Sorry for the mistake,
I've only used Shake for one job and didn't have time to explore it fully. I was under the impression that Shake did have a 3D environment.
Rachel Dunn wrote:
>I've only used Shake for one job and didn't have time to explore it fully. I was under the impression that Shake did have a 3D environment.
You might have gotten the impression that Shake understands true 3D space because it has some nodes that would imply that, such as "Depth Key" and "Depth Slice," as well as "Z Blur" and "Z Defocus." This is a false impression, however, because all of these nodes operate as 2D nodes that take their Z depth information from a supplied 2D depth map, which is commonly generated by a 3D animation program.
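As a rough illustration of that point (not Shake's actual code), here is what a "Depth Slice"-style operation looks like when written as a pure 2D node in Python: it has no scene and no camera, only a per-pixel depth map handed to it from outside, typically rendered by a 3D package.

```python
# Illustrative 2D "depth slice": builds a matte from a supplied depth
# map, with no notion of an actual 3D scene.

def depth_slice(depth_map, near, far):
    """Return a matte that is 1.0 where the depth map falls inside
    [near, far] and 0.0 elsewhere."""
    return [[1.0 if near <= d <= far else 0.0 for d in row]
            for row in depth_map]

# Tiny 2x3 depth map, as if rendered out of a 3D program.
depth = [[1.0, 5.0,  9.0],
         [2.0, 6.0, 10.0]]
print(depth_slice(depth, 4.0, 8.0))  # -> [[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]
```

The key limitation follows directly: if the depth map doesn't encode it, the node can't see it, which is what separates these operators from a true 3D environment with cameras and positioned layers.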
This is very unlike Combustion, After Effects, and Nuke, which actually create true 3D space. In these programs, you have a camera (multiple cameras in some cases), true 3 dimensional positioning of individual layers, and lights which can illuminate objects and cast shadows. This is also the case (and is perhaps best known) in Flame and Inferno, which go beyond the desktop programs in their ability to actually understand and use 3D geometry, as well as their ability to apply texture mapping.
In practice, however, the use of 3D space in a compositing program is often much more significant when creating motion graphics than it is for realistic shot assembly and compositing. That said, it often comes in handy these days for background replacements in shots that have been tracked with a 3D tracker, such as Boujou: by animating the camera rather than the layer itself, the replacement stays in proper perspective.
Michael Most wrote:
>In practice, however, the use of 3D space in a compositing program is often much more significant when creating motion graphics than it is for realistic shot assembly and compositing ...
Most of my experience is with nuke, and I've used its 3D environment heavily since '95, when nuke was in its infancy. I used it for pan-and-tile setups and for creating 3D smoke out of 2D plates on Dante's Peak, as well as numerous other shots on nearly every show I've worked on, from Titanic to Grinch to World of Tomorrow.
I'm not saying that this is not an important part of motion graphics, but you seem to severely underrepresent its importance to live action compositing. This technique, while often a pain to set up, has saved me many, many hours of headache over individually animating multi-plane layers or hand-matching the correct lens distortion for a painted background.
Rachel Dunn wrote:
>...you seem to severely underrepresent its importance to live action compositing.
You make some good points. I didn't mean to imply that it's useless for live action compositing - in fact, I use the multi-plane aspect of 3D compositing myself (usually in Combustion), and clearly it has uses in 2D/3D matte painting and compositing, as well as in enabling 3D tracking techniques, as I mentioned earlier. My point was that the vast majority of 2D compositing is done without the use of 3D techniques (i.e., every shot composited in Shake is, by definition, a 2D comp), whereas in motion graphics the percentage is probably the reverse.