I'm not sure anyone has brought up this question yet, but I'm curious:
How (if at all) does/can/should lighting for 3-D differ from that for 2-D films? Obviously 3-D features have to be viewable in 2-D, so the expected answer would be that one should stick to conventional lighting techniques. But are Directors of Stereography <grin> sometimes tempted to dispense with, or modify, conventional lighting techniques that are intended to provide "3-D" modeling, separation of planes, etc., because the stereoscopic image no longer requires them?

Only as pertains to light and lighting:
Be aware of possible polarization problems with mirror rigs, where light can be seen on a reflective surface in one eye but not the other. Vertical shading and flare are also concerns. One is unlikely to be able to shoot a spatially coherent lens flare, a problem exacerbated by the general desire to stay on wider lenses. Generally speaking, anytime you're seeing something in one eye that's invisible in the other, you have a problem. So you want to be careful with back lighting especially. Be sure to check what both eyes are seeing.

Tim Sassoon

Great post Tim...
>>One is unlikely to be able to shoot a spatially coherent lens flare.

As much as I enjoyed U23D, I thought the 3D lens flares were very strange. But they *did* appear to be spatially coherent... It just seemed very weird to have a third dimension to a flare.

David Perrault, CSC

David Perrault writes:
>>But they *did* appear to be spatially coherent... It just seemed very weird to have a third dimension to a flare.

The reason they worked at all was because they were stage spots coming from close to the subject. What I was talking about is the more typical case, when the sun is near the edge of frame. One of the advantages of post conversion is the opportunity to properly design things like that.

Tim Sassoon

Tim Sassoon wrote:
>>Generally speaking, anytime you're seeing something in one eye that's invisible in the other, you have a problem. So you want to be careful with back lighting especially. Be sure to check what both eyes are seeing.

This was the first thing that came to mind for me too. It's even more of a problem if the audience will be viewing with shutter glasses (less likely for theatrical, but possible if you're making something for other venues). A rig with two lenses close together can still have differences in flare and glare from the subject; that happens with our own eyes, so it's not nearly as distractingly different as the case Tim mentions. But with shutter glasses, these differences really stand out. You'll probably be shooting for a polarized display or some variation on anaglyph, though.

Steven Bradford

Tim Sassoon writes:
>>Be aware of possible polarization problems with mirror rigs, where light can be seen on a reflective surface in one eye but not the other.

Is this an actual problem? I mean, light reflected from silvered surfaces isn't polarized. Granted, the mirror may be only half-silvered, but the difference between even a half-silvered surface (nonpolarizing) and the surface of the glass itself (polarizing) would be huge, no?

>>One is unlikely to be able to shoot a spatially coherent lens flare.

Gotcha.

Daniel Drasin

Tim Sassoon wrote:
>>The reason they worked at all was because they were stage spots coming from close to the subject. What I was talking about is the more typical case, when the sun is near the edge of frame. One of the advantages of post conversion is the opportunity to properly design things like that.
I have found that it helps to index the iris leaves to reflect the same radial symmetry, so the points of the flare are coincident with one another.
Tim Sassoon wrote:

>>Others may mention things I've forgotten. But apart from that, there's the old saying that good 3D starts with good 2D.

Your tutorial on 3D lighting is absolutely spot on! The only thing I could add, which is obvious to most of us, is that one needs twice the light when shooting with beam splitter rigs.

Max Penner
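Max's "twice the light" figure is just the mirror math: a 50/50 beam splitter passes half the light to each camera, which is one full stop. A minimal sketch of that bookkeeping (the function name is mine, purely illustrative):

```python
import math

def beamsplitter_stop_loss(transmission: float) -> float:
    """Stops of light lost to each camera behind a beam splitter mirror.

    A 50/50 mirror sends half the light to each eye -- one full stop --
    which is where the "twice the light" rule of thumb comes from.
    """
    return -math.log2(transmission)

print(beamsplitter_stop_loss(0.5))    # 1.0 stop: open up one stop or double the light
print(beamsplitter_stop_loss(0.45))   # ~1.15 stops for a slightly lossy mirror
```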
Daniel Drasin writes:

>>Is this an actual problem?
Yes. For example, there are a number of shots in the IMAX film "Wild Ocean 3D" (I did the digital cinema version) where reflections off the side of wet tidal pool rocks are prominent in one eye but invisible in the other, both bright ones and dark ones. It's annoying to watch, and hard to fix.

Tim Sassoon

Tim Sassoon writes:
>>there are a number of shots in the IMAX film "Wild Ocean 3D" (I did the digital cinema version) where reflections off the side of wet tidal pool rocks are prominent in one eye but invisible in the other, both bright ones and dark ones. It's annoying to watch, and hard to fix.

So to prevent that, I spoze you'd need to put a weak polarizer on the "straight through" camera and a compensating ND on the "reflected" camera. Does anyone do that? Or would that create other undesirable artifacts (apart from the obvious additional light loss and more glass surfaces to deal with)?

Dunno if this has been discussed (and this may be an extremely naive question), but is there any way of shooting with a wide, fixed interocular distance and then synthesizing interocular variations in post? I imagine this could be done if you started with THREE side-by-side cameras, which would give you the ability to look around objects to fill in background details. The center camera alone would be your 2-D view, and the side cameras would contribute to the synthesized 3-D product, whose virtual eye positions would end up somewhere between the center and side cameras. (I imagine that these newfangled fly's-eye sensor systems work somewhat in this way when creating a stereo image.)

Dan "Gyro Gearloose" Drasin
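For what it's worth, the geometry behind Dan's question is straightforward even if production-grade synthesis isn't. Below is a toy sketch of depth-based view synthesis under parallel-rig assumptions; all names are hypothetical, and the occlusion holes it leaves are exactly what Dan's outer cameras would have to fill in:

```python
import numpy as np

def synthesize_view(image, depth_mm, virtual_iaxial_mm, focal_px, conv_mm):
    """Toy forward-warp of one eye to a virtual eye position.

    Assumes a parallel rig, where pixel disparity for a point at depth Z is
        d = focal_px * iaxial * (1/conv - 1/Z).
    Occlusion holes are left black; a real system needs background fill.
    """
    h, w = depth_mm.shape
    out = np.zeros_like(image)
    disparity = focal_px * virtual_iaxial_mm * (1.0 / conv_mm - 1.0 / depth_mm)
    xs = np.arange(w)
    for y in range(h):
        order = np.argsort(-depth_mm[y])   # warp farthest first so near pixels win
        tx = np.clip(np.round(xs + disparity[y]).astype(int), 0, w - 1)
        out[y, tx[order]] = image[y, xs[order]]
    return out
```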
Daniel Drasin writes:

>>is there any way of shooting with a wide, fixed interocular distance and then synthesizing interocular variations in post?

I'm of the opinion that 3D won't take off as a shooting medium until one is barely aware that the camera is capturing depth information to reconstruct stereo pairs later; until there's essentially no difference between a stereo and a mono camera. Just as it was with color film: it didn't fly until monopack. Plus, autostereoscopic displays particularly don't want binocular stereo. And it should be adjusted at the display.

Tim Sassoon

I tend to agree. That's why the future is a pair of small side-by-side sensors such as the SI-2K Mini or the prototype Micros they've shown, or a very large single sensor onto which a pair of lenses can be projected side by side and later combined, such as on the Phantom 65 or some of the larger RED models planned for the future. Make it just one relatively small camera that works in a subtle fashion and you're good to go.
Mitch Gross

Whilst we were shooting The Dark Country we were worried about multiple flares from car headlights causing problems. Ray Zone, the grand old man of 3D, said not to worry, because it was precisely that effect, different eyes seeing different flares, that makes diamonds appear to sparkle!
I'm not posting on anything 3D at the moment because I haven't seen the final result of DC.
After what happened with Mutants I've learned not to comment until they finish screwing the images up!!!
Geoff Boyle FBKS
Cinematographer
EU Based
Skype: geoff.boyle
US +1 818 574-6134
UK +44 (0) 20 7193 3546
mobile: +44 (0)7831 562877
www.cinematography.net
Mitch Gross wrote:
>>Make it just one relatively small camera that works in a subtle fashion and you're good to go.

Alas, I wish it were as simple as small side-by-side sensors or discrete stereo pairs sampled from a single larger sensor, but it gets very difficult to achieve interaxials of less than 65mm if one also requires decent spatial resolution and good lenses of decent speed. Narrative stereo moviemaking often requires interaxials less than 65mm. Good close-ups in stereo often require interaxials of 40mm-ish, and other shots even less, and that can only be achieved by the "big iron", i.e. beamsplitter rigs. That's why beamsplitter rigs are so pervasive and why side-by-side systems are unlikely to replace them. It's nice to have both, but beamsplitter rigs are ultimately more useful IMO. I wish it were not so.

Greg Lowry
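The 40mm-ish figure Greg cites lines up with the old 1/30 rule of thumb (interaxial roughly one-thirtieth of the nearest subject distance). A sketch, with the caveat that the rule is only a starting point and the function name is mine; focal length, screen size, and background distance all push the real number around:

```python
def rule_of_thirty_interaxial_mm(nearest_subject_m: float) -> float:
    """Classic 1/30 rule of thumb: interaxial ~= nearest subject distance / 30."""
    return nearest_subject_m * 1000.0 / 30.0

print(rule_of_thirty_interaxial_mm(1.2))   # 40.0 mm -- a close-up
print(rule_of_thirty_interaxial_mm(0.6))   # 20.0 mm -- a tight insert
print(rule_of_thirty_interaxial_mm(2.0))   # ~66.7 mm -- roughly eye width
```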
Tim Sassoon wrote:

>>Others may mention things I've forgotten. But apart from that, there's the old saying that good 3D starts with good 2D.

I'm late to this discussion, but I often think that masters of black-and-white cinematography would make the best stereo cinematographers, because it was essential for them to create separation and planes of visual interest through lighting. It's a misconception to think that this is a de facto result of stereoscopic cinematography and that one can therefore skimp on the lighting. Good lighting tremendously enhances the stereo effect. For example, I think Allen Daviau's studio lighting style, which (for me) represents the best of new and the best of classic technique, would be very good for stereo.

There are many so-called extrastereoscopic depth cues for stereo vision. A little (or a lot) of camera movement also enhances the effect, but as with 2D it should be motivated by the story rather than gratuitous. That said, a creeping dolly move or a slow boom up or down can subtly but effectively punctuate the stereo effect. If it's not intrusive, I think it's preferable to a locked-off shot.

I increasingly read opinions that deep depth of field isn't necessary for good stereo and that shallow depth of field can be useful for stereo in the same way that it helps define the focal point for the viewer in 2D. I sometimes wonder if this is a bit of a rationalization made necessary by larger sensors and 35mm DOF, or by low-light situations. As always, IMAX 3D is very instructive with respect to the extremes of stereo cinematography. Given the size of the format (and I'm referring to 65/70mm 15-perf), depth of field is always an issue, but the use of wide lenses and mostly exterior situations helps with DOF. The fact that shots in IMAX (2D and 3D) tend to be longer than in other cinema formats allows the viewer to "explore" the frame. In my opinion, part of the magic of 3D is being able to explore the contents of the frame beyond the intended point of interest (presumably the zero parallax point). If the foreground or background (or both) are out of focus, it feels unnatural, because the viewer "expects" to be able to shift their attention to other parts of the frame, and whatever they look at should be in focus. If it's not, it impairs the illusion (the suspension of disbelief, if you prefer). The same goes for temporal resolution: too much motion blur impairs viewers' ability to fuse stereo images, and without fusion there is no stereo perception.

But these are just my opinions. If good cinematographers never stop learning, then that certainly applies to stereographers. The "rules" for stereo are rightfully a moving target, and different philosophies and interpretations ensure that there will always be a great diversity of styles. Vive la différence -- as long as it doesn't make the audience physically uncomfortable! Lenny Lipton's first rule of "Do No Harm" is a good one. And by the way, Lenny Lipton's blog is very instructive and thought-provoking, even if one doesn't always agree with his conclusions:

http://lennylipton.wordpress.com/
I appreciate that he generously shares his knowledge and experience, even if his math formulae leave me slackjawed and feeling like an idiot. (I know, if the shoe fits....)
I think the very clever Tim Sassoon is right about how the process needs to be simplified to a single camera recording depth information. My only reservation is that stereo then largely becomes a post process. There goes the fun! If we can somehow get "live" images from such a system, the StereoSassoonoVision system is the future.

Greg Lowry

Greg Lowry wrote:

>>StereoSassoonoVision system is the future
I generally refer to it as "InsufferableNincomScope" myself. Consultants will tell you that companies and products with personal names attached are harder to sell. And I'm not as smart as I look, which is doubly unfortunate.

Tim Sassoon

Tim Sassoon wrote:
>>Consultants will tell you that companies and products with personal names attached are harder to sell. And I'm not as smart as I look...
Hence why, even with this revealing insight, you still name your company Sassoon Film Design.... ;-)))
I'll go back to my corner now.
Greg Lowry wrote:

>>My only reservation is that stereo then largely becomes a post process. There goes the fun! If we can somehow get "live" images from such a system...
But more seriously than ever before, it wouldn't be entirely a post process, and one should be able to preview on-set, just as one can "de-mosaic" in real time (isn't that just a fancy name for Jews for Jesus?). Tim Sassoon
"Those who have forgotten the past must live for the future"
Santa Monica, CA
Tim Sassoon wrote:
class="style18"
>> I generally refer to it as "InsufferableNincomScope" myself.
"InsufferableNincomScope" ... hard to fit that on a marquee.
NincomScope does have a certain je ne sais quoi though.
class="style18" >> And I'm not as smart as I look, which is doubly unfortunate.
Some lines can't be topped. You get the last word.
Greg Lowry

Michael Most wrote:
>>Hence why, even with this revealing insight, you still name your company Sassoon Film Design.... ;-))) I'll go back to my corner now.

End of Round 1. Most: 1. Sassoon: 0.

Greg Lowry

Tim Sassoon wrote:
>>The big question is, how to generate depth information, and in what form to store it? IMHO either EXR or another file format of the same name will probably be the answer to that and the HDR question.

It sounds like you feel that the ultimate answer will be largely photogrammetrically (is that a word??) based, rather than some other type of technology. Do you not see a future for, oh, I don't know, maybe some variation on a fast-scanning sonar technology? I agree that a system such as Lidar probably carries with it some potential for eye...
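On the EXR suggestion quoted above: EXR already carries arbitrary float channels, so a depth plane can ride along in the same file as the image. A minimal sketch assuming the python-openexr bindings; the filename and placeholder data are mine:

```python
import numpy as np
import OpenEXR
import Imath

h, w = 1080, 1920
rgb = np.zeros((h, w), dtype=np.float32)                      # placeholder image plane
z = np.random.uniform(1.0, 100.0, (h, w)).astype(np.float32)  # placeholder depth, meters

header = OpenEXR.Header(w, h)
f32 = Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))
header['channels'] = {'R': f32, 'G': f32, 'B': f32, 'Z': f32}

out = OpenEXR.OutputFile('frame0001.exr', header)
out.writePixels({'R': rgb.tobytes(), 'G': rgb.tobytes(),
                 'B': rgb.tobytes(), 'Z': z.tobytes()})
out.close()
```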
Mike Most writes:

>>It sounds like you feel that the ultimate answer will be largely photogrammetric

Could be, and we're doing an ass-backwards version of that to derive depth from 2D scenes now. Could be sonar, as you suggest. Personally, I think HDR light field capture is where one really wants to end up. All I really know is that binocular capture isn't going to cut it for long, or become widespread, any more than 2-strip Technicolor did for color, nor is 3D going to remain primarily a cartoon medium. And I heard essentially that from several major players at a meeting just this morning.
Someone in the R&D department needs to get their thinking cap on.
Tim Sassoon

A few years ago at NAB, an Israeli company whose name escapes me showed a camera that simultaneously captured a Z-depth channel. They were using the Z info to isolate objects and extract them from a background: chroma key without a color background. I can't recall if the Z info was sonar derived or not.
Anyone recall this or the company name?
Jim Reed
Online editor
501 Post
Austin, TX
Jim Reed wrote:
>>A few years ago at NAB, an Israeli company whose name escapes me showed a camera that simultaneously captured a Z-depth channel. They were using the Z info to isolate objects and extract them from a background: chroma key without a color background.

I've seen such systems demonstrated at various Siggraph shows, but always as a technology demo, not an actual product. And always with a static camera.

Mike Most writes:
>>Do you not see a future for, oh, I don't know, maybe some variation on a fast-scanning sonar technology?

As Polaroid found out some years ago when they introduced their sonar autofocus, it won't work through windows. But to cut to the chase here: we've already seen the Adobe multi-lens fly's-eye system and that other similar one (from Stanford, was it??), but we haven't heard much about them lately. Has anyone heard any recent scuttlebutt?

Dan Drasin

Daniel Drasin wrote:
>>As Polaroid found out some years ago when they introduced their sonar autofocus, it won't work through windows.

Picky, picky.

Mike Most
Technologist
Woodland Hills, CA.
Jim Reed asked:
class="style18" >>"A few years ago at NAB, an Israeli company whose name escapes me showed camera that >>simultaneously captured a Z depth channel... Anyone recall this or the company name?"
This was 3DV's ZCam. They demo'd that system for several years, but the matte edges were never good enough to replace a traditional chroma key (IMO) and they never did enough else with it, although everybody knew it had potential.
Finally, in 2007, they realized that they were chasing the wrong market, repackaged it as a consumer gaming device, and then signed some deals that culminated in the recent purchase of the company by Microsoft. The technology is believed to be the core of Microsoft's "Wii Killer".
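The core of a ZCam-style key is trivially simple, which makes the edge problem easy to see: all the hard work is in getting clean depth at silhouettes. A toy sketch (the function and numbers are mine):

```python
import numpy as np

def depth_matte(z_m, near_m, far_m, soft_m=0.05):
    """Toy ZCam-style key: alpha is 1 inside [near, far], ramping to 0 outside.

    Depth sensors are noisy and coarse at silhouettes, which is exactly
    why these mattes never rivaled a well-lit chroma key at the edges.
    """
    a_near = np.clip((z_m - (near_m - soft_m)) / soft_m, 0.0, 1.0)
    a_far = np.clip(((far_m + soft_m) - z_m) / soft_m, 0.0, 1.0)
    return np.minimum(a_near, a_far)

# Keep a subject standing 1.5-2.5 m from the camera, drop everything else.
z = np.random.uniform(0.5, 5.0, (480, 640)).astype(np.float32)
alpha = depth_matte(z, near_m=1.5, far_m=2.5)
```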
Greg Lowry wrote:
>>Narrative stereo moviemaking often requires interaxials less than 65mm. Good close-ups in stereo often require interaxials of 40mm-ish and other shots even less, and that can only be achieved by the "big iron", i.e. beam splitter rigs.

Beam splitter rigs do not have to be "big iron". I've been using a 2K beam splitter rig that works with Steadicam or handheld for over a year. The 2K rig is equipped with remote-controlled focus, iris, convergence, interaxial, and record on/off, plus a V-block battery and primes (8mm, 12.5mm, 16mm, 25mm, 35mm, and 50mm, all Linos Rodenstock T1.8 lenses with IMS mount). The rig weighs a total of 19.5 lbs and fits in a 10"x10"x10" space. The interaxial flies from 0 to 63.5mm, and it will converge (toe in) from parallel to as close as is necessary. Convergence and interaxial can be programmed to track a specific distance, or run independently of one another. Focus, iris, and record are controlled with a single remote handset, and interaxial and convergence from another remote handset. The rig has evolved through input from three live-action features shot last year. The 2K beam splitter has progressed to a Mark III design, and soon will transform into a 3K rig, hopefully by late summer.

Max Penner
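The convergence tracking Max describes is simple geometry under the hood: each camera sits half the interaxial off the rig centerline, so the servo just solves one arctangent per frame. A minimal sketch (the function name is mine):

```python
import math

def toe_in_degrees(interaxial_mm: float, convergence_m: float) -> float:
    """Toe-in angle per camera so the optical axes cross at the convergence distance.

    angle = atan((interaxial / 2) / convergence_distance)
    """
    return math.degrees(math.atan((interaxial_mm / 2000.0) / convergence_m))

# Tracking a subject walking from 4 m in to 1 m at a 40 mm interaxial:
for d_m in (4.0, 2.0, 1.0):
    print(d_m, round(toe_in_degrees(40.0, d_m), 3))   # 0.286, 0.573, 1.146 degrees
```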
Max Penner wrote:

>>Beam splitter rigs do not have to be "big iron".

Yes, indeed, Max. The "big iron" term wasn't meant literally, although there are still plenty of big rigs (sounds like an 18-wheeler?) out there. My main point was that beam splitter rigs still rule and that side-by-side rigs, however small, can't achieve small interaxials.

Greg Lowry

I've been reading all your posts, and I see that you are pretty much ALL right.
Your brain forgives so much in 3D, and there are so many ways to shoot and light; none of them is completely wrong if you follow the common-sense rules. The general public still aren't fully trained in what "good" 3D is. I've seen them ooh and aah while watching reversed L-R. In the over 50 shows I've shot, we've lit and composed the scene in many different ways; what makes it more interesting for some people may be different for others. I've found that some of the most attention-holding scenes we have shot have many layers of depth cues, with slight camera movement in any direction to make the layers shift. If the camera is not moving, the picture starts going flat and all the work you did setting up the scene goes to waste.

We have found the audience tends to scan a 3D frame a lot more than a 2D frame, so if the scene is long enough, I tend to light those other elements more and hold the depth of field. When the shots are shorter, we can steer the viewers' eyes to a specific subject in the frame by limiting depth of field or underlighting... again, while still following the basic rules.
I light 3D differently than 2D film and video. More front lighting over the entire scene helps in creating a "painting" for people to scan and enjoy across the whole frame, while more modeling can work for isolating subjects in a frame. A lot of color in a frame is very helpful for distinguishing layers and makes the 3D easier to see than a frame full of white or shadows. A good example of a lack of color being hard to watch was the commercial during this year's Super Bowl with almost everything white.
Max is right about beam splitter rigs: if done right, they can be quite manageable, with the proper servos and adjustments to create any interaxial and convergence appropriate for the shot, as long as it can be done quickly and smoothly so as not to hold up production. Side-by-side rigs can be so much faster to shoot with, but obviously there are a few limitations to the shots you can get. Most of these can be worked around (as we have done many times) and some can be adjusted in post. So, essentially, there is no reason to get locked into trying to have the "perfect" rig or perfect lighting. It is good to try different techniques and lighting schemes and know the audience will enjoy what you provide them (again, just following some of the basic rules).
Copyright © CML. All rights reserved.