Home of Professional Cinematography since 1996

Lighting for 3D

Published: 13th August 2009


I'm not sure anyone has brought up this question yet, but I'm curious:


How (if at all) does/can/should lighting for 3-D differ from that for 2-D films?


Obviously 3-D features have to be viewable in 2-D, so the expected answer would be that one should stick to conventional lighting techniques. But are Directors of Stereography <grin> sometimes tempted to dispense with, or modify conventional lighting techniques that are intended to provide "3-D" modeling, separation of planes, etc, because the stereoscopic image no longer requires them?
Is it ever necessary, for example, to tone down contrast ratios because the side of a face in deep shadow contains little or no stereo information?
I'm kind of shooting in the dark here and oversimplifying in order to start some discussion. There are many issues here I haven't even thought of, so I'm not sure what to ask...


Dan Drasin
Producer/DP
Marin County, CA



Only as pertains to light and lighting:


Be aware of possible polarization problems with mirror rigs, where light can be seen on a reflective surface in one eye but not the other. Vertical shading and flare are also concerns. One is unlikely to be able to shoot a spatially coherent lens flare. This problem is exacerbated by the general desire to stay on wider lenses. Generally speaking, anytime you're seeing something in one eye that's invisible in the other, you have a problem. So you want to be careful with back lighting especially. Be sure to check what both eyes are seeing.
Also, watch out for high-contrast edges, except at convergence (screen plane, where L&R images are coincident). They can create visible "ghosting", which is just the normal percentage of leakage from one eye to the other made visible by extreme brightness. The more diverged the image, the bigger the potential problem.


This is the main reason one ends up with multiple 3D masters, "ghost-busted" RealD vs. Dolby, etc. The best fix, a real-time ghost-busting box calibrated to the room, sitting between the server (playing a single neutral master) and the projector, isn't an available option, because the image is decrypted in the projector media block. If one could get a clean SDI send out of that, well, why bother encrypting?
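To make the leakage idea concrete, here is a toy sketch of crosstalk precompensation, under a simple linear assumption (a fixed fraction of each eye's image leaks into the other). This is not RealD's or Dolby's actual algorithm, just an illustration; the `leakage` value is hypothetical:

```python
import numpy as np

def ghost_bust(left, right, leakage=0.04):
    """Precompensate a stereo pair for projector crosstalk.

    Assumes each eye sees its own image plus `leakage` times the
    other eye's. Subtracting the predicted ghost (and clipping at
    black) approximately cancels it. Real systems calibrate the
    leakage to the room, as described above.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    out_l = np.clip(left - leakage * right, 0.0, 1.0)
    out_r = np.clip(right - leakage * left, 0.0, 1.0)
    return out_l, out_r
```

The clip at zero is why high-contrast edges against black are the hard case: you can't subtract a ghost from a pixel that is already at black.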


The old saw of grading hot for stereo will slowly fade away as projection gets brighter, but as of this date, it's still an issue. The projector's wearing shades, and so are you, so compensation must be made, and, until 3D home video reaches some kind of critical mass, movies will go out on Blu-Ray and DVD as red/cyan anaglyph (among other choices) for homes without a stereo-capable display. In other words, probably for most of the next decade. With anaglyph, you want to avoid either bright red or cyan in the image, so that one isn't forced to desaturate to maintain the stereo and keep it watchable. A red object is bright in the left eye and dark in the right. It's a very unsatisfactory system, and there's some despair among 3D professionals that it's still being used at all.
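The "red object is bright in the left eye and dark in the right" behavior falls directly out of how a naive red/cyan anaglyph is assembled. A minimal sketch (real anaglyph encoders do more sophisticated color mixing to reduce retinal rivalry):

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Naive color anaglyph: the left eye's red channel plus the
    right eye's green and blue. Through red/cyan glasses, the red
    filter passes only the left image and the cyan filter only the
    right."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]      # red comes from the left eye
    out[..., 1:] = right_rgb[..., 1:]   # green+blue come from the right eye
    return out
```

Feed a saturated red object (1, 0, 0) through this and the left eye gets full brightness while the right eye's channels are zero, which is exactly the rivalry problem that forces desaturation.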


Others may mention things I've forgotten. But apart from that, there's the old saying that good 3D starts with good 2D.


Tim Sassoon
SFD
Santa Monica, CA



Great post Tim...


>>One is unlikely to be able to shoot a spatially coherent lens flare.


As much as I enjoyed U23D, I thought the 3D lens flares were very strange. But they *did* appear to be spatially coherent... It just seemed very weird to have a third dimension to a flare.


David Perrault, CSC



David Perrault writes :


>>But they *did* appear to be spatially coherent... It just seemed very weird to have a third dimension to a flare.


The reason they worked at all was because they were stage spots coming from close to the subject. What I was talking about is the more typical case, when the sun is near the edge of frame. One of the advantages of post conversion is the opportunity to properly design things like that.


Tim Sassoon
SFD
Santa Monica, CA



Tim Sassoon wrote:


>>Generally speaking, anytime you're seeing something in one eye that's invisible in the other, you have a problem. So you want to be careful with back lighting especially. Be sure to check what both eyes are seeing.


This was the first thing that came to mind for me too. It's even more of a problem if the audience will be viewing with shutter glasses (less likely for theatrical, but possible if you're making something for other venues). A rig with two lenses close together can still have differences in flare and glare from the subject; this happens with our own eyes too, so it's not nearly so distractingly different as the case Tim mentions. With shutter glasses, though, these differences really stand out. But you'll probably be shooting for a polarized display, or some variation on anaglyph.


Steven Bradford
Seattle Washington
http://www.3dstereomedia.com



Tim Sassoon writes:


>>Be aware of possible polarization problems with mirror rigs, where light can be seen on a reflective surface in one eye but not the other.


Is this an actual problem? I mean, light reflected from silvered surfaces isn't polarized. Granted, the mirror may be only half-silvered, but the difference between even a half-silvered surface (nonpolarizing) and the surface of the glass itself (polarizing) would be huge, no?


>>One is unlikely to be able to shoot a spatially coherent lens flare.


Gotcha.
Good post, Tim. Thanks.


Steven Bradford writes:


>>A rig with two lenses close together can still have differences in flare and glare from the subject, obviously, as this happens with our own eyes.


Glare, perhaps... and maybe floaters, astigmatism, etc. But single-element protein lenses don't flare!


Dan Drasin
Producer/DP
Marin County, CA



Tim Sassoon wrote:


>>The reason they worked at all was because they were stage spots coming from close to the subject. What I was talking about is the more typical case, when the sun is near the edge of frame. One of the advantages of post conversion is the opportunity to properly design things like that.


I have found that it helps to index the iris leaves of the two lenses to the same radial symmetry, so the points of the flare are coincident with one another.


Tim Sassoon wrote:


>>Others may mention things I've forgotten. But apart from that, there's the old saying that good 3D starts with good 2D.


Your tutorial on 3D lighting is absolutely spot on! The only thing I could add, which is obvious to most of us, is that one needs twice the light when shooting with beamsplitter rigs.


Max Penner
CTO/Stereographer
Paradise F.X. Corp.
7011 Hayvenhurst Ave. Suite A
Van Nuys, Ca. 91406
www.paradisefx.com


Daniel Drasin writes:


>>Is this an actual problem?


Yes. For example, there are a number of shots in the IMAX film "Wild Ocean 3D" (I did the digital cinema version) where reflections off the side of wet tidal-pool rocks are prominent in one eye but invisible in the other. Bright, and dark. It's annoying to watch, and hard to fix.


Tim Sassoon
SFD
Santa Monica, CA



Tim Sassoon writes:


>>there are a number of shots in the IMAX film "Wild Ocean 3D" (I did the digital cinema version) where reflections off the side of wet tidal pool rocks are prominent in one eye, but invisible in the other. Bright, and dark. It's annoying to watch, and hard to fix.


So to prevent that, I spoze you'd need to put a weak polarizer on the "straight through" camera and a compensating ND on the "reflected" camera. Does anyone do that? Or would that create other undesirable artifacts? (...apart from the obvious additional light loss and more glass surfaces to deal with.)


Dunno if this has been discussed (and this may be an extremely naive question), but is there any way of shooting with a wide, fixed interocular distance and then synthesizing interocular variations in post? I imagine this could be done if you started with THREE side-by-side cameras, which would give you the ability to look around objects to fill in background details. The center camera alone would be your 2-D view, and the side cameras would contribute to the synthesized 3-D product, whose virtual eye positions would end up somewhere between the center and side cameras. (I imagine that these newfangled fly's eye sensor systems work somewhat in this way when creating a stereo image.)
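The look-around idea above can be sketched with a toy disparity-based view interpolation. This is a deliberately simplified 1-D model, not any shipping product's method: each pixel is shifted by a fraction of its disparity toward the other camera's viewpoint, and the holes that open up are exactly the occluded regions a third, outboard camera would have to fill in:

```python
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Toy 1-D view interpolation from one image plus per-pixel
    disparity. alpha=0 reproduces the left view; alpha=1
    approximates the right view. Returns the synthesized row and a
    mask of which output pixels were actually filled -- unfilled
    pixels are occlusion holes."""
    out = np.zeros_like(left)
    filled = np.zeros(left.shape, dtype=bool)
    for x in range(left.shape[0]):
        nx = int(round(x - alpha * disparity[x]))  # forward-warp the pixel
        if 0 <= nx < left.shape[0]:
            out[nx] = left[x]
            filled[nx] = True
    return out, filled
```

Intermediate values of alpha give the synthesized interocular variations; the `filled` mask shows where background data from the side cameras would be needed.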


Dan "Gyro Gearloose" Drasin
Producer/DP
Marin County, CA



Daniel Drasin writes:


>>is there any way of shooting with a wide, fixed interocular distance and then synthesizing interocular variations in post?


I'm of the opinion that 3D won't take off as a shooting medium until one is barely aware that the camera is capturing depth information to reconstruct stereo pairs later; that there's essentially no difference between a stereo and mono camera. Just as it was with color film - didn't fly until monopack. Plus, autostereoscopic displays particularly don't want binocular stereo. And it should be adjusted at the display.


Tim Sassoon
SFD
Santa Monica, CA



I tend to agree. That's why the future is a pair of small side by side sensors such as the SI-2K Mini or the prototype Micros they've shown. Or a very large single sensor onto which a pair of lenses can be projected side by side and later combined, such as on the Phantom 65 or some of the larger RED models planned for the future. Make it just one relatively small camera that works in a subtle fashion and you're good to go.


Mitch Gross
Applications Specialist
Abel Cine Tech



Whilst we were shooting The Dark Country and were worried about multiple flares from car headlights causing problems, Ray Zone, the grand old man of 3D, said not to worry, because it was precisely that effect, different eyes seeing different flares, that made diamonds appear to sparkle!
I'm not posting on anything 3D at the moment because I haven't seen the final result of DC.
After what happened with Mutants I've learned not to comment until they finish screwing the images up!!!

Geoff Boyle FBKS
Cinematographer
EU Based
Skype: geoff.boyle
US +1 818 574-6134
UK +44 (0) 20 7193 3546
mobile: +44 (0)7831 562877
www.cinematography.net




Mitch Gross wrote:


>>Make it just one relatively small camera that works in a subtle fashion and you're good to go.


Alas, I wish it were as simple as small side-by-side sensors or discrete stereo pairs sampled from a single larger sensor, but it gets very difficult to achieve interaxials less than 65mm if one also requires decent spatial resolution and good lenses of a decent speed. Narrative stereo moviemaking often requires interaxials less than 65mm. Good closeups in stereo often require interaxials of 40mm-ish and other shots even less, and that can only be achieved by the "big iron", i.e. beamsplitter rigs. That's why beamsplitter rigs are so pervasive and why side-by-side systems are unlikely to replace them.


It's nice to have both, but beamsplitter rigs are ultimately more useful IMO. I wish it were not so.
For other types of stereo production (sports, etc.) hyperstereo is often necessary so side-by-side rigs are perhaps more useful.


Stereoscopic production currently requires a range of camera solutions, not unlike other kinds of production. If there's any doubt of that, look at Pace or 3ality's range of camera rigs.


Greg Lowry
Scopica 3D | Scopica Inc.
Vancouver



Tim Sassoon wrote:


>>Others may mention things I've forgotten. But apart from that, there's the old saying that good 3D starts with good 2D.


I'm late to this discussion, but I often think that masters of black-and-white cinematography would make the best stereo cinematographers, because it was essential for them to create separation and planes of visual interest through lighting. It's a misconception to think that this is a de facto result of stereoscopic cinematography, so one can skimp on the lighting. Good lighting tremendously enhances the stereo effect. For example, I think Allen Daviau's studio lighting style, which (for me) represents the best of new and the best of classic technique, would be very good for stereo.


There are many so-called extrastereoscopic depth cues for stereo vision. A little (or a lot) of camera movement also enhances the effect, but as with 2D it should be motivated by the story rather than gratuitous. That said, a creeping dolly move or slow boom up or down can subtly but effectively punctuate the stereo effect. If it's not intrusive, I think it's preferable to a locked off shot.


I increasingly read opinions that deep depth of field isn't necessary for good stereo and that shallow depth of field can be useful for stereo in the same way that it helps define the focal point for the viewer in 2D. I sometimes wonder if this is a bit of a rationalization made necessary by larger sensors with 35mm-style depth of field, or by low-light situations. As always, IMAX 3D is very instructive with respect to the extremes of stereo cinematography. Given the size of the format (and I'm referring to 65/70mm 15-perf), depth of field is always an issue, but the use of wide lenses and mostly exterior situations helps with DOF. The fact that shots in IMAX (2D and 3D) tend to be longer than in other cinema formats allows the viewer to "explore" the frame. It's my opinion that part of the magic of 3D is being able to explore the contents of the frame beyond the intended point of interest (presumably the zero-parallax point). If the foreground or background (or both) are out of focus, it feels unnatural, because the viewer "expects" to be able to shift their attention to other parts of the frame and have whatever they look at be in focus. If it's not, it impairs the illusion (the suspension of disbelief, if you prefer).


The same goes for temporal resolution. Too much motion blur impairs viewers' ability to fuse stereo images, and without fusion there is no stereo perception. But these are just my opinions. If good cinematographers never stop learning, then that certainly applies to stereographers. The "rules" for stereo are rightfully a moving target, and different philosophies and interpretations ensure that there will always be a great diversity of styles. Vive la différence, as long as it doesn't make the audience physically uncomfortable!


Lenny Lipton's first rule of "Do No Harm" is a good one. And by the way, Lenny Lipton's blog is very instructive and thought-provoking, even if one doesn't always agree with his conclusions.

http://lennylipton.wordpress.com/


I appreciate that he generously shares his knowledge and experience, even if his math formulae leave me slackjawed and feeling like an idiot. (I know, if the shoe fits ....)


I think the very clever Tim Sassoon is right about how the process needs to be simplified to a single camera recording depth information. My only reservation is that stereo then largely becomes a post process. There goes the fun! If we can somehow get "live" images from such a system, the StereoSassoonoVision system is the future.


Greg Lowry
Scopica Inc. | Scopica 3D
Vancouver



>>StereoSassoonoVision system is the future


I generally refer to it as "InsufferableNincomScope" myself. Consultants will tell you that companies and products with personal names attached are harder to sell. And I'm not as smart as I look, which is doubly unfortunate.


Tim Sassoon
Sassoon Film Design
Santa Monica, CA



Tim Sassoon wrote:


>> Consultants will tell you that companies and products with personal names attached are harder to sell. And I'm not as smart as I look....


Hence why, even with this revealing insight, you still name your company Sassoon Film Design.... ;-)))
I'll go back to my corner now.


Mike Most
Technologist
Woodland Hills, CA.



>>My only reservation is that stereo then largely becomes a post process. There goes the fun! If we can >>somehow get "live" images from such a system


But more seriously: it wouldn't be entirely a post process, and one should be able to preview on-set, just as one can de-mosaic in real time (isn't that just a fancy name for Jews for Jesus?).
AFAIK a considerable number of _current_ autostereoscopic displays prefer RGB plus depth over binocular, because they need to generate a dozen or so views. I'm not that up on autostereo, so I count on someone like Bernard setting me straight, but I believe I'm correct in outline if not detail.
The big question is, how to generate depth information, and in what form to store it? IMHO either EXR or another file format of the same name will probably be the answer to that and the HDR question.


Mike Most writes :


>>"Hence why, even with this revealing insight, you still name your company Sassoon Film Design.... ;-)))"


You have noticed the madness to my method.

Tim Sassoon
"Those who have forgotten the past must live for the future"
Santa Monica, CA



Tim Sassoon wrote:

>> I generally refer to it as "InsufferableNincomScope" myself.


"InsufferableNincomScope" ... hard to fit that on a marquee.
NincomScope does have a certain je ne sais quoi though.

>> And I'm not as smart as I look, which is doubly unfortunate.


Some lines can't be topped. You get the last word.


Greg Lowry
NincomScopica Inc. | NincomScopica 3D
Vancouver



Michael Most wrote:


>> Hence why, even with this revealing insight, you still name your company Sassoon Film Design.... ;-))) I'll go back to my corner now.


End of Round 1. Most: 1. Sassoon: 0.


Greg Lowry
Scopadope
Vancouver



Tim Sassoon wrote:


>>The big question is, how to generate depth information, and in what form to store it? IMHO either EXR or another file format of the same name will probably be the answer to that and the HDR question.


It sounds like you feel that the ultimate answer will be largely photogrametrically (is that a word??) based, rather than some other type of technology. Do you not see a future for, oh, I don't know, maybe some variation on a fast scanning sonar technology? I agree that a system such as Lidar probably carries with it some potential for eye
damage (wouldn't want to blind an actor just to figure out how far from the camera they are.....), but I would think that technologies other than post intensive image analysis could be developed and portabilized to either fit in or be attached to a camera device, possibly working in conjunction with a few other such devices strategically placed. Just thinking out loud...


>>You have noticed the madness to my method.


My brain is working a bit slowly today..
Mike Most
Technologist
Woodland Hills, CA.



Mike Most writes:


>>It sounds like you feel that the ultimate answer will be largely photogrametric


Could be, and we're doing an ass-backwards version of that to derive depth from 2D scenes now. Could be sonar, as you suggest. Personally, I think HDR light field capture is where one really wants to end up.

All I really know is that binocular capture isn't going to cut it for long, or become widespread, any more than 2-strip Technicolor did for color; nor is 3D going to remain primarily a cartoon medium. And I heard essentially that from several major players at a meeting just this morning.


Someone in the R&D department needs to get their thinking cap on.


Tim Sassoon
SFD
Santa Monica, CA



A few years ago at NAB, an Israeli company whose name escapes me showed a camera that simultaneously captured a Z-depth channel. They were using the Z info to isolate objects and extract them from a background: chroma key without a color background. Can't recall if the Z info was sonar-derived or not.

Anyone recall this or the company name?


Jim Reed
Online editor
501 Post
Austin, TX



Jim Reed wrote:


>>A few years ago at NAB, an Israeli company whose name escapes me showed a camera that simultaneously captured a Z depth channel. They were using the Z info to isolate objects and extract them from a background. Chroma Key without a color background.


I've seen such systems demonstrated at various Siggraph shows, but always as a technology demo, not an actual product. And always with a static camera.
Mike Most
Technologist
Woodland Hills, CA.



Mike Most writes :


>>Do you not see a future for, oh, I don't know, maybe some variation on a fast scanning sonar technology?


As Polaroid found out some years ago when they introduced their sonar autofocus, it won't work through windows. But to cut to the chase here, we've already seen the Adobe multi-lens fly's-eye system and that other similar one (from Stanford, was it??) ... but we haven't heard much about them lately. Has anyone heard any recent scuttlebutt?


Dan Drasin
Producer/DP
Marin County, CA



Daniel Drasin wrote:


>>As Polaroid found out some years ago when they introduced their sonar autofocus, it won't work through windows.


Picky, picky.
Besides, doesn't everyone put in CG windows these days ;-) ?

Mike Most
Technologist
Woodland Hills, CA.


Jim Reed asked:


>>A few years ago at NAB, an Israeli company whose name escapes me showed a camera that simultaneously captured a Z depth channel... Anyone recall this or the company name?


This was 3DV's ZCam. They demo'd that system for several years, but the matte edges were never good enough to replace a traditional chroma key (IMO) and they never did enough else with it, although everybody knew it had potential.
Finally, in 2007, they realized they were chasing the wrong market, repackaged it as a consumer gaming device, then signed some deals that culminated in the recent purchase of the company by Microsoft. The technology is believed to be the core of Microsoft's "Wii killer".


Glenn Woodruff
Cine-tal Systems
Indianapolis, IN, USA



Greg Lowry wrote:


>>Narrative stereo moviemaking often requires interaxials less than 65mm. Good close-ups in stereo often require interaxials of 40mm-ish and other shots even less, and that can only be achieved by the "big iron", i.e. beam splitter rigs.


Beamsplitter rigs do not have to be "big iron". I've been using a 2K beamsplitter rig that works with Steadicam or handheld for over a year. The 2K rig is equipped with remote-controlled focus, iris, convergence, interaxial, and record on/off, plus a V-block battery and primes (8mm, 12.5mm, 16mm, 25mm, 35mm, 50mm, all Linos Rodenstock lenses, T1.8, with IMS mount).


The rig weighs a total of 19.5 lbs and fits in a 10"x10"x10" space. The interaxial flies from 0 to 63.5mm and will converge (toe in) from parallel to as close as is necessary. Convergence and interaxial can be programmed to track a specific distance, or run independent of one another. Focus, iris, and record are controlled with a single remote handset, and interaxial and convergence from another remote handset.


The rig has evolved through input from three live-action features shot last year. The 2K beamsplitter has progressed to a Mark III design and will soon transform into a 3K rig, hopefully by late summer.
The rig has an operator's viewing screen and a remote stereo overlay for real-time conv/IA adjustments while operating.


Last week the rig was used inside a Bradley personnel carrier, photographing combat personnel in close quarters with the door closed, in low light. The interaxial distance was approximately 8mm and the angle of convergence was 0.8 deg. The stereo images captured in the tank are immersive, claustrophobic, and three-dimensional.
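For readers following along, the relationship between interaxial, toe-in angle, and convergence distance can be sketched with simple symmetric-toe-in geometry. This is a back-of-envelope model, not Paradise FX's actual control math; under it, Max's 8mm interaxial and 0.8-degree convergence would put the crossing point a bit over half a metre out:

```python
import math

def convergence_angle_deg(interaxial_mm, distance_mm):
    """Total toe-in angle (degrees) for two cameras converged on a
    point at `distance_mm`: each camera rotates atan((ia/2)/d)
    toward the centerline."""
    half = math.atan((interaxial_mm / 2.0) / distance_mm)
    return 2.0 * math.degrees(half)

def convergence_distance_mm(interaxial_mm, total_angle_deg):
    """Inverse: the distance at which the two optical axes cross."""
    half = math.radians(total_angle_deg) / 2.0
    return (interaxial_mm / 2.0) / math.tan(half)
```

For example, `convergence_distance_mm(8.0, 0.8)` comes out around 570mm, a plausible close-quarters subject distance.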


Max Penner
CTO/Stereographer
Paradise F.X. Corp.



Max Penner wrote:


>> Beam splitter rigs do not have to be " Big Irons".


Yes, indeed, Max. The "big iron" term wasn't meant literally, although there are still plenty of big rigs (sounds like an 18 wheeler?) out there. My main point was that beamsplitter rigs still rule and that side-by-side rigs, however small, can't achieve small interaxials.


Greg Lowry
Scopica Inc. | Scopica 3D
Vancouver



I've been reading all your posts, and I see that you are pretty much ALL right.


Your brain forgives so much in 3D, and there are so many ways to shoot and light; none of them is completely wrong when you follow the common-sense rules. The general public still isn't fully trained in what "good" 3D is. I've seen them ooh and aah while watching reversed L-R. In the over 50 shows I've shot, we've lit and composed scenes in many different ways. What makes it more interesting for some people may be different for others.


I've found that some of the more attention-holding scenes we have shot have many layers of depth cues, with slight camera movement in any direction to see the layers shift. If the camera is not moving, the picture starts going flat and all the work you did setting up the scene is wasted.

We have found the audience tends to scan a 3D frame a lot more than a 2D frame, so if the scene is long enough I tend to light those other elements more and hold the depth of field; when the shots are shorter, we can steer the viewers' eyes to a specific subject in the frame by limiting depth of field or underlighting, again while still following the basic rules.


I light 3D differently than film and video. More front lighting over the entire scene helps create a "painting" for people to scan and enjoy the entire frame, while more modeling can work for isolating subjects in a frame. A lot of color in a frame is very helpful for distinguishing layers and makes 3D easier to see than a frame full of white or shadows. A good example of lack of color being hard to watch was the commercial during this year's Super Bowl with almost everything white.


Max is right about beam splitter rigs...if done right, they can be quite manageable with the proper servos and adjustments to create any interaxial and convergence appropriate for the shot. As long as it can be done quickly and smoothly so as not to hold up production.


Side by side rigs can be so much faster to shoot with, but obviously there are a few limitations to the shots you can get. Most of these can be worked around (as we have done many times) and some can be adjusted in Post.


So, essentially, there is no reason to get locked into trying to have the "perfect" rig or perfect lighting. It is good to try different techniques and lighting schemes, and to know the audience will enjoy what you provide them (again, just following some of the basic rules).
IMHO.

Eric P. Bakke
Director of Imaging and Technology
www.3DStereomedia.com
cell 818-416-3742