I am interested in how other professionals think about 'softening the cuts', especially in stereo 3D that is shot with cameras. Sometimes you have to cut from an object far away in Z-space to an object closer to the audience. It is said that this can cause eyestrain, because in real life the eye never has to do this: no object jumps from far away to very close in zero seconds. I can imagine that this is true, but who of you has experienced this? How do we know that this is the case?
Let's suppose it is true that it causes eyestrain. Then a solution: a couple of frames before and after the problematic cut, you bring the two objects on both sides of the cut closer to each other in z-space. That is done by shifting the left- and right-eye images closer to each other, or further apart. There is postproduction equipment that does this very well and very simply.
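As a sketch of the principle (not of any particular postproduction tool), the shift can be illustrated in a few lines of NumPy; the function name and the whole-pixel `np.roll` shift are my own simplifications:

```python
import numpy as np

def shift_convergence(left, right, offset_px):
    """Horizontal image translation (HIT): shift the two eyes'
    images apart by offset_px in total. With screen parallax
    defined as x_right - x_left, a positive offset increases
    parallax (pushes the whole scene back) and a negative offset
    pulls it toward the viewer. np.roll is a crude stand-in for a
    proper resample; real tools crop or refill the revealed edges."""
    half = int(round(offset_px / 2))
    l = np.roll(left, -half, axis=1)
    r = np.roll(right, half, axis=1)
    return l, r
```

A feature that started with 1 pixel of parallax ends up with 1 + offset_px after the shift, which is all the "bringing objects closer in z-space" amounts to at the image level.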
Now I get to my final questions: how necessary do you think this 'softening the cuts' is? And do you think it is only necessary for a feature-length film, or also for a five or ten minute film?
And by the way, I am also interested in how CGI people do this; they also have problematic cuts in their lives.
Best regards,
Ivo Broekhuizen
www.amilliondreams.com
If you have a pair of anaglyph glasses, watch this video in 3D.
http://www.youtube.com/watch?v=xn-0AwdCE98
At about 4:52 into the film, you will see that even though both the previous subject and the next scene's subject were at about the same Z-depth (negative Z), AND the cut was somewhat softened by a fade to black, there is still a bit of "jarring" on the eyes.
My observation is that it's harder to focus across a cut if the subject matter is in negative z-space AND the subject matter is not easily recognizable at a quick glance (for example, the brain and eyes search for something to lock onto in the leaves that the softened cut reveals).
I suppose if the scene following the cut has a distinct, easily recognizable subject, and the z-depth is approximately matched, a quicker cut is possible.
Regards,
Clyde DeSouza
Real Vision, Dubai UAE
www.realvision.ae
>> how necessary do you think this is, this 'softening the cuts'? And do you think it is just necessary for a feature length film, or also for a five or ten minute film?
Not to confuse terminology: a 'soft cut' is something that already exists in editorial vernacular and would therefore also exist in stereoscopic editorial vernacular.
But this cut-to-cut "z-space handoff" is part of good depth grading and should be seen as such. It shouldn't be seen as something that you only do on pieces of a certain length. It reduces eyestrain *and* increases the speed at which the viewer can resolve the scenes. I would say that a short piece makes it just as important to do, because you want your viewer to spend as little time as possible trying to resolve the image when you already have only a short time to get your visual message across.
Eric Deren
Dzignlight Studios
VFX & Animation Design
www.dzignlight.com
+1-404-892-8933
Softening the depth jumps (depth grading or depth blending) is essential, in my opinion, to creating good-looking movies. We do this all the time at Dreamworks Animation.
My general guidance is...
1. Adjust the incoming shot, not the outgoing shot. Your brain can see the outgoing space change but is still adjusting to the incoming shot.
2. Blend the depth over 10 - 20 frames.
3. Try to disguise it within a camera move or character action.
4. Locked camera shots are very hard to blend unless it is very gentle so generally I leave them alone.
But really, you can do anything you want to make the two shots cut at the same depth, and as long as you can't see the cheat, it is good.
I don’t think it is important to cut at the screen although that is normally where you end up. You just need to match depth.
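As a rough sketch of guideline 2 (a hypothetical helper, not DreamWorks' actual pipeline), a ramp of convergence offsets applied to the incoming shot might look like this; the smoothstep easing is my own choice:

```python
def depth_blend_offsets(mismatch_px, n_frames=15):
    """Per-frame convergence offsets for the *incoming* shot.
    At the cut (frame 0) the full mismatch_px offset makes the
    incoming shot match the outgoing shot's depth; the offset then
    eases back to zero (the shot's native depth) over n_frames,
    in the 10-20 frame range suggested above. Smoothstep keeps the
    start and end of the blend gentle."""
    offsets = []
    for i in range(n_frames):
        t = i / (n_frames - 1)
        s = t * t * (3.0 - 2.0 * t)  # smoothstep easing, 0 -> 1
        offsets.append(mismatch_px * (1.0 - s))
    return offsets
```

Each offset would then be applied as a horizontal image translation on that frame of the incoming shot.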
Phil Mcnally
Stereoscopic Supervisor
Dreamworks Animation
I think the issue with this cut may be more the stereo settings on the incoming shot. It looks to me like they set up for the wide then zoomed in, *without* changing the interaxial to compensate for the longer focal length. The result is that there is a LOT of negative parallax on the incoming shot which makes it difficult to look at, even with a fade to black in between.
You can test the principle of cutting in 3D by holding up your finger as close to your nose as you can while still retaining focus on it. Then look at a background object in the distance. It takes about half a second to adjust, doesn't it? If that were a 12-frame cutaway in an action scene, your audience just missed the whole shot.
Cheers
Markus Stone
"Stereo blending" is the term Phil "Cap't 3D" McNally uses when we talk about the cut-to-cut "z-space handoff". It is much easier to do on a CG film, but not impossible in live action. The tools are much better now than when we used the image axis offset tool in Scratch on "Journey 3D" in the winter of 2007.
In live action you also need to fix any vertical offsets between the eyes. It seems like a long time ago in stereo years, now that so much more 3D product comes out every two weeks. And I've only been doing exclusively stereo films for two and a half years.
Time flies when you’re having fun.
best,
Jeff Olm
Stereo Colourist
Dreamworks Animation
LA. CA
>> Much easier to do on a CG film. But not impossible in live action
Being able to design the stereo space is a prime motivation to go 2D to 3D conversion.
Tim Sassoon
SFD
Santa Monica, CA
>>Being able to design the stereo space is a prime motivation to go 2D to 3D conversion.
Fascinating - first I've heard of that motivation, but it makes sense.
But I thought you could tweak the depth in post from shot 3D with offsets? No? What complications?
Mike Curtis
hdforindies.com
Mike Curtis wrote:
>> But I thought you could tweak the depth in post from shot 3D with offsets? No? What complications?
You can alter the convergence point, but you're adjusting the entire image when you do that. What Tim is talking about is being able to decide where individual objects are placed, much as you can do with CGI animated material. And he's absolutely correct, because when you do a 3D conversion, you essentially separate individual objects and assign depth to them individually, giving you the ability to "design" the 3D space and make it deeper or more shallow both on an overall and an individual object basis. That's a very simplistic description, but it's what he's basically referring to.
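A toy sketch of that difference (hypothetical names, a single scanline instead of real images): in a conversion, each separated object gets its own disparity, whereas a convergence change would apply one global shift to the finished pair.

```python
import numpy as np

def place_layers(layers, disparities, width):
    """Toy one-row 2D-to-3D 'conversion': each isolated layer is a
    list of (x, value) samples and is rendered into the right eye
    with its own horizontal disparity, so every object can sit at
    its own depth. Layers are composited back-to-front, later
    layers painting over earlier ones."""
    left = np.zeros(width)
    right = np.zeros(width)
    for layer, d in zip(layers, disparities):
        for x, v in layer:
            left[x] = v
            right[(x + d) % width] = v  # wraparound stands in for edge fill
    return left, right
```

With per-layer disparities of, say, 3 and -1, one object lands behind the screen and another in front of it in the same frame, which a single global offset cannot do.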
Mike Most
Technologist
Woodland Hills, CA.
Also, don't forget that at $1000 per second of conversion, a one-second "soft cut" is still decent money.
I'm not even going to get into the other "issues" of 2d to 3d, since in this context we are rightfully using 2d-->3d as a special effect only.
Regards,
Clyde DeSouza
Real Vision,
Dubai, UAE
http://www.realvision.ae/blog
Michael Most wrote:
>> ....when you do a 3D conversion, you essentially separate individual objects and assign depth to them individually, giving you the ability to "design" the 3D space and make it deeper or more shallow.
Well I'm proposing Hollywood shoots everything in black and white, then colorize it to have full control of the colours. HEY, NO ONE STEAL THAT IDEA!..Oh, wait a minute...
JCarbonetti
Stereoscopic (black & white only) Supervisor
L.A.
Jim Carbonetti wrote:
>> Well I'm proposing Hollywood shoots everything in black and white, then colorize it to have full control of the colours...
I know this is intended (probably only partially) as a joke...
I wasn't putting an overall value judgement on the ability to design 3D space in post. I was simply pointing out that it is an advantage of using a conversion approach - which it is. There are also disadvantages. It all depends on the needs of the project, creatively, practically, and financially.
Horses for courses.
Mike Most
Technologist
Woodland Hills, CA.
>> I was simply pointing out that it is an advantage of using a conversion approach - which it is. There are also disadvantages...
As Mike said, "cel-shifting" the convergence just ratchets the stereo space forward or backward. It's an essential technique, but doesn't solve every problem.
When considering conversion, one must remember that stereoscopic shooting for feature films isn't cheap, either. The decision is a balance of factors like film vs. digital, shooting schedule, location difficulty, Producer/Director/DP/Editorial/Studio comfort factor, percentage of VFX shots, hand-held, flare control, shading control, being able to do more extreme stereo effects because one knows one can manage the cuts, mixed stereo base, and on and on. The majority of the work we get is not end-to-end, but rather a handful of hard-to-shoot 2nd unit shots in an otherwise stereo film.
For manipulating stereo space, there's also The Foundry's Ocula. The main limitation is that, because it works by building a disparity map from correlated points between the two eyes, surfaces that only appear in a single eye cannot be correlated, and have to be dealt with by hand. Which is often 25% of the frame if you think about it. The Foundry, to their credit, takes pains to explain this in demos.
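The single-eye-surface limitation is easy to see in a naive block-matching sketch (an illustration of disparity mapping in general, not of Ocula's actual algorithm):

```python
import numpy as np

def disparity_map(left, right, max_d=8, win=3):
    """Naive scanline block matching: for each left-eye pixel, find
    the horizontal offset into the right eye with the smallest sum
    of absolute differences over a small window. Pixels visible in
    only one eye (occlusions) still return *some* offset, just a
    meaningless one -- the failure mode that has to be fixed by
    hand, as described above."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(win, w - win - max_d):
            patch = left[y, x - win:x + win + 1]
            costs = [np.abs(patch - right[y, x + d - win:x + d + win + 1]).sum()
                     for d in range(max_d)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

On a synthetic pair where the right eye is the left shifted by a constant, the correlated regions recover that shift exactly; real footage only cooperates where both eyes actually see the same surface.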
Since we have the capability to do full conversion pretty efficiently, we haven't found half-measures like that worth messing with. You get an easy ride up to a certain point, but if that's not sufficient, you have to start all over again.
Binocular stereo acquisition should be superseded by monocular RGB+Z streams as soon as possible, and binocular stereo transport standards should also have a finite useful lifetime. Change will be driven by the same needs that kept colour primarily a cartoon medium for thirty years until monopack colour negative became widely available, and by the need to feed autostereoscopic displays of widely differing sizes and types with a single program master.
I tell people that doing 3D conversion now is doing "the right way the hard way".
An important concept is the difference between "stereoscopic" and "3D". One is a subset of the other. 3D is "scene-referred", stereoscopic is not.
Tim Sassoon
SFD
Santa Monica, CA
Tim Sassoon wrote:
>>Being able to design the stereo space is a prime motivation to go 2D to 3D conversion.
I think the initial issue is the current lack of a comprehensive and concise way to model and express all of the issues, including perceptual and technical, but perhaps the most nebulous ones are the temporal.
2D to 3D conversion doesn't let stereo live up to its potential.
We will be able to appropriately design the stereo space on live action shoots once we have suitable models and preferences. Some stereographers may have it totally nailed already, but I doubt it.
I'll start to think live action pipelines are maturing once overscan is a standard feature from start to finish.
One of the largest variables is the final cut, which is one of the temporal issues referred to above.
It is incredibly liberating to be working on a cg project as all these issues can be assessed and refined. We too have built a pipeline to deal with cuts (cut-cushioning). Overscan allows us to x-pan the images around the cuts without bringing black into frame, but perhaps more significantly it allows you to reposition the window across the entire duration of the shot if you wish.
Regarding the cushioning, I'd recommend not necessarily perfectly matching the disparity of the subject; instead, let a small step kick-start re-convergence of the eyes and pan it out as the eyes chase it at a comfortable rate.
Note that there is an asymmetry between the rate people converge and diverge (you can go cross-eyed much faster than you diverge), so the process should be adapted accordingly.
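A sketch of that cushioning logic (the step size and the per-frame rates below are illustrative placeholders, not measured vergence data):

```python
def cushion_ramp(step_px, total_px, rate_converge=4.0, rate_diverge=1.5):
    """Cushioning sketch: apply an immediate small kick of step_px
    of parallax change at the cut, then ease the remainder out at a
    per-frame rate. Negative total_px means the subject moves toward
    the viewer (convergence), which the eyes handle faster than
    divergence, so it gets the faster rate -- reflecting the
    asymmetry noted above. Returns the per-frame parallax change."""
    toward_viewer = total_px < 0
    rate = rate_converge if toward_viewer else rate_diverge
    remaining = abs(total_px) - abs(step_px)
    n = max(1, int(round(remaining / rate)))  # frames for the ease-out
    sign = -1 if toward_viewer else 1
    return [sign * (abs(step_px) + remaining * (i + 1) / n) for i in range(n)]
```

The same 20-pixel handoff therefore resolves in fewer frames when it moves toward the viewer than when it moves away.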
I'm pretty sure all live action shoots will ultimately also utilize overscan.
Tim Baier
Supervising TD - Stereoscopics
Animal Logic
> 2D to 3D conversion doesn't let stereo live up to its potential.
One can also make the reverse case.
Tim Sassoon
SFD
Santa Monica, CA
A bit late, but still:
Thanks for your reactions to my questions.
We are shooting our third short 3D film right now and using your tips in 'depth blending'.
However, one of the questions I asked is still waiting for an answer. How do we know that 'depth jumps' are irritating for the viewer? Are we sure about that?
Does anybody have proof?
Thanks,
Ivo Broekhuizen
3D Producer/director
A Million Dreams
>> How do we know that 'depth jumps' are irritating for the viewer? Are we sure about that? Does anybody have proof?
Regarding "proof", I am not aware of any double-blind scientific tests of a statistically significant population with a control group and a test group... but that doesn't mean that there aren't any.
In lieu of any scientific testing, use logic. People complain about 3D movies causing fatigue in both their eyes and their brains; there is proof of that. Fatigue is a cumulative effect of the different visual stressors associated with stereoscopic viewing. A rough depth cut singularly creates a sense of visual stress (there IS proof of that), which would add to any growing sense of fatigue. Fixing rough depth cuts would remove this particular source of fatigue, thereby making your film easier to watch.
It seems like common sense to me. Do you feel that viewer fatigue isn't something producers of stereoscopic media should be worried about?
Eric Deren
Dzignlight Studios
VFX & Animation Design
www.dzignlight.com
+1-404-892-8933
>> Do you feel that viewer fatigue isn't something producers of stereoscopic media should be worried about?
That is something everybody on this list should worry about! Actually, I am particularly sensitive to eye fatigue myself. When there is something wrong with the stereo, my eyes literally start to tear. Usually I then take off my glasses and start studying how the two images differ from each other and what is wrong with them.
However, it is not always clear what my poor eyes are suffering from. Sometimes it is the lack of light, sometimes the alignment, sometimes the vibration of the film stock. And sometimes, probably, the depth cuts. But it is hard to pinpoint after each individual cut; the crying usually starts after a couple of minutes...
Ivo Broekhuizen
Producer/director
A Million Dreams
>> How do we know that 'depth jumps' are irritating for the viewer? Are we sure about that? Does anybody have proof?
Could this be considered as proof?
Repeated Vergence Adaptation Causes the Decline of Visual Functions in Watching Stereoscopic Television
M. Emoto and T. Niida and F. Okano
Journal of Display Technology 1 328-340 (2005)
http://www.nhk.or.jp/strl/publica/labnote/pdf/labnote501.pdf
Frederic Devernay
Research Scientist, INRIA Grenoble Rhone-Alpes, France
>> Could this be considered as proof?
>> Repeated Vergence Adaptation Causes the Decline of Visual Functions in Watching Stereoscopic Television, M. Emoto, T. Niida and F. Okano
Frederic,
This looks like real, solid proof indeed! I am going to take a weekend to read this.
Ivo Broekhuizen
Producer/director
A Million Dreams
Copyright © CML. All rights reserved.