I have to start by admitting that I've been a fan of V3 for many years and have failed to persuade people to use the system. I originally wanted to use it for the background plates for DC.
The system has been described as "the Emperor's new clothes" and "the Mexican jumping rat effect", both obviously not in its favour.
Why the Mexican jumping rat effect should be a put-down when it's a proven survival trait is beyond me!!
You can see the original system at http://www.inv3.com/index.html
However, that's not what I'm posting about.
At the Santa Fe workshop last weekend Chris Mayhew showed a great new use of it and it blew me away.
What he had done was add the system to a pair of Optimo Rouge 16-42 lenses and put them on REDs in an Element Technica rig.
They then shot a number of sequences with the V3 system off and on.
No changes were made other than turning the system on or off.
With the system turned on, the 3D we saw was much rounder, had much greater depth, and was just much easier to watch than any 3D I have ever seen.
It was truly gobsmacking. I saw 3D with an IO so small, around 11mm, that most of the time you could watch it without the 3D specs and just think you were watching 2D, but put the specs on and the depth leapt out at you in a smooth, natural way.
Focus pulls worked and didn't disturb, extreme low light level with lack of DoF worked, reflections worked brilliantly.
It was a real eye-opener to me.
I want to use it for every shoot I do from now on.
As it was shot by a bunch of CML members I know, I don't think that there was any fakery!!
I'll be uploading examples to the website as soon as I figure out how to deal with them.
Good, but not as impressive, was some single-system material Chris showed, where they had taken a mono shot done with the V3 system and copied the mono image to L & R with one copy offset by 4 pixels.
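For what it's worth, that single-system trick — duplicating a mono frame and offsetting one copy horizontally by a few pixels — is easy to sketch (my own illustration in NumPy, not Vision III's actual process):

```python
import numpy as np

def mono_to_stereo(frame, offset_px=4):
    """Fake a stereo pair from a single mono frame by shifting one
    copy horizontally by offset_px pixels (the trick described above).
    The wrapped-around edge is blanked rather than left to wrap."""
    left = frame
    right = np.roll(frame, -offset_px, axis=1)  # np.roll returns a shifted copy
    right[:, -offset_px:] = 0                   # blank the invalid edge
    return left, right

# toy 8x8 "frame" standing in for a mono plate
frame = np.arange(64, dtype=np.uint8).reshape(8, 8)
L, R = mono_to_stereo(frame)
```

A uniform offset like this puts the whole scene on a single depth plane, which is why the result reads as depth-shifted 2D rather than true stereo.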
Hmm, long-lens shots with depth and no cardboarding. Some odd artifacts introduced by the system were apparent on static shots with strong straight lines, but on moving stuff........
Really worth checking out and if you're at IBC make it your first place to check out!!
Cheers
Geoff Boyle FBKS
Cinematographer
EU Based
Skype geoff.boyle
mobile: +44 (0)7920 143848
www.gboyle.co.uk
So you mean that background wobble business? What I've seen of it was queasily awful with no 3D depth at all. Apologies if I've got the wrong stick, or end, or whatever
Simon Woodgate
Stereofan
UK
>> So you mean that background wobble business?
Exactly the reply I expected!
Throw your preconceptions away and look at what it does now!
Cheers
Geoff Boyle FBKS
Cinematographer
EU Based
Skype geoff.boyle
mobile: +44 (0)7920 143848
www.gboyle.co.uk
>> So you mean that background wobble business? What I've seen of it was queasily awful with no 3D depth at all.
Remember Circle Scan 4D? http://www.epindustries.com/CircleScan4D.html
Are the oscillations on both the units in sync?
Regards
Clyde
Stereo Rebel, RealVision
Dubai
I believe this technology shows us how little we still know about spatial perception. I have not seen their 3D tech, so I cannot comment on it. I do have an intuitive grasp of how it gives you a spatial impression with 2D movies, since it simulates the way we constantly shift our perspective when looking around. I get a bit seasick watching some of the 2D examples on their website, but I also get a spatial impression.
I really look forward to seeing how this works in 3D (my initial reaction after hearing about it at NAB was negative, but I did not get to see it). One thing I learned in the past few years working with stereo material is: look first, and build your mental model (and opinion) afterwards. My usual mental model would say that a moving vertical offset is a terrible thing to have, but maybe this model is wrong?
Our understanding of how our brain perceives 3D is still at the very beginning.
Lin Sebastian Kayser, CEO
www.iridas.com
>> Are the oscillations on both the units in sync?
Yes and in opposite directions.
Geoff Boyle FBKS
Cinematographer
EU Based
Geoff Boyle wrote
>>"They then shot a number of sequences with the V3 system off and on."
Hello there:
I thought I would post some background information on the v3D examples that Geoff is uploading to the CML ftp site. The material was shot last March in Washington D.C. It was a collaborative effort by the following people:
Chris Mayhew – Vision III
Walter Pollard – Vision III
Leo Fernekes – Vision III contractor
Mark Pederson – Off Hollywood
Aldey Sanchez – Off Hollywood
Brian Garbellini – Just Cause 3D
Jason Chen – Just Cause 3D
Mike Rintoul – Element Technica
Gary Adcock – Studio37
We shot the material using Off Hollywood’s Quasar rig and Red One cameras. Vision III and Angenieux provided two Optimo Rouge 16 – 42 mm zoom lenses equipped with v3 AX4 parallax scanning units. Brian was the DP and Jason was the Stereographer. Mike, Mark, Aldey and Gary managed the cameras and rig. Brian and Jason set the shots. Leo and I set up the lenses. Walter was a producer. This was our first time working together.
Most of the images in the edited piece were captured at focal lengths between 16 and 22 mm. The disparities (aka IO/IA) were between 11 and 17 mm, although the disparity in the comparison piece was 27 mm (I need to check my notes to be sure). Two parallax scanning approaches were investigated in the comparison sequence. In the first approach, both AX4 units were scanning in a synchronized fashion in the same clockwise direction. In the second approach, the AX4 units were scanning in a synchronized fashion in opposite directions: the left lens iris was scanning clockwise and the right lens iris was scanning counter-clockwise. The longer piece was shot in the second fashion, with the lens irises scanning clockwise/counter-clockwise. The parallax scan amplitude was varied on a shot-by-shot basis and was set by eye by the DP. Most shots were captured at amplitudes of around 3 on a scale of 1 to 10. The comparison pieces were captured at f/6.7 at the following scan amplitudes:
Off example – zero (0) amplitude, no scan
Clockwise/clockwise example – 6 parallax scan amplitude
Clockwise/counter-clockwise example – 6 parallax scan amplitude
The only differences between the off/on examples are the scan amplitude, scan direction and the moment of capture. The shots were done one after another. Gary Adcock supervised the comparison shooting.
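The two synchronized scan modes Chris describes can be sketched parametrically. This assumes a circular scan path and a particular rotation convention, both my assumptions; the actual AX4 scan geometry may differ:

```python
import math

def scan_offsets(t, amplitude, mode="cw_ccw"):
    """Aperture offsets for the left/right lenses at scan phase t
    (radians), for the two synchronized modes described above.
    The circular path and rotation convention are assumptions.
    Returns ((lx, ly), (rx, ry))."""
    lx, ly = amplitude * math.cos(t), amplitude * math.sin(t)
    if mode == "cw_cw":      # both irises circle in the same direction
        rx, ry = lx, ly
    elif mode == "cw_ccw":   # right iris counter-rotates (mirrored phase)
        rx, ry = amplitude * math.cos(-t), amplitude * math.sin(-t)
    else:
        raise ValueError("mode must be 'cw_cw' or 'cw_ccw'")
    return (lx, ly), (rx, ry)
```

In this toy model the clockwise/counter-clockwise mode keeps the horizontal offsets of the two apertures matched at every instant while the vertical offsets oscillate equal and opposite, which fits Geoff's earlier reply that the oscillations are in sync but in opposite directions.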
We believe that parallax scanning improves stereoscopic perception. Scanning in a synchronized clockwise/counter clockwise fashion seems to provide the greatest visual benefit. Scan amplitudes can be reasonably small (30% or less) and still trigger a strong visual assumption on the part of the viewer.
We have added the v3D functionality to our AX3 and AX4 units. The AX3 is designed to be used with Angenieux and Fujinon ENG B4 lenses. The AX4 works with the Angenieux Optimo DP 16 – 42 mm and 40X ENG zoom lenses. We are currently developing AX4 adapters for all the other short Angenieux Optimo zoom lenses.
I have included the following to provide some background on our research.
The simple psychophysics of v3D™ (pulled from a paper I did):
Three-dimensional visual perception is a series of cognitive exercises built on fragmentary information. The human eye is continuously scanning although these actions are generally imperceptible. This scanning action is called a saccade. The saccade serves in part to refresh the image being cast onto the fovea and surrounding retina at the back of the eye.
Current psychophysical and physiological evidence suggests that vertical disparities influence the perception of three-dimensional depth, but little is known about the perceptual mechanisms that support this process. Perhaps these perceptual effects are reconciled by a specific encoding of non-horizontal parallax. Whatever the specific mechanisms are, it is clear that the motion and gaze direction of the eyes contribute significantly to the process of three-dimensional sight.
Conventional thought is that because humans have two eyes separated horizontally by an average distance of 65 mm (the interocular distance), two cameras capturing images in the same manner would work equally well. However, in practical image capture, lens distortions and misalignments can cause vertical parallax. Vertical parallax is created by a misalignment of the two cameras’ points of view. It can be a cause of eyestrain.
Conventional stereoscopic image capture goes to great lengths to avoid and/or eliminate any vertical parallax differences in the images. The stereoscopic production trend is also increasingly capturing images with disparities that are less than the human interocular (IO) of 65 mm. This trend is fuelled in part by a desire to keep the images in a comfortable range for the general viewing public. However, with less disparity comes less horizontal parallax and therefore less 3D effect. Less disparity also leads to a flattening of background scene elements. The addition of parallax scan information into the left and right image capture improves the overall perception of three-dimensionality in the final stereoscopic production. We suspect this is because the viewers have the benefit of the additional sub-process visual information with which to generate a more unified three-dimensional perception.
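Chris's point that smaller disparity means less horizontal parallax can be put in rough numbers with the textbook parallel-rig approximation d = f · IA · (1/C − 1/Z). This is my sketch; the shoot's actual convergence setup isn't stated:

```python
def disparity_mm(ia_mm, focal_mm, conv_mm, z_mm):
    """On-sensor disparity for a parallel stereo rig converged (by
    image shift) at distance conv_mm, for an object at z_mm:
    d = f * IA * (1/C - 1/Z), everything in millimetres.
    Thin-lens approximation only."""
    return focal_mm * ia_mm * (1.0 / conv_mm - 1.0 / z_mm)

# example: background at 20 m, convergence at 3 m, 20 mm focal length
d_human = disparity_mm(65.0, 20.0, 3000.0, 20000.0)  # human 65 mm IO
d_shoot = disparity_mm(11.0, 20.0, 3000.0, 20000.0)  # 11 mm, as in the shoot
```

At these example distances the 11 mm interaxial produces only 11/65ths of the background parallax a human-IO rig would, which is exactly the background flattening Chris describes, and why any extra depth cue the scan contributes would matter most there.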
Vision III is gearing up to start working with Dr. Proffitt at the University of Virginia Perception Lab to study the psychophysics of parallax scanning. The Lab’s mission is “to gain an understanding of how people perceive and think about space.” I will post links to the results of our efforts for interested parties to review.
We are in the process of editing a new Newseum piece for IBC. I will pass it along to Geoff to post when it is done. Anyway, I hope the above information is helpful. I look forward to reading your comments. Take care.
Chris
Chris Mayhew
Vision III Imaging, Inc.
8605 Westwood Center Drive
Suite 405
Vienna, VA 22182
(703) 639-0670 (O)
(703) 639-0749 (F)
www.inv3.com
http://www.youtube.com/user/v3Imaging (anaglyph examples)
>> I believe this technology shows us how little we still know about spatial perception.
I wouldn't say it shows that; it's fairly well known that a moving point of view increases spatial and depth perception.
>> I have not seen their 3D tech, so I cannot comment on it.
I saw this technology being used with S3D in the Angenieux booth at NAB 2010, and the examples they showed reminded me of exactly what Geoff mentioned: emperor's new clothes. Everyone in the booth was standing around telling me how much better things looked, and all I was seeing was something that looked like the camera support had a mild case of Parkinson's disease. I was the only non-sales person in the room, and apparently the only person with the ability to give an objective view. The oscillating motion of the footage feels like a mistake, like there were misshapen wheels on the dolly.
As an industry we have spent a lot of time trying to make our images more stable and minimize gate weave, and here is tech that offers very little (if anything) but a constant wobble. It would be interesting to see, with actual scientific testing guidelines, what kind of effect this has on a viewer's ocular fatigue on a big screen in S3D.
Lens flares, already a pain in S3D, have the opportunity to become ridiculously distracting with the V3. This technology would also be an easy way to screw over any chance of easily doing VFX or compositing fixes.
I usually try to refrain from giving negative hardware reviews but I am surprised that this product is receiving anything more than a passing glance from serious filmmakers.
Eric Deren
Dzignlight Studios
VFX & Animation Design
www.dzignlight.com
Atlanta, Toronto
>> I wouldn't say it shows that; it's fairly well known that a moving point of view increases spatial and depth perception.
Which is why S3D doesn't add much to console/PC games.
Tim Sassoon
SFD
Santa Monica, CA
Eric
I agree that the original system shows wobble; that's why I posted that it should only be used for moving shots. However, try to keep an open mind about the stereo version.
I don't think I'm that blind or dumb, and I was impressed by the material I saw.
When the large file is finally finished uploading have a look.
Geoff Boyle FBKS
Cinematographer
Sent from my iPhone
Travelling somewhere
This technology, and a lot of others, shows something interesting: people have very different viewing systems, and stereo content often fails where monoscopic content has found a consensus. A friend of mine was tasked with setting up a 2D>3D conversion department in his facility, and he ran some tests on the artists there, mostly Nuke/AfterFX operators. From the results, a third couldn't properly see stereo content, and another third were not able to "understand" and correct the stereo problems in the test. Those guys work every day on pictures, and I'd guess they view and understand (or interpret) pictures better than the average of the population.
A lot has yet to be understood about the way people interpret the signal from their two eyes. What works for some could be extremely irritating for others, like sensitivity to colour wheels or the speckle effect. Considering what is already known about other subjects in neurology and cognitive science, there is a serious chance that the disparity question will only be solved with personal viewing systems, with personalized settings. But that's not really what cinema is about; I don't know how to fit that into the theatre experience!
Maybe in augmented reality setups...
Cédric Lejeune
www.workflowers.net
La Madeleine, France
Eric Deren wrote:
>> I usually try to refrain from giving negative hardware reviews but I am surprised that this product is receiving anything more than a passing glance from serious filmmakers.
Eric,
I find it rather amusing that you say you refrain from giving negative views, as this is not the first time you have voiced a negative opinion on this technology here on CML. We are all working pros on this list, and how many times have you seen Geoff allow any imagery here on CML that he did not think was a valid test or example of a new technology?
I will be the first person to say that it is not for everyone or every situation. I did some 2D greenscreen tests, and for VFX usage where tracking targets are needed on the backgrounds, yes, there were issues that I could not overcome. For a VFX guy like you this is most likely where your apprehension originates, as you do not see a use because of the nature of the work you do.
Contrary to your opinion there has been enough interest in V3's technology that manufacturers like Angenieux and Element Technica are working with Vision 3 to allow greater ease of use. I believe that there could be some interesting developments in 2D -3D conversion space using the same underlying principles.
CML disclaimer:
It is my understanding that I shot most of the side-by-side comparisons (the helicopter shot). I have been paid by V3 on one additional occasion to evaluate the technology and offer my opinion.
Gary Adcock
Studio37
HD & Film Consultation
Chicago, USA
>> a 3rd couldn't see properly stereo content, and a second 3rd were not able to "understand" and correct the stereo problems in the test.
One answer to the question, "Why is there so much bad 3D conversion being done?"
Another answer: a past employee now working at a competing facility doing three major films was outlining (only) their process to me this morning, and I said, "But of course, you know that doing it that way isn't scalable with semi-skilled freelancers?" "Oh, yeah, and that's why they're totally behind; the quality is all over the map. Also with the roto, because it's outsourced. But I get to work 16-hour days, with time and a half and double time, whereas when I was working for you we only got a little bit of overtime."
Sigh.
Tim Sassoon
SFD
(80 employees as of yesterday)
Santa Monica, CA
+++++++++++++++++++++++++++++++++
Tim Sassoon wrote : "...But I get to work 16-hour days..."
I can't even imagine the kind of torture it must be to work with stereo footage that long. The strain and the effort definitely reduce my capacities compared to monoscopic work. I thought it would get better with time, but it doesn't, really. Are there numbers anywhere about productivity in stereo? I guess it's tough to compare, but the way it plays with (not to say other words) your brain really makes me less willing to stay long hours in front of the screen. At the moment nobody (except maybe the historic stereo players) really charges the real cost, because everybody wants to be in the game, but in post there are a number of factors that make it seriously more expensive (to be done correctly).
Cedric Lejeune
La Madeleine, France
I don't know anything about any earlier or later version of Chris' V3 system, but I did see the footage Chris showed at the Santa Fe 3D seminar last week with Geoff. I think pretty much all of us were very impressed. No wobble that I noticed, though I did see that in the single-camera version. The stereo footage, however, was more rounded and just felt more natural and realistic than typical stereo. It definitely looked better than the standard 3D version of the same footage shot with the V3 scan off. I thought it was a fair side by side comparison and we all preferred the V3.
Leonard Levy, DP
San Rafael, CA
http://www.leonardlevy.net
home 415-453-2373
cell 415-730-6938
>> I thought it was a fair side by side comparison and we all preferred the V3.
I'm somewhat curious who the investors in this technology are/have been, if it's not secret. Obviously they've been very patient, and probably enthusiastic.
Tim Sassoon
SFD
Santa Monica, CA
>> Which is why S3D doesn't add much to console/PC games.
On the other hand, this slow "Interaxial Distance Modulation" idea could be applied to give 3D SFX more roundness, and make 2D->3D conversions less cardboard-like.
Jean-Pierre Beauviala
Aaton / France
The 4GB upload is taking longer than expected; my ISP keeps disconnecting me, as does our own server.
It'll get there eventually.
And guys, if you change the subject change the subject line.......
Oh, and this V3 2D stuff does have post and animation options.
Cheers
Geoff Boyle FBKS
Cinematographer
EU Based
Skype geoff.boyle
mobile: +44 (0)7920 143848
www.gboyle.co.uk
I said:
> > I believe this technology shows us how little we still know about spatial perception.
Eric Deren responded:
>> I wouldn't say it shows that; it's fairly well known that a moving point of view increases spatial and depth perception.
Eric, I said in my previous email that I have an intuitive understanding of how "scanning" could increase the depth perception of 2D footage. It simulates what our eyes are constantly doing. But they do it with both eyes in sync.
Why introducing an unnatural vertical oscillating parallax would increase spatial perception is not explained by any model I am aware of. Maybe you are right and this really doesn't work. Like I said, I was getting a bit seasick watching the 2D examples.
On the other hand, if Geoff tells me the stereo looks better with this system on, this spikes my interest. Before Geoff's comment, I discounted the tech as fairly crazy, given the amount of time and work my company alone has put into stabilizing and getting rid of vertical offsets in stereo footage.
I have also not yet fully understood exactly what motion it is that V3 is introducing - maybe Chris Mayhew could point us to a diagram that shows the various scanning modes.
I will make sure I take a look at IBC, if only to increase my limited understanding of 3D.
Lin Sebastian Kayser, CEO
www.iridas.com
>> Before Geoff's comment, I discounted the tech as fairly crazy,
In the single lens approach it is fairly limited, I could see some uses for it but not a lot.
In a full 3D rig it genuinely shocked me.
I rescheduled the whole of that day's workshop as a result.
I kept trying not to see the effect, but I just couldn't help it: the depth was better, the roundness was better, it was less tiring, the reflections worked well, the focus pulls worked well. It's not often I see something and think "I wish I had some spare cash, because if I did I'd invest in this", but I did here.
The big upload is still going on, it says 4 hours left but as I keep having to restart it well who knows?
I've never tried an upload this big before, and I guess it'll be a huge download as well. At least I have a download speed of 7Mb where my upload is only 700K, and I pay almost double the normal rate to get that fast an upload here!
Cheers
Geoff Boyle FBKS
Cinematographer
EU Based
Skype geoff.boyle
mobile: +44 (0)7920 143848
www.gboyle.co.uk
Try this:
Close one eye and don't move your head.
Try to stare at a single point and don't move your head.
The image will eventually disappear but before it does note the amount of depth you are seeing.
To me it looks quite flat.
Now without moving your head allow your eye to scan the scene normally.
Looks like 3D to me.
V3?
Leonard Levy, DP
San Rafael, CA
Geoff Boyle wrote:
>>the roundness was better
That's the point that makes me raise an eyebrow. If you'd said "occlusion revelations were better", I'd say: "Sure, that's what it's all about, it changes that sub-pixel fringe, it changes that relative parallax."
If you mention roundness, it means the additional depth cueing is on object shapes, not on relative depth placement. It's on the object textures, not on occlusion revelations.
Which is the very thing where, to my understanding, an intra-ocular parallax would have no visible effect.
The depth from my nose to my cheek will not be enough to show off, until the background starts to REALLY wobble.
Hmm... we definitely need someone from Berkeley on this one.
/Bernard Mendiburu/
Stereographer, Consultant
Los Angeles
>> On the other hand this slow "Interaxial Distance Modulation" idea could be applied to give 3D SFX more roundness, and make 2D-3D conversions less cardboards like.
That particular effect is a problem of technique or lack thereof, and larding on some "IDM" wouldn't improve things because the data is being incorrectly represented. OTOH, we often suggest that since we have a full CGI representation of scenes in S3D conversion, that we add a little camera creep to provide a richer spatial sense, but rarely are given license to do so.
Tim Sassoon
SFD
Santa Monica, CA
>> OTOH, we often suggest that since we have a full CGI representation of scenes in S3D conversion, that we add a little camera creep to provide a richer spatial sense, but rarely are given license to do so.
I don't think I've ever asked you this question, but since you sort of brought it up....
One of the many advantages full CG productions have over live action photography with respect to stereoscopy is the ability to use more than one set of cameras in the scene and photograph specific objects differently to solve specific problems (roundness being one), compositing these different elements together for the final frame. Is that a technique that is viable in a conversion scenario? I would assume that it is, given that most major objects in the scene are rotoscoped already and thus available for a specific camera treatment. Whether it's practical or not is another matter (it would seem that paint backs might be a problem, one that wouldn't exist in a CG production), hence why I'm asking whether it's commonly done.
Mike Most
Colourist/Technologist
Next Element by Deluxe
Burbank, CA.
>> Is that a technique that is viable in a conversion scenario? I would assume that it is, given that most major objects in the scene are rotoscoped already and thus available for a specific camera treatment.
This all started with ILM's use of projection mapping for matte paintings more than a decade ago (the establishing aerials of Naboo being classic examples). So the answer is yes. Typically, provided one can solve the other problems you mention, one can travel up to about 30 degrees off the original axis before stretching becomes too severe.
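A back-of-envelope intuition for the ~30-degree limit Tim mentions (my own rough model, not his math): texture reprojected onto geometry stretches roughly as 1/cos of the off-axis angle, so the penalty grows slowly out to about 30 degrees and then climbs steeply:

```python
import math

def offaxis_stretch(theta_deg):
    """Rough foreshortening factor when reprojecting an image onto
    geometry viewed theta_deg off the original camera axis: the
    texture must be stretched by roughly 1/cos(theta). A heuristic,
    not the actual reprojection math any facility uses."""
    return 1.0 / math.cos(math.radians(theta_deg))
```

Under this heuristic, 30 degrees costs only about a 15% stretch, while 60 degrees already doubles every texel, which is where the stretching "becomes too severe."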
Now of course the same technique, of reprojecting an image back onto a similar surface is also being used in environmental design:
http://www.youtube.com/watch?v=8IICGkOtJ9E
Tim Sassoon
SFD
Santa Monica, CA
Lin Sebastian Kayser writes:
>> Why introducing an unnatural vertical oscillating parallax would increase spatial perception is not explained by the models I am aware of. Maybe you are right and this really doesn't work. Like I said, I was getting a bit of seasick watching the 2D examples.
Here's my guess, and let's see if I can come up with the language to do it justice:
What seems to be going on is a bit of "strategic disorientation" or "perceptual pump-priming." It subtly pulls the rug out from under our psychovisual expectations, adding just enough entropy or looseness for the mind to fill in what it would expect from a natural stereoscopic vision experience.
Apart from our perceptual mechanisms and neurological computations as they report the objective world to us, the mind seems to have huge potential for whole-cloth "reality creation." Dreams and mass beliefs would be two fairly gross examples.
The mind fills in gaps, projects "the world" outward based on quite a narrow band of perception of what's really out there, and has awesome capacities. So it's not surprising that it could create a synthesized depth experience based on expectations, when given just a little shove out of its normal rut.
My guess is that Geoff's positive experience was the result of this added entropy creating the psychological (not visual!) equivalent of a diffusion filter. In other words his mind was less locked into the literal, mechanical aspects of stereoscopy, so it could provide its own "smoothing effect" -- ignoring to some extent the mechanistic shortcomings of the system and therefore creating an experience more like natural vision. (Geoff... does that sound right?)
The problem with this approach is that this is a psychovisual effect, not "improved stereoscopy" per se. Each of our minds works differently, so it's a bit of a crapshoot and the results will vary from person to person.
Dan Drasin
Producer/DP
Marin County, CA
Gary Adcock wrote:
>> It is my understanding that I shot most of the Side by Side comparisons (the helicopter shot)
Gary,
Would you mind giving us the details of the shooting parameters ?
Especially the focal length and IOD used for each pass, with and without the v3 systems.
Any chance the v3 affects the actual IOD ?
/Bernard Mendiburu/
Stereographer, Consultant
Los Angeles
Geoff Boyle wrote:
>> Throw your preconceptions away and look at what it does now!
So I was looking forward to doing a blind test with some guinea pigs at work this week (without telling them what to look for) but the burned in text that says "v3D" on some clips kind of makes that hard. Obviously a blind test requires that there be no obvious labelling which tells you which clip you are viewing.
Any chance a version of the short clips minus the burned in text could be uploaded as well?
Rob Engle
3D vfx guy
Los Angeles, CA
Rob Engle wrote:
>>So I was looking forward to doing a blind test with some guinea pigs at work this week
Did these guinea pigs get into the business working on G-Force or were they with you before Hoyt's show? (Sorry - had to ask.)
I've been working with some humans (well, camera crew) who worked on that show.
Mark H. Weingartner
LA-based DP & VFX Supervisor
http://www.showreelsonline.com/Schneider_Entertainment_Agency/Mark_Weingartner/
So I am still trying to see the difference.
>>The "side-full-0.mov" does not look that different to me. I am watching on a 22" Zalman passive >>monitor.
Sorry to say, I gave it a very fair chance, and this is my opinion only, but it did nothing for me.
Nothing looks different as far as depth goes that could not be done when shooting with *normal* stereo 3D cameras, with the same i/a and the system turned off. In fact, what it has introduced is an unsettling wobble that can clearly be seen on the railings in the Newseum piece at the start, and the vertical judder on the window shadow on the far wall at about 1:12 into the presentation.
What has to be asked is: what happens in a feature-length presentation with such an effect? Will it make people squint and remove their glasses to soothe their eyes?
What will happen in fast-moving scenes? What will happen when a director wants shallow depth of field with everything defocused in the background (yes, it works in 3D when done that way)?
Is it worth having unsteady, "un-cinematic" shooting for all stories?
Again, for me it's a no-go, and the reason I've ventured into answering here, despite the fact that it may not be a welcome analysis, is that I think it was the right thing to do.
I can't be at IBC, but I did manage to see the QuickTimes (2 minutes of the full Newseum piece, all of the helicopter ones, and the full Newseum piece on YouTube).
Regards
Clyde,
stereo rebel,
Real Vision FZ LLC Dubai UAE
www.realvision.ae
>> What will happen when a Director want shallow depth of field with everything defocused in the background (yes it works in 3D when done that way)
It doesn't work for everyone. Me, for instance.
I find the "eyes sharp, ears soft" style works poorly in 3D. In fact, I find it to be one of the dead giveaways in 2D to 3D conversions when the 3D conversion part is an afterthought and the 2D part was shot, shall we say, conventionally.
Should 3D go mainstream, I think creatives need to find additional ways to narrow the audience's attention besides keeping just one thing in the frame in focus. Like, say, with lighting, camera moves, and framing.
Bob Kertesz
BlueScreen LLC
Hollywood, California
I recently finished a project that was notable for going the other direction: Everything was shot 3D and with 3D in mind, meaning generally deep DOF, with the intent of creating a narrower DOF in post on some shots for the 2D release. The director's mantra was "To hell with 2D, we'll fix that in post..."
And, no, personally, I don't think shallow DOF 3D works in most cases...
Tom Tcimpidis
L.A. DIT/VC/Whatever
>>And, no, personally, I don't think shallow DOF 3D works in most cases...
It's definitely a case-by-case basis. Some of it works, some of it doesn't. It works more often in the background than it works in the foreground. But even some blurry foreground elements work.
It makes sense that the general rule, before on-set stereo preview, was deep DOF for S3D, because you'd never know if you were going to be screwed by the content. But with today's technology we are able to return to the time-tested rule: If it works, it works, "rules" be damned.
Eric Deren
Dzignlight Studios
VFX & Animation Design
Atlanta, Toronto
>>I find the "eyes sharp, ears soft" style works poorly in 3D. In fact, I find it to be one of the dead >>giveaways in 2D to 3D conversions when the 3D conversion part is an afterthought and the 2D part >>was shot, shall we say, conventionally.
Creative use of depth of field in 3D is quite different from follow-focus and extremely shallow DoF as used in 2D (that's my take on it). I call it the "circle of isolation": http://realvision.ae/blog/2010/07/circle-of-isolation-shooting-good-stereoscopic-3d-for-live-sports/
Scroll midway through the fluff in that article to get to the section. It would mean using hyperfocal distance to your advantage in 3D.
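For reference, the hyperfocal arithmetic being alluded to is the standard H = f²/(N·c) + f; the circle-of-confusion value below is my assumed Super 35 figure, not one stated in the thread:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.025):
    """Standard hyperfocal distance H = f^2 / (N * c) + f, in mm.
    coc_mm = 0.025 is an assumed Super 35 circle of confusion."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# e.g. the shoot's 20 mm focal length at its f/6.7 stop
h = hyperfocal_mm(20.0, 6.7)
```

Focused at that distance (roughly 2.4 m here), everything from about half that distance to infinity is acceptably sharp, which is the deep-focus style that keeps the whole stereo volume resolvable.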
Regards
Clyde
stereo rebel,
RealVision, Dubai
www.realvision.ae/blog
> If it works, it works, "rules" be damned.
There have to be two Erics!
The one who posted the message I quoted above, and his evil twin who doesn't open his eyes!
Cheers
Geoff Boyle FBKS
Cinematographer
EU Based
Skype geoff.boyle
mobile: +44 (0)7920 143848
www.gboyle.co.uk
I've found that even when something is shot at a small stop and appears to have a reasonable DOF in 2D, when it's viewed in 3D it's obvious where the point of focus is, because it's the only place that appears critically sharp.
This gives the impression of a shallower DOF.
Paul Hicks | Director of Photography
m | +61 (0)413998815
Copyright © CML. All rights reserved.