Cinematography Mailing List - CML

Single-lens - Single-camera 3D Test

I've been asked to test a 3D lens system that adapts to any existing single-lens, single-camera system (recording both eyes to one sensor).

The device abuts the front of the taking lens. The R & L eye images are arranged above and below on the sensor (although the engineers say that if it needs to happen, the images could be parked side-by-side).

What camera system would make the best use of its sensor to get the best resolution with the least amount of wasted sensor area in the above-mentioned configuration?

The feature project this system is being considered for needs a 2D filmout plus HD deliverables in 3D and 2D, as well as a Digital 3D master for theatrical projection.

In looking at RED, two 2K images top and bottom leave a lot of wasted space in the width. In the height, there are some extra pixels to work with, since the sensor is 2304 pixels vertically. But to enlarge the frame horizontally to take advantage of the extra vertical pixels means uneven scaling when outputting to HD, which may be worse than just wasting pixels and going one-to-one.
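To put numbers on the waste (an illustrative sketch, assuming the 4096 x 2304 raster cited above; the frame sizes are for illustration only):

# Sensor-utilization arithmetic for two stacked eye images.
# Assumes the 4096x2304 raster cited above; illustrative only.

SENSOR_W, SENSOR_H = 4096, 2304

def used_fraction(eye_w, eye_h):
    """Fraction of the sensor covered by two stacked eye frames."""
    return (2 * eye_w * eye_h) / (SENSOR_W * SENSOR_H)

# Two native HD frames, one-to-one pixels:
print(f"2 x 1920x1080: {used_fraction(1920, 1080):.1%} of sensor used")

# Two 16:9 frames using the full 2304-pixel height (1152 lines each):
print(f"2 x 2048x1152: {used_fraction(2048, 1152):.1%} of sensor used")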

Anyhow, in doing the math, the ARRI D-21, using ARRIRAW, shooting two 1920 x 1080 frames top and bottom, seemed like the best choice for this situation, but I'd be curious to get some feedback from the list.

Also, it's not my problem to solve, but just curious: On the post side, is there a way to get right- and left-eye information from a single clip into a dual (stereoscopic) timeline, so that right and left eye can be handled discretely?

Thanks in advance for the help.

Jacques Haitkin DP
San Francisco


What camera system would make the best use of its sensor to get the best resolution with the least amount of wasted sensor area in the above-mentioned configuration?

It's the Phantom HD Gold. The camera has a square sensor with a resolution of 2048x2048. If you stack two frames and shoot a 2:1 frame, you get 2048 x 1024 (2kx1k). If you crop to 1.85, you get two 1894x1024 frames, and if you crop to 1.78 (16x9), you get two 1823x1024 frames.
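Those widths are just width = height x aspect on the 1024-line half frames; a quick illustrative check (note it uses 1.78, as above; exact 16/9 would give 1820):

# Per-eye frame sizes for a 2048x2048 sensor split into two stacked
# 1024-line frames (a quick check of the numbers above).

SENSOR = 2048
eye_h = SENSOR // 2                          # 1024 lines per eye

for name, aspect in [("2:1", 2.0), ("1.85", 1.85), ("16x9", 1.78)]:
    eye_w = min(SENSOR, round(eye_h * aspect))
    print(f"{name}: two {eye_w}x{eye_h} frames")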

Most primes and some zooms will cover this image area. I'm sure someone can write a program to extract the two frames in post and turn them into left eye/right eye. No other camera has a sensor of this shape, so unless you plan on turning a camera on its side you will not get a better option.
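A minimal sketch of such a splitter (assuming the footage has already been converted from Phantom RAW to ordinary image files; the filenames are hypothetical):

# Split an over/under stereo frame into separate eye images.
# Assumes debayered/converted image files, not native .cine RAW.
import cv2

def split_over_under(src_path, left_path, right_path, left_on_top=True):
    frame = cv2.imread(src_path)
    h = frame.shape[0] // 2                  # midline of the stacked frame
    top, bottom = frame[:h], frame[h:]
    left, right = (top, bottom) if left_on_top else (bottom, top)
    cv2.imwrite(left_path, left)
    cv2.imwrite(right_path, right)

split_over_under("stacked_0001.png", "L_0001.png", "R_0001.png")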

Jacques, sorry we've been playing phone tag. Let's try to speak later today.

Mitch Gross
Applications Specialist
Abel Cine Tech


Mitch wrote:

>>Most primes and some zooms will cover this image area. I'm sure someone can write a program to extract the two frames in post and turn them into left eye/right eye. No other camera has a sensor of this shape, so unless you plan on turning a camera on its side you will not get a better option.

SpeedGrade and FrameCycler have an option to process single-frame stereo, either side-by-side or above/below. Since we also support the Phantom RAW files natively, you could use our tools to play back directly, correct parallax (and any geometric offsets), and colour correct. We demoed it at IBC with Phantom footage shot on a single-lens stereo setup.

Cheers,

Lin Sebastian Kayser
Chief Executive Officer
IRIDAS - www.iridas.com
Tel: +49 89 330 35 142
Fax: +49 89 330 35 143


Mitch, I couldn't agree more, especially for Jacques' project. :)

Iridas would be my choice for an on-set viewing system, as most video assists (even QTake) don't have that option.

Dane Brehm
DIT: Phantom Tech
SF & LA


Mitch Gross writes:

>>I'm sure someone can write a program to extract the two frames [over-and-under] in post and turn them into left eye/right eye.

If high-end post is beyond your means, this would be a piece of cake to do in Final Cut Pro, and probably in most NLEs, assuming you've got enough CPU power to handle the data stream (a proxy approach might be preferred here). FCP details upon request.

Viewing 3D in real time with FCP is beyond my present expertise, but I'm sure it can be done.

Dan Drasin
Producer/DP
Marin County, CA


Mitch Gross:

>>It's the Phantom HD Gold. The camera has a square sensor with a resolution of 2048x2048. If you stack two frames and shoot a 2:1 frame, you get 2048x1024 (2kx1k). If you crop to 1.85, you get two 1894x1024 frames, and if you crop to 1.78 (16x9), you get two 1823x1024 frames.

Thanks, Mitch. I did consider the Phantom HD. Look-wise, it's great, with its 14-bit RAW output and excellent DR. Love that.

I'm about to show my ignorance, but what the hey: Don't we eventually require 1920 x 1080 frames (preferably without scaling) to get 3D and 2D HD masters?

And the film I'm testing for has multi-unit work. I'd need a minimum of 6-7 cameras, maybe more at times, if 1st, 2nd, underwater and aerial units are concurrent. Plus backup. Budget is always an issue. Also, is the Phantom as formidable for remote, aerial, and underwater work as conventional production cameras? Does it require field techs? Also, being a "specialty" camera, isn't its workflow different enough to limit choices in post production when dealing with more than 100 hrs of footage?

>>Most primes and some zooms will cover this image area.

Some zooms...? Is the frame physically larger than S35? Will the Optimos (24-290, 15-40, 17-80, 27-86) cover the field? If not, that's huge; there's no time to shoot with primes only. Another issue for me is that, in a perfect world, I'd prefer to shoot 3D with a 2/3-inch sensor, for greater DOF, not S35. But under these circumstances, storing R & L eye on one sensor, that's not going to happen. And if the Phantom is physically larger than S35, it means longer focal lengths and less DOF.

>>I'm sure someone can write a program to extract the two frames in post and turn them into left eye/right eye.

Good to know. Thanks!

Jacques Haitkin DP
San Francisco


>>Don't we eventually require 1920 x 1080 frames (preferably without scaling) to get 3D and 2D HD masters?

I don't know how this gadget is configured, but I presume from the comments that it's relaying normal two-lens exit projections into an over/under configuration without changing the image circle.

I don't want to get on Mitch's bad side, but assuming you can line up the cameras and recorders, wouldn't the D-21 shooting ARRIRAW be a much more practical choice than the Phantom? It has a 24x18mm imager producing a 2880x2160 raster, so any S35 lens should cover fine, and theoretically you'd get 1080 frame height per eye, though that would assume a perfect optical division between the two images, which is unlikely. I'd also have much more confidence in the colourimetry and reliability, and you have back-end workflow from S.two or Codex.

http://www.arridigital.com/technical/recording-options

Tim Sassoon
SFD
Santa Monica, CA


I absolutely agree about Phantom HD for spherical lens work, using the square sensor to its maximum advantage.

If anamorphic is an option then it's worth considering a native 16:9 camera (F35, Genesis, RED, Phantom 65, etc.) and laying the images side by side on the sensor. You would crop the image top and bottom to give you 1.2:1 before desqueeze, but only a little more than the crop on the Phantom HD to get 1.85 from a spherical lens.
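Worked numbers for that layout (an illustrative sketch, assuming a 1920x1080 recording raster and a 2x anamorphic squeeze; both assumptions are mine, not Richard's):

# Side-by-side anamorphic geometry on a 16:9 raster (illustrative).

RASTER_W, RASTER_H = 1920, 1080
SQUEEZE = 2.0

eye_w = RASTER_W // 2                      # 960 px per eye, side by side
crop_h = round(eye_w / 1.2)                # crop to 1.2:1 -> 800 lines
final_w = round(eye_w * SQUEEZE)           # desqueezed width per eye
print(f"per eye on sensor: {eye_w}x{crop_h} (1.2:1)")
print(f"after 2x desqueeze: {final_w}x{crop_h} "
      f"({final_w/crop_h:.2f}:1)")         # 1920x800, 2.40:1 scope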

Disclaimer: I've never done this or even seen it done, but it's certainly possible. Maybe there's someone out there with practical experience?

Richard Bradbury
Focus Puller
London, UK


>>...wouldn't the D-21 shooting ARRIRAW be a much more practical choice than the Phantom?

With the D-21 you would crop 33% of the sensor area. With the Phantom HD you would crop 11%.

However, as Tim points out, the D-21 would give you (theoretically, although perhaps not in practice, due to optical limitations) full-raster 1920x1080, whereas the Phantom would give you 1820x1024. Scaling this raster to HD may be a greater quality sacrifice than the extra crop on the D-21. A test would have to be done to evaluate the two, particularly given that a 2D filmout is also on the table.
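The crop figures fall out of simple area arithmetic (an illustrative check of the percentages above):

# Fraction of sensor area cropped when two eye frames are stacked.

def cropped_fraction(sensor_w, sensor_h, eye_w, eye_h):
    used = 2 * eye_w * eye_h
    return 1 - used / (sensor_w * sensor_h)

print(f"D-21 (two 1920x1080 of 2880x2160): "
      f"{cropped_fraction(2880, 2160, 1920, 1080):.0%} cropped")
print(f"Phantom HD (two 1820x1024 of 2048x2048): "
      f"{cropped_fraction(2048, 2048, 1820, 1024):.0%} cropped")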

Richard Bradbury
Focus Puller
London, UK


>>Scaling this raster to HD may be a greater quality sacrifice than the extra crop on the D-21. A test would have to be done to evaluate the two, particularly given that a 2D filmout is also on the table.

Absolutely. I honestly could not say which would yield a "better" image and neither could anyone else until they tested all the way to final deliverables. I will say that if it comes down to which camera is more portable and friendly to uses like underwater, handheld or Steadicam, then Phantom HD Gold is the clear winner.

Mitch Gross
Applications Specialist
Abel Cine Tech


I'm entering this discussion very late, but some months ago I corresponded with Kommer Kleijn who owns a set of ARRIVISION 3D over/under lenses (circa 1985 design) and who used them on a Phantom HD, reportedly with good results. The ARRIVISION 3D lenses were originally designed for 35mm 4-perf film cameras (with a horizontal septum so each eye is two-perfs high -- the standard over and under format of the day). I'm taking the liberty of excerpting a few of his more specifically relevant remarks:

"... I have also used them [the ARRIVISION lenses] recently on a Phanton HD (because of its large chip size) and had a good digital result. The Phantom HD and Dalsa are the only digital cameras that have a chip high enough to catch the image produced. Even the D21 has a chip that is not high enough and Dalsa is gone so at this time Phantom HD is the only digital option."

And this:

> GL question: What resolution did you achieve for each image in the stereo pair?

Kommer replies:

"Scope: 713 x 1704
1.85: 713 x 1319
16/9: 713 x 1268 (= close to 720p)"

end of quotes

For more info about his Phantom HD experience, I suggest you contact Kommer directly because he knows of what he speaks and is a very nice guy: Kommer Kleijn <kommer@kommer.com>

Greg Lowry
Scopica Inc. | Scopica 3D
Vancouver


>>Scope: 713 x 1704
>>1.85: 713 x 1319
>>16/9: 713 x 1268 (= close to 720p)

That is interesting, but please note that these lenses were designed to project an image onto a 4:3 Silent Aperture image area (about 24mm x 18mm). The available real estate of the Phantom HD chip is considerably larger, at 25.6mm x 25.6mm. With a lens that can project this larger coverage, much higher resolutions are possible on the Phantom HD Gold.
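In pixel terms (a rough sketch, assuming the 25.6mm / 2048-pixel, i.e. 12.5-micron, pitch implied by the figures above):

# Convert lens coverage in mm to Phantom HD pixels (illustrative).

PITCH_MM = 25.6 / 2048                     # 0.0125 mm per pixel

def mm_to_px(mm):
    return round(mm / PITCH_MM)

# Silent Aperture coverage cited above:
print(f"~24 x 18 mm -> {mm_to_px(24.0)} x {mm_to_px(18.0)} px")

# Full Phantom HD chip a larger-coverage lens could use:
print(f"25.6 mm square -> {mm_to_px(25.6)} x {mm_to_px(25.6)} px")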

Mitch Gross
Applications Specialist
Abel Cine Tech


Mitch Gross:

>>That is interesting, but please note that these lenses were designed to project an image onto a 4:3 Silent Aperture image area (about 24mm x 18mm).

Actually, I asked Kommer about that, and he indicated that they were designed for the anamorphic/scope aperture, which is, as you know, almost square.

Unless I missed a key post, the one thing that seems to be missing from this discussion is any specifics regarding the lenses being used for the test. No two over/under 3D lens designs from different manufacturers had exactly the same format specs during the '80s 3D bubble. Clearly, the maximum spatial resolution that can be used with the Phantom HD depends on the lenses; the ARRIVISION 3D lenses serve only as an example. (An interesting side note: those lenses are T6.3!)

Greg Lowry
Scopica Inc. | Scopica 3D
Vancouver


Greg Lowry wrote:

>>GL question: What resolution did you achieve for each image in the stereo pair?
>>Kommer replies: 16/9: 713 x 1268 (= close to 720p)

Having been Kommer's assistant on two 3D ARRIVISION shows and one 3D STEREOVISION show, let me add that due to its optical construction, the separation between the left image (upper) and the right image (lower) is huge (about 0.8mm), which is a lot more than the interframe. Depending on the lens design, this could be drastically reduced, and vertical resolution therefore increased.
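For scale, at the Phantom HD's roughly 12.5-micron pitch mentioned earlier in the thread, that separation alone costs about 64 lines (a quick sketch):

# Vertical resolution cost of the ~0.8mm L/R separation on the
# Phantom HD, assuming its 25.6mm / 2048px pixel pitch.
pitch_mm = 25.6 / 2048
gap_px = round(0.8 / pitch_mm)
print(f"~{gap_px} sensor lines lost to the image separation")   # ~64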

Jacques Haitkin wrote:

>>The device abuts the front of the taking lens. The R & L eye images are arranged above and below on the sensor (although the engineers say that if it needs to happen, the images could be parked side-by-side).

Do I understand this right, that this stereo device will be mounted as an optical addition in front of the prime lens, NOT between two prime lenses and the camera body?

Could it be the same design for different focal lengths and even zoom lenses?

Kind regards

Tim Mendler
vfx-supervisor / stereographer
South-France

P.S.: Kommer definitely is a nice guy and knows of what he speaks.

Disclaimer:

I do not work for Kommer; I did work with him and will be honoured to do so again.


Tim Sassoon writes:

>>...that would assume a perfect optical division between the two images, which is unlikely.

Mmm... Thanks, Tim! Something I'll be vigilant about (among the other 75 or so optical and physical issues to investigate), since we need the full sensor -- IF we use the ARRI.

No matter the sensor, we could be forced into scaling anyway because of the need to reposition (repo) shots often, yes?

Question: Is it true that in 3D post, EVERY 3D live-action shot could be a potential candidate for VFX fixes?

Is it wise to just accept that fact and budget for it liberally, or is there a way for DPs to make it part of our stereographic technique to get usable-without-fixes footage by nuancing the system, accepting (or embracing) its limitations, and finding our "creative space" within those limitations?

Or do we only worry about the creative and forget things we can't control, like the inherent unpredictability of live-action filming, which inevitably creates R/L eye discrepancies because they happen in reality? In that case, post-production fixes are REQUIRED -- in the same way our brains work all the time to seamlessly "clean up" real-world anomalies.

Jacques Haitkin DP
San Francisco


>>Question: Is it true that in 3D post, EVERY 3D live-action shot could be a potential candidate for VFX fixes?

Pretty much, if the goal is to produce a nice, clean 3D show, at least until we abandon binocular capture, which will eventually happen.

>>"Is it wise to just accept that fact and budget for it liberally, or is there a way for DPs to make part >>of our stereographic technique to get usable-without-fixes footage by nuancing the system and >>accepting (or embracing) its limitations and find our "creative space" within those limitations?"

This is a major reason why some shows have embraced post conversion. 3D then becomes something you worry about later, and it's completely off the production budget. Granted, it then becomes a significant post line item, but it's tendered at a fixed bid, and it gets worked by the vendor until it's right. The more VFX there is in a show, the more attractive this option becomes, because one is going to be ripping the shots apart anyway.

I read a paper at the fall SMPTE conference describing an assist system to ease post conversion and make it more accurate; it would be a sort of halfway point between shooting 2D and 3D, but could be used with literally any camera.

>>"Or, do we only worry about the creative and forget things you can't control, like the inherent >>unpredictability of live-action filming which inevitably creates R/L eye discrepancies, because they >>happen in reality. So post-production fixes are REQUIRED -- in the same way our brains work all >>the time to seamlessly "clean up" real-world anomalies."

I would agree with all that, and you can just shoot and clean up later whatever bothers you, but it might be expensive, perhaps unnecessarily so. IMHO "Avatar" did themselves no favours with the lighting. You really want to be careful of high-contrast vertical edges which are significantly in front of or behind (if converged) the screen plane, for instance. Polarized systems like Real-D are not 100% efficient, and inter-eye leakage results in visible "ghosting" of divergent high-contrast images (in exhibition -- separate from the polarization artifacts of mirror rigs).

As an example of creative solutions to that particular issue, on "Magnificent Desolation" (Playtone/IMAX), which featured re-creations of the Apollo Moon landings, the astronauts were shot on a narrow set against greenscreen curtains. They were of course wearing white suits against what needed to be an inky-black sky, which would ghost severely, so DP/Stereographer Sean Phillips shot towards the light (stereo VistaVision using the ILM Beaucams and Leica lenses on the Hines rig) with flares carefully cut, but we later overlaid CGI flaring back into almost every shot to lift the black level. You cannot really shoot coherent flares in binocular stereo, BTW.

Another answer to your question is that you should have a stereographer on set (could be yourself) to catch these issues and come up with a coherent shoot/post plan to deal with them. Stereo on a feature film level does require the same coordination between shoot and post as VFX.

Tim Sassoon
SFD
Santa Monica, CA


>>Question: Is it true that in 3D post, EVERY 3D live-action shot could be a potential candidate for VFX fixes?

Since the time feature films went digital intermediate, IMHO: absolutely yes!

You can try to align the cameras on a mirror rig perfectly on set, but having experienced this (on a theme-park 4D show), you wouldn't want to do it on a feature-film production! Every crew member on set will certainly do their best, but you should be prepared for some degree of "post-fixing" (time is money, and post time is often less expensive than set time, depending on the size of the crew kept waiting...).

On the other hand, isn't that what we already do with DI in 2D, to a certain level?

A lot of DI tools have entered the 3D world, and some of the "3D fixing" probably won't need to go to VFX compositing....

Kind regards

Tim Mendler
vfx-supervisor / stereographer
South-France


Tim Sassoon writes:

>>Pretty much, if the goal is to produce a nice, clean 3D show, at least until we abandon binocular capture, which will eventually happen.

Wow, Tim, your last post was most enlightening. Thank you!

Jacques Haitkin DP
San Francisco


>>If anamorphic is an option then it's worth considering a native 16:9 camera (F35, Genesis, RED, Phantom 65, etc.) and laying the images side by side on the sensor.

Anamorphic makes sense in principle and may be viable. But because single-lens 3D acquisition already has a big optical component that must be dealt with, anamorphic could definitely add complexity (and development costs) to a situation whose main goal is to simplify the process.

Then again, if anamorphic is what makes it work, hallelujah!

Bottom line -- in order for 3D to really go mainstream, it has to be streamlined to the point where, on set, it's as production-friendly as present series TV, sports and news, and there's an automated post pipeline like we have now. It won't float with an earthquake hitting the industry.

Jacques Haitkin DP
San Francisco


>> Anamorphic makes sense in principle; and may be viable.

Hmmm, I doubt it. I'd be extremely worried about whether the anamorphic element is precise enough (and matched) that stereo image alignment isn't all screwed up non-linearly when it's unsqueezed, leading to expensive remedial post work. Electronic squeezing after the sensor is fine because it's done in a regular way with a partially reversible algorithm. 3ality, among others, has proven that side-by-side 2:1-squeeze HD-SDI recording and transmission works very well.
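For clarity, this is all that electronic side-by-side squeeze amounts to (a minimal sketch, assuming OpenCV and ordinary image files; the filenames are hypothetical):

# Pack two HD eyes into one HD frame via a 2:1 horizontal squeeze,
# as done for single-link HD-SDI stereo recording/transmission.
import cv2
import numpy as np

def pack_side_by_side(left, right):
    h, w = left.shape[:2]
    half = w // 2
    # 2:1 horizontal squeeze of each eye (this is the lossy step).
    l2 = cv2.resize(left, (half, h), interpolation=cv2.INTER_AREA)
    r2 = cv2.resize(right, (half, h), interpolation=cv2.INTER_AREA)
    return np.hstack([l2, r2])             # one 1920x1080 stereo frame

left = cv2.imread("left_0001.png")
right = cv2.imread("right_0001.png")
cv2.imwrite("sbs_0001.png", pack_side_by_side(left, right))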

It's examples like this that remind me that lenses are some of the last and most beautifully made "tubes" (in a photonic sense, especially if using the British term "valves") still in serious use. And why people want tubes in preamps; non-linear distortion applied judiciously can make audio at least more beautiful.

The age of diffraction is almost upon us, and while the machines might be more efficient, they won't be as cute.

Say, are you using Zoran Perisic's Z3D dingus?

Tim Sassoon
SFD
Santa Monica, CA


>>Say, are you using Zoran Perisic's Z3D dingus?

Say what? There's a name from the past...

Jacques Haitkin DP
San Francisco


>>I'd be extremely worried about whether the anamorphic element is precise enough (and matched) that stereo image alignment isn't all screwed up non-linearly when it's unsqueezed...

So, optical anamorphic: bad, digital anamorphic: good...?

Not that I'm lobbying for it, but just to put the issue to bed: Even if it's a single lens, single anamorphic element?

Jacques Haitkin DP
San Francisco


>> Even if it's a single lens, single anamorphic element?

If any distortion would be applied equally to both eyes, or the optics were well enough made that discrepancies are insignificant, then it could work, but I'd need to know a lot more about the setup before I could say.

Stereo through a single lens implies that the left and right images are being extracted from different parts of an overall image circle, and thus different parts of the anamorphic lens. I don't know whether that's true in your case.

Shooters don't necessarily realize how much distortion even good lenses can have. But when you have to track objects into scenes, you find out very quickly. I'm not saying it couldn't work, but I am saying it would need to be tested up the wazoo. That sort of thing throws up an immediate red flag. Side-by-side electronic compression also did, but it's been tested.

There are plenty of other things I'd also worry about with a single-lens stereo solution, starting with light fall-off (vignetting).

Tim Sassoon
SFD
Santa Monica, CA



