Is there a preference between Parallel vs. Mirror Rig in 3D photography?
Isn't most of IMAX 3D shot Parallel?
Los Angeles, CA
New Orleans, LA
Tom McDonnell wrote:
>> Is there a preference between Parallel vs. Mirror Rig in 3D photography?
>> Isn't most of IMAX 3D shot Parallel?
I'm guessing you mean side-by-side vs. mirror rig?
Graham D Clark,
phone: why-attempt, s3d phone: fad-take-two
There may be some mixed terminology here...
Just to clarify:
The physical rig you use to shoot with might be a side-by-side or mirror (beam-splitter) rig.
A side-by-side rig simply mounts the cameras next to each other. A mirror (beam-splitter) rig lets you adjust your IOD down smaller than the physical size of the camera package would allow, since the cameras are no longer physically next to each other.
Either rig can support a parallel or converged shooting style, assuming the rig provides the adjustment of course. This just means that you either keep the cameras parallel and do any convergence in post, or you converge in camera (i.e. "toe in" the cameras) so that they are pointing at the object you wish to be converged.
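As a back-of-the-envelope illustration of the toe-in geometry (a hedged sketch only, assuming a symmetric rig where each camera angles in equally; the function name and numbers are mine, not from any rig's documentation):

```python
import math

# A hedged sketch of the "toe in" adjustment, assuming a symmetric rig
# where each camera rotates inward by the same amount. The function
# name and example numbers are illustrative only.
def toe_in_angle_deg(iod_mm, conv_dist_mm):
    # Each camera aims at a point on the centre line at the convergence
    # distance, half the IOD away laterally.
    return math.degrees(math.atan((iod_mm / 2.0) / conv_dist_mm))

# e.g. a 65 mm IOD converged on a subject 2 m (2000 mm) away works out
# to just under 1 degree of toe-in per camera.
toe_in_angle_deg(65.0, 2000.0)
```

The angle grows quickly as the subject gets closer, which is one way to see why convergence distortions matter more in close-up work.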
Hope that helps a bit.
. physicist . stereographer .
. maker of 3D rigs and IOD calc. www.speedwedge.com
Yeah I'm trying to get a head start so when I attend the 600 3D course I at least understand the lingo and have a basic understanding of 3D.
I had a recent discussion with a 3D post professional who said parallel shooting is less prone to error except in close-up situations; there the mirror rig is used, where the interaxial needs to be less than what side-by-side rigs will allow. He also said parallel photography produces a more naturalistic 3D environment. It was also explained to me that IA sets the depth of the 3D, and convergence sets the point on screen, so an object or scene can be shifted or slid either screen-forward or screen-back.
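That "IA sets depth, convergence sets the screen plane" idea can be sketched numerically. This is a hedged illustration only, assuming parallel pinhole cameras converged in post; the function name and sign convention are mine, not from any standard tool:

```python
# A hedged sketch of "IA sets depth, convergence sets the screen plane",
# assuming parallel pinhole cameras converged in post at conv_dist_mm.
# Function name and sign convention are illustrative only.
def parallax_mm(focal_mm, ia_mm, z_mm, conv_dist_mm):
    # On-sensor parallax of an object at distance z_mm after the two
    # eyes are shifted to converge at conv_dist_mm. Negative plays
    # screen-forward, positive screen-back, zero sits on the screen
    # plane; the overall magnitude (total depth) scales with the IA.
    return focal_mm * ia_mm * (1.0 / conv_dist_mm - 1.0 / z_mm)

# 35 mm lens, 65 mm IA, converged at 2 m:
parallax_mm(35, 65, 4000, 2000)   # > 0: object at 4 m plays screen-back
parallax_mm(35, 65, 1000, 2000)   # < 0: object at 1 m plays screen-forward
```

Doubling the IA doubles every parallax value (more depth), while moving the convergence distance only slides the whole scene forward or back relative to the screen.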
Another question, if I may. I see DSC came out with a 3D alignment chart. When using this chart, is a specific focal length and distance from the camera required at a standard interaxial? How then is convergence lined up, or are you looking for equal amounts of toe-in? It seems the chart is limited in its application, as each shot will be immensely different from any bench-test chart situation.
Los Angeles, CA
New Orleans, LA
I think there might still be some confusion between the terms parallel and side-by-side.
Sorry if I'm grinding the point...
You can shoot parallel or converged. This refers to how much the cameras are angled inwards (not at all when shooting parallel).
Independently of this, you can rig the cameras side-by-side or in a beam-splitter rig. You will need a beam splitter to achieve smaller IODs, but you can still shoot parallel or converged in that arrangement.
It is completely true that the distortion from convergence is greater at close quarters than for distant objects. If you converge in camera you will be angling the cameras in quite a bit, and if you converge in post you will be sliding the images much further left/right, requiring a greater blow-up to hide the edges.
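To put rough numbers on the post-convergence case: a hedged sketch, assuming a simple pinhole model for parallel cameras where the on-sensor disparity of a point at distance Z is focal x IOD / Z (the function name and example figures are mine):

```python
# A hedged sketch of why close convergence costs more in post, assuming
# parallel pinhole cameras: nearer convergence planes need bigger image
# shifts, and bigger shifts need a bigger blow-up to hide the blank
# edges. Names and numbers are illustrative only.
def post_convergence_blowup(focal_mm, iod_mm, conv_dist_mm,
                            sensor_width_mm, image_width_px):
    # Total on-sensor disparity at the intended convergence distance.
    disparity_mm = focal_mm * iod_mm / conv_dist_mm
    disparity_px = disparity_mm * image_width_px / sensor_width_mm
    # Sliding the two eyes together by this many pixels leaves a gap
    # that must be scaled out.
    return image_width_px / (image_width_px - disparity_px)

# 35 mm lens, 65 mm IOD, roughly Super 35 (24.9 mm) sensor, 1920 px wide:
near = post_convergence_blowup(35, 65, 1000, 24.9, 1920)    # ~10% blow-up at 1 m
far  = post_convergence_blowup(35, 65, 10000, 24.9, 1920)   # ~1% blow-up at 10 m
```

Converging on something a metre away costs an order of magnitude more blow-up than converging on something ten metres away, in this toy model.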
Which is better? worse?
It is dependent on the situation of course, and people also have preferences one way or the other, like everything in life. There is no wrong answer, sorry!
re alignment charts.
What they really give you is a nice array of easily recognised points so you can track out distortion in the rig. There is always some distortion from the different sensors, lenses, mirror and mechanics. If there are plenty of trackable points in the actual scene, that helps too!
Sometimes people place the chart at the convergence plane, which makes aligning any post convergence easy, although you will often find yourself chasing this a little shot to shot as people land slightly off their mark or whatever.
If you have the chance to completely profile the rig and lenses you will be using against charts then this is a great starting point for your post alignment. You will not always use them and you can still post the footage fine.
. physicist . stereographer .
. maker of 3D rigs and IOD calc.
Tom, Leonard's points are exactly right.
I would just add one thing to be aware of in parallel 3D photography. When you shoot parallel you will have to find and set a convergence point somewhere in the frame in post production. Depending on how near or far that point is (it is where you will place the screen plane), the two image streams will have to be slid towards each other, creating a gap on the sides that must be dealt with either by scaling the image up or by cropping the picture. The latter solution is fine when you have shot at a frame size larger than your delivery size, like shooting 2K, 3K or 4K for 1920x1080 HD delivery. If you are shooting directly for HDTV with an HD camera at 1920x1080, then the scaling process will almost certainly be used, and a slight or greater loss of resolution will result. The only exception I have dealt with is green/blue screen, where you can always garbage-matte the missing side pixels and not have to scale up.
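The crop-versus-scale decision above boils down to a simple width check. A hedged sketch (the function name and numbers are mine): after a post-convergence slide of some number of pixels, each eye loses that much usable width; if what remains still covers the delivery frame you can crop, otherwise you must scale up.

```python
# A hedged sketch of the crop-vs-scale decision: after sliding the two
# eyes together by shift_px to set convergence, each eye has that many
# fewer usable pixels of width. Crop if the remainder still covers the
# delivery frame; otherwise scale up (losing some resolution).
def can_crop(capture_width_px, delivery_width_px, shift_px):
    return capture_width_px - shift_px >= delivery_width_px

can_crop(4096, 1920, 180)   # True: 4K capture easily covers HD delivery
can_crop(1920, 1920, 180)   # False: native HD capture must be scaled up
```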
I'm pretty certain this is the main reason people converge when they shoot: if you are very close to or exactly at the right convergence point, the image size stays pretty much intact. And yes, you can alter the convergence point in post, but the IA is pretty much baked in at the point of original photography.
It's my understanding that Tron is being shot parallel, and as we all know by now Avatar was shot with the convergence point pretty much always at the point of camera focus.
LA-based IATSE 600