Perhaps this should be basic...but 720/1080/1880/2400 whatever, what does it mean?
Is a pixel a fixed size? I reckon it's not, 9microns...etc.
Any English translations of this concept appreciated. Looking at DI...Acquisition formats...etc. More "pixels" during acquisition doesn't equal more "pixels" during projection...or does it ?
Nick Hoffman NYDP
> Is a pixel a fixed size? I reckon it's not, 9microns...etc.
Surely a pixel can be defined as the smallest indivisible unit of a digital image - the smallest building block. It is inherently used as a non-scaled measurement of resolution, not of image size. It's stating the obvious, but the larger you scale a digital image, the larger you make the pixels (assuming projection here and not re-sampling the image).
The original word comes from "picture element", and that's what it is!
With reference to CCDs, it is the smallest surface area that can independently respond to light. Of course there's the old trade-off: if the actual physical dimensions are larger, the light sensitivity is increased; if smaller, more of them can be placed on a given-size CCD, giving a sharper result because more samples of the image are taken.
As for the DI process, as I recall, it depends on what happens to the original pixels from the CCD. At some point they will have been processed, downsampled, compressed, etc. If you used an F900, you'd start off with 1920 at the CCD, which would have been reduced to 1440 to tape (not concerned with bit depth, for this is a discussion of pixels!), probably upscaled to 1920 in post and then maybe downsampled in the projector!
Right ... back to work.
Chris Cooke-Johnson
Director
Creative Junction Inc.
Barbados
Nicholas Hoffman wrote :
>Perhaps this should be basic...but 720/1080/1880/2400 whatever, what does it mean? Is a pixel a fixed size?
In digital cinema terms, it means that you can divide the size of your screen by the number of pixels and arrive at the effective size of a pixel. For example, if you blow up a 1920 pixel wide image to fit a 40 foot screen, a projected pixel is .25 inches wide.
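The arithmetic here is simple enough to sketch in a couple of lines of Python (the function name is mine, for illustration only):

```python
def projected_pixel_inches(screen_width_feet, horizontal_pixels):
    # Effective projected pixel size: screen width divided by pixel count.
    return screen_width_feet * 12 / horizontal_pixels

print(projected_pixel_inches(40, 1920))  # 0.25 (the 40 ft / 1920 px case)
print(projected_pixel_inches(80, 1920))  # 0.5  (an 80 ft IMAX screen)
```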
Re-sampling only divides the existing pixels, and aside from any pixel mathematics voodoo, your effective resolution will never be any greater than the number of pixels you captured. It CAN become much less than you captured if your image gets re-sampled anywhere in the pipeline, and once that resolution is lost, it is lost.
Dave Stump
VFX Supervisor/DP
>For example, if you blow up a 1920 pixel wide image to fit a 40 foot screen, a projected pixel is .25 inches wide.
Dave :
I find the "effective resolution will never be greater" part a rather bold statement. In our experience, the REAL resolution of a motion image sequence can become much greater than the number of pixels originally captured.
Temporal image processing for Super Resolution is real and works. James Cameron's new 15/70 film "Aliens of the Deep" was shot in HD and recorded on HDCam tapes. We did the DI on this film, including processing the images (with an up-res to 4K) for 70mm film out. On an 80 foot IMAX screen the images do not show 0.50 inch wide pixels; you can see for yourself when the film is released in January.
Best regards
John Lowry
Lowry Digital Images
Burbank CA
What exactly is an "up-res"? It sounds counterintuitive... I know it's been done for a while, I've even done it, but what exactly is it? The new info has to come from some interpolation algorithm, no? If it's so great, why bother with a higher def camera system?
Thanks,
Nick Hoffman NY600
>The new info has to come from some interpolation algorithm no? If it's so great why bother with a higher def camera system?
Let's assume that you want to make a stereo IMAX film at the bottom of the sea, and a couple of HD cameras is all one can reasonably send. Given that the resulting images are somewhat less than the maximum resolvable on 15-perf 65mm, it behoves one to massage the images into the best possible shape before film recording.
So you sharpen the living bejeesus out of it, then expand it out to 4k using one or another algorithm, perhaps in several steps adding noise, either Gaussian or fractal, along the way, and even do some temporal sharpening by accumulating high-frequency information across a group of images, or perhaps using optical flow. Then we convert to log colour space, figure out where the white point's going to be and other colour correction, slate it, and send it off to FotoKem or CFI on Firewire drives.
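As a very rough sketch of the kind of pipeline Tim describes (sharpen hard, upscale in steps, add Gaussian grain along the way), here is a toy version in NumPy. The box blur, the nearest-neighbour upscale and all the function names are my own simplifications, not SFD's actual tools:

```python
import numpy as np

def unsharp_mask(img, amount=1.5):
    # Crude 3x3 box blur via shifts, then boost the high-frequency residual.
    blurred = sum(np.roll(np.roll(img, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return img + amount * (img - blurred)

def upscale_2x_nearest(img):
    # Placeholder 2x upscale; a real pipeline would use a better interpolator.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def step_upscale_with_grain(img, steps=1, noise_sigma=0.5, seed=0):
    # Sharpen, upscale, and add Gaussian "grain" at each step.
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        img = unsharp_mask(img)
        img = upscale_2x_nearest(img)
        img = img + rng.normal(0.0, noise_sigma, img.shape)
    return img

frame = np.zeros((540, 960))
out = step_upscale_with_grain(frame, steps=1)
print(out.shape)  # (1080, 1920)
```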
Tim Sassoon
SFD Vfx & creative post
Santa Monica, CA
Thanks, Tim. John, please chime in here. Yes, things can be done to uprez. Expensive things nonetheless. That's kind of what I meant when I said "mathematical voodoo"...
Dave Stump ASC
VFX Supervisor/DP
> Right, that's what I thought... It ain't film grain but then again it is... not log but etc... Thanks anyway. I'm thinking of the Varicam, which I know you have and like... Have you shot any film outs with yours? Thanks
Nick,
We've done a few commercials to 35mm for use in the cinema. The main brand-name clothing store is actually doing a series of commercials only for film release. Most of our commercials are shot HD irrespective of final output, but because we've been sourcing the film-outs, we've been asked to get others (SD) converted as the chap we're using is cheaper than what the cinema complexes charge for the conversion.
Now, we don't have the luxury of flying to NY to check the process; we just send up a DVD with 2K images and get back film. Obviously this is higher than 1280, and we just do the upconvert in After Effects. I was quite nervous at first and had posted on this list for advice, and was directed to a chap called John Rizzo at Metropolis Film Lab in NY. Figured it was only 30 sec, and if we really didn't like it we could always do it again.
Three things I learnt :
1/. Whoever is doing the filmouts for the cinema is really not very good, either that or John is very good.
2/. HD to film looks way better than SD to film. Sounds obvious but with the number of people who've said they can get SD to look like film, you start to wonder.
3/. The filmouts we have had are indistinguishable from 35mm originated material. Admittedly we're talking fast, contrasty commercials, but we've had a lot of very positive comments about the quality.
Now, we're a third world country (ok, we've been the #1 third world country for like 5 years or something...) but the cinema complex where this stuff is shown is very well designed and constructed, and most of the Caribbean "film" festivals are held there, so I'm happy to use them as a reference in terms of their projection.
Take care,
Chris Cooke-Johnson
Director
Creative Junction Inc.
Barbados
Chris Cooke-Johnson wrote:
> 3/. The filmouts we have had are indistinguishable from 35mm originated material.
Really? Did you shoot and finish any identical spots on 35mm that would justify making this comparison? If not, the statement is simply conjecture, not fact - and also likely to be untrue. That's not to say that what you did is unacceptable, but to make a statement like this without having tried both approaches is simply not logical or fair.
Mike Most
VFX Supervisor
IATSE Local 600
Los Angeles
John Lowry wrote:
>...In our experience, the REAL resolution of a motion image sequence can become much greater than the number of pixels originally captured.
How can REAL resolution be greater than what was captured? You can reproduce MORE detail, more lines, than was actually captured?
>...James Cameron's new 15/70 film "Aliens of the Deep" was shot in HD and recorded on HDCam tapes.
If you can't see the pixels they are either too far away or mushy, right? But does that mean more resolution?
I'm not questioning whether or not the result looks great, I'm just really curious about your claim. Can you explain it more thoroughly?
Wade K. Ramsey, DP
Dept. of Cinema & Video Production
Bob Jones University
Greenville, SC 29614
> How can REAL resolution be greater than what was captured?
>You can reproduce MORE detail, more lines, than was actually >captured?
The important factor here is time.
If you have a single frame of material you cannot create more information from it. Yes, you can uprez a single frame and it will look quite nice, but this frame will not contain more information. This is the equivalent of using the resample function of Photoshop. An image that is re-sampled using bicubic interpolation will look more pleasant to the eye than a zoomed image, though it will look blurry. You can go further and sharpen it artificially so it looks less blurry, but these processes are prone to creating artefacts, which becomes a real problem once multiple frames are involved. The bottom line: using single frame processing you can create an uprezzed image that looks quite nice, but you won't have more information in the image. So it is not "true" higher resolution.
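The "no new information" point can be illustrated with a toy round trip in NumPy: a single-frame upscale is fully determined by its source, so averaging it back down reproduces the source exactly. The function names are mine, for illustration only:

```python
import numpy as np

def upscale_2x_nearest(img):
    # Each source pixel simply becomes a 2x2 block: no new information.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def downscale_2x_average(img):
    # Average each 2x2 block back down to one pixel.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(1)
frame = rng.random((4, 4))
roundtrip = downscale_2x_average(upscale_2x_nearest(frame))
print(np.allclose(roundtrip, frame))  # True: nothing was gained or lost
```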
However things look quite different when you are feeding a moving image sequence into an algorithm. Details that might be invisible in one frame might be visible in the next because the grid of the CCD is slightly moved or the lighting conditions change subtly. Let's say you are recording a meadow. The CCD (or the film emulsion) might register only every other blade of grass in a frame, because the blades are smaller than a pixel (or film grain). So if you look at a single frame, half of the blades of grass are missing. No single frame uprez can bring the other blades back. However if you pan across the meadow, chances are pretty high that the other blades are indeed visible in other frames of your movie.
Now let's imagine we had an ultra accurate tracker that tells us exactly where each pixel is in three dimensional space. If we had such a tracker we could assemble all the details from other frames and insert them into the frames where they are hidden.
Such a tracker is pretty much beyond our technology right now, but companies like Lowry have something that comes close. So by tracking the movement of features in subsequent frames, they are able to reconstruct details that are not visible in a single frame, essentially creating a true higher resolution image that indeed has more information in each frame than before.
As a by-product, you can of course also eliminate film grain, defects like scratches and dust or even electronic interference.
Now, why should we bother getting higher resolution digital cameras then? Because with these higher resolution cameras we can create even better pictures - 4K from a 2K camera, for example. The more data you give the algorithms to crunch on, the more you can do. And just to emphasize the point: this is not limited to the digital world. The process works just as well (or even better, because of the randomness of film grain) when you have material that was recorded on film.
Now John Lowry probably shakes his head smiling about this very crude explanation, but I hope the general idea came across.
Cheers,
Lin Sebastian Kayser - CEO - IRIDAS - www.iridas.com
"Temporal image processing for Super Resolution is real and works....
I'm not questioning whether or not the result looks great, I'm just really curious about your claim. Can you explain it more thoroughly?"
It uses motion estimation, analytical mathematical algorithms, predictive object models, retracing techniques, replacement and theoretical restoration models.
Simply put, the more smooth motion in a sequence the better.
The reason it's not used more is the same reason only Lucas had a non-linear editor in the early 80's. Speed, cost and it's not perfect yet.
Greg Folley
Southerncoast Video
Mesa, AZ
>Now, why should we bother getting higher resolution digital cameras then?
>Now John Lowry probably shakes his head smiling about this very crude explanation, but I hope the general idea came across.
Lin :
I am smiling and would like to thank you for a good response to these questions. You have personally seen the results so I suspect it becomes rather easier to understand (and explain). Since we are dealing with images that move there are endless possibilities for their improvement.
Why higher resolution cameras? To me there is no end to the need for more resolution and dynamic range, depending of course on the nature of the displays, today and tomorrow, and the needs of the storytelling process.
Best regards
John Lowry
Lowry Digital Images
Burbank CA
> that what you did is unacceptable, but to make a statement like this without having tried both approaches is simply not logical or fair.
Michael,
You're quite right in a technical sense, but from a viewer's comparison, I can only state what I and others have seen. I disagree that I would have needed to do the process myself through another medium in order to judge the apparent resolution. The cinema has a number of spots running, some that were shot on video, some that were shot on 16mm and some that were shot on 35mm. I found it very easy to spot the ones from video and 16mm, and enough people asked me if we had shot the spots on 35mm to support my statement.
Part of my job, I believe, is to get the most out of any medium that we work with. If I can get people in the know to think that something shot on HD was done on 35mm, then I'm doing a good job. We recently finished a documentary, shot in Brazil, Sierra Leone, Haiti, Palestine and a few other places on a DVX100, but everyone at the local TV station swears it was shot on HD.
Again, as I had clearly noted, the commercials in question are fast and contrasty and thus I was not concerned about dynamic range, but more apparent resolution, which is where this whole discussion started.
To be clear, upscaling with "mathematical voodoo" can produce what I would consider acceptable results. Partly because film itself isn't a bunch of precisely defined squares but is more rounded and slightly bled. I certainly would sit in the camp that more is better, and that if you start with a higher resolution, you have much more headroom, but, probably because I come from a post background, I'm not so fast to knock what can be done with a digital image.
Take care,
Chris Cooke-Johnson
Director
Creative Junction Inc.
Barbados
Dear Chris Cooke-Johnson, do you think this also suggests that the Celco Extreme has an edge over ArriLaser in some way ?
(I'd be curious to see a sample, does Jack Rizzo have any of the 35mm filmouts for demo purposes ?)
Was there anything you did in post before sending to the lab for the filmout ? (and did you filmout to IP or camera stock ?)
Hope this is not too many questions !
Sam Wells
Lin wrote :
>...However things look quite different when you are feeding a moving image sequence into an algorithm.
Thanks for such a well presented explanation. However, I don't believe we were comparing a still image to a moving image. We experience this difference every time we pause or still frame a movie--the moving image seems to have, and doubtless does have, more resolution for the reasons you outlined.
But I understood that we were discussing uprezzing a moving image and achieving MORE resolution on the uprezzed moving image than was captured on the original moving image. I can accept that it is possible to process it so that it LOOKS better in various ways, but more actual resolution? Is it a matter of emphasizing detail that is too subtle for the eye to recognize in the original?
Wade K. Ramsey, DP
Dept. of Cinema & Video Production
Bob Jones University
Greenville, SC 29614
>it LOOKS better in various ways, but more actual resolution?
Wade :
There is actually more resolution.
Best regards
John Lowry
Lowry Digital Images
Burbank CA
> But I understood that we were discussing uprezzing a moving image and achieving MORE resolution on the uprezzed moving image than was captured on the original moving image.
Correct. Speaking in simplified terms, the algorithm looks at the frames before the current one and after the current one and extracts details that are found there. It inserts these details into the current image, thus creating an image with more information than before.
Again extremely simplified : Somebody is walking through your image. The person has a wristwatch. Now you look at the first frame. You see a blurred image of a wristwatch. You look at the next frame, you see the wristwatch more clearly. Now you extract the "wristwatch" detail from frame 2 and insert it at the correct position in frame 1. Now you have a "wristwatch" detail in frame 1 that was not there before, hence more information.
Please note that this is extremely simplified. The actual algorithms work very differently, but this explanation demonstrates very clearly how you can enhance the amount of information in a frame by looking at other frames and extracting information that is not present in the original frame.
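Lin's wristwatch example can be sketched as a block search plus a patch copy. This toy NumPy version is my own (the exhaustive search and the function names are illustrative; real algorithms are far more sophisticated): it finds where a patch from one frame appears in the next and copies that detail back:

```python
import numpy as np

def best_match(patch, frame):
    """Exhaustive block search: position in `frame` most similar to `patch`."""
    ph, pw = patch.shape
    fh, fw = frame.shape
    best, pos = np.inf, (0, 0)
    for y in range(fh - ph + 1):
        for x in range(fw - pw + 1):
            err = np.sum((frame[y:y + ph, x:x + pw] - patch) ** 2)
            if err < best:
                best, pos = err, (y, x)
    return pos

def borrow_detail(frame1, frame2, y, x, size):
    """Replace a patch in frame1 with its motion-compensated
    counterpart found in frame2 (the 'wristwatch' from the next frame)."""
    patch = frame1[y:y + size, x:x + size]
    my, mx = best_match(patch, frame2)
    out = frame1.copy()
    out[y:y + size, x:x + size] = frame2[my:my + size, mx:mx + size]
    return out

rng = np.random.default_rng(0)
f1 = rng.random((8, 8))
f2 = np.roll(f1, (1, 1), axis=(0, 1))   # same scene, shifted one pixel
print(best_match(f1[2:5, 2:5], f2))     # (3, 3): the detail moved down-right
```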
Cheers,
Lin Sebastian Kayser
CEO - IRIDAS
Lin replied:
>Correct. Speaking in simplified terms, the algorithm looks at the frames before the current one and after the current one and extracts details that are found there. It inserts these details into the current image, thus creating an image with more information than before....
Thanks! Ain't technology wonderful?
Wade K. Ramsey, DP
Dept. of Cinema & Video Production
Bob Jones University
Greenville, SC 29614
<< Ain't technology wonderful? >>
Wade:
Both useful and very satisfying to work with.
Exciting times.
Best
John Lowry

>Speaking in simplified terms, the algorithm looks at the frames before the current one and after the current one and extracts details that are found there.
Back in the interlaced only days, the Sony HD Centre used a somewhat analogous concept when outputting HD to film. They would combine several interlaced fields in various percentages to produce a film frame.
The process worked fairly well most of the time, but on the cuts there were often problems, since the fields needed prior to the cut were not available. So either the entire original source had to be processed before the on-line assembly, or short handles, which could later be removed, had to be added at the cuts.
Handles might be necessary here as well.
Noel Sterrett
Admit One Pictures
> Let's assume that you want to make a stereo IMAX film at the bottom of the sea, and a couple of HD cameras is all one can reasonably send.
...or even to us. We have (I believe) the only 65mm film recording service outside of North America.
It's also worth pointing out that we have handled a few standard def commercials (from DigiBeta masters) to full frame IMAX blowups. When you're doing this kind of thing you quickly realise how inadequate 16 point bicubic interpolation is, which is why we developed our own 64 point interpolator.
Simon Burley
RPS Film Imaging Ltd
And Simon, what did you do to "up-res" your material? Digi-to Imax? Hmmm....
Nick Hoffman 600NYDP
"Let's assume that you want to make a stereo IMAX film at the bottom of the sea, and a couple of HD cameras is all one can reasonably send. Given that the resulting images are somewhat less than the maximum resolvable on 15-perf 65mm,"
Why should we assume when the actual event took place? Speaking as one who was on the original IMAX deep dive expedition to Titanic in '92, back for the film in '95, and then DP on the "Ghosts of the Abyss" effort in '01, it amazes me that we get stuck at the gateway to the Large Format entrance with a simple "Real Resolution" requirement. More important are all the other factors that go into good image capture.
I have a tremendous amount of respect for the Large Format DPs out there who strive for "maximum resolvable" images, but for my dollar, if I can bring the viewer closer to the subject, challenge their imagination with images never before captured in 3D, and give the viewer the same sense of adventure I feel when I climb into a three-person submersible and descend three miles below the surface, then I don't care if you "sharpen the living bejeesus out of it". The fact is, if the "real resolution" does not distract the viewer from the presentation, wouldn't it be worth the effort?
When my friends watch Ghost or soon Aliens, they don't come back to me and say: "Vince, don't you think film would have carried the highlights better?" The reaction I get is "Are you nuts for going down in those things?" or "What was that creature down there?"
I have been down to the decks of Titanic and Bismarck, 3 miles below the surface, with an HD3D system. When I watch the footage, I relive the trip each and every time. In many ways the view from the camera was better than my 4 inch porthole. I have been up in a P-51 Mustang and in the cockpit of a B-24 bomber with an HD3D system. Again, a wild ride visually captured. I have stepped on the field of an NFL championship game with a 28lb HD3D camera on a Steadicam rig. If the Lowry process can help make the images more acceptable as they present themselves on an IMAX screen, you can call it any term you want. For me, John summed it up the best:
Exciting Times.....
Vince Pace
PaceHD Productions
Copyright © CML. All rights reserved.