Home of Professional Cinematography since 1996

RED 4K HD

Published: 1st May 2010

Hi gang-

I just got a call from the rental house that's supplying the gear for an upcoming shoot (starting tomorrow) and they asked me if I wanted 4K HD programmed into the RED camera. I asked them if it's a common setting and they said "Yeah, for about the last year it has been."

I spoke to the post house and they said "4K 16:9, 4K HD, we're fine with either." I've only shot 4K 16:9 and 4K 2:1. It seems like there should be an advantage when scaling 4K HD to 1920x1080, which is how the project is being finished, but I figure I should get some more information.

What are the real benefits to 4K HD?

Thanks -

-----------------------

Art Adams | Director of Photography
4 1 5 . 7 6 0 . 5 1 6 7

showreel -> www.artadams.net
trade writing -> art.provideocoalition.com

ICG, SOC, NWU


> What are the real benefits to 4K HD?

The primary benefit of quad-HD is speed, as far as I know. The RED "half-high" demosaic takes advantage of the exact 4:1 ratio to get very good quality in very fast render times compared to rendering full-res quad-HD or 4K and then downsampling with a sinc filter.

If you shoot in 4K, then the half-high demosaic only gives you 2K output, which you then have to crop for 1080p. If you resize 2K to 1080p, the results are poor (aliasing artifacts or softness). Of course you can do a full-resolution 4K demosaic, then resize to 1080p directly, which will give you the highest quality of all, but the render times are long.

If you can handle the render times of a full-res, highest-quality demosaic, then you don't need Quad-HD. But for mere mortals that need a fast 1080p with high quality, it's just what the doctor ordered.
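Daniel's half-high path can be sketched in a few lines of NumPy: each 2x2 Bayer cell (one red, two green, one blue photosite) collapses into a single RGB output pixel, so a 3840x2160 mosaic yields 1920x1080 with no interpolation at all. This is only a toy illustration of the general technique, not RED's actual demosaic:

```python
import numpy as np

def half_high_demosaic(bayer):
    """Collapse each 2x2 RGGB Bayer cell into one RGB pixel (half resolution).

    bayer: 2D array laid out as
        R G R G ...
        G B G B ...
    Returns an (H/2, W/2, 3) RGB image using only captured photosite values.
    """
    r  = bayer[0::2, 0::2]        # red photosites
    g1 = bayer[0::2, 1::2]        # green photosites on red rows
    g2 = bayer[1::2, 0::2]        # green photosites on blue rows
    b  = bayer[1::2, 1::2]        # blue photosites
    g  = (g1 + g2) / 2.0          # average the two greens
    return np.dstack([r, g, b])

# A 3840x2160 "4K HD" mosaic becomes 1920x1080 RGB directly:
mosaic = np.random.rand(2160, 3840)
rgb = half_high_demosaic(mosaic)
print(rgb.shape)  # (1080, 1920, 3)
```

Because every output value comes from a captured photosite (the greens are simply averaged), this is both fast and cheap, which is why the exact 4:1 pixel-count ratio matters.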


Daniel Browning
Software Engineer
Portland, OR


> But for mere mortals that need a fast 1080p with high quality, it's just what the doctor ordered.

I almost always listen to my doctor, and long render times are the bane of "cost-effective" budgets. 4K HD it is. It doesn't appear that I'm sacrificing anything that will be visible in 1920x1080, and if it actually results in sharper footage at that resolution then so much the better.

I've never known any of my projects to render more than half-res high. 4K HD sounds like a winner.

-----------------------

Art Adams | Director of Photography
4 1 5 . 7 6 0 . 5 1 6 7


> What are the real benefits to 4K HD?

If you're using Final Cut Pro's Log & Transfer to ingest the footage, 3K+ material comes in at half-resolution. 4K HD appears on the timeline as 1920x1080. Assuming you don't need any additional wiggle room for reframing (thus re-opening the whole "RED push-in" debate), this gives you stuff on the timeline that's good to go with no further scaling needed.

Cheers,


Adam Wilt
filmmaker, Meets The Eye LLC, San Carlos CA
tech writer, provideocoalition.com, Mountain View CA
USA


Hi Art,

I used 4K HD on the movie I shot in October. It had an HD master as the main deliverable, so no film-out, and the editor wanted 1920x1080 dailies instead of 2K. 4K HD was an integer resize to 1920, and in the end it should produce a sharper picture as well as naturally creating dailies at the correct pixel dimensions.

Best,

Graham Futerfas
Director of Photography
email GFCine@gmail.com

www.GFuterfas.com


Adam Wilt wrote:

>> If you're using Final Cut Pro's Log & Transfer to ingest the footage, 4K HD appears on the timeline as 1920x1080.

Art,

Adam hit it on the head: if post is using FCP, Apple's Log and Transfer software does a direct conversion to 1920x1080 for workflow simplicity. Note that the RedRocket card shows increased speed (faster conversions) when converting the Quad HD format to 1920x1080 ProRes or DNxHD.

Gary Adcock

Studio37
HD & Film Consultation
Chicago, USA


By shooting 4kHD do you change your field of view like when you go from 4k to 2k?

Garrett Shannon
LA Cinematographer
www.garrettshannon.com


Garrett Shannon wrote:

>>By shooting 4kHD do you change your field of view like when you go from 4k to 2k?

The field of view does change, but only slightly. 4K to 2K is a 2.00X crop, while 4K to 4xHD is a 1.06X crop.
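For what it's worth, those crop factors fall straight out of the recording-window widths. A quick sanity check, using the commonly quoted RED ONE mode widths (assumed values, not official specs):

```python
# Commonly quoted RED ONE recording widths in photosites (assumptions)
full_4k = 4096
quad_hd = 3840   # "4K HD" / 4xHD
two_k   = 2048

print(f"4K -> 2K crop:   {full_4k / two_k:.3f}x")    # 2.000x
print(f"4K -> 4xHD crop: {full_4k / quad_hd:.3f}x")  # 1.067x

# Equivalent focal length of an 18mm lens in each cropped mode
for name, width in [("2K", two_k), ("4xHD", quad_hd)]:
    print(f"18mm frames like {18 * full_4k / width:.1f}mm in {name}")
```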

Daniel Browning
Software Engineer
Portland, OR


18mm becomes 19mm equivalent or thereabouts.

Cheers

Geoff Boyle FBKS
Cinematographer
EU Based
Skype geoff.boyle
mobile: +44 (0)7831 562877
www.gboyle.co.uk
www.cinematography.net


Thanks, all.

We're shooting 4K HD and all is well. The frame area is slightly reduced but no big deal.

Ran into an odd thing today, though. We're using Build 20, and I have one of the side buttons set up to toggle into RAW mode. After changing locations and shooting in a new location that was fairly monochromatic, I noticed that my RAW toggle button didn't seem to be doing anything. We were under time pressure so I just kept shooting, trusting my meter and the raw sensor barber pole thingy.

I discovered later that the viewing gamma had reset itself to RAW from RedSpace, and when I toggled into RAW I saw no change because I was already in RAW.

Anyone seen this kind of thing before? I've seen the RED spontaneously white balance (early build 20) but not this.

-----------------------

Art Adams | Director of Photography
4 1 5 . 7 6 0 . 5 1 6 7

ICG, SOC, NWU


>>I discovered later that the viewing gamma had reset itself to RAW from RedSpace,

Hi Art,

Yes, I've seen this a lot on my last two RED jobs, both Build 20. Even with the camera sitting overnight, somehow the setting would change. I panicked about it after shooting a scene where I thought I was viewing a hotter RedSpace but was really looking at RAW; now I change the Look in the camera to make RedSpace more closely match RAW by reducing exposure and saturation a little. The tip-off comes when you can't toggle into Edge-Detect, because that doesn't show up in RAW.

'RAW' isn't really RAW of course, since it's gamma encoded. I now view RedSpace in a closer-to-RAW view mode all the time, and I don't worry so much about toggling back and forth. I'm comfortable knowing how to get the exposure where I want it, and the in-camera look adjustments are carried over to one-light dailies.

But I seem to always run into technical difficulties with the RED. This was one of them. I went through three camera bodies on the 18-day movie I shot, for all sorts of reasons, and now that company's RedDrives seem to be going bad. I shot a music video on their camera last week and had a lot of codec faults and dropped frames.

But I will say I can get Red Tech Support on the phone very easily. Not much they can do when there's a problem with the camera not re-booting for 8 minutes. Just send the thing in for service.

Best,


Graham Futerfas
Director of Photography
Los Angeles, CA
www.GFuterfas.com


The nice thing is that, when something like this happens, you know there's nothing wrong with the footage because it was exposed properly--or so sayeth the meter. It'll show up a bit brighter than it should, but then I warned the production company about that. I'm rating it at 160, which some say is overkill--and sometimes I wonder myself if that's the case--but my footage is for the most part remarkably noise free. That's definitely one of the benefits of the RED: the ability to pick your own EI based on how you want the footage to turn out.

It's fun using my meter again, too. For too long it had been replaced by waveform monitors.

I had a lot of codec faults on a job with Build 20 recently, and heard it was an issue with a particular software release; then just heard the same thing about Build 21. I do like that Build 21 tells you what gamma/colour space you're seeing in the viewfinder, but the rental house providing the gear doesn't send out Build 21 yet as they don't consider it stable. (This seems to be a trend with every RED software build release: the rental houses hold it back until they're fairly sure there's a version that works.)

I'm enjoying shooting with the RED, but it is a challenge sometimes. Fortunately I like challenges.

Thanks, Graham.

-----------------------

Art Adams | Director of Photography
4 1 5 . 7 6 0 . 5 1 6 7

ICG, SOC, NWU


Art Adams wrote:

>> (This seems to be a trend with every RED software build release: the rental houses hold it back until they're fairly sure there's a version that works.)

There are Beta Builds and Release Builds. Beta Builds are flagged right on the RED website as not being for professional production, so any rental house could be held liable if there were a problem.

Mitch Gross
Applications Specialist
Abel Cine Tech


>>I had a lot of codec faults...

Hi Art,

I've heard that as the drives age, this can become a problem as well.


Just a rumour that I heard though. Anyone know what may be going on?

As for exposure, I tested the camera with a chart and grey card at 160. I put the camera's false-colour at 18% grey, set exposure, and compared to my meters. My gaffer claims it varies, but I'm not so sure. I definitely like to expose hotter if I can to reduce noise, hence my bringing down the exposure in RedSpace to mimic RAW. Of course highlights become an issue to watch, which is why I still like 100% Zebras.

And yes, I agree about still using light meters, but I make my gaffer carry them on his belt nowadays, especially when I'm operating. I tend to scuff up the set walls with them or get hung up on the dolly.

Best,

Graham Futerfas
Director of Photography
Los Angeles, CA


>> I put the camera's false-colour at 18% grey and set exposure and compared to my meters.

I've found through testing that at 320 the RED has about 4.5 stops of overexposure latitude and 5 under, so at 160 I'll meter white highlights (today it was clouds), open up 3.5 stops, and end up pretty much dead on where I want to be, based on metering the talent.
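This meter math is just powers of two, one stop per doubling of exposure. A toy version, using the latitude figures quoted in this thread (one tester's measurements, not official specs):

```python
# Latitude figures quoted in this thread (assumptions, not specs)
over_at_320, under_at_320 = 4.5, 5.0  # stops over/under middle grey at EI 320

# Rating the camera at EI 160 instead of 320 shifts one stop of latitude
# from the highlights into the shadows:
over_at_160 = over_at_320 - 1.0    # 3.5 stops of highlight headroom
under_at_160 = under_at_320 + 1.0  # 6.0 stops below middle grey

# Metering a white highlight and opening up by the full headroom places it
# right at the clip point; in linear-light terms that is a factor of:
print(2 ** over_at_160)  # ~11.3x the metered exposure
```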

So far I've been setting fill by eye, but I haven't been doing a lot of jobs where I have to be consistent within a scene. It's mostly spots and PSAs where we milk one lighting setup for a few shots and then move on. I'm not getting a solid feeling for where the noise kicks in on dark tones. I want to say that about two stops under 18% (reflected, at EI 160) seems to be where it kicks in if you try to bring up the darker tones in post. Do you have any insight on that?

-----------------------

Art Adams | Director of Photography
4 1 5 . 7 6 0 . 5 1 6 7

ICG, SOC, NWU


>> I noticed that my RAW toggle button didn't seem to be doing anything. We were under time pressure so I just kept shooting, trusting my meter and the raw sensor barber pole thingy.

I see that happen all the time on Build 20. As far as I can tell it has been happening whenever I swap batteries. No other settings change but for some reason my view colour button changes from colour space to RAW so that when I am in RAW and try to view colour space I see no change. Hmmmph.


Garrett Shannon
LA Cinematographer
www.garrettshannon.com


Hi Art,

Yes, it's a known bug: the view space defaults to whatever it was when the camera was shut down. It had been toggled to RAW at shutdown, so when it came back up again that was the chosen view space; thereafter you were toggling between RAW and RAW. Build 21 (which got pushed back to beta for some more bug squashing) corrects that problem and also gives us a display of the view we are using.

Clint Johnson
Red #351 owner and DIT, as well as AC or Camera Operator if you are desperate enough.


I have tested in depth the EI and exposure on the RED One, and I found that 160 ASA is the real EI for daylight and 125 ASA for tungsten. At 160 ASA I can use 2 2/3 stops over middle grey and 4.5 stops under. I have tested overexposure with models, charts and real shoots, and if I want to keep some detail and texture in whites, they can't exceed 2.5 stops over middle grey. Of course, if I expose at an EI of 320 ASA, you have almost 3.5 useful stops over middle grey, but less information in the shadows. You can see those tests at:

http://www.alfonsoparra.com/php/ver/structure_all.php?s=3&n=3&id=382&ln=spa&id_cms=497

http://www.alfonsoparra.com/php/ver/structure_all.php?s=3&n=3&id=374&ln=spa&id_cms=489

Those tests were done with Build 17. I have just shot a TV movie (As reliquias do santo) with Build 20 and I haven't noticed a big difference in the highlights, except that the blue channel is less noisy in the shadows.

I'm sorry, the articles are in Spanish at the moment, but I think the pictures are interesting to see.

Best Regards

Alfonso Parra AEC
Spanish cinematographer
www.alfonsoparra.com


>> ...if I want to keep some detail and texture in whites, they can't exceed 2.5 stops over middle grey.

That really doesn't sound right to me. 2.5 stops over, at EI 160, is about where the highlights start being compressed if you're viewing in RedSpace. I got 3.5 stops over when I did my tests on builds 17 and 20.

125 sounds reasonable under tungsten if only to boost the blue channel exposure. The red channel still seems a bit sensitive to overexposure as flesh tones can very easily appear to clip a lot sooner than when shooting under daylight.

Whenever possible I use either a Schneider 1/2 CTB or Tiffen 80D Hot Mirror when shooting under tungsten light, rating the camera at EI 100.

It looks like I'll be shooting the interior of a large theatre on Friday and I don't have an adequate lighting budget, so I may have to forego any filtration. Should be interesting...

-----------------------

Art Adams | Director of Photography
4 1 5 . 7 6 0 . 5 1 6 7

ICG, SOC, NWU


Sometimes it just makes more sense to ask the gaffer to throw some 1/4 CTB on the lights.

Even an 80D with its 8/10-stop loss can open a whole new can of worms. I don't normally request it unless we're doing tungsten on greenscreen.

Good luck,


Dane Brehm
DIT
SF & LA


>> Sometimes it just makes more sense to ask the Gaffer to through some 1/4 CTB on the lights.

Maybe, but I find putting one filter in the camera to be faster and more cost effective than gelling every light. As long as the filter doesn't cause problems (reflections, etc.) one can then avoid the problems caused by gel kicks, different colours of gel due to aging, etc. There are times and places for both solutions.

I like to employ the least complicated solution possible. One can't avoid chaos in production, but one can avoid introducing further chaotic elements.

-----------------------

Art Adams | Director of Photography
4 1 5 . 7 6 0 . 5 1 6 7

ICG, SOC, NWU


I've shot quite often under pure tungsten light with both my REDs...on Build 17 still [20 was too buggy with the view swap]. No filtration on either camera or lamps..left it all at 3200K.

Never had problems unless we are doing very, very low light exposures..see a bit of noise only then. But I mean it is a very rare occurrence. This would be at the bottom of the curve near the toe, and usually only in situations where you have a smooth, non-textured surface that 'blends' from a mid tone to dark.

I see this in RedAlert! when I play with the looks..adding a tiny amount of contrast seems to completely eradicate this noise. I try to treat low-light shooting levels as I would on film: slightly overexpose the scene to prevent grain [or noise]. I always have clients who freak out due to their complete misunderstanding of the production world [why the hell they do NOT teach ANY production basics in advertising school is way, way beyond me!].

Either crank the best on-set monitor to the way you like it..or roll a few seconds of the scene and then import it into RedAlert! on set [guys, all of you should go out and buy a MacBook Pro now & write it off for this year...go do it!].

Grade it as you like, show the director & client what you want this to look like...and send the TIFF by email to the post house.

It’s just like the days of shooting Polaroids on set to your taste & showing those around to the various parties...

I hear that the new builds have better & better performance, and I'm looking forward to working with them when they are perfected.

Cheers,

Jeff Barklage, s.o.c.
www.barklage.com
agent: TDN ARTISTS www.tdnartists.com
online reel:
http://www.interdubs.com/r/tdnartists/index.php?namedlink=Cinematographer_Jeff_Barklage
USA based DP


Art Adams writes

>> 4K HD it is. It doesn't appear that I'm sacrificing anything that will be visible in 1920x1080, and if it actually results in sharper footage at that resolution then so much the better.

I'd be curious what might be lost with a partial-decode, which is what it sounds like you're doing in this workflow.

A wavelet can be partially decoded, but as I've stated before in similar threads, you aren't getting 'scaled 4K', which would indeed make a sharp 2K image...you're extracting a 2K proxy of 4K footage...quarter res...or half-res/quarter size...whatever.

None of this is to say that this workflow is somehow unacceptable, but keep in mind what the process actually is...you get a 2K partial decode of a 4K Bayer sensor image. You still get all the benefits of a 35mm-compatible image target, but without a full-res decode -before- scaling, all you're really doing is leaving 75% of the pixels on the cutting room floor...or in the couch cushions, or wherever discarded bits accumulate.

With all the subsampling and compressing our industry has been doing...there's a lot of unused bits accumulating somewhere...

Tim Kolb
Director/Editor
Neenah WI USA


Tim Kolb wrote:

>> I'd be curious what might be lost with a partial-decode, which is what it sounds like you're doing in this workflow.

While I agree with the points Tim is bringing up, I would like to make a distinction.

Disclaimer - I am not alone in not having any idea what the code is doing under the hood... the following is supposition, but ...

It is important to make a distinction between the wavelet compression being used to cram ten lbs of stuff into a 2 lb bag...and the totally separate issue of constructing an image with RGB information for each display pixel from the data matrix of the RAW file, which has recorded a luminance value for each photosite...which photosites are either red-, green-, or blue-filtered.

I would surmise that when rendering files at whatever chosen resolution, the first step is to expand the compressed data back into discrete code values for each photosite, and then to do one of the following:

1. Full de-bayer: create (through de-bayer algorithms) an RGB code value for each display pixel at full spatial resolution.
2. Partial de-bayer: create, through pixel sampling, an RGB code value for each display pixel (someone described this as "pixel picking").

All other things being equal, one would expect a resized image created from a full de-bayer to have finer luminance and chrominance transitions than the same-size image created using the so-called partial de-bayer, but since it is rarely the case that all other things are equal, I would want to render out a few frames and look at them in a critical viewing situation if I had to choose one way or another of getting to 1920x1080 from an image that started larger.

Totally separate from the issue of the creation of the other two colour channels for each display pixel of the full-size image is the issue of spatial resolution as affected by integer re-sizing (quad HD down to HD) versus non-integer re-sizing.

So, to reiterate :

Five different issues:

1. Compression and recovery of the data
2. Different ways of arriving at an RGB code value for each display pixel, whether by sampling or "full-debayer" followed by resizing
3. Spatial resolution differences between non-integer resizing vs integer resizing
4. Significantly different processing and rendering times depending on which choices are made re: 1, 2, & 3 above and in which order (and with which hardware/software combinations).
5. Different qualitative and financial outcomes depending on all of the above.

Comments?

Mark H. Weingartner
LA-based VFX DP/Supervisor

http://schneiderentertainment.com/dirphoto.htm


Hi Tim,

I guess my original thinking, and this is from a non-engineer, is that with 4K HD you scale from 3840x2160 to 1920x1080, instead of 4096 to 2048 to 1920x1080. This came up the other night at the ASC/PGA screening, where David Stump mentioned that scaling from 1920 to 2048 (HD to DCP) can create significant softness.

Anyway, it's not something I can easily test at this point, but if the final deliverable is an HD master in 16x9, then I would think shooting 4K HD could produce a better quality image than 4K 16:9 would.

Anyone here tested this?

Thanks,

Graham Futerfas


>> ...if the final deliverable is an HD master in 16x9, then I would think shooting 4K HD could produce a better quality image than 4K 16:9 would.

That's correct for the common case when rendering time is limited, for the reasons you gave as well as a few others. When you have the luxury of slower raw conversions (high-quality demosaic and downsample), then 4K and 4xHD produce 1920x1080 output of essentially the same quality.

Daniel Browning
Software Engineer
Portland, OR


Graham Futerfas wrote:

>> I guess my original thinking, and this is from a non-engineer, is that with 4K HD you scale from 3840x2160 to 1920x1080, instead of 4096 to 2048 to 1920x1080. I guess this came up the other night at the ASC/PGA screening, where David Stump mentioned that scaling from 1920 to 2048 (HD to DCP) can create a significant softness.

Hi Graham,

I should clarify that my note wasn't about shooting the 3840...that makes perfect sense to me.

My concern was more toward the 2K debayer, and the notion that it would have some increased crispness from being acquired at 4K. It is along the line of what Mark was referring to...

1. You have this Bayer pattern image information...the checkerboard pattern of green and red/blue. Basically this system stores the equivalent pixel count of one channel of a full RGB image at the same resolution. It's very efficient. (Whether or not it's good in any particular situation is superfluous to this conversation.)

2. This information (the matrix of green, red, and blue values) undergoes wavelet compression.

3. I'm surmising that the wavelet compression needs to be decoded, and then the debayer (sometimes called 'demosaic') takes place, constructing the image from the discrete, specific values of each photosite.

So...what I'm unsure about is just how a 2K partial wavelet decode would work. How is the demosaic process changed? If the image isn't completely decompressed, how is the demosaic affected?

Using the 2K decode of the 4K image (actually a 1920 decode of the 3840 image) may be fine for an HD workflow, but I'm just curious what the actual image-mechanics compromise would be when compared to fully decoding and constructing a 4K (3840) image before scaling...

I have some difficulty believing that there isn't -any- compromise (even if it turns out to be an acceptable compromise).

I'm guessing Graeme could illuminate some of this for us....

Tim Kolb
Director/Editor
Neenah WI USA



Hi!

>> 3. I'm surmising that the wavelet compression needs to be decoded, and then the debayer (sometimes called 'demosaic') takes place, constructing the image from the discrete, specific values of each photosite.

There are a few important things to note:

1. As you wrote, we have a kind of compression simply by storing RAW rather than RGB, so only one colour component per pixel. This makes the data rate about three times more efficient in the first place.

2. The resulting "checkerboard" Bayer pattern image "R G1 B G2" is transformed into four colour quadrants. Each quadrant keeps only pixels of either R, G1, B or G2 data, grayscale images. Now you can apply a wavelet compression on each individual colour quadrant to cook it down. Essentially you can use a different compression settings per colour component, so you may result in having more compression on green, while applying less compression on the more critical blue channels (or whatever you figure out to make sense).

Important: I am not saying that this is exactly what RED is doing, because I don't really know. But it's roughly what you can do, what I would choose to do, and pretty much what they did up to and including the Build 15 firmware, as far as I was told by people who have actually decoded such files...
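The quadrant rearrangement in step 2 is easy to picture in code. A minimal NumPy sketch, assuming an RGGB layout (and, per the disclaimer above, only a guess at what REDCODE actually does):

```python
import numpy as np

def split_into_quadrants(bayer):
    """Rearrange an RGGB mosaic into four grayscale colour planes,
    tiled as quadrants, so each plane could be wavelet-compressed
    with its own settings. Layout: R | G1 over G2 | B."""
    r  = bayer[0::2, 0::2]   # red photosites
    g1 = bayer[0::2, 1::2]   # green photosites on red rows
    g2 = bayer[1::2, 0::2]   # green photosites on blue rows
    b  = bayer[1::2, 1::2]   # blue photosites
    return np.block([[r, g1], [g2, b]])

mosaic = np.random.rand(2160, 3840)
tiled = split_into_quadrants(mosaic)
print(tiled.shape)  # (2160, 3840): same pixel count, rearranged
```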

So when decoding a 3840x2160 RAW image, you do a wavelet decode of the four R, G1, B and G2 colour quadrants, each 1920x1080 in size. After fully wavelet-decoding those four channel quadrants, there are roughly three options that come to mind immediately:

a) Reassemble the Bayer RAW image from the uncompressed data and apply a quite computation-intensive demosaic process to it, yielding fully deBayered RGB data at the full 3840x2160 resolution. From there you downscale to your 1920x1080 target using an integer scale, averaging four RGB pixels into each new RGB pixel. This makes use of 33% captured pixels (wavelet compression artefacts aside) and 67% pixels interpolated by the demosaic process (actually "inventing" what was never there).

b) You take the four colour quadrants R, G1, B, G2 and average the G1 and G2 channels to get a "finer green" in a new channel G. Now you take the resulting three channels R, G and B (each 1920x1080 pixels in size) and put them into a new RGB image WITHOUT any further processing. This is pretty fast and makes use of 100% captured data pixels (wavelet compression artefacts aside).

c) You take three of the four colour quadrants, R, G1 and B (each 1920x1080 pixels in size), and put them into a new RGB image WITHOUT any further processing. This is extremely fast and makes use of 100% captured data pixels (wavelet compression artefacts aside).

One can say "Hey, there is a pixel shift between R, G1, B and G2, and the resulting image isn't perfect!" That’s true, but will it get really better by inventing pixel data that has never been there before and scaling down from there?

Further, I have very often seen prisms that didn't have their sensors aligned perfectly. So the pixels didn't really match and were shifted around "somewhat", causing an error of +/- 0.5 pixels per colour component. The resulting images were always processed by completely ignoring that issue, and we loved the results anyway. Today you can even find cameras with misaligned sensors as a "feature", which the vendors use to interpolate missing information and gain more resolution (which isn't always really true...). So from failure to feature is often not such a big step...

We use Cineform RAW over here, and also Iridas SpeedGradeDI, for decoding RAW to RGB, with various demosaic options. Cineform has a cool converter R2CF.exe which can actually transform a RED R3D file into a Cineform RAW file.

Remember: Inside the fully demosaiced RGB image is still the vivid capture RAW data alive, one colour component per RGB pixel has actually been captured. So Cineform takes that particular "true" information and puts it back into its RAW wavelet compression, which is pretty much the same what I described here: Four quadrants, each compressed as wavelet, and so on.

Cineform RAW decoding optionally allows decoding only the wavelet and NOT applying the demosaic, so you end up with a dramatically faster decode at 1/4 the resolution with very good quality. Quarter resolution here is full HD. Many systems do this, especially in the high-speed and industrial imaging sectors.
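The "decode only the wavelet, skip the rest" shortcut works because a wavelet decoder can simply stop before reconstructing the detail bands: the low-pass band of one transform level is already a half-size image. A one-level Haar sketch of that principle (CineForm and REDCODE use different filters; this only illustrates the idea):

```python
import numpy as np

def haar_lowpass(img):
    """Keep only the LL (low-pass) band of one 2-D Haar transform level.
    A decoder that stops here gets a half-resolution image almost for
    free, since the detail bands are never reconstructed."""
    lo_rows = (img[:, 0::2] + img[:, 1::2]) / 2.0       # low-pass along rows
    return (lo_rows[0::2, :] + lo_rows[1::2, :]) / 2.0  # then along columns

frame = np.random.rand(2160, 3840)
preview = haar_lowpass(frame)
print(preview.shape)  # (1080, 1920)
```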

The same works in realtime on the GPU when using SpeedGradeDI with RAW data. A long while ago I pointed them towards doing the deBayer on the GPU, and they heard me; our reasoning was that at the time the transfer speed to the GPU was too slow for full 4K RGB, but 1/3 of the data worked, and the process is fast enough on the GPU for realtime. I have no idea whether it has been done yet, but if you could store those RED Bayer pixel data in Adobe DNG (the same way Cineform does), you could use SpeedGradeDI even faster, bypassing any further compression/decompression steps in post. The data rate is still 1/3 of the source resolution, though 1.33x that of the quarter-sized HD RGB target...

Btw, in some tests we found footage sometimes looked sharper in SpeedGradeDI than in RED Cine, which shows their demosaic algorithms differ. Which is better is often a matter of taste; sometimes we and our customers prefer the SpeedGradeDI results for their crisper look, depending on the footage. It's unfortunate that RED isn't opening a door for other vendors to bypass its RAW-to-RGB conversion and output RAW directly. But this has been talked about many times on REDUSER.

So, at the end of the day, there are much simpler reasons why the conversion from RAW 4K HD to RGB HD is fast and still a valid, good result, as explained above.

Cheers,


Best regards,

Axel Mertes
Managing Director/CTO
Tel: +49 69 978837-20
Fax: +49 69 978837-22

Magna Mana Production
Bildbearbeitung GmbH
Alexanderstraße 65
60489 Frankfurt am Main
Germany
Tel: +49 69 978837-0
Fax: +49 69 978837-34


Hi!

>> ...I guess this came up the other night at the ASC/PGA screening, where David Stump mentioned that scaling from 1920 to 2048 (HD to DCP) can create a significant softness.

For any scaling the Nyquist theorem applies: you need at least two source samples per resulting sample to capture the information correctly.

So in other words:

Scaling up is always bad (you INCREASE the number of resulting samples beyond the source samples). What was not there before isn't there afterwards... And when the scaling isn't by an integer multiple, there are sticky "waves" of sharp vs. unsharp areas in the resulting image.

Further, when downscaling from 2048 to 1920 you again don't have enough samples, again creating such waves (at other frequencies) that affect the sharpness of the image and sacrifice at HD what you had at 2K. It's a big mistake to assume that shooting 2K is always better than HD. If you will end up at HD at some point, shooting HD directly is always better... Or you need to do a pan & scan of your 2K and crop away some pixels.

You can easily make these sharpness/unsharpness waves visible by taking a (clearly artificial, hardcore test pattern) black & white pixel-by-pixel checkerboard image at your source resolution and scaling it up or down to your desired target resolution with your chosen algorithm.

Once you see that, and the grey smear that results, you know what Nyquist is all about.
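The checkerboard experiment is easy to reproduce: a one-pixel checkerboard puts all its energy exactly at the Nyquist limit, so any resampler has to destroy it, and a non-integer ratio destroys it unevenly. A minimal 1-D version (simple averaging and linear interpolation stand in for real scaler filters):

```python
import numpy as np

def checkerboard_row(w):
    """One row of a 1-pixel black/white checkerboard: pure Nyquist detail."""
    return (np.arange(w) % 2).astype(float)

row = checkerboard_row(2048)

# Integer 2:1 downscale by averaging pairs: every output pixel sees one
# black and one white sample, so the result is flat (uniform) grey.
halved = row.reshape(-1, 2).mean(axis=1)
print(halved.min(), halved.max())  # 0.5 0.5

# Non-integer 2048 -> 1920 with linear interpolation: output pixels drift
# in and out of phase with the source grid, so contrast comes and goes
# across the frame, producing the "waves" of sharp and unsharp areas.
scaled = np.interp(np.linspace(0, 2047, 1920), np.arange(2048), row)
print(round(scaled.min(), 3), round(scaled.max(), 3))  # 0.0 1.0
```

Run it at other ratios (say 2048 to 1080) and the beat frequency of the waves changes, which is exactly the effect described above.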

Cheers,

Best regards,

Axel Mertes
Managing Director/CTO
Tel: +49 69 978837-20
Fax: +49 69 978837-22

Magna Mana Production
Bildbearbeitung GmbH
Alexanderstraße 65
60489 Frankfurt am Main
Germany
Tel: +49 69 978837-0
Fax: +49 69 978837-34


>> (When) the scaling isn't by an integer multiple, there are sticky "waves" of sharp vs. unsharp areas in the resulting image.

Well put. It's always better to shoot at an exact multiple of your release resolution (even if that exact multiple is "1").

Failing that, the next best result is to throw away lines and/or pixels. Last choice should be scaling the image "somewhat".

Bob Kertesz
BlueScreen LLC
Hollywood, California
The Ultimate in ULTIMATTE® compositing.©
For details, visit www.bluescreen.com


>> ...but the rental house providing the gear doesn't send out Build 21 yet as they don't consider it stable. (This seems to be a trend with every RED software build release: the rental houses hold it back until they're fairly sure there's a version that works.)

In the middle of a shoot with 3 RED Ones on Build 21, 5 more weeks to go.

So far everything is fine and stable, same as my previous project on Build 17. BTW, from the first time I used the RED One I have depended on my meters a lot; I think it's the only way to go with multiple cameras on set.

Regards
Chan Chi Ying
DP HK



>> It looks like I'll be shooting the interior of a large theatre on Friday and I don't have an adequate lighting budget, so I may have to forego any filtration.

Try 320 under tungsten without filtration; face the noise and fight it with lighting. Confirm the look on set, as there's no way to lift the blacks in post.

Happy shooting.

Regards


Chan Chi Ying
DP HK


>> Try 320 under tungsten without filtration; face the noise and fight it with lighting. Confirm the look on set, as there's no way to lift the blacks in post.

I rated it at 160 and shot wide open, at T1.3 on Super Speeds. It turned out beautifully. I used to be an "expose to the right" RED user, but at EI 160 I'm becoming totally confident in following my meter.

I still run into noise, mostly because my clients aren't allowing me to overfill and bring the levels down in post. They really want to treat it as a "what you see is what you get camera." It's a little frustrating, but I'll live.

-----------------------

Art Adams | Director of Photography
4 1 5 . 7 6 0 . 5 1 6 7

ICG, SOC, NWU


Jeff,

When you shoot under tungsten conditions without correction on the red, do you find it hard to get the skin tones exactly right in colour correction?

Is there a certain skin quality in the subject that would motivate you to correct with a filter or on the lamp ?

Mark Eberle
Director of Photography
www.markeberle.com
www.Cineflight.com
818-448-5367 cell


>> I still run into noise, mostly because my clients aren't allowing me to overfill and bring the levels down in post. They really want to treat it [RED ONE] as a "what you see is what you get camera." It's a little frustrating, but I'll live.

I may wind up shooting a RED ONE feature for a WYSIWYG director. There's a lot of greenscreen (probably 50% of the shots), and the director has been known to say, "if you still see detail in the faces, it's not dark enough." I'm pretty much convinced that the only way this won't come to tears is if I supply the director with a Cine-tal monitor or a feed through a DAVIO box, so I can light for and capture a clean, properly-exposed, post-friendly image, while showing the director some horribly dark, crushed image just the way he likes it.

Of course this requires having a DAVIO or Cine-tal monitor in the budget. I'm sure production will let me get one, seeing as how we're saving so much money by shooting on the REDs. <grin> Failing that, I'll set up the monitors with stretched contrast and crushed blacks (e.g., from 30% on down mapped to black!), and hope for the best.
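As a sketch, the monitor-only "director's look" Adam describes could be expressed as a simple 1-D viewing LUT. The 30% toe is Adam's figure; the linear re-stretch above it is my assumption, and values here are normalized 0.0-1.0:

```python
# Viewing LUT for the monitor feed only: everything from the 30% level
# on down goes to black, and the remaining range is stretched back to
# full scale. The recorded RAW image is untouched; only the monitor
# shows the crushed look.

TOE = 0.30  # everything at or below this normalized level maps to black

def crush_lut(v):
    """1-D monitor LUT: crush blacks below TOE, stretch the rest to 0-1."""
    if v <= TOE:
        return 0.0
    return (v - TOE) / (1.0 - TOE)  # re-stretch (TOE, 1.0] to (0.0, 1.0]

# A 25% shadow level disappears entirely on the director's monitor,
# while the recorded image still holds detail there.
print(crush_lut(0.25))  # -> 0.0
print(crush_lut(1.0))   # -> 1.0
```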

Adam Wilt
filmmaker, Meets The Eye LLC, San Carlos CA
tech writer, provideocoalition.com, Mountain View CA
USA


Adam Wilt :

>> I'm pretty much convinced that the only way this won't come to tears is if I supply the director with a Cine-tal monitor or a feed through a DAVIO box, so I can light for and capture a clean, properly-exposed, post-friendly image, while showing the director some horribly dark, crushed image just the way he likes it.

Motion RAW means never having to say you're sorry...

Tim Kolb
Director/Editor/some other stuff
Neenah WI USA


Can you go into the camera settings and tweak the image so the output is really dark? I don't remember what settings are in there, but there's got to be something useful.

...or rate the camera at 80 and use your meter set to something more realistic. The picture should look plenty dark.

-----------------------

Art Adams | Director of Photography
4 1 5 . 7 6 0 . 5 1 6 7

ICG, SOC, NWU


Hi Art,

The RED's look can be adjusted in the menus, and this includes reducing 'Exposure'. These settings will carry over to the dailies unless you re-write the metadata. They're under Video Look and Colour.

So yes, you could make the look darker while viewing RedSpace, but the RAW files are unaffected. As I mentioned earlier, I like to make RedSpace's exposure more like RAW, and I reduce the saturation a notch or two. This carries over to the dailies.

Best,

Graham Futerfas
Director of Photography
Los Angeles, CA


Mark,

No, flesh tones seem fantastic.

I do notice some odd green build-up when shooting under fluorescents. I always colour-meter the environment and set the green level on our lamps to correspond [minus 1/4, as always: meaning that if the meter says you need full plus-green to compensate, we add 3/4 instead, since matching exactly always seems to be too much], then white balance the camera.

Again, the HD-SDI downconverted image is merely a reference. When in doubt, burn a few seconds of the image onto a CF card and import it directly into RedAlert! to see what you are really capturing, especially when you begin to see extra green or magenta build-up in flesh... 99% of the time it's merely an artefact of the 720p downconverted 'video assist' output.

Yeah, it sounds like a bit of a drag, but it is VERY seldom that I need to do this. Shoot a bunch with the camera, get familiar with it in all sorts of situations, and you get comfortable with what you can and can't do, just like getting familiar with a certain film stock.

Honestly, I really like shooting with the RED, it is a blast.

Off to my first 35mm shoot in 6 months. I had to oil my Moviecam and shoot a test to make sure everything was OK... 6 months since I had really run it.

cheers,

Jeff Barklage, s.o.c.
www.barklage.com
USA based DP


>> Off to my first 35mm shoot in 6 months. I had to oil my Moviecam and shoot a test to make sure everything was OK... 6 months since I had really run it.

LOL... my Arri III now makes a great cup of cappuccino!! Haven't used it in a while, but my set of Zeiss Standards seems to be all the rage with the RED users at the rental house in LA!!

Thanks for the info Jeff ....happy shooting and happy holidays.

cheers

Mark Eberle
Director of Photography