I don't know if anyone else was at the Gamma & Density seminar a few weeks ago, but I've had a chance to play around with the software, and here's a bit about why I find it useful.
If you have a tool like 3cP, you can see your footage on-set on calibrated monitors, see what you're actually able to get out of the image, and then send those presets to post. It takes a lot of the guesswork out of exposing on the RED camera.

With the RED, the best way to make footage look like video instead of film is to expose it improperly. A lot of people think that because it's RAW, you can just fix any exposure problem in post. But the truth is that you can never change the amount of light that hit the sensor, and that's the only thing that determines correct exposure in the field. If the exposure isn't correct, you pay a penalty to fix it, usually in added noise (or sometimes in a lot of money spent getting it fixed).

One problem with getting correct exposure in camera is that you can't trust your monitors the way you can with most high-end HD cameras. This is difficult to explain to some people who come from the F900R or Panasonic P2HD world. The HDSDI outputs of the RED in both REC709 gamma and REDspace settings don't quite look right, which means you can never quite trust your colors as you see them.
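To put a number on that noise penalty, here's a toy sketch assuming shot-noise-limited capture; the photon counts are made up and nothing here is RED-specific:

    import numpy as np

    # Toy model: a patch that should average 1000 photons per pixel,
    # captured at correct exposure vs. two stops under and pushed back up.
    rng = np.random.default_rng(0)
    correct = rng.poisson(1000.0, 100_000).astype(float)
    pushed = rng.poisson(250.0, 100_000).astype(float) * 4.0  # 2-stop push in post

    snr = lambda x: x.mean() / x.std()
    print(f"SNR, exposed correctly: {snr(correct):.1f}")  # ~31.6
    print(f"SNR, pushed two stops:  {snr(pushed):.1f}")   # ~15.8 -- twice the noise

The push brings the levels back, but the signal-to-noise ratio stays where the light left it.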
Even if you take the footage into REDalert, the color look you created in camera and saw on the onboard monitors will most likely look different on the computer screen.
As far as exposure goes, your best option is to use the in-camera exposure tools (the spot meter, stop lights, or false color) in conjunction with a standard light meter. However, those meters read the data according to your output gamma, which can throw you off; you really have to check the RAW view all the time to get the most accurate information. For this reason, I treat the video outputs of the RED as if they were a film video tap. They give you a reference for framing and an example of what your footage can look like, since you're only looking at part of the data you're recording. That's part of the blessing and curse of shooting RAW.

Tim Sutherland
Red One Tech - Camera Assistant - DIT - Editor
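For a concrete picture of that gamma dependence, here is a sketch mapping the same linear sensor value through two viewing curves; the Rec.709 OETF is the standard one, while the log curve is an invented two-decade example, not REDspace or REDlog:

    import numpy as np

    # The same linear sensor value lands at very different levels
    # depending on which monitoring gamma you read it through.
    def rec709(L):
        # Rec.709 OETF (ITU-R BT.709)
        return np.where(L < 0.018, 4.5 * L, 1.099 * L ** 0.45 - 0.099)

    def toy_log(L):
        # made-up 2-decade log curve, purely illustrative
        return (np.log10(L) + 2.0) / 2.0

    mid_grey = 0.18  # fraction of sensor full scale
    print(f"Rec709 view:  {float(rec709(mid_grey)) * 100:.0f} IRE")  # ~41 IRE
    print(f"toy log view: {toy_log(mid_grey) * 100:.0f} IRE")        # ~63 IRE

Same photons, a twenty-plus-point spread on the meter, which is why the RAW view is the only reading you can fully trust.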
Tim Sutherland wrote:
> That's part of the blessing and curse of shooting RAW.

But it needn't be. That's what kills me about the RED One. The workflow has been rendered so needlessly mysterious. It's a camera that records a 12-bit linear image at the sensor. The hardest thing to do on set is simply to preview everything the sensor's recording. The second hardest thing to do is decode the footage in post in a way that everyone can agree preserves all the sensor had to offer, and drops into existing post pipelines with ease. Heaven forfend that you should want to do both, and have them match, so that your on-set preview LUTs might be of some use in post.

The cats at RED have given us a zillion ways to decode our footage. All that's missing is one good way.

My favorite thing about the Genesis is that there are no fricking gamma or matrix settings. John Galt twiddled the knobs so you don't have to. Adjusting image settings on a digital cinema camera is like arranging furniture to look nice in the moving van.

Stu Maschwitz
Stu Maschwitz wrote:
>> Adjusting image settings on a digital cinema camera is like arranging furniture to look nice in the moving van.

Best line of the year (so far).

Jeff Kreines
Stu wrote:
>> Heaven forfend that you should want to do both, and have them match, so that your on-set preview LUTs might be of some use in post.

Hopefully this won't come off as a bad infomercial. I think the nice thing about the SI-2K in this area (which is also a digital cinema camera) is that the embedded IRIDAS SpeedGrade engine goes a long way toward preventing that issue. You load a frame from the camera, grade it either in-camera with the embedded SpeedGrade interface or in SpeedGrade OnSet, and then load the resulting 3D LUT back into the camera. It's quite WYSIWYG. The CineForm codec embeds the LUT from the camera, so when you're working in FCP or PPro, you still have WYSIWYG. And finally, at the DI stage, when you load the footage into SpeedGrade XR, you can simply load up the LUTs you used in-camera and it remains WYSIWYG, plus you can now tweak the original LUTs and grade from the RAW.

There are "knobs" like different linearization LUTs, matrices, etc. that you can tweak if you really want to get low-level, but generally you don't have to touch those if you don't want to: simply use the defaults that ship with the camera as a starting point when customizing a LUT, and the matrices and other pre-calibration settings are already there. I think the biggest problem in this area comes from those who want to start completely from "scratch" but don't quite understand how the pipeline works (there are materials on the SI website to help with understanding the pipeline, for those who want to go that route). True flexibility, though, comes from the fact that overall it's pretty hard to mess up: even if you don't include the matrix, pre-calibration LUTs, etc., you can still get very good results. For instance, my understanding after talking with Martin at MPC was that they graded straight from the LOG-based RAW data; that is, they did not grade with a matrix baked into the footage, and simply graded the white-balanced RAW material to match the film-shot material.

>> Adjusting image settings on a digital cinema camera is like arranging furniture to look nice in the moving van.

So no, I don't think it has to be that hard; it really shouldn't be that hard for general use. I think what's "hard" is determining the "best" way to do something. That's the question that comes up to me all the time, and unfortunately, just as with any camera, asking for the "best-quality" workflow is always going to be a loaded question.

Thanks,
Jason Rodriguez
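For reference, applying a 3D LUT to a frame amounts to a lattice lookup; this is a minimal sketch with an invented warming LUT and nearest-neighbor sampling, whereas real graders like SpeedGrade interpolate between lattice points (typically trilinearly):

    import numpy as np

    # A 3D LUT is a lattice of output colors indexed by input RGB.
    # Build a hypothetical 17x17x17 LUT that warms the image slightly.
    N = 17
    grid = np.linspace(0.0, 1.0, N)
    r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
    lut = np.stack([np.clip(r * 1.05, 0, 1), g, b * 0.95], axis=-1)  # (N,N,N,3)

    def apply_lut(frame, lut):
        # Nearest-neighbor lookup; production tools interpolate instead.
        idx = np.round(frame * (lut.shape[0] - 1)).astype(int)
        return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

    frame = np.random.default_rng(1).random((4, 4, 3))  # stand-in for a frame
    graded = apply_lut(frame, lut)
    print(graded.shape)  # (4, 4, 3) -- same frame, new colors

Because the whole look lives in one small table, the identical transform can ride along from camera to NLE to DI, which is the round trip described above.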
Stu Maschwitz wrote:
> It's a camera that records a 12-bit linear image at the sensor.

Which is what every DSLR on the planet does, more or less (sometimes 14-bit).
> The hardest thing to do on set is simply to preview everything the sensor's recording.
Which in a DSLR is easy: you have a JPEG to look at, which is also embedded in the RAW file, plus a couple of picture control settings (~ = video tap, LUT). With practice and an eye for the histogram, you have a very good idea of what you'll have to work with in serious post processing.

Sam Wells
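As a sketch of that histogram check, here's one way to flag how much of an 8-bit preview is piled against the right edge; the synthetic "preview" below stands in for a camera JPEG:

    import numpy as np

    def highlight_report(img8, bins_from_top=4):
        # Fraction of pixels in the top few histogram bins of an 8-bit
        # preview -- a rough stand-in for eyeballing the right edge.
        hist, _ = np.histogram(img8, bins=256, range=(0, 256))
        return hist[-bins_from_top:].sum() / img8.size

    # Stand-in for a camera JPEG: mid-toned scene with a blown window.
    rng = np.random.default_rng(2)
    preview = rng.normal(118, 30, (480, 640)).clip(0, 255).astype(np.uint8)
    preview[:100, :150] = 255  # the "blown window"
    print(f"{highlight_report(preview):.1%} of pixels at/near clip")  # ~4.9%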
What?
Sam, I am sorry, but I agree with the original poster: how in the hell can you look at 12- or 14-bit data in an 8-bit display space and properly understand what is being captured? Certainly not on that prosumer-grade camera. The maximum viewable bit depth on most displays is limited to 8 bits, with very few now actually showing true 10 bits properly. Even Photoshop has to do some shenanigans when working with HDR imagery to capture the subtle details at more than 12 bits.
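The arithmetic of that mismatch is easy to demonstrate; a sketch, with nothing vendor-specific:

    import numpy as np

    # A full 12-bit ramp squeezed into an 8-bit display space:
    # sixteen distinct sensor values collapse into each displayed level.
    ramp12 = np.arange(4096)                 # every 12-bit code value
    shown8 = (ramp12 >> 4).astype(np.uint8)  # naive truncation to 8 bits

    print(len(np.unique(ramp12)))  # 4096 distinct captured values
    print(len(np.unique(shown8)))  # 256 distinct displayed values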
Every piece of hardware we use has some sort of profile attached to allow it to reproduce or portray color, and all of them modify the viewing space to achieve what its designer planned for that use. If you are planning on looking at RAW content on a REC 709 monitor, it is incumbent on you to take that into consideration and compensate for the "auto" setting on your display.
Some display systems (like Cine-tal) allow you to use other LUTs, or to turn off or bypass any "setup" completely, so that the user can properly determine what is being viewed (albeit in a reduced 8-bit space, but by far the most accurate conversions I work with).

Gary Adcock

Gary Adcock wrote:
> Sam, I am sorry, but I agree with the original poster: how in the hell can you look at 12- or 14-bit data in an 8-bit display space and properly understand what is being captured?
Because you don't need to see the gradations to shoot, you need to see the range.
I can shoot film with a light meter, and even with 16+ stops of dynamic range I can assess what is on the negative with a spot meter and a 10-bit Zone System! In any case, you've got the histogram, which is a slick spot meter. (And for the "that's only the histogram of an 8-bit file" crowd: if you can see half a stop further into the far-right highlight and do something about it, you can change a really crappy highlight into a sort-of-crappy one. Wow.) You don't need videoville to shoot digital stills and make a Lightjet print with all the gradations we know and love. Don't get me wrong, I'm not opposed to more subtly representative files, nor to portable displays with more range (what will OLED do?).

Sam Wells

I'm working on an in-camera LUT for REC 709 on the RED that gives you waveform-accurate IRE numbers that correspond to the RAW image. That way you can monitor on set fairly accurately. Then when you do get to post, the LUT still holds up and looks the same, which surprised me.
That's one of the benefits of G&D's 3cP. They have their chart, which you can shoot and then compare, in their monitor-calibrated software on set, against the digital version. You have all the same tools (WFM, vectorscope, etc.) to make sure you've got what you think you have. Then you send a 3D LUT, ASC CDL, Apple Color preset, or one of a bunch of other presets to your post facility. That way you can see on set what you're going to see in post. And with the camera LUT I'm working on, you can see the same image on camera, in 3cP, and in post, and not have to flip back and forth between RAW and REC709 or REDspace so often.

Tim Sutherland
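The ASC CDL mentioned above is simple enough to sketch: per channel, out = (in * slope + offset) ^ power, with one saturation value applied afterward against Rec.709 luma weights. A minimal version, not 3cP's implementation, with made-up grade values:

    import numpy as np

    def apply_cdl(rgb, slope, offset, power, sat=1.0):
        # Per-channel slope/offset/power, then saturation about Rec.709 luma.
        out = np.clip(rgb * slope + offset, 0.0, None) ** power
        luma = out @ np.array([0.2126, 0.7152, 0.0722])
        return luma[..., None] + sat * (out - luma[..., None])

    pixel = np.array([[0.40, 0.30, 0.20]])  # one RGB sample
    graded = apply_cdl(pixel,
                       slope=np.array([1.1, 1.0, 0.9]),
                       offset=np.array([0.02, 0.0, -0.01]),
                       power=np.array([0.95, 1.0, 1.05]),
                       sat=0.9)
    print(graded.round(3))

Ten numbers describe the whole correction, which is why a CDL travels so easily from set to post facility.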
Stu Maschwitz wrote:
>> Adjusting image settings on a digital cinema camera is like arranging furniture to look nice in the moving van.

This is one of the smartest things I've heard someone say in years!

I must say this conversation strikes me as a bit funny... Years ago, when we first started discussing RAW formats on CML, one of the big advantages was that RAW would allow us to get away from scrutinizing scopes on set. Now people seem to want waveforms and vectorscopes on set with RED so they can treat it as though it's an HDCAM...
Have we gotten so used to WYSIWYG monitoring that we can no longer live without it? Or are people only concerned with this so that they can send "proper" reference files to post?

For me it's the difference between the repeatability of film, which you can rely on, and the relative non-repeatability of HD. If you shoot with a RED all the time and follow the footage through post, I think you can get a pretty good feel for what's going on and how it responds. But if I work with a Varicam or an F900R, they're all set up slightly differently, and they don't respond to my light meter the way film does.
In film I could use my spot meter and get my dailies damned close to how I thought they should look when I shot them. But HD... it just doesn't work that way, for some reason. So that's why I want all those extra tools. If something in the shot clips a color channel, I want to spend a minimal amount of time looking at a waveform monitor to figure out what it is, so I can fix it or decide to let it go. On the RED I have to wave my hand in front of the lens (an Adam Wilt trick) until the channel stops clipping, and then I can figure out what's causing the problem. On film I don't worry about it.
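For comparison, finding the clipped channel in a frame grab takes only a few lines of code; a sketch, with an invented threshold and a synthetic frame:

    import numpy as np

    def clipped_channels(frame8, thresh=254):
        # Fraction of each channel sitting at or above clip in an 8-bit grab.
        return {ch: float((frame8[..., i] >= thresh).mean())
                for i, ch in enumerate("RGB")}

    # Stand-in frame: a red practical blowing out only the R channel.
    frame = np.zeros((480, 640, 3), dtype=np.uint8) + 100
    frame[200:240, 300:340, 0] = 255
    print(clipped_channels(frame))  # {'R': 0.0052..., 'G': 0.0, 'B': 0.0}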
RAW is, in theory, like shooting film. But where there used to be one stock, or three stocks, or seven stocks, there are now TONS of stocks to choose from. Some of the choices are unintentional and specific to that camera. And on top of that, how many truly "raw" cameras are there? The RED certainly isn't raw, as there's still some processing going on in "raw" mode. Once you knew a film stock, you KNEW that film stock; there'd be minor changes from batch to batch, but it was very forgiving and very consistent. HD is frequently neither. Film was much easier to shoot than HD has ever been. And that's why I like the scopes: I want to play things right on the edge, but since I can't count on my meter anymore... I need to count on something!

The one saving grace is that as color bit depths become greater, it's becoming easier to light for HD, because a lot of the separation in film that allowed us to light a scene with one bulb seems to have to do with both latitude and the depth of the recorded color. Cameras like the F23 and F35 finally seem to be breaking that barrier. With the F900 and earlier cameras I found I had to edge-light and backlight more than I would have liked, to keep the image from going to mud. Layers just didn't separate as well. The newer cameras... not so much of a problem anymore.

Art Adams

Art Adams wrote:
>>"Raw is, in theory, like shooting film. But where there used to be one stock, or three stocks, or >>seven stocks, there's now TONS of stocks to choose from." ....and you're still shooting positive of course. As you point out in your S Log article, shooting video as positive-linear eats up about half the data in the first stop...in highlights where our eyes aren't looking for detail anyway.Copyright © CML. All rights reserved.