While discussing Red workflows it's worth remembering that unless you are working from the full 4K r3d file the image decoding is a compromise, with no debayering taking place, and sub-pixel selection being used to generate the lower resolution viewing image.
You only get maximum quality when working from the full 4K r3d file, not one of the sub-resolutions. One of the problems is that a lot of the systems that work directly with the raw r3d file use this approach and then output the result as the final deliverable, without going back to the full 4K data to perform a full-quality debayer and decode at whatever resolution the final deliverable requires. If the r3d file is decoded from the full 4K data to a full-resolution 4K (or lower) DPX image, and a LUT is used to view and grade through, there is no compromise at all, assuming the r3d decoding has been done with the in-camera metadata turned off to prevent image clipping.

Steve Shaw
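To illustrate the two decode paths Steve describes, here is a rough numpy sketch: a "sub-pixel pick" that builds a half-resolution image straight from the Bayer mosaic with no debayer, versus a full demosaic followed by a filtered downsample. The RGGB layout and the simple bilinear demosaic are assumptions made for the sketch, not RED's actual algorithm.

    import numpy as np
    from scipy.ndimage import convolve

    def subpixel_pick_half(mosaic):
        # Fast 'proxy' decode: take the one R, one B and average of the
        # two Gs in each 2x2 Bayer cell (RGGB assumed). No debayering.
        r  = mosaic[0::2, 0::2]
        g1 = mosaic[0::2, 1::2]
        g2 = mosaic[1::2, 0::2]
        b  = mosaic[1::2, 1::2]
        return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)  # half-res RGB

    def bilinear_demosaic(mosaic):
        # Full-res demosaic: scatter each colour's samples onto its own
        # plane, then bilinearly interpolate the gaps via convolution.
        h, w = mosaic.shape
        planes = np.zeros((3, h, w), dtype=np.float64)
        planes[0, 0::2, 0::2] = mosaic[0::2, 0::2]   # R
        planes[1, 0::2, 1::2] = mosaic[0::2, 1::2]   # G
        planes[1, 1::2, 0::2] = mosaic[1::2, 0::2]   # G
        planes[2, 1::2, 1::2] = mosaic[1::2, 1::2]   # B
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
        k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
        for i, k in zip((0, 1, 2), (k_rb, k_g, k_rb)):
            planes[i] = convolve(planes[i], k, mode='mirror')
        return np.stack(planes, axis=-1)  # full-res RGB

    def full_quality_half(mosaic):
        # Full-quality path: demosaic to full resolution first, then
        # downsample with a 2x2 box filter (a real pipeline would use a
        # better filter; the ordering of the two steps is the point).
        rgb = bilinear_demosaic(mosaic)
        return (rgb[0::2, 0::2] + rgb[0::2, 1::2] +
                rgb[1::2, 0::2] + rgb[1::2, 1::2]) / 4.0

    mosaic = np.random.rand(2160, 4096)       # stand-in 4K Bayer data
    proxy = subpixel_pick_half(mosaic)        # real-time friendly, compromised
    final = full_quality_half(mosaic)         # slower, full-quality 2K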
Steve Shaw wrote:
>>While discussing Red workflows it's worth remembering that unless you are working from the full 4K
>>r3d file the image decoding is a compromise, with no debayering taking place, and sub-pixel selection
>>being used to generate the lower resolution viewing image.

Steve, my understanding is that what you say is true if you work from one of the rapidly-extracted proxy resolutions, but that applications like RED Cine make 2K downconverts by processing a full 4K image and then downsampling. Are you saying that many SDK-based systems are going for the fast, lower-quality decode rather than the full enchilada? I think this is worth clarifying, because making a good-quality (i.e. subsampled from the full 4K) 2K or 3K DPX from an R3D should be an ideal way of working with RED footage.

Stu Maschwitz
Stu Maschwitz wrote:
>>Are you saying that many SDK-based systems are going for the fast, lower-quality decode rather than
>>the full enchilada?

I think that all of the SDK-based solutions I've seen allow you to do either. For generating stuff like editorial media, people are often willing to make a trade-off favouring speed rather than quality. When generating media for the online, people tend to go for the most full-on debayer to full res and then downsample.

Disclaimer: I work for FilmLight on Baselight, a system utilising the R3D SDK.

Martin Tlaskal

Stu Maschwitz wrote:
>>Are you saying that many SDK-based systems are going for the fast, lower-quality decode rather than
>>the full enchilada? I think this is worth clarifying because making a good-quality (i.e. subsampled from
>>the full 4K) 2K or 3K DPX from an R3D should be an ideal way of working with RED footage.
I cannot speak for other programs, but the downconverts in R3D Data Manager give you both options for getting to 2K from 4K material. You can get the faster, lower-quality downconvert, or you can get the full 4K sampled down to 2K. With both options you end up with the same 2K frame size; the difference between the two is quality. It's a simple option in the program ("half high" or "half normal"). With the other applications I have used I see the same options, the full 4K sampled to 2K and the quick 2K, and I think that's fairly common amongst most of the standard applications. There are some free-as-in-beer decompressors just released that don't seem to have those same options, but you wouldn't be using those for your final grade anyway.

Conrad Hunziker III
Stu Maschwitz wrote:
>>.... if you work from one of the rapidly-extracted proxy resolutions, but that applications like RED Cine
>>make 2K downconverts by processing a full 4K image and then downsampling....

If I remember correctly, this thread began by talking about shooting with the Red One at 120fps and how it only uses 2K for that. I tested it that way too, and in using only 2K it uses a much smaller portion of the sensor, only 2K worth. As noted in the original post, they were able to cover the image size on the sensor using 16mm lenses. That being said, where does a 2K downconvert come into play here? There is no 4K to begin with; it originates as a 2K image. Is it me, or did this discussion get subverted onto a very different path?

Roberto "probably me" Schaefer, ASC
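As a back-of-the-envelope check on Roberto's point about lens coverage (the sensor figures below are assumed from RED's published Mysterium specs, so treat them as approximate):

    # 2K at 120fps is a windowed crop: same photosite pitch, fewer photosites.
    FULL_4K_WIDTH_MM = 24.4                 # assumed Mysterium 4K active width
    pitch_mm = FULL_4K_WIDTH_MM / 4096      # ~0.006 mm per photosite
    window_2k_mm = 2048 * pitch_mm          # ~12.2 mm active width in 2K mode

    SUPER16_GATE_MM = 12.52                 # Super 16 camera aperture width
    print(window_2k_mm, SUPER16_GATE_MM)    # ~12.2 vs 12.52 -- close enough
    # ...which is why 16mm-format lenses can cover the 2K window.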
To reiterate a bit more specifically: any system providing a real-time decode of the raw r3d files to any resolution (2K or below) will be using one of the virtual proxy resolution versions, not doing a full debayer and decode from the full-quality 4K file.

To get a full debayer and decode is a non-real-time process on any system.
So, if a system provides a workflow with real-time operation from the raw r3d data, it will be working from these proxy images and doing a sub-pixel pick without any debayering. This can be seen from the fact that no system provides 4K proxies, as it is not possible to sub-pixel pick a 4K proxy image from a 4K bayer image. However, it should be possible to perform a later non-real-time render pass to get full-quality final images at whatever resolution is required. But if this final non-real-time render pass is not done, the image will be sub-quality. And if any creative work has been done, such as sky replacements or selective correction using keying, I'm not sure the end result will be consistent with the work performed via the proxy images.

Again, I much prefer to perform a full debayer and decode from the full 4K file to my desired working resolution before starting any online work. So no later surprises! I have also never found a need to use different debayer techniques for different shots. I may later 'treat' an image to improve the look - soft focus for a leading lady close-up - but this is better performed creatively and interactively with the image rather than via a lesser-quality debayer.

I hope this helps.

Steve Shaw - Via PDA
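The arithmetic behind Steve's point: a sub-pixel pick yields exactly one RGB pixel per 2x2 Bayer cell, so half resolution in each axis is the hard ceiling (the photosite counts below are assumed for illustration):

    # One R + two G + one B per 2x2 cell -> one picked RGB pixel per cell.
    bayer_w, bayer_h = 4096, 2160               # photosites, one colour each
    max_proxy = (bayer_w // 2, bayer_h // 2)    # (2048, 1080): 2K is the ceiling
    # A '4K proxy' would need interpolated colour at every photosite,
    # i.e. a real debayer -- which is exactly the non-real-time step.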
>>To get a full debayer and decode is a non-real-time process on any system.

Keep in mind this is because with the RED SDK one cannot get access to the actual RAW data, and therefore the data output by the SDK is pre-demosaiced in software. This has some definite advantages, especially in the area of quality control (i.e., the software developer only has access to an image demosaicing method certified by RED, and not one that is self-developed, making sure that the end customer is seeing exactly the image RED expects them to see, and that RED does not have to take responsibility for third-party methods), but the disadvantage is that users are not able to leverage other third-party manufacturers' methods to speed up demosaicing to real time. One such case among others is IRIDAS' GPU-based demosaicing implementation, which has the ability to demosaic 4K RAW data at up to 48fps and then do a proper downsample to a full-resolution SDI output in real time on an Nvidia QuadroFX 5600 SDI. If I'm not mistaken, I believe these benchmarks were run with Dalsa-acquired footage, so there would be some overhead with RED RAW footage due to the decompression stage required to access REDCODE-based material. But still, especially with a fast Core i7-based machine, I think the potential for real-time 4K RAW demosaicing would be there if third-party solutions could be fully leveraged.

Thanks,
Jason Rodriguez
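For a sense of the throughput Jason is describing (the frame dimensions are assumed at 4096x2160 for the sake of the arithmetic):

    frame_pixels = 4096 * 2160              # ~8.8 Mpixels per 4K frame
    fps = 48
    demosaic_rate = frame_pixels * fps      # sustained demosaic throughput
    print(demosaic_rate / 1e6)              # ~425 Mpixels/s -- before any
                                            # REDCODE decompression overhead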
Hi Jason, from what I understand you are correct in your comments. I was talking about RED debayering only, not other image formats.

I'd love to see third-party access to the raw RED data for alternative techniques.
I posted my original comment as there is some real misunderstanding as to how RED post-production works.
An interesting question, though: how do debayering and decompression work on images captured at 2K (not shot at 4K and downsampled)? I've never shot RED 2K origination capture...
Steve Shaw
Light Illusion
steve@lightillusion.com
+44 (0)7765 400 908
www.lightillusion.com
Skype: shaw.clan
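On Steve's 2K-origination question above, the thread's own pick-vs-debayer arithmetic at least bounds the answer (the 2K frame size below is an assumption): a pick-based decode from 2K Bayer material would top out at 1K, which suggests delivering 2K RGB from 2K origination requires a full debayer.

    # 2K origination still records a Bayer mosaic, one colour per photosite
    # (frame size below is an assumption for illustration).
    bayer_2k = (2048, 1152)
    pick_proxy = (bayer_2k[0] // 2, bayer_2k[1] // 2)   # (1024, 576): 1K ceiling
    # Delivering 2K RGB from 2K Bayer therefore implies a full debayer,
    # with colour interpolated at every photosite.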
Jason wrote:
>>One such case among others is IRIDAS' GPU-based demosaicing implementation that has the ability to
>>demosaic 4K RAW data at up to 48fps and then do a proper down-sample to a full-resolution SDI-output
>>in real-time on a Nvidia QuadroFX 5600 SDI. If I'm not mistaken, I believe these benchmarks were run with
>>Dalsa-acquired footage

When I saw it, the 5600 hadn't been released yet and an older video card was being used; I believe the benchmark was just south of 40fps on a middle-of-the-road HP workstation. With the new card, 48fps+ was supposedly possible. DALSA's hardware/workflow person was in the middle of building a buffed and tricked-out IRIDAS system when the announcement of closure happened. I also remember a Linux laptop that delivered real-time (24fps) 4K proxies.

Illya Friedman