It is possible, without rooms full of programmers, to correlate DPX frame numbers to time code frame numbers after all.
I would normally quote the person who tipped me to the following, but he does get bombarded, so with some license (and if he would like to jump in, please do), the DPX file header actually has copious space for time code to be embedded. All it takes is for the recording software and the post production devices to access the space in the header to embed, then extract it.
Objecting that the Viper technology was not implemented from the get-go with that functionality is as ridiculous as accusing film camera manufacturers of not implementing time code in their cameras originally. Someone had to make edge numbers work with time code. Why is anyone crying in their beer because the necessary access to an established file structure has not yet been implemented? Post houses, historically, have been the leaders in driving and even developing such technology bridges, and in fact continue to do so for all kinds of new technologies. What I don't understand is all the whining about this particular issue, except that the acquisition technology doesn't have sprocket holes.
> except that the acquisition technology doesn’t have sprocket holes.
George, are you reading the same CML list as I am? Because I don't see any whining at all. I simply see information being passed that points out that the current post path for 4:4:4 Viper material, when captured on the currently available devices, has not evolved to the point of being practical and simple to use, certainly not as practical or simple as video with time code and/or film with key numbers.
My point in all this was that to use such a system for a full length feature is, at this point in time, asking for a bit of trouble, regardless of how good the images might or might not be. That, and also to point out (or at least imply) that in a project in which all the backgrounds are CGI, the real need for the talents of the cinematographer lies in the digital world, with the elements shot in the real world being only a small part of the overall picture. Nobody was comparing electronics and film in this conversation prior to you. And yes, I do think it's incumbent on Thomson to either develop or help to develop such a support path if they want the camera to succeed.
If you think it's so easy to create a workable post path for the Viper in Filmstream mode, you should do it. There are many here who would consider its use more seriously if such a path were in place. But the fact is that so far, it isn't. That's one of the reasons Thomson came up with the HDStream mode in the first place - to extend the potential audience for the camera by porting some of its advantages to recording technology that's already in place and well supported. Ya gotta walk before you can run, even in this technically impatient world.
IATSE Local 600
Yes, I do read the same list you do. And my point was, and continues to be, that if everyone (the post production community) had passed on implementing electronic post production for film because there was originally no time code implementation in film cameras, or any way to, secondarily, correlate edge numbers to time code, none of us would even be having this discussion today.
I am pursuing solutions because I believe that if you are not part of the solution…Rather than passing by the finest 4:4:4 solution (2K) till "someone" fixes the problem...well I think you know the rest of what is on my mind. I do know this, this is a recording problem, and since almost all adequate 4:4:4 field recorders will have to be DDR or Optical Disk based, and because providers of those types of solutions are, typically, computer oriented, I see no reason why the whole DPX file (which does include headers) cannot be dealt with in the Read/Write process. It is just silly to presume otherwise.
GEORGE C. PALMER
George C. Palmer wrote:
>if everyone (the post production community) had passed on implementing electronic post production for film because there was originally no time code
Well, I was at Lorimar in the mid 80's when we started using non-linear electronic editing systems (Editdroid, Ediflex, Montage, you name it). Until Kodak, the primary manufacturer of film stock, stepped up to the plate with the introduction of Keykode, and Research In Motion and Evertz started manufacturing reading equipment for it, the video post facilities were not developing any methods for helping in that interfacing of film and video. Electronic post for television was done without keeping track of any film information for at least 3-4 years.
This was 1985, which only goes to show that it took quite a while for a working system, supported by the major suppliers to the industry, to exist. Automation of the system, via electronic readers, accepted file formats, conversion software, and electronic editing system support was necessary before it could be widely adopted for production use. While it's true that a lot of the particular steps and methodologies were invented by us as we went along, we relied primarily on manufacturers (especially Ediflex, Fuji, and Kodak) to supply the basic technologies and interfaces.
I think it's good that you are pursuing some solutions. Most of us are working cameramen, visual effects supervisors, colourists, and the like, and quite frankly, we're the users, not the developers. But no one department operates in a vacuum, and if a cameraman wants to use a device that isn't supported by a post pipeline, it's not going to get very far, regardless of how good it might be at photographing images.
Finally, the Viper is not a 2K solution. It is 1920x1080, same as every other HD camera. 2K would be 2048x1556. The finest, and at the moment, only 2K solution is film.
IATSE Local 600
Michael Most wrote:
>Well, I was at Lorimar in the mid 80's when we started using non-linear electronic editing systems...
I was staff R&D at Pacific Video (now Laser Pacific) at the time. I don't recall the exact date, but I think that within that timeframe we were working on what we, at the time, called "VIDB" for "Vertical Interval Database". VIDB would record all sorts of data into several lines of the vertical interval, including film footage info, various time-codes, etc. I, at the time, designed hardware and software (cards that lived in a PC) to both encode and decode this information as well as generate visible windows on screen. I don't remember the sequence of events but, at one point, management arranged for Evertz to make the hardware instead of PacVid doing it internally. This resulted in a set of cards in a stand-alone frame that would encode and decode the data. The deal must have had some sort of an expiration date because within years that concept and the Evertz product line were in widespread use in post-production facilities.
The DPX file structure is a well known entity. Its header information is firmly attached to its file information. One encoder and one decoder in each recorder, to carry time code to and from the DPX header location...
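As a sketch of what such an encoder/decoder would actually touch — assuming the SMPTE 268M header layout, in which the Television Information section begins at byte 1920 and its first field is a 32-bit SMPTE time code word packed as BCD (hh mm ss ff) — reading and writing that field is only a few lines of Python:

```python
import struct

# Per SMPTE 268M (DPX), the Television Information header begins at
# byte 1920; its first field is a 32-bit SMPTE time code in BCD.
# The magic number at byte 0 declares byte order: "SDPX" = big-endian,
# "XPDS" = little-endian.
TV_HEADER_OFFSET = 1920

def tc_to_bcd(tc: str) -> int:
    """Pack 'HH:MM:SS:FF' into the DPX BCD time code word."""
    word = 0
    for part in (int(x) for x in tc.split(":")):
        word = (word << 8) | ((part // 10) << 4) | (part % 10)
    return word

def bcd_to_tc(word: int) -> str:
    """Unpack the BCD word back into 'HH:MM:SS:FF'."""
    parts = []
    for shift in (24, 16, 8, 0):
        byte = (word >> shift) & 0xFF
        parts.append(f"{(byte >> 4) * 10 + (byte & 0x0F):02d}")
    return ":".join(parts)

def read_dpx_timecode(path: str) -> str:
    with open(path, "rb") as f:
        endian = ">" if f.read(4) == b"SDPX" else "<"
        f.seek(TV_HEADER_OFFSET)
        (word,) = struct.unpack(endian + "I", f.read(4))
    return bcd_to_tc(word)

def write_dpx_timecode(path: str, tc: str) -> None:
    with open(path, "r+b") as f:
        endian = ">" if f.read(4) == b"SDPX" else "<"
        f.seek(TV_HEADER_OFFSET)
        f.write(struct.pack(endian + "I", tc_to_bcd(tc)))
```

This is only a sketch of the mechanism, not any shipping recorder's implementation; the hard part, as the thread goes on to show, is getting every device in the chain to preserve and honor that field rather than mangling or ignoring the header.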
I am not a developer, I am someone who finds solution providers and attempts to convince them to solve problems.
1920...2048...I guess if you pronounce 1920 under the 2K bar, it must be so...
In 1946, they said television wasn't practical and only radio would last.
In 1966, when I started, much film was cut by hand and most television was live B&W, film, or kinescopes. It got better.
In 1986 they said video would never be good enough for film applications.
In 2002 they said film was dead, and they were wrong, but.......I'm sure not ready to write (2000-1920) Viper technology off yet.
GEORGE C. PALMER
Martin Euredjian wrote:
>VIDB would record all sorts of data into several lines of the vertical interval, including film footage info, various time-codes, etc.
Unless I'm mistaken (I might be), VIDB was, to some degree, connected to the Image Translation project as a way of having the video self-document the 3:2 pull-down cadence on a scene by scene basis. For those who aren't as old as Martin or me, Image Translation was the first 3:2 sensitive standards conversion process. It ran in non-real time (about 5-6 hours to do a 1 hour program, as I recall) and was pretty revolutionary in that it eliminated the motion artefacts common to NTSC-> PAL conversions at the time. Around 1988 or so, European buyers of US television programs were becoming pretty upset with the standards conversions used for shows that were increasingly being posted on video, and some were threatening to cancel some orders.
Viacom was one of the first production companies besides Lorimar to go to video posting, and they were Pacific's major customer at the time. They needed the problem solved or they would go back to film editing to placate their foreign customers.
In order to solve the problem and keep video post viable, Pacific developed Image Translation, which basically removed the 3:2 pulldown and ran the resulting 24 frames at 25 frames per second to create a PAL master on, at the time, one inch tape. The basic methodology was eventually adopted by a number of manufacturers and turned into more automated, off the shelf systems (TK 3:2, DEFT) that did their own automated cadence detection, but Pacific was the pioneer, as they have been on a number of fronts since. I don't think it's an exaggeration to say that 24p as a viable product and format would likely not have happened had it not been for Laser Pacific.
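The cadence juggling Image Translation had to undo can be sketched in a few lines, assuming the standard 2:3 cadence in which four film frames A B C D are spread over five video frames as field pairs (top, bottom): V1=(A,A) V2=(B,B) V3=(B,C) V4=(C,D) V5=(D,D). This is an illustrative toy, not Pacific's actual process:

```python
def add_pulldown(film_frames):
    """24p frames -> interlaced video frames as (top, bottom) field pairs,
    using the standard AA BB BC CD DD cadence."""
    out = []
    for a, b, c, d in zip(*[iter(film_frames)] * 4):
        out += [(a, a), (b, b), (b, c), (c, d), (d, d)]
    return out

def remove_pulldown(video_frames):
    """Invert the cadence: every five video frames -> four film frames."""
    out = []
    for i in range(0, len(video_frames), 5):
        v1, v2, v3, v4, v5 = video_frames[i:i + 5]
        out += [
            (v1[0], v1[1]),   # A: both fields live in V1
            (v2[0], v2[1]),   # B: both fields live in V2
            (v4[0], v3[1]),   # C: top field from V4, bottom from V3
            (v5[0], v5[1]),   # D: both fields live in V5
        ]
    return out
```

Once the 24 film frames are recovered this way, producing a PAL master is just a matter of playing them back at 25 frames per second, with the 4% speedup that implies. The real difficulty in 1988 was detecting where the cadence started (and where edits broke it), which is what the VIDB flags documented.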
IATSE Local 600
Michael Most wrote:
>VIDB was, to some degree, connected to the Image Translation project as a way of having the video self-document the 3:2 pulldown cadence
Yes, that was definitely a part of it. There was a lot more data in VIDB than just the 3:2 flags. My recollection of the non-technical aspects of that solution is fuzzy enough that I don't know how much technical detail I am free to discuss without possibly violating IP/trade secret boundaries.
> I'm sure not ready to write ... Viper technology off yet
Who said anything about writing it off?
All I said is that it needs a post path developed in order to be practical, and that it's incumbent on the manufacturer to spearhead this effort. Nuts and bolts post production can't be experimental and be successful. People who are not engineers need to understand it, and information has to be passed between the "lab," the "negative cutter," the assistant editor, and the post producer. It needs to be simple enough that the guy you hire to work a graveyard shift to load, convert, and distribute dailies can understand it. And it has to be reliable enough that the numbers that go to the assistant editor are correct, as are the numbers he returns in the form of a final cut EDL. If this is not the case, the system is too complicated and will only be used on individual projects for demonstration purposes.
The Viper is far from the only game in town. Sony is improving their products, and they're doing it with the understanding that a complete system needs to be available in order for it to succeed. They didn't just come out with a 4:4:4 camera, they came out with a format and a VTR (both on board and studio) to support it. Complete with multi-channel digital sound and time code, so that none of what we've been discussing is even an issue. I like the Viper, and contrary to what you seem to think, would like to see it succeed. But they really need to think through what it's going to take to make that happen, and that includes post solutions that don't have to be invented by those who use it.
IATSE Local 600
I think we are at the core of this discussion now, and you will notice that I didn't mention S--- until someone else did.
Sony announced their 4:4:4 product at NAB 2003 but actually did not show it; what they showed was a very incomplete mockup. Its original specs call for fully processed 4:4:4 output from the camera recorded to compressed videotape. It is 4:4:4, but NOT what the Viper (the camera) is; Sony hasn't even come up with an equivalent 4:4:4 camera as a result. Yes, its compressed videotape recorder will probably have multi-channel digital sound and time code, and it will all be compressed, fully processed 4:4:4, NOT a Digital IP. If you call that an improvement, I don't; I call that a compromise on the original 4:4:4 uncompressed, dual link concept. It will be a complete, but compromised, acquisition/recording system. If I wanted that, I would probably suggest HDSDI 4:2:2; the compromises of the "Sony 4:4:4 System" make that system a toss-up with HDSDI 4:2:2.
Glad you support the Viper concept; frankly, it just doesn't sound like that from most of your comments here. Yes, we all support the direct recording of T.C. as a near term goal. That will fix the whole enchilada. I have a feeling that if Sony's agenda did not include the long term proliferation of compressed videotape solutions, we would have seen an appropriate disk w/TC solution from them, but it probably would not work on anybody else's camera system anyway.
And I wonder where Laser Pacific got the idea for frame segmentation anyway??? I am strongly guessing it probably was not from Sony.
GEORGE C. PALMER
George C. Palmer wrote:
>...its compressed videotape recorder will probably have multi-channel digital sound and time code and it will all be compressed, fully processed 4:4:4,
I call it a significant improvement over the existing line of HDCam products. You can't criticize something for being inferior to something else that never existed in the first place.
> It will be a complete, but compromised acquisition/recording system.
You have to start with the possible and practical before you venture into the not-so-possible-with-current-technology and impractical. You can criticize all you want, but the likelihood is that Sony will sell far more of these systems than Thomson will Vipers. And not because it's better or worse, but because they were very smart in realizing that when you're introducing a new product, you sell it as part of a total solution.
> Glad you support the Viper concept…
If I didn't feel it has great value and potential, I wouldn't bother to comment at all. This entire thread started out as a reaction to the making of the feature "Red Riding Hood" using the Viper as the production camera, and primarily recording the HDStream output on a D5 recorder. Personally, I think it's rather telling that up until Thomson reconfigured the camera for log output in 4:2:2 HD, and coupled this with the D5 as a recording solution, the camera was never used for anything even approaching a full length feature. Do you think that could have anything to do with the lack of a practical recording solution and post path for the full FilmStream data? Naaahhhh.......
> And I wonder where Laser Pacific got the idea for frame segmentation anyway???
Maybe you know something I don't about this subject, but as far as I know, Laser Pacific had nothing to do with coming up with the segmented recording format. That was Sony answering Laser Pacific's desire for a 24p capable VTR by reconfiguring existing hardware in a very clever way so that a product could be brought to market sooner rather than later.
I give Laser Pacific a hell of a lot of credit for many things, but they don't build VTR's, they just use them.
And on that note, I think I've said all I have to say on this topic..
IATSE Local 600
[George] The DPX file structure is a well known entity. Its header information is firmly attached to its file information…
[Lucas] If only it were that easy. There are so many ways this can get screwed up. Lots of apps mangle headers, or just ignore them and import only the image information…
[George] 1920...2048.... I guess if you pronounce 1920 under the 2K bar, it must be so......
[Lucas] A 2K DPX file is pretty well standardized at 2048 x 1556. That's what visual effects houses expect when the phrase "2K" is uttered. 1920 x 1080 is a little bigger than Academy, but 2048 x 1556 is Full Aperture.
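The arithmetic behind Lucas's point is worth spelling out: a full-aperture 2K frame carries roughly half again as many pixels as an HD frame, so the "2K bar" is not a rounding quibble.

```python
# Pixel counts: full-aperture 2K DPX vs. 1920x1080 HD.
hd_pixels = 1920 * 1080        # 2,073,600
full_ap_2k = 2048 * 1556       # 3,186,688
print(full_ap_2k / hd_pixels)  # ~1.54, i.e. ~54% more pixels in 2K
```

(And the difference is mostly vertical: 1556 lines of full aperture against 1080, because the full film aperture is much taller than 16:9.)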
[George] I'm sure not ready to write (2000-1920) Viper technology off yet.
[Lucas] It's fabulous technology, but there IS NOT a standardized, easy to use post path for the Viper. Every Viper production is a custom post path.
Epic Center Hollywood
I hadn't heard before of this time-code issue for doing the final digital conforming after offline editing creates an EDL -- so how did Joe's short film shot with the Viper handle this issue?
What are the VISIBLE differences between recording to 4:2:2 HD-D5 versus 4:4:4 uncompressed? Would there be any difference in apparent resolution in the film-out? Or are we only talking about color-correction flexibility and chroma-key matting issues? Compression artefacts?
David Mullen ACS
Cinematographer / L.A.
>Sony announced their 4:4:4 product at NAB 2003 but actually did not show it; what they showed was a very incomplete mockup.
George, you may be reading the same CML as everyone, but I am not sure you were at the same NAB that I attended. Plus8/Sony DID show the SRW deck at NAB. They also showed the F950, which as far as I know is a dual link 4:4:4 camera just as the Viper is, except that the output is not log data.
Although I have been fairly religious lately about shooting and posting uncompressed data, there is compression, and then there is compression.
The tone of your comment made it sound like an HDCam level of compression is going into the SRW deck. This couldn't be further from the truth. The compression is MPEG-4, and fairly low. I did admire the 10 generation cascade split screen that Sony showed at NAB, and saw very few compression artefacts on the ten generation down side. Of course, as all manufacturers work hard to present their technology in its best light, we would want to build a few composites and see this taken out to film and screened fairly large in order to truly estimate the effect of the compression. I also suspect that it may not be objectionable if care is taken with the data following the recording on set.
I have recorded both a Sony F950 and a Viper into the SRW deck.... both through dual link and both at 4:4:4. The Viper went out as log data, and Jeff Cree rode herd on the Sony data. The results from both cameras were good enough to consider shooting a feature in this manner.
Heck, Dave Stump just did, and he recorded the 4:2:2 material to D5, which has a much higher compression ratio than the SRW. And from what I have heard, while he would have preferred to shoot the entire show at 4:4:4 onto disc, he was satisfied with the results from the tape.
George, as you know I do not work for either manufacturer, and you have been clear from the beginning that you have a slant toward one of the companies. I am posting this in fairness, as I think you are being a bit unfair in your comparison.
The 950 is a dual link 4:4:4 camera that offers ONLY processed output, so NO Digital IP capability; no bias, just the information that Sony published pre-NAB. At NAB they didn't even show that on the floor, and apparently only Jeff Cree was allowed to even talk about anything other than the mockup they had on the show floor. I know; I took one of my clients to see it (I'm not so biased that I wouldn't expose my own clients to even Sony technology), and was told by one of the salesmen that only Jeff could talk about it.
Sony says HDCAM is about 2:1 compression; so is D5. If SRW is lower, Sony hasn't yet published a number lower than 2:1 (or any other number). The tone of my comment is simple: if an uncompressed, unprocessed, 4:4:4 RGB, Dual Link, Digital IP is available at the camera and can be recorded, why should I or any other thinking person choose to throw it away to ANY compression, or to the LACK of unprocessed camera output, even if it does say Sony on it? Don't call that bias; that is just snobbish purism. And it's not just future technology; once the Time Code recording issue is resolved, and it will be in the short run, the point will be moot. However, if compromise is your cup of tea, why would I object to lesser processes; highest and best just isn't for everybody. Please note that your test did not compare Viper 4:4:4 Dual Link Unprocessed to the Sony Processed Dual Link 4:4:4; that is where the differences lie. And I'm curious: who rode herd on the Viper?
And finally, sure I have a slant toward the Viper; it has to do with the reach for highest and best technology. Sony's perspective seems to be to propagate any videotape based system that is "good enough" to sell big numbers quickly, before their competitors can get their story out. Having recording technology is a good advantage, but videotape is NOT the high end recording technology of the future, and even Sony knows it. But if they can push enough out into the market and get the market "invested" in it, it obviates a customer's motivation (creative and economic) for highest and best technology purchases.
Look, the disk recording, DPX technology with T.C. is very, very close, and allowing ourselves to be stampeded by what is still not even marketplace Sony technology simply allows Sony to hornswoggle us into delaying purchase of highest and best until they get a chance to baffle us with more "it's good enough" when they finally get it done and ready to be marketed. And it's all done by creating Fear, Uncertainty, and Doubt in the marketplace as to the efficacy of the "highest and best" solutions. Are they doing it directly? No, probably not, except through Jeff C. and Larry.... But to write off my point of view as invalid or inaccurate because of your perception of my "bias" only serves the propagation of that FUD.
And you can be certain I'm not the only one on CML with a bias; look around you, and accuse them. My bias is at least an honest and informed bias.
GEORGE C. PALMER
> but I am not sure you were at the same NAB that I attended.
Well, at the NAB that I was at, Sony engineers/designers told me that the 4:4:4 low compression option would not be available for 18 months.
That's 4:4:4 at 2.5:1 compression.
There will be versions available before then that use higher compression.
I wrote at the time that I thought that this would be a big sales boost for the Viper.
I would love to use the Viper on some of the commercials I shoot, but it just isn't viable. As has been said, there is no standardised post process; every job is a one-off, and this just doesn't work in my world.
>there is no standardised post process, every job is a one off and this just doesn't work in my world.
Words of wisdom to the manufacturers of disk solutions and post production solutions to the problems of 4:4:4; If you build it, we will come.
This thread has been very informative.
>Words of wisdom to the manufacturers of disk solutions and post production solutions to the problems of 4:4:4; If you build it, we will come…
Actually we are a manufacturer of post production solutions (uncompressed playback and color grading). I would like to set up a little closed discussion forum on our beta site to discuss the issues that were raised here.
I would like to hear your thoughts on an optimal workflow and see what we can do to make it happen.
>When you shoot on video and record on video (as…with the D5 recordings), you also have an established methodology that relies on time-code.....
The Director's Friend actually has some facility for recording LTC time-code and writing that into the DPX file. What it doesn't have at the moment is any usable way of exporting the files with that time-code written into the exported file...
We ran into this on a graduation film we shot this summer on the Viper, kindly supplied by Arri Media UK. It was a fantasy fairytale called The Happiness Thief, 15mins and entirely shot on stage with a lot of bluescreen work.
I got the system in a week before the shoot started to get my head around all the implications of the system. What I discovered is that although the camera is absolutely superb, the recording system needs a fair amount of work. After three software upgrades during the week and a lot of telephone discussion with Director's Friend in Germany, I thought we were in a place where the shoot and the post could happen without too many hitches. LTC time-code was going to be supplied from an external source (the Director's Friend has no time-code generator) and recorded and exported each night to DPX files on an external disk array.
So you can imagine my surprise when I discovered that the export functionality of the DF was broken with regards to recorded LTC time-code. The problem seems to be that Mungo, the DF capture software, works through a separate piece of video server software that doesn't give DF the hooks into the file system. Whilst it is possible to use a piece of DVS software to export files with DPX time-code, it's an extremely unintuitive and time-consuming process that could not be considered in any way shoot-friendly. Certainly you wouldn't want your clapper/loader doing it at the end of a long shooting day, as it could easily end in lost work, and tears and recriminations.
At the moment this is the biggest problem facing this form of digital acquisition. To make established post processes unworkable is a huge, huge problem, and Thomson need to get it resolved. I'm aware of a couple of recording systems that are coming down the 'pike, hopefully they will resolve this issue. Systems that make the conform of data files easy and quick are available, but if the data isn't there at source you have a big problem as Lucas and I have been discussing over the last month or so.
National Film and Television School