Color Bit Depth
The point was made and the question asked:
>What is the best way to get around Mach banding when going from 16-bit 2K 4:4:4 images and downrezzing them to 8-bit HD (1920x1080 24PsF) 4:2:2 D5 for Digital Cinema?
There is no real substitute for color bit depth. There are bandaids.
Dithering, dynamic rounding, etc. are bandaids. 8 bit color is a
relic of television, an outgrowth of the capabilities of phosphor-based display technology. Since phosphor monitors are inexorably becoming archaic, we should consider obsoleting the technology of 8-bit color along with them. Who was it who said that
Digital Cinema must abandon all links to Television? I think this
is part of what he meant.
Once again, let me urge: if we are going to re-invent the future,
we ought to remember to make it better.
Dave Stump ASC
VFX Supervisor/DP
LA, Calif.
David Stump wrote :
>8 bit color is a relic of television, an outgrowth of the capabilities of phosphor-based display technology.
Yes, but most current LCD panels are 8 bit... at least the ones
now shipping.
Jeff Kreines
>Yes, but most current LCD panels are 8 bit... at least the ones now shipping.
Precisely my point. Now is the exact moment to "cut the cord".
Dave Stump ASC
VFX Supervisor/DP
LA, Calif.
Dave Stump writes :
>Since phosphor monitors are inexorably becoming archaic, we should consider obsoleting the technology of 8-bit color along with them.
Without disagreeing, on a practical level we find no end to the
need to make images "8-bit proof"; to make sure that if
they're taken through an 8-bit step, they won't look terrible. Otherwise
one will more often than not be disappointed with how they're presented.
You want it to look good on ATSC, DVD, etc.
Tim Sassoon
Sassoon Film Design
Tim Sassoon wrote :
>Without disagreeing, on a practical level we find no end to the need to make images "8-bit proof";
Yes, very true. And it will remain true for a long long time.
>to make sure that if they're taken through an 8-bit step
Ah, now that's a good one! Step indeed!
Jeff Kreines
>Precisely my point. Now is the exact moment to "cut the cord".
---Precisely....agreed!
"All ties to television as we now know it.....must be irrevocably
CUT."
Martin Euredjian's statement to that effect needs to be chiselled in
stone somewhere in a very visible place in every city where production
and post production play a major role in the economy.
Jeffery Haas
Freelance editor, camera operator
Dallas, Texas
>>most current LCD panels are 8 bit
> Now is the exact moment to "cut the cord".
At the same time, 8 bits per channel is well below the human threshold
of color-difference perception. In other words, the average observer
cannot distinguish between a significant portion of the colors that
one is able to represent and display with a well tuned 8-bit-per-channel
RGB system.
The first generation 23 inch Apple HD Cinema Display has a delta-E
(measurement of perceivable color difference) of about 0.5DE. A
good CRT can't do much better than 1.0DE. The untrained observer
can see 1.3DE and above. A trained observer will see below that,
say, about 1.0DE.
In a nutshell, 8 bits per channel on a modern high-quality LCD --
properly driven -- can produce better images than the average human
visual system can "see". Here the word "see"
is in quotes because color vision is a psychovisual affair, not
a matter with many absolutes. What you "see" is as much
a function of what your brain decides to let you see as what the
eyes actually capture.
Having said that, 12-bit-per-channel LCDs look amazing.
Martin Euredjian
eCinema Systems, Inc.
www.ecinemasys.com
Martin Euredjian wrote :
>...In other words, the average observer cannot distinguish between a significant portion of the colors that one is able to represent and display with a well-tuned 8-bit-per-channel RGB system.
I heard recently at Cinegear CML Saigon Annies (where all truths
are expounded over salt-broiled shrimp and beer, which in turn makes said statements seem more true) that studies show the average threshold is around 11 to 12 bits per channel, that there's a small segment of the population that can differentiate colors at the 14-bit level, but that most of the bell curve rolls off just under 12.
I bet 8 bits is fine under less-than-ideal viewing conditions. Maybe when seeing the test patterns on a large, decent display in a darkened room, viewers can actually distinguish them better? I just cannot imagine that 8 bits is our ideal threshold.
I admit I only absorbed perhaps 20% of the Saigon Annie discussion
on color space, but even that much was fascinating (not to mention
the politics involved!).
Our industry does need to come to some consensus on these issues so we can adopt a large, future-proof color space and more interchangeable file formats for post. It would certainly help DPs going into projects where we never have time to thoroughly test the workflow that, in the end, delivers its own version of the colors to be projected theatrically. Heck, often we can't get an answer on where the cameras are coming from and where the filmout will be until two weeks out and the tech scout!
Funniest quote at Annie's: "The studios say, 'if 12 bits per
channel looks great, then 16 bits must look better. We want all
our stuff in 16 bit!' But for what? They want to pay 8-bit prices."
Mark Doering-Powell
LA based DP
>At the same time, 8 bits per channel is well below the human threshold of color-difference perception.
While this may be a commonly held belief, it does not hold up to
rigorous testing.
In October '03, we (at the request of DCI) did a comprehensive discrimination threshold test using theatrically projected images - at 40, 4, and 0.4 cd/m^2, using a gamma of 2.6. These were projected images on a 45' screen at the DCL in Hollywood. We tested approximately 75 subjects, both "expert viewers" and non-experts.
The test patterns created steps in luminance corresponding to 1
count in 8, 9, 10, 11, and 12 bits. The viewer was asked to identify the orientation of the test pattern, to verify that he really saw the steps.
The results of the experiment will be published in the September
04 SMPTE journal, but can be summarized as follows:

* Gamma 2.6 best follows the threshold of human perceptual discrimination for luminance at cinema levels
* All viewers could see steps at 8 bits all the time
* Almost all observers could see steps at 10 bits
* Some observers (40%) could see steps at 11 bits
* Essentially no observers (1%) could see steps at 12 bits
* Observers learned to see steps and became more sensitive with repeated viewing
These differences correspond to very small delta-E* values - in the smallest case, the threshold of discrimination was 0.05 DE* (if I remember correctly), much smaller than previously thought to be visible.
This was the experiment that caused DCI to adopt a 12 bit (gamma
2.6) recommendation.
If the display does not exhibit contouring at 8 bits, this will be caused by one of several factors:

* Noise (either in source or display) - dithering will hide contours very effectively
* Environment - environmental contamination of the display (e.g. lights on around the display)
Note that if you translate this into linear space, it requires a
lot more bits.
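A back-of-the-envelope sketch of why (mine, not from the post; the 12-bit, gamma-2.6 figures come from the test described above): under gamma 2.6 the step between the first two non-zero 12-bit codes is tiny in linear light, and resolving that step with linear coding takes far more than 12 bits.

```python
import math

BITS = 12
GAMMA = 2.6
codes = 2**BITS - 1  # 4095

# Relative luminance of the first two non-zero codes under gamma 2.6.
l1 = (1 / codes) ** GAMMA
l2 = (2 / codes) ** GAMMA

step = l2 - l1  # the smallest luminance step the coding expresses
linear_bits = math.ceil(math.log2(1 / step))
print(f"smallest gamma-2.6 step: {step:.2e} of full scale")
print(f"linear bits needed to resolve it: {linear_bits}")  # ~29
```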
Matt Cowan
Why doesn't digital cinema use 10 or 12 bit 4:4:4, such as Sony HDCAM SR, as its master rather than the inferior D5 source? Is the Sony format just too new?
Jamie Tosi
Digital Media Specialist
Santa Monica, CA
Matt Cowan wrote:
>Observers learned to see steps and became more sensitive with repeated viewing
And once viewers learn to see artefacts (like digital compression
or noise reduction artefacts) they become pickier, and the bar is
raised.
Up to the limits of the eyes and brain, that is...
Jeff "2-bits ain't worth 2-bits anymore" Kreines
Matt Cowan wrote :
>Note that if you translate this into linear space, it requires a lot more bits.
So was that 12 bits actually 12-bit log or 12-bit linear?
Just curious to know if 10-bit Log DPX files are going to be enough
for the digital projection standards.
Jason Rodriguez
Post Production Artist
Virginia Beach, VA
>>At the same time, 8 bits per channel is well below the human threshold of color-difference perception.

>While this may be a commonly held belief, it does not hold up to rigorous testing.
Rigorous? How about 10,000 test subjects, with the testing conducted by the Munsell Color Laboratory in Rochester, NY, by the color scientists who literally wrote all the books?
I don't have the time to do it right now. I'll post a few images
later.
>In October '03, we (at the request of DCI) did a comprehensive discrimination threshold test using theatrically projected images - at 40, 4, and 0.4 cd/m^2, using a gamma of 2.6.
I'm not sure what this tested. What does projector gamma have to
do with color perception?
>steps in luminance corresponding to 1 count in 8, 9, 10, 11, and 12 bits.
Luminance!? So, you were just going up and down the L axis in CIELAB?
Of course you'll get better delta-E's!
Martin Euredjian
eCinema Systems, Inc.
It's probably important to go back to part of what I said. I find that
people can read through a post quickly and miss any subtleties or
precision in the language :
>the average observer cannot distinguish between a significant portion of the colors that one is able to represent and display with a well-tuned 8-bit-per-channel RGB system.
This does NOT mean that ALL color differences representable in 8
bit RGB are tough to see. That is NOT what I said, right?
We've all seen good examples of 8 bits gone bad. I remember an old
Alpha Image video switcher that had the ability to change the output
processing from 8 bit truncate to 8 bit rounded. Huge difference
for some passages, none for others. With truncation some finely
shaded backgrounds had all sorts of very visible banding. It just
so happened that the color differences were large enough that almost
anyone could pick them off.
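A minimal sketch of the truncate-versus-round difference described above (the 72% fade figure is borrowed from Martin Snashall's example later in the thread; the numbers are illustrative, not the switcher's actual processing):

```python
import numpy as np

src = np.arange(256, dtype=np.float64)  # an 8-bit gradient
faded = src * 0.72                       # processed at higher precision

truncated = np.floor(faded).astype(np.uint8)
rounded = np.round(faded).astype(np.uint8)

err_t = faded - truncated   # always >= 0: a systematic darkening bias
err_r = faded - rounded     # centred on zero, half the worst-case error
print(f"truncate: mean error {err_t.mean():.3f}, max {err_t.max():.3f}")
print(f"round:    mean error {err_r.mean():.3f}, max {np.abs(err_r).max():.3f}")
```

On finely shaded gradients, that bias and the doubled worst-case error are part of what reads as banding.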
Personally, I think that 8 bits per channel --with the right processing
-- is excellent for viewing by the masses.
>8 bits per channel on a modern high-quality LCD -- properly driven -- can produce better images than the average human visual system can "see".
Again, qualifiers: "high-quality LCD", "properly
driven", "can produce", "average"
>color vision is a psycho visual affair, not a matter with many
absolutes.
No need to explore that further, right?
Martin Euredjian
eCinema Systems, Inc.
Jason Rodriguez wrote :
>So was that 12 bits actually 12-bit log or 12-bit linear?
No, it was 12-bit gamma-*corrected* linear of 2.6, which is neither. And I've no doubt you'll need to know the range of luminance this was tested over and the conditions of viewing (average screen luminance, surround, environment, distance from screen, size of pattern, size of feature within the pattern, how noise-free the system was, how the images were projected on screen, whether the screen was perforated or not (a killer of sharpness), etc.). Was it a colour-based assessment or just a tone scale?
>Just curious to know if 10-bit Log DPX files are going to be enough for the digital projection standards.
In the absence of noise, 10-bit Log DPX files are not enough, according
to conversations I've had with somebody involved, based upon the
tests carried out.
FWIW, I do find it a little strange that some of the proposed systems
and methods and the corresponding tests are always hard to get the
details on. They are all carried out as you would like them to be, but obviously not everybody can be there at the same time. If there were a truly freely available, scientifically based (i.e. peer-reviewed, mostly bias-free) set of procedures, I think there would be less of an issue with some of the proposed standards, as you could, if so inclined, go and do the same test yourself.
(Note: I'm probably in a position to know some of the people involved, so I'd like to point out it's not a slight on individuals, more the crazy way our industry works.)
As somebody mentioned earlier, studios want 16 bits for 8-bit prices; well, I'd say they also want 4K images for less than 2K prices - it is the economics of today. So what is really important is to provide a migration path to this 'ideal' world of the "new standard". It's quite funny the number of clients who want 4K, must have 4K, etc., but then you ask: have you got 3-4 times the money? Or 3-4 times the schedule? These are, of course, things that change over time.
| Kevin Wheatley, Cinesite (Europe) Ltd | Nobody thinks this |
| Senior Technology | My employer for certain |
| And Network Systems Architect | Not even myself |
>Kevin Wheatley asked for details of the discrimination threshold test.
Here are the design parameters of the test. I suggest that anyone interested read the entire paper in the SMPTE journal when it is published in September. The delay in publication relates to lead times for print. This work was presented at the SMPTE conference in November (30 days after the test was complete) and presented again at a DC-28 technology meeting in January '04.
1/. The experiment was designed and performed by myself, Glenn Kennel (formerly of Kodak, now with TI, and principal developer of the Cineon format), Dr. Tom Maier (Kodak), and Brad Walker (TI), under the guidance of the SMPTE DC-28 color ad hoc group.
2/. The intent of the experiment was to determine
what the human visual system is capable of seeing, in a theatrical context. That
means dark surround, large screen images with a range of viewing
distances relating to "normal" theatres. This drove the
choices for experimental design.
3/. The basic experimental parameters were calculated
from the threshold model developed by Barten (P. G. J. Barten, Contrast
Sensitivity of the Human Eye and Its Effects on Image Quality, SPIE-The
International Society for Optical Engineering: Bellingham, WA, 1999.)
4/. This test was for tone scale (luminance) not
color.
5/. The test measured luminance modulation (in
cd/m^2) as opposed to bit depth. Bit depth was calculated afterward,
and could be related to any desired coding scheme. Gamma 2.6 was
chosen because it required the minimum number of bits of any of
the "uniform" schemes.
6/. The test pattern was a low-modulation square wave of 16-pixel duration, with 13 repetitions in a test square 208 pixels on a side on the screen. Pixel size was 0.213" square on the screen. (A sketch of this pattern appears after this list.)
7/. Subjects were grouped at viewing distances
of 27, 41, 57, and 77' from the screen. (you can calculate the spatial
frequency of the pattern....)
8/. Luminances tested were approximately 75%, 7.5%, and 0.75% of cinema brightness
9/. Projection was from an internal pattern loaded
into a DLP Cinema projector, providing 16 bit quantization in pattern
luminance. The noise in this environment is quite low.
10/. Surround of the pattern was 90% of the luminance
of the pattern itself, filling the theatre screen. Theatre surround
was "normal theatrical"- dark.
11/. When we calculated the log coding equivalent,
we needed an extra bit to avoid visibly quantised thresholds. (It appears at the bright end of the tone scale.)
All this said, relatively small amounts of noise provide masking
of contour artefacts. Current digital cinema releases are coded in 8-bit MPEG (at 65 Mbit/s) and most of the time contouring isn't visible. (But sometimes it is.)
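For concreteness, here is a sketch (my reconstruction; function and parameter names are mine, not from the paper) of the kind of pattern items 5 and 6 describe: a uniform patch carrying a square wave whose amplitude is a single code value at the chosen bit depth, in one of two orientations for the viewer to identify.

```python
import numpy as np

def step_pattern(bits, base_code, size=208, period=16, vertical=True):
    """Square-wave test patch: bars alternate between base_code and
    base_code + 1, i.e. a one-count step at the given bit depth."""
    hi = min(base_code + 1, 2**bits - 1)
    phase = (np.arange(size) // (period // 2)) % 2  # 8 px on, 8 px off
    line = np.where(phase == 1, hi, base_code)
    patch = np.tile(line, (size, 1))                # 13 repetitions across
    return patch if vertical else patch.T           # orientation to identify

patch = step_pattern(bits=10, base_code=512)        # mid-scale 10-bit step
print(patch.shape, patch.min(), patch.max())        # (208, 208) 512 513
```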
Matt Cowan.
Martin,
Please give me a one- or two-line explanation of delta-E. I realize it is probably vastly more complex than one or two lines of exposition, but I believe I have a basic grasp (that would definitely benefit from some delta-E 101).
Brent Reynolds
Producer / DP
August Moon Productions
Tampa, FL
P.S. - Of course, anyone else feel free to answer - it's just that
Martin brought it up - so I thought I would ask him.
Martin Snashall sent this to me, which I thought might be of interest.
(He's referring to the Abekas A64 and A84, both of which he designed.)
Well, things come back round to haunt us again. The "rounding" and "truncation" Martin saw on the Alpha Image switcher was originally put in the A64, subsequently the A84, and Alpha Image pinched it from there!
Quantel had this thing they called "dynamic rounding",
which despite all their protestations about it being something they
had invented, was just dither. The basics of dither are this: if you multiply an 8-bit signal by an 8-bit control, you get a 16-bit result, which then needs to be taken back down to 8 bits for display. What dither does is take the 8 bits you would throw away and add a random number to those 8 bits. If the result produces an overflow, then you add one to the 8-bit display value.
What this does is produce an output that, over time, is closer to the actual result, rather than just the truncated result. This
technique is another one that seems to have been lost in this computer
age, as a fixed input frame, when, say, faded to 72%, would not
produce a fixed output frame, as nearly all the pixels will be dithering
by 1 bit. In fact, adding noise usually produces a better result
visually.
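A minimal sketch of the dither Martin Snashall describes (variable names are mine): the 8-bit signal times an 8-bit control gives a 16-bit product, and instead of truncating, a random byte is added to the low byte so any carry bumps the 8-bit output.

```python
import numpy as np

rng = np.random.default_rng()

def dither_to_8bit(signal8, control8):
    product = signal8.astype(np.uint32) * control8    # 16-bit result
    noise = rng.integers(0, 256, size=product.shape)  # random low byte
    return ((product + noise) >> 8).clip(0, 255).astype(np.uint8)

# A fixed frame faded to ~72% (control = 184) no longer gives a fixed
# output frame: over time the dithered pixels average to the true value.
frame = np.full(100_000, 200, dtype=np.uint8)
out = dither_to_8bit(frame, 184)
print(out.mean(), 200 * 184 / 256)   # both ~143.75
```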
In many cases this is more important and visible in a YUV system,
especially on blue colors. The reason is that the bits have very
uneven weight in YUV, as shown by the blue vector. If you have a
fairly saturated blue, then a single-bit change in the luminance can alter the output blue level by some 4% (this is due to blue only contributing 11% to the luminance signal), assuming that the U & V stay the same.
Jeff Kreines
Kinetta
Jeff Kreines writes :
>In fact, adding noise usually produces a better result visually.
Which unfortunately reduces compression efficiency. And keep in
mind that one only needs to apply .5% noise to make up the difference between 8 and 16 bpc - hardly visible, but it can make a 30% larger
file. There may be some value in adding noise at the projector -
unmentioned in DCI, though the idea's been around for a while.
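A directional sketch of the effect (mine; Tim's figures were for JPEG 2000, while this uses lossless zlib purely to show the mechanism, so the ratios will differ): a little noise ahead of 8-bit quantization inflates the compressed size.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
ramp = np.tile(np.linspace(0.0, 1.0, 1920), (1080, 1))  # smooth gradient

clean = np.round(ramp * 255).astype(np.uint8)
noisy = np.round((ramp + rng.normal(0, 0.005, ramp.shape)) * 255)
noisy = noisy.clip(0, 255).astype(np.uint8)             # ~0.5% gaussian noise

a = len(zlib.compress(clean.tobytes()))
b = len(zlib.compress(noisy.tobytes()))
print(f"clean: {a} bytes, noisy: {b} bytes, {b / a:.1f}x larger")
```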
Tim Sassoon
Sassoon Film Design
>Please give me a one- or two-line explanation of delta-E.
Here's a simple one. I may be able to elaborate later...I have back-to-back
meetings and very little time.
Take a 50% gray card, continuous tone. Now, print the same 50% gray
as a newspaper would (by using black dots on white paper). You'd
have 50% of the surface area covered with black dots and the other
50% would be white.
The difference from 50% gray to either black or white is 50 delta-E.
This also opens the door to another subject. As you know, if you
are sufficiently far away from both of the above cards you'll see
exactly the same 50% gray.
The CIELAB color space has a lot of advantages in terms of uniformly
describing color differences. Imagine a centre trunk that represents
the "L" portion, the brightness component. As you move
up and down you move from black to white. At any level you may choose,
say L=50, moving horizontally away from centre produces greater
color saturation. One can draw small ellipses at any horizontal
level to enclose all colors that fall below the threshold of human
color-difference perception.
Anything within the ellipse looks the same, even though a laboratory
instrument would prove otherwise. Differences greater than the ellipse
can be seen by the average observer. As you get farther and farther away from centre (more saturated colors), the threshold increases (the ellipse gets larger). This means that the just-noticeable delta-E is smaller for less saturated colors and greater for more saturated ones. Or, if you will, one can see smaller changes with black and white than with saturated colors.
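Martin's grey-card example falls straight out of the simplest (CIE76) delta-E formula, which is just Euclidean distance in CIELAB. A minimal sketch:

```python
import math

def delta_e76(lab1, lab2):
    return math.dist(lab1, lab2)   # Euclidean distance in L*a*b*

grey = (50.0, 0.0, 0.0)            # 50% grey: L* = 50, neutral
black = (0.0, 0.0, 0.0)
white = (100.0, 0.0, 0.0)

print(delta_e76(grey, black))      # 50.0
print(delta_e76(grey, white))      # 50.0
```

The growing ellipses Martin mentions are also why later formulas (CIE94, CIEDE2000) weight this distance by chroma; plain CIE76 treats the space as perfectly uniform.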
Here's another topic to research and become familiar with : Metamerism.
Do a Google/Yahoo search to open up a world of confusion.
Martin Euredjian
eCinema Systems, Inc.
FYI, it is generally (but not universally) accepted that a human
can differentiate a 1 percent difference in luminance. Hence the
problem that, in a 12 bit linear system for example, you have a
1 percent difference between step 100 and 101 (which is approximately
a 41:1 ratio with step 4095) but a 4 percent difference between
step 25 and 26. The former is generally not visible but the latter
most certainly is.
That is why the low lights tend to be problematic and why we use
non-linear systems where possible.
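The arithmetic behind that, as a sketch (mine): in a linear system the relative step size is simply 1/code, so it balloons in the low lights.

```python
# 12-bit linear: relative luminance jump from one code to the next.
for code in (100, 25):
    rel = 100 / code   # percent difference to the next code
    print(f"step {code} -> {code + 1}: {rel:.0f}% luminance difference")

print(f"code 100 sits about {4095 / 100:.0f}:1 below full scale (4095)")
```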
Tom Tcimidis
> reduces compression efficiency.
> .5% noise can make a 30% larger file
I knew the first part (and I always talk about it) but never bothered
to quantify it in some way. Have you run any test over a wide range
of frames?
Martin Euredjian
eCinema Systems, Inc.
Thanks Martin,
I was thinking along the right lines, but your explanation was a
big help.
Metamerism - huh?
Whenever I get a little too big for my britches and think I actually know something, I just come here and ask a question.
I'm not really a DP, I'm a monk and this forum reminds me of my
vow of humility.
Brent Reynolds
DP / Monk
Tampa FL
And in fact dithering is SOP in digital (audio) mixing, and for
the same reasons.
The (endless) debate there is not over whether to do it, but how to do it.
Sam Wells
>4. This test was for tone scale (luminance) not color.
Makes sense. I was talking color. Of course, the threshold for greyscale
discrimination is lower. I imagine it's part of the "survival
in the wild" business.
Martin Euredjian
eCinema Systems, Inc.
Martin Euredjian writes:
>I knew the first part (and I always talk about it) but never bothered to quantify it in some way. Have you run any test over a wide range of frames?
Well, to be more precise, wouldn't it be .39% noise (100%/256) to cover the 8-to-9-bit gap? Which (and it must be calculated in 16-bit
space) covers the rest as well. In practice, .5% noise (uniform
or gaussian) tends to look more like the original 10+bit, which
may be more a software artifact than anything else. I just did a
test to quantify it, first using Marcie in log, then with a horizontal
ramp, both to .jp2 at 90% quality out of Photoshop CS with the Orphanage
log ICC profile. What I found, unsurprisingly, is that the amount
of difference changes with the compression ratio. At 90% it was
about 10%, at 80% it went up to a 25% differential, etc. I was probably
being excessively emphatic before, but there is a real difference.
Marcie has quite a bit of flat color in the LAD patch, etc.
Dave Stump writes :
>Also, grossly clipped areas can solarize, and even invert in color. Very difficult to fix in post.
Not to mention chroma fringing on the highlight. IMHO digital origination
cannot replace film until there are cameras with similar highlight
response. It should be our primary issue.
Tim Sassoon
Sassoon Film Design
>...the Orphanage log ICC profile...
Which, thanks for reminding me, has a home now:
http://www.theorphanage.com/tech/
Stu Maschwitz,
CTO, The O, SF/LA
>Whenever I get a little too big for my britches and think I actually know something, I just come here and ask a question.

>I'm not really a DP, I'm a monk and this forum reminds me of my vow of humility.
---Amen...this list is better than a four-year curriculum at USC
Film School.
Jeffery Haas
freelance editor, camera operator
Dallas, TX
Kevin Wheatley wrote :
>In the absence of noise, 10-bit Log DPX files are not enough, according to conversations I've had with somebody involved, based upon the tests carried out.
Most digital cameras as well as film have a certain amount of noise
though, so this should be fine.
I mean, right now aren't most DIs being done with a 10-bit log scale? And even HDCAM-SR with the Genesis will only be 10-bit log.
I'm assuming that banding will only be a problem when digital projection
is in full swing, since film projection and film-outs should mask
any banding artefacts, correct?
Jason Rodriguez
Post Production Artist
Virginia Beach, VA
>Metamerism - huh?
Two objects may appear to be (and/or reproduce) the same colour
under one light source but different under a different light source.
It's to do with the fact that all visual and recording systems encode
a continuous spectrum in terms of the proportion of three primary
colours. But those colours aren't always the same: they may each
occupy a narrow or broad part of the spectrum depending on the system,
and may therefore be affected by different profiles of light source.
Dominic Case
Atlab Australia
>That is why the low lights tend to be problematic and why we use non-linear systems where possible.
And it's one of the reasons perceptually uniform color spaces, like CIELAB (uniform-ish), are more useful.
Martin Euredjian
eCinema Systems, Inc.
Dominic Case writes:
>>Metamerism - huh?

>Two objects may appear to be (and/or reproduce) the same colour under one light source but different under a different light source.
The brownish sheen that deep blacks from some ink-jet printers unintentionally
make is an example. It brings up the point that we're thankfully
working both for and in a transmissive medium digitally. IMHO prepress
color can be much more difficult. It's pretty hard to simulate a
foil stamp or a gloss varnish on-screen without a lighting context.
Tim Sassoon
Sassoon Film Design
Jeff Kreines writes:
>Martin Snashall sent this to me....

>....The basics of dither are this: if you multiply an 8-bit signal by an 8-bit control, you get a 16-bit result, which then needs to be taken back down to 8 bits for display.
Dithering is no news to audio folks. It's always been the preferred
method of bit-depth conversion and level adjustment, and results
in more natural-sounding digital audio, with less audible granularity.
With that said, some dithering algorithms are better than others.
Most of them these days are pretty good, but it wasn't always that
way.
Dan Drasin
Producer/DP
Marin County, CA
Jason Rodriguez wrote :
>I mean, right now aren't most DIs being done with a 10-bit log scale? And even HDCAM-SR with the Genesis will only be 10-bit log.
As far as "most DI's" I wouldn't know, but from the standpoint
of 10bit log files produced from a film scanner today, your probably
talking about a CCD A/D of '14 bits linear' which is sampling the
film via passing light through it, which is not a linear capture
device with regards to scene exposure due to the non-linear response
of the film to intensity. (After that you generally 'log' the data
in the electronics, subtract sensor noise, do some temperature based
cancellation and other things like a matrix to compensate for the
imperfect nature of filters and so on, to get a picture)
The digital camera approach is more straightforward and gives you
a truer number of bits of linear, i.e. there is no optical compression
of the tone scale before the CCDs, but you may have only 12 bits
depending on your sensor. You then perform similar electronic calculations to get yourself a picture.
What this means is that for a given amount of dynamic range and
A/D bit depth, and the same amount of noise and a high enough level of electronic bit depth for processing, I'd expect to see more bands
in a straight CCD capture system.
This is not to say that the film compression is ideal (perceptually
perfectly uniform), or that the pictures can't look great out of
either (or terrible as hell, for that matter).
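A generic sketch of the lin-to-log step Kevin describes (constants and names are mine and hypothetical - this is not the actual Cineon/DPX transform): log coding spends its codes evenly per stop, which is how 10 bits can cover a 14-bit linear range.

```python
import math

def lin_to_log_10bit(lin, lin_min=1 / 16384, lin_max=1.0):
    """Map linear light (clamped to [lin_min, lin_max]) to a 10-bit code,
    uniform in log exposure."""
    lin = min(max(lin, lin_min), lin_max)
    frac = math.log10(lin / lin_min) / math.log10(lin_max / lin_min)
    return round(1023 * frac)

stops = math.log2(1 / (1 / 16384))            # 14 stops of range
print(f"codes per stop: {1023 / stops:.0f}")  # ~73, top stop or bottom
```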
>I'm assuming that banding will only be a problem when digital projection is in full swing, since film projection and film-outs should mask any banding artefacts, correct?
Yes, although the grain from 5242 (and laser recorders in general) and 2383 print stock won't mask everything: in the last 11 or so years we have had 3 or 4 problem sequences for VFX work where banding was visible even though minimal grading was done (basically scan and then record). Some of these occurred on older intermediate and print stocks than are currently used, which would have had "more grain" than today's. I would say that it is probably a very small percentage of shots where it becomes a problem, rather than a 'look'.
Moving to digital projection will certainly make the percentage of problem shots increase, for this reason amongst many.
| Kevin Wheatley, Cinesite (Europe) Ltd | Nobody thinks this |
| Senior Technology | My employer for certain |
| And Network Systems Architect | Not even myself |