[topic: does white balancing increase the size of the pixel data numbers that need to be stored in a raw file? Does white balance drive a camera maker to store 14 bits of raw data on a camera whose sensor distinguishes far fewer than 16,384 different light levels?]
Russell: camera raw files store values that are strictly a function of how many photons were detected by the sensor during the exposure period.
Iliah: Not at all. Raw files are affected by noise, cross-talk, leaks... they can't store more than they can,
Yes, pixels can always be overexposed, overtopping whatever maximum number they can report, and sensor defects and sensor aging can also end up affecting the raw file data. But why are we talking about this? Just to make it sound like I don't know as much as you do, because there's some topic I haven't mentioned? Would it discredit your arguments about when white balancing is applied if I pointed out that you never mentioned the effects of cosmic rays on raw data?
Iliah: and finally, they are affected by in-camera pre-cooking, analogue or digital.
Yes, raw files might reflect subtracted-out biases and/or dark frames, mapped-out bad pixels, etc. But I am quibbling with your mistaken post that white balancing coefficients are baked into the raw file pixel data numbers, at times inflating those numbers beyond the inherent sensor dynamic range.
Russell: White balance is a post-processing specification
Iliah: Not always. In many cases certain pre-balancing is applied before raw is recorded.
Seems like you're just searching for vague, non-negatable things to say, relevant or not. The topic here is when white balancing is applied, which is the kind of balancing you said the per-pixel raw file data has to reflect with larger numbers than the original photon counts.
Iliah: Photon count is in the past, processing pipeline moved forward applying certain conversion coefficients and gains.
Yes, a raw file could store photon counts that have been affected by non-linearity coefficients and gains. But the topic is whether or not your statement was true: that a sensor that can only distinguish 4096 levels of light needs a greater-than-12-bit raw file that stores numbers greater than 4095, because of white balancing. Which is just a mistake you've made, because white balancing, as we all know, is something that we set and apply long after the raw file is created; it's not something "baked into" the raw file.
I will go further and say that none of us, including you, would call something a "raw file" if the white balance had already been applied to the stored raw brightness levels.
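To make the point concrete, here's a toy sketch of the distinction I'm drawing (the multiplier values and function names are invented for illustration, not any real camera's pipeline): the raw file keeps the sensor's 12-bit values untouched, and the white balance multipliers are applied later, in the converter's own working space.

```python
RAW_MAX = 4095  # a 12-bit sensor distinguishes levels 0..4095

def store_raw(sensor_counts):
    # The raw file stores the 12-bit sensor values as-is;
    # no white balance multiplier touches them.
    return [min(c, RAW_MAX) for c in sensor_counts]

def convert(raw_values, wb=(2.0, 1.0, 1.5)):
    # White balance is applied afterward, in the raw converter's
    # working precision (floating point here), per R, G, B channel.
    r, g, b = raw_values
    return (r * wb[0], g * wb[1], b * wb[2])

raw = store_raw([1000, 2000, 1500])  # every value fits in 12 bits
out = convert(raw)                   # (2000.0, 2000.0, 2250.0)
```

Note that a converted value may well exceed 4095 after multiplication, but that happens in the converter's working space, long after the raw file is written, which is exactly why white balance never forces extra bits into the raw file itself.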
I don't think that all your points are wrong just because you make a mistake now and then. The more people contribute, or do anything, the more mistakes they make; it's a cost of doing business. And you've got a ton of knowledge, good points, and valuable examples, like your D3 13-bit photo, very cool. And it's sure easy to find tons of mistaken things I've written on this forum.
But I am quibbling with you about your practice of bringing up lots of little-bit-right things that don't change a basic error (not unlike your occasional tendency to avoid acknowledging when you've written unnecessarily vague thread titles), which is needlessly confusing to readers, even if our livelihoods aren't at stake.
Russell: So I do not agree with your implication that white balance multiplier coefficients...make us need to record extra levels of pixel information in the original raw file.
Iliah: Once again, we are past photon count stage.
Yes, there are dark frame subtractions, non-linearity coefficients, etc., perhaps to be applied before storing raw data. But we're not past the white balance stage before the raw data is stored; hence white balancing multipliers don't cause any raw file data numbers to grow beyond 12 bits in length, as you stated. The things you are talking about correct the exact numbers between 0 and 4095 that need to be stored for a 12-bit sensor; they don't increase the number of different light levels the sensor can usefully/accurately distinguish.
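As a toy illustration of that last point (the black level and values are invented, not taken from any camera), corrections like a bias or dark subtraction remap which integer in 0..4095 gets stored; they never add new distinguishable levels:

```python
RAW_MAX = 4095  # 12-bit raw range

def apply_corrections(count, black_level=64):
    # Hypothetical in-camera correction: subtract a bias/black level
    # and clamp. The result is still an integer in 0..4095 --
    # the number of distinguishable levels is unchanged.
    corrected = count - black_level
    return max(0, min(corrected, RAW_MAX))

print(apply_corrections(100))   # 36 -- a different 12-bit number
print(apply_corrections(10))    # 0  -- clamped at the bottom
print(apply_corrections(5000))  # 4095 -- clipped, still 12 bits
```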
Iliah: To the matter, what is your explanation of incorrect colour and colour blotches in shadows?
Hmm, a new matter. Well, righting incorrect colors in shadows (i.e. unbalanced R, G, B responses to weak signals, caused primarily by the different amounts of energy-versus-noise the R, G, B pixel filters deliver to their respective sensors in dark areas) is something that would cause a camera maker to store, say, a 104 in the raw file when the uncorrected photon count from the sensor is 100. The maker wouldn't bother storing a value like 100.3.
Correcting subtle shadow color errors doesn't drive camera makers to burden raw files with larger, slower, more precise numbers, because of the reality that, in the shadows, where the noise levels approach or exceed the signal levels, there is no meaning to increasing the precision of recorded signal levels: that's the place where you have much less precise knowledge of the "true" signal anyway.
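The 104-versus-100.3 point can be sketched in a couple of lines (the 1.04 gain is an invented example, not a real camera's correction): the corrected value is rounded to a whole count, because shadow noise of several counts makes any fractional precision meaningless.

```python
def corrected_store(count, gain=1.04):
    # Hypothetical shadow-channel correction: scale the count and
    # round to a whole number. Read noise in the shadows is several
    # counts, so a fraction like 0.3 of a count carries no information.
    return round(count * gain)

print(corrected_store(100))  # 104, never 100.3
print(corrected_store(200))  # 208
```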
Iliah: You can check that directly. Take some shots at base ISO and ISO 1600, normal exposure and 4 stops underexposure, and examine the shadow values.
If a sensor can distinguish 4096 levels of brightness, 12 bits of brightness, and the camera maker chose to write out a raw file storing 32 bits of brightness (levels 0 to roughly 4 billion) for each pixel, the extra stored bits would not get rid of color blotches, and certainly would not get rid of incorrect shadow color balance (see above). The blotches come from a sensor that is barely responding at the low end to delicate changes in incoming weak signals, an unresponsiveness that no amount of increased raw file bits can meaningfully overcome. Blotches can be broken up in raw post-processing; there's no advantage to injecting false shadow variances and precision into the raw file.
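One way to see this is a toy simulation (all the numbers are invented: a "true" shadow signal of 3 counts with noise of 2 counts): record the same noisy reads at whole-count steps versus enormously finer steps, and both recover essentially the same estimate of the signal. The extra bits add digits, not information.

```python
import random

random.seed(0)

# Toy model of a barely-responding shadow region.
true_signal = 3.0
noise_sigma = 2.0
samples = [true_signal + random.gauss(0, noise_sigma) for _ in range(20000)]

def quantize(x, step):
    # Round each read to the nearest multiple of `step`.
    return round(x / step) * step

coarse = [quantize(s, 1.0) for s in samples]        # 12-bit-style whole counts
fine = [quantize(s, 1.0 / 2 ** 20) for s in samples]  # vastly finer steps

mean = lambda xs: sum(xs) / len(xs)
# Both quantizations land on essentially the same estimate; the
# difference is far smaller than the noise itself.
print(round(mean(coarse), 2), round(mean(fine), 2))
```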
Tripod manual focus.