Re: High MP count? Bring it in

Sony's newer 24 MP sensor has smaller pixels than the old 16 MP sensor. Wouldn't that make 14-bit even more useless?
Sony no longer provide linear raw data from the sensor. You can study the details and see how many effective bits are in fact provided. With the lossy compression Sony are using, raw levels are sacrificed mostly in the highlights and somewhat in the midtones, but not in the shadows.
In other words, is there a correlation between pixel size and bits required to capture data?
The correlation exists between maximum charge the pixel can hold, noise levels, and bits required.

--
http://www.libraw.org/
 
Sony's newer 24 MP sensor has smaller pixels than the old 16 MP sensor. Wouldn't that make 14-bit even more useless?
Sony no longer provide linear raw data from the sensor. You can study the details and see how many effective bits are in fact provided. With the lossy compression Sony are using, raw levels are sacrificed mostly in the highlights and somewhat in the midtones, but not in the shadows.
In other words, is there a correlation between pixel size and bits required to capture data?
The correlation exists between maximum charge the pixel can hold, noise levels, and bits required.
My question was directed at the other poster.
 
Sony's newer 24 MP sensor has smaller pixels than the old 16 MP sensor. Wouldn't that make 14-bit even more useless?
Sony no longer provide linear raw data from the sensor. You can study the details and see how many effective bits are in fact provided. With the lossy compression Sony are using, raw levels are sacrificed mostly in the highlights and somewhat in the midtones, but not in the shadows.
In other words, is there a correlation between pixel size and bits required to capture data?
The correlation exists between maximum charge the pixel can hold, noise levels, and bits required.
My question was directed at the other poster.
My answer was on a public forum to a question asked on a public forum.

--
http://www.libraw.org/
 
Iliah: Do you think last 2 bits in 12 bit Sony raws contain more information? Are you willing to go for 10-bit raw with Sony cameras?

Good question, and more interesting than the ones most of us have been quibbling about. Now DxOMark shows us that the Nikon D3 and the Nex 5 have an identical "Dynamic Range" of 12.2 EV at their base ISO. (By ISO 1600, the Nex 5 is down to 9.66 EV of dynamic range, while the D3 stays above 10 EV until ISO 3200; impressive.)

Plus, you have shown pretty clearly that there is at least some information in a D3 raw file pushed out to the 13th bit, not sure at what ISO. Wouldn't say there's info in that 13th bit worth increasing the raw file size 17% to store, but I do get your idea that there's something there. So, on balance, despite my general skepticism about "more info is always better" arguments, your demo plus the DxOmark info tells me that there is indeed a strong case to be made for hanging on to 12 bits.

And according to DxOmark what's good for the D3 is good for the nearly-identical-dynamic-range Nex 5. So my answer to your question is that I definitely would not go for only-10-bit raw files on our Nex cameras (or on a D3 below ISO 3200). If it was an easy option to set and forget, I would be happy to have an option for the Nex to store 12 bits below ISO 1600, and just read out and store 10 bits at ISO 1600 and above. But that would lead to some trickiness in all raw software for the cam, and if we only get one raw format per camera, would want 12 bits.

Would further guess that this typical, usefully-super-shadow-pushed backlit photo probably takes advantage of those bits 11 and 12 of the Nex raw file. No HDR here, this is just what the Nex 5 and other late-model large sensor cameras can do:





There's a tiny bit of detail in the hair and shoulder highlights of this image, but also in the darker fabric of the lady's top, even in the shadows. It's real hard to capture this kind of (unfortunately common) scene delicately, unless everything about your equipment and post-processing is just right and pushed to the limit.

Indeed am regularly getting shadow detail out of the Nex 5 that my older stuff (now consigned to a rock pile) couldn't dream of.

Hmm, my work would really best be done with a Nikon D3x or Pentax K5, if shadow detail were my only priority. But am spoiled by the price, convenience, and large, simplified controls--and by the tripod stability, and hence higher average sharpness, that comes along with really small, light cameras on modest tripods. Would rather carry around the nice, cheap Adorama 52 inch folding reflector, and look a little harder for the best light in the field, than bother changing cameras or having more than one body style or a heavier tripod to deal with.
 
Plus, you have shown pretty clearly that there is at least some information in a D3 raw file pushed out to the 13th bit, not sure at what ISO.
ISO 800.
Wouldn't say there's info in that 13th bit worth increasing the raw file size 17%
The demo was pre white balance, to reflect bits. As soon as white balance is applied, under this light the blue from the 13th bit goes up to the 11th and 10th bits. Under different lighting it will be red going up to the 12th and 11th bits.
I would be happy to have an option for the Nex to store 12 bits below ISO 1600, and just read out and store 10 bits at ISO 1600 and above.
You will discover it is often vice versa. The higher the ISO and the deeper the shadows, the more bits you may need.

--
http://www.libraw.org/
 
You will discover it is often vice versa. The higher the ISO and the deeper the shadows, the more bits you may need.
Plus Maxwell's demon to process them. ROTFL :D
Why do I have a feeling you are more familiar with Frank Dashwood than with that demon? Anyway, I like cats more. They are useful not only for thought experiments but for spectrography too.

--
http://www.libraw.org/
 
RussellInCincinnati [about Iliah's demo of Nikon D3 raw files with boosted brightness to show non-random least significant bits of raw data]:Wouldn't say there's info in that 13th bit worth increasing the raw file size 17% [i.e. from 12 bits to 14 bits stored in D3 raw file]

Iliah: The demo was pre white balance, to reflect bits. As soon as white balance is applied, under this light the blue from the 13th bit goes up to the 11th and 10th bits. Under different lighting it will be red going up to the 12th and 11th bits.

Hmm, camera raw files store values that are strictly a function of how many photons were detected by the sensor during the exposure period. And those photon counts are not affected by "white balance". White balance is a post-processing specification, a set of numbers suggesting one multiplier or weight to give to the red photons, another multiplier for the green photon count, and another factor to apply to the blue photon count. Speaking here as someone who has written a raw file-to-TIFF converter program (in humble Visual Basic 6), for the early adopters of the Minolta Dimage 7 camera in 2001, who wanted something more straightforward than the Dimage viewer.

Now, those weights or multipliers may end up, after being applied to the sensor photon counts, manufacturing sets of R,G,B brightness values that range from zero to some number like 65000 that requires, say, 14 to 16 bits to represent accurately. So agreed, you need a lot of bits to store every possible color-balanced viewable image color specification in a final, viewable (say TIFF) image file.

But none of this means the sensor raw file has to think about such numbers. If a sensor can only store photon counts between "essentially black level" and "4095 times brighter than essentially black level" without overflowing, you only need store 12 bits of raw data per pixel to record those different levels.

Now that maximum "4095" level, after going through white balance, might be multiplied in the final image to represent a number like "65000 times brighter than perfect black" (where your higher bit counts might come into play). But the definition of a raw file is that you are storing light signal strengths encountered at each pixel, not possibly-wider-ranging final perceived brightnesses. And I believe that there are not many more than 4096 different levels that a Nikon D3 can distinguish at each pixel, regardless of ISO amplifier circuitry or white balance post-processing specs.

So I do not agree with your implication that white balance multiplier coefficients stored in the header of a raw file, that will be applied to raw photon counts in post-processing, make us need to record extra levels of pixel information in the original raw file.
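The distinction being argued can be sketched in a few lines. The multipliers and the 16-bit output scaling here are hypothetical, not any camera's real pipeline; the point is that the raw file only ever stores 12-bit counts, and white balance scales them later, in the developed output.

```python
# Hypothetical numbers: a 12-bit sensor and post-processing white balance.
# White balance multipliers scale the stored counts AFTER the raw file is
# read; the raw file itself never needs to hold the scaled values.

RAW_MAX = 4095                               # 12-bit sensor: largest storable count
wb_gains = {"R": 2.1, "G": 1.0, "B": 1.6}    # illustrative multipliers

def develop(raw_value, channel):
    """Apply white balance in post-processing and map to a 16-bit output."""
    balanced = raw_value * wb_gains[channel]
    # Scale so the largest possible balanced value maps to 65535.
    max_balanced = RAW_MAX * max(wb_gains.values())
    return round(balanced / max_balanced * 65535)

# The stored raw data always fits in 12 bits...
assert all(v <= RAW_MAX for v in (0, 2048, 4095))
# ...even though the developed output occupies a wider numeric range.
full_scale_red = develop(4095, "R")
```

The sketch shows why wide output numbers say nothing about required raw storage: `develop` produces 16-bit values, but nothing wider than 12 bits is ever written to the (simulated) raw file.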

RussellInCincinnati: I would be happy to have an option for the Nex to store 12 bits below ISO 1600, and just read out and store 10 bits at ISO 1600 and above.

Iliah: You will discover it is often vice versa. The higher the ISO and the deeper the shadows, the more bits you may need.


But higher ISO just changes the brightness meaning of the, say, 4096 different levels of light a pixel may be able to distinguish from noise. Amplifying (i.e. ISO-changing) or color-balancing (post-processing weighting) does not change the number of brightness levels a given sensor pixel can distinguish above noise.
 
camera raw files store values that are strictly a function of how many photons were detected by the sensor during the exposure period.
Not at all. Raw files are affected by noise, cross-talk, and leaks; they can't store more than they can hold; and finally, they are affected by in-camera pre-cooking, analogue or digital.
White balance is a post-processing specification
Not always. In many cases certain pre-balancing is applied before raw is recorded.
a set of numbers suggesting one multiplier or weight to give to the red photons, another multiplier for the green photon count, and another factor to apply to the blue photon count
Photon count is in the past; the processing pipeline has moved forward, applying certain conversion coefficients and gains.
So I do not agree with your implication that white balance multiplier coefficients stored in the header of a raw file, that will be applied to raw photon counts in post-processing, make us need to record extra levels of pixel information in the original raw file.
Once again, we are past photon count stage.

To the matter, what is your explanation of incorrect colour and colour blotches in shadows?
higher ISO just changes the brightness meaning of the, say, 4096 different levels of light a pixel may be able to distinguish
You can check that directly. Take some shots at base ISO and ISO 1600, normal exposure and 4 stops underexposure, and examine the shadow values.

--
http://www.libraw.org/
 
Iliah: Do you think last 2 bits in 12 bit Sony raws contain more information? Are you willing to go for 10-bit raw with Sony cameras?

Good question, and more interesting than the ones most of us have been quibbling about. Now DxOMark shows us that the Nikon D3 and the Nex 5 have an identical "Dynamic Range" of 12.2 EV at their base ISO. (By ISO 1600, the Nex 5 is down to 9.66 EV of dynamic range, while the D3 stays above 10 EV until ISO 3200; impressive.)
I think that the question of preserving only 10 raw bits was somewhat facetious, as we all know that for most current DSLRs, with a black read noise more than 11.6 stops below the bright clipping limit, we need more than that (when considered on a per-photosite basis). Also, as Emile (ejmartin) has stated, just boosting a raw-converted image from an arbitrary camera is not conclusive as to the need for greater bit depths, since one can pull data from a one-bit-deep raw if it is oversampled enough; the raw demosaicing and the original size of the image must also be taken into account: if an image is reduced in size enough, the effect is that of the new, smaller image having a greater bit depth. Thus, to make any comment about Iliah's posted Nikon D3 ISO 800 image, we need to know the zoom of the view, the raw converter used, and the raw conversion settings, all of which could affect the result.
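The oversampling point can be illustrated with a toy simulation (all numbers arbitrary): even one-bit samples recover a smooth intermediate level once enough of them are averaged, because shot noise acts as natural dither.

```python
# Sketch: heavy oversampling trades resolution for effective bit depth.
import random

random.seed(42)

true_level = 0.37      # fraction of full scale, far below 1-bit precision
n_pixels = 100_000     # heavy oversampling

# Each 1-bit "pixel" fires with probability equal to the underlying level.
samples = [1 if random.random() < true_level else 0 for _ in range(n_pixels)]
estimate = sum(samples) / n_pixels

# Averaging (i.e. downsizing) recovers the level to ~2 decimal places,
# even though each individual sample carried only a single bit.
assert abs(estimate - true_level) < 0.01
```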

In fact, one can demonstrate that much smaller bit depths than commonly assumed are sufficient to preserve very low-contrast detail. I did a study about six months ago concerning this, in which I used simulated grey-level images to demonstrate why DxOMark can state that a camera such as the Pentax K-x, with a raw bit depth of only 12 bits, can have a Dynamic Range (DR) of 12.2 stops. It turns out that a raw bit depth of 12 is sufficient to express a DR of up to about 13.7 stops. The links to my posts concerning this are:

http://forums.dpreview.com/forums/read.asp?forum=1018&message=38110821
http://forums.dpreview.com/forums/read.asp?forum=1018&message=38110831
http://forums.dpreview.com/forums/read.asp?forum=1018&message=38110884
http://forums.dpreview.com/forums/read.asp?forum=1018&message=38131489

with the link to the post giving the links to the source code for the program used to generate the simulated images and to the development system(s) that could be used to modify or compile it for programmers at:

http://forums.dpreview.com/forums/read.asp?forum=1018&message=38111497

For those who don't want to read through those multiple posts, a summary of the results is as follows:

These images are as from a grey-level camera with the standard tone response curve applied (no 'S'-curve boosts). They show a continuous gradient from black to full bright across the range, boosted sufficiently that human vision can make out very small differences of 0.29 Least Significant Bit (LSB) levels in the raw readings, going back down towards dark vertically, with a two-by-two-pixel checkerboard pattern of only 0.29 of the (simulated) raw LSB levels applied only to the upper right quadrant. Click the image once or use the "Original" link to get a 100% zoom view, and you will see the faint checkerboard pattern, showing that one can see detail in boosted images at about 1.8 stops below the LSB level:





For doubters who say "yes, but the ability to see that texture will depend on where it is in the range of that LSB", here is a "worst case", where the non-gradient background is such that the added pattern has the least effect, still showing a very faint pattern (again, click once for 100%):





and here is the "best case" where the flat background level is at the half bit level, thus showing a much stronger texture:





which just serves to show that this 0.29 LSB step is the limit of how far this can be pushed for full 100% zoom views.
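The core of the simulated-image experiment can be reproduced in a few lines (a toy model with illustrative numbers): roughly one LSB of read noise dithers the signal, so a 0.29 LSB difference between two patches survives rounding to whole raw levels and reappears on averaging.

```python
# Toy model: sub-LSB detail preserved through quantization by noise dither.
import random

random.seed(1)

def quantized_mean(level_lsb, n=200_000, noise_sigma=1.0):
    """Mean of noisy samples rounded to integer raw levels."""
    total = 0
    for _ in range(n):
        total += round(level_lsb + random.gauss(0.0, noise_sigma))
    return total / n

a = quantized_mean(100.00)   # background patch
b = quantized_mean(100.29)   # patch carrying the 0.29 LSB "texture"

# The sub-LSB difference survives quantization on average.
assert 0.2 < (b - a) < 0.4
```

Averaging here plays the role of the eye integrating over the checkerboard patches in the boosted images; without the noise dither, both patches would round to identical values and the texture would vanish.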

I believe that if Iliah's D3 ISO 800 example is a 100% view, part of the reason we can see detail after a 13-stop EV boost, along with the effect to which Emile refers, is this effect, as the black read noise of the D3 at ISO 800 limits the DR to about 11.36 stops, although the raw conversion processing (or viewing at a reduced size) may be reducing the noise somewhat.
Plus, you (Iliah - GBG) have shown pretty clearly that there is at least some information in a D3 raw file pushed out to the 13th bit (ISO 800 - GBG). Wouldn't say there's info in that 13th bit worth increasing the raw file size 17% to store, but I do get your idea that there's something there. So, on balance, despite my general skepticism about "more info is always better" arguments, your demo plus the DxOmark info tells me that there is indeed a strong case to be made for hanging on to 12 bits.
Yes, there is a very good argument for hanging onto at least 12 bits for raws, but not for 14 as I tried to demonstrate here.

My conclusion? I don't see the worth of the extra storage and write-time requirement to preserve the extra two bits going from 12 bits to 14, when one will never notice their absence with any current sensor, and when they would even potentially be of use only at ISOs below 400, and only when using huge boosts of the deep shadow levels more than about 11 stops below the bright clipping limit. Even if sensors are developed with a black read noise at the practical minimum of about a third that of the best current sensors, the extra data would be useful in so few types of images that, again, the extra storage space and especially the increased write times would not be worthwhile.

Regards, GordonBGood
 
GordonBGood wrote:

Nice to see you here :) I haven't read the entire post, but two simple questions:

(1) What's the relationship between pixel size and bits required for RAW?

(2) Do we need more bits or fewer bits as ISO goes up? Someone quoted Thom Hogan, who claims 14-bit is more useful at high ISO, which seems "strange" to say the least.
 
GordonBGood wrote:

Nice to see you here :) I haven't read the entire post, but two simple questions:

(1) What's the relationship between pixel size and bits required for RAW?
I don't believe I can really answer that, as the number of bits required is actually related more to the level of the black read noise as a ratio to the full-well capacity. Say, for example, that a new 48 MP sensor had a full-well electron capacity at base ISO of about 14,189 electrons, and further development was done on the technology so that the black read noise was equivalent to one electron; then the bit depth required would still be 12 bits. This is irrespective of the actual sensor (and thus the individual photosite) size; it just depends on this ratio. In order to actually require 14 bits, the full-well capacity would need to be 56,756 for that same one-electron black read noise, for a DR of about 15.3 stops as per an engineering definition. Again, the number overstates the practical benefit, as those two extra LSBs would only contain meaningful data for the very dark levels of the lowest two "stops" of ISO sensitivity.
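As a quick sanity check on the ratio argument, here it is in code (the 48 MP sensor's 14,189-electron full well is, as stated above, hypothetical):

```python
# DR in stops follows from full-well capacity over read noise alone;
# pixel size never enters the calculation.
import math

def engineering_dr_stops(full_well_e, read_noise_e):
    """Engineering dynamic range: log2 of full-well / read noise."""
    return math.log2(full_well_e / read_noise_e)

dr = engineering_dr_stops(14189, 1.0)   # the hypothetical 48 MP sensor above
# Per the simulation results quoted earlier, a 12-bit raw can express up to
# about 13.7 stops of DR, so this sensor would still fit in a 12-bit raw.
assert 13.7 < dr < 13.9
```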
(2) Do we need more bits or fewer bits as ISO goes up? Someone quoted Thom Hogan, who claims 14-bit is more useful at high ISO, which seems "strange" to say the least.
One needs less bit depth as ISO sensitivity goes up. The lower bits contain more and more random data as the ISO gain increases and could be eliminated (by rounding, not truncation), with the dithering required to avoid banding supplied by injecting randomness back into the bits when they are added back in.

Much as many find what Thom writes useful, I don't think a high level of technical expertise is one of his main abilities. I think he is likely reasoning that one needs to preserve the bits to avoid seeing banding or stepping in the noise patterns, while neglecting that re-injecting dithering is possible.
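The round-then-redither scheme described above might look like this (a toy sketch, not any camera's actual format):

```python
# Sketch of high-ISO bit reduction: drop two noise-only bits by rounding,
# then re-inject randomness in the dropped bits on decode so no banding
# appears. Purely illustrative.
import random

random.seed(7)

def encode_10bit(raw12):
    """Round (not truncate) a 12-bit value to 10 bits."""
    return min((raw12 + 2) // 4, 1023)

def decode_12bit(raw10):
    """Expand back to 12 bits, re-injecting dither in the dropped bits."""
    return raw10 * 4 + random.randint(-2, 1)

original = 2777
restored = decode_12bit(encode_10bit(original))
# The round-trip error never exceeds the span of the dropped bits, which
# at high ISO were dominated by random noise anyway.
assert abs(restored - original) <= 3
```

Truncation instead of rounding would introduce a systematic half-step darkening; re-injecting the dither on decode is what prevents the smooth gradients from posterizing.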

Regards, GordonBGood
 
Much as many find what Thom writes useful, I don't think a high level of technical expertise is one of his main abilities. I think he is likely reasoning that one needs to preserve the bits to avoid seeing banding or stepping in the noise patterns, while neglecting that re-injecting dithering is possible.
No, I think Thom is just going by what he sees in the D300, without recognizing that the improvement in that case is due to other factors (reading out the sensor more slowly).

One more question: headofdestiny posted this link to K-5

http://www.luminous-landscape.com/forum/index.php?topic=49200.msg409770#msg409770

with an assertion that the K-5 benefits from 14-bit ...

Any comment on that example in that link?
 
[topic: does white balancing increase the size of pixel data numbers that need be stored in a raw file? Does white balance drive a camera maker to store 14 bits of raw data on a camera with sensors distinguishing far fewer than 16,384 different light levels?]

Russell: camera raw files store values that are strictly a function of how many photons were detected by the sensor during the exposure period.

Iliah: Not at all. Raw files are affected by noise, cross-talk, leaks,...they can't store more then they can,


Yes, pixels can always be overexposed, overtopping whatever maximum number they can report, and sensor defects and sensor aging could also end up affecting the raw file data. But why are we talking about this? Just to make it sound like I don't know as much as you, because there's some topic I haven't mentioned? Would it discredit your arguments about when white balancing is applied if I pointed out that you never mentioned the effects of cosmic rays on raw data?

Iliah: and finally, they are affected by in-camera pre-cooking, analogue or digital.

Yes, raw files might reflect subtracted-out biases and/or dark frames, mapped-out bad pixels, etc. But am quibbling with your mistaken claim that white balancing coefficients are baked into the raw file pixel data numbers, at times inflating those numbers beyond the inherent sensor dynamic range.

Russell: White balance is a post-processing specification

Iliah: Not always. In many cases certain pre-balancing is applied before raw is recorded.


Seems like you're just searching for vague, non-negatable things to say, relevant or not. The topic here is when white balancing is applied, since that is the kind of balancing you said the per-pixel raw file data has to reflect with larger numbers than the original photon counts.

Iliah: Photon count is in the past, processing pipeline moved forward applying certain conversion coefficients and gains.

Yes, a raw file could store photon counts that have been affected by non-linearity coefficients and gains. But the topic is whether or not your statement was true: that a sensor that can only distinguish 4096 levels of light needs a greater-than-12-bit raw file, storing numbers greater than 4095, because of white balancing. That is just a mistake you've made, because white balancing, as we all know, is something we set and apply long after the raw file is created; it's not something "baked into" the raw file.

Will go further and say that all of us, including you, would not call something a "raw file" if the white balance has already been applied to the stored raw brightness levels.

I don't think all your points are wrong just because you make a mistake now and then. The more people contribute, do anything, the more mistakes they make; it's a cost of doing business. And you've got a ton of knowledge and good points, and valuable examples, like your D3 13-bit photo, very cool. And it's sure easy to find tons of mistaken things I've written on this forum.

But I am quibbling with your practice of bringing up lots of partly-right things that don't change a basic error (not unlike your occasional tendency to avoid acknowledging when you've written unnecessarily vague thread titles), which is needlessly confusing to readers, since our livelihoods aren't at stake.

Russell: So I do not agree with your implication that white balance multiplier coefficients...make us need to record extra levels of pixel information in the original raw file.
Iliah: Once again, we are past photon count stage.

Yes, there's dark frame subtractions, non-linearity coefficients, etc etc perhaps to be applied before storing raw data. But we're not past the white balance stage before the raw data is stored, hence white balancing multipliers don't cause any raw file data numbers to grow beyond 12 bits in length, as you stated. The things you are talking about correct the exact numbers between 0 and 4095 that need be stored for a 12-bit sensor, they don't increase the number of different light levels the sensor can usefully/accurately distinguish.

Iliah: To the matter, what is your explanation of incorrect colour and colour blotches in shadows?

Hmm, a new matter. Well, righting incorrect colors in shadows (i.e. unbalanced R,G,B responses to weak signals, caused primarily by the different amounts of energy-versus-noise the R,G,B pixel filters deliver to their respective sensors in dark areas) is something that would cause a camera maker to store, say, a 104 in the raw file when the uncorrected photon count from the sensor is 100. The maker wouldn't bother storing a value like 100.3.

Correcting subtle shadow color errors doesn't drive camera makers to burden raw files with larger, slower, more precise numbers, because of the reality that in the shadows, where the noise levels approach or exceed the signal levels, there is no meaning to increasing the precision of recorded signal levels; that's the place where you have much less precise knowledge of the "true" signal anyway.

Iliah: You can check that directly. Take some shots at base ISO and ISO 1600, normal exposure and 4 stops underexposure, and examine the shadow values.

If a sensor can distinguish 4096 levels of brightness, 12 bits of brightness, and the camera maker chose to write out a raw file storing 32 bits of brightness (levels 1 to 2 billion) for each pixel, the extra stored bits would not get rid of color blotches, and certainly not get rid of incorrect shadow color balance (see above). The blotches come from a sensor that is barely responding at the low end to delicate changes in incoming weak signals, an unresponsiveness that no amount of increased raw file bits can meaningfully overcome. Blotches can be broken up in raw post-processing; there's no advantage to injecting false shadow variances and precision into the raw file.

Tripod manual focus.



 
Especially like your comment about injecting variances into data: you're talking about breaking up color blotches in dark areas, which, however valuable, there is no advantage to doing before the raw data is stored (and which would certainly violate the spirit and point of "raw data").

And of course as you have rather splendidly spelled out, there is no necessary relation between the number of bits stored in raw data and the range of light levels those bits may represent in a given image. A camera maker may choose to store only 2 bits of data at each pixel, with a zero indicating black, a 1 indicating 1 photon, a 2 indicating cloudy bright day on a piece of notebook paper, and a 3 representing the brightness inside of the sun's nuclear reactions.
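That thought experiment scales up to something practical: a companding curve can cover a wide linear range in few bits as long as the coding error stays below the shot noise. Here is a hypothetical square-root curve, loosely in the spirit of in-camera lossy raw curves (the 8-bit code width and 14-bit range are illustrative choices, not any real camera's format):

```python
# Hypothetical square-root companding: store a 14-bit-range count in an
# 8-bit code, with coding error hidden under photon shot noise.
import math

FULL_SCALE = 16383   # 14-bit linear range

def encode(count):
    """Map a linear count to an 8-bit code via a square-root curve."""
    return round(255 * math.sqrt(count / FULL_SCALE))

def decode(code):
    """Invert the curve back to a linear count."""
    return (code / 255) ** 2 * FULL_SCALE

count = 10000
err = abs(decode(encode(count)) - count)
shot_noise = math.sqrt(count)   # ~100 electrons of shot noise at this level
assert err < shot_noise         # the coding error hides under the noise
```

The square root matches the curve's step size to the shot noise, which grows as the square root of the signal; this is why nonlinear raw encodings can shed bits in the highlights (as noted of Sony's compression earlier in the thread) without visibly losing information there.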
 
