Less Dynamic Range When You Shoot a FF Camera in Super-35 Mode?

TheOwl360

Do full-frame cameras in the Sony lineup lose dynamic range when they are shot in super-35 mode?

If so, why?

Thanks in advance for any constructive feedback.
 
I'm surprised that so many responses indicate that you will not lose dynamic range in crop (Super 35) mode. I'm far from being an expert on the topic, but just from a logical point of view:

What is the difference between a FF and an APS-C sensor: sensor area

What is the difference in dynamic range between a FF and an APS-C sensor: roughly 1 EV

What happens in crop mode: the active sensor area is reduced to the size of an APS-C sensor

So my only conclusion is: YES, you will lose dynamic range in Super-35 mode. It's the same as if you pulled an A6400 out of the drawer and used it. Results from photonstophotos.net confirm that.

[Attached image: dynamic range comparison chart from photonstophotos.net]
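For what it's worth, here is the back-of-envelope version of that logic as a quick script (the 1.5 crop factor is an assumption, and this only accounts for the area difference, not read noise or sensor generation):

```python
# Back-of-envelope sketch: if Super-35 crop only discards sensor area, the
# noise-normalized ("photographic") DR should drop by log2 of the crop factor.
import math

crop_factor = 1.5                     # assumed FF -> Super 35 / APS-C crop
area_ratio = crop_factor ** 2         # ~2.25x less light gathered for the same scene
noise_ratio = math.sqrt(area_ratio)   # relative shot-noise penalty at the same output size
pdr_loss_ev = math.log2(noise_ratio)  # stops of PDR lost

print(f"area ratio {area_ratio:.2f}x -> roughly {pdr_loss_ev:.2f} EV less PDR")
# Geometry alone gives ~0.6 EV; measured FF vs APS-C gaps closer to 1 EV also
# reflect differences in read noise and sensor generation.
```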
People confuse what dynamic range actually is. In my opinion it has nothing to do with noise, but with capturing the extremes of light. But for the same-FOV image the FF has less noise, and the testers use noise as the way of measuring dynamic range, which is wrong IMO.
Looks like you don’t understand it after all. Noise is exactly the thing that prevents detail from being detected in dark areas of the scene. Not sure what you think it is, if not noise?
I shot a pro shoot with my a6300 last week for kicks and didn't notice any drop in DR at all. In fact the images were spectacular, even compared to my a7R2.

Ds
Sure. That camera would have more than enough DR for the average “pro shoot”. But that doesn’t make your previous sentences right. ;-)
How do you figure that? Black is black and white is white. Or am I missing something? I shoot black costumes on white backgrounds.
Even black velvet still has sufficient reflectance (ca. 1%) to be easily visible in an otherwise properly exposed shot. Similarly, non-overexposed whites retain at least some structure. So there should be no pure whites or pure blacks.

Of course this may not be visible on your computer display. Typical computer displays (and JPEGs in general) can only reproduce 8 bits or less. Since your a6300 has more than 10 bits (at base ISO), you may not notice any impact on DR at all.
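As a rough sanity check (just a sketch with the standard sRGB curve; the 1% figure is relative to diffuse white and the exposure is idealized):

```python
# Sketch: where does ~1% reflectance land in an 8-bit sRGB JPEG?
def srgb_encode(linear: float) -> float:
    """Standard sRGB transfer function for linear values in [0, 1]."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

black_velvet = 0.01                            # ~1% of diffuse white (assumed exposure)
print(round(srgb_encode(black_velvet) * 255))  # ~25 of 255 -> visibly above pure black
```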
My monitor is 10-bit, but "bits" have nothing to do with DR.

Ds
0000 1000 0100 1100 0010 1010 0110 1110 0001

What's this?

--
The confusion starts when the scientists can't agree amongst themselves. Henry F
 
My monitor is 10-bit, but "bits" have nothing to do with DR.
Call them stops, if you like. In computer parlance, information is counted in bits (or bytes; a byte is exactly 8 bits). One additional bit is what is needed to store the information for one additional stop of DR.
 
Why can you accept that your "monitor is 10-bit", when counting DR gradation information in bits is an issue for you?
 
Of course cropping reduces dynamic range. This should be intuitive. Look at a grayscale card from across the room. Roll a sheet of paper into a tube and look through it to “crop” your vision if you want to. Now you are just using a small portion of your retina to see the chart. Can you tell the two darkest tones apart from each other? How about the two brightest? If you still can, you are not far enough away. Now bring the chart to normal reading distance. Now that you are using all of your retina, you should be able to see much finer gradations of tone in shadows and highlights.
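If you want to play with that idea numerically, here is a toy sketch (purely illustrative, not a DR measurement; the sample counts are arbitrary):

```python
# Toy model of the paper-tube experiment: more samples of the same patch
# (a bigger share of the retina, or more sensor pixels per output pixel)
# means a steadier average, so nearby tones become easier to separate.
import random
import statistics

def patch_estimate(true_level, n_samples, noise_sd=5.0):
    """Average n noisy readings of a patch with the given true brightness."""
    return statistics.mean(random.gauss(true_level, noise_sd) for _ in range(n_samples))

random.seed(1)
for n in (10, 1000):                          # "through the tube" vs. full view, loosely
    estimates = [round(patch_estimate(100, n), 1) for _ in range(5)]
    print(n, estimates)
# With 1000 samples the estimates cluster tightly around 100; with only 10 they
# scatter by a few units, so two patches that close together would blur into one.
```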
 
Call them stops, if you like. In computer parlance, information is counted in bits (or bytes; a byte is exactly 8 bits). One additional bit is what is needed to store the information for one additional stop of DR.
That's not correct at all. So I suppose film has unlimited stops of DR, because film is sort of analogue? Have a read and educate yourself on the difference.


Ds
 
Why can you accept that your "monitor is 10-bit", when counting DR gradation information in bits is an issue for you?
Because it has nothing to do with measuring DR.


--
The confusion starts when the scientists can't agree amongst themselves. Henry F
 
Of course cropping reduces dynamic range. This should be intuitive. Look at a grayscale card from across the room. Roll a sheet of paper into a tube and look through it to “crop” your vision if you want to. Now you are just using a small portion of your retina to see the chart. Can you tell the two darkest tones apart from each other? How about the two brightest? If you still can, you are not far enough away. Now bring the chart to normal reading distance. Now that you are using all of your retina, you should be able to see much finer gradations of tone in shadows and highlights.
The pixel size determines the colour accuracy and total DR; how many pixels there are just defines grain at a certain viewing distance. It doesn't determine what the brightest and darkest extremes of the image (DR) are. Your analogy describes pixel size, not light recording from brightest to darkest. And BTW, veiling glare and pixel bleeding will be the limiting factor anyway.

Ds
 
Dspider wrote

0000 1000 0100 1100 0010 1010 0110 1110 0001

What's this?
In the context of this discussion, this looks like a 9-pixel monochrome image stored with 4-bit depth, and it appears to have between 3 and 4 stops of dynamic range, assuming zero noise.
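For anyone who wants to check that reading, a small sketch (my own decoding of the string, under the zero-noise assumption):

```python
# Decode the string that way: each 4-bit group is one monochrome pixel, and
# with zero noise the "range" runs from the smallest non-zero level to the peak.
import math

groups = "0000 1000 0100 1100 0010 1010 0110 1110 0001".split()
pixels = [int(g, 2) for g in groups]          # [0, 8, 4, 12, 2, 10, 6, 14, 1]

peak = max(pixels)
floor = min(p for p in pixels if p > 0)       # 1
print(math.log2(peak / floor))                # ~3.81 -> between 3 and 4 stops
```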
Good assumption :-) I was seeing if I could still remember binary logic from 40 years ago :-) when I used to make illegal radio scanners for PLL circuits in radio transceivers :-)

Ds
 
Call them stops, if you like. In computer parlance, information is counted in bits (or bytes; a byte is exactly 8 bits). One additional bit is what is needed to store the information for one additional stop of DR.
Not quite. If linearly encoded, each additional bit can encode roughly one more stop of DR.

If nonlinearly encoded (sRGB gamma, log encoding, etc), you can encode many more stops of DR in a perceptually lossless manner than with linear encoding.

Of course then you start going down the rabbit hole of different DR metrics - PDR (normalized for viewing area) vs. EDR (per photosite), and also your definition of range - most notably there are a lot of different definitions of the "floor" for a DR measurement. Cropping will not change EDR but will change PDR.

One definition for digital encoding is to measure the ratio of the peak to the smallest possible encoded value. 8-bit sRGB can achieve quite an impressive number here.

Another is to define the "floor" as where your quantization error is less than the threshold of human vision, which is roughly somewhere between 2% and 3% of the luminance of the adjacent value. 1/log2(1.03) ≈ 23 code values per EV, 1/log2(1.02) ≈ 35 code values per EV. By this metric, sRGB does pretty poorly, and it is possible to do MUCH better even with 8-bit encoding.
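A quick script to verify both numbers (nothing camera-specific, just the arithmetic behind the claims above; the bit depths and thresholds are the ones discussed here):

```python
# Checking both claims: linear encoding gives roughly one stop per extra bit,
# and a k% banding threshold needs about 1/log2(1+k) code values per EV.
import math

for bits in (8, 10, 12, 14):                  # arbitrary example bit depths
    stops = math.log2(2 ** bits - 1)          # smallest non-zero step up to the peak
    print(f"{bits}-bit linear: {stops:.2f} stops")

for threshold in (0.03, 0.02):
    print(f"{threshold:.0%} threshold: {1 / math.log2(1 + threshold):.0f} code values per EV")
```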

--
Context is key. If I have quoted someone else's post when replying, please do not reply to something I say without reading text that I have quoted, and understanding the reason the quote function exists.
 
That's not correct at all. So I suppose film has unlimited stops of DR, because film is sort of analogue?
It would have unlimited stops, if it were infinitely precise. Which it isn't.
Sorry, nothing in there I don't already know.
 
Not quite. If linearly encoded, each additional bit can encode roughly one more stop of DR.
So we agree on what I intended to say. I'm sorry if I confused anyone.
If nonlinearly encoded (sRGB gamma, log encoding, etc), you can encode many more stops of DR in a perceptually lossless manner than with linear encoding.
"Perceptually lossless" is still lossy in terms of information content. You could do the very same in analog formats, e.g. what Dolby did for Auto tapes.
Of course then you start going down the rabbit hole of different DR metrics - PDR (normalized for viewing area) vs. EDR (per photosite), and also your definition of range - most notably there are a lot of different definitions of the "floor" for a DR measurement. Cropping will not change EDR but will change PDR.
I know. Given enough resolution, an EDR of merely one stop could be used to reach almost arbitrarily high PDRs. That is what is used, e.g., in fairly accurate 1-bit DAC audio circuits with ≥16 bits of effective DR.

As a side note, the relative size of the local area used to calculate PDR doesn't seem standardized. At least so far I have only seen ad-hoc definitions.
One definition for digital encoding is to measure the ratio of the peak to the smallest possible encoded value. 8-bit sRGB can achieve quite an impressive number here.
I'd use the smallest encodable step, not the smallest absolute value. I don't like dividing by zero, but maybe that's just me.
Another is to define the "floor" as where your quantization error is less than the threshold of human vision, which is roughly somewhere between 2% and 3% of the luminance of the adjacent value.
Interesting. I haven't seen anyone defining DR in terms of human perception. You got a link for that "2% and 3%" figure? I'd like to see the list of caveats:-)

In this line of argument there is also shot noise, which at higher EVs will overwhelm all other types of noise in a modern camera, while at low EVs thermal noise and read noise will overwhelm both your 3% human accuracy figure (which likely isn't accurate at those levels anyway) and the linear encoding precision.

Of course none of that is relevant for calculating DR. As you probably know, the floor normally used is the technical noise floor.
1/log2(1.03) ≈ 23 code values per EV, 1/log2(1.02) ≈ 35 code values per EV. By this metric, sRGB does pretty poorly, and it is possible to do MUCH better even with 8-bit encoding.
No one said that linear encoding is efficient in terms of only encoding what humans can perceive (or what is meaningful in terms of the underlying noise). Besides, I assume by sRGB you mean JPEG/sRGB (which has even less than 8-bit EDR). E.g. TIFF/sRGB comes in 16-bit flavours, which may be better than human vision across the range.
 
I know. Given enough resolution, an EDR of merely one stop could be used to reach almost arbitrarily high PDRs. That is what is used, e.g., in fairly accurate 1-bit DAC audio circuits with ≥16 bits of effective DR.
Yup. Sigma-delta modulation (or delta-sigma; I've seen both orderings used) is impressive. I've actually done sigma-delta modulation to improve the bit depth of an LED system - it was using software PWM on an 8-bit AVR microcontroller.
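Not the actual AVR code, but here is a minimal sketch of the idea in Python, assuming a hypothetical target value that sits between two 8-bit codes:

```python
# Minimal first-order sigma-delta sketch: quantize a fine-grained target onto a
# coarse grid while carrying the quantization error forward, so the *average*
# output resolves finer than one code step.
def sigma_delta(target, n_steps, levels=256):
    """Emit n_steps coarse samples (0..levels-1) whose mean approximates target."""
    acc, out = 0.0, []
    for _ in range(n_steps):
        acc += target                  # accumulate the desired value
        q = max(0, min(levels - 1, round(acc)))
        acc -= q                       # keep the residual error for the next step
        out.append(q)
    return out

samples = sigma_delta(100.3, 1000)     # hypothetical brightness between two 8-bit codes
print(sum(samples) / len(samples))     # ~100.3, i.e. effective depth beyond 8 bits
```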
As a side note, the relative size of the local area used to calculate PDR doesn't seem standardized. At least so far I have only seen ad-hoc definitions.
Yup, another thing that isn't standardized, so it's only valid to compare PDRs with well-defined metrics that can either be readily converted/adjusted for, or that were all measured the same way. (For example, I trust Bill Claff's metric for the most part, with the caveat that he uses the camera manufacturer's ISO rating for his X axis, which can invalidate results when comparing across brands.)
One definition for digital encoding is to measure the ratio of the peak to the smallest possible encoded value. 8-bit sRGB can achieve quite an impressive number here.
I'd use the smallest encodable step, not the smallest absolute value. I don't like dividing by zero, but maybe that's just me.
That's actually what I meant, thanks for the clarification.
Another is to define the "floor" as where your quantization error is less than the threshold of human vision, which is roughly somewhere between 2% and 3% of the luminance of the adjacent value.
Interesting. I haven't seen anyone defining DR in terms of human perception. You got a link for that "2% and 3%" figure? I'd like to see the list of caveats:-)

In this line of argument there is also shot noise, which at higher EVs will overwhelm all other types of noise in a modern camera, while at low EVs thermal noise and read noise will overwhelm both your 3% human accuracy figure (which likely isn't accurate at those levels anyway) and the linear encoding precision.

Of course none of that is relevant for calculating DR. As you probably know, the floor normally used is the technical noise floor.
I used to have a link; I'll try to dig it up later this week, but I last saw it referenced in a white paper describing the HLG standard. I saw a separate research paper that came to a roughly similar conclusion as far as a "typical" banding threshold goes. And yes, there are caveats - such as the threshold changing with absolute luminance, and the obvious color vs. luminance stuff. https://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-BT.2390-3-2017-PDF-E.pdf is the first link I found which touches on the subject, but it doesn't discuss the sRGB/Rec709 limitations with 8-bit as clearly as another reference I once found.

AHA - Found it! - https://downloads.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP309.pdf - they use 2% in their document, which gives just over 5 stops for Rec709 with 8-bit legal range luma.
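Not the WHP309 derivation, but as a crude upper bound you can check yourself: even an ideal perceptually-uniform 8-bit legal-range curve couldn't cover more than a handful of stops at those thresholds (the 219 usable codes are the only input here):

```python
# Crude upper bound (not the WHP309 method): with only 219 legal-range codes,
# even an ideal perceptually-uniform curve runs out of codes after a few stops.
import math

usable_codes = 235 - 16                        # 8-bit "legal range" luma
for threshold in (0.02, 0.03):
    per_ev = 1 / math.log2(1 + threshold)      # codes needed per stop at that threshold
    print(f"{threshold:.0%}: at most {usable_codes / per_ev:.1f} stops")
# Rec709's actual curve is far from that ideal in the shadows, hence roughly
# 5 stops in practice.
```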
1/log2(1.03) ≈ 23 code values per EV, 1/log2(1.02) ≈ 35 code values per EV. By this metric, sRGB does pretty poorly, and it is possible to do MUCH better even with 8-bit encoding.
No one said that linear encoding is efficient in terms of only encoding what humans can perceive (or what is meaningful in terms of the underlying noise). Besides, I assume by sRGB you mean JPEG/sRGB (which has even less than 8-bit EDR). E.g. TIFF/sRGB comes in 16-bit flavours, which may be better than human vision across the range.
I thought I'd implied 8-bit with the rest of the sentence. :) For video, legal range luma makes things potentially even worse.
 
AHA - Found it! - https://downloads.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP309.pdf - they use 2% in their document, which gives just over 5 stops for Rec709 with 8-bit legal range luma.
Thx, interesting stuff. The 2% figure apparently comes from an old book:
  1. Schreiber, W. F., 1992. Fundamentals of Electronic Imaging Systems, Third Edition
Which probably has it from still older research. Mah.
 
Yeah. I've seen another more recent paper roughly re-confirming things in the 1-3ish percent range, including verifying that it does get more lenient when things are dim, but I have forgotten the Google search terms used to find it and have been unable to find it again.
 
Yup, another thing that isn't standardized, so it's only valid to compare PDRs with well-defined metrics that can either be readily converted/adjusted for, or that were all measured the same way. (For example, I trust Bill Claff's metric for the most part, with the caveat that he uses the camera manufacturer's ISO rating for his X axis, which can invalidate results when comparing across brands.)
Have you ever sent Bill images for a camera test? If you haven't, then you don't understand the testing or the results. BTW, human visual examination will find the faults with the testing :-) not the software.
 
