HDR standards - DR vs DRR (dynamic range resolution)

quadrox
Definitions:
DR - dynamic range - number of stops between darkest and brightest value
DRR - dynamic range resolution - smallest possible difference between values

Topic: It makes sense to say e.g. that a given sensor has 13 stops of dynamic range and we know both the DR and DRR - this works well for raw images (at least theoretically).

I have always assumed that e.g. an sRGB JPEG image also has a well-defined DR and DRR, meaning that an ideal monitor capable of displaying 8-bit sRGB images faithfully should always show the same number of stops between pure black and pure white, even if I increase the overall brightness of the image.

Now we are finally getting some image standards with higher bit depths, e.g. HDR10 and whatever else they are called. These sport 10 bits instead of only 8 bits, but where do these extra bits go? Do they extend the DR while the DRR stays the same? Do they affect both? Is this even well defined?

I know that even before HDR standards we had 10-bit monitors, but as far as I know these did not feature expanded DR as such, but instead offered more DRR and an expanded range of colors. Now, if we want to combine this approach of expanded color spaces with the idea of expanding the DR, don't we need more than 10 bits? Why is this not a thing? Indeed, why is the industry (monitors and image standards) lagging so far behind our ability to capture images with high fidelity?
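To make the two definitions above concrete, here is a minimal sketch (my own illustration, assuming an idealized, noise-free linear raw encoding) of how DR and DRR behave for an N-bit raw value:

```python
import math

def linear_dr_stops(bits):
    # stops from the smallest nonzero code (1) to the largest code: roughly one stop per bit
    return math.log2(2 ** bits - 1)

def linear_drr(level_code):
    # one code step, expressed relative to the current level
    return 1 / level_code

print(linear_dr_stops(14))               # ~14 stops for an idealized 14-bit linear raw value
print(linear_drr(16), linear_drr(8192))  # the relative step is coarse in the shadows, fine in the highlights
```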
 
Definitions:
DR - dynamic range - number of stops between darkest and brightest value
Typically, the darkest resolved level is taken to be the noise floor. With white noise, it is spatial frequency independent, which is good.
DRR - dynamic range resolution - smallest possible difference between values
I have not seen DRR before. You are talking about noise and quantization errors, I guess. Quantization errors matter mostly near the deep bottom of the range; and very often, they do not matter at all because the noise is much stronger.
Topic: It makes sense to say e.g. that a given sensor has 13 stops of dynamic range and we know both the DR and DRR - this works well for raw images (at least theoretically).

I have always assumed that e.g. an sRGB JPEG image also has a well defined DR and DRR, meaning an ideal monitor that is capable of displaying 8bit sRGB images faithfully should always have the same number of stops between pure black and pure white, even if I may increase the overall brightness of the image.
The number of stops between pure black and anything that is not pure black is infinite. Next, 8-bit JPEGs have the capacity to encode around 11 stops per Wikipedia (easy to compute, but I have not done it) because of the nonlinear encoding. Once it gets displayed on your monitor, it creates an image with a lower DR, indeed, which may or may not correspond to the tonality of the original scene.
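For what it's worth, that computation can be sketched in a few lines (assuming the standard sRGB transfer function, taking the smallest nonzero 8-bit code as the darkest value, and ignoring noise and visibility thresholds):

```python
import math

def srgb_to_linear(c):
    # standard sRGB decoding for a normalized code value in [0, 1]
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

darkest = srgb_to_linear(1 / 255)      # smallest nonzero 8-bit code
brightest = srgb_to_linear(255 / 255)
print(math.log2(brightest / darkest))  # ~11.7 stops between code 1 and code 255
```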
Now we are finally getting some image standards with more bit-depths, e.g. HDR10 and whatever else they are called. These sport 10 bits instead of only 8 bits, but where do these extra bits go? Do they extend the DR, and the DRR stays the same? Do they affect both? Is this even well defined?
I am not familiar with the HDR10 specs, but I own an iPad which can play it. It only does so for specially encoded images or videos. What I see is the ability to display very bright highlights as part of an otherwise "normally" rendered image. So roughly speaking, it offers extended highlight room without messing up the midtones and without raising the overall brightness. I find it to be a gimmick since I cannot use it for my photos. Canon has some HDR support, I believe, but I was never able to convert a manually created HDR to this format, so I gave up.
I know that even before HDR standards we have had 10bit monitors, but as far as I know these did not feature expanded DR as such, but instead offered more DRR and expanded range for colors. Now, if we want to combine this approach of expanded color spaces with the idea of expanding the DR, don't we need more than 10bits? Why is this not a thing? Indeed, why is the industry (monitors and image standards) lagging so far behind our ability to capture images with high fidelity?
 
Definitions:
DR - dynamic range - number of stops between darkest and brightest value
DRR - dynamic range resolution - smallest possible difference between values

Topic: It makes sense to say e.g. that a given sensor has 13 stops of dynamic range and we know both the DR and DRR - this works well for raw images (at least theoretically).

I have always assumed that e.g. an sRGB JPEG image also has a well defined DR and DRR, meaning an ideal monitor that is capable of displaying 8bit sRGB images faithfully should always have the same number of stops between pure black and pure white, even if I may increase the overall brightness of the image.

Now we are finally getting some image standards with more bit-depths, e.g. HDR10 and whatever else they are called. These sport 10 bits instead of only 8 bits, but where do these extra bits go? Do they extend the DR, and the DRR stays the same? Do they affect both? Is this even well defined?

I know that even before HDR standards we have had 10bit monitors, but as far as I know these did not feature expanded DR as such, but instead offered more DRR and expanded range for colors. Now, if we want to combine this approach of expanded color spaces with the idea of expanding the DR, don't we need more than 10bits? Why is this not a thing? Indeed, why is the industry (monitors and image standards) lagging so far behind our ability to capture images with high fidelity?
The issue in the past was the display

sRGB at the display level has a contrast ratio of 1000:1; this is related to gamma encoding.

An HDR display has over 16 stops depending on the standard, so it makes sense to use HEIF images, which support 10-bit encoding and can carry HDR images.
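A quick arithmetic check of the contrast-ratio figures above (the 0.01-1000 nit HDR range below is an illustrative assumption, not a quoted spec):

```python
import math

def stops(contrast_ratio):
    # dynamic range in stops implied by a display contrast ratio
    return math.log2(contrast_ratio)

print(stops(1000))         # ~9.97 stops for the 1000:1 SDR figure above
print(stops(1000 / 0.01))  # ~16.6 stops for an illustrative 0.01-1000 nit HDR panel
```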

Your old JPEG images are tone mapped; they still cover less than 10 stops on the display.

HEIF is a great format; it was about time.
 
Definitions:
DR - dynamic range - number of stops between darkest and brightest value
DRR - dynamic range resolution - smallest possible difference between values

Topic: It makes sense to say e.g. that a given sensor has 13 stops of dynamic range and we know both the DR and DRR - this works well for raw images (at least theoretically).

I have always assumed that e.g. an sRGB JPEG image also has a well defined DR and DRR, meaning an ideal monitor that is capable of displaying 8bit sRGB images faithfully should always have the same number of stops between pure black and pure white, even if I may increase the overall brightness of the image.

Now we are finally getting some image standards with more bit-depths, e.g. HDR10 and whatever else they are called. These sport 10 bits instead of only 8 bits, but where do these extra bits go? Do they extend the DR, and the DRR stays the same? Do they affect both? Is this even well defined?

I know that even before HDR standards we have had 10bit monitors, but as far as I know these did not feature expanded DR as such, but instead offered more DRR and expanded range for colors. Now, if we want to combine this approach of expanded color spaces with the idea of expanding the DR, don't we need more than 10bits? Why is this not a thing? Indeed, why is the industry (monitors and image standards) lagging so far behind our ability to capture images with high fidelity?
I've never seen the term DRR used before, but it *does* factor into some definitions of dynamic range - just not described using that specific term.

Some definitions of dynamic range take it to be the number of stops between the brightest value and the darkest "usable" value, with "usable" defined by the human banding perception threshold. This is the definition the BBC used to state that the usable dynamic range of Rec. 709 video is only 5.27 stops - see the end of section 2 of http://downloads.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP309.pdf
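For illustration, here is a simplified sketch of the quantity the whitepaper compares against the Schreiber limit - the relative luminance step caused by one code value. It is not the paper's exact method (I assume full-range codes and a pure 2.2 encoding gamma, and ignore the linear toe and the luminance-dependent threshold):

```python
GAMMA_ENCODE = 1 / 2.2  # simplified power-law encoding, standing in for the Rec. 709 OETF

def weber_fraction(code, bits):
    # relative luminance jump between adjacent code values after decoding
    n = 2 ** bits - 1
    l0 = (code / n) ** (1 / GAMMA_ENCODE)
    l1 = ((code + 1) / n) ** (1 / GAMMA_ENCODE)
    return (l1 - l0) / l0

for frac in (0.05, 0.1, 0.25, 0.5, 0.9):
    print(frac, weber_fraction(int(frac * 255), 8), weber_fraction(int(frac * 1023), 10))
# The step is ~4x smaller at 10 bits, so it stays below the visibility threshold over a
# wider range of levels, which is what extends the "usable" dynamic range in this sense.
```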

Edit: Actually, in that case, DRR as you define it (smallest possible delta) isn't relevant. sRGB has quite high perceptual DRR, as it unnecessarily wastes lots of code values near the upper end of the transfer function, encoding luminance deltas that are not perceptible.

In the case of HDR10 and HLG (both of which are 10-bit systems that use a logarithmic transfer function for at least part of the range), the extra bits go into DRR, which in turn extends DR as the BBC defines it in the whitepaper above.

Interestingly, in the case of HLG, the usable dynamic range is increased even when recording 8-bit video, as the Rec. 709 and sRGB gamma curves waste a lot of code values at the top end of the range.

This whole thing gets muddied when you start talking about encoding a scene into an image - as nearly all cameras and RAW processing software will apply an S-curve prior to gamma encoding to compress the dynamic range of the scene. This S-curve happens to translate to more efficient use of 8-bit code values near the upper end of the range. In normal SDR displays, this S-curve is not "undone" for display, while with HLG on a compatible display, it is.

Also muddying the waters further is chrominance resolution - a common complaint regarding Sony's S-Gamut/S-Gamut3 is that they are too wide for 8-bit recording, leading to noticeable quantization errors.

--
Context is key. If I have quoted someone else's post when replying, please do not reply to something I say without reading text that I have quoted, and understanding the reason the quote function exists.
 
Definitions:
DR - dynamic range - number of stops between darkest and brightest value
Darkest value is zero. Log(0)=?

So it is better to define DR as the ratio of the maximum acceptable signal to the minimum acceptable signal in a given context

https://www.strollswithmydog.com/engineering-dynamic-range-photography/

So what's a minimum acceptable signal in photography? Say SNR = x. Next question, how should you encode it for efficiency of storage/speed? Lossy or lossless?
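A rough sketch of that definition, with illustrative numbers of my own for the full well, read noise, and the SNR threshold:

```python
import math

FULL_WELL = 50000   # e-, illustrative
READ_NOISE = 3.0    # e- RMS, illustrative
MIN_SNR = 20        # the "acceptable" threshold is a per-context choice

def snr(signal_e):
    # photon shot noise and read noise added in quadrature
    return signal_e / math.sqrt(signal_e + READ_NOISE ** 2)

# smallest signal that still meets the chosen SNR threshold (coarse search)
min_signal = next(s for s in range(1, FULL_WELL) if snr(s) >= MIN_SNR)
print(math.log2(FULL_WELL / min_signal))  # engineering DR in stops for this threshold
```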

10-bit sub-encodings today are lossy. What encoding/gamma function do you prefer?

Therein lies the answer to your question.

Jack
 
Definitions:
DR - dynamic range - number of stops between darkest and brightest value
DRR - dynamic range resolution - smallest possible difference between values

Topic: It makes sense to say e.g. that a given sensor has 13 stops of dynamic range and we know both the DR and DRR - this works well for raw images (at least theoretically).

I have always assumed that e.g. an sRGB JPEG image also has a well defined DR and DRR, meaning an ideal monitor that is capable of displaying 8bit sRGB images faithfully should always have the same number of stops between pure black and pure white, even if I may increase the overall brightness of the image.

Now we are finally getting some image standards with more bit-depths, e.g. HDR10 and whatever else they are called. These sport 10 bits instead of only 8 bits, but where do these extra bits go? Do they extend the DR, and the DRR stays the same? Do they affect both? Is this even well defined?

I know that even before HDR standards we have had 10bit monitors, but as far as I know these did not feature expanded DR as such, but instead offered more DRR and expanded range for colors. Now, if we want to combine this approach of expanded color spaces with the idea of expanding the DR, don't we need more than 10bits? Why is this not a thing? Indeed, why is the industry (monitors and image standards) lagging so far behind our ability to capture images with high fidelity?
I've never seen the term DRR used before, but it *does* factor into some definitions of dynamic range - just not described using that specific term.

Some definitions of dynamic range take it to be the number of stops between the brightest value and the darkest "usable" value, with "usable" defined by the human banding perception threshold. This is the definition the BBC used to state that the usable dynamic range of Rec. 709 video is only 5.27 stops - see the end of section 2 of http://downloads.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP309.pdf

Edit: Actually, in that case, DRR as you define it (smallest possible delta) isn't relevant. sRGB has quite high perceptual DRR as it unnecessarily wastes lots of code values near the upper end of the transfer function encoding deltas in luminance that are not perceptible.

In the case of HDR10 and HLG (both of which are 10-bit systems that use a logarithmic transfer function for at least part of the range), the extra bits go into DRR, which extends the BBC's definition of DR used in the whitepaper above

Interestingly, in the case of HLG, the usable dynamic range is increased even when recording 8-bit video, as the Rec. 709 and sRGB gamma curves waste a lot of code values at the top end of the range.

This whole thing gets muddied when you start talking about encoding a scene into an image - as nearly all cameras and RAW processing software will apply an S-curve prior to gamma encoding to compress the dynamic range of the scene. This S-curve happens to translate to more efficient use of 8-bit code values near the upper end of the range. In normal SDR displays, this S-curve is not "undone" for display, while with HLG on a compatible display, it is.

Also muddying the waters further is chrominance resolution - a common complaint regarding Sony's S-Gamut/S-Gamut3 is that they are too wide for 8-bit recording, leading to noticeable quantization errors.
Don't confuse display capability with the capability of the standard itself.

You can get a 1000:1 contrast ratio out of a TV, and the same from a JPEG on a good screen.

Gamma encoding will ensure the 9.96 stops are there.

What the camera can or cannot do is actually irrelevant, as the display will only show what it can, and the JPEG is made for an SDR display.
It's pretty clear that you did not bother to read what I wrote, nor did you bother to read the BBC whitepaper that I linked to, since it quite clearly derived that sRGB/Rec709 gamma encoding with only 8 bits leads to only 5.27 stops of dynamic range where the Weber fraction is below the Schreiber limit for human banding perception, regardless of display or camera capability.
 
Definitions:
DR - dynamic range - number of stops between darkest and brightest value
DRR - dynamic range resolution - smallest possible difference between values

Topic: It makes sense to say e.g. that a given sensor has 13 stops of dynamic range and we know both the DR and DRR - this works well for raw images (at least theoretically).

I have always assumed that e.g. an sRGB JPEG image also has a well defined DR and DRR, meaning an ideal monitor that is capable of displaying 8bit sRGB images faithfully should always have the same number of stops between pure black and pure white, even if I may increase the overall brightness of the image.

Now we are finally getting some image standards with more bit-depths, e.g. HDR10 and whatever else they are called. These sport 10 bits instead of only 8 bits, but where do these extra bits go? Do they extend the DR, and the DRR stays the same? Do they affect both? Is this even well defined?

I know that even before HDR standards we have had 10bit monitors, but as far as I know these did not feature expanded DR as such, but instead offered more DRR and expanded range for colors. Now, if we want to combine this approach of expanded color spaces with the idea of expanding the DR, don't we need more than 10bits? Why is this not a thing? Indeed, why is the industry (monitors and image standards) lagging so far behind our ability to capture images with high fidelity?
I've never seen the term DRR used before, but it *does* factor into some definitions of dynamic range - just not described using that specific term.

Some definitions of dynamic range take it to be the number of stops between the brightest value and the darkest "usable" value, with "usable" defined by the human banding perception threshold. This is the definition the BBC used to state that the usable dynamic range of Rec. 709 video is only 5.27 stops - see the end of section 2 of http://downloads.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP309.pdf

Edit: Actually, in that case, DRR as you define it (smallest possible delta) isn't relevant. sRGB has quite high perceptual DRR as it unnecessarily wastes lots of code values near the upper end of the transfer function encoding deltas in luminance that are not perceptible.

In the case of HDR10 and HLG (both of which are 10-bit systems that use a logarithmic transfer function for at least part of the range), the extra bits go into DRR, which extends the BBC's definition of DR used in the whitepaper above

Interestingly, in the case of HLG, the usable dynamic range is increased even when recording 8-bit video, as the Rec. 709 and sRGB gamma curves waste a lot of code values at the top end of the range.

This whole thing gets muddied when you start talking about encoding a scene into an image - as nearly all cameras and RAW processing software will apply an S-curve prior to gamma encoding to compress the dynamic range of the scene. This S-curve happens to translate to more efficient use of 8-bit code values near the upper end of the range. In normal SDR displays, this S-curve is not "undone" for display, while with HLG on a compatible display, it is.

Also muddying the waters further is chrominance resolution - a common complaint regarding Sony's S-Gamut/S-Gamut3 is that they are too wide for 8-bit recording, leading to noticeable quantization errors.
Don't confuse display capability with the capability of the standard itself.

You can get a 1000:1 contrast ratio out of a TV, and the same from a JPEG on a good screen.

Gamma encoding will ensure the 9.96 stops are there.

What the camera can or cannot do is actually irrelevant, as the display will only show what it can, and the JPEG is made for an SDR display.
It's pretty clear that you did not bother to read what I wrote, nor did you bother to read the BBC whitepaper that I linked to, since it quite clearly derived that sRGB/Rec709 gamma encoding with only 8 bits leads to only 5.27 stops of dynamic range where the Weber fraction is below the Schreiber limit for human banding perception, regardless of display or camera capability.
Gamma encoding for Rec709 manages 10 stops of dynamic range regardless of 8 or 10 bits, as that is linked to the display characteristics: minimum black 0.1 nits, maximum 100 nits, contrast ratio 1000:1.

Standard dynamic range consumer television (8 bit video, e.g. DVD, SD and HD DVB) only supports about 6 stops of dynamic range, as discussed below. Professional SDR video (10 bits) supports about 10 stops.


The issue with that article is the definition of usable dynamic range versus real display technology that is not CRT. For years we have had TVs with deep blacks, and most of the dynamic range is in the darks, not in the highlights, so the idea that a current TV only resolves down to 1 nit is prehistoric; all of the HDR-capable displays have better blacks and as a result have more than 100:1 contrast ratio.

It is true that you may eventually see banding in some cases, but it is really rare, and the concept of 5.27 stops of DR is pure scaremongering to push people to HDR; no display has a contrast ratio of 38:1. It did not really work; SDR content, when well produced, is fine.

The 5% idea presented in that paper assumes some really basic display, even worse than a CRT, which has 2%; in reality displays are way lower than that, regardless of HDR or SDR input.

The other interesting part is that later in the paper it goes on to say that a display with 0.01 nits minimum and 2000 nits maximum has a DR of 17.6 stops, so you don't see banding. However, the point is that such a display does not exist; the majority have very deep blacks but are not necessarily very bright.

My TV has 700 nits peak but pretty much total black. It is the total black that makes the DR go up according to this logic, but actually real blacks also show in 8 bits with no banding of any sort, until a very, very pushed clip comes in.

So if the display is capable of 0.01 nits, it will also be capable of the same 0.01 nits in SDR, so the contrast ratio is actually 10000:1; does that mean SDR now has 13.28 stops?

Of course not; the logic is twisted to fit the case the author wants to push forward. The formula depends on the bit depth but also on the threshold, and that depends on the characteristics of the display and the quality of the blacks.

In fact a poor display that takes 10-bit input will show posterisation sooner than a very good display with very good blacks at 8-bit input.

The BBC wanted to make the case for HLG, but HLG is the poor man's HDR, based on knee compression, a concept that has been around for years. Traditionally the knee kicks in at 80-90%; in the HLG implementation it starts at 40-50%, which is not that easy to handle, and yet it does not solve the issue of the football stadium half in strong light and half in the shade. If you watched a game in HLG you would see that it still looks rough.

In the end HLG has not picked up in broadcasting, and all online streaming uses Dolby Vision or HDR10, which are based on a real logarithmic transfer function.
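For reference, these are the two transfer functions being contrasted, as published in ITU-R BT.2100 (HLG) and SMPTE ST 2084 (PQ, the curve behind HDR10 and Dolby Vision); the code itself is only an illustrative sketch:

```python
import math

def hlg_oetf(e):
    # BT.2100 HLG: scene-linear light E in [0, 1] -> non-linear signal in [0, 1]
    a, b, c = 0.17883277, 0.28466892, 0.55991073
    return math.sqrt(3 * e) if e <= 1 / 12 else a * math.log(12 * e - b) + c

def pq_inverse_eotf(nits):
    # SMPTE ST 2084 PQ: absolute luminance in cd/m^2 (0..10000) -> signal in [0, 1]
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = (nits / 10000) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

print(hlg_oetf(1 / 12))      # 0.5: scene levels above this get the top half of the code range
print(pq_inverse_eotf(100))  # ~0.51: an SDR-level 100 nits sits near mid-code in PQ
```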

--
instagram http://instagram.com/interceptor121
My flickr sets http://www.flickr.com/photos/interceptor121/
Youtube channel http://www.youtube.com/interceptor121
Underwater Photo and Video Blog http://interceptor121.com
Deer Photography workshops https://interceptor121.com/2021/09/26/2021-22-deer-photography-workshops-in-woburn/
If you want to get in touch don't send me a PM rather contact me directly at my website/social media
 
sRGB at display level has a contrast ratio of 1000:1
How is the input signal coded, and what is SNR of that input signal? What is the quantization error? How do you measure the contrast ratio?
this is related to gamma encoding
Displays reverse the gamma for a natural look. Nature is linear. Gamma is used to perform lossy compression.
heif is a great format it was about time
No, at least not until substandard HEIC codings are phased out and compatibility issues are solved.

HEIC may turn out not as good as you think :)))

--
http://www.libraw.org/
 
sRGB at display level has a contrast ratio of 1000:1
How is the input signal coded, and what is SNR of that input signal? What is the quantization error? How do you measure the contrast ratio?
this is related to gamma encoding
Displays reverse the gamma for a natural look. Nature is linear. Gamma is used to perform lossy compression.
heif is a great format it was about time
No, at least not until substandard HEIC codings are phased out and compatibility issues are solved.

HEIC may turn out not as good as you think :)))
I am not sure what the point of those observations is.

You are mixing up the quality of the input with the quality of the output.

JPEG images are 8-bit sRGB so they can be displayed. They are not linear either.

The envelope of HEIF is based on superior compression and higher bit depth in a larger color space.

So at an identical starting point HEIF will do better, with smaller files and less quantisation error.

If you have a device that can take both, just try it.

And of course there are good and bad implementations and variants, as happened with JPEG, but it is a step forward.

 
It's pretty clear that you did not bother to read what I wrote, nor did you bother to read the BBC whitepaper that I linked to, since it quite clearly derived that sRGB/Rec709 gamma encoding with only 8 bits leads to only 5.27 stops of dynamic range where the Weber fraction is below the Schreiber limit for human banding perception, regardless of display or camera capability.
It's true that you might be able to see this banding when displaying an artificially synthesized step wedge. However, you don't see these discrete levels when displaying a camera image because of the photon shot noise
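A minimal sketch of that effect (my own illustration: Gaussian noise of about one code step added before 8-bit quantization acts as dither):

```python
import random

def quantize(x, bits=8):
    n = 2 ** bits - 1
    return round(x * n) / n

true_level = 128.5 / 255      # a luminance halfway between two adjacent 8-bit codes
clean = quantize(true_level)  # snaps to a single code value: visible banding in a smooth gradient
noisy = sum(quantize(true_level + random.gauss(0, 1 / 255)) for _ in range(10000)) / 10000
print(true_level, clean, noisy)  # the noisy average lands back near the true level
```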
 
It's pretty clear that you did not bother to read what I wrote, nor did you bother to read the BBC whitepaper that I linked to, since it quite clearly derived that sRGB/Rec709 gamma encoding with only 8 bits leads to only 5.27 stops of dynamic range where the Weber fraction is below the Schreiber limit for human banding perception, regardless of display or camera capability.
It's true that you might be able to see this banding when displaying an artificially synthesized step wedge. However, you don't see these discrete levels when displaying a camera image because of the photon shot noise
The BBC white paper is an attempt to increase adoption of HDR, based on the initial position that the threshold is 5%, which is false for any decent display out there.

On the photography side people are quite happy to argue that JPEG has more than 10 stops of dynamic range, except JPEGs are encoded in sRGB, which has the same gamut and gamma as Rec709 HDTV.

In fact TVs are darker than a standard office monitor with poor blacks.

It is quite funny how things can shift for convenience, but actually on a good monitor you will quite easily have a contrast ratio of 1000:1, with a peak brightness of 100 nits and a minimum of 0.1 nits, without having to spend thousands, and be able to see a nice JPEG as well as nice SDR video, which for all intents and purposes are the same thing except for a few bits reserved for the handling of video.

And then dithering happens with the help of nature; you are not doing computer graphics and composing gradients.
 
Definitions:
DR - dynamic range - number of stops between darkest and brightest value
DRR - dynamic range resolution - smallest possible difference between values

Topic: It makes sense to say e.g. that a given sensor has 13 stops of dynamic range and we know both the DR and DRR - this works well for raw images (at least theoretically).

I have always assumed that e.g. an sRGB JPEG image also has a well defined DR and DRR, meaning an ideal monitor that is capable of displaying 8bit sRGB images faithfully should always have the same number of stops between pure black and pure white, even if I may increase the overall brightness of the image.

Now we are finally getting some image standards with more bit-depths, e.g. HDR10 and whatever else they are called. These sport 10 bits instead of only 8 bits, but where do these extra bits go? Do they extend the DR, and the DRR stays the same? Do they affect both? Is this even well defined?

I know that even before HDR standards we have had 10bit monitors, but as far as I know these did not feature expanded DR as such, but instead offered more DRR and expanded range for colors. Now, if we want to combine this approach of expanded color spaces with the idea of expanding the DR, don't we need more than 10bits? Why is this not a thing? Indeed, why is the industry (monitors and image standards) lagging so far behind our ability to capture images with high fidelity?
That's actually not true

Displays are more capable than consumer cameras

Consumer cameras really resolve less than 30 bits of colour and do not have 14 stops of DR when you look at pixels 1:1

TVs commercially available have in excess of 16 stops of DR and support 10-bit 4:2:2 subsampling.

The issue that you have is that camera colour spaces do not fit easily in a display colour space, so you may have extra information that does not fit when you conform it.

Currently the limitation in photography is JPEG, which is an old format based on the sRGB color space and Rec709 gamma. This is used for display and printing and is smaller than what a camera and a display can offer.

If you keep the same Rec709 color space but have additional bit depth, you increase gradations but not dynamic range. So ultimately 10-bit video is good for grading; it does not necessarily make any difference for consumption.

If you have a 10-bit image file that supports a wider gamut and a different gamma than sRGB, you are able to display HDR images in HLG or HDR10, and you will see that actually the limit is the camera, not the display, as long as you have a good display.
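A small sketch of the "more bits means finer gradations, not more dynamic range" point, under illustrative assumptions (a 0.1-100 nit SDR display and a pure 2.2 display gamma):

```python
import math

BLACK, WHITE, GAMMA = 0.1, 100.0, 2.2  # assumed display black/white points (nits) and gamma

def level(code, bits):
    # displayed luminance (nits) for a code value on this display
    v = (code / (2 ** bits - 1)) ** GAMMA
    return BLACK + (WHITE - BLACK) * v

for bits in (8, 10):
    n = 2 ** bits - 1
    dr = math.log2(level(n, bits) / level(0, bits))       # same ~9.97 stops either way
    step = level(n // 2 + 1, bits) - level(n // 2, bits)  # mid-range step in nits, ~4x finer at 10 bits
    print(bits, round(dr, 2), round(step, 4))
```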
 
Consumer cameras really resolve less than 30 bits of colour and do not have 14 stops of DR when you look at pixels 1:1
30 bits of colour? It's not clear to me what you mean by this.
 
Consumer cameras really resolve less than 30 bits of colour and do not have 14 stops of DR when you look at pixels 1:1
30 bits of colour? It's not clear to me what you mean by this.
displays are RGB so each pixel has bit depth x number of channels. 10x3=30

If you look at your display properties you will see either 24 or 30 bits; you will not see 8 or 10 bits.

When you look at colors you are looking at a formed image in an RGB space, not a raw file, so the measure goes up by a factor of 3.

A camera will not necessarily resolve the 3 channels equally well.

 
Consumer cameras really resolve less than 30 bits of colour and do not have 14 stops of DR when you look at pixels 1:1
30 bits of colour? It's not clear to me what you mean by this.
displays are RGB so each pixel has bit depth x number of channels. 10x3=30

If you look on our display properties you will see either 24 or 30 bits you will not see 8 or 10 bits

When you look at colors you are looking at a formed image that you can see in an RGB space not a raw file so the measure goes up by a factor of 3

Not necessarily a camera will resolve the 3 channels equally well
Sure, I understand what you're saying about displays but you said consumer cameras resolve less than 30 bits of colour, which is what I don't fully understand.
 
Consumer cameras really resolve less than 30 bits of colour and do not have 14 stops of DR when you look at pixels 1:1
30 bits of colour? It's not clear to me what you mean by this.
displays are RGB so each pixel has bit depth x number of channels. 10x3=30

If you look on our display properties you will see either 24 or 30 bits you will not see 8 or 10 bits

When you look at colors you are looking at a formed image that you can see in an RGB space not a raw file so the measure goes up by a factor of 3

Not necessarily a camera will resolve the 3 channels equally well
Sure, I understand what you're saying about displays but you said consumer cameras resolve less than 30 bits of colour, which is what I don't fully understand.
Not many measures of color sensitivity are actually made

When you look at who makes them (DxOMark) you can see that no camera actually exceeds 27 bits, and only some APS-C models get over 24 bits.

Even full-frame cameras may have pretty low color sensitivity.

I have shot HDR for a few years and I can confirm that when you look at an HDR image most of the difference is in the brightness; some colors are a bit more vibrant, but it is not as overwhelming as you may think.

Which in turn means that 10 bits is pretty good at containing what a camera can deliver, although of course the camera may deliver colors that fall outside the display gamut even if the display gamut is overall wider.

There is a reason pictures have been JPEG for a long time; in effect, even when you look at a wider gamut and higher DR you will not necessarily see benefits on all coordinates, but mostly in brightness and tonal range, essentially the monochrome part of your image.
 
sRGB at display level has a contrast ratio of 1000:1
How is the input signal coded, and what is SNR of that input signal? What is the quantization error? How do you measure the contrast ratio?
this is related to gamma encoding
Displays reverse the gamma for a natural look. Nature is linear. Gamma is used to perform lossy compression.
heif is a great format it was about time
No, not at least until substandard HEIC codings are phased out and compatibility issues are solved.

HEIC may turn out not as good as you think :)))
I am not sure what the point of those observations is.

You are mixing up the quality of the input with the quality of the output.
Are you sure you know the topic well enough to make statements instead of asking questions?

 
It's pretty clear that you did not bother to read what I wrote, nor did you bother to read the BBC whitepaper that I linked to, since it quite clearly derived that sRGB/Rec709 gamma encoding with only 8 bits leads to only 5.27 stops of dynamic range where the Weber fraction is below the Schreiber limit for human banding perception, regardless of display or camera capability.
It's true that you might be able to see this banding when displaying an artificially synthesized step wedge. However, you don't see these discrete levels when displaying a camera image because of the photon shot noise
As I mentioned in my earlier post, everything gets muddied by the fact that most cameras compress their dynamic range with an S-curve, so that the reduced capabilities of an 8-bit sRGB display are mitigated.

Almost no camera on the market uses a standard sRGB transfer function for JPEG output or vanilla Rec. 709 transfer function for SDR video output - they all assume that the display will be limited in capability and tonemap appropriately with an S-curve.

HLG, meanwhile, does not have a built-in S-curve. When acting in its "fallback" mode (displayed on a regular Rec. 709 display), the logarithmic transfer function does behave similarly to the knee of an S-curve. On a fully HLG-capable display, it's impressive what can be done with only 8-bit video and no further tonemapping.

(Sadly, HLG's fallback mode isn't *that* great since it lacks the "toe" of many S-curves, and displaying Rec. 2020 content on a Rec. 709 display without appropriate gamut mapping - which is what WILL happen in a "backwards compatibility fallback" scenario - is not pretty.)
 
It's pretty clear that you did not bother to read what I wrote, nor did you bother to read the BBC whitepaper that I linked to, since it quite clearly derived that sRGB/Rec709 gamma encoding with only 8 bits leads to only 5.27 stops of dynamic range where the Weber fraction is below the Schreiber limit for human banding perception, regardless of display or camera capability.
It's true that you might be able to see this banding when displaying an artificially synthesized step wedge. However, you don't see these discrete levels when displaying a camera image because of the photon shot noise
The BBC white paper is an attempt to increase adoption of HDR, based on the initial position that the threshold is 5%, which is false for any decent display out there.
Based on what? You could have a display with infinite dynamic range - it doesn't matter if you feed it 8-bit Rec. 709, based on the concept of "Garbage in, garbage out".
 
