DR and bit depth.

Great Bustard

Am I correct in thinking that the DR (dynamic range) is the number of stops from the noise floor to full saturation and the bit depth is the number of distinguishable steps within the DR?

For example, let's say we record a photo with 14 bits/pixel and the DR is 12 stops. Does this mean that the two lowest bits are noise and the 12 upper bits (bit depth) are steps, resulting in 2^12 = 4096 distinguishable steps within the DR? Also, what is the tonal range? Am I confusing bit depth with tonal range, or, more likely, just confused, period?
 
Am I correct in thinking that the DR (dynamic range) is the number of stops from the noise floor to full saturation
This is how it is defined, at some reference size; not necessarily what it actually is. Also, there are three channels...
and the bit depth is the number of distinguishable steps within the DR?
No, it is the number of bits, in its usual meaning.
For example, let's say we record a photo with 14 bits/pixel and the DR is 12 stops. Does this mean that the two lowest bits are noise and the 12 upper bits (bit depth) are steps, resulting in 2^12 = 4096 distinguishable steps within the DR?
Not quite. First, noise at pixel level and noise at a reference size are different things. Next, the "noise floor" as a threshold of distinguishable signal is a somewhat arbitrary choice. I have posted an image before with "features" about two stops below the noise floor which are still distinguishable.
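As a quick illustration of that last point (all numbers hypothetical): a feature two stops below the per-pixel noise floor is still easy to pull out of a patch of many pixels, because averaging beats the noise down.

```python
import numpy as np

rng = np.random.default_rng(0)

noise = 1.0      # per-pixel noise floor (arbitrary units)
signal = 0.25    # a "feature" two stops below the noise floor
n = 10_000       # pixels covered by the feature at a large viewing size

patch = signal + rng.normal(0.0, noise, n)

# Per-pixel SNR is only 0.25, but the SNR of the patch mean is
# signal / (noise / sqrt(n)) = 0.25 * 100 = 25: easily distinguishable.
print(f"measured mean = {patch.mean():.3f} (true value 0.25)")
```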
Also, what is the tonal range?
Whatever you want it to be. "Range" usually refers to the set of all possible values (of a map, for example). DXOmark define it as the number of distinguishable gray levels, without saying how we determine when they are distinguishable. They also have a related definition of color sensitivity, which suggests that the criterion for distinguishability is the noise level.
Am I confusing bit depth with tonal range, or, more likely, just confused, period?
 
Am I correct in thinking that the DR (dynamic range) is the number of stops from the noise floor to full saturation and the bit depth is the number of distinguishable steps within the DR?

For example, let's say we record a photo with 14 bits/pixel and the DR is 12 stops. Does this mean that the two lowest bits are noise and the 12 upper bits (bit depth) are steps, resulting in 2^12 = 4096 distinguishable steps within the DR? Also, what is the tonal range? Am I confusing bit depth with tonal range, or, more likely, just confused, period?
For those more scientifically inclined, we can define sensor DR as the range of exposure over which the exposure-referred SNR* rises from unity, grows, and then, when the image saturates, drops like a rock back down to unity again.

On a practical basis this is about the same range of light described by GB above.

*exposure-referred SNR or SNR-H, is the signal measured in light (H), and the noise measured in equivalent light. I usually use # photons for measuring light.
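A sketch of that definition, assuming a simple shot-plus-read-noise model with hypothetical sensor numbers (4 e- read noise, 60,000 e- full well):

```python
import numpy as np

read_noise = 4.0      # e-, hypothetical
full_well = 60_000.0  # e-, hypothetical

# Exposure-referred SNR for a mean signal h (in photons/electrons):
# shot noise sqrt(h) and read noise added in quadrature.
def snr(h):
    return h / np.sqrt(h + read_noise**2)

# SNR = 1 when h**2 = h + read_noise**2; solve the quadratic for the floor.
h_floor = (1 + np.sqrt(1 + 4 * read_noise**2)) / 2

print(f"noise floor ~ {h_floor:.1f} e-  (SNR check: {snr(h_floor):.2f})")
print(f"DR ~ {np.log2(full_well / h_floor):.1f} stops")
# Above full_well the signal clips while the noise terms remain,
# so SNR drops like a rock, closing the range at the top.
```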
 
Am I correct in thinking that the DR (dynamic range) is the number of stops from the noise floor to full saturation
Yes, or rather it is the log2 of the ratio of the saturation level to the SNR=1 level (the noise floor, if you like).

EV = log2(Ssat/Sfloor)
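In code form, using the noise-floor values from the worked examples below:

```python
from math import log2

def dr_ev(s_sat, s_floor):
    """DR in EV: log2 of the saturation level over the SNR=1 level."""
    return log2(s_sat / s_floor)

print(dr_ev(16383, 0.5))  # ~15.0 EV, as in the 14-bit example below
print(dr_ev(16383, 2.5))  # ~12.7 EV
```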
and the bit depth is the number of distinguishable steps within the DR?
Number of quantised steps. They are not distinguishable if noise is larger than the quantisation step.
For example, let's say we record a photo with 14 bits/pixel and the DR is 12 stops. Does this mean that the two lowest bits are noise and the 12 upper bits (bit depth) are steps, resulting in 2^12 = 4096 distinguishable steps within the DR?
There are no 'steps' in that sense unless the data is noiseless. See below...
Also, what is the tonal range? Am I confusing bit depth with tonal range, or, more likely, just confused, period?
The steps in real data (multiple pixels) are not quantised; they are dithered by noise. It's only the individual pixels that are quantised.

Bit depth defines the quantisation level of the signal in each pixel. Signal is quantised (there are only 14 bits or 16384 levels per pixel).

Noise is an average so it is not quantised. An average of many quantised pixels can be a fraction.

Because Sfloor is defined by noise, it is not quantised.

So DR can be 12.7 EV or 15 EV, even with a 14-bit ADC. This is because SNR=1 can theoretically occur at average signals of <1 AD unit.

log2(16383/0.5) = 15 EV, so it implies the average signal at the noise floor is 0.5 AD units.

All this means is that the average pixel value is 0.5, so pixels themselves would mostly be 0 and 1, with some outliers at 2 or even 3.

log2(16383/2.5) = 12.7 EV, so the average signal at the noise floor is 2.5 AD units.

So, if the DR on a 14-bit ADC is 12.7 EV, it means that we reach the noise floor 12.7 stops below saturation - irrespective of the bit depth. So yes, the other stops/bits are garbage.
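A quick simulation shows how an average signal below one AD unit is still meaningful (numbers hypothetical; a black-level offset is assumed so negative noise excursions aren't clipped): each pixel is an integer, yet the average across pixels is a clean fraction.

```python
import numpy as np

rng = np.random.default_rng(1)

signal = 0.5   # ADU: mean signal at the noise floor, half an LSB
noise = 0.5    # ADU: total noise, so SNR = 1 here (hypothetical)
offset = 1024  # ADU: black-level offset, so negative excursions survive

raw = np.round(offset + signal + rng.normal(0, noise, 1_000_000)).astype(int)
values = raw - offset  # subtract the offset back out

print("mean of quantised pixels:", values.mean())       # ~0.5, a fraction
print("share at 0 or 1:", np.isin(values, [0, 1]).mean())  # ~0.95, mostly 0s and 1s
```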

Bit depth does matter, because coarser quantisation reduces DR by increasing quantisation noise; so using a 12-bit ADC will reduce DR, but still not limit it to 12 EV.

Tonal range (should really be tonal DEPTH) is the number of noise-limited greyscale steps between Sfloor and Ssat.

This is generally a lot lower than the bit depth. For a typical FF camera it's about 500-600 levels, looking at DXO data. That's because we have to account for noise at each level. Human vision can distinguish about the same number on a typical display (shot noise affects human vision too), but most displays are 8-bit (256 levels per channel), so we can see banding if the noise is not high enough to dither the steps.
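The counting can be sketched like this: step from the floor to saturation one noise standard deviation at a time, using a simple read + shot + PRNU noise model and hypothetical sensor numbers. This is only the idea, not DXO's exact method:

```python
import numpy as np

read_noise = 4.0      # e-, hypothetical
full_well = 60_000.0  # e-, hypothetical
prnu = 0.005          # 0.5% pixel response non-uniformity, hypothetical

def sigma(s):
    """Total noise at mean signal s: read, shot, and PRNU in quadrature."""
    return np.sqrt(read_noise**2 + s + (prnu * s) ** 2)

s, levels = 0.0, 0
while s < full_well:   # step one noise width at a time
    s += sigma(s)
    levels += 1

print("noise-limited grey levels:", levels)  # a few hundred, far below 2**14
```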

Colour depth is the number of noise-limited colours. A 14-bit ADC could theoretically define 2^42, or 4.4 trillion, colours. Noise limits that to about 23 million; an 8-bit display is limited to roughly 17 million, and human vision can distinguish about 5 million, or fewer on an RGB display.

--
"A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." Antoine de Saint-Exupery
 
Am I correct in thinking that the DR (dynamic range) is the number of stops from the noise floor to full saturation and the bit depth is the number of distinguishable steps within the DR?

For example, let's say we record a photo with 14 bits/pixel and the DR is 12 stops. Does this mean that the two lowest bits are noise and the 12 upper bits (bit depth) are steps, resulting in 2^12 = 4096 distinguishable steps within the DR?
Does this imply that to correctly display an image with a 12-bit DR, a monitor with a static contrast ratio of at least 4096:1 would be required?

If not, why?
 
Austinian wrote: Does this imply that to correctly display an image with a 12-bit DR, a monitor with a static contrast ratio of at least 4096:1 would be required?
Well, if everything is ideal then yes.
If not, why?
In practice, for most of us, there are currently no monitors with such a contrast ratio that are good for graphic art work. Once calibrated, most today are lucky to approach 20% of that, say 9 stops for a good average. That's one of the reasons why tone mapping is a critical step in image raw conversion.

Jack
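
For reference, the conversion between stops and contrast ratio is just powers of two (the 800:1 and 5000:1 figures are only example values):

```python
from math import log2

print(2 ** 12)        # 4096:1, the contrast implied by 12 stops
print(log2(800))      # ~9.6 stops: a good calibrated desktop monitor
print(log2(5000))     # ~12.3 stops: a high-contrast VA panel
print(log2(33000))    # ~15 stops: the plasma mentioned later in the thread
```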
 
Austinian wrote: Does this imply that to correctly display an image with a 12-bit DR, a monitor with a static contrast ratio of at least 4096:1 would be required?
Well, if everything is ideal then yes.
Thank you; I've been wondering about this for some time.
If not, why?
In practice, for most of us, there are currently no monitors with such a contrast ratio that are good for graphic art work. Once calibrated, most today are lucky to approach 20% of that, say 9 stops for a good average. That's one of the reasons why tone mapping is a critical step in image raw conversion.
I've seen VA panels with measured contrast ratios higher than 5000:1; I take it they are disqualified because of viewing angle problems.

If the burn-in problems of OLED on PC monitors were solved, perhaps that would be the display tech of choice.
 
Austinian wrote: Does this imply that to correctly display an image with a 12-bit DR, a monitor with a static contrast ratio of at least 4096:1 would be required?
Well, if everything is ideal then yes.
Not sure it would help. I am not sure to what extent human vision can adapt to that sort of range without a shift in viewpoint... It only takes a bright white highlight in our field of view to hide a lot of the shadow tones.
If not, why?
In practice, for most of us, there are currently no monitors with such a contrast ratio that are good for graphic art work. Once calibrated, most today are lucky to approach 20% of that, say 9 stops for a good average. That's one of the reasons why tone mapping is a critical step in image raw conversion.

Jack
 
Austinian wrote: Does this imply that to correctly display an image with a 12-bit DR, a monitor with a static contrast ratio of at least 4096:1 would be required?
Well, if everything is ideal then yes.
Not sure it would help. I am not sure to what extent human vision can adapt to that sort of range without a shift in viewpoint... It only takes a bright white highlight in our field of view to hide a lot of the shadow tones.
When I look at images on my now old 9G 50" Kuro plasma fed at 3x10bits they look fantastic, much better than on my modern, calibrated, high-end photography monitor. The Kuro was measured at 33000:1 when first installed by a pro 10+ years ago, although I never used it as bright as that. I believe it produces CR in the high thousands in my typical viewing conditions.

The problem is that one then starts minimally processing images for that output device, but they can't be shared or printed or viewed anywhere else, because then they look flat.

Jack
 
That's one of the reasons why tone mapping is a critical step in image raw conversion.

Jack
Could you pls elaborate?

I sometimes use RawTherapee and I know tone mapping is one of its features, but I seldom like its results; it tends to look artificial, mostly.

Most often I use ACR because of its speed and simple UI (colours can be weird but are editable). Are you implying that tone mapping is done "behind the scenes" in ACR (as with other things such as sharpening and noise suppression)?

Or do I simply not understand what you mean....?
 
First of all, thank you (and everyone else!) for your response. Your post seems to directly address my questions, but my cognitive limitations are fogging my understanding. Perhaps it will be clearer if I phrase my question in terms of an example.

Consider a pixel with a saturation limit of 65536 electrons and an electronic noise of 2 electrons. Then the per-pixel engineering DR is log2 (65536/2) = 15 stops, yes? If using a 14 bit ADC, the bit depth is 14 bits, if using a 12 bit ADC, the bit depth is 12 bits, right? So the bit depth is determined by the ADC (assuming, of course, that the image file has at least the same number of bits/pixel)?

Then, with regards to tonal depth, we have to account for photon noise, yes? That is, DR only accounts for the noise floors (electronic noise on the low end and PRNU at the high end) but not the photon noise, right?

Lastly, color depth requires that we include a minimum of three pixels in the calculation, but I'm not sure (in terms of calculating) how noise affects this measure. Is it fair to say that it is closely related to the tonal depth?
 
Austinian wrote: Does this imply that to correctly display an image with a 12-bit DR, a monitor with a static contrast ratio of at least 4096:1 would be required?
Well, if everything is ideal then yes.
If not, why?
In practice, for most of us, there are currently no monitors with such a contrast ratio that are good for graphic art work. Once calibrated, most today are lucky to approach 20% of that, say 9 stops for a good average. That's one of the reasons why tone mapping is a critical step in image raw conversion.

Jack
And of course images are often stored in only 8 bits (JPEG), which also makes tone mapping important for representing a DR of more than 8 stops (though in that case the inverse tone curve is applied later).
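As a rough sketch of that round trip, with a plain power-law (gamma) curve standing in for a real tone curve (real converters use more elaborate curves): a deep shadow 12 stops down survives the 8-bit encode, while a straight linear encode crushes it to zero.

```python
import numpy as np

gamma = 1 / 2.2  # generic power-law curve, a stand-in for a real tone curve

def encode(linear):
    """Tone-map linear [0, 1] scene data into 8-bit codes."""
    return np.round(255 * np.clip(linear, 0, 1) ** gamma).astype(np.uint8)

def decode(code):
    """The inverse curve, applied later in the display pipeline."""
    return (code / 255.0) ** (1 / gamma)

deep_shadow = 2.0 ** -12                 # bottom of a 12-stop scene
print(encode(np.array([deep_shadow])))   # -> 6: still resolved in 8 bits
print(round(255 * deep_shadow))          # -> 0: crushed if stored linearly
```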
 
That's one of the reasons why tone mapping is a critical step in image raw conversion.

Jack
Could you pls elaborate?

I sometimes use RawTherapee and I know tone mapping is one of its features, but I seldom like its results; it tends to look artificial, mostly.
I didn't say it was easy. There is a lot of perceptual stuff happening, so the compression is not seamless. On the other hand, the way it's been performed until recently (in-camera and in-converter) is just an RGB curve; that's quite suboptimal, but it's what we are used to, so most people are OK with it out of habit.

It's an area of active research so stay tuned.

Jack
 
Consider a pixel with a saturation limit of 65536 electrons and an electronic noise of 2 electrons. Then the per-pixel engineering DR is log2 (65536/2) = 15 stops, yes?
Not quite, read the assigned homework ;-)
 
Consider a pixel with a saturation limit of 65536 electrons and an electronic noise of 2 electrons. Then the per-pixel engineering DR is log2 (65536/2) = 15 stops, yes?
A pixel can't have any noise. Noise is the RMS standard deviation of the variation BETWEEN pixels.

So, change the question to a million pixels with an average of 65536 electrons and an RMS deviation of 2 electrons...
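In code, with a Gaussian standing in for the real pixel distribution (numbers as above):

```python
import numpy as np

rng = np.random.default_rng(2)

# A uniform patch: a million pixels, mean 65536 e-, RMS deviation 2 e-
pixels = rng.normal(65536.0, 2.0, 1_000_000)

print(f"signal (mean) ~ {pixels.mean():.0f} e-")
print(f"noise (std between pixels) ~ {pixels.std():.2f} e-")
```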
If using a 14 bit ADC, the bit depth is 14 bits,
Per pixel, yes.
if using a 12 bit ADC, the bit depth is 12 bits, right? So the bit depth is determined by the ADC (assuming, of course, that the image file has at least the same number of bits/pixel)?
Yes. And unless you convert it to a JPEG, it will. 16-bit conversions to TIFF just leave the low-order bits at zero until some calculation is done on the data.
Then, with regards to tonal depth, we have to account for photon noise, yes? That is, DR only accounts for the noise floors (electronic noise on the low end and PRNU at the high end) but not the photon noise, right?
Yes. The minimum separation between any signal S and the next distinguishable signal (higher or lower) is the total noise in that signal: shot, PRNU, dark, and read. Because shot and PRNU noise increase with signal, the spacing gets wider as we go higher, but the human gamma response reverses the effect to some extent.

As an aside, high end noise is largely irrelevant.

log2(65536/2) = log2(32768) = 15 EV

If there is 500e of noise at the top end...

log2(65036/2) = log2(32518) = 14.989 EV, virtually identical. This is why adding a 10-bit offset to Canon images doesn't have much effect on DR.

log2((65536 - 1024)/2) = 14.977 EV

But adding 2e to the noise floor has a dramatic effect

log2(65536/4) = log2(16384) = 14 EV

Effectively, a 1-stop increase in the noise floor (from 2e to 4e) has the same effect as a 1-stop decrease in Ssat. In other words, we would have to raise ISO by a stop and reduce Ssat to 32768 to get the same effect.
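The asymmetry in those numbers is easy to verify:

```python
from math import log2

sat = 65536

print(log2(sat / 2))           # 15.0  : 2e noise floor
print(log2((sat - 500) / 2))   # 14.989: 500e lost at the top, barely matters
print(log2((sat - 1024) / 2))  # 14.977: a 10-bit (1024) offset, ditto
print(log2(sat / 4))           # 14.0  : doubling the floor costs a full stop
```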
Lastly, color depth requires that we include a minimum of three pixels into the calculation, but I'm not sure (in terms of calculating) how noise affects this measure. Is it fair to say that it is closely related to the tonal depth?
Yes. There is a separate tonal depth for each colour channel. Multiply them together and you get a first approximation. However, it's not a very good one, because a number of those colours are the same colour, so you have to do some 3-D integration over the extent of the colour noise to work out how many are unique.

Note, the noise in each channel will not be the same...
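A rough sketch of that first approximation, reusing the level-counting idea from earlier with hypothetical per-channel noise (and, as noted, it overestimates, since duplicate colours aren't removed):

```python
import numpy as np

full_well = 60_000.0  # e-, hypothetical

def tonal_depth(read_noise, prnu=0.005):
    """Noise-limited grey levels in one channel (same stepping idea as above)."""
    s, levels = 0.0, 0
    while s < full_well:
        s += np.sqrt(read_noise**2 + s + (prnu * s) ** 2)
        levels += 1
    return levels

# Hypothetical per-channel read noise, since the channels won't match:
r, g, b = tonal_depth(8.0), tonal_depth(4.0), tonal_depth(10.0)
print("first approximation:", r * g * b)  # tens of millions, before removing
                                          # duplicates via the 3-D integration
```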
 
Not sure it would help. I am not sure to what extent human vision can adapt to that sort of range without a shift in viewpoint... It only takes a bright white highlight in our field of view to hide a lot of the shadow tones.
When I look at images on my now old 9G 50" Kuro plasma fed at 3x10bits they look fantastic, much better than on my modern, calibrated, high-end photography monitor. The Kuro was measured at 33000:1 when first installed by a pro 10+ years ago, although I never used it as bright as that. I believe it produces CR in the high thousands in my typical viewing conditions.

The problem is that one then starts minimally processing images for that output device, but they can't be shared or printed or viewed anywhere else, because then they look flat.

Jack
OK. Reading around the subject, it seems our near-field accommodation is up to 13 EV, or about 8000:1, so it would certainly be better than a typical PC display.

Any more than that and the bottom or top stops are indistinguishable.

It does rather depend on the average ambient brightness though. This only applies to an average luminance of middle grey viewed in dark conditions...

Most graphics monitors are designed to produce artwork for TV, film and print, so I guess the need for 15 stops is less of an issue.
 
Consider a pixel with a saturation limit of 65536 electrons and an electronic noise of 2 electrons. Then the per-pixel engineering DR is log2 (65536/2) = 15 stops, yes?
A pixel can't have any noise.
Of course it can, in the context in which GB was using the term. More than that, different pixels can have different noise levels.
Noise is the RMS standard deviation of the variation BETWEEN pixels.
Here we go! Finally down to the question of what noise is!

I tend to like variation between pixels, this gives me detail. :-)

 
Consider a pixel with a saturation limit of 65536 electrons and an electronic noise of 2 electrons. Then the per-pixel engineering DR is log2 (65536/2) = 15 stops, yes?
A pixel can't have any noise.
Of course it can, in the context in which GB was using the term.
How.
More than that, different pixels can have different noise levels.
Noise is the RMS standard deviation of the variation BETWEEN pixels.
Here we go! Finally down to the question of what noise is!
You could just try looking it up.
I tend to like variation between pixels, this gives me detail. :-)
How nice for you.
 
Consider a pixel with a saturation limit of 65536 electrons and an electronic noise of 2 electrons. Then the per-pixel engineering DR is log2 (65536/2) = 15 stops, yes?
A pixel can't have any noise.
Of course it can, in the context in which GB was using the term.
How.
Keep taking measurements of a single pixel and you get a random process in time.
More than that, different pixels can have different noise levels.
Noise is the RMS standard deviation of the variation BETWEEN pixels.
Here we go! Finally down to the question of what noise is!
You could just try looking it up.
Says something about sound and noise and that they are indistinguishable!
I tend to like variation between pixels, this gives me detail. :-)
How nice for you.
 