Perception, reality and a signal below the noise...

Someone was discussing whether to buy a Hasselblad X1D or a Fujifilm GFX. One of the comments was:

'The X1D cameras just have a different look. It might be the massive color information that the camera is said to have in its firmware. It might be something in the way a "raw" file is generated. It might be the treatment of colors in the camera profile that Phocus has or that Adobe obtained from Hasselblad for Lightroom. Maybe one can wrestle an X1D raw file to look like a Fuji GFX file, but if you just start on the X1D raw in Phocus or even Lightroom, you typically have something of the camera "look" in most shots when you are done.'

That is quite a definitive statement. But do cameras have different color?

So, I downloaded DPReview's studio test images for both the X1D and the GFX 100, generated DCP profiles for both with Lumariver Profile Designer, developed the studio test images with consistent exposure in Lightroom, and analysed the ColorChecker colors using BabelColor's PatchTool.
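For anyone who wants to reproduce the comparison step, this is roughly the kind of computation PatchTool does; a minimal sketch, assuming the third-party colour-science Python package and made-up patch readings:

```python
import numpy as np
import colour  # pip install colour-science

# Hypothetical CIELAB readings of one ColorChecker patch from the two
# developed files (placeholder numbers, not my actual measurements).
lab_x1d = np.array([52.1, -24.3, 18.7])
lab_gfx = np.array([51.8, -23.9, 19.2])

# CIEDE 2000 colour difference, the metric PatchTool reports.
dE = colour.delta_E(lab_x1d, lab_gfx, method="CIE 2000")
print(f"dE2000 = {dE:.2f}")
```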

The color differences were like this:

The colors are split squares; I don't recall which side is which, but they look pretty similar to me...

So, it seems both cameras are capable of producing exactly the same color.
on this chart...
That is true.

But, it is a decent indication that the sensors can produce the same colors.

Best regards

Erik



--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic tends to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 
Obviously, different raw developers may produce different colors, and they apply different hue twists and tone curves.

This is one of the test samples of the X1D II from DPReview, processed with Hasselblad's own Phocus, with WB and 'exposure' (the 'L' value in Lab) approximately the same. There are differences, but I don't think they would be a reason to choose one camera over the other.

Best regards

Erik
Indeed. There are far too many variables in the process chain to attribute colour differences to a single factor, but looking at the studio comparisons on DPR, which are all done with the Adobe Standard profile in ACR, most cameras are remarkably similar these days.

However, the biggest difference seems to be on the blue-green axis. Different blue response?

Out-of-camera colours, or different raw converters, can make quite a difference. I have to say, I always find ACR a little yellow on pale skin tones with every camera I own.

Phocus did a good job here, but whether it's accurate or not is another matter. I used to use C1 for portraits until I came up with a profile for ACR that I liked.

--
"A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." Antoine de Saint-Exupery
 
If my understanding is right, you used the same chart for profiling. If so, this makes things even worse. The result you see has nothing to do with the sensors; the software produces similar colors because that is what it was designed to do, having a cheat sheet. They could have written the code so that you would get those colors from a B&W sensor as well.

If producing the same colors were so simple, Adobe would have done it in one of their profiles instead of this, right?
 
Hi,

Yes and no. For one thing, I have actually run a large set of tests.

But, the page you refer to has three fundamental errors:
  • The first is that photographs of skin color at best provide a metameric match: a combination of dyes that produces a stimulus similar to real skin tones but doesn't have a spectral match.
  • The second is that DPReview has found that their portrait prints are subject to significant fading. So, they change between shooting occasions.
  • The third is that I presume they are glossy and subject to glare, depending on the positioning of the camera and lights.
A way to improve my comparison would be to use the normal ColorChecker as a 'learning set' and another reference as an 'evaluation set'. Such an evaluation set could be the ColorChecker SG.

Unfortunately, I don't have access to a good set of ColorChecker SG shots from the Hasselblad X1D.
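To make the learning-set/evaluation-set idea concrete, here is a toy sketch (numpy only, with synthetic patch data; a real DCP involves more than a 3x3 matrix): fit a colour matrix on half of the patches and judge it on the held-out half.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground-truth XYZ patch values and a made-up camera response
# (hypothetical mixing matrix plus a little sensor noise).
xyz = rng.uniform(0.05, 0.95, size=(48, 3))           # 48 synthetic patches
mix = np.array([[0.9, 0.2, 0.1],
                [0.1, 1.0, 0.1],
                [0.0, 0.1, 0.8]])
cam = xyz @ mix.T + rng.normal(0.0, 0.005, (48, 3))   # simulated camera RGB

cam_tr, cam_te = cam[:24], cam[24:]                   # learning / evaluation split
xyz_tr, xyz_te = xyz[:24], xyz[24:]

# Least-squares 3x3 colour matrix, fitted on the learning set only.
M, *_ = np.linalg.lstsq(cam_tr, xyz_tr, rcond=None)

# Mean absolute residual: flattering on the learning set, honest on the held-out set.
print("learning-set residual:", np.abs(cam_tr @ M - xyz_tr).mean())
print("evaluation residual:  ", np.abs(cam_te @ M - xyz_te).mean())
```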

--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic tends to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 
But, the page you refer to has three fundamental errors:
  • The first is that photographs of skin color at best provide a metameric match: a combination of dyes that produces a stimulus similar to real skin tones but doesn't have a spectral match.
That is one of my points about the test you did. Those color patches are made of dyes spanning a very low-dimensional space; it has been discussed here. I never said that those portraits represent the color differences you get when shooting real people; I just pointed them out as examples of color differences.
  • The second is that DPReview has found that their portrait prints are subject to significant fading. So, they change between shooting occasions.
Does the color chart change as well? You can see similar differences there. Same for the color wheel.
  • The third is that I presume they are glossy and subject to glare, depending on the positioning of the camera and lights.
That can hardly explain the same effects elsewhere, see above. And even if it could, it would cast a deep shadow on your experiment based on the same images.
A way to improve my comparison would be to use the normal ColorChecker as a 'learning set' and another reference as an 'evaluation set'. Such an evaluation set could be the ColorChecker SG.
The other reference should be a year's worth of shooting in various conditions.
 
  • The first is that photographs of skin color at best provide a metameric match: a combination of dyes that produces a stimulus similar to real skin tones but doesn't have a spectral match.
That is one of my points about the test you did. Those color patches are made of dyes spanning a very low-dimensional space; it has been discussed here. I never said that those portraits represent the color differences you get when shooting real people; I just pointed them out as examples of color differences.
That was discussed to some extent when Fujifilm owners compared the GFX 50S and the GFX 50R. Color rendition was very different on the portrait prints. The GFX 50S was tested with the 63 mm and the GFX 50R with the 120 mm. That makes a difference in shooting distance and moves the specular reflections.

DPReview found that the prints were fading and replaced them. They didn't find a good solution for that.
  • The second is that DPReview has found that their portrait prints are subject to significant fading. So, they change between shooting occasions.
Does the color chart change as well? You can see similar differences there. Same for the color wheel.
As I noted, I used profiles generated the same way. DPReview uses profiles that Adobe generated.
  • The third is that I presume they are glossy and subject to glare, depending on the positioning of the camera and lights.
That can hardly explain the same effects elsewhere; see above. And even if it could, it would cast a deep shadow on your experiment, which is based on the same images.
No, it would not, as the only comparison I made is on the ColorChecker. I don't know how much it has faded; as I recall, X-Rite suggests replacing it after two years.
A way to improve my comparison would be to use the normal ColorChecker as a 'learning set' and another reference as an 'evaluation set'. Such an evaluation set could be the ColorChecker SG.
The other reference should be a year's worth of shooting in various conditions.
You cannot do any controlled experiments under various conditions.

What I have seen here is that my samples from the ColorChecker were essentially identical. So, all the math and experimental conditions resulted in similar images from both the X1D and the GFX.

That doesn't mean that they reproduce the ColorChecker perfectly:

This is a straight screen capture, so it distorts colors a bit.

Comparing the GFX 100 with the reference values, we have a CIEDE 2000 difference of 2.77 on average and 6.04 max.

But comparing the X1D and the GFX 100, the figures are 0.82 average and 1.6 max.

So the conversions between the GFX 100 and the X1D are very close.

Now, if we compared the GFX 100 to, say, the Phase One IQ3 100MP, which probably has a different CFA design, the results would be different, even using the same methods:

On the left: the Fuji GFX 100 compared to the Phase One IQ3 100MP. On the right: the Fuji GFX 100 compared to the Hasselblad X1D.

Hasselblad's Phocus has a reproduction setting and can calibrate using the ColorChecker.

On the left: Hasselblad's Repro setting with Phocus's built-in profile, compared to the ColorChecker reference data. On the right: Phocus Repro, calibrated from the test target.

Delta E values are average 2.7 and max 6.38 on the left, with 1.18 and 1.98 on the right.

Is that a camera-based difference, or has the ColorChecker DPReview uses faded?

We can compare Phocus's Repro setting with its default rendition, 'Factory', and see how much color the 'Factory' setting adds.

Quite a lot. The change between the Repro and 'Factory' settings is 3.85 DE 2000 on average and 6.79 at most. Much of that is the tone curve, but I guess a lot of it is also color rendition intent.

Finally, let's compare the Factory rendition from Phocus with the rendition we got in Lightroom using the Lumariver profile:

The differences here are quite significant. That pretty much indicates that much of the color difference is coming from Phocus.

That's all from me for now; I'm going on a photo trip for a week...

Best regards

Erik

--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic tends to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 
That was discussed to some extent when Fujifilm owners compared the GFX 50S and the GFX 50R. Color rendition was very different on the portrait prints. The GFX 50S was tested with the 63 mm and the GFX 50R with the 120 mm. That makes a difference in shooting distance and moves the specular reflections.

DPReview found that the prints were fading and replaced them. They didn't find a good solution for that.
And yet, the same "fading" is visible everywhere similar "skin colors" appear in the test scene.

You cannot say "different focal length, different reflections" and yet claim that your test proves the same colors on the sensor.
  • The second is that DPReview has found that their portrait prints are subject to significant fading. So, they change between shooting occasions.
Does the color chart change as well? You can see similar differences there. Same for the color wheel.
As I noted, I used profiles generated the same way. DPReview uses profiles that Adobe generated.
So? I was talking about the Adobe profiles.
  • The third is that I presume they are glossy and subject to glare, depending on the positioning of the camera and lights.
That can hardly explain the same effects elsewhere; see above. And even if it could, it would cast a deep shadow on your experiment, which is based on the same images.
No, it would not, as the only comparison I made is on the ColorChecker. I don't know how much it has faded; as I recall, X-Rite suggests replacing it after two years.
You missed my point entirely. With the Adobe profiles, the "skin colors" are very different everywhere in the scene. Fading is a non-issue, because the prints are just one particular part of the scene. If all sensors are capable of producing the same color, how come Adobe never did it?
A way to improve my comparison would be to use the normal ColorChecker as a 'learning set' and another reference as an 'evaluation set'. Such an evaluation set could be the ColorChecker SG.
The other reference should be a year's worth of shooting in various conditions.
You cannot do any controlled experiments under various conditions.
Why should you?
What I have seen here is that my samples from the ColorChecker were essentially identical.
What you do is deeply flawed. I can write code producing exactly the same colors with a B&W sensor.
So, all the math and experimental conditions resulted in similar images from both the X1D and the GFX.
They should. The software is designed to do that regardless of the sensor color response.
That doesn't mean that they reproduce the ColorChecker perfectly:
This says something about the optimization scheme used by the software but not so much about the sensors.
Now, if we compared the GFX 100 to, say, the Phase One IQ3 100MP, which probably has a different CFA design, the results would be different, even using the same methods.
That does not mean that there were no differences in the original comparison.

I am skipping the rest, since those color squares mean nothing to me. They do not have the richness of the colors in nature, and you test the calibration on the 'training set', as you said yourself, which is deeply flawed.
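To spell out the "cheat sheet" objection, a toy sketch (the reference numbers are illustrative, not the official X-Rite data): if the software knows which patch it is looking at, it can return the published value regardless of what the sensor, colour or B&W, recorded.

```python
# Published-style CIELAB reference values for a few ColorChecker patches
# (illustrative numbers only, not the official X-Rite data).
reference_lab = {
    "dark skin":  (38.0, 13.6, 14.1),
    "light skin": (65.7, 18.1, 17.8),
    "blue sky":   (49.9, -4.9, -21.9),
}

def cheat_profile(patch_name, sensor_reading):
    """Return the reference value no matter what the sensor recorded.

    The reading never influences the output, so this 'profile' works
    identically for a colour sensor and a B&W one -- which is why hitting
    the chart you trained on proves little about the sensor itself.
    """
    _ = sensor_reading
    return reference_lab[patch_name]

print(cheat_profile("blue sky", sensor_reading=0.42))  # grayscale input, 'correct' colour out
```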
 
A single pixel doesn't have an S/N ratio in a still image. It makes a measurement, like a hand-held light meter or a thermometer. That measurement has an uncertainty, an error bar if you like, the width of which depends on the measurement. There's no way to split that single number into "signal" and "noise".

It's only when you have an array or a sequence of measurements that you can talk about a signal.
This might be tangential to the discussion but... a single pixel does have noise characteristics. The sequence of measurements is in time.
That's true for video, but not for still photography where each pixel gives you one number.
That number changes when you repeat the experiment over and over again. Each pixel has statistical properties like a probability distribution. For read noise, it is something like Gaussian noise, discretized.
Of course the situation is different when you make multiple measurements. Then you can call the average reading the "signal" and the deviation the "noise". Likewise if you take the average from a number of neighbouring pixels (as in "noise reduction").

But if you shoot one still photograph, all you know is the number of photons that hit each pixel.
Perhaps you're thinking of your advanced photon-counting technology, but nobody is using that outside of labs.
No, I am not thinking about that. Noise can be modeled as follows. Consider all pixels as i.i.d. (or not) random variables (white noise). It is a random process generating a single frame, and roughly speaking, the statistical characteristics across a single frame (in uniform regions) are the same as the temporal ones of a single pixel.

This applies to additive noise (read noise). Shot noise (which is not really noise, as mentioned above) and other types of noise can be modeled along those lines as well with some modifications.
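As a quick numerical check of that claim, here is a minimal sketch (numpy; i.i.d. Gaussian read noise on a uniform frame, all numbers made up): the spatial statistics of one frame match the temporal statistics of one pixel.

```python
import numpy as np

rng = np.random.default_rng(1)
mean_adu, read_sigma = 100.0, 3.0   # assumed flat signal and read-noise sigma

# One frame of a uniform target: spatial statistics across the frame.
frame = mean_adu + rng.normal(0.0, read_sigma, size=(512, 512))
print("spatial std of one frame: ", frame.std())

# One pixel measured over many frames: temporal statistics.
pixel_series = mean_adu + rng.normal(0.0, read_sigma, size=10_000)
print("temporal std of one pixel:", pixel_series.std())
# Both come out near 3.0: randomness in time shows up as randomness in space.
```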
We may be in agreement after all. "Shot noise" is not really noise, unless you make some a priori decision about the colours and lighting in the scene and call that the "signal". For instance, you might believe as an article of faith that the sky on a clear day is really a flat uniform blue and every pixel should deliver exactly the same number.

Read noise is certainly noise because you now have a signal -- the voltage output from each pixel. You can measure the read noise by replacing the sensor with a known accurate voltage, as presumably is done by those who design circuits for cameras.

Don Cox
 
Flag on the play.

We are talking about photon shot noise, so in this case your analogy is not correct. There is no error, because the sensor is measuring the exact number of photons that arrive*. It is not the sensor's fault that the number of photons arriving per unit time fluctuates; that is just the nature of light emission and propagation through scattering and absorbing media.
The pixel measures the exact number of photons,
The pixel doesn't measure an exact number of photons.
Of course it measures an exact number of photons, or can measure that if it is sensitive enough. What do you think it is measuring?
The exact number of photons impinging on a pixel, even for an exact number of photoelectrons measured, is an unknowable quantity.
Of course it is knowable. What do you mean by saying it is unknowable?
Because a number of different (closely spaced) photon counts could potentially generate the same number of photoelectrons, which is what is actually measured. Hence, you can only argue probabilistically about the exact number of photons that generated a particular number of photoelectrons, for typical, ordinary CMOS/CCD sensors.

That was covered in the two links in my original post.
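A small simulation of that argument (numpy; the QE and photon rate are assumed numbers, and detection is modeled as binomial thinning of Poisson arrivals):

```python
import numpy as np

rng = np.random.default_rng(2)
qe, lam, trials = 0.6, 100.0, 200_000   # assumed QE, mean photon count, repeats

photons = rng.poisson(lam, trials)       # photons arriving in each exposure
electrons = rng.binomial(photons, qe)    # photoelectrons actually detected

# Every trial that produced exactly 60 photoelectrons came from a whole
# spread of underlying photon counts:
print(np.unique(photons[electrons == 60]))
```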
And suppose the QE were 100%? As I mentioned, with a high QE it is a good approximation to say one is measuring the exact number of photons.
Do we have any detectors with a QE of 100%?
 
A photograph of a portrait print is not a photograph of skin colours. The faces on the DPR test setup are worse than useless -- they are actively misleading.

The two skin colour squares on the ColorChecker are probably closer. They seem to be mixed from iron oxide pigments, with spectra resembling those of real skin. Colour prints are mixtures of CMYK dyes and have quite different spectra.
 
Of course the situation is different when you make multiple measurements. Then you can call the average reading the "signal" and the deviation the "noise". Likewise if you take the average from a number of neighbouring pixels (as in "noise reduction").
They are two sides of the same process. Even if you do not take "multiple measurements", the noise is there; the reason you see noise in a single image is that it was created by a process that is random in time and homogeneous (or not) across the frame. Randomness in time is converted into randomness in space. Looking at a single image, you do take multiple measurements, because you look at multiple pixels.
 
But if you shoot one still photograph, all you know is the number of photons that hit each pixel.
That doesn't matter. The noise is still defined, and it (the variance) adds to spatial noise.
Ok, if the noise is 'defined', then you should be able to measure it. Kindly report the shot noise number for the image below and how you derived it:

What is the numerical value of shot noise in this image?
You are conflating a few things here. In the typical measurement model, such as:

output = signal + noise,

the noise is usually external (say, read noise on top of the actual signal coming out of the sensor). This noise has the notion of being mixed with the actual signal value even for a single reading, irrespective of whether we can separate the two from that single reading, because we know it is there due to the nature of the measurement process. But there is another kind of noise, associated with the internal state of the signal, if that internal state causes the signal to fluctuate independently of any externally added measurement noise. This 'internal noise' can only be measured given several readings.

And this is how shot noise operates: as an 'internal noise'. Given a single measurement, shot noise has no meaning, as the value you hold is the true number of photons that arrived in that measurement interval, given the current state of the generating process. However, as said earlier, read noise still has a meaning even for a single measurement.

A single photographic natural image is a single sample of a joint spatial and temporal random process, not a full view of the process. You would need an ensemble of pictures of the same scene to reason about shot noise at a given pixel. And the usual practice of substituting spatial measurements for temporal ones is only applicable if the mean value at each pixel location is the same across the image, i.e., if the process is stationary in space and time. That is not the case for a typical natural image like the one posted above, unless you always want to image uniformly lit grayscale walls.
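A minimal sketch of that stationarity point (numpy; a synthetic intensity ramp standing in for a natural image, pure Poisson arrivals assumed, no read noise):

```python
import numpy as np

rng = np.random.default_rng(3)

# A synthetic 'natural image': an intensity ramp, so the process is not
# stationary in space.
scene = np.linspace(50, 500, 256)[None, :].repeat(256, axis=0)

one_shot = rng.poisson(scene)                    # a single photograph
stack = rng.poisson(scene, (1000, 256, 256))     # ensemble of the same scene

print("spatial variance of one shot:", one_shot.var())   # dominated by the ramp
print("temporal variance, one pixel:", stack[:, 128, 128].var(),
      "~ mean:", scene[128, 128])                         # Poisson: var ≈ mean
```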

--
Dj Joofa
http://www.djjoofa.com
 
Ok, if the noise is 'defined', then you should be able to measure it. Kindly report the shot noise number for the image below and how you derived it:
You measure it with multiple images with identical exposure. You need a stuffed cat for that. Come on, you know this.
 
Right, multiple images; and to me that is contrary to your thesis that 'the noise is defined and it (the variance) adds to spatial noise.' That was the whole point. Unlike read noise, shot noise has no meaning or definition for a single measurement (one still photograph, as DCox put it). It only makes sense when an ensemble of images is available (multiple images, as you say).

--
Dj Joofa
http://www.djjoofa.com
 
Right, multiple images; and to me that is contrary to your thesis that 'the noise is defined and it (the variance) adds to spatial noise.' That was the whole point. Unlike read noise, shot noise has no meaning or definition for a single measurement.
It's defined, but not measured. You can also get an estimate (maybe it's better than an estimate) from the number of quanta.
It only makes sense when an ensemble of images is available (multiple images, as you say).
I don't know why you're telling me all this. I told you you need multiple measurements to measure total noise temporally, or you can estimate shot noise from the number of quanta. If you need a definition of the noise but not a measurement, consult a textbook or a Wikipedia article. It's defined.
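For what it's worth, the 'estimate from the number of quanta' is just the Poisson square-root rule; a one-line sketch with an assumed example count:

```python
import math

N = 10_000                     # assumed photoelectron count reported by one pixel
shot_sigma = math.sqrt(N)      # Poisson estimate of the shot-noise std: ~100 e-
print(f"SNR ~ {N / shot_sigma:.0f}")   # ≈ sqrt(N) ≈ 100
```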
 
I have highlighted above what DCox wrote: one still photograph, not multiple images. And you had an objection to that. Let me say it again: one still natural image (unless it is of a uniformly lit grayscale wall) has no meaning for shot noise, the way I see it.

But if you still think that a single image does, then please measure the shot noise in the single cat image that I posted earlier.

Please no more wording. Just report a number. If you can't do that without having multiple images, then that negates your assertion about shot noise in a single image.
 
For what it’s worth, one couldn’t do that for read noise either, for various reasons.
 
The thing that is important in my mind is the equation I mentioned in the original post (https://www.dpreview.com/forums/post/64327430):

output = signal + noise

This equation has a meaning for a single measurement in the case of an external noise source (measurement noise in our case, e.g. read noise), irrespective of whether we can measure it or not. We know this noise is there.

But it has no meaning for the internal noise, shot noise here, because the value you hold in a single measurement is the true value of the internal state of the overall system at that time. It is the ground truth in that measurement interval. There is no notion of noise, IMHO, in a single measurement here, so we can't say that in a single measurement the shot noise is there and we just can't measure it. The value you have is the truth value.

But I guess the discussion is becoming more philosophical :-).
 
