
I finally found a fellow who (bravely) states (in Section 5 - "ISO and Exposure/Brightness"):

... digital sensors act very differently than film, as there is no varying sensitivity to different light. In fact, digital sensors only have one sensitivity level. Changing ISO simply amplifies the image signal, so the sensor itself is not getting any more or less sensitive.

From: https://photographylife.com/sensor-crop-factors-and-equivalence

... and then (in his web-page treatise entitled, "Understanding ISO - A Beginner's Guide") flips to:

... The component within your camera that can change sensitivity is called “image sensor” or simply “sensor”. It is the most important (and most expensive) part of a camera and it is responsible for gathering light and transforming it into an image. With increased sensitivity, your camera sensor can capture images in low-light environments ...

It seems that joining the "photographic priesthood" perhaps involves "incentives to obfuscate" ?

Camera manufacturers tend to minimize specific information so as to minimize the asking of further (at times troubling) questions. "Information manufacturers" may prefer a similar simplicity. We are thus "saved from our own curiosities and intelligences" by "cruel compassions" of "technical paternalisms".
 
Maybe, in addition to Wikipedia articles, a popular blog could be created that would administer curative medicines surreptitiously.
I know of a lot of folks who submitted corrections to Wikipedia only to see their edits reverted in a matter of minutes.

A collective blog may be a better option, with editors elected among those who are qualified to speak on the matters.

Promoting such a blog may be a difficult task however.

--
http://www.libraw.org/
I know I said I was getting out of your hair, but couldn't resist one more post. Now I have pretty much shot my wad.

A Google search for "exposure triangle" of course brings up many articles. Think of the good that could be done if among those was an article titled "Exposure Triangle Exposed". That title would attract attention and views. Anyone seeking knowledge about the exposure triangle via a search engine could then be exposed to the "truth". The article would feed on itself: the more clicks it got, the better its exposure among the exposure triangle articles. I would think it would gravitate to the top.

If only that much were done, much would be accomplished. Such an article getting search-engine hits would do far more good than an article buried deep in the bowels of DPR, but attention could be drawn to those and others via links. The parent article could be used to spawn offspring articles, which would feed off each other.

Just some food for thought.
 
I finally found a fellow who (bravely) states (in Section 5 - "ISO and Exposure/Brightness"):

... digital sensors act very differently than film, as there is no varying sensitivity to different light.
OK. So he thinks film has that varied sensitivity? I somehow doubt he is referring to flashing and other hypersensitization methods.
(Perhaps) he is referring to the inability to physically "swap" image-sensors, as contrasted with the ability to physically swap out rolls of film of one "speed" for rolls of film of another "speed"?

(Perhaps) he is using the term "sensitivity" to describe the "small-signal gain" existing at individual points within the output range of a non-linear transfer function (whereas a solid-state sensor's transfer function has a linear slope)?
 
I don't understand this part. If by ISO you loosely mean having a sensor that can be made variably sensitive to light, are you then basing the claim that digital sensors have no ISO upon current technology? If so, IMHO, that is not a good decision, as technology changes might make you change your definition.

In any case I acquired the following image without changing any optical parameters or analog gain. However, a certain trick was used to make the sensor more 'sensitive' to light in a certain spectral range.


Does that mean that the sensor from which the above image was acquired has a variable ISO?

--

Dj Joofa
 
I don't understand this part. If by ISO you loosely mean having a sensor that can be made variably sensitive to light, are you then basing the claim that digital sensors have no ISO upon current technology? If so, IMHO, that is not a good decision, as technology changes might make you change your definition.
Changing ISO may amplify nothing. If it amplifies, it is not always "simply" :) Technically, sensor responsivity may also be changed (and is changed in some designs), but it is not measured in ISO sensitivity units. In video they do not have ISO; they have a dB gain switch right on the camera, and since that hardly ever caused any confusion in the video world, why did digital photography go by a different and more convoluted route?
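
For concreteness, the correspondence between the two labelings is just a logarithm: video gain in dB is conventionally 20·log10 of the linear voltage gain, so each doubling of ISO above base corresponds to roughly +6 dB. A minimal sketch (the helper name and the base ISO of 100 are my assumptions, not anything from the post above):

import math

def iso_gain_db(iso, base_iso=100):
    """Video-style gain label (dB of voltage gain) implied by an ISO
    setting relative to an assumed base ISO. 20*log10(2) ~= 6.02,
    so each doubling of ISO corresponds to roughly +6 dB."""
    return 20 * math.log10(iso / base_iso)

print(iso_gain_db(200))   # ~6.0 dB
print(iso_gain_db(1600))  # ~24.1 dB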
 
I don't understand this part. If by ISO you loosely mean having a sensor that can be made variably sensitive to light, are you then basing the claim that digital sensors have no ISO upon current technology? If so, IMHO, that is not a good decision, as technology changes might make you change your definition.
Changing ISO may amplify nothing. If it amplifies, it is not always "simply" :) Technically, sensor responsivity may also be changed (and is changed in some designs), but it is not measured in ISO sensitivity units. In video they do not have ISO; they have a dB gain switch right on the camera, and since that hardly ever caused any confusion in the video world, why did digital photography go by a different and more convoluted route?
The whole thing is such a mess. Manufacturer-reported "ISO indications" result, by definition, from a fixed level (as required by the SOS method) or an arbitrary level (as allowed by the REI method) in the encoded JPEG output image-data - which represents the non-linear tone-mappings of RGB transfer functions, followed by gamma correction, produced in response to some given photometric exposure at the image-sensor front end. Little wonder that it fails to describe anything else coherently?
 
I don't understand this part. If by ISO you loosely mean having a sensor that can be made variably sensitive to light, are you then basing the claim that digital sensors have no ISO upon current technology? If so, IMHO, that is not a good decision, as technology changes might make you change your definition.
Changing ISO may amplify nothing. If it amplifies, it is not always "simply" :) Technically, sensor responsivity may also be changed (and is changed in some designs), but it is not measured in ISO sensitivity units. In video they do not have ISO; they have a dB gain switch right on the camera, and since that hardly ever caused any confusion in the video world, why did digital photography go by a different and more convoluted route?
The whole thing is such a mess. Manufacturer-reported "ISO indications" result, by definition, from a fixed level (as required by the SOS method) or an arbitrary level (as allowed by the REI method) in the encoded JPEG output image-data - which represents the non-linear tone-mappings of RGB transfer functions, followed by gamma correction, produced in response to some given photometric exposure at the image-sensor front end. Little wonder that it fails to describe anything else coherently?
I agree the whole thing is in a mess. Digital photography, including colorimetry, has certain hard-entrenched ideas that are difficult to undo now.
 
How helpful it would be to define what SNR one is talking about. SNR at the "pixel level"? Yes, of course: larger pixels achieve better SNR. SNR on photos viewed at equal output sizes from the same distance? Pixel size is rather irrelevant; the larger sensor wins.

Gruß, masi1157
 
How helpful it would be to define what SNR one is talking about. SNR at the "pixel level"? Yes, of course: larger pixels achieve better SNR. SNR on photos viewed at equal output sizes from the same distance? Pixel size is rather irrelevant; the larger sensor wins.

Gruß, masi1157
Excellent point.

SNR at the pixel level is the ratio between the signal and the noise. I believe that there are two main methods of measuring this.

The first is to take a series of images of a flat field generated by a calibrated light source. The signal (S) is then the average ADU value for the pixel across the raw files, and the noise (N) is the standard deviation of those pixel values. I believe this is the method defined in the ISO noise standard - I don't have a copy, so I am not sure about this. It is difficult to set up and measure.
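
A minimal sketch of that first method, assuming the raw frames have already been loaded into a numpy array (the helper name, stacking, and black-level handling are my assumptions, not part of the standard):

import numpy as np

def temporal_snr(stack, black_level=0):
    """Per-pixel SNR from a series of flat-field frames.

    stack : (n_frames, height, width) array of raw ADU values from
            repeated exposures of the same calibrated flat field.
    Signal is the per-pixel mean over the series; noise is the
    per-pixel standard deviation over the series.
    """
    frames = stack.astype(np.float64) - black_level
    signal = frames.mean(axis=0)
    noise = frames.std(axis=0, ddof=1)
    return signal / noise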

The second is a simpler method that takes a single image of an evenly illuminated patch. One then simply averages the pixel values in the raw file and computes the standard deviation. This is the method used by DxOMark to produce their "Screen" SNR numbers. I have written software to do this myself and use it to characterize my cameras.
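
A sketch of the second method, again with the raw values assumed to be in a numpy array (helper name mine). Note the patch should come from a single Bayer color plane, since mixing color channels would inflate the apparent "noise":

import numpy as np

def patch_snr(raw_plane, cx, cy, size=64, black_level=0):
    """Spatial SNR from one evenly illuminated patch (single capture).

    raw_plane : 2D array of raw ADU values from one Bayer color plane.
    cx, cy    : center of the region of interest.
    """
    h = size // 2
    roi = raw_plane[cy - h:cy + h, cx - h:cx + h].astype(np.float64) - black_level
    return roi.mean() / roi.std(ddof=1)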

SNR as viewed at a standard resolution is the ratio after de-mosaicking and re-sampling to a standard resolution. The value depends on the algorithms used and the standard resolution that the image is re-sampled to. I do not know how this would be measured directly. For their "Print" SNR, DxOMark simply adjusts the measured "Screen" SNR by the ratio of the camera resolution to the "standard" resolution.
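
That adjustment amounts to a resolution scaling. Assuming ideal downsampling (averaging k pixels cuts the noise by sqrt(k)) and the 8 MP reference size commonly cited for DxOMark's methodology (an assumption on my part, as is the helper name), the correction in dB would be:

import math

def print_snr_db(screen_snr_db, sensor_megapixels, reference_megapixels=8.0):
    """Normalize a measured ('Screen') SNR in dB to a reference output
    resolution. Averaging k pixels improves SNR by sqrt(k), i.e. by
    10*log10(k) dB."""
    k = sensor_megapixels / reference_megapixels
    return screen_snr_db + 10 * math.log10(k)

print(print_snr_db(32.0, 24.0))  # 24 MP sensor: +4.77 dB -> ~36.8 dB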

My viewpoint is that the pixel-level SNR is the inherent SNR of the camera. The perceived SNR of the final image is strongly influenced by the selection of algorithms and the final resolution. A camera that has more pixels allows one greater scope for improving SNR by down-sampling the image.
 
SNR at the pixel level is the ratio between the signal and the noise. I believe that there are two main methods of measuring this.

The first is to take a series of images of a flat field generated by a calibrated light source. The signal (S) is then the average ADU value for the pixel across the raw files, and the noise (N) is the standard deviation of those pixel values. I believe this is the method defined in the ISO noise standard - I don't have a copy, so I am not sure about this. It is difficult to set up and measure.

The second is a simpler method that takes a single image of an evenly illuminated patch. One then simply averages the pixel values in the raw file and computes the standard deviation. This is the method used by DxOMark to produce their "Screen" SNR numbers. I have written software to do this myself and use it to characterize my cameras.
There is a third method that avoids the need for even illumination -- a condition which I have found difficult to achieve. Make two images under identical conditions. Convert them to floating point. Crop both to the region of interest (ROI). Add them, take the mean, and divide by two. That's the S. Subtract them (making sure not to clip negative numbers), compute the standard deviation, divide by 1.414. That's the N.
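
A sketch of that two-capture procedure as described, assuming the two ROIs are already cropped into numpy arrays (the helper name is mine):

import numpy as np

def two_capture_snr(roi_a, roi_b):
    """SNR from two identical captures; the subtraction cancels the
    (possibly uneven) illumination pattern, so a flat field is not needed.

    S = mean of (A + B) / 2
    N = std(A - B) / sqrt(2), since the difference of two independent
        frames has twice the noise variance of a single frame.
    """
    a = roi_a.astype(np.float64)
    b = roi_b.astype(np.float64)
    signal = ((a + b) / 2.0).mean()
    noise = (a - b).std(ddof=1) / np.sqrt(2.0)
    return signal / noise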

This method was suggested to me by Jack Hogan, and we have used it with great success in our camera modeling program.

Relieved of the need for an even illuminant, I often take pairs of pictures of gradients and move the ROI around to get the SNR at various places on the Photon transfer curve.

Jim
 
There is a third method that avoids the need for even illumination -- a condition which I have found difficult to achieve. Make two images under identical conditions. Convert them to floating point. Crop both to the region of interest (ROI). Add them, take the mean, and divide by two. That's the S. Subtract them (making sure not to clip negative numbers), compute the standard deviation, divide by 1.414. That's the N.
Actually, I use an ROI in my software to gather just a 64×64-pixel patch in the center of the target. This reduces the potential for uneven lighting. I also compute the range of the reported meter readings where available, and use this to spot where I need to reshoot because of uneven lighting in the center patch.
 
SNR at the pixel level is the ratio between the signal and the noise. I believe that there are two main methods of measuring this.

The first is to take a series of images of a flat field generated by a calibrated light source. The signal (S) is then the average ADU value for the pixel across the raw files, and the noise (N) is the standard deviation of those pixel values. I believe this is the method defined in the ISO noise standard - I don't have a copy, so I am not sure about this. It is difficult to set up and measure.

The second is a simpler method that takes a single image of an evenly illuminated patch. One then simply averages the pixel values in the raw file and computes the standard deviation. This is the method used by DxOMark to produce their "Screen" SNR numbers. I have written software to do this myself and use it to characterize my cameras.
There is a third method that avoids the need for even illumination -- a condition which I have found difficult to achieve.
I use an EL panel mounted in a Lee filter holder. It is very close to even and constant, provided the power source is proper.

--
http://www.libraw.org/
 
Ed, my smartphone doesn't allow me to quote, sorry. You say the SNR at the pixel level is "inherent to the camera". True, in a way, but how relevant is that for a photo you look at? Even more, how relevant is it when comparing cameras? You would not only need to look at all of them in 100% views, and thus at largely different output sizes; you would even have to be able to distinguish pixel from pixel, and the images should not be demosaicked if that changes the SNR. After all that, I think DxOMark's "Print" numbers are a good estimate for realistically comparing sensors. The absolute numbers may be "wrong", but the deltas are OK, I think.

Gruß, masi1157
 
How helpful it would be to define what SNR one is talking about. SNR at the "pixel level"? Yes, of course: larger pixels achieve better SNR. SNR on photos viewed at equal output sizes from the same distance? Pixel size is rather irrelevant; the larger sensor wins.

Gruß, masi1157
Excellent point.
Which has been pointed out to you several times by me and others. Yet you continue to ignore it.
 
I found it interesting to read that, "ISO isn’t even an acronym, but a shortname for “International Organization for Standardization..."
As far as I know, this is correct. The order of the letters in the short name (ISO rather than IOS) differs from the full name; this was done on purpose as a kind of compromise between the different wordings of the name in different languages.
 
How helpful it would be to define what SNR one is talking about. SNR at the "pixel level"? Yes, of course: larger pixels achieve better SNR. SNR on photos viewed at equal output sizes from the same distance? Pixel size is rather irrelevant; the larger sensor wins.
Excellent point.
Which has been pointed out to you several times by me and others. Yet you continue to ignore it.
The idea perennially held by Ed - that a larger photosite-array size would not result in a higher SNR in the same manner that a larger individual photosite size leads to a higher (temporally derived) SNR - may result from an inability to recognize that it is spatially derived (as opposed to temporally derived) photosite data that is being measured when a "uniform field" is analyzed. An important distinction.

Rather than recognize and understand that the "total transduced light" of an image-sensor photosite-array within an exposure time determines the sensor-level SNR, he appears to be of a mind that RAW-level data is not "image-data" - that an "image" comes into being only once the RAW data is demosaicked, RGB-rendered, processed, and printed/displayed. On that (presentation) end of things, he attributes any and all changes in SNR as arising exclusively out of pixel-resampling normalizations.

The fact that pixel-resamplings affect the spatial frequencies of both the desired "signal" and the undesired "noise" notably appears to evade his thinking entirely. In a world where all "signals" are at zero spatial frequency (as is the case with a "uniform field" target), such things are blithely ignorable.

Everybody's models seem humbled by their insufficiency in the face of Nature's complexity when scenes/targets that are not "uniform fields" are analyzed. A (generally applicable) "Image Level SNR" evades us ...
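
The spatial-versus-temporal distinction can be illustrated with an idealized simulation; a sketch assuming pure shot noise and no fixed-pattern noise (under those assumptions the two estimates agree, whereas a real sensor with fixed-pattern noise would show a larger spatially derived spread):

import numpy as np

rng = np.random.default_rng(1)
# 64 simulated exposures of a uniform field, ~100 electrons per photosite
stack = rng.poisson(100.0, size=(64, 256, 256)).astype(np.float64)

spatial_std = stack[0].std(ddof=1)                # one frame, across photosites
temporal_std = stack.std(axis=0, ddof=1).mean()   # per photosite, across frames

print(spatial_std, temporal_std)  # nearly equal only in this idealized case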

DM
 
And actually I only repeated it because I was fooled by my new smartphone. It looked to me as if the discussion had ended at that point. It obviously hadn't, as I saw later. Sorry to restart it all from zero.

Gruß, masi1157
 
My understanding of the argument presented in this paper is that, while smaller pixels result in a smaller pixel-level SNR due to shot noise, the larger number of pixels allows you to resample and thus gain back the SNR.

I agree with that statement. Almost all transformations of the raw image result in a change in the SNR. The algorithm chosen for de-mosaicking will impact the final SNR. The amount of re-sampling to a lower or higher resolution will also change the SNR.

What I object to are statements to the effect that, because the SNR improves with de-mosaicking and re-sampling, the higher SNR must already be present in the original data, and exists because the larger sensor has more total light striking it. Or that, because I can down-sample and get the same SNR as larger pixels with lower resolution, larger pixels do not result in higher SNRs.

My viewpoint is that raw files are a collection of data points where each point represents the information from one pixel. The SNR in that file is necessarily pixel-level SNR. Manipulation of that information later in the process does not change that - it only changes the perceived SNR in the final image.

Stated another way, my view is that more pixels ALLOW one to use more down-sizing to gain SNR. But if I decide to crop instead, I do not get that gain in SNR. Neither of these post-processing decisions changes the SNR of the original raw file.
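
That position can be made concrete with a toy simulation; a sketch under idealized assumptions (pure shot noise, simple 2×2 binning): the binned image shows roughly double the per-pixel SNR, but the gain arises in the resampling step, while the underlying raw data is untouched.

import numpy as np

rng = np.random.default_rng(0)
mean_electrons = 100.0  # average signal per pixel, shot-noise limited

raw = rng.poisson(mean_electrons, size=(1024, 1024)).astype(np.float64)
snr_pixel = raw.mean() / raw.std(ddof=1)   # ~ sqrt(100) = 10

# 2x2 binning: each output pixel sums 4 input pixels.
# Signal x4, noise std x2 (variances add), so the SNR doubles to ~20.
binned = raw.reshape(512, 2, 512, 2).sum(axis=(1, 3))
snr_binned = binned.mean() / binned.std(ddof=1)

print(snr_pixel, snr_binned)  # ~10 and ~20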
 
