Do not miss: Understanding ISO and Your Camera's Sensor

I don't have facebook:

The section on ISO could be written better. Simply say that the scene, relative aperture, and exposure time determine the amount of light falling on the sensor, and the ISO setting on the camera determines how bright the image will appear. The noise is all about the amount of light falling on the sensor (and how good the sensor is), not the ISO setting.
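To make that concrete, here is a minimal sketch (hypothetical, unitless numbers) of the relationship the article ought to state: light on the sensor depends on scene luminance, relative aperture, and exposure time, while the ISO setting only scales the rendered brightness:

    def relative_sensor_light(scene_luminance, f_number, shutter_s):
        # Photometric exposure H is proportional to L * t / N^2:
        # scene luminance, exposure time, and relative aperture set the
        # light on the sensor; the ISO setting does not appear here.
        return scene_luminance * shutter_s / f_number ** 2

    h = relative_sensor_light(scene_luminance=1000, f_number=2.8, shutter_s=1 / 100)
    # Same light (same shot noise) at ISO 100 and ISO 800; only the
    # rendered brightness differs by the gain ratio:
    print((h * 800) / (h * 100))  # -> 8.0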

As for the section on pixel size vs. noise, well, that's just plain wrong. It's not the size of the pixel that determines the noise, but the amount of light falling on the sensor and the sensor tech. For example, the D810 and 5D3 will have [roughly] the same noise for the same exposure, because the same amount of light falls on their sensors, despite the D810 pixels being about half the size of the 5D3 pixels. That said, about 2.5x the light will fall on a FF sensor as on an APS-C sensor, for example, for the same exposure, which is why larger-sensor cameras tend to be less noisy.
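The ~2.5x figure is just the ratio of sensor areas; a quick check (Nikon DX dimensions assumed for the APS-C example):

    import math

    full_frame_area = 36.0 * 24.0   # mm^2
    aps_c_area = 23.5 * 15.6        # mm^2 (Nikon DX; Canon APS-C is a bit smaller)
    ratio = full_frame_area / aps_c_area
    print(round(ratio, 2))          # -> 2.36, i.e. roughly 2.5x the total light
    # In the shot-noise limit, SNR scales as the square root of the light:
    print(round(math.sqrt(ratio), 2))  # -> 1.54x SNR advantage at equal exposure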

Lastly, with regards to "When buying a new digital camera, favor those with larger sensors, especially if you shoot in a lot of low light because they will deliver superior image quality at higher ISO settings", well, there is a whole lot more to a camera than how much noise its photos have for a given exposure, and those other factors may leave a photographer better served by one camera than another.
 
As most know, I hold a contrary view about pixel size vs SNR.

I understand that downsizing will improve SNR. However, downsizing is something done in post-processing. I do not see how it impacts the SNR of the sensor.
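For reference, the post-processing effect conceded above is easy to demonstrate in isolation; a minimal numpy sketch with a simulated shot-noise-limited frame (hypothetical numbers) shows per-pixel SNR roughly doubling after a 2x2 average, entirely in post:

    import numpy as np

    rng = np.random.default_rng(0)
    frame = rng.poisson(100.0, size=(1000, 1000)).astype(float)  # shot noise only
    print(frame.mean() / frame.std())   # per-pixel SNR ~ sqrt(100) = 10

    # 2x2 downsample by averaging four samples per output pixel:
    down = frame.reshape(500, 2, 500, 2).mean(axis=(1, 3))
    print(down.mean() / down.std())     # ~ 20: SNR doubled (sqrt(4)), in post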

There are a lot of insistent and loud assertions here to the contrary. So to make sure I have not missed anything, I did a Google search on "noise camera pixel size" and went to the sites that had some level of authority:

Nikon – exposure triangle and larger pixels = better SNR

http://www.nikonusa.com/en/Learn-And-Explore/Article/g9mqnyb1/understanding-iso-sensitivity.html

Hasselblad manual - larger pixels = better SNR

https://books.google.com/books?id=_...epage&q=iso digital camera pixel size&f=false

Stanford paper - larger pixels = better SNR

http://white.stanford.edu/~brian/papers/pdc/pixelSize_SPIE00.pdf

Imatest - larger pixels = better SNR

http://www.imatest.com/docs/noise/

IEEE - larger pixels = better SNR

http://spectrum.ieee.org/geek-life/tools-toys/pixels-size-matters/0

Kodak - larger pixels = better SNR

http://www.kodak.com/US/en/digital/pdf/largePixels.pdf

International Society for Optics and Photonics (SPIE) - larger pixels = better SNR

http://spie.org/x102662.xml
 
Dear Ed,

Have you read the article?
Yes.
What do they say is the relation between the ISO setting and noise?
I believe your problem with the article is that ... it states that larger pixel size = better SNR.
No, that is not the problem. The problem is _how_ they say and exploit that.

Good authority does not actually count. So many things were held on good authority for centuries. Funny enough, those who were spreading misconceptions often knew all too well that they were deceiving the faithful; but it was a profitable business for them.

--
http://www.libraw.org/
 
As most know, I hold a contrary view about pixel size vs SNR.

... to make sure I have not missed anything, I did a Google search on "noise camera pixel size" and went to the sites that had some level of authority:

Stanford paper - larger pixels = better SNR

http://white.stanford.edu/~brian/papers/pdc/pixelSize_SPIE00.pdf
From the above-referenced paper (Section 2.1, PDF page 3, journal page 453), where the authors are clearly speaking very specifically about "pixel-level" temporal measurements:

DR increases roughly as the square root of pixel size, since both C and reset noise (kTC) increase approximately linearly with pixel size.

True only to the extent that the spectra of all internal readout/ADC noises are random in nature.

SNR also increases roughly as the square root of pixel size since the RMS shot noise increases as the square root of the signal. These curves demonstrate the advantages of choosing a large pixel.

True only to the extent that the Photon Shot Noise existing within the light itself dominates noise.
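The square-root behavior in both quotes is just shot-noise arithmetic; a tiny illustration with hypothetical electron counts:

    import math

    # Shot-noise-limited: signal S electrons, noise sqrt(S), so SNR = sqrt(S).
    # S scales with the collecting area A, hence SNR ~ sqrt(A).
    for area_ratio in (1, 2, 4):
        s = 1000 * area_ratio
        print(area_ratio, round(s / math.sqrt(s), 1))  # SNR grows as sqrt(area)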

.

I attempt (for the Nth time, it seems) to inspire you to squarely face the following "equivalence". In the quoted text from the Stanford paper above, I substitute every occurrence of the term "pixel" with the phrase "image-sensor active-area" ([bracketed] below to show the changes made):

DR increases roughly as the square root of [image-sensor active-area] size, since both C and reset noise (kTC) increase approximately linearly with [image-sensor active-area] size.

SNR also increases roughly as the square root of [image-sensor active-area] size since the RMS shot noise increases as the square root of the signal. These curves demonstrate the advantages of choosing a large [image-sensor active-area].

Remaining (for the moment) in the world of temporal noise measurements that such single-photosite analysis necessitates, explain to us why the above [substitutions] would in any way be different when applied to image-sensor active-area sizes of arrays of multiple photosites.
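To spell the equivalence out numerically (illustrative electron counts): aggregate N photosites and the same square-root law emerges for the array as a whole:

    import math

    s = 1000.0                  # electrons collected per photosite
    for n in (1, 4, 16):        # number of photosites aggregated
        total_signal = n * s
        total_noise = math.sqrt(total_signal)   # shot noise adds in quadrature
        print(n, round(total_signal / total_noise, 1))
    # Aggregate SNR scales as sqrt(N) - i.e. as the square root of the active
    # area, exactly as the single-pixel statement scaled with pixel size.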

.

Enter the spatial (inter-photosite measurement) domain:

With microlens array assemblies, 100% optical fill-factor is not an unreasonable assumption.

In the spatial domain (as opposed to the temporal domain - the only domain possible with single photosite analysis), explain to us why the above [substitutions] would in any way be different when applied to image-sensor active-area sizes of arrays of multiple photosites.

You have never responded to that query (previously specifically made to you several times) ...

If you cannot identify what differences demonstrably exist between "individual photosite size" and "image-sensor active-area sizes of arrays of multiple photosites" (as those phrases are used in the original quotes and in my modified versions above), then there exists no meaningful case that can be made whatsoever for any sort of unique "primacy" of single-photosite analysis.

What matters in analyzing image-sensor performance is spatial measurement (inter-photosite measurements performed over some given measurement time) - not temporal single-photosite measurements.

RAW photosite data relates to image-data formed from an image-sensor active-area consisting of an array of multiple photosites. It is not (in applications of interest) about individual photosite output - that is, unless we are discussing a single-photosite ("cyclops") imaging device using one photodetector.

DM
 
Looks like what is needed is some good people to get off the island and venture out into the real world and spread the word about photography according to science.
"Exposuristas" and their ilk have not only been banished here to "the island" - they appear to have infiltrated the "inner sanctum" of DPReview. Indeed, they seem to be lurking "behind every bush".
 
Looks like what is needed is some good people to get off the island and venture out into the real world and spread the word about photography according to science.
"Exposuristas" and their ilk have not only been banished here to "the island" - they appear to have infiltrated the "inner sanctum" of DPReview. Indeed, they seem to be lurking "behind every bush".
I thought they migrated here.

Now if you can just persuade Gollywop to make that into a viral, scratch that, a popular YouTube video.
 
Looks like what is needed is some good people to get off the island and venture out into the real world and spread the word about photography according to science.
"Exposuristas" and their ilk have not only been banished here to "the island" - they appear to have infiltrated the "inner sanctum" of DPReview. Indeed, they seem to be lurking "behind every bush".
I thought they migrated here.
There are eight million stories in the Naked City.
Now if you can just persuade Gollywop to make that into a viral, scratch that, a popular YouTube video.
"The Resolution Will Not Be Televised" ... :P

(The market-dominant) JPG-shooters are often unable to reliably determine sensor-level Exposure (by knowing that the status of the RAW channel ADC outputs is a high, yet linear, non-saturated one), even if they would like to. The same is true for RAW shooters (unless they determine reliable "calibration" methods for a specific camera under specific conditions).

It is common for JPG-shooters to have an encouraged faith that their camera metering systems will act effectively to prevent such conditions, while still maximizing sensor-level Exposure. Not so ...

Given the effects of the non-linear gamma correction (as well as compression in the non-linear RGB transfer-function "shoulders"), it may be that some JPG-shooters find themselves preferring to err on the side of lower (sensor-level) Exposure levels - reducing the loss of highlight detail within the in-camera encoded JPGs, while at the same time having the adverse effect of reducing (sensor-level) Signal/Noise Ratios.
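The highlight compression being referred to is visible in a simplified pure-power-law encode (real sRGB/JPG tone curves differ in detail):

    # Simplified gamma 1/2.2 encode: equal linear steps near the top of the
    # range map to ever-smaller encoded steps, i.e. highlights are compressed.
    for linear in (0.1, 0.5, 0.9, 1.0):
        print(linear, round(linear ** (1 / 2.2), 3))
    # 0.1 -> 0.351, 0.5 -> 0.73, 0.9 -> 0.953, 1.0 -> 1.0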

All shooters (JPG/RAW) can simply adjust F-Number (for minumum desired Depth of Field) and Shutter Speed (given camera/subject stabilities) prior to adjusting the camera's "ISO" setting value (to the lowest feasible ISO setting in ISO-invariant cameras) - but in order for users to effectively maximize Exposure, they require the benefits of reliable on-camera RAW channel level indicators - preferably in "preview" (in order to avoid having to "chimp" by first shooting, and then reviewing).
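A sketch of that priority order (hypothetical helper; the parameter names are mine, not from any camera API):

    def choose_settings(f_number_for_dof, slowest_safe_shutter, base_iso=100):
        # 1) f-number first, for the required depth of field;
        # 2) shutter speed next, within camera/subject stability limits;
        # 3) ISO last - keep it at base on an ISO-invariant sensor and
        #    brighten in post, raising it only if the rendering needs it.
        return f_number_for_dof, slowest_safe_shutter, base_iso

    print(choose_settings(f_number_for_dof=5.6, slowest_safe_shutter=1 / 125))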

DM
 
Thanks for the links--interesting, informative reading, most of which I hadn't seen before.
 
I don't often see much to interest me in this forum, but you've been on a roll lately providing links to helpful, informative articles. Thanks!
 
Looks like what is needed is some good people to get off the island and venture out into the real world and spread the word about photography according to science.
Before anything, cameras need to be optimized for shooting raw. Not only are they not, but most of the application/user-level programmers in the industry have no clue what the challenges are.

The same is true about shooting accessories. No color meter will tell you what filter you need to use to balance raw. No exposure meter will allow you to enter the dynamic range in a direct and easy way or, G-d forbid, to enter a table of DR depending on ISO setting. No studio lights or flashes are optimized for the spectral sensitivity of a typical sensor, and no filters are offered.
 
- but in order for users to effectively maximize Exposure, they require the benefits of reliable on-camera RAW channel level indicators - preferably in "preview" (in order to avoid having to "chimp" by first shooting, and then reviewing).
There is another way.

First, taking the lead from Adams, "calibrate" your "system". I do it by taking a series of images of a gray card, adjusting the exposure from above saturation to below the 0 dB noise point. I plot the resulting ADU vs. exposure and measure the number of stops from the exposure suggested by the internal meter. I do this for the various qualities of light (sunny, shade, tungsten) and for each lens I have. It turns out I need only nine numbers: three for each quality of light and each class of lens (modern Pentax, legacy Pentax, and other manufacturers).

To simplify matters I wrote software to uncompress the raw files, measure the average ADU values and the SNR, and then plot the values. The software also computes the number of stops from saturation to the meter reading. Saturation is where the standard deviation goes down. I identify the meter-suggested exposure by taking two images at that exposure; the software locates the images with duplicate exposures and uses them as the meter-suggested exposure.

The major limitation is the need to ensure that the light stays relatively constant during the test exposures. I measure and report the variation in LV reported by the camera so I can spot a bad run. Here is an example of the output:



[Attached image: example of the calibration software output (ADU and SNR vs. exposure)]



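A minimal sketch of the analysis step described above (assuming the raw files have already been decoded to arrays; the duplicate-exposure detection is reduced here to passing the metered frame's index):

    import numpy as np

    def analyze_series(frames, meter_index):
        # frames: 2-D arrays of ADU values from the gray-card exposure series.
        means = np.array([f.mean() for f in frames])
        stds = np.array([f.std() for f in frames])
        snrs = means / stds
        # Saturation shows up where the standard deviation turns back down
        # as exposure keeps increasing (clipped values have little variance):
        sat_index = int(np.argmax(stds))
        stops = np.log2(means[sat_index] / means[meter_index])
        return means, snrs, stops  # stops from meter reading to saturation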
Second, I shoot raw.

Third, for those scenes with troublesome subject brightness range, I use the internal meter in spot mode and determine the recommended exposure for the brightest area where I need to avoid saturation. I adjust that exposure with the calibration factor for the lens and light quality. For my Pentax K-3 and most lenses it is 2 2/3 to 3 2/3 stops depending on quality of light. This is also called "placing" in the zone system.
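In round numbers, the "placing" step works out like this (hypothetical spot reading; more exposure means a lower EV):

    # Spot-meter the brightest must-not-clip area, then apply the calibrated
    # headroom between the meter reading and raw saturation:
    spot_reading_ev = 14.0    # hypothetical highlight spot reading
    headroom_stops = 3.0      # calibrated for this lens and light quality
    shooting_ev = spot_reading_ev - headroom_stops
    print(shooting_ev)        # expose 3 stops *more* than the spot reading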

Finally, in post processing I adjust black level, brightness and contrast to suit the image.

In other words I "expose for the highlights and develop for the shadows".

--
Ed Hannon
 
Thanks for the links--interesting, informative reading, most of which I hadn't seen before.
Google is your friend :-)

You just have to learn how to use it. I developed the following doing research for work and school:
  • Try several combinations of key words about your question - in this case "noise camera pixel size".
  • Filter the responses based on the source and your goal. In this case I ignored blogs, personal web sites, advertising, etc. I concentrated on camera manufacturers (e.g. Nikon), universities (Stanford), testing companies (Imatest), etc., where I had some expectation of objective data.
 
