Common misconception with noise, pixel size and sensor size

Started Jun 19, 2016 | Discussions
silentstorm Senior Member • Posts: 1,348
Common misconception with noise, pixel size and sensor size
1

This thread hopes to address the common misconception regarding sensor size, pixel size & noise from a recent discussion on the m43 forum here. Do spare a few minutes & read through what I have to say to determine whether it is correct and makes sense. I'll try to be concise & simple, but this is still going to be long.

My argument is that a bigger sensor (eg. FF) does not in itself capture more light than a smaller sensor. It is the pixel size & its quantum efficiency (QE) that determine the output signal's quality or fidelity (Signal to Noise Ratio). QE describes how well a pixel converts photons to electrical charge: the bigger the pixel, the better the QE, the higher the signal & thus the higher the SNR.

Shot noise & dark current noise are pretty much fixed in a given sensor by the material used & the sensor design (eg. CCD, CMOS, JFET). Besides changing to different materials (eg. organic) & designs to improve QE, the other common method to improve QE is to increase the light/photons captured.

There are methods to increase the efficiency of capturing light into the pixels, eg. micro lens design, gapless micro lenses, bigger size, backside illumination, etc. Knowing so many variables can influence a given sensor's performance, I'll try to use equipment from the same production generation in my explanation. This proved harder than I thought, because the data available for older equipment is not complete or comprehensive enough for a comparison; eg. dpreview's test method for the Canon 40D & 1Ds3 differed even though they belong to the same generation: dpreview used Adobe NR off for the 1Ds3 but Canon DPP NR off for the 40D. Quick links to dpreview for the 40D and the 1Ds3.

As of this writing, we are looking at 2 common ways to improve QE for a lower-noise signal:

1) More light.

2) By increasing the pixel size.

Let's look at point no. 1 first.

Many people believe that the bigger the sensor, the more light it gathers. That is true if you are looking at things that reflect light, like tables or an open field. However, this is the wrong way of assessing a sensor, because a sensor's primary function is not to reflect light but to absorb & convert it.

This calls for a simple experiment, one that is repeatable & consistent. Use a plain wall or paper; it can be white or grey, it doesn't matter. Take 4 different lenses, namely a 100mm medium format lens, a 100mm FF lens, a 50mm FF lens and a 50mm 43 lens (you can optionally add a fifth, a 50mm APS-C lens). Look at the diagram I've done up. I chose Canon lenses because Canon makes both crop & FF cameras, and designs & produces its own sensors. Now set all lenses to F4; it's easier to keep things consistent this way.

Use any light meter & take a reading from each lens's output. Every reading is the same, or close, depending on whether you want to ignore transmissivity variance from lens tolerance. You won't find that the MF lens gives a brighter reading than the rest.

The meter reads the same output but many people swear that the bigger format lens will transmit more light. So what gives?

Time for another diagram.

The light intensity, or density, from all the lenses is the same. Notice only the image circles differ in size.

Many people mix up the size of the image circle with light intensity. In fact, these are 2 separate things. The image circle doesn't affect light intensity; it's an area of coverage. Light intensity is the brightness. This is why when you put a FF or MF lens on a crop-sensor 80D, the metering is the same as on its bigger brothers.

I repeat the confusion here... People mix up the way they assess light hitting a light-absorbing thing (a sensor) with light hitting a light-reflecting thing. The bigger image circle from the MF lens doesn't mean it transmits more light, because the principles behind calculating light reflected off a surface and light converted from one form to another are vastly different.

We use a sensor to capture an image because it converts light, not because it reflects light. Therefore the surface area should not be used to assess the QE of any given sensor.

For the people who still believe that a larger total surface means lower noise, they need to understand a simple fact: it is the human brain that chooses to sum up the total light falling on a given area, BUT the pixel doesn't know about area. It doesn't add up the total light, it doesn't even know there's a neighbouring pixel next to it, and it doesn't know how many pixels there are in total. It knows one thing only, ie. to convert photons into electrons. And this process happens independently, discretely, isolated within its well. The DSP processes each output signal individually into data bits too; that's why every pixel has its own unique data bits representing an output signal. Up to this point, neither the pixels nor the processors are adding up the light on the surface. The error is human, in insisting on totalling the light over a surface area to assess light-conversion efficiency.

Pixel no. 1 doesn't know pixel no. 2 exists. It doesn't know the total number of pixels. It doesn't know area. It doesn't share photons with next door. It certainly & absolutely doesn't add up the light from other pixels. It sits alone, isolated in its well, converting photons into electrons independently.

At this point, I would also like to highlight another misconception about lenses. Putting an MF lens on a smaller-sensor camera (FF, APS-C & 43) does not make it brighter. The amount of light passing through is controlled by the aperture in F-stops. These F-numbers are locked into a relationship between Focal Length (FL) & aperture diameter: the formula is FL divided by aperture diameter (FL/Dia). That is to say, for a given FL to transmit a certain amount of light, the aperture has to have a certain diameter.

Many people mix up Field Of View (FOV) with aperture F-stops when using lenses meant for a bigger format. It is very common to find people here insisting that by using a FF lens on a 43 camera, the lens becomes xx times brighter. This is simply not true. Again, FOV & F-stop are two separate things. You cannot throw the formula off balance by discarding one part of it & keeping the other, eg. "in order to keep the same FOV as a FF camera at 100mm, I use a 50mm lens but keep the aperture size unchanged, therefore the lens is brighter." It doesn't work that way.

As shown in the experiment above, the light at F4 is the same for all the lenses. If the 50mm FF Canon lens is put on a 43 camera, F4 doesn't become F2. Applying the formula, 50mm at F4 has an aperture diameter of 12.5mm. This diameter is exactly the same as on a native 43 lens of the same 50/4 spec. Remember, only the image circle changes; the light intensity does not.

If the 100mm lens's F4 aperture (25mm) were transplanted into the 50mm lenses, both the Canon FF & the native 43 lens would become F2; the Canon FF lens would not remain at F4 just because it is a FF lens. Being FF means having a bigger image circle than lenses for smaller sensors, but it still can't escape the formula & the physics.
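To make the arithmetic concrete, here's a minimal sketch of the FL/Dia formula with the same numbers as above (Python; purely illustrative):

```python
# A minimal sketch of the F-stop formula: N = FL / D.
def f_number(focal_length_mm: float, pupil_diameter_mm: float) -> float:
    return focal_length_mm / pupil_diameter_mm

def pupil_diameter(focal_length_mm: float, f_number: float) -> float:
    return focal_length_mm / f_number

# 50mm at F4 has a 12.5mm aperture diameter, on any format:
print(pupil_diameter(50, 4))   # 12.5
# Transplant the 100mm lens's F4 pupil (25mm) into a 50mm lens: F2.
print(f_number(50, 25))        # 2.0
```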

With that explained, let's look at point no. 2:

Bigger pixel size.

Pixels are the ones converting photons to electrons. Pixels are the ones that produce shot noise & dark current noise. Pixels are essentially what make up the image that the brain interprets.

A pixel's QE is fundamentally limited by its size; it is a locked relationship. The smaller the pixel, the lower the QE &, hence, the noisier the output signal.

To better illustrate my point, I took a quick look at the sensor manufacturer OmniVision. I chose 2 sensors that are currently in production; here is the data from their white papers. You could use other manufacturers like Sony or Toshiba or Samsung; it doesn't matter.

On the left is a 5MP sensor with a slightly bigger area. On the right is a 1MP sensor with a slightly smaller area. Both belong to the 1/4" size family. The 1MP sensor has a 3-micron pixel and shows better sensitivity & SNR than the 5MP sensor with its 1.4-micron pixel.

Those are raw data measured directly from the sensors. So how do our cameras fare? I tried to find more comprehensive data for comparing our cameras, but the data are incomplete. Great Bustard (Joseph) pointed me to a link to some sensor data (Sensorgen). I tried to find models close to each other to make a better comparison.

Looking at the Canon 30D and 1Ds3 (I know they are different generations & not exactly a good example), surprisingly the QE is not any worse. Click on the models & you can see both the 30D & 1Ds3 have 6.3-micron pixels. For people who still believe that a smaller sensor must be noisier, this is very telling.

Moving further back to the 10D, it fares worse despite its 7.2-micron pixels. With the progress in technology, this is to be expected; that's why I wanted to find models close together, and that proved challenging. From the 10D to the 30D, QE has improved. I actually expected the 1Ds3 to do much better than the old 30D with the same pixel size.

Maybe the 30D vs 1Ds3 doesn't show very well how pixel size affects QE. So let's turn to Sony, since they are like Canon: they have both APS-C & FF cameras and design & manufacture their own sensors.

A camera generation that displays the relationship between pixel size & QE very clearly is the A7/A7S/A7R. Looking at their numbers, I guess no further emphasis is required.

There's another source of sensor data, but it is again not comprehensive. Where Sensorgen lacks SNR in its data, Clarkvision lacks camera models. Still, we can see how pixel size affects SNR among the limited models.

The 7D has lower SNR than the 50D. The MP difference is small, only 3MP. Within the same generation of cameras, the slightly larger pixel shows a better SNR. Even though the 30D is one generation behind the 40D, its larger pixel still pulls ahead in SNR.

I'd like to clear up something before someone shouts "comparing 8MP with 21MP is BS!" Everything I'm saying is relative to what a sensor does. Things like upscaling, downsampling, normalizing, print size, viewing size, viewing distance, detail, noise reduction, etc. are not included. These actions are manipulations of data that come after what the sensor did. Such manipulations are based on individual preferences & agendas, and they are not something a sensor knows about or does. Remember, a sensor doesn't have thoughts. It doesn't know what a scene is or what detail is, and it doesn't sum up the light over a given surface area. A sensor has to be looked at for what it is & in its own context.

Another thing I don't do here is draw up an imaginary ideal situation. Joseph has an imaginary ideal situation in which a big sensor & a small sensor both have the same QE & resolution. The question is: which will produce the noisier image? His answer is that the smaller sensor will be noisier.

We know a noisier sensor means lower QE, and lower QE comes from smaller pixels. So how can a sensor be noisier yet high in QE? This problem exists because a result from reality is being forced into an imaginary ideal environment. That's why discussions like this are not my thing. If you want my answer, my answer would also be an imaginary one: they are the same, since the QE is the same, and one will not be noisier than the other.

Final words to end this long read. Just to highlight: from manufacturer white papers to what the brands say when they improve their sensors, they are correct in telling us that the essence of sensor quality comes from the pixels.

There's no mention of scene, upscaling, downsampling, printing, viewing distance, etc. Just plain, pure pixel illustrations.

Writing something this long is kind of tedious. It makes me appreciate the reviewers here & the reviews they put up all the more. Thank you!

Have a good day y'all. Feel free to exchange your pointers.

Eric Fossum
Eric Fossum Senior Member • Posts: 1,477
Re: Common misconception with noise, pixel size and sensor size
30

silentstorm wrote:

This thread hopes to address the common misconception regarding sensor size, pixel size & noise from a recent discussion ...

Shot noise & dark current noise are pretty much fixed in a given sensor by the material used & the sensor design (eg. CCD, CMOS, JFET).

JFET?  And not fixed btw.

There are methods to increase the efficiency of capturing light into the pixels, eg. micro lens design, gapless micro lenses, bigger size, backside illumination, etc. Knowing so many variables can influence a given sensor's performance,

Right on.

As of this writing, we are looking at 2 common ways to improve QE for a lower-noise signal:

1) More light.

More light does not affect QE.

2) By increasing the pixel size.

Pixel size does not essentially affect QE.

Are you sure you know what QE is?


A pixel's QE is fundamentally limited by its size; it is a locked relationship. The smaller the pixel, the lower the QE &, hence, the noisier the output signal.

Again, check the definition of QE.

To better illustrate my point, I took a quick look at the sensor manufacturer OmniVision. I chose 2 sensors that are currently in production; here is the data from their white papers. You could use other manufacturers like Sony or Toshiba or Samsung; it doesn't matter.

For people who still believe that a smaller sensor must be noisier, this is very telling.

Not really. Are you sure you understand noise?

We know a noisier sensor means lower QE, and lower QE comes from smaller pixels.

Wrong. Wrong.

Final words to end this long read. Just to highlight: from manufacturer white papers to what the brands say when they improve their sensors, they are correct in telling us that the essence of sensor quality comes from the pixels.

Look, you really ought to just delete your post.  Follow up by learning more about sensors and the definitions you are using incorrectly.  Then, try again.  This post just makes you look, well, like a beginner.  Nothing wrong with that, but a person has to know his/her limitations (Dirty Harry) and move slowly until you come up to speed.

Good luck. -EF

Eric Fossum's gear list:
Sony RX100 II Nikon Coolpix P1000 +1 more
gollywop
gollywop Veteran Member • Posts: 8,301
Thank you, Eric.
12

Eric Fossum wrote:

Look, you really ought to just delete your post. Follow up by learning more about sensors and the definitions you are using incorrectly. Then, try again. This post just makes you look, well, like a beginner. Nothing wrong with that, but a person has to know his/her limitations (Dirty Harry) and move slowly until you come up to speed.

Good luck. -EF

You've just saved a lot of people a lot of trouble. It was hard to know where to start and how far to go, but you did fine. It is also sad to see what must have been so much effort go for nothing.

-- hide signature --

gollywop
I am not a moderator or an official of dpr. My views do not represent, or necessarily reflect, those of dpr.
http://g4.img-dpreview.com/D8A95C7DB3724EC094214B212FB1F2AF.jpg

Mikael Risedal
Mikael Risedal Veteran Member • Posts: 4,625
Please silentstorm , study the subject first
4

and you can ask Eric questions right here

-- hide signature --

Member of Swedish Photographers Association since 1984
Canon, Hasselblad, Leica, Nikon, Linhof, Sinar, Zeiss, Sony. Phantom 4

OP silentstorm Senior Member • Posts: 1,348
Re: Common misconception with noise, pixel size and sensor size

Eric Fossum wrote:

silentstorm wrote:

This thread hopes to address the common misconception regarding sensor size, pixel size & noise from a recent discussion ...

Shot noise & dark current noise are pretty much fixed in a given sensor by the material used & the sensor design (eg. CCD, CMOS, JFET).

JFET? And not fixed btw.

Nikon has a JFET LBCAST sensor. I meant that the characteristics of the semiconductor used are fixed. I understand dark current noise goes up with temperature in a particular manner & that this manner doesn't change for a given material composition. I should have phrased it more accurately. Thanks.

There are methods to increase the efficiency of capturing light into the pixels, eg. micro lens design, gapless micro lenses, bigger size, backside illumination, etc. Knowing so many variables can influence a given sensor's performance,

Right on.

As of this writing, we are looking at 2 common ways to improve QE for a lower-noise signal:

1) More light.

More light does not affect QE.

I used QE too loosely here. I meant more light to improve the signal output under the fixed QE of a given sensor. Sorry about that.

2) By increasing the pixel size.

Pixel size does not essentially affect QE.

Are you sure you know what QE is?

A pixel's QE is fundamentally limited by its size; it is a locked relationship. The smaller the pixel, the lower the QE &, hence, the noisier the output signal.

Again, check the definition of QE.

Checked:

The "quantum efficiency" (Q.E.) is the ratio of the number of carriers collected by the solar cell to the number of photons of a given energy incident on the solar cell. Link.

From Clarkvision site:

...and the larger pixel simply enable collection of the increased light delivered by the lens

Further down the article:

...the larger pixels of the big camera collects so much more light per pixel that the image quality is much better than the small pixel camera

The ability to collect more light doesn't equate to higher QE. I got confused here, my bad. That means, as in the case of the Canon 10D, its big pixels have lower QE than the 30D's smaller pixels. Got it.

There are some things I don't agree with Clarkvision on.

Given two sensors with equal numbers of pixels, and each with lenses of the same f/ratio, the larger sensor collects more photons yet has the same spatial resolution.
The lens for the larger sensor would have a longer focal length in order to cover the same field of view is the system with the smaller sensor and the lens will also have a larger aperture diameter, thus collecting more light.

The lens is not collecting more light; the bigger diameter compensates for the light loss of a longer FL, so a bigger aperture is needed to match the intensity of a shorter-FL lens. Using the terms "collecting more light" and "smaller sensor" together creates the misconception that a FF lens passes more light than a 43 lens at the same F-stop. That's not true. More light coming through is achieved by having a bigger aperture, and this applies to big & small sensors alike.

I also see the same misconception here. He said the bigger pixel collects more rain & also that the larger sensor collects more photons. Calculating the total light over the area is not the same as what the pixel is doing; it doesn't add the light from the other pixels. I agree the bigger pixel collects more light, but the area is not doing the collecting. The collection is done at the pixel level, individually & in isolation from the other pixels.

To better illustrate my point, I took a quick look at the sensor manufacturer OmniVision. I chose 2 sensors that are currently in production; here is the data from their white papers. You could use other manufacturers like Sony or Toshiba or Samsung; it doesn't matter.

For people who still believe that a smaller sensor must be noisier, this is very telling.

Not really. Are you sure you understand noise?

We know a noisier sensor means lower QE, and lower QE comes from smaller pixels.

Wrong. Wrong.

That lower QE comes from smaller pixels was the mistake I sorted out. But can you explain why lower QE is not noisier? Let's say 2 sensors are identical in size & resolution, one having 30% QE & the other 63%. Given the same exposure of 10 lux, doesn't the sensor with the lower QE measure higher noise, since its signal output is lower & thus its SNR lower?
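For concreteness, a minimal sketch of the comparison I mean, assuming photon shot noise is the only noise source (the photon count is an arbitrary illustrative number):

```python
import math

# Shot-noise-limited case: signal = QE * photons (in photoelectrons),
# shot noise = sqrt(signal), so SNR = sqrt(QE * photons).
def snr(photons_per_pixel: float, qe: float) -> float:
    signal = photons_per_pixel * qe
    return signal / math.sqrt(signal)

N = 10_000                # same exposure -> same photons per pixel
print(snr(N, 0.30))       # ~54.8
print(snr(N, 0.63))       # ~79.4 -> higher QE gives the higher SNR
```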

Final words to end this long read. Just to highlight: from manufacturer white papers to what the brands say when they improve their sensors, they are correct in telling us that the essence of sensor quality comes from the pixels.

Look, you really ought to just delete your post. Follow up by learning more about sensors and the definitions you are using incorrectly. Then, try again. This post just makes you look, well, like a beginner. Nothing wrong with that, but a person has to know his/her limitations (Dirty Harry) and move slowly until you come up to speed.

Good luck. -EF

I've gotta keep this post up to play spot-the-mistake. When all the mistakes are sorted out, anyone can benefit from it.

Thanks Eric for your reply.

Detail Man
Detail Man Forum Pro • Posts: 17,180
(Image) Total Light Transduced / Angular Subject Area
2

... "a larger sensor" [does] "not collect more light over a given amount of detail. Only a larger lens can collect and deliver more light."

Source: http://www.clarkvision.com/articles/telephoto.system.performance/

"There are three factors that determine the true exposure in a camera + lens. 1) The lens area, or more accurately, the lens entrance pupil, which is the effective light collection area of a complex lens. The area determines how much light the lens collects to deliver to the sensor. 2) The angular area of the subject. The product of these two values is called Etendue, or A*Ω (A*Omega) product. (A= the lens entrance pupil area, and Ω, omega = the angular area of subject). The third value is 3) exposure time, the length of time the sensor is exposed to light."

Source: http://www.clarkvision.com/articles/low.light.photography.and.f-ratios/

Angular Area Approximation (considering any rectangle within the maximum field of view).

The more general (rectilinear) case: https://en.wikipedia.org/wiki/Steradian#Definition

.

The (Signal + Noise) / Noise Ratio ...

... which can only be a temporal measurement (from combining multiple samples over time when considered on the level of a single photosite) - as opposed to a spatial measurement (simultaneously combining multiple samples on the level of an array of photosites) ...

... varies by the square-root of the product of Signal+Noise multiplied by the Quantum Efficiency (of photon > electron transduction at some optical wavelength of measurement).
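Restated as a minimal sketch (Poisson statistics assumed; numbers purely illustrative):

```python
import math

# Temporal SNR at a single photosite: detected electrons are a Poisson
# count with mean QE * photons, whose standard deviation is its square
# root -- hence SNR = sqrt(QE * photons).
def photosite_snr(incident_photons: float, qe: float) -> float:
    return math.sqrt(incident_photons * qe)

print(photosite_snr(5000, 0.50))  # ~50.0
print(photosite_snr(5000, 0.25))  # ~35.4: half the QE, SNR down by sqrt(2)
```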

.

silentstorm wrote:

From Clarkvision site:

Given two sensors with equal numbers of pixels, and each with lenses of the same f/ratio, the larger sensor collects more photons yet has the same spatial resolution.
The lens for the larger sensor would have a longer focal length in order to cover the same field of view is the system with the smaller sensor and the lens will also have a larger aperture diameter, thus collecting more light. ...

... and the larger pixel simply enable collection of the increased light delivered by the lens

Eric Fossum
Eric Fossum Senior Member • Posts: 1,477
Re: Common misconception with noise, pixel size and sensor size
8

silentstorm wrote:

Eric Fossum wrote:

silentstorm wrote:

This thread hopes to address the common misconception regarding sensor size, pixel size & noise from a recent discussion ...

Shot noise & dark current noise are pretty much fixed in a given sensor by the material used & the sensor design (eg. CCD, CMOS, JFET).

JFET? And not fixed btw.

Nikon has a JFET LBCAST sensor.

I think this did not work well and has not been adopted in Nikon cameras for a while.

I meant that the characteristics of the semiconductor used are fixed.

Shot noise originates with the number of photons incident on the pixel. Really has little to do with the semiconductor material, except perhaps QE.

Dark current varies from design to design, from fab to fab, and even from pixel to pixel in the same array. Definitely not fixed.

I understand dark current noise goes up with temperature in a particular manner & that this manner doesn't change for a given material composition. I should have phrased it more accurately. Thanks.

Now you shot yourself in the foot. The rate of dark current increase with temperature depends totally on the bandgap of the material.

.

Again, check the definition of QE.

Checked:

The "quantum efficiency" (Q.E.) is the ratio of the number of carriers collected by the solar cell to the number of photons of a given energy incident on the solar cell. Link.

OK. Note it is the ratio.

From Clarkvision site:

...and the larger pixel simply enable collection of the increased light delivered by the lens

Further down the article:

...the larger pixels of the big camera collects so much more light per pixel that the image quality is much better than the small pixel camera

The ability to collect more light doesn't equate to higher QE. I got confused here, my bad. That means, as in the case of the Canon 10D, its big pixels have lower QE than the 30D's smaller pixels. Got it.

The QE (the ratio of carriers collected to photons in) does not depend on pixel size (to first order). Just think about the rain picture. Bigger bucket, same ratio.

.

That lower QE comes from smaller pixels was the mistake I sorted out. But can you explain why lower QE is not noisier? Let's say 2 sensors are identical in size & resolution, one having 30% QE & the other 63%. Given the same exposure of 10 lux, doesn't the sensor with the lower QE measure higher noise, since its signal output is lower & thus its SNR lower?

Don't confuse SNR with noise. Generally, more light, more noise. But the Signal to Noise ratio also goes up, so the fraction of noise in the signal goes down. People erroneously refer to that as lower noise, but technically the noise itself is higher.
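A minimal numeric illustration of this, assuming pure Poisson shot noise (noise equals the square root of the signal):

```python
import math

# More light -> more absolute noise, yet a higher SNR.
for signal_e in (100, 400, 1600):          # mean photoelectrons collected
    noise_e = math.sqrt(signal_e)          # shot noise, in electrons
    print(signal_e, noise_e, signal_e / noise_e)
# 100  10.0 10.0
# 400  20.0 20.0
# 1600 40.0 40.0   <- noise rises with light, but SNR rises too
```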

Final words to end this long read. Just to highlight: from manufacturer white papers to what the brands say when they improve their sensors, they are correct in telling us that the essence of sensor quality comes from the pixels.

Look, you really ought to just delete your post. Follow up by learning more about sensors and the definitions you are using incorrectly. Then, try again. This post just makes you look, well, like a beginner. Nothing wrong with that, but a person has to know his/her limitations (Dirty Harry) and move slowly until you come up to speed.

Good luck. -EF

I've gotta keep this post up to play spot-the-mistake. When all the mistakes are sorted out, anyone can benefit from it.

Thanks Eric for your reply.

Well, I think if you had started out saying you had some misconceptions that needed sorting out, this post would be more sensible.  Saying you were sorting out other people's misconceptions was a bit upside down.  So if you ask questions to people in this forum, you will get kind answers.  Just remember, you are a white belt among nth-degree black belts here.

Eric Fossum's gear list:
Sony RX100 II Nikon Coolpix P1000 +1 more
D Cox Forum Pro • Posts: 27,176
Re: Common misconception with noise, pixel size and sensor size

Eric Fossum wrote:

Well, I think if you had started out saying you had some misconceptions that needed sorting out, this post would be more sensible. Saying you were sorting out other people's misconceptions was a bit upside down. So if you ask questions to people in this forum, you will get kind answers. Just remember, you are a white belt among nth-degree black belts here.

Well, being originally a biologist rather than a semiconductor engineer, I probably don't have a belt at all.

My question to Eric is, why do you think larger sensors give better dynamic range and less noise?

It seems to me that everyone has a different untenable theory.

By "noise", I mean unwanted variation across an array of pixels in a still photo, not changes from frame to frame in a video.

D Cox's gear list:
Sigma fp
Eric Fossum
Eric Fossum Senior Member • Posts: 1,477
Re: Common misconception with noise, pixel size and sensor size
3

D Cox wrote:

Eric Fossum wrote:

Well, I think if you had started out saying you had some misconceptions that needed sorting out, this post would be more sensible. Saying you were sorting out other people's misconceptions was a bit upside down. So if you ask questions to people in this forum, you will get kind answers. Just remember, you are a white belt among nth-degree black belts here.

Well, being originally a biologist rather than a semiconductor engineer, I probably don't have a belt at all.

My question to Eric is, why do you think larger sensors give better dynamic range and less noise ?

Larger means more pixels of the same size (larger die size), or same size sensor (die size), larger and fewer pixels, or larger die size, same # of pixels (larger pixels)?  And do you mean DR on a pixel by pixel basis, or perception when viewing a screen image?  And do you mean for the same technology node (technology generation)?

(DR for sensor technologists is the ratio of maximum signal to signal that yields SNR=1)
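As an illustrative sketch of that definition (hypothetical numbers; the SNR=1 floor here is computed from shot noise plus a read-noise term):

```python
import math

# Engineering DR = full-well capacity / signal that yields SNR = 1.
# With read noise r and shot noise sqrt(S), SNR = 1 when
# S / sqrt(S + r^2) = 1, i.e. S = (1 + sqrt(1 + 4*r^2)) / 2.
def dr_stops(full_well_e: float, read_noise_e: float) -> float:
    s_floor = (1 + math.sqrt(1 + 4 * read_noise_e**2)) / 2
    return math.log2(full_well_e / s_floor)

print(dr_stops(80_000, 3.0))  # ~14.5 stops (illustrative numbers)
```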

It seems to me that everyone has a different untenable theory.

I think it is probably due to a loose definition of both "larger" and "dynamic range" and getting mixed up with comparing different technology generations and "perception" vs. measurement.  There is NO mystery when it comes to understanding how the sensor is performing.

By "noise", I mean unwanted variation across an array of pixels in a still photo, not changes from frame to frame in a video.

So, spatial noise vs. temporal noise.

 Eric Fossum's gear list:Eric Fossum's gear list
Sony RX100 II Nikon Coolpix P1000 +1 more
bobn2
bobn2 Forum Pro • Posts: 66,744
Re: Common misconception with noise, pixel size and sensor size
6

silentstorm wrote:

...

I've gotta keep this post up to play spot-the-mistake. When all the mistakes are sorted out, anyone can benefit from it.

Thanks Eric for your reply.

When playing this kind of game against people with the reputation of Eric, it's worthwhile taking a sober look at your own accomplishments. Let's just say, you haven't learned to dribble yet.

Really, you made so many basic mistakes in your OP that it's only worth commenting on the first. With respect to your suggestion that larger sensors don't gather more light: if you read up on exposure, you'll find it's measured in lux seconds. Find out what a lux is, and you'll see straight away that you are wrong.

-- hide signature --

Bob.
DARK IN HERE, ISN'T IT?

D Cox Forum Pro • Posts: 27,176
Re: Common misconception with noise, pixel size and sensor size

Eric Fossum wrote:

D Cox wrote:

Eric Fossum wrote:

Well, I think if you had started out saying you had some misconceptions that needed sorting out, this post would be more sensible. Saying you were sorting out other people's misconceptions was a bit upside down. So if you ask questions to people in this forum, you will get kind answers. Just remember, you are a white belt among nth-degree black belts here.

Well, being originally a biologist rather than a semiconductor engineer, I probably don't have a belt at all.

My question to Eric is, why do you think larger sensors give better dynamic range and less noise ?

Larger means more pixels of the same size (larger die size), or same size sensor (die size), larger and fewer pixels, or larger die size, same # of pixels (larger pixels)?

Let's consider the case of more pixels of the same size.

And do you mean DR on a pixel by pixel basis, or perception when viewing a screen image? And do you mean for the same technology node (technology generation)?

As I don't do video, I'm interested only in spatial noise, so that would be DR with (at the dark end) SNR measured across an area of the sensor. At the bright end, we are dealing with full wells, which seem fairly straightforward.

(DR for sensor technologists is the ratio of maximum signal to signal that yields SNR=1)

It seems to me that everyone has a different untenable theory.

I think it is probably due to a loose definition of both "larger" and "dynamic range" and getting mixed up with comparing different technology generations and "perception" vs. measurement. There is NO mystery when it comes to understanding how the sensor is performing.

And yet there is a mystery. Images from larger sensors do have smoother gradation of tones and colours, and the system is more robust in very dim light.

To put it another way, why do professional photographers find it worthwhile to pay the very high prices of "medium format" digital cameras?

(Much the same applied to film, but perhaps for different reasons.)

By "noise", I mean unwanted variation across an array of pixels in a still photo, not changes from frame to frame in a video.

So, spatial noise vs. temporal noise.

Definitely spatial noise. There is no temporal noise in a still photograph.

D Cox's gear list:
Sigma fp
gollywop
gollywop Veteran Member • Posts: 8,301
Re: Common misconception with noise, pixel size and sensor size
11

Eric only commented on the numerous errors you've made regarding sensors and sensor behavior. You should also engage in a major revision of your notions about light. Be aware that those equal-valued measurements you're referring to are equal lux values, and lux is a per-unit-area measure (lumens per square-meter). This, I fear, undermines your entire "equal light" thesis – as well it should, because it's flat-out wrong.
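To make that concrete, here is a minimal sketch (sensor areas approximate; purely illustrative):

```python
# Lux is lumens per square metre -- a density. Equal meter readings mean
# equal density on the sensor, but the *total* light received is
# density x area x time, so the larger sensor collects more in total.
SENSOR_AREA_M2 = {"full frame": 864e-6, "four thirds": 225e-6}  # approx.

def total_light_lumen_seconds(lux: float, area_m2: float, t_s: float) -> float:
    return lux * area_m2 * t_s

for fmt, area in SENSOR_AREA_M2.items():
    print(fmt, total_light_lumen_seconds(100, area, 0.01))
# full frame receives ~3.8x the four-thirds total at the same lux
```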

-- hide signature --

gollywop
I am not a moderator or an official of dpr. My views do not represent, or necessarily reflect, those of dpr.
http://g4.img-dpreview.com/D8A95C7DB3724EC094214B212FB1F2AF.jpg

Jack Hogan Veteran Member • Posts: 7,538
Re: Common misconception with noise, pixel size and sensor size
2

D Cox wrote:  And yet there is a mystery. Images from larger sensors do have smoother gradation of tones and colours, and the system is more robust in very dim light.

Hi David,

The mystery tends to disappear in a puff of smoke once one specifies all variables clearly.

To put it another way, why do professional photographers find it worthwhile to pay the very high prices of "medium format" digital cameras?

The larger the format, often the more the options at the photographer's disposal, all else somewhat equal.  Oh, and better spatial resolution in its various incarnations.

Jack

bobn2
bobn2 Forum Pro • Posts: 66,744
Re: Common misconception with noise, pixel size and sensor size

D Cox wrote:

By "noise", I mean unwanted variation across an array of pixels in a still photo, not changes from frame to frame in a video.

So, spatial noise vs. temporal noise.

Definitely spatial noise. There is no temporal noise in a still photograph.

There is not very much distinction. Photon arrival is random in all four dimensions in space-time. In photography, we don't generally distinguish between exposure aggregated over space or time; that is, we'd say that f/2 at 1/250 is the same exposure as f/2.8 at 1/125.
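A minimal sketch of that equivalence (relative exposure only; lens transmission ignored):

```python
# At fixed scene luminance, exposure in lux-seconds scales as t / N^2.
def relative_exposure(f_number: float, shutter_s: float) -> float:
    return shutter_s / f_number**2

print(relative_exposure(2.0, 1 / 250))  # 0.001
print(relative_exposure(2.8, 1 / 125))  # ~0.00102, equal to f-stop rounding
```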

-- hide signature --

Bob.
DARK IN HERE, ISN'T IT?

Eric Fossum
Eric Fossum Senior Member • Posts: 1,477
Re: Common misconception with noise, pixel size and sensor size
2

D Cox wrote:

Eric Fossum wrote:

D Cox wrote:

My question to Eric is, why do you think larger sensors give better dynamic range and less noise ?

Let's consider the case of more pixels of the same size.

As I don't do video, I'm interested only in spatial noise, so that would be DR with (at the dark end) SNR measured across an area of the sensor. At the bright end, we are dealing with full wells, which seem fairly straightforward.

(DR for sensor technologists is the ratio of maximum signal to signal that yields SNR=1)

It seems to me that everyone has a different untenable theory.

I think it is probably due to a loose definition of both "larger" and "dynamic range" and getting mixed up with comparing different technology generations and "perception" vs. measurement. There is NO mystery when it comes to understanding how the sensor is performing.

And yet there is a mystery. Images from larger sensors do have smoother gradation of tones and colours, and the system is more robust in very dim light.

To put it another way, why do professional photographers find it worthwhile to pay the very high prices of "medium format" digital cameras?

(Much the same applied to film, but perhaps for different reasons.)

By "noise", I mean unwanted variation across an array of pixels in a still photo, not changes from frame to frame in a video.

So, spatial noise vs. temporal noise.

Definitely spatial noise. There is no temporal noise in a still photograph.

Well yes and no. We sometimes call it "frozen noise" - when the spatial noise is dominated by what was temporal noise - like photon shot noise manifested as spatial noise in an image of a clear blue sky.  There is also PRNU which is non-temporal but may be signal dependent.

So let's assume the low end is where there is no signal and the residual spatial variation is the same image to image.

I guess you are putting more photons, in total, on the larger sensor, and the photons per pixel is the same, between the larger and smaller sensor.  And, the iFOV is the same for the larger and smaller sensor?  Or the FOV is the same?

If the iFOV is the same, then really the smaller sensor is just seeing a cropped FOV of the larger sensor, and I don't believe any of your claims, at least not at the sensor level or RAW image level.   If the FOV is the same, then of course the spatial resolution is better in the larger sensor and this can result in many benefits.  In the same size print or display image, spatial noise gets more suppressed by human perception - and is why more pixels in the same size sensor often looks better.

Generally though, medium format cameras often have larger pixel sizes than compact cameras and perhaps also higher pixel count. So this is a win on several fronts, except for your wallet.

I am sorry but I have to leave this conversation for most of the rest of the week due to travel.  If I can pop in I will.

Eric Fossum's gear list:
Sony RX100 II Nikon Coolpix P1000 +1 more
Mikael Risedal
Mikael Risedal Veteran Member • Posts: 4,625
to silentstorm
1

You can still ask questions even if Eric is traveling, so you get things right.

-- hide signature --

Member of Swedish Photographers Association since 1984
Canon, Hasselblad, Leica, Nikon, Linhof, Sinar, Zeiss, Sony. Phantom 4

tony field Forum Pro • Posts: 10,937
Re: Common misconception with noise, pixel size and sensor size

Good Gosh!!! Power to the Pixel - down with photographs!!! Must be related to MBP (or at least his acolyte). We will have another 18 months of entertainment reruns.

-- hide signature --

Charles Darwin: "ignorance more frequently begets confidence than does knowledge."
tony
http://www.tphoto.ca

Mikael Risedal
Mikael Risedal Veteran Member • Posts: 4,625
Re: Common misconception with noise, pixel size and sensor size

bobn2 wrote:

...

Really, you made so many basic mistakes in your OP that it's only worth commenting on the first. With respect to your suggestion that larger sensors don't gather more light: if you read up on exposure, you'll find it's measured in lux seconds. Find out what a lux is, and you'll see straight away that you are wrong.

Take it easy, Bob. It reminds me of 10-12 years ago, when I and others read Clark's explanations as the bible.

Today we know a lot more thanks to a number of knowledgeable people here at dpreview, and it was John Sheehy and others who killed the myths around Clark's explanations. Read: Emil Martinec (and also you, BobN2), The_Suede, Gaborek, Wishnowski, John Sheehy and many, many more. And having free access to Eric Fossum's knowledge is worth gold.

-- hide signature --

Member of Swedish Photographers Association since 1984
Canon, Hasselblad, Leica, Nikon, Linhof, Sinar, Zeiss, Sony. Phantom 4

Mikael Risedal
Mikael Risedal Veteran Member • Posts: 4,625
Re: Common misconception with noise, pixel size and sensor size

Eric Fossum wrote:

...

I am sorry but I have to leave this conversation for most of the rest of the week due to travel. If I can pop in I will.

Please do "pop in".

-- hide signature --

Member of Swedish Photographers Association since 1984
Canon, Hasselblad, Leica, Nikon, Linhof, Sinar, Zeiss, Sony. Phantom 4

Great Bustard Forum Pro • Posts: 44,818
OK, I'll have a play.
2

silentstorm wrote:

This thread hopes to address the common misconception regarding sensor size, pixel size & noise from a recent discussion on the m43 forum here. Do spare a few minutes & read through what I have to say to determine whether it is correct and makes sense. I'll try to be concise & simple, but this is still going to be long.

My argument is that a bigger sensor (eg. FF) does not in itself capture more light than a smaller sensor. It is the pixel size & its quantum efficiency (QE) that determine the output signal's quality or fidelity (Signal to Noise Ratio). QE describes how well a pixel converts photons to electrical charge: the bigger the pixel, the better the QE, the higher the signal & thus the higher the SNR.

.

.

.

Have a good day y'all. Feel free to exchange your pointers.

Please excuse me for not addressing all the points individually. Basically, what it comes down to is that the more light a photo is made from, the less noisy it will be. There are exceptions, however. For example, if the photo is made with very little light, so that the electronic noise (the noise from the sensor and supporting hardware) is dominant, then it is possible that a photo made with less light on a camera that has less electronic noise may be less noisy than a photo made with more light on a camera that has more electronic noise. This exception is most clearly seen in base ISO photos that largely consist of strongly pushed shadows, or at "insanely" high ISO settings (e.g. ISO 51200+).

However, for the vast, vast, vast majority of photos (and maybe even a few more "vasts" in there would be closer to the mark), the photo made with more light will be less noisy.

Now, the amount of light a photo is made from begins with the amount of light that falls on the sensor, where more light falls on the sensor for larger sensor systems for the same exposure (but the same amount of light falls on the sensor for photos of the same scene at the same DOF with the same exposure time).

Not all the light falling on the sensor is recorded, however. First of all, we have a lot of light lost by the very nature of the Bayer CFA (RGGB Color Filter Array) where the dyes above the pixels intentionally block out a great deal of the light. Aside from that, we have the QE of the sensor, which is the proportion of light making it through the CFA that gets recorded. For example, a QE of 50% (which is very close to the actual QE for most modern sensors, although BSI sensors hover around 70%) means that half of the light passing through the CFA onto the sensor is recorded.

So, if 33% of the light falling on a Bayer CFA makes it through the color filters and falls onto pixels with a QE of 50%, then the "real" QE is 33% x 50% = 17%.

The question, then, is how the electronic noise compares to the photon noise (the noise from the light itself). The answer is: very, very, very little until the light gets very, very, very low. Consider the Nikon D750. The electronic noise varies from 5.5 electrons/pixel to 2.1 electrons/pixel, depending on the ISO setting, whereas saturation ranges from 81608 photoelectrons down to 175 photoelectrons at its highest, "insane" ISO setting.

For example, at ISO 51200, the light would have to be so low that only 8 photons made it through the CFA onto a pixel with a 50% QE before the electronic noise matched the photon noise, which is about 5.5 stops below the saturation limit of the pixel at that ISO setting, keeping in mind that the per-pixel DR of the D750 using the electronic noise as the noise floor is 6.4 stops. Thus, even at ISO 51200, electronic noise is still only dominant in the shadows (but much more of the shadows than at lower ISO settings).
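A quick check of those numbers, as a sketch using the QE and read-noise figures quoted above:

```python
import math

# Crossover point where photon (shot) noise falls to the electronic
# noise level, using the figures above: QE ~50%, read noise ~2.1 e-.
qe, read_noise_e = 0.5, 2.1
photons_through_cfa = 8
signal_e = photons_through_cfa * qe        # 4 photoelectrons
shot_noise_e = math.sqrt(signal_e)         # 2.0 e-
print(shot_noise_e, read_noise_e)          # ~2.0 vs 2.1: about equal
```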

Next up is the role of the pixel count. The *only* reason the pixel count matters in terms of noise is that, for a given proportion of a photo, the electronic noise of more smaller pixels tends to be greater than for fewer larger pixels (QE, however, is unaffected). Thus, at insanely high ISO settings (or with heavy shadow pushing at base ISO), we will notice that sensors of the same generation and size with more pixels tend to be more noisy (explained in detail here), but this greater noise will be insignificant unless shadows are heavily pushed at base ISO or we are shooting at very high ISO settings.
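Here is a minimal sketch of why that tends to hold, assuming equal per-pixel read noise (the pessimistic case; in practice smaller pixels often have somewhat lower per-pixel read noise):

```python
import math

# Independent per-pixel read noise adds in quadrature over a fixed
# patch of the photo: patch noise = r * sqrt(pixels in the patch).
def patch_read_noise(read_noise_per_pixel_e: float, n_pixels: int) -> float:
    return read_noise_per_pixel_e * math.sqrt(n_pixels)

print(patch_read_noise(3.0, 4))    # 6.0 e-  (4 large pixels)
print(patch_read_noise(3.0, 16))   # 12.0 e- (16 small pixels, same patch)
```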

However, we need to consider the role of noise filtering. If noise filtering is [intelligently] applied, either in the RAW conversion, in the camera's jpg engine, or in PP (post processing), the overall noise/detail balance will tend to favor more pixels, as wonderfully demonstrated here.

Of course, if we compare pixel-for-pixel instead of the same proportion of the photo viewed at the same size from the same distance, then we might be deceived into thinking that more smaller pixels are more noisy than fewer larger pixels. However, that's the same fallacy that people make when thinking that their lenses become less sharp when their images are recorded by sensors with more pixels.

Now, there are other situations, such as thermal noise from long exposures, where we may find that one sensor produces less noisy photos than another despite recording more light, and this is a very important consideration, indeed, for people who take long exposures.

But, for the most part, it really is as simple as saying that the more light a photo is made from, the less noisy it will be, that larger sensors of a given generation record more light than smaller sensors in proportion to the ratio of the sensor areas for a given exposure, and that the more pixels the photo is made from, the higher the quality the photo will be.
