When will Canon focus on IQ instead of MP!?

But to say that a smaller-pixel image doesn't start off with more noise is simply false. It starts off with more detail, and more noise. One is free to eliminate both by resampling, or to keep much of the added detail and remove much of the added noise with more sophisticated forms of filtering.
I prefer the point of view that the smaller pixel image captures more detail, and the photon shot noise is part of that detail. Capturing it is part and parcel of capturing the detail, and it isn't something 'added' by the sensor.
The shot noise is the result of a limited sample of photons. It is a counting fluctuation relative to the long-time average rate of photon arrival, integrated over the exposure time (assuming a static scene). In a tonally uniform area of the image, and at reasonable photon counts (say, more than a hundred), it is to a good approximation white noise. It is not "detail" in any useful sense, since, as you note below, these fluctuations are mitigated by capturing more photons, while the actual detail remains the same.
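To put numbers on that, here is a quick sketch (Python with NumPy; the mean counts are arbitrary illustrations) of how the relative fluctuation falls as 1/sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(0)

# Photon arrival in a tonally uniform patch is Poisson-distributed:
# std = sqrt(mean), so the *relative* fluctuation falls as 1/sqrt(N).
for mean_photons in (100, 1_000, 10_000):
    counts = rng.poisson(mean_photons, size=100_000)
    rel_noise = counts.std() / counts.mean()
    print(f"mean={mean_photons:6d}  relative noise={rel_noise:.4f}"
          f"  1/sqrt(N)={1 / np.sqrt(mean_photons):.4f}")
```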
Of course it's not detail in any useful sense. It's detail in a pain-in-the-a*se sense, but it is there in the image projected on the sensor; it does not originate in the sensor. It is the consequence of trying to make an image out of whatever number of photons you decide to use. I think you observed elsewhere that the best way to approach picture taking is to determine the exposure for the photographic capture first, and then choose the capture conditions to optimise for that choice. The problem with a large-pixel sensor is that it predetermines your choice of capture conditions in a way that a small-pixel sensor doesn't.

If that's a bit abstruse for some people, what it means in practice is this: if there is plenty of light available and you have a pixel-dense camera, you can choose to preserve all the detail. If there's less light available, you can choose to jettison the detail along with the noise. With a pixel-starved camera, the former choice is denied you.
 
Which physics are you thinking of? If it's the old one about the standard deviation of shot noise in a pixel being the square root of the number of photoelectrons captured, and therefore big pixels 'collect more light' and therefore 'have less noise', then I can take you through the physics that shows that when you aggregate the smaller pixels together to cover the same area as the bigger pixel, the noise is exactly the same. Or did you have some other 'physics' in mind?
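Here is a minimal sketch of that aggregation argument (Python/NumPy; the photon count is an arbitrary illustration): summing four small pixels that cover the same area as one big pixel yields statistically identical shot noise, since a sum of independent Poisson variables is itself Poisson.

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 200_000
big_mean = 4_000  # illustrative mean photon count for one big pixel

# One big pixel vs. four small pixels covering the same area:
# each small pixel receives a quarter of the light on average.
big = rng.poisson(big_mean, size=trials)
small_sum = rng.poisson(big_mean / 4, size=(trials, 4)).sum(axis=1)

for label, x in (("one big pixel", big), ("4 small, summed", small_sum)):
    print(f"{label:16s} mean={x.mean():8.1f}  std={x.std():6.1f}"
          f"  SNR={x.mean() / x.std():5.1f}")
```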
You are not considering one very important detail.

The shot noise is the same only for 100% fill factor sensors, where the entire surface of the sensor is collecting light.

If that's the case, changing the pixel size does not indeed change the overall shot noise because the sensor still collects the same number of photons (note that this applies to shot noise only).

However, no DSLR sensor has a 100% fill factor (fill factor is the photodiode area vs pixel area ratio).

The photodiode shares the pixel real estate with the on-pixel circuitry - typically 3-4 transistors.
When you shrink the pixels, the on-pixel circuitry stays largely the same.

As a result, the photodiode shrinkage is much bigger than the pixel shrinkage.

For example, shrinking the pixel area by 10% might result in a 50% shrinkage of the photodiode area.

Thus, shrinking the pixels on a normal DSLR sensor (one that does not have 100% fill factor) actually reduces the number of total collected photons - and hence increases shot noise.
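For what it's worth, the arithmetic of this claim under its own assumption, i.e. a circuitry area that does not shrink with the pixel, looks like this (Python; all areas are invented for illustration, and as the replies below argue, real designs with microlenses may not behave this way):

```python
# Fill-factor arithmetic under the fixed-circuitry assumption above.
# All numbers are invented for illustration.
pixel_area = 36.0     # um^2, original pixel
circuit_area = 16.0   # um^2, assumed NOT to shrink with the pixel
photodiode = pixel_area - circuit_area              # 20.0 um^2

shrunk_pixel = pixel_area * 0.9                     # pixel area down 10%
shrunk_photodiode = shrunk_pixel - circuit_area     # 16.4 um^2

print(f"pixel area loss      : {1 - shrunk_pixel / pixel_area:.0%}")
print(f"photodiode area loss : {1 - shrunk_photodiode / photodiode:.0%}")
# Under these numbers a 10% pixel shrink costs 18% of the photodiode;
# reaching the 50% figure quoted above would need a far larger overhead.
```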

It is a common myth that the number/size of pixels does not matter and only the sensor size matters.

This is incorrect, since the myth does not take into account the loss of photodiode area (and hence the loss of counted electrons) on sensors with less than 100% fill factor.

Note that this refers only to shot noise and does not even take into account the broader impact of pixel size on other sensor parameters.
 
There is no such thing as even light. Light is like vibrating raindrops of individual photons.
Precisely.

That's why smaller pixels are more prone to photon-counting errors (aka noise) than larger ones.
You really are clutching at straws and you have managed to get it precisely back to front.
Sorry. You got it backwards. It goes like this:
weak signal -> smaller SNR -> more noise (relatively)

This noise can then be amplified by the circuitry.
 
I tried out a D3s alongside my Canon equipment a few weeks ago... to be honest, I was simply blown away by the quality of the images from the D3s. So much so that I bought one the next day. ISO 6400 looks like what I'm used to at ISO 800. Color, black tones, grain-like structure, lack of noise, detail, etc. were the best I've ever seen from a DSLR. Even the AF was unbelievably accurate. I really like Canon equipment, but Nikon really got this camera right.
Nikon has indeed had the better sensors recently. Most important for me is that they have less banding pattern noise. But with the 7D, Canon too finally provides decent pattern-noise suppression.
I read (today) the new review at diglloyd.com (a pay review site), and the examples Lloyd delivers for the d3s look gorgeous. They include ISOs from 3200 to 12800, and both the noise (also the noise quality) and, even more than that, the colour at high ISO are incredible. The pictures look so vibrant and clean that it is hard to believe they were taken at those extreme ISOs.

Lloyd also concluded that the d3s is the best high-ISO camera available as of now. It is worth having a look at it.

bernie
 
There is no such thing as even light. Light is like vibrating raindrops of individual photons.
Precisely.

That's why smaller pixels are more prone to photon-counting errors (aka noise) than larger ones.
You really are clutching at straws and you have managed to get it precisely back to front.
Sorry. You got it backwards. It goes like this:
weak signal -> smaller SNR -> more noise (relatively)
No, it doesn't. That's the kiddies' version, for people who don't understand electronics. If you do, you understand that what we've got in the pixel is charge (electrons). That charge needs to be translated into a voltage to be read. This is done by the gate capacitance of the source follower transistor, according to the formula V = Q/C. Q is smaller for a small pixel, but so is C, precisely in proportion if the pixel is uniformly scaled (both go as the area). So the output voltage (signal 'strength' in your terminology) is exactly the same for a small or large pixel. Put it another way: the small pixel contains less charge, but it is a much more sensitive charge detector.
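To make the V = Q/C scaling concrete, a back-of-envelope sketch (Python; the charge and capacitance densities are invented, only the proportionality matters): scale the pixel area and Q and C move together, leaving the sense voltage unchanged.

```python
E = 1.602e-19  # electron charge in coulombs

def sense_voltage(area_um2, electrons_per_um2, cap_fF_per_um2):
    """V = Q/C, with both charge and capacitance proportional to area."""
    q = area_um2 * electrons_per_um2 * E        # collected charge (C)
    c = area_um2 * cap_fF_per_um2 * 1e-15       # gate capacitance (F)
    return q / c

# Invented densities: 100 e-/um^2 of exposure, 0.1 fF/um^2 of gate cap.
for area in (4.0, 16.0, 36.0):  # small, medium, large pixel
    v_mV = sense_voltage(area, 100, 0.1) * 1e3
    print(f"pixel area {area:4.0f} um^2 -> output signal {v_mV:.1f} mV")
```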
This noise can then be amplified by the circuitry.
No extra amplification, no extra noise.

As I said, it's you who has it back to front. And you haven't countered any of the reasons I gave why small pixels 'count photons' better.
 
As I said, it's you who has it back to front. And you haven't countered any of the reasons I gave why small pixels 'count photons' better.
By the very definition of SNR (signal-to-noise ratio), a weak signal means more noise (relatively).

In order to have less noise, you need a strong photoelectric signal.

Sleep on it.
 
But I will ask the question - PROVE THAT IQ of 5D MKII IS BAD!!!
Can someone do it?
No one is saying that the IQ of the 5D mkII is bad. The discussion is basically about how much better the IQ of the 5D mkII could be if it had a lower pixel density together with all the technological advances it needed to make up for the higher pixel density that causes issues. Think how much better it could be with the same pixel density as the older 1Ds mkII...
Canon has never been able to improve pixel-level read noise at low ISOs; it has been stuck at the same level for several years now. There is no reason to believe that using bigger pixels would solve this DR issue: with Canon's pixel-level base-ISO read noise "stuck" where it is, the more pixels, the higher the DR, as more of them integrate for a reduced image-level read noise.
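A rough sketch of that integration point (Python; the pixel-level DR figure is invented): if every pixel has the same pixel-level DR regardless of size, averaging N pixels into one output pixel adds signal N-fold while independent read noise adds in quadrature, so image-level DR grows by half a stop per doubling of pixel count.

```python
import numpy as np

# Invented figure: every pixel has the same pixel-level DR ("stuck" read
# noise relative to full well), regardless of pixel size.
pixel_dr_stops = 11.0

# Binning N pixels: signal x N, independent read noise x sqrt(N),
# so image-level DR gains sqrt(N), i.e. 0.5 * log2(N) stops.
for n in (1, 2, 4, 8, 16):
    image_dr = pixel_dr_stops + 0.5 * np.log2(n)
    print(f"{n:2d} pixels per output pixel -> image DR = {image_dr:.1f} stops")
```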

The problem with the 5D2 has nothing to do with pixel density. It has to do with the sloppy, noisy electronics that give banding, and random noise higher than what the A900 and D3X are giving.

--
John

 
Which physics are you thinking of? If it's the old one about the standard deviation of shot noise in a pixel being the square root of the number of photoelectrons captured, and therefore big pixels 'collect more light' and therefore 'have less noise', then I can take you through the physics that shows that when you aggregate the smaller pixels together to cover the same area as the bigger pixel, the noise is exactly the same. Or did you have some other 'physics' in mind?
You are not considering one very important detail.

The shot noise is the same only for 100% fill factor sensors, where the entire surface of the sensor is collecting light.

If that's the case, changing the pixel size does not indeed change the overall shot noise because the sensor still collects the same number of photons (note that this applies to shot noise only).

However, no DSLR sensor has a 100% fill factor (fill factor is the photodiode area vs pixel area ratio).

The photodiode shares the pixel real estate with the on-pixel circuitry - typically 3-4 transistors.
When you shrink the pixels, the on-pixel circuitry stays largely the same.

As a result, the photodiode shrinkage is much bigger than the pixel shrinkage.

For example, shrinking the pixel area by 10% might result in a 50% shrinkage of the photodiode area.

Thus, shrinking the pixels on a normal DSLR sensor (one that does not have 100% fill factor) actually reduces the number of total collected photons - and hence increases shot noise.

It is a common myth that the number/size of pixels does not matter and only the sensor size matters.

This is incorrect, since the myth does not take into account the loss of photodiode area (and hence the loss of counted electrons) on sensors with less than 100% fill factor.

Note that this refers only to shot noise and does not even take into account the broader impact of pixel size on other sensor parameters.
Twice the pixel count on a given sensor size will result in 41% higher shot/photon noise (variation in the light itself; sqrt(2) ≈ 1.41, since each pixel then collects half as many photons), both at pixel level and at 'image level', but not because of a lower fill factor/efficiency, which is pretty much the same on, for example, the 40D and 7D.
 
Steen Bay wrote:

Nonsense. It is illusory reality, because the ignorant observer is assuming that the natural, equalizing thing to do is to increase pixel display magnification with increased density.
Sorry, that should have been "image display magnification". Pixel magnification would be equal in that scenario.

--
John

 
Which physics are you thinking of? If it's the old one about the standard deviation of shot noise in a pixel being the square root of the number of photoelectrons captured, and therefore big pixels 'collect more light' and therefore 'have less noise', then I can take you through the physics that shows that when you aggregate the smaller pixels together to cover the same area as the bigger pixel, the noise is exactly the same. Or did you have some other 'physics' in mind?
You are not considering one very important detail.

The shot noise is the same only for 100% fill factor sensors, where the entire surface of the sensor is collecting light.
Fill factor is included in the quantum efficiency measure, so indeed I did consider it, along with all the other things that affect quantum efficiency, like photodiode depth, CFA passband, and so on.
If that's the case, changing the pixel size does not indeed change the overall shot noise because the sensor still collects the same number of photons (note that this applies to shot noise only).

However, no DSLR sensor has a 100% fill factor (fill factor is the photodiode area vs pixel area ratio).
No, the effective fill factor in all modern sensors is determined by the efficiency of the microlenses. That's what they're for.
The photodiode shares the pixel real estate with the on-pixel circuitry - typically 3-4 transistors.
When you shrink the pixels, the on-pixel circuitry stays largely the same.
Not if you shrink the pixel uniformly, it doesn't. The fill factor remains exactly the same. In fact, the general trend seems to be that quantum efficiency increases as pixel size shrinks. As a trend, smaller pixels are more efficient than bigger ones, with P&S pixels the most efficient of all. My suspicion is that this is because it's easier to make effective small microlenses than big ones.
As a result, the photodiode shrinkage is much bigger than the pixel shrinkage.

For example, shrinking the pixel area by 10% might result in a 50% shrinkage of the photodiode area.
Evidence for this, please? It would only be the case if the sensor design were up against process geometry limits, and since 1.5 micron CMOS sensors are being made, it is hardly likely that DSLR sensors in the 4-6 micron range are up against process limits.
Thus, shrinking the pixels on a normal DSLR sensor (one that does not have 100% fill factor) actually reduces the number of total collected photons - and hence increases shot noise.

It is a common myth that the number/size of pixels does not matter and only the sensor size matters.
No, the common myth is that pixel density, by itself, determines sensor noise. The truth is more complex, but sensor size is one of the main factors in the noise and DR performance for any given output image size. QE and read noise are also important. Pixel size matters only insomuch as it affects those other factors, which it doesn't very much.
This is incorrect, since the myth does not take into account the loss of photodiode area (and hence the loss of counted electrons) on sensors with less than 100% fill factor.
This loss of photodiode area is a myth, unless the sensor design is up against process geometry limits.
Note that this refers only to shot noise and does not even take into account the broader impact of pixel size on other sensor parameters.
It is interesting that the loss of QE you are speculating about is not actually observed in real sensors. This is because it is pure fiction. Once again, clutching at straws.

Edit: Just to take this opportunity also to reply to your other post:

You repeat exactly what you said before (weaker photoelectric signal = more noise) without even considering the well-founded electronic reasons I gave as to why you're operating in the wrong paradigm. You invite me to 'sleep on it'. Believe me, I have slept on it on a number of occasions, which is why I understand it better than you do. Perhaps you should undertake a short course in microelectronics, sleep on it a bit yourself, and then you might understand it better than you do at present.
 
There is no such thing as even light. Light is like vibrating raindrops of individual photons.
Precisely.

That's why smaller pixels are more prone to photon-counting errors (aka noise) than larger ones.
They're not really errors. They are only "errors" in the delusion of an ideal. They are exactly what is REALLY falling on the sensor. It is not something that the pixels do, per se.

Your "prone" couldn't be more wrong. The only difference as far as shot noise is concerned, for two sensors varying only in photosite density, is resolution. Lower density obfuscates the original location of photon strikes more.

Understanding that, how could one call higher density "noisier" from a standpoint of information acquisition?

Of course it looks noisier, if you magnify it more, and convert it literally. That doesn't mean that it is noisier.
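That last point is easy to check numerically; a minimal sketch (Python/NumPy, with illustrative photon counts) comparing two equal-area sensors at 100% pixel view and again at a matched display size:

```python
import numpy as np

rng = np.random.default_rng(2)
mean_per_big_pixel = 4_000  # illustrative photons per 'big pixel' of area

# Same sensor area, two densities: 256x256 big pixels vs 512x512 small ones.
big = rng.poisson(mean_per_big_pixel, size=(256, 256)).astype(float)
dense = rng.poisson(mean_per_big_pixel / 4, size=(512, 512)).astype(float)

# At 100% pixel view the denser sensor looks noisier...
print("per-pixel relative noise:",
      f"big={big.std() / big.mean():.4f}",
      f"dense={dense.std() / dense.mean():.4f}")

# ...but resampled to the same display size (2x2 block sums), it isn't.
dense_resampled = dense.reshape(256, 2, 256, 2).sum(axis=(1, 3))
print("matched display size:    ",
      f"big={big.std() / big.mean():.4f}",
      f"dense={dense_resampled.std() / dense_resampled.mean():.4f}")
```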

--
John

 
