Relative Pixel Density - don't expect miracles from 40D

They did at least keep the collectors the same size despite going to
10MP.
But yeah, I probably got way too optimistic about the 1 stop; I
forgot that the 1D MkII wasn't a noise king despite having a 1.3x crop.
The further spacing of the pixels, however, means that the signal can have more sharpness with the same lens on the MkII, as compared to the 20D/30D. The MkII also has much lower read noise at ISOs 100 and 200.

--
John

 
"The lower pixel noise of a bigger pixel, displayed bigger, is wrong over a larger area, and it is also wrong in that it doesn't reflect the detail beneath it like more smaller pixels would; another kind of noise that generally isn't talked about."

I get this. But why do we not see high-end solutions based on this thinking, say a full-frame or 1.5x sensor with small-digicam pixel density?
 
"The lower pixel noise of a bigger pixel, displayed bigger, is wrong
over a larger area, and it is also wrong in that it doesn't reflect
the detail beneath it like more smaller pixels would; another kind of
noise that generally isn't talked about."

I get this. But why do we not see high-end solutions based on this
thinking, say a full-frame or 1.5x sensor with small-digicam pixel
density?
High expense and low demand. I'm trying to build the demand.

--
John

 
Both in the camera and out of the camera, we'd need some serious storage space and some awesome computing power to support a full-frame camera with P&S pixel density.

Still, I'd like to see it tried. The results should be amazing.

Bring on the 40 megapixel 1.6X body and the 100 megapixel FF body.

Maybe it'd be harder on the FF body due to alignment issues with their multi-exposure stepper process, but surely a 1.6 or 1.3 sensor could be made with ultra-high pixel density.

--
Jim H.
 
Noise suppression software already combines the signals from adjacent photosites; that is why the detail is reduced. Combining small photosites does not get back all of the signal-to-noise lost by shrinking pixels. Read noise is not insignificant for photosites that only collect a small to moderate number of electrons. Shrinking the photosite makes read noise issues worse because it reduces the ratio of detected photons to read noise in each pixel, and pixel binning only recovers half of this degradation unless it occurs before the first read amplifier. Shrinking photosites also tends to hurt fill factor, because each pixel needs some non-sensing electronics that don't shrink along with the pixel. Canon's CMOS sensors have very good performance partly because of the large non-sensing circuitry that each pixel contains.
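The binning arithmetic above is easy to put in numbers with a toy model (all values here are made up for illustration): one large photosite collecting S electrons with read noise R, versus four small photosites each collecting S/4 with the same per-pixel read noise R, combined either digitally after readout or in the charge domain before the read amplifier.

```python
import math

S = 400.0   # electrons collected over one big-pixel-sized patch (illustrative)
R = 8.0     # read noise in electrons per read (illustrative)

# One large photosite: shot noise sqrt(S) plus a single dose of read noise.
snr_big = S / math.sqrt(S + R**2)

# Four small photosites binned after readout: same total signal, but
# four independent read-noise contributions add in quadrature.
snr_digital_bin = S / math.sqrt(S + 4 * R**2)

# Binning in the charge domain, before the first read amplifier,
# pays the read-noise penalty only once.
snr_charge_bin = S / math.sqrt(S + R**2)

print(f"big pixel:       SNR = {snr_big:.1f}")
print(f"digital 2x2 bin: SNR = {snr_digital_bin:.1f}")
print(f"charge 2x2 bin:  SNR = {snr_charge_bin:.1f}")
```

At low signal levels the digitally binned small pixels give up noticeably more SNR than either the big pixel or charge-domain binning, which is the partial-recovery effect described above; at high signal levels shot noise dominates and the gap shrinks.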
 
Some of us are old enough, with good enough memories, to remember the 10D to 20D transition.

Even though the 10D had bigger cells, the 20D had lower noise.

Oh, my gosh, how did they do that?

A) the 10D used 2 channels of A/Ds operating at 24 MHz
....the 20D uses 4 channels of A/Ds operating at 16 MHz
....Most of the noise improvement comes from the reconfiguration of A/Ds

B) a host of improvements in sensor technology
...1) better microlenses
...2) different metalization
...3) smaller transistor footprint compared to cell area
...4) different color microfilters

There is nothing preventing Canon from using more A/Ds, and indeed if higher speeds (7 fps) or more bits per read (12 -> 14) are wanted, then more channels of slower A/Ds will be "de rigueur".
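A quick back-of-envelope check of those readout figures, assuming each A/D converts one pixel per clock (a simplification that ignores blanking, overscan, and other overheads):

```python
# Aggregate readout throughput, assuming one pixel converted per A/D clock.
throughput_10d = 2 * 24e6   # 2 channels at 24 MHz -> 48 Mpixel/s
throughput_20d = 4 * 16e6   # 4 channels at 16 MHz -> 64 Mpixel/s

# Frames per second an 8.2 MP sensor could sustain at the 20D's rate.
fps_ceiling = throughput_20d / 8.2e6
print(f"20D-class ceiling: {fps_ceiling:.1f} fps")

# A 14-bit conversion takes longer per sample than a 12-bit one, so
# hitting 7 fps at 14 bits pushes toward more channels of slower A/Ds.
```

The ceiling comes out just under 8 fps, comfortably above the 20D's actual 5 fps once real-world overheads are accounted for, which is consistent with the claim that more channels are the path to faster or deeper reads.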

Also, there is nothing preventing even better microlensing, smaller transistor overheads, or smaller wiring overheads.

Finally, there is nothing preventing Canon from doing something in the Noise Abatement category either.
--
Mitch
 
In fact, any crop from a DSLR the size of a 10MP P&S camera's full
sensor will capture what looks like garbage compared to what the P&S
sensor captures, AOTBE.
I'm not sure what that proves.
It proves that small pixels packed into a given area give better
images than a smaller number of larger pixels.
But are you not ignoring the intention of the sensor design in this case? You are comparing 100% of the small sensor vs a small portion of the larger sensor. The small sensor was designed to capture the best picture it can at 100%. The larger sensor was designed to capture the best picture it can at 100%. So taking 100% of the output of the small sensor and comparing it to a small piece of the larger sensor is an apples and oranges comparison.

I understand you are trying to compare pixel area, but you also have to keep in mind that the sensors are designed to be used as a whole.

--

Comprehensive 2007 speculation and predictions: http://1001noisycameras.blogspot.com
 
In fact, any crop from a DSLR the size of a 10MP P&S camera's full
sensor will capture what looks like garbage compared to what the P&S
sensor captures, AOTBE.
I'm not sure what that proves.
It proves that small pixels packed into a given area give better
images than a smaller number of larger pixels.
But are you not ignoring the intention of the sensor design in this
case? You are comparing 100% of the small sensor vs a small portion
of the larger sensor. The small sensor was designed to capture the
best picture it can at 100%. The larger sensor was designed to
capture the best picture it can at 100%.
Is it really? It is designed to give the best image it can with a limited number of pixels. More pixels could give a better image, but might be more expensive, and require faster data transfer, and
So taking 100% of the output
of the small sensor and comparing it to a small piece of the larger
sensor is an apples and oranges comparison.
Not at all; not for the sake of the argument I'm making. You seem to be assuming that I am making a point that I am not making. The point is that more and smaller pixels can give a better image, without any extra image noise or loss of DR, as is commonly assumed. The ONLY way to demonstrate this fact is by comparing a crop of a DSLR with a full frame of a tiny-pixel/sensor camera, or equal area crops of both sensors. The OP is using the whole image of cameras with different amounts of pixels and different sensor sizes to make a point about pixel density, which is illogical. The only way to compare pixel densities is to compare them per unit of area.
I understand you are trying to compare pixel area, but you also have
to keep in mind that the sensors are designed to be used as a whole.
That's in my mind, but it is totally irrelevant in this context.

--
John

 
"The lower pixel noise of a bigger pixel, displayed bigger, is wrong
over a larger area, and it is also wrong in that it doesn't reflect
the detail beneath it like more smaller pixels would; another kind of
noise that generally isn't talked about."

I get this. But why do we not see high-end solutions based on this
thinking, say a full-frame or 1.5x sensor with small-digicam pixel
density?
High expense and low demand. I'm trying to build the demand.

--
John

There is a noise floor that is inherent to the leakage currents in the IC process they are using for the sensors. You can't expect the read noise to drop enough with smaller photosites to offset the reduced well depth, so DR will suffer. The current P&S cameras all have smaller DR than the current SLRs.

There are ways to increase the well depth without increasing the photodiode size, though, which I don't think have been tried yet. Such changes would reduce the native ISO of the sensor, though.

I'm confused by your comment about properly exposing at higher ISOs. Higher ISOs never fill up the conceptual "well" in a correct exposure and always give up DR. Are you just pointing out that underexposing at higher ISOs causes the photon noise to be amplified?
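The well-depth versus read-noise trade-off behind these DR claims can be sketched numerically. Engineering dynamic range per pixel is commonly quoted as full-well capacity over read noise, in stops; the electron counts below are illustrative assumptions, not measured values for any particular camera.

```python
import math

def dr_stops(full_well_e, read_noise_e):
    """Engineering dynamic range in stops: log2(full well / read noise)."""
    return math.log2(full_well_e / read_noise_e)

# Illustrative numbers: a large DSLR photosite vs. a small P&S photosite.
print(f"DSLR pixel: {dr_stops(50000, 8):.1f} stops")
print(f"P&S  pixel: {dr_stops(10000, 5):.1f} stops")

# Each doubling of ISO halves the usable well while read noise stays
# roughly fixed, so highlight DR drops by about a stop per stop of ISO
# (the "never fill the well" point above).
print(f"DSLR at 4x base ISO: {dr_stops(50000 / 4, 8):.1f} stops")
```

Under these assumptions the small photosite gives up well over a stop of DR, and raising ISO costs the big photosite the same way, which is why a correct higher-ISO exposure always gives up DR relative to base ISO.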
 
sensors. The OP is using the whole image of cameras with different
amounts of pixels and different sensor sizes to make a point about
pixel density, which is illogical. The only way to compare pixel
densities is to compare them per unit of area.
No - I was doing completely the opposite - i.e. using relative pixel density to make a point about relative (but whole) Image Quality.

The only way to compare the Image Quality produced by two different cameras is to compare the two complete images the cameras produce in a realistic range of operating conditions. Who cares if a small fraction of the DSLR sensor does not produce an equivalent image to the P & S sensor? I'm never going to try to produce an image from that small fraction of the DSLR sensor.

You seem to have a bizarre notion that it is unfair to compare a large sensor with a small one, so to even things up you will only compare them on the basis of equally sized portions of sensor. This makes no sense to me.

My original post was about relative (whole) Image Quality between different cameras and how that appears to be inversely related to the number of pixels per square inch on their sensors. The actual size of the sensors is irrelevant to this comparison except in as much as the larger sensors allow bigger pixels for a given resolution, and if there wasn't an advantage in doing this then these larger sensors with larger pixels would not exist.

Fred
 
B) a host of improvements in sensor technology
...1) better microlenses
...2) different metalization
...3) smaller transistor footprint compared to cell area
...4) different color microfilters
I don't disagree that improvements can and will be made.

However, the same improvements can be made to all sensor/pixel sizes so my original point still stands, that for a given resolution the larger sized sensor, i.e. the one with lower pixel density, will perform better with respect to total image quality.

Fred
 
sensors. The OP is using the whole image of cameras with different
amounts of pixels and different sensor sizes to make a point about
pixel density, which is illogical. The only way to compare pixel
densities is to compare them per unit of area.
No - I was doing completely the opposite - i.e. using relative pixel
density to make a point about relative (but whole) Image Quality.

The only way to compare the Image Quality produced by two different
cameras is to compare the two complete images the cameras produce in
a realistic range of operating conditions. Who cares if a small
fraction of the DSLR sensor does not produce an equivalent image to
the P & S sensor? I'm never going to try to produce an image from
that small fraction of the DSLR sensor.

You seem to have a bizarre notion that it is unfair to compare a
large sensor with a small one, so to even things up you will only
compare them on the basis of equally sized portions of sensor. This
makes no sense to me.

My original post was about relative (whole) Image Quality between
different cameras and how that appears to be inversely related to the
number of pixels per square inch on their sensors. The actual size of
the sensors is irrelevant to this comparison except in as much as the
larger sensors allow bigger pixels for a given resolution, and if
there wasn't an advantage in doing this then these larger sensors
with larger pixels would not exist.

Fred
The problem is that you are doing a comparison while changing two different variables at once. You can't be sure whether the superior image quality of the larger sensors is down to pixel size or sensor size because both have been changed. Any science teacher will tell you that makes it almost impossible to draw a worthwhile conclusion.

Comparing equal areas of the two sensors lets us answer the question "what would happen if we created a DSLR with the pixel density of a compact?" The answer is that we would get a massive increase in image detail, and while noise at the pixel level might increase, that doesn't matter because noise at the image level would not.
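The "pixel noise up, image noise flat" claim is easy to check for photon shot noise with a small simulation. Read noise and fill-factor losses are deliberately ignored here, so this is the idealized case, not a full camera model.

```python
import numpy as np

rng = np.random.default_rng(0)

mean_photons = 4000   # photons falling on one big-pixel-sized patch
n_patches = 100_000   # number of such patches simulated

# One big pixel per patch.
big = rng.poisson(mean_photons, n_patches)

# Four small pixels per patch, each seeing a quarter of the light,
# then summed back to the same output resolution.
small_binned = rng.poisson(mean_photons / 4, (n_patches, 4)).sum(axis=1)

for name, img in (("big pixels  ", big), ("binned small", small_binned)):
    rel_noise = img.std() / img.mean()
    print(f"{name}: relative noise = {rel_noise:.4f}")

# Both come out near 1/sqrt(4000): at matched viewing size, shot noise
# depends on the total light collected, not on the pixel count.
```

The sum of four independent Poisson draws with mean 1000 is statistically identical to one draw with mean 4000, which is exactly why equal-area comparisons are the fair test of pixel density.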

Certain imagers have always used large pixels and you see them today in specialist scientific (usually astronomical) cameras. They are designed to be used with lens systems with very large focal ratios, so the image resolution will necessarily be low and adequate sampling can be done even with large-pixel sensors. Switching to a smaller pixel would heavily oversample the image without any real resolution gain, but would tend to increase overall read noise. These conditions do not exist for any current DSLR, so it cannot be inferred that they should use similar pixel sizes.

The main reasons for current pixel densities come down to cost, preserving a decent fill ratio at any given process node, avoiding having too much data to process and store, and allowing for a gradual improvement in technology to ensure a good ROI. Even if Canon could introduce a 200MP 1Ds-series camera right now, it would be very expensive, would damage other camera sales, and would be so slow that most photographers would find it next to useless.
 
The problem is that you are doing a comparison while changing two
different variables at once. You can't be sure whether the superior
image quality of the larger sensors is down to pixel size or sensor
size because both have been changed. Any science teacher will tell
you that makes it almost impossible to draw a worthwhile conclusion.
The speculation in the rest of your post is interesting, but I take issue with the statement above. I deliberately calculated an entirely comparable figure for all the sensors precisely to get away from the confusion of different sensor sizes.

If you look again you will see that I worked out the number of pixels per unit of sensor area, i.e. MP per square inch. This figure is effectively an analogue for pixel size, but can be directly compared without reference to actual sensor sizes.

Of course if you used the low 9.6 MP/sq in pixel density of the FF 5D on a 1.6 sensor you would get a rather low resolution image, but of course that is why you need bigger sensors if you want to get the benefits of using bigger pixels, while retaining reasonably high resolution.
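The MP-per-square-inch figure is straightforward to reproduce. The sensor dimensions below are nominal published values (and the P&S entry is a hypothetical typical 1/2.5" compact), so treat them as assumptions to check against spec sheets rather than exact data.

```python
MM2_PER_SQ_IN = 25.4 ** 2  # 645.16 mm^2 per square inch

def mp_per_sq_in(megapixels, width_mm, height_mm):
    """Pixel density: megapixels divided by sensor area in square inches."""
    return megapixels / (width_mm * height_mm / MM2_PER_SQ_IN)

# Nominal sensor sizes (approximate).
cameras = {
    "5D (full frame)":        (12.8, 36.0, 24.0),
    "400D (1.6x crop)":       (10.1, 22.2, 14.8),
    'typical 1/2.5" P&S':     (7.1, 5.8, 4.3),
}

for name, (mp, w, h) in cameras.items():
    print(f"{name}: {mp_per_sq_in(mp, w, h):.1f} MP/sq in")
```

The full-frame figure lands at about 9.6 MP/sq in, matching the 5D number quoted above, while a typical compact comes out more than an order of magnitude denser.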

Fred
 
B) a host of improvements in sensor technology
...1) better microlenses
...2) different metalization
...3) smaller transistor footprint compared to cell area
...4) different color microfilters
I don't disagree that improvements can and will be made.

However, the same improvements can be made to all sensor/pixel sizes
so my original point still stands, that for a given resolution the
larger sized sensor, i.e. the one with lower pixel density, will
perform better with respect to total image quality.

Fred
Obviously you've oversimplified the problem and your blanket statement is, therefore, incorrect in many cases.

Imagine taking it to the limit. If you build a sensor with a single pixel, the total image quality will be terrible. Noise performance may be great, but who would care? If we want to make an 8x10 print, 4 Mpix, or even 6 for some subjects, may be seen to degrade image quality more than noise or DR does.

Statements like "better total image quality" have too many components to judge without knowing the end use of the image and the subject. The DSLR strives to be all things to the most common uses at a particular price point. Each generation will probably improve on the previous in this respect.
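The print-size side of that trade-off is simple arithmetic. Assuming the common rule of thumb of roughly 300 pixels per inch for a close-viewing print (the threshold itself is debatable and subject-dependent):

```python
def required_megapixels(width_in, height_in, ppi=300):
    """Megapixels needed to print at the given size and pixel density."""
    return width_in * ppi * height_in * ppi / 1e6

print(f"8x10 at 300 ppi: {required_megapixels(8, 10):.1f} MP")
print(f"8x10 at 200 ppi: {required_megapixels(8, 10, 200):.1f} MP")
```

An 8x10 at 300 ppi wants about 7.2 MP, and even a relaxed 200 ppi wants 3.2 MP, so a 4 or 6 MP capture sits right at the margin where resolution, not noise or DR, limits "total image quality".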
 
I might have agreed if you had said "All things equal, the camera with the larger sensor can deliver the greatest image quality."
 
the best DSLR on the market should be a Nikon F3 with a Kodak digital back: a 1.3MP FF sensor. It was also the first DSLR. And the quality by today's standards was horrible, although it was revolutionary for its time.

Technology progresses, dude. I can't see Canon bumping up MP and pixel density if it would negatively impact IQ. That may work with low priced P&S's, but not a midrange DSLR.
 
fairly well, and a larger meg. file does some of that from an SLR (like 40D as well).
--

The choices you've made in the past and the ones you make today create your tomorrow.

See Cuba & San Francisco at http://www.jonrp.smugmug.com
 
In fact, any crop from a DSLR the size of a 10MP P&S camera's full
sensor will capture what looks like garbage compared to what the P&S
sensor captures, AOTBE.
I'm not sure what that proves.
It proves that small pixels packed into a given area give better
images than a smaller number of larger pixels.
But are you not ignoring the intention of the sensor design in this
case? You are comparing 100% of the small sensor vs a small portion
of the larger sensor. The small sensor was designed to capture the
best picture it can at 100%. The larger sensor was designed to
capture the best picture it can at 100%.
Is it really? It is designed to give the best image it can with a
limited number of pixels. More pixels could give a better image, but
might be more expensive, and require faster data transfer, and
So taking 100% of the output
of the small sensor and comparing it to a small piece of the larger
sensor is an apples and oranges comparison.
Not at all; not for the sake of the argument I'm making. You seem to
be assuming that I am making a point that I am not making. The point
is that more and smaller pixels can give a better image, without any
extra image noise or loss of DR, as is commonly assumed. The ONLY
way to demonstrate this fact is by comparing a crop of a DSLR with a
full frame of a tiny-pixel/sensor camera, or equal area crops of both
sensors. The OP is using the whole image of cameras with different
amounts of pixels and different sensor sizes to make a point about
pixel density, which is illogical. The only way to compare pixel
densities is to compare them per unit of area.
I understand what you are trying to compare. I understand that the experiment you came up with may be the best available option to compare pixel areas.

However, what I am trying to say is that the question you are trying to answer is not answerable by this test, because pixel size is not independent of sensor size. Pixel size is bound by the sensor size. Pixels don't exist outside of sensors.

So essentially your experiment is making the assumption that pixels can be compared independently of the sensor they live on (when dealing with different size sensors). What I am saying is that pixels are dependent on the sensor they live on, so they cannot be compared deterministically in such a fashion.

I hope this is more clear than the previous one :)

--

Comprehensive 2007 speculation and predictions: http://1001noisycameras.blogspot.com
 
I understand what you are trying to compare. I understand that the
experiment you came up with may be the best available option to
compare pixel areas.

However, what I am trying to say is that the question you are trying
to answer is not answerable by this test, because pixel size is not
independent of sensor size. Pixel size is bound by the sensor size.
Pixels don't exist outside of sensors.

So essentially your experiment is making the assumption that pixels
can be compared independently of the sensor they live on (when
dealing with different size sensors). What I am saying is that pixels
are dependent on the sensor they live on, so they cannot be compared
deterministically in such a fashion.

I hope this is more clear than the previous one :)
It is clear what you're trying to say, but I don't see any truth to it.

Small pixels from small cameras show us what crops of large sensors with small pixels could be like. Pixels don't care what role they're playing; they just build up charges.

--
John

 
Andrew dB wrote:
The speculation in the rest of your post is interesting, but I take
issue with the statement above. I deliberately calculated an
entirely comparable figure for all the sensors precisely to get away
from the confusion of different sensor sizes.
Then you should have compared pixels by taking the same-area crop of each sensor, if you wanted to compare pixel size. You did not. The results would be almost the opposite if you had.

--
John

 
