Relative Pixel Density - don't expect miracles from 40D

Don_D, There is no extra wasted space when you shrink large pixel
down to smaller ones. Like I said before progress in lithography is
always to shrink pitch and line width together at about the same
proportion.
carlk,

We are not talking about progress in lithography (as in CPU design), but about making sensors with rather large features, the light-gathering diodes. The diodes are quite large compared to the minimum line width and spacing. In a given technology there would be no reason to adjust the minimum spacing as the pixel size changes; you would always use the minimum, since anything more would just use up precious space. This will result in wasted space, as I showed in my example.

There is another, perhaps more important, reason why breaking up a large pixel into 4 smaller ones would result in more wasted space. In a CMOS sensor, each light-gathering diode pixel has to have its own 3- or 4-transistor circuit, so you would have 4x as many of these circuits, all taking up room that could have been used for light-sensitive area. There is no reason these circuits would scale with the diode sensor area; they certainly would not shrink by 4x.

So breaking larger pixels into smaller ones will always result in wasted area and more random shot noise.

I admit that this random shot noise may not always be the biggest contributor to the overall noise.
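To put some toy numbers on the wasted-space argument: suppose the minimum spacing and the per-pixel readout transistors stay fixed while the diode shrinks. A minimal Python sketch (the 0.5 micron spacing and 2 sq micron transistor area are made-up illustrative values, not real process data):

    # Toy model: photosensitive fraction of a pixel cell with a fixed
    # spacing border and a fixed per-pixel readout-transistor area.
    def fill_factor(pitch_um, spacing_um=0.5, transistor_area_um2=2.0):
        cell_area = pitch_um ** 2
        photo_area = (pitch_um - spacing_um) ** 2 - transistor_area_um2
        return max(photo_area, 0.0) / cell_area

    print(fill_factor(8.0))  # one 8 um pixel: ~0.85
    print(fill_factor(4.0))  # each of four 4 um pixels: ~0.64

Because the spacing and transistors don't shrink with the diode, four small pixels together collect noticeably less light than the one big pixel they replace.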

Don
http://www.pbase.com/dond
 
Don, I'm assuming the sensor designers will want to shrink everything - sensor, spacing, transistors - down together proportionally. I don't know a reason why this cannot be done. There is no difference between making microprocessors and CMOS sensors; the name of the game is to make small features smaller. I'd be interested to see some real data comparing different sensors. I will be very surprised if a 10MP sensor does not have smaller spacing and transistors than an 8MP sensor.
 
Even better (if my math is right), the Sony Mavica FD-71 sported a
1/4" sensor (.15 x .2" at 4:3), and had a mere 640x480 (.3 MP),
giving it a relative pixel density of...

.009 MP/sq in! Nice!

-andy
I don't think your math is right. Those figures give a pixel density of 10 MP/sq in. A sensor with dimensions of .15" x .2" is 0.03 sq in, and 0.3 MP per 0.03 sq in is 10 MP per sq in.
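For what it's worth, here is that arithmetic in a couple of lines of Python, using the dimensions quoted above:

    # Pixel density from the quoted Mavica figures.
    width_in, height_in = 0.15, 0.20            # quoted sensor size, inches
    megapixels = 640 * 480 / 1e6                # 0.3072 MP
    print(megapixels / (width_in * height_in))  # ~10.2 MP per sq in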
--
Alastair
http://homepage.mac.com/anorcross/home.html
http://anorcross.smugmug.com
Equipment in profile

 
But the thing is, as long as we're only talking about the sensor itself, the larger sensor, even if it has larger (and fewer) photosites, will always cost a whole lot more than the smaller sensor.

So while many would love to have a full-frame sensor, and would gladly trade pixel density for sheer size (as long as we still get decent final resolution), the fact is that we're always going to pay enormously more for that large sensor IC.

Making the photosites more dense doesn't have as much of a recurring cost associated with it (for the sensor IC itself) as you might think. And we can tolerate a lot more dead pixels if we've got so many more to spare.

So other than the computing power and memory you'd need to process these larger files, the cost to have them isn't all that great (if you keep the sensor the same size).

The real point, I think, is:

Would you rather have a 1.6X sensor with 10 or 12 Megapixels or one that has 6 or 8? It sounds as if some people are convinced that going above 8 megapixels for the 40D is going to result in lower image quality. I'm not convinced by their arguments.

--
Jim H.
 
It's limited, but it's there. I think you have to make the edit within 15 minutes of the post and you can only make one edit.

--
Jim H.
 
Don, I'm assuming the sensor designers will want to shrink
everything - sensor, spacing, transistors - down together
proportionally. I don't know a reason why this cannot be done.
carlk,

The point you are missing is that the digital sensor has one LARGE feature, the light-gathering diode. All the other transistors are ALREADY at their minimum dimensions. Just because the large feature, the diode area, is cut to 1/4 doesn't mean that the transistor area can be cut to 1/4 too.
There is no difference between making microprocessors and CMOS
sensors; the name of the game is to make small features smaller.
Not so, for the same reason that Moore's law doesn't apply to sensors: the light-sensitive diodes are LARGE. The CMOS technology is already mature, driven by other applications. To a first order, sensor designers are not challenged by trying to make smaller devices.
I’d be
interested to see some real data comparing different sensors. I will
be very surprised if a 10MP sensor does not have smaller spacing and
transistors than an 8MP sensor.
If the CMOS process and the litho tooling changed for some reason this could be the case. However, such a process change is not required to move from 8 MP to 10 MP, and it would be very costly with limited ROI.

Cheers,
Don
http://www.pbase.com/dond
 
Don, it can be done, and people are doing it all the time. I've been involved in this area for so long that I have learned not to listen to naysayers. What was an almost impossible challenge not too long ago soon becomes a standard process. It happens all the time.

I did a little search and found this comment about the 10MP 400D sensor. Hope this will erase any doubts you still have.

"Chuck Westfall, Canon USA's Director of Media and Customer Relationship, says that dynamic range and noise levels with the Rebel XTi/400D are nearly identical to the camera it replaces, despite the smaller pixel size of the new model (5.7µm square vs 6.4µm square for the Rebel XT/350D). This, says Westfall, is because of an improved fill factor - the gap between the sensor's microlenses has been reduced, and the light-sensitive area of each pixel has been further increased through other design improvements."
 
carlk,

With all due respect, our exchanges here seem to be suffering from a disconnect. I give you specifics, to which you never respond directly; instead you respond with generalities.

Did you notice what Westfall said: "the noise levels are nearly identical DESPITE the smaller pixel size"? That is, this implies to me that without innovation he would expect higher noise from smaller pixels. Hello?
Exactly my point.

On the other hand, nowhere have I ever said that I didn't expect improvement and innovation. Who doesn't?
Don
http://www.pbase.com/dond
 
Exactly. So the point I was (clumsily) trying to make is that these
types of design decisions/trade-offs affect the photosites/pixels of
each sensor. The photosites/pixels are bound to each sensor. Because
of these restrictions, the types of conclusions one can arrive at by
comparing pixel areas from sensors of very different sizes are also
bound by the above restrictions.
No matter how you choose to look at it, the fact is, 40 - 200 MP
sensors can be made, and will be as good as multiple small sensors
stitched together, even if parallelized readout is necessary for a
good return on the read speed vs read noise compromise.
I am agreeing more than disagreeing with most of the things you are saying. It is only a subset that I was objecting to, and it took me a few retries to write it in a way that others could understand what I was saying :)
According to you, we can't project image quality because they aren't
already being manufactured.
I haven't said that! It was not the projections, but the comparisons among actual existing sensors that I was referring to (mostly). What I said is that the conclusions from those comparisons are bound by the restrictions of the comparison. An extreme-case example (and I'm not saying you said this): one cannot say that an existing APS-C sensor is worse than an existing 1/2.5" sensor just because a pixel-area comparison shows a better image from the 1/2.5" sensor. That is different from comparing projected/hypothetical/theoretical/future sensors stitched from 1/2.5" sensors, which is what I was trying to say.

--

Comprehensive 2007 speculation and predictions: http://1001noisycameras.blogspot.com
 
Don, no, I totally understand where you're coming from. Your only argument is that smaller pixels will result in more wasted space. Other than that, there is no reason they would get less light/signal. To prove your point you would have to design a sensor with small pixels and large spacing. In reality that will never happen. OK, never say never, but it would be lousy engineering work.
 
No matter how you choose to look at it, the fact is, 40 - 200 MP
sensors can be made, and will be as good as multiple small sensors
stitched together, even if parallelized readout is necessary for a
good return on the read speed vs read noise compromise.
While I agree that a 10 or even 20 megapixel sensor is likely to give better IQ than an 8 MP sensor, the math and physics do not agree with a 200 MP sensor. Please correct my figures if they are wrong. An 8 MP 30D sensor counts approximately 50,000 electrons at ISO 100 full well capacity. That would translate to about 2,000 electrons on a 200 MP sensor at ISO 100 (50,000 * 8/200), or 125 electrons at ISO 1600. To get read noise close to the same level you need 25x as many converters. Given 125 electrons, shot noise, and likely higher proportional read noise, there would be a much higher noise level on the 200 MP sensor (no matter how random it was). Given that lenses are harder to make for large sensors, it is unlikely that any reasonably priced lenses would be able to give that resolution.
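Spelling out that scaling (taking the 50,000-electron figure above at face value; it is a rough estimate, not a published spec):

    # Scale an 8 MP full well down to a hypothetical 200 MP sensor
    # of the same total area.
    full_well_8mp = 50_000                     # electrons at ISO 100
    full_well_200mp = full_well_8mp * 8 / 200  # same area, 25x the pixels
    print(full_well_200mp)       # 2000 electrons at ISO 100
    print(full_well_200mp / 16)  # 125 electrons at ISO 1600 saturation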

--
John

 
The real point, I think, is:

Would you rather have a 1.6X sensor with 10 or 12 Megapixels or one
that has 6 or 8? It sounds as if some people are convinced that
going above 8 megapixels for the 40D is going to result in lower
image quality. I'm not convinced by their arguments.
Yeah, I'm not convinced either. There could be a point there about dynamic range, I guess, but from 8 to 12 MP, and with advancing sensor technology, I don't think it would be very relevant. But I also wouldn't want much more than 12 MP; the low-to-no gain in resolution for 99.9% of my images vs. the increase in computing power and storage isn't worth it at all (unless you are doing a lot of cropping, but that reduces the effective "sensor size" and image-level quality as well, so it's pretty much moot). I would probably go for 14-bit color or something like that; that could justify the increase in PC power and storage.
 
this implies to me
that without innovation he would expect higher noise from smaller
pixels. Hello?
Exactly my point.
We have no idea whether he is talking about image noise, pixel noise, shot noise, or read noise, so the statement is difficult to comment on. A smaller pixel does mean more pixel-level shot noise at the same exposure level.
On the other hand, nowhere have I ever said that I didn't expect
improvement and innovation. Who doesn't?
Well, some limits are being approached. Shot noise is not something that technology can overcome. Shot noise isn't really noise; it's really signal, and its "noise" status is only because we don't want to see it; we'd rather see an idealized, even capture like the ones our brains fabricate for us. However, shot noise does not increase at the image level when we subdivide the image into more and smaller pixels, except as photosite border areas start losing photons, and even then it takes a drastic loss to make a visible difference. To lose 20% of the photons to get 4x the pixels would not be a great loss; shot noise would only increase by about 11% at the image level. Based on the quantum efficiency of 2-micron sensors, like the one in the Panasonic FZ50, there doesn't seem to be much of an issue at such pixel pitches, at least for simple CCD designs. Canon DSLR sensors have more to deal with, however, with their complex amplifiers at the photosites; this technology may need to be simplified for much higher pixel densities, but once the full well counts start dropping significantly, you really don't need a wide range of amplifications at the photosites.
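The 11% figure falls straight out of Poisson statistics; a minimal check:

    from math import sqrt
    # Relative shot noise goes as 1/sqrt(N) for N collected photons,
    # so losing 20% of them multiplies it by 1/sqrt(0.8).
    print(1 / sqrt(0.8) - 1)  # ~0.118, roughly the 11% quoted above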

--
John

 
Don, no, I totally understand where you're coming from. Your only
argument is that smaller pixels will result in more wasted space.
Correct. Some people had started asserting that there was no noise penalty, for a given sensor size, when reducing the pixel size to increase the megapixel count. With all things remaining equal, there is an inherent noise penalty because of the wasted space. I think this is an important point, as both compact camera and DSLR manufacturers are crowding more pixels onto their sensors.

You have correctly pointed out that this penalty can sometimes be overcome by clever innovative improvements.

I think we all have high hopes that Canon will do so with the 40D sensor and improve the overall IQ.
Don
http://www.pbase.com/dond
 
While I agree that a 10 or even 20 megapixel sensor is likely to give
better IQ than an 8 MP sensor, the math and physics do not agree with
a 200 MP sensor. Please correct my figures if they are wrong. An 8 MP
30D sensor counts approximately 50,000 electrons at ISO 100 full well
capacity. That would translate to about 2,000 electrons on a 200 MP
sensor at ISO 100 (50,000 * 8/200), or 125 electrons at ISO 1600. To get
read noise close to the same level you need 25x as many converters.
You don't have to maintain the same read noise at the pixel level. If you have 25x as many pixels, you can have 5x the read noise at the pixel level without increasing the read noise of the image. You might not get much more than that, even without parallelizing the readout.
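As I understand it, the reasoning is that read noise adds in quadrature when small pixels are summed back into one big-pixel-sized area. A sketch with a made-up read noise value:

    from math import sqrt
    k = 25        # small pixels replacing one big pixel
    r_big = 5.0   # big-pixel read noise in electrons (made-up value)
    # Naive proportional scaling would demand r_big/k per small pixel;
    # quadrature summation only demands r_big/sqrt(k), which is 5x looser.
    r_small = r_big / sqrt(k)
    print(sqrt(k) * r_small)  # 5.0: summed read noise matches the big pixel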
Given 125 electrons, shot noise, and likely higher proportional
read noise, there would be a much higher noise level on the 200 MP
sensor (no matter how random it was).
Not at all. Most people look at shot noise completely backwards. Shot noise IS the true nature of light; it's not a problem created by cameras. Unless you have a coarse-grained film, a sensor, or a retina, all of which bin photons from large areas, shot noise everywhere in the universe is nearly infinite.

If you toss a handful of marbles into a tray with 4 square compartments, thinking that you will have four samples, but some jokester had placed little dividers to turn each compartment into 4 smaller ones, for a total of 16 small compartments, do you have more noise in your sampling? Do you really lower noise by pulling out the extra dividers and letting four small compartments become one? No; all you have done is discard resolution information. The situation is no different for photons.
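The marble analogy is easy to simulate; a quick Monte Carlo sketch (numpy, illustrative numbers only):

    import numpy as np

    rng = np.random.default_rng(0)
    trials = 100_000
    small = rng.poisson(lam=100, size=(trials, 16))  # 16 small compartments
    big = small.reshape(trials, 4, 4).sum(axis=2)    # dividers pulled out

    print(small.std() / small.mean())  # ~0.10 per small compartment
    print(big.std() / big.mean())      # ~0.05 per big compartment
    # The total catch per trial is identical either way; merging
    # compartments only discards spatial information.

The per-compartment numbers look noisier with more dividers, but the total light collected, and hence the image-level noise, is unchanged.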
Given that lenses are harder
to make for large sensors it is unlikely that any reasonably priced
lenses would be able to give that resolution.
There are already many lenses being wasted on the coarse resolution of DSLRs.

Any lens worth putting a TC on is such a lens.

--
John

 
Yeah, I'm not convinced either. There could be a point there about
dynamic range, I guess, but from 8 to 12 MP and with advancing sensor
technology I don't think it would be very relevant. But also I
wouldn't want much more than 12MP, the low to no gain in resolution
for 99.9% of my images
Oversampled images don't need to suffer demosaicing artifacts (you don't need to demosaic, except to get pixel-level luminance detail), and stand up much better to geometrical editing, such as rotation, downsampling, perspective correction, etc.
vs. the increase in computer power and storage
isn't worth it at all (unless you are doing a lot of cropping, but
this reduces the effective "sensor size" and image-level quality as
well, so it's pretty much moot). I would probably go for 14-bit color
or something like that, that could justify the increase in PC power
and storage.
14-bit RAW data is a marketing gimmick, IMO. Whether or not a particular 14-bit ADC does a better job is unknown, but with the levels of read noise present in current cameras, you don't need more than 12 bits of RAW data coming out of the camera, even at low ISOs on many models.
--
John

 
If you toss a handful of marbles into a tray with 4 square
compartments, thinking that you will have four samples, but some
jokester had placed little dividers to turn each compartment into 4
smaller ones, for a total of 16 small compartments, do you have more
noise in your sampling?
Nice analogy, John.
 
Given 125 electrons, shot noise, and likely higher proportional
read noise, there would be a much higher noise level on the 200 MP
sensor (no matter how random it was).
Not at all. Most people look at shot noise completely backwards.
Shot noise IS the true nature of light; it's not a problem created by
cameras. Unless you have a coarse-grained film, a sensor, or a
retina, all of which bin photons from large areas, shot noise
everywhere in the universe is nearly infinite.
If we are at the level of 125 electrons, we are looking at a noise standard deviation on the order of 10% of peak signal. It doesn't matter how smoothed the result looks with many pixels; this much noise will destroy any shadow detail. Even assuming very small read noise, the noise floor should be at least 1 electron, so there is only about 6 EV of dynamic range at ISO 1600. This is 1.8 stops less than Phil measured on the XTi.
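Checking that arithmetic (my own quick figures, taking the 1-electron floor literally):

    from math import sqrt, log2
    signal = 125                  # electrons at ISO 1600 saturation, from above
    print(sqrt(signal) / signal)  # ~0.089: shot noise is ~9-10% of peak signal
    print(log2(signal / 1.0))     # ~6.97 stops with a 1-electron noise floor

Any read noise above 1 electron pulls the dynamic range below that, so the roughly 6 EV estimate is in the right neighborhood.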
If you toss a handful of marbles into a tray with 4 square
compartments, thinking that you will have four samples, but some
jokester had placed little dividers to turn each compartment into 4
smaller ones, for a total of 16 small compartments, do you have more
noise in your sampling? Do you really lower noise by pulling out the
extra dividers and letting four small compartments become one? No;
all you have done is discard resolution information. The situation
is no different for photons.
I like that example, but you forgot some things. The pixels in a physical sensor have width, so if you make the dividers very close together there is a greater chance that a marble will bounce off and end up in a different compartment, or outside the grid completely. Make the grid spacing smaller than the width of the marble and you measure nothing. There is also read noise; it is more difficult to count individual marbles correctly than one big group. You need more counters if you are going to count the same number of marbles at the same rate and accuracy.
Given that lenses are harder
to make for large sensors it is unlikely that any reasonably priced
lenses would be able to give that resolution.
There are already many lenses being wasted on the coarse resolution
of DSLRs.
I agree, but there are engineering tradeoffs. Adding more pixels increases the read noise level, given similar levels of technology. This is offset by lower quantization noise. At the order of magnitude of 10 megapixels, IMHO, the increased read noise is not noticeable, so it is an improvement to reduce quantization noise. At some point the pixels become small enough that the read noise increases faster than the quantization noise decreases. I do not know what that number is, but someone with better knowledge of the noise floor should be able to calculate it. If dynamic range is important, the limit should be 50 MP or smaller.
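A sketch of that tradeoff with made-up numbers (an ideal ADC contributes quantization noise of one LSB divided by sqrt(12)):

    from math import sqrt

    def total_noise_e(read_noise_e, full_well_e, adc_bits):
        lsb_e = full_well_e / 2 ** adc_bits  # electrons per ADC count
        quant_e = lsb_e / sqrt(12)           # ideal quantization noise
        return sqrt(read_noise_e ** 2 + quant_e ** 2)

    # Made-up example: 5 e- read noise, 50,000 e- well, 12-bit ADC.
    print(total_noise_e(5.0, 50_000, 12))  # ~6.1 e-: quantization matters
    print(total_noise_e(5.0, 50_000, 14))  # ~5.1 e-: read noise dominates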

--
John

 
Then you should be comparing equal physical size crops, to isolate that difference.
Are you saying a larger sensor with the same pixel count but lower pixel density gives a better IQ than a smaller one due to the sensor size difference instead of the pixel size difference, assuming they share the same modern technology?

If not that, why is it so important to isolate these things (pixel density in a given sensor size vs. pixel density in different sized sensors)?

I'm not sure whether any real-life examples can be found of a given sensor size with equal technology - including equal noise reduction - but different pixel density.

Timo
 
If you toss a handful of marbles into a tray with 4 square
compartments, thinking that you will have four samples, but some
jokester had placed little dividers to turn each compartment into 4
smaller ones, for a total of 16 small compartments, do you have more
noise in your sampling? Do you really lower noise by pulling out the
extra dividers and letting four small compartments become one? No;
all you have done is discard resolution information. The situation
is no different for photons.
Ok, but what about Dynamic Range? Wouldn't a 100 MP FF-sensor have a lousy DR compared to a 10 MP FF-sensor?
 
