Pixel density - can the playing field be leveled???

Started Jun 6, 2009 | Discussions
briander
New Member • Posts: 14
Pixel density - can the playing field be leveled???
Jun 6, 2009

I have read several posts that kind of cover this subject, but I can't seem to find the answer to my question, so forgive me if it has already been covered. I did try to search first!

I will try to ask my question as simply as possible. If two cameras have the same size sensor, and one is 10 MP and the other is 15 MP, and I lower the resolution on the 15 MP to 10, can I expect the same picture quality from a noise perspective? Of course, this is putting all of the other camera functions (noise reduction, etc.) aside.

I am tired of reading "if this company had resisted the megapixel war, this would be a great camera". What the hell does that mean?

Can I lower the resolution to get the same reduction in noise that the lower-res camera has?

Oly Canikon
Contributing Member • Posts: 978
Re: Pixel density - can the playing field be leveled???
In reply to briander, Jun 6, 2009

briander wrote:

I have read several posts that kind of cover this subject, but I
can't seem to find the answer to my question, so forgive me, if it
has already been covered. I did try to search first!

I will try to ask my question as simply as possible. If two cameras
have the same size sensor, and one is 10 MP and the other is 15
MP, and I lower the resolution on the 15 MP to 10, can I expect the
same picture quality from a noise perspective? Of course, this is
putting all of the other camera functions (noise reduction, etc.)
aside.

I am tired of reading "if this company had resisted the
megapixel war, this would be a great camera". What the hell does
that mean?

Can I lower the resolution to get the same reduction in noise that
the lower-res camera has?
--

Look up posts by ejmartin.
--
Did you photoshop that?

Iseewhatyoudidthere
New Member • Posts: 2
The answer to your question is right here
In reply to briander, Jun 6, 2009

From the horse's mouth:

http://blog.dpreview.com/editorial/2008/11/downsampling-to.html

Short answer, you can't bypass "the disadvantages that come with higher pixel densities such as diffraction issues, increased sensitivity towards camera shake, reduced dynamic range, reduced high ISO performance", according to http://www.dpreview.com/reviews/canoneos50d/page31.asp . It's physics, man!

It's understandable that you might be a bit confused, given the misleading posts of some troublemakers (fortunately now banned, thanks to DPR's renowned moderation system).

briander
New Member • Posts: 14
Re: The answer to your question is right here
In reply to Iseewhatyoudidthere, Jun 6, 2009

Iseewhatyoudidthere wrote:

From the horse's mouth:

http://blog.dpreview.com/editorial/2008/11/downsampling-to.html

Short answer, you can't bypass "the disadvantages that come with
higher pixel densities such as diffraction issues, increased
sensitivity towards camera shake, reduced dynamic range, reduced high
ISO performance", according to
http://www.dpreview.com/reviews/canoneos50d/page31.asp . It's physics,
man!

It's understandable that you might be a bit confused, given the
misleading posts of some troublemakers (fortunately now banned, thanks
to DPR's renowned moderation system).


Thanks very much. This was what I was looking for... I didn't think to search blogs.

ejmartin
Veteran Member • Posts: 6,274
Re: Pixel density - can the playing field be leveled???
In reply to briander, Jun 6, 2009

briander wrote:

I will try to ask my question as simply as possible. If two cameras
have the same size sensor, and one is 10 MP and the other is 15
MP, and I lower the resolution on the 15 MP to 10, can I expect the
same picture quality from a noise perspective? Of course, this is
putting all of the other camera functions (noise reduction, etc.)
aside.

This issue was studied around the time of (and in response to) Phil's blog post from last Fall, which spouts a lot of misinformation. Relevant posts are

http://forums.dpreview.com/forums/read.asp?forum=1018&message=30211624
http://forums.dpreview.com/forums/read.asp?forum=1000&message=31708796

Noise has two major sources, photon shot noise and electronic noise incurred in reading the sensor. Photon noise is purely an issue of the statistics of counting, as is explained here:

http://forums.dpreview.com/forums/read.asp?forum=1019&message=31922352
http://forums.dpreview.com/forums/read.asp?forum=1019&message=31922793

Smaller pixels mean smaller samples of the image, so noise per sample relative to sample size goes up. However, aggregating samples to the size of the larger pixels by downsampling recovers the sampling statistics of the larger-pixel sensor.
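A quick NumPy sketch of this counting argument (a toy simulation added for illustration: pure Poisson shot noise on a uniform gray frame, with made-up photon counts; no real camera data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate pure photon shot noise: a uniform gray scene where each
# small pixel collects a mean of 1000 photons (Poisson statistics).
mean_photons = 1000
small = rng.poisson(mean_photons, size=(1024, 1024)).astype(float)

# Relative noise (noise-to-signal) per small pixel: ~1/sqrt(1000)
rel_noise_small = small.std() / small.mean()

# Aggregate 2x2 blocks of small pixels into one "large" pixel.
large = small.reshape(512, 2, 512, 2).sum(axis=(1, 3))
rel_noise_large = large.std() / large.mean()

# Binning quadruples the photon count per sample, so relative noise
# halves, matching a sensor whose pixels have 4x the area.
print(rel_noise_small / rel_noise_large)  # ~2.0
```

The same sqrt(N) statistics apply whether the photons are counted by one large pixel or summed over four small ones, which is the point being made above.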

Read noise is more complicated. If the two 10/15 MP cameras under discussion are the 40D and 50D, then read noise went down enough between the two that the 50D has a slight advantage.

I am tired of reading "if this company had resisted the
megapixel war, this would be a great camera". What the hell does
that mean?

Can I lower the resolution to get the same reduction in noise that
the lower-res camera has?
--

I would put it differently: does a higher-MP camera have the same noise at the same spatial frequency (i.e. scale in the image) as a lower-MP camera? The answer is yes, unless their read noises are wildly different (when normalized to the same scale). Phrased this way, no resampling need be done, since image scale and spatial frequency are defined without resampling the image; it's just that the image scale corresponding to the pixel level of the lower-resolution camera is somewhat coarser than the pixel level of the higher-resolution camera, and one needs to take this into account when making comparisons. DPR does not do this, and it is a major flaw in their testing protocol; it makes many of their noise comparisons meaningless (unless comparing cameras of equal resolution). For a proper testing methodology, go to DxOMark.com.

ejmartin
Veteran Member • Posts: 6,274
Re: The answer to your question is right here
In reply to Iseewhatyoudidthere, Jun 6, 2009

Iseewhatyoudidthere wrote:

From the horse's mouth:

http://blog.dpreview.com/editorial/2008/11/downsampling-to.html

Short answer, you can't bypass "the disadvantages that come with
higher pixel densities such as diffraction issues, increased
sensitivity towards camera shake, reduced dynamic range, reduced high
ISO performance", according to
http://www.dpreview.com/reviews/canoneos50d/page31.asp . It's physics,
man!

It's nothing of the sort. It's disinformation to further an agenda. If you want to understand why:

http://forums.dpreview.com/forums/read.asp?forum=1000&message=30176643
http://forums.dpreview.com/forums/read.asp?forum=1031&message=31560647

A nice summary of the relevant issues can be found in a pair of posts by Daniel Browning, with relevant references:

http://forums.dpreview.com/forums/read.asp?forum=1000&message=31703921
http://forums.dpreview.com/forums/read.asp?forum=1000&message=31703977

Ola Forsslund
Regular Member • Posts: 426
No, DPR is WRONG on this issue!
In reply to Iseewhatyoudidthere, Jun 6, 2009

Iseewhatyoudidthere wrote:

From the horse's mouth:

http://blog.dpreview.com/editorial/2008/11/downsampling-to.html

Short answer, you can't bypass "the disadvantages that come with
higher pixel densities such as diffraction issues, increased
sensitivity towards camera shake, reduced dynamic range, reduced high
ISO performance", according to
http://www.dpreview.com/reviews/canoneos50d/page31.asp . It's physics,
man!

I'm sorry to tell you that DPR is completely wrong on this issue. Of course, if you use their method for downsampling, they might be right. But their method is clearly flawed!

It's understandable that you can be a bit confused because misleading
posts of some troublemakers (fortunately banned now owing to renowned
DPR's moderation system)

That is hilarious! They ban at random! I guess they should ban Eric Fossum, who developed the first CMOS sensor, as well, because he is very clear on this issue: "IQ isn't all about the number of pixels, but number of pixels helps, all else being equal." ( http://forums.dpreview.com/forums/read.asp?forum=1000&message=26737153 )

 Ola Forsslund's gear list:
Nikon D800 Nikon AF-S Nikkor 70-200mm f/2.8G ED VR Nikon AF-S DX Nikkor 12-24mm f/4G ED-IF Nikon AF Nikkor 20mm f/2.8D Nikon AF Nikkor 50mm f/1.8D +2 more
Graystar
Veteran Member • Posts: 8,373
Re: Pixel density - can the playing field be leveled???
In reply to briander, Jun 6, 2009

briander wrote:

I will try to ask my question as simply as possible. If two cameras
have the same size sensor, and one is 10 MP and the other is 15
MP, and I lower the resolution on the 15 MP to 10, can I expect the
same picture quality from a noise perspective?

It’s been clearly established that reducing resolution doesn’t reduce noise. Assuming the same sensor technology, you'd get the same noise level even if you didn't lower the resolution.

Reducing resolution doesn't reduce noise because noise is primarily based on sensor size. Same size sensors...same amount of noise.

Confusion arises when the noise is compared at various levels. Smaller pixels will have more noise per pixel than larger pixels. However, when comparing the printed images the noise levels will be the same.

All of that being said, I personally have a theory that a sensor with smaller pixels actually does produce an image with more noise than a sensor of the same size with larger pixels. But that type of thing is hard to prove, and it really doesn't matter because you can only buy what manufacturers are offering anyways.

In short, don’t worry about pixel size or noise. Sensor technology has matured to the point where there isn’t much difference in performance between the various manufacturers. Just get the camera with the features you desire.

Ola Forsslund
Regular Member • Posts: 426
Yes, the same or better (by sqrt(2))
In reply to briander, Jun 6, 2009

briander wrote:

I will try to ask my question as simply as possible. If two cameras
have the same size sensor, and one is 10 MP and the other is 15
MP, and I lower the resolution on the 15 MP to 10, can I expect the
same picture quality from a noise perspective? Of course, this is
putting all of the other camera functions (noise reduction, etc.)
aside.

Let me quote Eric Fossum, who developed the first CMOS sensor: "IQ isn't all about the number of pixels, but number of pixels helps, all else being equal." http://forums.dpreview.com/forums/read.asp?forum=1000&message=26737153

I am tired of reading "if this company had resisted the
megapixel war, this would be a great camera". What the hell does
that mean?

It means that people commonly get fooled by comparing images of different sizes! They do this by viewing pixels at '100% zoom', ignoring the fact that the higher-MP image is bigger.

Can I lower the resolution to get the same reduction in noise that
the lower-res camera has?

Yes, but you will need to apply a filter (a blur filter, for example) first, since the resampling algorithms in popular programs (e.g. Photoshop) are poor.
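The point about filtering before resampling can be illustrated with a toy NumPy sketch (a flat gray frame with made-up Gaussian noise standing in for a real capture): naive decimation keeps the full per-pixel noise, while averaging each block first (a crude low-pass filter) reduces it.

```python
import numpy as np

rng = np.random.default_rng(1)

# A flat gray frame with additive noise standing in for a noisy capture.
img = 100.0 + rng.normal(0.0, 4.0, size=(1024, 1024))

# Naive decimation: keep every other pixel.  No low-pass filtering,
# so the per-pixel noise is untouched (and fine detail would alias).
naive = img[::2, ::2]

# Filtered decimation: average each 2x2 block first (a crude low-pass
# filter), then the noise standard deviation drops by a factor of 2.
filtered = img.reshape(512, 2, 512, 2).mean(axis=(1, 3))

print(naive.std(), filtered.std())  # ~4.0 vs ~2.0
```

A resampler that skips the low-pass step behaves like the naive path here, which is why applying a blur first can matter.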

 Ola Forsslund's gear list:
Nikon D800 Nikon AF-S Nikkor 70-200mm f/2.8G ED VR Nikon AF-S DX Nikkor 12-24mm f/4G ED-IF Nikon AF Nikkor 20mm f/2.8D Nikon AF Nikkor 50mm f/1.8D +2 more
ejmartin
Veteran Member • Posts: 6,274
Re: Pixel density - can the playing field be leveled???
In reply to Graystar, Jun 6, 2009

Graystar wrote:

All of that being said, I personally have a theory that a sensor with
smaller pixels actually does produce an image with more noise than a
sensor of the same size with larger pixels. But that type of thing
is hard to prove, and it really doesn't matter because you can only
buy what manufacturers are offering anyways.

I would say it the following way. Taking the example of the 40D and 50D, I measured the noise as a function of spatial frequency for a midtone at ISO 1600 and found the following.

They are the same at the same spatial frequency. However, the 50D has a higher Nyquist frequency, so its noise power spectrum extends further. The pixel-level standard deviation measures the area under the curve, and it is higher for the 50D because the spectrum extends further.

So noise at the same spatial scale is the same; noise at very fine scales is present on the 50D because the 50D has finer scales available than the 40D. The question is what the visual impact of that extra noise power at fine scales is. Well, it's going to be similar to the visual impact of higher resolution: if you can see the effect on resolution of the extra pixels of the 50D, then you're potentially going to be able to see more noise as well, but only at very fine scales. It should also be noted that interpolation errors are pushed off to finer scales, and are thus less objectionable, so there is somewhat of a compensating effect, depending on how good the RAW converter is.
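A toy 1-D NumPy sketch of this spectrum argument (all numbers are made up: white Gaussian noise stands in for real sensor noise, and the sqrt(2) per-pixel penalty for the denser sampling is assumed, not measured):

```python
import numpy as np

rng = np.random.default_rng(2)
L = 1.0  # same physical sensor width for both "sensors" (arbitrary units)

# "40D-like": 512 samples; "50D-like": 1024 samples over the same width,
# with sqrt(2) more noise per (smaller) pixel -- an assumed toy stand-in
# for the per-pixel shot-noise penalty of smaller pixels.
a = rng.normal(0.0, 1.0, 512)
b = rng.normal(0.0, np.sqrt(2.0), 1024)

def psd(x, length):
    """Periodogram power, normalized per unit of physical spatial frequency."""
    n = len(x)
    spec = np.abs(np.fft.rfft(x)) ** 2 / n
    return spec * (length / n)

psd_a, psd_b = psd(a, L), psd(b, L)

# Average PSD over the frequencies both sensors share: about equal.
shared = len(psd_a)
print(psd_a[1:shared].mean(), psd_b[1:shared].mean())  # approximately equal

# But b's spectrum extends to twice the Nyquist frequency, so its total
# variance (the area under the whole curve) is about twice as large.
print(b.var() / a.var())  # ~2.0
```

This mirrors the measurement described above: equal noise power at shared frequencies, with the extra pixel-level standard deviation coming entirely from the extended tail of the spectrum.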

Now, what to do about it? One option is nothing, if the fine-scale noise is not objectionable. Another is to apply a low-pass filter, which is what downsampling does; this seems a bit silly, as it removes all image data beyond the cutoff of the low-pass filter, both noise and image detail. Most sensible, if the noise is bothersome, is to use one of the better noise filters; that will dampen the high-frequency noise while retaining much of the high-frequency detail.

In short, don’t worry about pixel size or noise. Sensor technology
has matured to the point where there isn’t much difference in
performance between the various manufacturers. Just get the camera
with the features you desire.

Couldn't agree more.

Derge
Forum Member • Posts: 96
Re: The answer to your question is right here
In reply to Iseewhatyoudidthere, Jun 6, 2009

Obvious troll is obvious.

-- hide signature --

'So what do you take pictures of?' 'Mostly nouns.'

Graystar
Veteran Member • Posts: 8,373
Re: The answer to your question is right here
In reply to briander, Jun 7, 2009

briander wrote:

Iseewhatyoudidthere wrote:

From the horse's mouth:

http://blog.dpreview.com/editorial/2008/11/downsampling-to.html

Thanks very much. This was what I was looking for... I didn't think
to search blogs.

I don’t think that’s really what you’re looking for.

DPR’s tests are technically correct for what they tested...the change in the statistical variation across a field of gray when changing resolution. The results are accurate. What’s questionable is the applicability of those results to everyday images.

In the DPR test there was only a single feature in the image...the color gray. I’d say that the vast majority of images have slightly more detail than that. When you downsample an image you’re not reducing the noise. You’re just making the noise smaller, along with everything else in the image. So what does that mean?

If you take a noisy image and print it at 11x14, then resize it to a quarter of its original resolution, then print again at 11x14, you will have two images that appear to contain exactly the same amount of noise, but one image will appear more jagged than the other (due to lower resolution). This is an experimentally proven fact. So the words “just making the noise smaller” mean that you can’t actually reduce noise by resizing an image...you can only make it more difficult to see by making it smaller.

Daniel Browning
Senior Member • Posts: 1,058
[1/6] Myth busted: small pixels bad, 4 legs good - part 1
In reply to briander, Jun 7, 2009

[Part 1 out of 6.]

It is often stated that an image sensor with small pixels will create digital images that have worse performance characteristics (more noise, less dynamic range, lower sensitivity, worse color depth, more lens aberrations, worse diffraction, and more motion blur) than those created by a sensor with large pixels. I disagree.

One line of reasoning used by proponents of that position is that a single pixel, in isolation, when reduced in size, has lower performance; therefore, a sensor full of small pixels creates images that have worse performance than the same sensor full of large pixels. But that's missing the forest for the trees: the reality is that the resulting images are generally the same, as indicated in a paper by G. Agranov at the 2007 International Image Sensor Workshop:

http://www.imagesensors.org/Past%20Workshops/2007%20Workshop/2007%20Papers/079%20Agranov%20et%20al.pdf

Again, it is possible for small pixel sensors to have worse performance per pixel, but the same performance when actually displayed or used for the same purpose as a large pixel sensor. This fact may be unbelievable, or at least counter-intuitive, to many people who work with digital images, but I believe that is only because of the following five types of mistakes that are frequently made in image analysis:

  • Unequal spatial frequencies.

  • Unequal sensor sizes.

  • Unequal processing.

  • Unequal expectations.

  • Unequal technology.

Spatial Frequency

The first category, spatial frequency, is the most important and fundamental element of image analysis as it pertains to pixel size. This aspect of an image indicates the level(s) of detail under analysis: fine details (high spatial frequencies) or coarser information (low spatial frequencies). It is often ignored completely, and at other times poorly understood, but it always has a tremendous impact on the result of any comparison or performance analysis.

The great majority of image analysis is fundamentally based on the performance of a single pixel, so having worse performance per pixel and the same performance in the actual image, where it matters, would seem a contradiction. It isn't.

Performance scales with spatial frequency. In other words, the many important performance characteristics of a digital image are all a function of spatial frequency, including noise, dynamic range, color depth, diffraction, aberrations, and motion blur. Therefore, for any given sensor, analysis of higher spatial frequencies will never show better performance than analysis of lower spatial frequencies.

Every image sensor has a sampling rate and a corresponding Nyquist frequency, the highest spatial frequency at which the sensor can sample information. But every resulting digital image also contains information at all lower spatial frequencies. For example, sensor A may have a native sampling rate of 30 lp/mm, but the resulting digital image also contains information corresponding to 20 lp/mm and 10 lp/mm, which are larger, coarser details. Sensor B's pixels may be much smaller, sampling natively at 60 lp/mm, but the resulting image still contains all the information of sensor A's image; it simply has additional information.

A 100% crop is the most common way to compare image sensors, but it is very misleading when the sensors have different pixel sizes. The reason is that 100% means the maximum spatial frequency, and different pixel sizes sample different spatial frequencies, so a 100% crop means higher spatial frequencies for small-pixel sensors than it does for big-pixel sensors. This results in comparisons of completely different portions of the image: a 100% crop of a small-pixel image would show a single leaf, whereas a 100% crop of a large-pixel image would show the entire shrub. It's a nonsensical comparison. Failing to account for that important and fundamental difference is one of the most common flaws in such comparisons.

This type of flaw is much more rare in optics analysis. There it is widely understood that standard optical measurements, such as MTF, are naturally and fundamentally a function of spatial frequency. If the MTF of lens A is 30% at 10 lp/mm, and the MTF of lens B is 20% at 100 lp/mm, that does not mean lens A is superior; in fact, the opposite is more likely true. It's necessary to measure the lenses at the same frequency, either 30 lp/mm or 100 lp/mm, before drawing conclusions. It's very likely that lens B has a much higher MTF at 10 lp/mm. Of course, comparing MTF without regard for spatial frequency is so obviously wrong that very few people ever make that mistake. However, those same people do not realize they are making the exact same error when they compare image sensors with 100% crops. They are comparing at their respective Nyquist frequencies, but they have different Nyquists, so they are not the same spatial frequency.
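A hypothetical numeric illustration of that MTF comparison (the Gaussian-style curves below are fitted to the example figures in the paragraph above, 30% at 10 lp/mm for lens A and 20% at 100 lp/mm for lens B; they are made-up models, not real lens data):

```python
# Made-up Gaussian-style MTF models fitted to the example figures above.
def mtf_a(freq_lpmm):
    return 0.5 ** ((freq_lpmm / 7.6) ** 2)

def mtf_b(freq_lpmm):
    return 0.5 ** ((freq_lpmm / 65.6) ** 2)

# Quoted at different frequencies, the weaker lens looks better:
print(round(mtf_a(10), 2), round(mtf_b(100), 2))  # 0.3 0.2

# Evaluated at the SAME frequency, lens B is clearly superior:
print(round(mtf_a(10), 2), round(mtf_b(10), 2))   # 0.3 0.98
```

Comparing the two first print statements is the optical analogue of a 100% crop comparison; the second print is the frequency-matched comparison the paragraph calls for.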

Take a 100% crop comparison of a high resolution image (e.g. 15 MP) with a low resolution image (e.g. 6 MP) for example. The high resolution image contains details at a very high spatial frequency (fine details), whereas the low-res image is at a lower spatial frequency (larger details). Higher spatial frequencies have higher noise power than low spatial frequencies. But at the same spatial frequency, noise too is the same.

[Continued in part 2.]
--
Daniel

Daniel Browning
Senior Member • Posts: 1,058
[2/6] Myth busted: small pixels bad, 4 legs good - part 2
In reply to Daniel Browning, Jun 7, 2009

[Part 2 out of 6.]

Although it's not necessary, it is always possible to resample any two images (e.g. large pixel and small pixel images) to the same resolution for comparison. This would make it possible to compare 100% crops and draw conclusions about the spatial frequencies under analysis. However, the ability to resample is sometimes called into question, such as in a blog post by Phil Askey at DPReview.com:

http://blog.dpreview.com/editorial/2008/11/downsampling-to.html

That post was thoroughly debunked:

http://forums.dpreview.com/forums/read.asp?forum=1018&message=30190836

http://forums.dpreview.com/forums/read.asp?forum=1000&message=30176643

http://forums.dpreview.com/forums/read.asp?forum=1031&message=31560647

There is ample proof that resampling works in practice as well as in theory. Given that fact, as long as small pixel sensors have proportionately higher noise and higher spatial frequencies, it will always be possible to resample the image and get lower noise power at lower spatial frequencies, so that the image is the same as that created by large pixel sensors. For example:

http://forums.dpreview.com/forums/read.asp?forum=1018&message=30211624

http://forums.dpreview.com/forums/read.asp?forum=1018&message=30190836

Again, in cases where it's not necessary to resample the image, it is best not to, since the resolution often has a highly beneficial impact on the image.

Another way to think about it is performance per detail. Say one small but important detail in an image is an eye. A large pixel sensor has a certain performance "per eye", so that over the area of the eye there is a certain noise power, dynamic range, etc. A small pixel sensor, too, has a certain performance per eye, again over the same area, only there are many more pixels. The noise power per pixel is higher, but since each pixel contributes a smaller portion of the eye, the noise power per eye is the same.

Here are some very good references about noise and spatial frequency:

http://luminous-landscape.com/forum/index.php?showtopic=29801&st=20&p=241562&#entry241562

http://forums.dpreview.com/forums/read.asp?forum=1000&message=30394220

http://forums.dpreview.com/forums/read.asp?forum=1034&message=31584345

Here are some images comparing small pixels and large pixels at low analog gain and high analog gain:

http://forums.dpreview.com/forums/read.asp?forum=1019&message=31512159

This error may have roots in the fact that the standard engineering measurements for sensor characteristics such as noise are necessarily at the level of the pixel. Sensitivity is measured in photoelectrons per lux second per pixel. Read noise is measured in RMS e- or ADU per pixel. Dynamic range is measured in stops or dB per pixel. There is nothing wrong with per-pixel measurements per se, but they cannot be compared across different pixel sizes without understanding the difference in spatial frequency.

Image sensor performance, like MTF, cannot be compared without understanding differences in spatial frequency.

[Continued in part 3.]


Daniel

Daniel Browning
Senior Member • Posts: 1,058
[3/6] Myth busted: small pixels bad, 4 legs good - part 3
In reply to Daniel Browning, Jun 7, 2009

[Part 3 out of 6.]

Unequal sensor sizes

Sensor size is separate from pixel size. Some assume that the two are always correlated, so that larger sensors have larger pixels, but that is not the case. Sensor size is generally the single most important factor in image sensor performance; therefore, it's always necessary to consider its impact on a comparison of pixel size. The most common form of this mistake goes like this:

  • Compacts have more noise than DSLR cameras.

  • Compacts have smaller pixels than DSLR cameras.

  • Therefore smaller pixels cause more noise.

The logical error is that correlation is not causation. The reality is that it is not the small pixels that cause the noise, but small sensors. A digicam-sized (5.6x4.15mm) sensor with super-large pixels (0.048 MP) will not have superior performance to a 56x41.5mm sensor with super-tiny pixels (48 MP). Even the size of the lens points to this fact: the large sensor will require a lens that is perhaps 50 times larger and heavier for the same f-number and angle of view, and that lens will focus a far greater quantity of light than the very tiny lens on a digicam. When they are both displayed or used in the same way, the large sensor will have far less noise, despite the smaller pixels.
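The arithmetic for those example sensor sizes, under the usual shot-noise-limited assumption that SNR goes as the square root of the total light collected:

```python
# Shot-noise-limited SNR scales with the square root of the total light
# collected, which (at the same f-number and exposure) scales with
# sensor area, regardless of how that area is divided into pixels.
small_sensor = 5.6 * 4.15   # mm^2, digicam-sized sensor from the example
large_sensor = 56.0 * 41.5  # mm^2, 10x linear, 100x area

area_ratio = large_sensor / small_sensor
snr_gain = area_ratio ** 0.5  # sqrt(N) counting statistics

print(area_ratio, snr_gain)  # 100x the light, 10x the SNR
```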

Unequal processing.

Unequal processing is essentially all about the handling of the raw data from the ADC to the output of the converter. The best method to draw conclusions about pixel size is to analyze just the raw data itself, before a raw converter can introduce inequalities, bias, and increase experimental error; however, it is possible to get useful information after a raw conversion in certain conditions. The types of errors in this category include the following.

Out-of-camera images. Processed formats that come directly out of the camera, such as JPEG (and including video such as HDV, HDCAM-SR, etc.), are good for drawing conclusions about the utility of that processed format for whatever purpose is needed, but they do not accurately reflect the sensor itself. Furthermore, any conclusions, which are necessarily subjective, cannot be generalized to pixel sizes in all cameras. Too much processing has already been applied to the raw data, including noise reduction, saturation, black point, tone curve, and much more, all of which have an effect on apparent noise, sensitivity, color, and dynamic range.

Unequal raw preconditioning. Most cameras apply a certain amount of processing before the raw file is written. Typically it includes at least hot/dead pixel remapping, but sometimes other things as well. Many apply a black clip at or near the mean read noise level. Some remove the masked pixels. Nikon performs a slight white balance. Canon compensates for the sensor's poor angle of response by increasing brightness for f-numbers wider than f/2.8. Pentax applies slight median-filter-based noise reduction on raw files when set to ISO 1600. Etc. Much of this pre-processing of the raw file can be factored into the comparison; the chief thing is to be aware of its occurrence.

Unequal raw converters. One raw converter may use totally different processing than another, even if the settings look the same. For example, setting noise reduction to "off" in most converters does not actually turn off all noise reduction; it just reduces noise reduction to the lowest level the creator is willing to allow.

Another common mistake is to think that a given raw converter will process two different cameras equally. None of the popular converters do, though there are some that do, e.g. dcraw, IRIS, Rawnalyze. One of the most popular converters, ACR, for example, varies greatly in its treatment of cameras, with different styles for different manufacturers, for different models from the same manufacturer, and even from one minor version of ACR to the next.

Furthermore, even if a raw converter is used that can be proven to be totally equal (e.g. dcraw), the method it uses might be better suited to one type of sensor (e.g. strong OLPF, fewer aliases) than another (e.g. weak OLPF, more aliases). Even this can lead to incorrect conclusions, because some cameras combined with certain scenes may benefit more from certain styles of conversion. For example, one demosaic algorithm may give the best results for a sensor with slightly higher noise at Nyquist. One way to work around this type of inequality is to examine and measure the raw data itself before conversion, such as with IRIS, Rawnalyze, or dcraw. The important thing is to be aware of the possibility for inequalities to arise from processing.

[Continued in part 4.]


Daniel

Daniel Browning
Senior Member • Posts: 1,058
[4/6] Myth busted: small pixels bad, 4 legs good - part 4
In reply to Daniel Browning, Jun 7, 2009

[Part 4 out of 6.]

Unequal expectations.

This is the type of inequality that stems from holding different pixel sizes to a different standard, expectation, or purpose. It unlevels the playing field. For example, some claim that in order for small pixels to be equal to large pixels, they must have the same performance (e.g. noise power) when displayed at much larger sizes (finer scales). But in reality, to be equal, they only need to have the same noise power when displayed at the same size. One might expect that a camera with 50% more megapixels could be displayed 50% larger without any change in the visibility of noise, despite the same low scene luminance. That is an unequal expectation.

This can also be manifested in the idea that in order to be equal, one should be able to crop a smaller portion of the image from a small pixel sensor and still have the same performance (e.g. noise). The correct method is to crop the same portion out of both sensors for comparison. If one is cropping the center 10% from the large pixel sensor (e.g. 500x500), then one should crop the same 10% from the small pixel sensor (1000x1000).
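A minimal sketch of that cropping procedure (hypothetical image sizes; `center_crop` is an illustrative helper, not from any particular library):

```python
import numpy as np

# Crop the same physical portion (the center 10% of each dimension) from
# a large-pixel image and a small-pixel image of the same sensor area.
def center_crop(img, fraction=0.1):
    h, w = img.shape[:2]
    ch, cw = int(h * fraction), int(w * fraction)
    top, left = (h - ch) // 2, (w - cw) // 2
    return img[top:top + ch, left:left + cw]

big_pix = np.zeros((500, 500))     # large-pixel image
small_pix = np.zeros((1000, 1000)) # small-pixel image, same sensor area

print(center_crop(big_pix).shape, center_crop(small_pix).shape)
# (50, 50) (100, 100)
```

Both crops cover the same fraction of the frame, so the comparison stays fair; cropping equal pixel counts from each would not.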

If one only expects it to be displayed at the same size, or to show the same portion of the image, with the same performance (noise) for the same light, then that is an equal expectation.

Unequal technology.

This type of mistake is almost never made in support of large pixels, since large pixels are almost invariably the older technology. However, it is worth pointing out anyway. In one sense, it will never be possible to compare any two cameras with completely equal technology, because even unit-to-unit manufacturing tolerances of the same model will cause there to be inequalities.

Having now discussed the many factors that cause flaws in image analysis as it pertains to pixel size, let's look at some specific topics.

Fill factor

One topic that often comes up in a discussion of pixel size is fill factor, which is the relative area of the photoreceptive portion of the pixel, e.g. the photodiode. It is commonly asserted that fill factor gets worse as pixel size shrinks ("smaller buckets mean more space between the buckets"). In fact, this has not occurred: "Fill factor pretty much has scaled with technology, ..." (CMOS sensor architect)

http://forums.dpreview.com/forums/read.asp?forum=1000&message=30060428

Comparing the Sony ICX495AQN to the ICX624, for example, pixel pitch shrank from 4.84µm to 4.12µm, a decrease of 15%. But instead of losing 15% of its photoreceptive area, the pixel actually gained 7% (a 22% swing).

Another assertion is that full well capacity decreases with pixel size. In fact, sensor designers say the reverse is true: "smaller pixels have greater depth (per unit area) and saturate 'later in time'".

http://forums.dpreview.com/forums/read.asp?forum=1000&message=30017021

Read noise

Sensors vary greatly in read noise. There are some large pixel sensors that have a high gain mode with lower read noise than other large pixel sensors and small pixel sensors, but they invariably have much higher read noise at base gain. On the other hand, at base gain, small pixel sensors have much less read noise than any of the large pixel sensors, with or without variable gain amps. Furthermore, generally, as large pixels are made smaller, with or without variable gain amps, read noise is improved.

Voltage

Some assert that smaller pixels put out lower voltages, and the increased amplification necessary results in worse noise. However, Eric Fossum has stated that no sensor designer would allow this to be the case. The simplified description of how it works is given here by an electrical engineer:

The accumulated charge of photoelectrons collects on the gate of the source follower transistor, which has a gain of something less than one. The output of this amplifier is the voltage output of the cell (actually transferred via the column amplifier). This voltage is determined by the voltage on the source follower gate, which in turn is given by the accumulated charge divided by the capacitance of the gate (the well-known expression V = Q/C).

Now take a cell and scale it uniformly by a factor s. The cell is s^2 times smaller in area, so the accumulated charge is s^2 times smaller. However, the gate capacitance is also s^2 times smaller. So the output voltage is now (Q/s^2) / (C/s^2) = Q/C = V. Scaling has not changed the output voltage of the cell, so no extra amplification is needed.
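That cancellation can be checked numerically. A minimal sketch, using made-up charge and capacitance values (only V = Q/C is taken from the explanation above; the specific numbers are illustrative, not from any real sensor):

```python
# Illustrative check that uniform pixel scaling leaves output voltage unchanged.
# The charge and capacitance values are invented for demonstration;
# V = Q / C is the only physics used.

ELECTRON_CHARGE = 1.602e-19  # coulombs

def output_voltage(charge_e, capacitance_f):
    """Voltage on the source-follower gate: V = Q / C."""
    return (charge_e * ELECTRON_CHARGE) / capacitance_f

# A hypothetical large pixel: 50,000 e- full well on a 10 fF gate.
q_large = 50_000
c_large = 10e-15

s = 2.0  # shrink linear dimensions by 2x
# Area scales by s^2, so both accumulated charge and gate capacitance drop by s^2.
q_small = q_large / s**2
c_small = c_large / s**2

v_large = output_voltage(q_large, c_large)
v_small = output_voltage(q_small, c_small)

print(v_large, v_small)  # identical: the s^2 factors cancel in Q/C
```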

Optical and mechanical issues.

There are many things that can affect the resolution of an image, including diffraction, aberrations, manufacturing tolerances, and motion blur (from camera shake or subject movement). In the face of these issues, some will claim that small pixels are actually worse than large pixels. This is easily proven false. The reality is that all of these factors may cause diminishing returns, but returns never diminish below 0%.

Diffraction, in particular, is often misunderstood. As pixel size decreases there are two points: one at which diffraction is just barely beginning to have an effect on the image, and another at which the resolution improvement is so small that it's immeasurable. The most common mistake is to think these are the same point, or that they are anywhere near each other. In fact, they are very far apart.
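To put rough numbers on those two points (my assumptions, not the post's: 550 nm green light and an ideal diffraction-limited lens whose incoherent cutoff is 1/(lambda*N), so the figures are illustrative only):

```python
# Rough numbers for where diffraction "begins" vs. where returns vanish.
# Assumption: 550 nm light, ideal diffraction-limited lens.

LAMBDA_MM = 550e-6  # wavelength of green light, in mm

def cutoff_lp_per_mm(f_number):
    """Absolute diffraction cutoff: beyond this spatial frequency, MTF is zero."""
    return 1.0 / (LAMBDA_MM * f_number)

def nyquist_lp_per_mm(pitch_um):
    """Sensor Nyquist frequency for a given pixel pitch in microns."""
    return 1000.0 / (2.0 * pitch_um)

print(cutoff_lp_per_mm(11))    # ~165 lp/mm: only here do returns truly reach zero
print(nyquist_lp_per_mm(5.0))  # 100 lp/mm: 5 um pixels sample well below the f/11 cutoff
print(nyquist_lp_per_mm(3.0))  # ~167 lp/mm: only near 3 um pitch does sampling reach the f/11 cutoff
```

Diffraction softens the image well before the cutoff, but as long as the sensor Nyquist is below the cutoff, finer pixels still recover some additional detail.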

[Continued in part 5.]

-- hide signature --

Daniel

Reply   Reply with quote   Complain
Daniel Browning
Senior MemberPosts: 1,058
Like?
[5/6] Myth busted: small pixels bad, 4 legs good - part 5
In reply to Daniel Browning, Jun 7, 2009

[Part 5 out of 6.]

Other considerations

There are at least three things to consider with regard to pixel size

  • File size and workflow

  • Magnification value

  • In-camera processing (JPEG, etc.)

File size is an obvious one. Magnification is what causes telephoto (wildlife, sports, etc.) and macro shooters to often prefer high pixel density bodies.

Out-of-camera JPEGs are affected by pixel density because manufacturers have added stronger noise reduction which may not be desired and may be difficult to tune with in-camera settings.

Higher pixel densities may require bigger files, slower workflow, and longer processing times. Lower pixel densities may result in smaller files, faster workflow, and shorter processing times. This is an area where there are many possible software solutions for having most of the benefit of smaller pixels without the size/speed downsides. REDCODE is a good example.

Finally, here is the math on a comparison of dynamic range between the LX3 and 5D2.

Compare the 2-micron pixels of the LX3 (10.7 stops DR) with the immensely larger 6.4 micron pixels of the 5D2 (11.1 stops DR). Going by the per-pixel numbers, it seems that the smaller LX3 pixels have less dynamic range. But remember that the LX3-sized pixel samples a much, much higher spatial frequency.

At the same spatial frequency, the scaled LX3 pixels have 12.3 stops of dynamic range, 1.2 stops greater.

5D2 maximum signal: 52,300 e-
LX3 maximum signal: 9,000 e-
5D2 read noise at base ISO: 23.5 e-
LX3 read noise at base ISO: 5.6 e-
5D2 per-pixel DR at base ISO: 11.1 stops (log_2(52300/23.5))
LX3 per-pixel DR at base ISO: 10.7 stops (log_2(9000/5.6))
LX3 scaled maximum signal: 92,200 e- (9000 e- * (6.4µm/2.0µm)^2)
LX3 scaled read noise at base ISO: 17.92 e- (sqrt((5.6 e-)^2 * (6.4µm/2.0µm)^2))
LX3 scaled DR at base ISO: 12.3 stops (log_2(92200/17.92))
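The arithmetic above can be reproduced in a few lines; this is just the post's own formulas restated, with the quoted signal and read-noise figures plugged in:

```python
from math import log2, sqrt

def dr_stops(max_signal_e, read_noise_e):
    """Engineering dynamic range in stops."""
    return log2(max_signal_e / read_noise_e)

# Per-pixel numbers as quoted above.
dr_5d2 = dr_stops(52_300, 23.5)  # ~11.1 stops
dr_lx3 = dr_stops(9_000, 5.6)    # ~10.7 stops

# Scale LX3 pixels up to the 5D2's 6.4 um pitch.
n = (6.4 / 2.0) ** 2              # 10.24 small pixels per large-pixel area
scaled_signal = 9_000 * n         # signal adds linearly: ~92,200 e-
scaled_noise = sqrt(5.6**2 * n)   # independent noise adds in quadrature: 17.92 e-
dr_lx3_scaled = dr_stops(scaled_signal, scaled_noise)  # ~12.3 stops

print(dr_5d2, dr_lx3, dr_lx3_scaled)
```

The key modeling assumption is that read noise in neighboring pixels is independent, which is why it grows only as the square root of the number of pixels combined while signal grows linearly.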

Now you might be wondering: when does the law of diminishing returns kick in? Small pixels have to get worse at some point, even if they aren't now. It's a tricky question.

First, there's diffraction, lens aberrations, and mechanical issues such as collimation, alignment, backfocus, tilt, manufacturing tolerances, etc. Diminishing returns have definitely kicked in here already.

For diffraction, if anyone is shooting 5 micron pixels at f/32 because they really need DOF (e.g. macro), they are not going to get any benefit from smaller pixels: the returns will be close to 0%. Even f/11 will have returns diminished slightly by smaller-than-5-micron pixels.

Lens aberrations can be an issue too. Usually even the worst lenses will have pretty good performance in the center, stopped down. But their corners wide open will sometimes not benefit very much from smaller pixels, so the returns in those mushy corners may be 0-5% due to aberrations.

And there's the mechanical issues. If your collimation is not perfect, but it's good enough for large pixels, then it will have to be better to get the full return of even smaller pixels. This relates to manufacturing tolerances of everything in the image chain: the higher the resolution, the more difficult it is to get full return from that additional resolution. Even things like tripods have to be more steady to prevent diminishing returns.

So essentially the diminishing returns depend on the circumstances, but the higher the resolution, the more often the returns will be diminished. Some people don't need more resolution than they currently are getting, so as long as the improvement from smaller pixels is 0% or more, that's OK for them.

[Continued in part 6.]

-- hide signature --

Daniel

Reply   Reply with quote   Complain
Daniel Browning
Senior MemberPosts: 1,058
Like?
[6/6] Myth busted: small pixels bad, 4 legs good - part 6
In reply to Daniel Browning, Jun 7, 2009

[Part 6 out of 6.]

On the other hand we have noise, dynamic range, and QE. For these, G. Agranov's paper showed no difference from 5.6 microns down to 1.7 microns in typical situations, which agrees with my own reading of available sensor data and image analysis. Now that data on 1.4 micron sensors is available, they too appear to have performance that scales in proportion to their size. I chalk this up to the ingenuity of sensor designers. I think we're getting close, though, so perhaps 1.2 microns will finally be the point at which diminishing returns kick in for typical (photon shot noise limited) situations.

One important consideration is low-light performance across all types of cameras. There is a certain sensor design that reduces read noise to very low levels, but it happens to occur only in some sensors with big pixels (4.7+ microns) and analog gain, and it is accompanied by high read noise at low gain (compared to small pixels). Not all big pixels have this design, and not all big pixels with analog gain have this characteristic either. Yet with every decrease in pixel size so far, the design has kept working. Since pixel size in large sensors is currently limited by processing power, pixels are much larger than they could be. That limit moves yearly thanks to Moore's law, and we can't know where it will finally end until it does. So until there is evidence of the benefit disappearing, it would be premature to worry that it will vanish if sensor designers continue shrinking pixels.

Another consideration is angle of response. Pixels of all sizes tend to have lower response to light arriving at oblique angles, such as from an ultra wide angle f/1.4 lens, but smaller pixels have even more difficulty because it's hard to scale the depth (z dimension) in proportion to the area (x and y, width and height).

There is a strong correlation between all the performance metrics and sensor size: larger sensors (with a proportionately larger lens and thinner DOF) deliver much better image quality in low light.

Another interesting consideration is that Canon's DSLR sensitivity (QE), for one, improved only 5% in the last year, 30% in the last 3 years, and 115% in the last 6 years. [10D: 0.042 photo-electrons per 12-bit ADU per square micron. 50D: 0.100 e-/ADU/µm^2.] The dynamic range improvement on Canon was a bit less than a single stop in 4.5 years. [Area-referred base ISO read noise: 20D: 1.81, 40D: 1.15, 50D: 0.97. Area-referred full well stayed about the same.]
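The bracketed read-noise figures can be checked directly. Since area-referred full well stayed about the same, the dynamic range gain in stops is just the log ratio of the read noises:

```python
from math import log2

# Area-referred read noise at base ISO, from the bracketed figures above.
read_noise = {"20D": 1.81, "40D": 1.15, "50D": 0.97}

# With full well constant, DR improvement = log2(old noise / new noise).
improvement_stops = log2(read_noise["20D"] / read_noise["50D"])

print(improvement_stops)  # ~0.9 stops, i.e. "a bit less than a single stop"
```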

Going back to the main point again: performance is correlated with the level of detail. DPR editors have previously stated:

Ultimately the main dispute is between whether you measure
noise at the 1:1 level or at an arbitrary image size. dpreview
has always worked at the pixel level because it doesn't make
any assumptions for what you're going to do with the images.

I think that is an incorrect description of the dispute. One side ignores the level of detail and measures noise at an arbitrary image size (1:1, AKA 100% crop); the other advocates any method that accounts for the level of detail, such as comparing noise at the same image size (or at a fixed spatial frequency).

Imagine if the same were said about car reviews:

Ultimately the main dispute is whether you measure noise at the
100% speed level or at an arbitrary speed. dpreview has always
worked at 100% speed because it doesn't make any assumptions
for what you're going to do with the car.

100% speed on a sports car might be 200 MPH, while a commuter sedan may top out at 100 MPH. It would be wrong to say that the sports car is noisier than the sedan, because you haven't even driven them at the same speed. At 100 MPH, the sports car might be quieter than the sedan.

Analogies (especially cars) can go awry quickly, but after all the attempts at explaining this concept, anything is worth a shot.

Overall, DPR is a great site and highly informative, but there are some important flaws, and the DPR war on pixel density is one of them. I'm disappointed to see that Bob Newman has been banned from DPR. I hope I won't be the next one on the chopping block.

Four legs good, two legs better.

Kind regards,
--
Daniel

Reply   Reply with quote   Complain
Jay Turberville
Forum ProPosts: 12,646
Like?
Re: Pixel density - can the playing field be leveled???
In reply to briander, Jun 7, 2009

As a general rule, yes. If all other things could be held equal, they would be equivalent. Of course, in a real world example, all other things are never held equal.

In the case of a 15Mp vs. a 10Mp sensor from the same maker, odds are good that the 15Mp sensor will do just as well via downsizing while offering higher resolution when desired. This is especially likely since the 15Mp sensor is almost surely newer. So it's a win/win.

BTW, the DPReview blog on the topic has been pretty thoroughly debunked. My own experiments show that I get very close to the same DR and noise on a sensor area basis when comparing an 8Mp 2/3" CCD to an 8Mp 4/3" CCD. The smaller 2/3" CCD has four times the pixel density.

I can post those examples if you wish. I've done so in the past. They don't prove the point, but given the dramatic difference in pixel densities, I think it is very strong supporting evidence.

And, BTW, the pixel density difference between a 15Mp and a 10Mp sensor really isn't that great. So as a practical matter, I wouldn't worry too much about it either way if a choice like that is what concerns you.

-- hide signature --
Reply   Reply with quote   Complain
Jay Turberville
Forum ProPosts: 12,646
Like?
Re: The answer of your question is right here
In reply to Graystar, Jun 7, 2009

In the DPR test there was only a single feature in the image...the
color gray. I’d say that the vast majority of images have slightly
more detail than that. When you downsample an image you’re not
reducing the noise. You’re just making the noise smaller, along with
everything else in the image. So what does that mean?

Well, to properly downsample an image, you should first run it through an appropriate low pass filter. Doing that definitely changes the level of detail in the image and will result in a lower standard deviation and a lower measured SNR.
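As a toy illustration of that point (my own construction, not Jay's actual test): averaging 2x2 blocks acts as a crude low-pass filter plus downsample, and for independent per-pixel noise the measured standard deviation drops by half:

```python
import numpy as np

# Toy demonstration: a flat gray patch with independent Gaussian noise,
# downsampled by averaging 2x2 blocks. Averaging n independent samples
# reduces the noise standard deviation by sqrt(n).

rng = np.random.default_rng(0)
flat = 100.0 + rng.normal(0.0, 10.0, size=(512, 512))  # gray patch, sigma = 10

# Downsample 2x: average each non-overlapping 2x2 block.
small = flat.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print(flat.std())   # ~10
print(small.std())  # ~5: per-pixel noise is halved by the averaging
```

Real resampling filters aren't simple box averages, and real image noise isn't perfectly independent between pixels, so actual reductions vary; but the direction of the effect is the same.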

If you take a noisy image and print it at 11x14, then resize to a
quarter of its original resolution, then print again at 11x14, you
will have two images that will appear to contain exactly the same
amount of noise, but one image will appear more jagged than the other
(due to lower resolution.)

Two points. What you get has everything to do with how you scaled the image. Also, if properly sized back up, the image should not have any more jagged features. It should actually have fewer, because the low pass you should have applied when downsizing would have reduced any aliasing that was present. If you then uprezzed with a halfway decent scaling algorithm, you should see no jaggedness at all as a result of the resizing.

So the words “just making the noise smaller” mean that you can’t
actually reduce noise by resizing an image...you can only make it
more difficult to see by making it smaller.

We may have a terminology issue here about what is meant when we say "noise." I seem to recall that we went through this in the past. So I'm open to hearing a further explanation from you (again). But as a practical matter as we'd experience in the real world use of a camera and the resizing of its digital image, proper resizing of an image will reduce the visual impression of noise at the same time that resolution is also decreased.

This image shows what happens to an 8Mp step wedge image when it is downrezzed with and without low pass filtering. We see a reduction in noise per image pixel, and of course the image itself loses detail. Note how the resized image without the low pass filter shows a smaller reduction in image noise than the one with it, though it still shows some reduction.

The image comparison that follows is a comparison of equal sensor areas from two different CCDs. One CCD has four times the pixel density of the other. The same aperture, shutter speed, and focal length were used. Both images were taken as raw and developed in dcraw with no noise reduction. The smaller, higher density sensor was low pass filtered using Photoshop's gaussian blur at a value I had experimentally determined (using SFR tests) would produce a level of detail per area equal to that of the larger sensor. Even with this radical pixel density difference, we end up with quite similar images. If anything, the higher pixel density image seems to be somewhat higher quality than the lower pixel density one. Of course, if you look at the high pixel density sensor's output on an equivalent pixel basis, it looks much worse. But when you compare on an area basis, as a practical matter, the lower density sensor shows no advantage. My guess is that the smaller, higher density CCD is actually more efficient on an area basis than the larger, low density sensor.

BTW, I agree that too much is made of this. My main interest in the topic is in dispelling the general misconception that higher density sensors are a bad design direction. I find that this is not correct. Camera buyers, in general, shouldn't worry much about it. The simple fact is that across the wide range of cameras with all of their various sensor sizes, image quality has slowly trended up as pixel densities have increased.

If you really want a dramatic improvement in image quality, you go for a larger sensor. Higher pixel density, if well done, also tends to improve image quality, though not as dramatically as does a bigger sensor area. Higher pixel density mostly delivers a more versatile sensor. Though with pixel density differences of only 50%, it really doesn't matter much.

The main downside of higher pixel densities is that you need more storage space and faster in and out of camera processing to deal with the higher pixel counts. So depending on your use, you might just choose to go with a lower density camera - though some of the potential advantage is often lost since the lower density sensor is often in an older model with slower processing.

-- hide signature --
Reply   Reply with quote   Complain