Pixel density - can the playing field be leveled???

I was chastised a couple of weeks ago for pressing R Butler for a reply on this same topic, on the weekend no less. But I think DPR owes the faithful a response to Daniel's treatise.
What say you DPR? We can wait until Monday for a reply.

--
Did you photoshop that?
 
From the horse's mouth:

http://blog.dpreview.com/editorial/2008/11/downsampling-to.html

Short answer, you can't bypass "the disadvantages that come with
higher pixel densities such as diffraction issues, increased
sensitivity towards camera shake, reduced dynamic range, reduced high
ISO performance", according to
http://www.dpreview.com/reviews/canoneos50d/page31.asp . It's physics,
man!
There are no diffraction issues with higher densities.

If you think that the f-stop at which a sensor stops giving sharper results and turns around is a direct point of comparison to other sensors, your logical faculties are broken. The fact is, even though a sensor with 4x the density might turn around at half the f-stop, the denser sensor will have more resolution at all f-stops! A lot more at apertures wider than the turning point, and a little more beyond it.
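To put rough numbers behind that, here is a toy Python model where the total blur is taken as the quadrature sum of pixel-pitch blur and diffraction blur. The pitches and wavelength are made up for illustration; this is not a measurement of any real camera.

# Toy model: total blur ~ quadrature sum of pixel blur and diffraction blur.
# All numbers are illustrative only, not measurements of any real sensor.
import numpy as np

wavelength_um = 0.55                                 # green light
f_stops = np.array([2.8, 4, 5.6, 8, 11, 16, 22])
diffraction_blur = 1.22 * wavelength_um * f_stops    # rough Airy-disk radius, um

for pitch_um in (6.0, 3.0):                          # coarse vs. 4x-denser (2x finer pitch) sensor
    total_blur = np.sqrt(pitch_um**2 + diffraction_blur**2)
    print(f"pitch {pitch_um} um:", np.round(total_blur, 2))

# The finer-pitch sensor "turns around" (becomes diffraction dominated) at a
# wider aperture, but its total blur is smaller, i.e. its resolution is
# higher, at every f-stop in the list.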

Same goes for camera shake; just because you can see more clearly that there is camera shake, does NOT subtract from image quality. Lower pixel densities BLUR the same shaky analog image more than the higher density; you simply have so much lack of resolution with the lower density that you don't see all the components that contributed to it. If higher density lets you isolate and see the shake, that's a good thing (the ability to see it; not the shake itself). That means the sensor is doing its job, and the lens is in focus. Same for diffraction; if the density lets you see diffraction more clearly, it is showing you the subject more clearly, too.
It's understandable that you can be a bit confused because of
misleading posts by some troublemakers (fortunately banned now owing
to DPR's renowned moderation system)
I was saying most of what Bob's been saying, in a slightly different way, for a long time now. I was singing the praises of pixel density when no one agreed with me.

--
John

 
And, BTW, the pixel density difference between a 15Mp and 10Mp sensor
really isn't that great. So as a practical matter, I wouldn't worry
too much about it either way if it is a choice like that you are
concerned about.
The ironic thing is that the 50D has a much weaker AA filter than any previous xxD Canon, and I mean weaker relative to the pixel spacing. The 50D is capable of higher neighboring-pixel RAW contrast than the 40D is, just the opposite of what many have claimed. What they have claimed only happens when the lens and/or technique are the limiting factors!

--
John

 
And, BTW, the pixel density difference between a 15Mp and 10Mp sensor
really isn't that great. So as a practical matter, I wouldn't worry
too much about it either way if it is a choice like that you are
concerned about.
The ironic thing is that the 50D has a much weaker AA filter than any
previous xxD Canon, and I mean weaker relative to the pixel spacing.
The 50D is capable of higher neighboring-pixel RAW contrast than the
40D is, just the opposite of what many have claimed. What they have
claimed only happens when the lens and/or technique are the limiting
factors!
Yes, it's been a while since I looked at it, and I don't own Canon cameras anyway, but from the standpoint of image quality I'd always take the 50D over the 40D. Of course, image quality is usually only one factor to consider, so the 40D can be a fine choice also.

--
Jay Turberville
http://www.jayandwanda.com
 
Well, to properly downsample an image, you should first run it
through an appropriate low pass filter. Doing that definitely
changes the level of detail in the image and will result in a lower
standard deviation and a lower measured SNR.
Two points. What you will get has everything to do with how you
scaled the image. Also, if properly sized up, the image should not
have any more jagged features. It should actually have less if you
downsized the image properly because the low pass you should have
done when downsizing would have reduced any aliasing that might have
been present. If you then uprezzed with a halfway decent scaling
algorithm, you should see no jagged anything as a result of the
resizing.
Yes, there have been many discussions on resizing, the image below being an example of printer resizing vs. resizing images yourself. That’s all fascinating, but I don’t think it has anything to do with what the OP asked about. The fact of the matter is that a change in resolution, in and of itself, has no effect on apparent noise in an image.


We may have a terminology issue here about what is meant when we say
"noise." I seem to recall that we went through this in the past. So
I'm open to hearing a further explanation from you (again).
My definition of noise is adjusted to suit the apparent knowledge level of the person with whom I'm discussing the issue. In this case, my response was to the OP so noise means the speckling in an image that is "obviously wrong and not supposed to be there." When discussing noise with someone like ejmartin I define noise as a statistical deviation in the recorded signal of a pixel from the true signal (itself a statistical determinant) caused by various physical phenomena.
But as a
practical matter as we'd experience in the real world use of a camera
and the resizing of its digital image, proper resizing of an image
will reduce the visual impression of noise at the same time that
resolution is also decreased.
I’ll believe it when I see it. And so far, I haven't seen it.
This image shows what happens to an 8Mp step wedge image when it is
downrezzed with and without low pass filtering. We see a reduction
in noise per image pixel and of course the image itself experiences a
reduction in image detail. Note how the resized image without the
low pass filter has a lower reduction in image noise than the one
with - though it still has a reduction.
You’re doing the same thing that DPR did...taking a large area and treating it as a single bit of detail (in your case many more divisions, but relative to the pixels they’re large areas just the same.) If you take a large area, resize, then treat it as if no detail has been lost then sure, it’s gonna look like you reduced noise. Unfortunately, it’s an exercise with no practical application. That’s not how things work in the real world.
The image comparison that follows is a comparison of equal sensor
areas from two different CCDs. One CCD has four times the pixel
density as the other. The same aperture, shutter speed, and focal
length were used. Both images were taken as raw and developed in
dcraw with no noise reduction. The smaller and higher density sensor
was low pass filtered using Photoshop's gaussian blur at a value that
I had experimentally determined (using SFR tests) would result in a
level of detail per area equal to the level of detail per area in the
larger sensor. With this radical pixel density difference, we end up
with quite similar images.
They may be similar to you but not to me. I think the image on the right has a great deal more detail. But comparisons like this are pointless unless you also provide a third image to act as a control. The control should be taken with a dslr at the lowest ISO with plenty of light so that viewers can determine what is truly detail and what isn’t. Guessing at what’s detail is a futile exercise.
BTW, I agree that too much is made of this. My main interest in the
topic is in dispelling the general misconception that higher density
sensors are a bad design direction. I find that this is not correct.
Camera buyers, in general, shouldn't worry much about it. The simple
fact is that across the wide range of cameras with all of their
various sensor sizes, image quality has slowly trended up as pixel
densities have increased.
I don’t agree just yet. Imaging Resource’s comparisons of Canon XSi test images vs. the T1i’s test images make me want an XSi. DPR’s review of the 50D suggests that it’s not any better than, and possibly not as good as, the 40D. And DxO’s evaluation of the LX3 sensor shows it has greater dynamic range vs. the G10 sensor, even though they also claim the G10 does better with noise levels. These are all comparisons between same-sized sensors with different pixel densities, and if there’s one thing that’s certain, it’s that increased pixel densities do NOT give indisputably equivalent, or better, results.

And on a totally different subject that I always forget to mention…thanks for your digiscope calculator page. It was very useful in figuring out what was going on with my digiscoping.
 
But as a
practical matter as we'd experience in the real world use of a camera
and the resizing of its digital image, proper resizing of an image
will reduce the visual impression of noise at the same time that
resolution is also decreased.
I’ll believe it when I see it. And so far, I haven't seen it.
I just showed you an example of it, but have now included another image that shows detail that is more easily evaluated. The first example doesn't show a detailed subject as clearly because the point was to illustrate that at 4x the pixel density, the noise and DR between two sensors on a per-area basis are similar given that they are compared showing the same level of detail. The apparent noise of the higher-density CP8400 image is reduced substantially by low pass filtering and reducing resolution - as compared to what you'd see if instead you were looking at an unmodified 100% pixel image. But you lose half of the linear resolution in the process.

In other words, when you properly reduce the actual linear resolution by half, the apparent per-pixel noise and the measured per-pixel noise are reduced as compared to the image at its original size. Equalize for scale and the noise tends to come out about equal, just as the various technical theories explained by others would lead a person to expect. You can reduce high spatial frequency noise at the expense of real resolution. Or, to be more on point to the original intent of my test, the dominant factor in noise performance is not pixel density, it is sensor area.
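Here is a quick numpy/scipy sketch of that effect on purely synthetic data. The noise level, blur sigma, and sizes are arbitrary, so this only shows the direction of the effect; it is not a reproduction of the camera test.

# Per-pixel noise before and after a "proper" 2x downsize
# (low-pass filter, then decimate). Synthetic noise only; values are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
img = 128 + 10 * rng.standard_normal((1024, 1024))   # flat patch, noise sigma = 10

blurred = gaussian_filter(img, sigma=1.0)             # low-pass before decimation
downsized = blurred[::2, ::2]                         # 2x linear reduction
naive = img[::2, ::2]                                 # decimation with no low-pass

print("original   std:", img.std().round(2))
print("LP + decim std:", downsized.std().round(2))    # clearly lower per-pixel noise
print("decim only std:", naive.std().round(2))        # ~unchanged: the high-frequency noise is not removed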
You’re doing the same thing that DPR did...taking a large area and
treating it as a single bit of detail (in your case many more
divisions, but relative to the pixels they’re large areas just the
same.) If you take a large area, resize, then treat it as if no
detail has been lost then sure, it’s gonna look like you reduced
noise.
No, I'm not. I'm processing the image specifically to reduce the actual image detail by half. I was careful to decrease the MTF response of the resized image to 50% for a fairly wide range of spatial frequencies as well as for the Nyquist crossing point (see below). The third panel shows that the noise is not reduced as much if you don't do the low pass filtering first. I also reach a different conclusion than DPReview precisely because I am doing a different test. I'm using real images from real sensors with a huge difference in pixel density, and I'm doing carefully controlled resolution reduction.

The following image shows the MTF response of my downrezzing process. It includes the SFR response without low pass filtering as well, a step that is commonly omitted. Simply rescaling an image from a CFA sensor by 50% almost never reduces the actual resolution by 50%. Note how the SFR response is much greater beyond Nyquist when LP filtering is omitted. This means you've probably got some aliasing artifacts in the image. In real-world practical applications, high spatial frequency noise gets passed on as lower spatial frequency (aliased) noise. This is why we see more measured noise and less DR in the step wedge test that was not LP filtered.
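And here is a tiny one-dimensional illustration, again with arbitrary numbers, of why the low pass step matters: a frequency above the post-resize Nyquist survives plain decimation as a strong aliased component, but is nearly eliminated if you blur first.

# Why the low-pass step matters: a sinusoid above the post-decimation Nyquist
# aliases to a strong low frequency if you just decimate; pre-blurring kills it.
# Frequency and sigma are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter1d

n = 4096
x = np.arange(n)
f = 0.375                                  # cycles/pixel; above 0.25, the Nyquist limit after 2x decimation
signal = np.sin(2 * np.pi * f * x)

decimated        = signal[::2]
lp_then_decimate = gaussian_filter1d(signal, sigma=1.5)[::2]

def peak_amplitude(s):
    spec = np.abs(np.fft.rfft(s)) / (len(s) / 2)
    return spec[1:].max()                  # ignore the DC bin

print("aliased amplitude, no LP    :", round(peak_amplitude(decimated), 3))        # ~1.0
print("aliased amplitude, LP first :", round(peak_amplitude(lp_then_decimate), 3)) # nearly zero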


They may be similar to you but not to me. I think the image on the
right has a great deal more detail.
I suspect you are mistaking noise for detail. But maybe that's my fault. The image isn't exactly packed with lots of detail. I was trying mostly to illustrate the noise differences. This image shows the similarities in detail better.


But comparisons like this are
pointless unless you also provide a third image to act as a control.
The control should be taken with a dslr at the lowest ISO with plenty
of light so that viewers can determine what is truly detail and what
isn’t. Guessing at what’s detail is a futile exercise.
A control for detail is a good idea. But I think including objects that are more easily understood visually helps - as shown in my second example above. But as I said, I already took pretty good steps to ensure that the MTF response of the resized higher density sensor was very close to exactly half of an original full resolution image and very close to the same linear resolution that the larger sensor can deliver.
I don’t agree just yet. Imaging Resource’s comparisons of Canon XSi
test images vs. the T1i’s test images make me want an XSi. DPR’s
review of the 50D suggests that it’s not any better, and possibly not
as good, as the 40D.
I recall coming to a different conclusion in comparing 50D to 40D images.
And DxO’s evaluation of the LX3 sensor shows it
has greater dynamic range vs. the G10 sensor, even though they also
claim the G10 does better with noise levels.
These are all comparisons being made between sensors of the same or next generations. It is harder to see the general trend from such a limited timeline, and in fact, there may be no benefit or no benefit that really matters between generations that are so close. Take a look at the trend from even lower pixel densities and the long term general trend is pretty clear. When I compare the output from my 3.4 Mp 1/1.8" Coolpix 995 to my 7Mp 1/1.8" C7070, the improvement is obvious. Likewise, the overall improvement of the 8Mp Coolpix 8400 over that of the 5Mp Coolpix 5000 is also obvious and dramatic. The demand for 3Mp and 6MP APS-C sensors is quite low.
And on a totally different subject that I always forget to
mention…thanks for your digiscope calculator page.
Cool. It's always nice to hear that someone is finding it useful. I've really fallen behind in updating it with different camera models though. I need to get back on top of that.

--
Jay Turberville
http://www.jayandwanda.com
 
John made an important point: the lens capabilities and performance have to be factored in. Having had a chance to run a 40D and 50D side by side for a bit, I see no need to update the 40D. In the real world, using A3 prints, there was not much in it.
 
As a general rule, yes. If all other things could be held equal, they
would be equivalent. Of course, in a real world example, all other
things are never held equal.
This seems to be the only clear, simple answer to the original question in the entire thread :)

The heated debate shows that the 15MP might be slightly better or the 10MP might be slightly better, but whatever the difference, it's not large enough to create consensus.
 
Emil, I might have missed information on "better noise filters." How do they reduce noise and yet not detail?

I'm just curious. For the first nine years of my electronics career (going back to 1963) I worked on various types of aircraft radars, mostly Doppler radars. Much of that involved finding and tracking weak return signals in a sea of white noise. Without doing anything to the transmitted signal we could find and track return signals when SNR was -4dB (as a point of reference, at -3dB SNR the signal had half the power of the noise in the same bandwidth).

If we injected one of several types of modulation signals on a 13.325 GHz radar carrier, such as a pseudo-random code, we could identify the modulation signal in the return signal easily and detect and lock on to even much weaker signals as a result. You can't do things like that with pictures unless you transmit something. (BTW, you could do that with a flash and get a good range finder. That's how radar altimeters work. That's also how echo-cancelers work in telephony. The modulation code in the latter case is your own speech.) But, back to pictures, without such tricks, you may know something else that the picture should contain beforehand. Does detail in the image stand out in a detectable way from the noise due to a lack of randomness?
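Just to illustrate the kind of correlation trick I mean, here is a small numerical sketch with invented parameters: a known pseudo-random code buried in noise at about -4 dB SNR is still trivial to locate by cross-correlation.

# Detection by correlation: a known pseudo-random code buried in noise at
# roughly -4 dB SNR is easy to locate. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=1023)          # pseudo-random +/-1 chip sequence

snr_db = -4.0
noise_sigma = 10 ** (-snr_db / 20)                 # signal power is 1 (unit chips)
received = np.zeros(8192)
delay = 3000
received[delay:delay + code.size] += code          # echo at an "unknown" delay
received += noise_sigma * rng.standard_normal(received.size)

corr = np.correlate(received, code, mode="valid")  # matched-filter search over delay
print("true delay:", delay, "  detected:", int(np.argmax(corr)))
print("correlation peak vs. background std:",
      round(corr.max() / corr[np.arange(corr.size) != np.argmax(corr)].std(), 1))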
--
John1940
 
Emil, I might have missed information on "better noise filters." How
do they reduce noise and yet not detail?
As an example, Noise Ninja uses wavelet methods according to the spiel on their website. Of course the precise version of their algorithm is proprietary, but a common NR method in the image processing literature is called "wavelet shrinkage".

For example, a simple wavelet filter splits the image into its 2x2 binned version, together with a set of differences between the binned version and the original. One can always get the original back by summing the binned image and the residual differences. One can then repeat this binning several times. This gives a representation of the image on successively coarser scales; the amplitudes of the various residual differences on this hierarchy of scales are called the "wavelet coefficients".

The heart of the NR is then to give a criterion for how much of the wavelet coefficient amplitudes are due to noise; then those coefficient amplitudes are shrunk, and the image is put back together. The result is a denoised image.
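Here is a bare-bones numpy illustration of one level of that decomposition, just the 2x2 binning plus residuals with exact reconstruction. To be clear, this is a generic sketch, not Noise Ninja's actual (proprietary) algorithm.

# One level of the decomposition described above: a 2x2-binned image plus the
# residual differences, from which the original can be reconstructed exactly.
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((8, 8))

binned = img.reshape(4, 2, 4, 2).mean(axis=(1, 3))            # coarse (2x2-binned) version
residual = img - np.kron(binned, np.ones((2, 2)))             # per-pixel differences

reconstructed = np.kron(binned, np.ones((2, 2))) + residual   # binned + residual = original
print("exact reconstruction:", np.allclose(reconstructed, img))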
Does detail in the image stand
out in a detectable way from the noise due to a lack of randomness?
--
Yes, an image feature is coherent across several pixels, and if one can detect the correlation of pixel values above the random fluctuations of the noise, then one can remove much of the noise while leaving the feature relatively intact. It's similar to the way binning pixels reduces noise; indeed, in the example above, binning is part of the wavelet transform. For instance, if there is a horizontal edge and you bin in the horizontal direction, the edge sharpness is retained but the horizontal fluctuations are reduced.

The residuals carry the fine detail that is lost in the binning, and the trick is to figure out what fraction of those residuals is noise. Typically when the coefficients are large there is some noise present, and so shrinking the coefficient down (not necessarily to zero) gets rid of noise and leaves some of the detail. The threshold for when the wavelet coefficient is shrunk is determined adaptively based on the image content, and that is where the detail retention is (hopefully) optimized.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
As a general rule, yes. If all other things could be held equal, they
would be equivalent. Of course, in a real world example, all other
things are never held equal.
This seems to be the only clear, simple answer to the original
question in the entire thread :)

The heated debate shows that the 15MP might be slightly better or
the 10MP might be slightly better, but whatever the difference,
it's not large enough to create consensus.
The point of comparisons is to see the differences, not the samenesses. If one doesn't have the mental capacity to know when that matters, that is their problem.

All cameras' output can be reduced to nearly equal - the question is, what is the best that a camera can do. The 50D can clearly resolve much better than the 40D. Anyone who debates that is not being reasonable. You have to know how to focus and steady your lens, two things many seem to have problems with, especially if they rely upon AF.

Everything needs to come together at once for resolution to be fully realized.

--
John

 
Wow! This one seems to have struck a nerve in some people. When I looked yesterday, there were 50+ replies. I didn't have time to read them all, and it appears that was a good thing, since somebody deleted the riff-raff. :-)

Anyway, thanks to all that replied. Though it may not be correct, I can at least make sense out of the "physics" blog that Iseewhatyoudid pointed me to. On the other side of the coin, I see the point that Daniel Browning is making too. If nothing else, I at least understand the issue better! :-)
 
It means that people are commonly getting fooled by comparing images of
different sizes! They do this by viewing pixels at '100% zoom',
ignoring the fact that the higher MP image is bigger.
Another aspect of this problem is that many people who take sensor:monitor pixel unity as a given assume that higher pixel density sensors are "meant" to be cropped further, so they justify 100% pixel magnification as the valid point of comparison. This is totally illogical, because this comparison represents a deeper crop of the higher-density sensor; it is held to a higher standard.

This is like saying that a Porsche is meant to accelerate a lot faster than a Prius, so it is a more disappointing car in traffic jams. The traffic (lack of focus, lens sharpness, camera stability, subject motion, and diffraction) is the limiting factor. The Porsche does as well as or better than the Prius in traffic. The fact that it can do better is what is relevant, not the fact that it isn't always possible to take advantage of it.

A crop 20% as wide as the entire frame from a 50D has much more potential than a crop of 20% of a 40D, but these illogical folks think it would be fair to compare the 40D at 27%, instead of 20% as for the 50D.

--
John

 
Typically when the coefficients are large there is some noise
present, and so shrinking the coefficient down (not necessarily to
zero) gets rid of noise and leaves some of the detail.
This is backwards. When the coefficients are large, it means that some feature of the image is present (e.g., an edge standing out above the noise gives a wavelet coefficient that is larger than one would have with just the noise present). Shrinking the small coefficients relative to the large ones denoises the image.
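A toy numpy version of that corrected recipe, using the binning decomposition from before and an ad hoc soft threshold on the residual coefficients; the noise level and threshold are purely illustrative.

# Toy wavelet-style shrinkage: small residual coefficients (mostly noise) are
# pulled to zero, large ones (edges/features) are mostly kept.
# Threshold and noise level are ad hoc, for illustration only.
import numpy as np

rng = np.random.default_rng(3)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0               # vertical edge
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

binned = noisy.reshape(32, 2, 32, 2).mean(axis=(1, 3))
coarse = np.kron(binned, np.ones((2, 2)))
residual = noisy - coarse                                      # fine-detail coefficients

t = 0.15                                                       # ad hoc soft threshold
shrunk = np.sign(residual) * np.maximum(np.abs(residual) - t, 0.0)
denoised = coarse + shrunk

print("RMS error, noisy   :", round(np.sqrt(((noisy - clean) ** 2).mean()), 4))
print("RMS error, denoised:", round(np.sqrt(((denoised - clean) ** 2).mean()), 4))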

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
John made an important point, the lens capabilities and performance
has to be factored in. Having had a chance to run a 40D and 50D side
by side for a bit, I see no need to update the 40D. In the real world
using A3 prints there was not much in it.
My general rule of thumb is that you pretty much need to double pixel density to get what I would consider a truly significant improvement in image detail. So the jump from 3Mp to 6Mp was obviously worthwhile while the jump from 6Mp to 8Mp really wasn't that big of a deal. The jump from 10Mp to 15Mp is borderline in my book and would probably only matter much if you were frequently using very fine lenses near their optimal aperture. This is one reason I've been in no rush to update my 8Mp 4/3s cameras. 12Mp will provide a measurable and visible improvement in resolution, but in real world pictures, as you've experienced, there's not much in it.
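The arithmetic behind that rule of thumb is just that linear resolution goes roughly as the square root of the pixel count:

# Back-of-envelope check: linear resolution ~ sqrt(pixel count)
print((15 / 10) ** 0.5)   # ~1.22x linear resolution from a 10Mp -> 15Mp jump
print(2 ** 0.5)           # ~1.41x from a full doubling, e.g. 3Mp -> 6Mp or 10Mp -> 20Mp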

If I had a 40D I'd only purchase a 50D if it had some new feature I wanted badly enough. I wouldn't purchase it for the extra resolution. If I was already unhappy with the resolution of the 40D, I'd be looking for something with at least 20Mp. If I was purchasing a Canon SLR for the first time, the extra resolution of the 50D would be a factor, but only a small one. The extra noise per pixel would not be a factor at all.

--
Jay Turberville
http://www.jayandwanda.com
 
As a general rule, yes. If all other things could be held equal, they
would be equivalent. Of course, in a real world example, all other
things are never held equal.
This seems to be the only clear, simple answer to the original
question in the entire thread :)

The heated debate shows that the 15MP might be slightly better or
the 10MP might be slightly better, but whatever the difference,
it's not large enough to create consensus.
The point of comparisons is to see the differences, not the
samenesses. If one doesn't have the mental capacity to know when
that matters, that is their problem.

All cameras' output can be reduced to nearly equal - the question is,
what is the best that a camera can do. The 50D can clearly resolve
much better than the 40D. Anyone who debates that is not being
reasonable. You have to know how to focus and steady your lens, two
things many seem to have problems with, especially if they rely upon
AF.

Everything needs to come together at once for resolution to be
fully realized.
The debate is so frequently about noise performance, so determining whether the picture noise can be equalized is a vital part of the question.

I agree that one has to do a lot of things right to get all the resolution that a modern sensor can deliver. And the higher the pixel density, the more care that is needed. And this is exactly why many people don't see much benefit to the 15Mp sensor. And that's fine with me, so long as in the process they don't blame the camera engineers for putting a sensor in a camera that outresolves their needs.

--
Jay Turberville
http://www.jayandwanda.com
 
