Pixel density revisited

Started Oct 22, 2008 | Discussions
bobn2
bobn2 Forum Pro • Posts: 61,383
Phil's gone quiet again

Funny how he ducks out of these discussions.

You'd think he'd be eager to improve the review methodology of DPR, and either seek to understand the suggestions people are making, or let them know why the existing methodology is superior.
--
Bob

DSPographer Senior Member • Posts: 2,464
Re: Print driver resizing

Here is a test with a real world image that Ernst Dinkla pointed out:
http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/example1.htm

The ImageMagick site discusses filters with examples using an image of a sinusoidal zone plate about midway down this page:
http://www.imagemagick.org/Usage/resize/#filter

By the way:

The reason that a simple-ratio downsize with antialiasing, like 1/2, doesn't yield better results than an arbitrary ratio like 0.573 is that a filter producing an output point at the same location as an input point is no longer a special case. With upsampling it is a special case: all the non-central samples of the sinc kernel land exactly on zero crossings, giving a do-nothing filter, which is perfect. With anti-aliasing on downsampling, the sinc function gets stretched, so even when an output point lands on an input point the ideal filter has many non-zero weights besides the central one. That filter is no longer a special case, and its performance is about the same as when the output point lands between the input points.
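To put rough numbers on that, here is a quick pure-Python sketch (the kernel is the standard Lanczos-3 definition; the 0.573 ratio is the one from the paragraph above):

```python
import math

def lanczos(x, a=3):
    """Lanczos-a kernel: a*sin(pi*x)*sin(pi*x/a)/(pi*x)^2 for |x| < a, else 0."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def weights(scale, a=3):
    """Filter taps for an output sample that lands exactly on an input sample.
    For downsampling (scale > 1) the kernel is stretched by `scale` so it
    also acts as an anti-alias filter."""
    s = max(scale, 1.0)
    support = int(math.ceil(a * s))
    return [lanczos(n / s, a) / s for n in range(-support, support + 1)]

up = weights(1.0)          # upsampling / 1:1 case: identity filter
down = weights(1 / 0.573)  # downsample to 57.3% of the original size

# In the 1:1 case every non-central tap sits on a sinc zero crossing...
print(max(abs(w) for i, w in enumerate(up) if i != len(up) // 2))
# ...but with the stretched (anti-aliased) kernel they are far from zero:
print(max(abs(w) for i, w in enumerate(down) if i != len(down) // 2))
```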

 DSPographer's gear list:
Canon PowerShot G7 X Canon EOS 5D Mark II Canon EF 24mm f/2.8 Canon EF 50mm f/1.8 II Canon EF 200mm f/2.8L II USM +4 more
DSPographer Senior Member • Posts: 2,464
Re: Phil's gone quiet again

I hope he is at least quietly lurking.

I think after reading the postings above a reasonable comparison method can be proposed. To emulate the anti-aliasing effect of the eye's averaging when viewing a print, and to prevent synchronous resize ratios from performing better than arbitrary ones, downsampling with antialiasing to a standard size should be used. Using open-source software like ImageMagick keeps the technique transparent, and the Lanczos (a=3) method with anti-aliasing, which it supports, is widely regarded as one of the better downsampling methods. It is difficult to filter near borders, so resizing should be performed first, followed by cropping that avoids the image borders. Noise test images should contain a wide range of tonalities, including both flat areas for examining noise and areas of subtle detail to reveal noise-suppression effects. So how about this as a suggested processing chain:

Test images should be created both with default jpeg processing and using raw conversion with common standard raw conversion software and parameter settings.

Test images should be resized using ImageMagick with the Lanczos filter option to a size where the long dimension has 3000 pixels (e.g. 2000x3000 or 2250x3000).

Crops of the resulting image at a 1 to 1 pixel view of areas with no details for examining noise and subtle detail for examining noise suppression artifacts should be provided.
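As a concrete sketch of steps 2 and 3, here is a small Python helper that builds the ImageMagick command lines; the helper names, file names, crop size, and offsets are made up for illustration:

```python
def im_resize_cmd(src, dst, long_edge=3000):
    """Step 2: Lanczos resize so the long dimension becomes `long_edge`
    pixels. The "WxH" geometry fits the image within the box while
    preserving aspect ratio, e.g. 4000x6000 -> 2000x3000."""
    geometry = f"{long_edge}x{long_edge}"
    return ["convert", src, "-filter", "Lanczos", "-resize", geometry, dst]

def im_crop_cmd(src, dst, size=400, x=800, y=800):
    """Step 3: cut a 1:1 crop well away from the image borders
    (size and offsets here are arbitrary examples)."""
    return ["convert", src, "-crop", f"{size}x{size}+{x}+{y}", "+repage", dst]

print(" ".join(im_resize_cmd("raw_conversion.tif", "resized.png")))
print(" ".join(im_crop_cmd("resized.png", "noise_patch.png")))
```

The commands could then be run via `subprocess.run` on a machine with ImageMagick installed.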

 DSPographer's gear list:
Canon PowerShot G7 X Canon EOS 5D Mark II Canon EF 24mm f/2.8 Canon EF 50mm f/1.8 II Canon EF 200mm f/2.8L II USM +4 more
chuxter Forum Pro • Posts: 21,714
Re: Print driver resizing

Thanks. That 2nd link was great, but a bit "heavy" to read! I need to sleep on this...

-- hide signature --

Charlie Davis
Nikon 5700, Sony R1, Nikon D300
HomePage: http://www.1derful.info
Bridge Blog: http://www.here-ugo.com/BridgeBlog/
'Experience: Discovering that a claw hammer will bend nails.
Epiphany: Discovering that a claw hammer is two tools...'

 chuxter's gear list:
Nikon D810 Nikon D500 Tamron SP 90mm F2.8 Di VC USD 1:1 Macro (F004) Sigma 24-105mm F4 DG OS HSM Tamron SP 150-600mm F/5-6.3 Di VC USD +2 more
bobn2
bobn2 Forum Pro • Posts: 61,383
Re: Phil's gone quiet again

DSPographer wrote:

I hope he is at least quietly lurking.
I think after reading the postings above a reasonable comparison
method can be proposed. To emulate the anti-aliasing effect of the
eyes' averaging when viewing a print and to prevent synchronous
resize ratios from having better performance than arbitrary ratios I
think downsampling with antialiasing to a standard size should be
used. Using open source software like ImageMagick keeps the technique
used transparent; and the Lanczos (a=3) method with anti-aliasing
which it can use is widely regarded as one of the better downsampling
methods. It is difficult to filter near borders so resizing should be
performed first followed by cropping with the image borders avoided.
Noise test images should contain a wide range of tonalities including
both flat areas for examining noise and areas of subtle detail to
reveal noise suppression effects. So how about this as a suggested
processing chain:
Test images should be created both with default jpeg processing and
using raw conversion with common standard raw conversion software and
parameter settings.
Test images should be resized using ImageMagick with the Lanczos
filter option to a size where the long dimension has 3000 pixels (ex
2000x3000 or 2250x3000 etc).
Crops of the resulting image at a 1 to 1 pixel view of areas with no
details for examining noise and subtle detail for examining noise
suppression artifacts should be provided.

Sounds very reasonable. I would always suggest a usability evaluation on anything like this. That is, try it and see whether it provides a good basis for people to judge a range of output sizes and workflows. That could be done by asking a sample of people to make a blind preference between a set of cameras on the basis of the crops using the method above, and then the same selection using their preferred media at their preferred size. If there's a good correlation between the choices, the method has some validity.

On the topic of blind testing, given the flak that the A900 is getting for noise, I found this thread interesting ( http://forums.dpreview.com/forums/readflat.asp?forum=1037&thread=29804929 ). With the two scaled to like output sizes, people were asked to pick which was which. The results suggest that whatever advantage the D700 is supposed to have, it wasn't enough for people to pick it out reliably.
--
Bob

SLove Contributing Member • Posts: 813
Re: Phil's gone quiet again

DSPographer wrote:

So how about this as a suggested
processing chain:
Test images should be created both with default jpeg processing and
using raw conversion with common standard raw conversion software and
parameter settings.
Test images should be resized using ImageMagick with the Lanczos
filter option to a size where the long dimension has 3000 pixels (ex
2000x3000 or 2250x3000 etc).
Crops of the resulting image at a 1 to 1 pixel view of areas with no
details for examining noise and subtle detail for examining noise
suppression artifacts should be provided.

Very good suggestions. The new 50D review shows again that something like this is needed to stop the more pixels = more noise meme from spreading any further. Unfortunately, the 50D review does nothing but reaffirm that false belief by punishing the 50D for having more per-pixel noise than other DSLRs with lower pixel density. Unless new testing methods are introduced, most people will never understand what is really going on.

 SLove's gear list:
Canon PowerShot E1 Canon PowerShot SD300 Canon PowerShot S90
John Sheehy Forum Pro • Posts: 21,710
Re: Pixel density revisited

bobn2 wrote:

These statements on noise and pixel density need to be qualified. The
message that people are getting from statements of this sort is that
increasing pixel density reduces image quality in an absolute way,
and there is no evidence to support that, rather the reverse, in fact.

In this particular instance, banding is the biggest disappointment in the highest ISOs, and banding has nothing directly to do with pixel density. Banding is the result of budget/shortcut engineering.

-- hide signature --

John

John Sheehy Forum Pro • Posts: 21,710
Re: Resize filtering

DSPographer wrote:

I never tested Photoshop's resize.

While I realize that PS' resample algorithms are not the greatest, the previous reference to my statement was about on-screen window zooming at less than 100%. That can increase image-level noise tremendously compared to proper resampling.

Most image-viewing programs do this, too. Irfanview does it in windowed mode, and full-screen by default unless you change the options. FastStone does it too, unless you enable "smoothing", etc, etc.

There is a conspiracy, conscious or unconscious, against pixel density. Coarse monitors, coarse printer dithering patterns, horrible downsizing methods, etc., all either dilute the benefits of higher pixel density or make it look worse, when it is in fact better.

These are the dark ages.

If our monitors had always stayed ahead of cameras in resolution, most of these illusions would never have happened.

-- hide signature --

John

SLove Contributing Member • Posts: 813
Re: Resize filtering

John Sheehy wrote:

DSPographer wrote:

I never tested Photoshop's resize.

While I realize that PS' resample algorithms are not the greatest,
the previous reference to my statement was about the on-screen window
zooming at less than 100%. It can increase image-level nose
tremendously over resampling.

Most image-viewing programs do this, too. Irfanview does it in
windowed mode, and full-screen by default unless you change the
options. FastStone does it too, unless you enable "smoothing", etc,
etc.

There is a conspiracy, conscious or unconscious, against pixel
density. Coarse monitors, coarse printer dithering patterns,
horrible downsizing methods, etc, all either dilute higher pixel
density benefits, or actually make them look worse, when they are
actually better.

I think conspiracy is too strong a word... Coarse monitors are just a limitation of current monitor tech. Monitor resolutions improve slowly, and the CRT -> TFT transition has not sped up the increase either. After all, you could buy a 21-22" CRT monitor capable of 2048 x 1536 pixels eight or nine years ago, and the best TFT monitors available today are only slightly better at 2560 x 1600 pixels.

The "horrible" downsizing methods especially in viewing applications are just a way to make the program more responsive. Today perhaps there are less need for such coarse methods, but most applications still try to be usable with older computers as well, which in my opinion is not a bad thing. Providing options for the users like Irfanview and FastStone do is the right way. The only thing open to question is whether a faster but coarse or slower but more accurate method should be enabled by default. As processing power increases more accurate methods should be favored as standard, so perhaps the current applications are a bit behind their times.

If our monitors had always stayed ahead of cameras in resolution,
most of these illusions would never have happened.

True, but it's no use wishing for practical impossibilities. In fact the divide between camera and monitor resolutions is still increasing, and will, ironically, only start to decrease once camera resolutions no longer increase significantly. Monitor resolutions are like optics: slow, incremental improvements.

 SLove's gear list:
Canon PowerShot E1 Canon PowerShot SD300 Canon PowerShot S90
John Sheehy Forum Pro • Posts: 21,710
Re: Resize filtering

SLove wrote:

Coarse monitors are just a
limitation of current monitor tech. Monitor resolutions improve
slowly and the CRT-> TFT transition has not sped up the resolution
increase either. After all, you could buy a 21-22" CRT monitor
capable of 2048 x 1536 pixels eight or nine years ago

There is no exact alignment of video pixels and phosphor dots on CRT monitors, so they have quite a blur to them. You could say the image is always being resampled, with a soft opto-mechanical "algorithm".

and the best
TFT monitors available today are only slightly better at 2560 x 1600
pixels.

The "horrible" downsizing methods especially in viewing applications
are just a way to make the program more responsive.

On a 386 or 68030 computer with 4 megabytes of RAM, yes.

Any computer from the last 10 years can resample a 15MP image to the screen resolution or a window in a fraction of a second.

I'm not even asking for a quality resample (though that would be nice). Just RESAMPLE instead of dropping pixels. Dropping pixels has an eye-candy effect: it makes the image look sharper, and more detailed to the optically naive. That could be why it is chosen as the default - many people like poorly resized images, until the noise becomes an issue, and then they blame the camera for having too many pixels.
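The difference is easy to simulate on a flat noisy 1-D "image" row (the 4:1 ratio and noise level below are arbitrary): dropping pixels leaves the per-pixel noise untouched, while even a crude box-filter resample halves it for a 4:1 reduction.

```python
import random
import statistics

random.seed(42)
N = 512
# Flat "image" row: constant signal of 100 plus Gaussian noise, sigma = 10.
row = [100.0 + random.gauss(0.0, 10.0) for _ in range(N)]

# "Dropping pixels" (nearest-neighbour 4:1 zoom-out): keep every 4th sample.
dropped = row[::4]

# Resampling with averaging (a crude box filter): mean of each 4-sample block.
averaged = [sum(row[i:i + 4]) / 4 for i in range(0, N, 4)]

print(statistics.stdev(dropped))   # noise unchanged, roughly 10
print(statistics.stdev(averaged))  # noise roughly halved (10 / sqrt(4))
```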

Today perhaps
there are less need for such coarse methods, but most applications
still try to be usable with older computers as well, which in my
opinion is not a bad thing.

Making everyone's images look like garbage by default, and making high-MP images suffer the most, so that the small percentage of people using old, slow computers can avoid experiencing how old and slow they are, is a very bad decision, IMO.

Providing options for the users like
Irfanview and FastStone do is the right way.

The default should be "accurate/slower", and "less accurate/faster" should be an option.

The only thing open to
question is whether a faster but coarse or slower but more accurate
method should be enabled by default. As processing power increases
more accurate methods should be favored as standard, so perhaps the
current applications are a bit behind their times.

A bit?

If our monitors had always stayed ahead of cameras in resolution,
most of these illusions would never have happened.

True, but it's no use to wish for practical impossibilities. In fact
the divide between camera and monitor resolutions is still increasing
and will, ironically, only start to decrease once camera resolutions
no longer increase significantly. Monitor resolutions are like
optics: slow incremental improvements.

I'm just stating the reason for the illusion. Bad resizing is used because the pixels are coarse. If monitors were 100MP, and images were all upsampled to them, none of this talk about problems with high pixel density would have happened. The higher-MP cameras would have a nice, velvety texture, and everyone would be talking about how much the "cameras" had a fine-grain-slidefilm look.

-- hide signature --

John

John W Peterson Senior Member • Posts: 2,737
Pix Density - for any given tech generation

I would presume that both of DPR's statements are reasonable.

For any given generation of sensor technology, the sensor with the lower pixel density will, generally speaking, produce better S/N.

Of course, as technology improves, the actual pixel coverage increases, i.e. more and more of the sensor area is made up of individual photon wells. Hence a sensor of newer design might theoretically have the same pixel density as an older model while actually offering larger individual photon wells.

Manufacturers are quick to point out that every new model offers improvements in photon well volume, but they never seem to tell us, truthfully, how much of an improvement, quantitatively, each new generation brings. While the 1DsIII and the A900 might appear to offer similar pixel density, it might turn out that the 1DsIII offers higher-volume individual photon wells.

Since we are deprived of the "real data", i.e. the photosite volume, pixel density is a way to "guesstimate" what that would be. Perhaps "pixel density for a given product cycle or year + a given manufacturer" would be even better.

bobn2
bobn2 Forum Pro • Posts: 61,383
Re: Pix Density - for any given tech generation

It's amazing how many people bump a thread without bothering to go through the arguments there already. At the risk of sending the thread around another cycle, some comments on your post below.

John W Peterson wrote:

I would presume that both of DPR's statements are reasonable.

For any given generation of sensor technology - the sensor with the
lower pix density will generally speaking, produce better S/N.

Only the case if you talk at the pixel level. At the image level, for a given generation and size of sensor, broadly speaking they all produce the same S/N.

Of course, as technology improves, the actual pixel coverage
increases, i.e. more and more of the sensor aree is made up of
individual photon wells. Hence a sensor of newer design, might
theoretically have the same pixel density as an older model, while
actually offering larger individual photon wells.

The size of the photon wells isn't directly related to pixel density; it's a designable parameter. Smaller well, higher base ISO. The trade-off is lower S/N at low ISOs, but in practice this can be retrieved with good read-chain design.

Manufacturers are quick to point out that every new model offers
improvements in regard to photon well volume,

I've never actually seen a manufacturer claim that.

but they never seem to
tell us, truthfully, how much of an improvement, quantitatively, each
new generation brings. While the 1DsIII and the A900 might appear to
offer similar pixel density, it might turn out to be the case that
the 1DsIII offers higher volume individual photon wells.
Since we are deprived of the "real data" i.e. the photosite volume,
pixel density is a way to "guestimate" what that would be. Perhaps
"Pixel density for a given product cycle or year + a given
manufacturer" woule be even better.

In practice the design tradeoffs between FWC and read noise are not so simple. Sensor designers make different choices depending on the target application of the sensor.

-- hide signature --

Bob

OP igb Senior Member • Posts: 2,637
Re: Pix Density - for any given tech generation

John W Peterson wrote:

. Perhaps
"Pixel density for a given product cycle or year + a given
manufacturer" woule be even better.

Are you being sarcastic? Given product cycle + given manufacturer narrows the selection very much to a given camera, no?

Unless you're interested in comparing compacts and dslrs from the same manufacturer for noise performance.

Regards
--
-------------------------------------------------------
My Galleries: http://webs.ono.com/igonzalezbordes/index.html

John W Peterson Senior Member • Posts: 2,737
No not sarcastic

Thanks for your comments.
I'm not trying to be sarcastic - I apologize if it appeared so.

My point was that the utility of pixel density, in comparing cameras, or more accurately sensors, is likely to vary over time. A current-design sensor at a given pixel density is likely to outperform a similar-pixel-density sensor of five years ago. This may relate at least partly to the improvement in actual photon well size relative to any one pixel density, and also, I assume, to other characteristics of the sensor that affect noise.

My own take on the utility of reporting pixel density for digital cameras is that it may be most useful in comparing different digicams, rather than different dSLRs. Digicam manufacturers seem, to me at least, to try to cram ever more pixels into small, and sometimes smaller, sensors. DPReview's reporting of pixel density helps, in my view, to "keep the digicam manufacturers honest", or at least to put their claims of improved performance into a more realistic context.

John W Peterson Senior Member • Posts: 2,737
Pixel quality vs Image Quality

Thanks for your comments and insights. I'm guessing that you are an engineer ? I am not, so any help is much appreciated.

I understand the point, which you made in your earlier posts about the distinction between image quality and the pixel level characteristic of S/N. These observations seemed correct / non-controversial to me, which is why I didn't follow up on them.

My point, also I thought not so controversial, was that sensor design for any given manufacturer improves over time, even though there appear to be distinctions between the sensors produced by, e.g., Canon vs Sony/Nikon. For that reason, comparing pixel density may be useful primarily to help us think about differences between cameras of the same generation. For example, Canon has introduced several digicams in the last year. Looking at pixel density may help us "pick the likely winners" amongst them as regards pixel S/N.

As you raise the point of image quality, as opposed to pixel-level issues, let me just go after the dynamic range (DR) issue. You point out that, for a given print size, the perception of image noise will decline as absolute pixel number increases. A simple way that I would look at it is that the noise is lost in the sea of more data. This, however, does not deal with the DR issue. If a given image area, say a white wedding dress, "pegs the needle", i.e. blows out the whites, then it won't matter how many pixels I have to record that sea of perfect white. No increase in pixel number will make up for that. What I need are pixels with, individually, greater DR, and I may need to put up with having far fewer pixels in order to get that. So my conclusion is that although 'more and more pixels' might help perceived noise at a fixed print size, it won't help the DR issue at all; in fact, it will hurt DR - unless some of those pixels can be made to record at differing sensitivity levels (as with the Fuji sensor).
Comments ?

OP igb Senior Member • Posts: 2,637
Re: Pixel quality vs Image Quality

John W Peterson wrote:

Thanks for your comments and insights. I'm guessing that you are an
engineer ? I am not, so any help is much appreciated.
I understand the point, which you made in your earlier posts about
the distinction between image quality and the pixel level
characteristic of S/N. These observations seemed correct /
non-controversial to me, which is why I didn't follow up on them.
My point, also I thought, not so controversial, was that sensor
design for any given manufacture improves over time, even though
there appear to be distinctions between the sensors produced by, e.g.
Canon vs Sony / Nikon. For that reason, comparing pixel density may
be useful primarily to help us think about differences between
differing cameras of the same generation. For example, canon has
introduced several digi-cams in the last year. Looking at pixel
density may help us to "pick the likely winners" amongst them as
regards pixel S/N.

In the event one is interested in pixel S/N instead of S/N of the final image.

As you raise the point of image quality, as opposed to pixel level
issues, let me just go after the dynamic range (DR) issue. You point
out that, for a given print size, the perception of image noise will
decline, as absolute pixel number increases. A simple way, that I
would look at it would be that the noise is lost in the sea of more
data. This however does not deal with the DR issue. If a given image
area, say, a white wedding dress, "pegs the needle" i.e blows out the
whites, then it won't matter how many pixels I have to record that
sea of perfect white. No increase in pixel number will make up for
that. What I need, are pixels with, individually, greater DR, and I
may need to put up with having much fewer pixels in order to get to
that. So, my conlusion is that although 'more and more pixels' might
help percieved noise, at a fixed print size, it won't help the DR
issue at all; in fact, it will hurt DR - unless some of those pixels
can be made to record at differing sensitivity levels (as with the
fuji sensor).
Comments ?

The pixel size and DR question has been discussed here literally dozens of times. Now I regret I didn't pay attention in any of them.

So much for my qualifications, but instead of a comment let me pose you a question: for any given intensity of an evenly distributed 'rain of photons', which would fill earlier: 10 pixels able to 'hold' 600 photons, or 5 pixels able to hold 1200?

-- hide signature --
bobn2
bobn2 Forum Pro • Posts: 61,383
Re: Pixel quality vs Image Quality

John W Peterson wrote:

Thanks for your comments and insights. I'm guessing that you are an
engineer ? I am not, so any help is much appreciated.

I wouldn't class myself as an engineer (although I do have CEng status). My background is science, which puts a rather different spin on things than engineering training.

I understand the point, which you made in your earlier posts about
the distinction between image quality and the pixel level
characteristic of S/N. These observations seemed correct /
non-controversial to me, which is why I didn't follow up on them.
My point, also I thought, not so controversial, was that sensor
design for any given manufacture improves over time, even though
there appear to be distinctions between the sensors produced by, e.g.
Canon vs Sony / Nikon.

Less than you might think; the performance of the latest-generation CMOS sensors seems to be within 1/3 stop, with Sony marginally worse, Nikon marginally better, and Canon in the middle. In any case, not enough to worry about.

For that reason, comparing pixel density may
be useful primarily to help us think about differences between
differing cameras of the same generation. For example, canon has
introduced several digi-cams in the last year. Looking at pixel
density may help us to "pick the likely winners" amongst them as
regards pixel S/N.

I can't see how pixel density helps pick the likely winners, except that generally, the higher the pixel density the better (in terms of image quality; of course there might be storage/speed concerns which make very high pixel densities, and the resultant large pixel counts, problematic).

As you raise the point of image quality, as opposed to pixel level
issues, let me just go after the dynamic range (DR) issue. You point
out that, for a given print size, the perception of image noise will
decline, as absolute pixel number increases.

That wasn't what I was pointing out. It has nothing to do with perception (although there are also arguments about fine grain patterns being perceptually better than coarse ones). It is a statistical argument. Simply, the more samples you have, the more the random errors even out. If you do the sums for noise in pixels, you find that the 'more samples' advantage precisely evens out the 'bigger error in each sample', so far as photon shot noise is concerned. So far as read noise is concerned, it seems that the 'more samples' advantage more than compensates for any additional noise in each sample.
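The shot-noise part of that argument can be checked with a small simulation (using a Gaussian approximation to Poisson statistics, with made-up photon counts): one big pixel, and four small pixels binned over the same area, deliver the same S/N.

```python
import math
import random
import statistics

random.seed(1)
TRIALS = 20000
SMALL_MEAN = 2500.0          # photons collected per small pixel (made-up)
BIG_MEAN = 4 * SMALL_MEAN    # one big pixel covering the same area

def shot(mean):
    # Gaussian approximation to Poisson shot noise: sigma = sqrt(mean),
    # which is fine for photon counts this large.
    return random.gauss(mean, math.sqrt(mean))

big = [shot(BIG_MEAN) for _ in range(TRIALS)]
binned = [sum(shot(SMALL_MEAN) for _ in range(4)) for _ in range(TRIALS)]

snr_big = statistics.mean(big) / statistics.stdev(big)
snr_binned = statistics.mean(binned) / statistics.stdev(binned)
print(snr_big, snr_binned)  # both close to sqrt(10000) = 100
```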

A simple way, that I
would look at it would be that the noise is lost in the sea of more
data. This however does not deal with the DR issue.

It deals with the DR issue also. DR is essentially the ratio between the maximum possible signal and the noise (exactly how far above the noise the minimum acceptable signal is is subjective, but that doesn't change the fact that it is noise that determines the bottom end of the DR ratio). Signal, not being a random phenomenon, adds directly; noise tends to cancel itself out, so it adds in quadrature. Again, you do the sums and you find that pixel size disappears from the dynamic range calculation.

If a given image
area, say, a white wedding dress, "pegs the needle" i.e blows out the
whites, then it won't matter how many pixels I have to record that
sea of perfect white. No increase in pixel number will make up for
that.

Except it's not like that. If the exposure is such as to bombard each large pixel with enough photons to saturate, the count in each smaller pixel is reduced in proportion to its area, as is the FWC, so once again everything ends up equal. The difference is that when the noise causes individual pixels to 'blow', you get fine patches of blown highlight, not big blocky ones.
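A back-of-the-envelope sketch of that scaling, with made-up numbers and assuming FWC proportional to pixel area: quartering the pixel area quarters both the photons collected and the FWC, so both pixel sizes clip at the same exposure.

```python
PHOTON_FLUX = 50000.0        # photons per unit area at some reference exposure (made-up)
BIG_AREA, SMALL_AREA = 4.0, 1.0
BIG_FWC, SMALL_FWC = 40000.0, 10000.0  # FWC proportional to area (the assumption)

def exposure_to_clip(area, fwc, flux=PHOTON_FLUX):
    """Relative exposure at which a pixel of the given area saturates."""
    return fwc / (flux * area)

print(exposure_to_clip(BIG_AREA, BIG_FWC))      # big pixel clips at 0.2
print(exposure_to_clip(SMALL_AREA, SMALL_FWC))  # small pixel clips at 0.2 too
```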

What I need, are pixels with, individually, greater DR, and I
may need to put up with having much fewer pixels in order to get to
that.

But if you have fewer of them, you haven't gained anything, because each of those bigger pixels has to carry a bigger load of the total image. As with many things, lots of little ones can do the same job as a few big ones (except they do it better).

So, my conlusion is that although 'more and more pixels' might
help percieved noise, at a fixed print size, it won't help the DR
issue at all; in fact, it will hurt DR -

As argued above, your conclusion is wrong. Go to the DPR review of the A900 and see what its (raw) DR is; see also that the 50D has a larger raw DR than the 40D.

unless some of those pixels
can be made to record at differing sensitivity levels (as with the
fuji sensor).

The Fuji design uses a 12 MPix sensor to deliver the resolution of perhaps an 8MP one, in order to compensate for inadequacies of the read chain. An improved read chain could get the same DR out of a conventional sensor, and keep all the resolution.

Comments ?

As above.
--
Bob

John W Peterson Senior Member • Posts: 2,737
I'm going to need to think this through.

Thanks for challenging my assumptions on this issue. I'll need to think through the whole concept of image noise / image quality vs pixel density. Perhaps what I am seeing is the impact of photoshop's trying to preserve noise "detail" if I downsample a high MP digicam image.

The Fuji design uses a 12 MPix sensor to deliver the resolution of perhaps an
8MP one, in order to compensate for inadequacies of the read chain. An
improved read chain could get the same DR out of a conventional sensor, and
keep all the resolution.

I understand what you're saying (or at least part of it), but the S5 does have an ability to render images that have a certain "filmic" / beautiful image quality that seems to be unmatched in the digital world (except perhaps for the S3).

Perhaps / probably this has nothing to do with noise or DR issues, though I understand that the S5 has been reviewed as having DR greater than essentially all other dSLRs (though some come close).
Thanks again.
j. peterson

natureman Veteran Member • Posts: 3,979
Re: Pixel density revisited

bobn2 wrote:

Phil Askey wrote:

I think a 30+ page review correctly qualifies any statement about
pixel density and noise. Where is this reverse evidence?

It could be any length, but if it didn't contain any statements
relating noise content to final output size, it wouldn't be
qualified, would it? I've read through it quite carefully, and I
can't find any pointer there that would help me understand the
relative noise between the cameras for usual output sizes. There are
a lot of statements about how noisy the A900 is, but all related to
per pixel noise.

There's nothing "absolute" about "final output sizes". 100% magnification is "absolute" and "qualified" and so is "per pixel noise".

As for the evidence, Emil has pointed you to a post of his, there's a
long series of posts which you didn't follow, but did discuss and
present the evidence in some detail, there's John Sheehy's
demonstration under the title 'the joy of pixel density' , and
finally there have been extensive discussions of the physics behind
it, which back up the position that in theory there is no causal link
between pixel density and final image noise content at any given
image size (with the caveat that there are noise effects such as
random telegraph noise, which come into play at very small
geometries). These discussions included a number of people who are
research physicists (not me, I hasten to add), and included Eric
Fossum.
I'm not really asking you to go back on your testing methodology
completely, it just seems to me that including a set of equal sized
crops from a given proportion of the frame (resized using a sensible
resampling method, there are plenty here who could advise) would be
much more helpful for people to assess the likely image quality from
any camera, and to make comparisons between cameras of different
pixel densities and sensor sizes. Do it in addition to the 100% crops
if you like (and please, in RAW), but it would improve the reviews
still further. Similarly, the crops from the 'bottle' scene are
difficult to make judgments from when they are presented at vastly
different output sizes.
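The "equal sized crops, resized with a sensible resampling method" idea above can be sketched numerically. This is a minimal, dependency-free illustration using plain box (area-average) downsampling on a 2D list; an actual review workflow would more likely use ImageMagick's higher-quality filters (e.g. `convert in.png -filter Lanczos -resize 1200x800 out.png`). The function name and toy image are of course just for illustration.

```python
def downsample_box(img, factor):
    """Downsample a 2D list of pixel values by averaging
    non-overlapping factor x factor blocks (box filter).
    Trailing rows/columns that don't fill a block are dropped,
    mimicking 'crop after resize, avoiding borders'."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h - h % factor, factor):
        row = []
        for x in range(0, w - w % factor, factor):
            block = [img[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A 6x6 horizontal ramp, downsampled 3x to 2x2:
ramp = [[x for x in range(6)] for _ in range(6)]
print(downsample_box(ramp, 3))  # -> [[1.0, 4.0], [1.0, 4.0]]
```

Box averaging is the crudest reasonable choice; Lanczos trades a little ringing for much better detail retention, which is why it is the usual recommendation for this kind of comparison.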

bobn2 wrote:

These statements on noise and pixel density need to be qualified. The
message that people are getting from statements of this sort is that
increasing pixel density reduces image quality in an absolute way,
and there is no evidence to support that, rather the reverse, in fact.

-- hide signature --

Bob

-- hide signature --

Phil Askey
Editor, dpreview.com

-- hide signature --

Bob

DMillier Forum Pro • Posts: 20,995
Re: Pixel density revisited

How so?

100% view compares pixel-level performance and doesn't take into account the fact that different sensors have different pixel counts.

A comparison of, say, a 5MP sensor and a 100MP sensor at 100% might conclude that the 5MP sensor was 15% less noisy, yet in a print of any size the extra pixels will more than compensate.

A fair comparison of noise and detail therefore requires comparison at a range of practical output sizes, i.e. prints ranging from typical pocket size up to exhibition size.

Essentially, every camera on the market is equal for displaying on screen, so the only difference worth comparing is prints: something Dave Etchells at Imaging Resource recognises.
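The argument above can be checked with a toy simulation. The model below assumes pure shot-noise scaling (sigma grows as the square root of the signal) and ignores read noise, demosaicing and resampling-filter effects, so the numbers are illustrative rather than a claim about any real camera: a 4x-denser sensor looks twice as noisy per pixel at 100%, yet matches the big-pixel sensor once binned to the same output size.

```python
import random
import statistics

random.seed(0)
N = 40_000

# Low-density sensor: big pixels, signal 100, shot-noise sigma 2 (SNR ~50).
big = [random.gauss(100, 2) for _ in range(N // 4)]

# 4x-density sensor: each pixel collects 1/4 the light (signal 25);
# shot-noise sigma scales as sqrt(signal), so sigma 1 (SNR ~25).
small = [random.gauss(25, 1) for _ in range(N)]

def snr(pixels):
    return statistics.mean(pixels) / statistics.stdev(pixels)

print(f"per-pixel SNR  big: {snr(big):.1f}  small: {snr(small):.1f}")

# Match output sizes: sum 2x2 blocks of the dense sensor (modelled here
# as groups of 4 samples) to reach the same output pixel count.
binned = [sum(small[i:i + 4]) for i in range(0, N, 4)]
print(f"binned SNR: {snr(binned):.1f}")  # comparable to the big-pixel SNR
```

At 100% the dense sensor loses by a factor of two; at equal output size the gap closes, which is exactly why per-pixel comparisons at different pixel counts mislead. (With read noise included, the outcome shifts somewhat, but the per-pixel comparison remains the wrong baseline.)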

natureman wrote:

There's nothing "absolute" about "final output sizes". 100% magnification is "absolute" and "qualified" and so is "per pixel noise".
-- hide signature --