The 10MP sensors are lifeless

I think it's too easy to spend far more money than your actual photographic requirements demand; there are perfectly capable cameras and lenses available for far less than the 'latest and greatest', like the Canon L's and full-frame models.

It's easy to get caught up in comparing gear and feeling left behind when new models with higher resolution and sharpness come out, or to look for marginal improvements which then become must-have buys; this, for me, would be a huge drain on resources that could be better spent elsewhere.

I started out looking for a DSLR camera and lenses by focusing on resolution, megapixels, etc., and bought a Canon L zoom and primes, but I've found that what counts for me is whether the camera takes the kind of pictures I like.

The 6-megapixel cameras are among the best in this regard, so I'm getting rid of the Canon gear and ordering a Pentax K100D, a good prime, and two average zooms. I'm better off financially and can spend some time looking at excellent images for inspiration rather than chasing the latest specifications in the gear itself.

I'd recommend looking online for the kind of pictures you'd like to take yourself, then seeing what gear took them and buying the cheapest that does the job.
 
I'm sure that Phil Askey on this site would be interested in using such a tool in his reviews, as it appears that NR is becoming more and more of a factor in the overall IQ rating, and the "standard" standard-deviation noise measurements no longer really apply after the test patches are "smudged" by it.
As far as I can determine he is not interested in any such approach. I think most of Phil's bread must be buttered by JPEG people. I have suggested these same basic ideas to him (replying directly to his posts) and never gotten a reply. There are a few magazines that do more serious tests of the basic sensing capabilities of cameras, but I think if you want to be really popular, as DPReview is succeeding at being, you have to appeal to the JPEG masses. Personally I do not see how that obviates a more serious, thoughtful look at actual performance; he just does not seem interested. Not enough butter? It is hard to get an industry to swing if you cannot even get the basic tests published in highly visible places. I just cannot understand what Phil has to gain by ignoring such basic, fundamental comparisons. Technically involved, yes, but fundamental, and not that much more involved than his other tests.

If you want to see a program that does conversions using a lot of different Bayer methods, then look at Dalibor's converter for Minolta raws:
http://dalibor.cz/minolta/index.htm
--
http://public.xdi.org/=greg.heil
 
Perhaps the Emperor should consider wearing three layers of clothes instead ;-)
Then his tailor would thrice clothe him fakely :) And he would feel all the more foolish when exposed. I do think one can pass a threshold and never be able to reverse oneself; the mental pain would be worse than the cure. GWB will never understand the pain he causes, because he has to torture his mind so much to convince himself of his own righteousness, and further to be the mouthpiece for such pain and suffering. The admission is just too humiliating. Nixon seemed to come around years after he resigned, but those were extreme efforts, and only made obliquely; GWB does not have that fortitude. Sony is a pragmatic company: when the bottom line shifts they will follow the trend, but not until it is profitable in the overall picture. Opinion leaders have to do their job and lead; then the industry eventually has to follow. As it is, they can happily make more money by just printing more densely, without ever selling the more complicated, nuanced story of sensor responsiveness.
--
http://public.xdi.org/=greg.heil
 
Is it the Sony A100 (10MP) that you're comparing images with, or another 10MP camera? I haven't seen any direct comparisons of 6 vs. 10 MP images, but I'm sure there could be some negative consequences in the way different cameras are set up for sharpness, saturation, etc. I've heard this more in relation to the image variables that can be set, not the overall image where general conclusions could be made. It's an interesting premise and, having not read all the responses, I don't know whether anyone has tested this. Then again, it may be related to the manufacturer's learning curve on image production from the cameras. I know my 20D seems a step up in pleasing images compared to my 10D. Thanks.
 
Well, I count four people who guessed at your test, so the sample size is a bit small. However, everyone agreed that the last image was from the 8MP sensor, and the votes were split 50/50 on the 12MP sensor. If the 12MP camera was the 5D, the fact that it has a much larger sensor means its pixel density is about the same as that of the 6MP camera, which would explain the confusion, or no?
If one of the cameras was a 5D, the test is totally invalid: putting a large-sensor camera in the mix is mixing apples and oranges. The whole idea would be to keep the sensor size constant and vary the "sensel" size, and using a 5D would vary the heck out of it; I am sure Petteri would not make so gross a mistake. Aarif, after all, is a very sophisticated photographer who is using a Canon 5D, as well as a Minolta 7D, a Sony A100 and the other 10MP cameras, at least by reputation, and knows full well that the 5D's pixels are quite large and of a totally different class than the 10MP sensors this thread is all about. Petteri probably knows Aarif's reputation and would not insult him with such a ringer.
--
http://public.xdi.org/=greg.heil
 
As follows and interspersed:
I'm sure that Phil Askey on this site would be interested in using such a tool in his reviews, as it appears that NR is becoming more and more of a factor in the overall IQ rating, and the "standard" standard-deviation noise measurements no longer really apply after the test patches are "smudged" by it.
As far as I can determine he is not interested in any such
approach. I think most of Phil's bread must be buttered by JPEG
people. I have suggested these same basic ideas to him (replying
directly to his posts) and never gotten a reply. There are a few
magazines that do more serious tests of the basic sensing
capabilities of cameras, but I think if you want to be really
popular, as DPReview is succeeding at being, you have to appeal
to the JPEG masses. Personally I do not see how that obviates a
more serious, thoughtful look at actual performance; he just
does not seem interested. Not enough butter? It is hard to get
an industry to swing if you cannot even get the basic tests
published in highly visible places. I just cannot understand
what Phil has to gain by ignoring such basic, fundamental
comparisons. Technically involved, yes, but fundamental, and not
that much more involved than his other tests.
Since one generally has control over how much NR is applied when given raw image files, the tool mainly applies to JPEGs, but it can also be applied to the output of raw converters to determine how much detail is lost versus how much the standard deviation of noise in flat textured patches is reduced.
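To make the "smudged patches" point concrete, here is a small sketch (my own illustration with invented numbers, not anything from DPReview's actual test procedure) of how even a mild blur, standing in for in-camera NR, shrinks the standard-deviation noise figure measured on a flat grey patch while the underlying signal is unchanged:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)

# Flat grey test patch (level 128) with additive Gaussian sensor noise.
patch = 128.0 + rng.normal(scale=8.0, size=(256, 256))

# Crude stand-in for in-camera noise reduction: a 3x3 box blur.
smoothed = uniform_filter(patch, size=3)

# The "signal" (a flat grey field) is unchanged, but the measured
# standard deviation drops by roughly a factor of three, flattering
# the camera's noise rating without any real gain in detail.
print(f"std dev before NR: {patch.std():.2f}")
print(f"std dev after NR:  {smoothed.std():.2f}")
```

This is why a standard-deviation figure alone cannot distinguish a genuinely quiet sensor from an aggressively smoothed one.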

I don't know how you can say that Phil Askey is not interested in evolving his reviews along with digital camera developments: he has continuously improved his testing techniques for determining the DR of DSLRs; Simon's reviews now include some analysis of the preservation of low-contrast detail, providing 100% crops of hair at various ISOs; and Phil now provides a detailed image along with the grey patches on his noise-analysis pages, so one can see detail preservation and sharpness compared to various equivalent cameras.

I believe that Phil regularly searches for his name and that of DPReview to see what we are saying about him and this site, and he may even now be consulting with someone with a mathematics background to see what can be done along the lines I have suggested (provided he has time after working on the backlog of DSLR camera reviews and the various Photokina reports). He may even contact me, and I would be happy to provide him with the source of my work so far, or my completed programs when I get time to finish them.
If you want to see a program that does conversions using a lot
of different Bayer methods, then look at Dalibor's converter for
Minolta raws:
http://dalibor.cz/minolta/index.htm
I did look at Dalibor's work, and although it is impressive as far as user interface and the like go, and no doubt tuned for speed of conversion, he only uses various algorithms commonly available in the public domain and doesn't really break any new ground.

Regards, GordonBGood
 
Ken and chuxter, I've been working on a mathematical tool that
would determine how much NR/loss of detail there is in a given
image. I think that, in a similar way that noise measurements
are made on a flat textured (hopefully zero natural variation)
test patch, a determination could be made on a finely detailed,
low-contrast test patch. I have run images of such patches
through the Discrete Cosine Transform algorithm (DCT, as used in
JPEG lossy compression to apply different quantizations to
different "frequencies" of detail). Given that the components
that come out of the DCT are ordered by spatial frequency, it
seems that comparing the magnitudes of the higher components to
the middle-"frequency" components does correlate with the amount
of NR/loss of detail.
Possibly my thinking on this is too fuzzy to argue, but it strikes me that the method needs to be able to distinguish signal from noise at the overlapping spatial frequencies. Some of the more visually annoying image noise, especially after some NR processes, can be quite low contrast and more than a few pixels in extent: the same spatial frequencies that convey image sharpness. Perhaps I'm missing something in your method, in which case, sorry.
I have found this is an adequate tool for determining how much
NR/loss of detail increases with increasing ISO sensitivity in the
same camera, but unfortunately the numbers obtained aren't
absolute
I take it that you normalise the results to the lowest ISO performance and work from there. I can imagine that is good with SLRs where the lowest ISO performance is already very good. I have a smaller sensor (2/3") camera that has enough noise at the lowest ISO to be a poor choice for normalisation, so I would not know how to compare those cameras.
in comparing different cameras due to the variation in Bayer
interpolation algorithms in how much averaging they do across wider
ranges of pixels, although I suppose that could be considered to be
a form of NR with potential loss of fine low contrast detail (ie.
Fuji processing on "angled" sensors does quite a lot of this). I
suppose that I should determine the ratio of frequencies for a very
simple, maximum detail, Bayer interpolation algorithm and use that
as a "standard" for rating all other camera models.
But how do you account for anti-aliasing and the low pass function of the lens which will vary from model to model? That is why I wondered about using a known pattern, containing many spatial frequencies at a range of contrast to give the best chance of finding a method that can separate out MTF (not exposure dependent) and frequency-dependent noise (exposure dependent) at the same time - I assume that noise after NR is no longer white, so it is hard to separate them by any simple method.

Or is that even possible?

Another approach could be to calibrate the transfer functions of lens and anti-aliasing filter in a very low-noise situation (strong exposure of a low contrast target at lowest ISO) followed by noise measurement in a weaker exposure. Is that really your method?

Actually perhaps that is not so hard - two different exposures of a low contrast test chart could suffice - unless the NR were found to be adaptive! Perhaps you could try that with your DCT method, if you did not do that already:)

Certainly standard test charts do not seem to have enough pattern - very little of the image is of the same spatial frequency as the visually dominant noise. Your idea sounds better in that respect.

I'm not very convinced that anyone would use any of these methods seriously; there may even be licensing problems for "commercial" use if (as I suspect is likely) none of this is original!
I have no plan to do any real work on this, but good luck to you!
Ken
 
The following are all actual-pixels crops converted with default
settings from RAW files. One is shot with a 6MP dSLR, one with an
8MP one, and a third with a 12 MP one. Care to take a guess as to
which is which?
Why are they different crops? If you're trying to make the point that these things are indistinguishable at the pixel level, then showing us such varied scenes, well, hides all the differences in how things are rendered at the pixel level.

My guess: #2 looks best, #3 gets the silver, and #1 the bronze prize.

My other guess: what's in focus? In the second photo, with the yellow leaves, if we're talking about resolution, you can see the most detail on the right side of the photo, in the branches; but is this superior contrast, or superior resolution? In the first image, it looks like nothing in the crop is really in focus; it looks like part of the "acceptable" zone when you hyperfocus. I'm also saying this because it seems like you were shooting at closer range there, which means limited DOF. Same with the third image, to a lesser degree.

I'm assuming these are 100% crops, so if this is 100,000 out of 6,000,000 pixels, you're still in good shape. The photo with the trees and yellow leaves is clearly better, though; it also looks more vibrant in terms of color, but of course that's the lighting and subject matter.
 
But only one manufacturer makes 6, 8 and 12MP DSLRs, and the only 12MP camera they make has a very big sensor. He could have been using a D2X, I suppose.
 
http://www.slack.co.uk

His photographs have been taken with many different cameras including E10, D100, D1x, D2x, D200, D50, E1, E330, Kodak 14n and nx and various compact digicams.

The remarkable thing about his shots is that they all look very similar in presentation. His particular 'look' and workflow survives the transition from camera to camera. His 10MP D200 shots are no different.

This tells me that a great deal of supposed characteristics unique to a camera or sensor are NOT the fixed things some people suggest they are.
I don't know if I'm alone, but from what I've seen in all the
new DSLRs the images lack life and are flat.

With around a 1.5 crop factor, the 6MP sensors to me deliver the
best images (and I'm not talking about noise): full of life. But
when they pack more and more pixels into the same size sensors,
it just deteriorates.

I remember a while back looking at images from the Canon 10D and
thinking to myself that they looked better than the 20D's.

I have both a 6MP and a 10MP DSLR, and the 6MP wins easily for
me.

Does anyone else feel the same?

Regards

--
You're welcome to visit my favorite Gallery
http://www.pbase.com/aarif/favorites
--
Galleries and website: http://www.whisperingcat.co.uk/mainindex.htm
 
http://www.slack.co.uk

His photographs have been taken with many different cameras
including E10, D100, D1x, D2x, D200, D50, E1, E330, Kodak 14n and
nx and various compact digicams.

The remarkable thing about his shots is that they all look very
similar in presentation. His particular 'look' and workflow
survives the transition from camera to camera. His 10MP D200 shots
are no different.

This tells me that a great deal of supposed characteristics unique
to a camera or sensor are NOT the fixed things some people suggest
they are.
I agree. The so-called "lifelessness" of an image has a lot more to do with the shooter than the sensor.

--
http://www.pixelstatic.com
 
This tells me that a great deal of supposed characteristics unique
to a camera or sensor are NOT the fixed things some people suggest
they are.
Post-processing and downsampling images to 800*600 hides most of the differences between cameras.
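A quick numerical illustration of this point (my own sketch; the image sizes and noise levels are invented): box-averaging during a big downsample shrinks per-pixel noise, so two cameras with quite different noise levels end up looking much more alike at web sizes.

```python
import numpy as np

rng = np.random.default_rng(1)

def downsample(img, factor):
    """Box-average downsample by an integer factor."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Two hypothetical cameras shooting the same flat grey scene,
# one twice as noisy as the other.
noisier = 128 + rng.normal(scale=12.0, size=(1024, 1024))
cleaner = 128 + rng.normal(scale=6.0, size=(1024, 1024))

# Averaging 4x4 blocks cuts each camera's noise std dev by about a
# factor of 4, so the absolute gap between them shrinks from ~6 to ~1.5.
for label, img in [("noisier", noisier), ("cleaner", cleaner)]:
    small = downsample(img, 4)
    print(f"{label}: full-size std {img.std():.1f} -> downsampled std {small.std():.1f}")
```

Sharpening and contrast tweaks in post-processing mask the remaining difference even further, which is why web-sized galleries are a poor basis for judging sensors.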

--
John

 
Post-processing and downsampling images to 800*600 hides most of
the differences between cameras.
Well, the OP probably saw the so-called lifeless 10MP images on the web, which were no doubt post-processed and downsampled as well. I'm betting that Jon Slack's work looks great in print too, regardless of the camera and sensor used.

--
http://www.pixelstatic.com
 
As interspersed:
Ken and chuxter, I've been working on a mathematical tool that
would determine how much NR/loss of detail there is in a given
image. I think that, in a similar way that noise measurements
are made on a flat textured (hopefully zero natural variation)
test patch, a determination could be made on a finely detailed,
low-contrast test patch. I have run images of such patches
through the Discrete Cosine Transform algorithm (DCT, as used in
JPEG lossy compression to apply different quantizations to
different "frequencies" of detail). Given that the components
that come out of the DCT are ordered by spatial frequency, it
seems that comparing the magnitudes of the higher components to
the middle-"frequency" components does correlate with the amount
of NR/loss of detail.
Possibly my thinking on this is too fuzzy to argue, but it
strikes me that the method needs to be able to distinguish
signal from noise at the overlapping spatial frequencies. Some
of the more visually annoying image noise, especially after some
NR processes, can be quite low contrast and more than a few
pixels in extent: the same spatial frequencies that convey image
sharpness. Perhaps I'm missing something in your method, in
which case, sorry.
My technique currently works best when the image has noise, which provides a high-frequency pattern that is not limited by AA filters or lens glass. I use as my standard a theoretical camera that has only Gaussian noise, treat its Bayer pattern with a minimally smoothing Bayer interpolation to obtain RGB values, and then pass that through the DCT to get "standard" ratios between the highest "frequencies" and the lower "frequencies".

I find that when NR is applied, the ratios of the high frequencies to the lower ones change from these ideal ratios in ways that indicate much less spatial resolution; Bayer interpolation that uses more smoothing is, in a sense, applying some NR through its native processing.
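As a hedged sketch of the idea described above (my own reconstruction with invented band boundaries and numbers, not GordonBGood's actual tool): for pure white Gaussian noise the DCT spreads energy roughly evenly across all "frequencies", so any smoothing or NR shows up as a drop in the ratio of high-band to mid-band DCT energy.

```python
import numpy as np
from scipy.fft import dctn
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)

def band_ratio(img):
    """Mean DCT energy in the top-frequency quarter over the middle band.
    Band boundaries are arbitrary choices for this illustration."""
    c = dctn(img, norm="ortho") ** 2
    n = c.shape[0]
    high = c[3 * n // 4:, 3 * n // 4:].mean()
    mid = c[n // 4: n // 2, n // 4: n // 2].mean()
    return high / mid

noise = rng.normal(size=(64, 64))            # ideal noise-only "patch"
denoised = gaussian_filter(noise, sigma=1.0) # stand-in for NR smoothing

# For white noise the ratio sits near 1; after smoothing the high
# band collapses, so the ratio falls far below 1.
print(f"ratio, raw noise: {band_ratio(noise):.3f}")
print(f"ratio, after NR:  {band_ratio(denoised):.3f}")
```

Comparing a camera's measured ratio against the "ideal noise" baseline is the normalization step: a much lower ratio implies smoothing, whether from explicit NR or from a heavily averaging Bayer interpolation.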
I have found this is an adequate tool for determining how much
NR/loss of detail increases with increasing ISO sensitivity in the
same camera, but unfortunately the numbers obtained aren't
absolute
I take it that you normalise the results to the lowest ISO
performance and work from there. I can imagine that is good with
SLRs where the lowest ISO performance is already very good. I have
a smaller sensor (2/3") camera that has enough noise at the lowest
ISO to be a poor choice for normalisation, so I would not know how
to compare those cameras.
As you say, many new high-MP compact cameras have a lot of noise even at low ISO sensitivities, but that helps plot trends of reduced higher frequencies as sensitivity and NR go up. I think that ultimately I will need a very high-frequency, low-contrast test target in order to determine how much low-contrast detail is lost when noise isn't very detectable. For instance, in all of the 10MP compact cameras for which I have seen reviews to date, I see a considerable loss of low-contrast detail even in the lowest-ISO shots. Note that Simon now includes comparisons of hair shots at various ISO sensitivities for new small-sensor compact cameras; I would like to see Phil do this or something similar for DSLRs.
But how do you account for anti-aliasing and the low pass function
of the lens which will vary from model to model? That is why I
wondered about using a known pattern, containing many spatial
frequencies at a range of contrast to give the best chance of
finding a method that can separate out MTF (not exposure dependent)
and frequency-dependent noise (exposure dependent) at the same
time - I assume that noise after NR is no longer white, so it is
hard to separate them by any simple method.
But AA filters and the low-pass function of lenses affect high-contrast detail as well, so by comparing high-contrast detail to low-contrast detail, one can see how much low-contrast detail is being lost to NR. As for separating frequencies, the DCT responds to any loss of high-frequency detail, so it is a valid tool.
Another approach could be to calibrate the transfer functions of
lens and anti-aliasing filter in a very low-noise situation (strong
exposure of a low contrast target at lowest ISO) followed by noise
measurement in a weaker exposure. Is that really your method?
Actually perhaps that is not so hard - two different exposures of a
low contrast test chart could suffice - unless the NR were found to
be adaptive! Perhaps you could try that with your DCT method, if
you did not do that already:)
Yes, with a standard test chart, one would need both high-contrast and low-contrast patches in order to plot the loss of low-contrast detail relative to the high-contrast detail.
I'm not very convinced that anyone would use any of these methods
seriously; there may even be licensing problems for "commercial"
use if (as I suspect is likely) none of this is original!
I have no plan to do any real work on this, but good luck to you!
Ken
Well, there shouldn't be licensing problems with the DCT, which is pretty old math. And I don't plan to offer a commercial product to analyse the amount of NR; I just want to make a tool available that can help us assess whether we are really making any gains in low-contrast resolution with these new higher-MP cameras.

Regards, GordonBGood
 
