the 5 micron peak is located entirely by his arbitrary choice of
diffraction limit (at 6MP). Even more problematic is that this
arbitrarily imposed diffraction limit is derived from completely
different picture taking situations for different formats. It's f/8
for FF and both varieties of APS sensor, f/7 for FT, f/4 for 2/3 and
f/2.8 for 1/1.8. There is simply no logic to that. One might have
expected (if the diffraction limit idea is valid at all) either a
fixed f-number for all formats or DoF equivalents, but we get neither of those.
The diffraction limit is indeed something the sensor designers consider all the time. But they tend to balance aperture against photosite size for this. In other words, at f/2 the theoretical maximum resolution is achieved with photosite sizes no smaller than 2.7 microns, at f/2.8 it would be 3.8 microns, and so on. Those are the "hard" boundaries they're working against.
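
A quick back-of-the-envelope check of those figures -- a sketch in Python, assuming 550 nm green light and taking the Airy disk diameter 2.44*wavelength*N as the limiting spot size:

WAVELENGTH_UM = 0.55  # green light, in microns (assumed)

def airy_disk_diameter_um(f_number):
    # Diameter of the Airy disk out to its first null, in microns
    return 2.44 * WAVELENGTH_UM * f_number

for n in (2.0, 2.8, 4.0, 8.0):
    print("f/%g: ~%.1f micron spot" % (n, airy_disk_diameter_um(n)))
# f/2 -> ~2.7, f/2.8 -> ~3.8, f/4 -> ~5.4, f/8 -> ~10.7 microns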

--
Thom Hogan
author, Complete Guides to Nikon bodies (19 and counting)
http://www.bythom.com
 
Once the noise has been referred to a common scale, one need only
compare the noise figures; resampling the higher resolution image to
lower resolution will not affect the comparison at the fixed scale,
since this was designed to be independent of where the pixel scale
lies (so long as the reference scale is still present in the image
after resampling; so choose a reference scale sufficiently above the
pixel scale and one needn't worry about this issue).

If the read noise at a fixed reference scale is larger for the small
pixel camera, binning or resampling will not make it better. This is
one of the reasons why it is a good measure -- you can't mess with it
much.
I understand that you are trying to come up with a measure for read
noise that is independent of scaling (I just don't get how that
works). However, my argument was not that we could normalize the
noise levels via resampling, but rather by normalizing the detail via
NR. Thus, there would be two separate and independent measures,
noise and detail, which is not at odds with how I interpret the
measure you are defining. But would not a proper noise
comparison be at the same level of detail, with NR, rather than
downsampling, being the vehicle of normalization? Surely, NR is the
better path to normalizing detail than downsampling, is it not?
I like to think of noise in terms of its power spectrum -- how much noise there is at a particular spatial frequency, or scale, in the image. This is similar in spirit to an MTF chart, which plots how much contrast is transmitted at a particular spatial frequency; except here one is plotting the amount of noise at a particular spatial frequency. Read noise is largely uncorrelated from pixel to pixel (there are line or pattern noises that are correlated, but they typically make up a small component of the read noise); it is spatially white noise. This means that its statistical properties as a function of spatial frequency are entirely determined -- the noise power in a 2d power spectrum is constant for all frequencies. So pick some reference frequency well away from the Nyquist frequency, and use that as a measure of noise at a fixed reference scale.
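
Here is a minimal numerical sketch of that idea (Python with numpy; the image size, noise level and reference frequency are arbitrary illustrative choices):

import numpy as np

# Spatially white read noise: its 2-D power spectrum is flat, so the power
# measured at any fixed reference frequency characterizes the noise.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 3.0, size=(512, 512))  # read noise, sigma = 3

power = np.abs(np.fft.fft2(noise)) ** 2 / noise.size
f = np.fft.fftfreq(512)                        # cycles per pixel
fx, fy = np.meshgrid(f, f)
radius = np.hypot(fx, fy)                      # radial spatial frequency

# Mean power in a thin annulus around a reference frequency well below
# Nyquist (0.5 cycles/pixel)
f_ref = 0.1
band = (radius > f_ref - 0.01) & (radius < f_ref + 0.01)
print(power[band].mean())   # ~9 = sigma^2, and the same at any other f_ref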

Noise reduction is a local operation on neighboring pixels, so all it can do is reduce the noise power at high spatial frequencies; it will not reduce the noise power at lower frequencies, which is why noise reduction often leaves a "blotchy" appearance to images. The blotches represent the scale over which the noise was scrubbed out, and so pixel values within regions of that size have been made uniform, but patches separated by more than the scale of noise reduction are still fluctuating. The beauty of considering the noise power spectrum is that it separates scales -- one finds that the noise power at high frequencies has been reduced by the noise reduction, but the noise power at low frequencies remains untouched. So long as the reference frequency or scale is beyond the scale affected by NR, it remains a valid measure of noise in the capture, and unaffected by the NR.

Downsampling works the same way -- it simply removes all image structure beyond some spatial frequency (the Nyquist frequency of the downsampled image). But again it does not substantively touch the noise spectrum at frequencies lower than this new Nyquist frequency, and so that spectrum remains unaltered if the downsampling was done properly (there are, BTW, bad ways of downsampling that do not preserve the lower frequency noise spectrum, but feed high frequency noise into it, a phenomenon called aliasing).
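
Continuing the numpy sketch above, one can check both claims at once: a small box blur (standing in for NR) and a proper 2x downsample each gut the high frequencies while leaving the power at a low reference frequency essentially alone. The helper measures the spectrum in physical frequency units so that images with different pixel counts are compared at the same scale.

import numpy as np

def psd_at(img, pitch, f_phys, width=0.01):
    # Mean spectral power density near the physical frequency f_phys,
    # for a square image whose pixels are `pitch` units wide.
    p = np.abs(np.fft.fft2(img)) ** 2 / img.size * pitch ** 2
    f = np.fft.fftfreq(img.shape[0], d=pitch)
    fx, fy = np.meshgrid(f, f)
    r = np.hypot(fx, fy)
    return p[(r > f_phys - width) & (r < f_phys + width)].mean()

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 3.0, size=(512, 512))

# Stand-in for NR: a 3x3 box blur (local averaging of neighbours),
# applied as a circular convolution via the FFT.
kernel = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(noise) *
                               np.fft.fft2(kernel, s=noise.shape)))

# Proper 2x downsample: average non-overlapping 2x2 blocks.
down = noise.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print(psd_at(noise, 1, 0.05))    # ~9: the raw white-noise level
print(psd_at(blurred, 1, 0.05))  # close to 9: low freqs barely touched
print(psd_at(down, 2, 0.05))     # ~9: preserved by proper downsampling
print(psd_at(blurred, 1, 0.4))   # tiny: NR scrubbed the high frequencies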

So noise at a fixed reference spatial frequency, well below the frequencies affected by NR or downsampling (which affect only high frequencies if done properly), is a robust measure of the noise of the image capture; and because it is at a fixed scale, it is independent of pixel size, since changing the pixel size changes the Nyquist frequency, and noise measured at the pixel level by the std dev of pixel values is most sensitive to noise near the Nyquist frequency.

Now, one can measure the noise power spectrum directly, but often a quick and dirty way, provided that the data is not contaminated by NR, so that pixel values are uncorrelated at all spatial frequencies, is to apply simple scaling rules to the pixel level std dev of noise. Typically, the correct scaling for uncorrelated noise is to take the pixel level std dev, and divide it by the ratio of the pixel-scale (Nyquist) frequency to the reference frequency. This for instance leads to the result that downsampling reduces noise, because it takes the pixel level std dev of the source image, and divides it by the ratio of the old Nyquist frequency to the new Nyquist frequency, which is a number larger than one; hence the pixel level std dev after downsizing is lower by the ratio of scales. If the noise is correlated (for instance via NR), then the quick and dirty scaling does not apply (this was one of the mistakes Phil made in his blog post from the Fall).
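
A sketch of that quick and dirty rule in action, and of how correlation breaks it (made-up numbers, numpy again):

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(2)
noise = rng.normal(0.0, 8.0, size=(512, 512))  # pixel-level sigma = 8

# Uncorrelated noise: a 2x downsample divides the pixel-level std dev by
# the ratio of old to new Nyquist frequency, i.e. by 2.
down = noise.reshape(256, 2, 256, 2).mean(axis=(1, 3))
print(noise.std(), down.std())   # ~8.0, ~4.0

# Correlated noise (here: a 3x3 box blur standing in for NR) breaks the
# rule -- downsampling the blurred image no longer halves the std dev.
blurred = sliding_window_view(noise, (3, 3)).mean(axis=(2, 3))
down_b = blurred[:508, :508].reshape(254, 2, 254, 2).mean(axis=(1, 3))
print(down_b.std() / blurred.std())   # ~0.83, not 0.5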

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
The
problem I'm beginning to have is finding enough subjects to do such a
survey - I think it means it has to be screen based, not print based.
With the WWW, finding enough suspects is not tough. It's just a simple marketing problem. ;-)

My concerns are many...

If we are viewing 6 to 30 MP images on typical CRT/LCD screens with about 1 MP available, how can that possibly give results that any discerning dpr member will believe?

If we crop the source images to fit that 1 MP CRT/LCD screen, how will we respond to accusations that we and our suspects were "pixel-peeping" and that our conclusions are therefore faulty?

How do we control the quality of the screens being used?

I have the beginning of a vision of an approach...

Select several "coordinators" in different parts of the world. Each of these coordinators works with a local, large camera store (think "sponsor" w/o much $$ involved). Each coordinator/store produces a set of prints to a common standard from our image files. The prints are displayed in these big camera stores and customers are asked to complete a questionnaire about them.

I want to make the pictures interesting and high quality from an artistic perspective. We might have a "contest" on dpr to select the images? Nah, that would be too hard. But at least I want the pictures to function as a "show" for the stores. We might even disguise the purpose of our questionnaire? Some surveys and tests are done that way, as humans have a tendency to be marplots when they think they don't agree with what the hidden "we" are doing.
By selecting one of these types of images, are we not biasing the
results? How can we select one image that doesn't control, to some
extent, the results?
I see that. I have an idea as to how this might be done - but it's
going to be horrendously difficult to set up...more later.
OK, I'm all ears...

--
Charlie Davis
Nikon 5700, Sony R1, Nikon D300
HomePage: http://www.1derful.info
'I'm from Texas. We have meat in our vegetables.' Trenton Doyle Hancock
 
Should we move the Great Experiment Design to a thread of its own
(we'll probably need 150 or more messages to sort it)?
Prolly, but not until we need it?

--
Charlie Davis
Nikon 5700, Sony R1, Nikon D300
HomePage: http://www.1derful.info
'I'm from Texas. We have meat in our vegetables.' Trenton Doyle Hancock
 
Onto your comments on my experiment:
1) make the judging double-blind,
I agree,
Yes. It has to be double blind. And if you really want to test the
test, it should be A-B-A B-A-B.
Yes, good point.
2) use a carefully selected, non-web-based audience
That all depends on the community you are interested in.
No. The problem with a Web-based test is that you can't control the
viewing variable. If I view it on my calibrated high end monitor and
the random Web visitor views it on a cheapo netbook screen, you've
got problems. You can only change one variable if you want to get
reliable results, which is why the same prints or monitor has to be
used with each "testee."
I very much agree. I think making controlled prints will be easier than trying to get several identical monitors.
3) have several carefully selected types of pictures for each "type".
The problem I have with that is that it introduces a variable of
subject matter.
Yes, subject matter matters. We found this out with audio equipment
testing over the years, too.
Yep, I remember the Hi-Fi years...subject matter affects the mood of the suspects and thus their choices. I found that personally I have to play SEVERAL different types of music when I'm evaluating something...and I never use something that I hate to listen to. Same way with photographic images!

--
Charlie Davis
Nikon 5700, Sony R1, Nikon D300
HomePage: http://www.1derful.info
'I'm from Texas. We have meat in our vegetables.' Trenton Doyle Hancock
 
Anyway, I appreciate your responses, but want you to understand that
you are at a much higher level of understanding about all this than
I, but I'd like to think I have the capacity to eventually get what
you're saying and how you deduced it.
I'm with you, S p i t z e r...but I'll state it differently:

"Emil, I don't understand everything you say, but I LOVE the way you say it!"

--
Charlie Davis
Nikon 5700, Sony R1, Nikon D300
HomePage: http://www.1derful.info
'I'm from Texas. We have meat in our vegetables.' Trenton Doyle Hancock
 
I also did one where the 400D was set to ISO 1600, the FZ50 at ISO
100, and both pushed to about 3200.
Here it is:

http://forums.dpreview.com/forums/read.asp?forum=1019&message=31512159
Thanks. Some of the comments I made from memory, and would take back. I prefer the FZ50 version, even downsampled to 400D pixel density. Note that I did no noise reduction or noise-hiding. I simply did "as literal as possible" manual RAW conversions. A tone curve could hide some of the FZ50's red banding noise in the black areas; also, a chroma-NR'ed version of the FZ50 would probably still have a lot more resolution.

--
John

 
Both those explanations were very good. I understand much more than before and I still like the way you say it...except for your continual use of the word "scale"? Do you understand how many times you used it? ;-)

--
Charlie Davis
Nikon 5700, Sony R1, Nikon D300
HomePage: http://www.1derful.info
'I'm from Texas. We have meat in our vegetables.' Trenton Doyle Hancock
 
Not sure I agree with that,
Well, if YOU ain't sure, don't expect me to help much! ;-)
you have to sift through it quite fine to
understand that the 2 micron droop in the curves is simply an
assumption and not based on any theory or measurement and also that
the 5 micron peak is located entirely by his arbitrary choice of
diffraction limit (at 6MP). Even more problematic is that this
arbitrarily imposed diffraction limit is derived from completely
different picture taking situations for different formats. It's f/8
for FF and both varieties of APS sensor, f/7 for FT, f/4 for 2/3 and
f/2.8 for 1/1.8. There is simply no logic to that. One might have
expected (if the diffraction limit idea is valid at all) either a
fixed f-number for all formats or DoF equivalents, but we get neither
of those. The peak, which is your evidence of a sweet spot, is really
based on nothing at all apart from Roger's arbitrary assumptions.
Yes. I contend that Roger was honest and reasonably transparent in making his assumptions. I was too, but not everybody reads well (or completely).

--
Charlie Davis
Nikon 5700, Sony R1, Nikon D300
HomePage: http://www.1derful.info
'I'm from Texas. We have meat in our vegetables.' Trenton Doyle Hancock
 
Here it is:
It seems that the recent Panasonics, like the LX3, are better than the FZ50, with less noise, especially line noise.

LX3 vs 5D2 would be an interesting comparison, or vs a D3 or D700.

I'd buy an LX3, except that I just bought a G9 last year, and my money's been shrinking. Maybe the LX4.

--
John

 
I agree. But my point was that there IS a limit. And as you get close
to the limit, something bad happens...
And my point is that we're a long way from the limit. And you have to
be careful about making assumptions about limits. I once had someone
tell me that we'd never get below a 2.5 micron photosite pitch
because of quantum noise. Hmm. A lot of camera phones must not work,
then ; ).
I only have one...it doesn't work (but I don't think it's because of the tiny photosites.) On the other hand... ;-)

I wish I kept better records of important links. Several weeks ago in another dpr thread about this same @#$% subject, someone posted a link to a great article about an analysis of what the limits were. The approach was to calculate the number of electrons in photosite wells, with knowledge of how many discrete levels were required for humans to not notice the steps in shadow areas. I can't remember exactly, but it seems the conclusion was that a typical 4 micron photosite was the limit? Might have been 5 micron? I wish I had kept that link...perhaps someone reading this thread will have it? Anyway, the article said that below this limit, humans could start to see steps in shadow regions. It was implied that seeing the steps was bad. I thought the approach was good in that it avoided noise discussions!
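
For what it's worth, here is the flavor of that calculation as I understand it. ALL the numbers are placeholders I made up for illustration, not from the article: a full-well density of ~1700 electrons per square micron of photosite, and a shadow sitting 10 stops below clipping.

ELECTRON_DENSITY = 1700   # e- per square micron of photosite, assumed
SHADOW_STOPS = 10         # how far below clipping the shadow sits, assumed

for pitch_um in (2.0, 3.0, 4.0, 5.0, 6.0):
    full_well = ELECTRON_DENSITY * pitch_um ** 2
    shadow_e = full_well / 2 ** SHADOW_STOPS
    print("%g um: full well ~%d e-, shadow signal ~%.1f e-"
          % (pitch_um, full_well, shadow_e))
# At 2 um the 10-stops-down shadow holds only ~7 electrons, so the
# discrete steps between levels become a visible fraction of the signal.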
It won't make the
output binary, but rather have visible intensity steps. This ruins
the shadows.
I think you need to think a little more about how digital data is
recorded. Shadows are already ruined ; ). The problem is bit depth.
If you use 8-bit data you don't have very many bits in which to
record shadow data.
If I recall correctly, the article found a correlation between having small numbers of electrons in wells and visible artifacts.
And we haven't even talked about what happens to the
noise when one noise electron is added to a small number of signal
electrons.
You need to think more open-mindedly. Did you know that electrons have
spin? What if you could make the spin of all photon-generated
electrons the same? Think you might be able to distinguish good from
bad ones then? As I noted elsewhere, I long ago learned not to
arbitrarily place thought hurdles in front of technological advances.
My head is spinning already! ;-)
Although we are not there, everyone should know that there
IS a limit and even getting close to the limit affects the IQ
negatively.
See, that's the kind of leap I don't like. First, implied in your
sentence is that something that we can measure AND which won't get
better DIRECTLY impacts image quality. You haven't even DEFINED image
quality yet! You can't make that logical leap, therefore.
I think you read much more into my above sentence than I wrote into it?
I'm saying that it crossed near the middle because I accidentally
SWAGed 2.5 pixels per line-pair. If I had picked 5 pixels per line
pair, it would not have looked the same.
And that was my point: your definition produced a conclusion. Those
are always dangerous. And it takes more than 2.5 pixels to record a
line pair correctly WITHOUT anti-aliasing filters, let alone with
them.
I picked a small number to avoid accusations that I was unfairly biased against resolution. I didn't anticipate your stance, which is of course correct.
I didn't plot DR.
No, you plotted pixel pitch, which is a stand-in for what, if not DR?
I have never made that connection. I think PP affects ALL the parameters that we commonly talk about: resolution, noise, and DR. When I calculated the two curves, the only variable I used was PP.
Yes, that's an important point that I think many people miss...ie,
that if the camera has higher DR than the viewing means, then what is
it good for?
I think you're still missing the point. If I have more DR than I can
print, I can actually start selecting where I use my available DR.
Most of the high-end practitioners of digital imaging are masters of
compression and decompression of tonal range. They use it to enhance
or minimize local contrast.
No, I got that point...I just didn't word that sentence very well. I do that late at night sometimes...and also in the morning before my first cup.

--
Charlie Davis
Nikon 5700, Sony R1, Nikon D300
HomePage: http://www.1derful.info
'I'm from Texas. We have meat in our vegetables.' Trenton Doyle Hancock
 
And a CCD based sensor often has a different limiting
component than a CMOS one.
The trend in recent DSLRs toward read noise that drops rapidly in proportion to ISO from base ISO up to ISO 800 or 1600 makes the value of binning quite questionable or limited: hardware-binning 4 photosites is basically like reading 4x as many photons, i.e. like readout at 1/4 the ISO, which has, guess what, almost the same read noise (relative to the RAW scale) if you're talking about binning ISO 400 or 800.
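
One way to put numbers on that, all hypothetical, just to show the shape of the argument: suppose read noise in electrons halves with each ISO doubling, i.e. it is constant as a fraction of the RAW scale.

# Hypothetical camera: read noise halves per ISO doubling (constant in DN).
# ISO 100: clips at 40000 e-, 16 e- read noise (assumed).
# ISO 400: clips at 10000 e-, 4 e- read noise (assumed).
# Hardware-binning 4 photosites at ISO 400 sums 4x the charge against one
# dose of 4 e- read noise, but the summed charge still clips at the ISO
# 400 limit of 10000 e-. Per unit of sensor area that is the same trade
# as simply shooting unbinned at ISO 100:
for iso, read_e, clip_e in ((100, 16.0, 40000), (400, 4.0, 10000)):
    print("ISO %d: DR = %.0f:1" % (iso, clip_e / read_e))  # 2500:1 both
# Binning only pays off when read noise in electrons does NOT scale down
# with ISO -- e.g. a single-gain sensor -- where one binned read beats
# four separate reads summed in software (one dose of read noise versus
# two, summing in quadrature).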

Binning is most useful when there is only one gain, or when different gains have similar read noise in electrons.

--
John

 
Unless you answer the basic question of whether you are talking about
per pixel IQ
I wouldn't begin to even think what 'per pixel IQ' might be.
Well, for the time being, most of us are looking at monitors with roughly the same PPI, so pixel-level has some semi-standardized context.

Of course, when I get my dream-100MP monitor, I won't know what pixel-level means anymore, I think.

So much of the nonsense and misunderstanding probably never would have started if we had monitors with variable fine resolution, or a lot more pixels than our cameras, to show all the pixels on the monitor at once. No one would likely be complaining about too many pixels, except for storage/speed concerns.

--
John

 
What I think is that there is a different sweet spot for every ISO,
and every print size. For example, for a 5X7 print at ISO 1600 you
don't need much resolution, but noise is important. For a 12X19 print
size at ISO 100 you need resolution but noise isn't important. There
is no one magic sensor that meets all needs.
Bingo! But I think a consideration of the photographic environment as well as the print size and sensitivity setting is germane.
My opinion, subject to change, is that a high MP sensor does a better
job of meeting all needs than a low MP sensor does. Consider my 1st
example, a 5X7 print at ISO 1600, where it's obvious that a low MP
count sensor should work fine. A high MP sensor will probably work
almost as well, despite being noisier on a per pixel basis, because
you simply won't see much noise at 5X7. The higher resolution doesn't
help, but it doesn't hurt much either.
I almost agree. In my diagram, I indicated that a sensor in the MIDDLE is the best all-round choice. Besides, even the low MP sensors today have enough pixels to do big prints!
Now turn it around. For a 12X19 print at ISO 100 a high MP sensor
will work well, but a low MP sensor won't. Now you'll see the
benefits of a high MP sensor.
My above contention is that we don't really have new cameras with low MP sensors in them any more...all modern cameras have enough pixels to do that 12 x 19 print. We have to go drag up OLD cameras to get small pixel counts (well, except for the D2H).
I might be wrong here; I don't think we have the data to know, too
much going on.
There is a subthread...we are going to try (again) to test these issues.

--
Charlie Davis
Nikon 5700, Sony R1, Nikon D300
HomePage: http://www.1derful.info
'I'm from Texas. We have meat in our vegetables.' Trenton Doyle Hancock
 
How soon we forget CRTs!
I can proudly state that only my microwave oven has a vacuum tube. ;-)

No...wait a minute...are CFLs "vacuum tubes"? OK, I'm ready for LED illumination...

--
Charlie Davis
Nikon 5700, Sony R1, Nikon D300
HomePage: http://www.1derful.info
'I'm from Texas. We have meat in our vegetables.' Trenton Doyle Hancock
 
First of all, the definition that
Clark uses for DR is the norm in the engineering and scientific world
-- max recordable signal relative to the noise level with no signal.
Absolutely correct, and a useful measurement to know. However, where
people keep getting confused by these numbers is in the use of the
word "stops." In photography, stops has a specific meaning.
Personally, I wish all the engineering numbers were reported as Max
SN ratio or dB numbers. Then we wouldn't have this problem.
I like that suggestion, but I don't think it would eliminate this problem entirely...you have more faith in humans than I do?
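
For reference, the two reporting conventions are a fixed factor apart, so converting is easy. A quick sketch with made-up sensor numbers:

import math

full_well = 40000.0   # e- at clipping, hypothetical
read_noise = 5.0      # e- rms with no signal, hypothetical

ratio = full_well / read_noise
print("%.1f dB" % (20 * math.log10(ratio)))   # ~78.1 dB
print("%.1f 'stops'" % math.log2(ratio))      # ~13.0 stops
# One stop is a factor of 2 = 20*log10(2) ~ 6.02 dB, so dB ~ 6 * stops.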

--
Charlie Davis
Nikon 5700, Sony R1, Nikon D300
HomePage: http://www.1derful.info
'I'm from Texas. We have meat in our vegetables.' Trenton Doyle Hancock
 
I am amazed at the Apparent Quality (AQ ;) ) of the posts in that thread and others related to it.

At what level is the discussion? Is it anything camera manufacturers can use for progress, or are they in a different league altogether?
 
I am in the middle ground.
See my other post on this:
http://forums.dpreview.com/forums/read.asp?forum=1018&message=31725717

For the same sensor area at the same enlargement:
pure small pixels, summed, have the same noise,
and the same dynamic range.

But the pixels are not pure:
there is a clip point, and there is noise.
It is a little difficult for me to grasp, but small pixels taken alone have more noise,
and do they clip at a different point relative to the noise?
Anyway, whatever clips, that information is lost.
It is unclear to me how that works out for the overall noise and dynamic range.

And what about less than 100% pixel area, even when the light collection is 100% -- microlenses?

Questions for people who know a lot about sensors.
If there is a sweet spot:
do sensor tech advances shift the sweet spot toward smaller pixels?
Question 2:
should reviewers add to the usual 100% pixel show
the same percentage of the sensor at the same enlargement on screen?
Berl.
 
The ISO standard is most closely related to the RAW level that yields
a fixed percentage of RAW saturation. So fixing the ISO determines
an exposure that yields a particular RAW level, and the change in
exposure that yields a change in one RAW level. One wants to then
understand how many electrons per unit area that change in exposure
corresponds to. This is what the above figure of merit measures.
Although, as Iliah Borg pointed out, most manufacturers use the standard's Recommended Exposure Index rating, which allows the manufacturer to declare the 'ISO' rating to be anything they like. This can be used to finesse the noise by setting the metering anywhere in the available raw DR, particularly if the JPEG DR is compressed. Leave lower headroom, giving more exposure, more photons = lower shot noise - hey presto.
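
A sketch of the figure of merit described in the quoted paragraph, with hypothetical numbers (the gain and pitch are inventions for illustration):

gain_e_per_dn = 2.5     # electrons per RAW level at this ISO, assumed
pixel_pitch_um = 6.0    # microns, assumed

# Electrons per unit area corresponding to a one-level change in RAW:
print(gain_e_per_dn / pixel_pitch_um ** 2)   # ~0.069 e-/um^2 per level
# Under REI, a maker can meter the same scene higher in the RAW range
# (leaving less headroom above it), i.e. give more exposure per nominal
# 'ISO' -- more photons per um^2, hence less shot noise for the same label.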

--
Bob

 
