Pixel Density (Thread 5)

On behalf of Kim, following from previous thread
(so I can answer in the next post)
http://forums.dpreview.com/forums/read.asp?forum=1012&message=37573212
So how do you explain it then?

Indeed, I do accept their results without dispute. I also explained, I thought clearly, where that difference comes from - the G11 has a stop better read noise, and since read noise is the lower end of the DR equation, that gives a stop better DR, just as DxO has found and also just as my example illustrated, hardly any difference in the highlights, G11 winning in the shadows.
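To put numbers on that, here is a minimal sketch of the DR arithmetic. The full-well and read-noise figures are hypothetical, chosen only to show the shape of the calculation, not DxO's measured G10/G11 values:

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    # Engineering DR: ratio of the largest recordable signal (full well)
    # to the noise floor (read noise), expressed in stops.
    return math.log2(full_well_e / read_noise_e)

# Hypothetical figures for illustration only, not measured G10/G11 values.
print(f"8 e- read noise: {dynamic_range_stops(8000, 8):.1f} stops")  # 10.0
print(f"4 e- read noise: {dynamic_range_stops(8000, 4):.1f} stops")  # 11.0
# Halving the read noise buys exactly one stop of DR at the shadow end;
# the highlight (full-well) end is untouched.
```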
Shot Noise

Shot noise is caused by the random arrival of photons. This is a fundamental trait of light. Since each photon is an independent event, the arrival of any given photon cannot be precisely predicted; instead the probability of its arrival in a given time period is governed by a Poisson distribution. With a large enough sample, a graph plotting the arrival of photons will plot the familiar bell curve.

Shot noise is most apparent when collecting a relatively small number of photons. It can be reduced by collecting more photons, either with a longer exposure or by combining multiple frames.
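A quick simulation makes the square-root behaviour concrete (a sketch, assuming ideal Poisson photon arrivals and nothing else):

```python
import numpy as np

rng = np.random.default_rng(0)

# Photon arrivals are Poisson-distributed, so for a mean of N photons the
# standard deviation (shot noise) is sqrt(N), and the SNR is also sqrt(N).
for n in (16, 100, 10_000):
    samples = rng.poisson(n, size=100_000)
    print(f"mean {n:>6}: SNR = {samples.mean() / samples.std():5.1f} "
          f"(theory sqrt(N) = {n ** 0.5:5.1f})")
```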
CCD Read Noise (On-chip)

There are several on-chip sources of noise that can affect a CCD. CCD manufacturers typically combine all of the on-chip noise sources and express this noise as a number of electrons RMS (e.g. 15 e⁻ RMS).

CCD Read Noise is a fundamental trait of CCDs, from the one in an inexpensive webcam to the CCDs in the Hubble Space Telescope. CCD read noise ultimately limits a camera’s signal to noise ratio, but as long as this noise exhibits a Gaussian, or normal distribution, its influence on your images can be reduced by combining frames and standard image processing techniques.


So shot noise is worse when you collect fewer photons per pixel. The pixels in the G10 are roughly 33% smaller, which increases the influence of shot noise.

The read noise in the G11 is roughly 40% smaller, which improves the ratio of noise against signal, but the signal is also 33% smaller because we collect that many fewer photons.

So please explain why just talking about the read noise and ignoring the per-pixel photon count (signal) and shot noise is acceptable when both follow a Gaussian distribution ...
--
Bob
 
I am guessing that wavelength of light / photon size (depending on your persuasion) might play a crucial and limiting role here [at 100 MP, the pixel area on a FF sensor of 24x36 mm would be 0.00000864 mm²]
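For anyone checking the arithmetic, a two-line sketch (the pitch is the square root of the pixel area, assuming square pixels and ignoring gaps between them):

```python
pixel_area_mm2 = (24 * 36) / 100e6            # 24x36 mm sensor, 100 MP
pixel_pitch_um = (pixel_area_mm2 ** 0.5) * 1000

print(f"pixel area:  {pixel_area_mm2:.8f} mm^2")  # 0.00000864 mm^2
print(f"pixel pitch: {pixel_pitch_um:.2f} um")    # ~2.94 um, still ~5x a green wavelength
```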
 
Kim Letkeman wrote
So how do you explain it then?

Indeed, I do accept their results without dispute. I also explained, I thought clearly, where that difference comes from - the G11 has a stop better read noise, and since read noise is the lower end of the DR equation, that gives a stop better DR, just as DxO has found and also just as my example illustrated, hardly any difference in the highlights, G11 winning in the shadows.
Shot Noise

Shot noise is caused by the random arrival of photons. This is a fundamental trait of light. Since each photon is an independent event, the arrival of any given photon cannot be precisely predicted; instead the probability of its arrival in a given time period is governed by a Poisson distribution. With a large enough sample, a graph plotting the arrival of photons will plot the familiar bell curve.

Shot noise is most apparent when collecting a relatively small number of photons. It can be reduced by collecting more photons, either with a longer exposure or by combining multiple frames.
I think I explained this several times in the previous threads. The signal to noise ratio due to shot noise is determined by the number of photons collected in samples of an image, often but not necessarily pixels. Shot noise is, as your source suggests but doesn't say, in the image: it is not generated by the sensor, it is sampled by the sensor.
CCD Read Noise (On-chip)

There are several on-chip sources of noise that can affect a CCD. CCD manufacturers typically combine all of the on-chip noise sources and express this noise as a number of electrons RMS (e.g. 15 e⁻ RMS).

CCD Read Noise is a fundamental trait of CCDs, from the one in an inexpensive webcam to the CCDs in the Hubble Space Telescope. CCD read noise ultimately limits a camera’s signal to noise ratio, but as long as this noise exhibits a Gaussian, or normal distribution, its influence on your images can be reduced by combining frames and standard image processing techniques.

So shot noise is worse when you collect fewer photons per pixel.
That is the case, but you can't directly compare the number of photons collected in different sized pixels. If we sample the same light with different sized pixels, although the variance will be greater in the small pixels, the spatial extent of the noise is smaller - we are sampling the noise more finely, along with the detail.
The pixels in the G10 are roughly 33% smaller, which increases the influence of shot noise.
It changes the observed nature of the shot noise; it doesn't 'increase the influence'. The noise is sharper, but so is the scene detail. Smooth them to the same degree and we're back where we started. The shot noise has not changed, only the observation has, and that observation can be changed on viewing or on taking.
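A sketch of that claim, assuming pure shot noise and 2x2 binning as the 'smoothing' (the photon counts are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
mean_small = 250                     # hypothetical mean photons per small pixel

# The same light sampled two ways: big pixels (4x the area), and small
# pixels at twice the linear resolution.
big = rng.poisson(4 * mean_small, size=(256, 256))
small = rng.poisson(mean_small, size=(512, 512))

# 'Smooth them to the same degree': bin the small pixels 2x2 back to the big grid.
binned = small.reshape(256, 2, 256, 2).sum(axis=(1, 3))

print(f"big pixel SNR:    {big.mean() / big.std():.1f}")        # ~31.6
print(f"binned small SNR: {binned.mean() / binned.std():.1f}")  # the same, to sampling error
```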
The read noise in the G11 is roughly 40% smaller, which improves the ratio of noise against signal, but the signal is also 33% smaller because we collect that many fewer photons.
Which is what I said: the read noise is the determining factor. However, in the highlights the read noise is irrelevant; it is only the shot noise which is visible.
So please explain why just talking about the read noise and ignoring the per-pixel photon count (signal) and shot noise is acceptable when both follow a Gaussian distribution ...
I don't know what you mean. Do you mean why I'm wont to ignore the read noise except for shadow noise? That is simple. Assume that we have a signal in the shadows of 16 photoelectrons and a read noise of 4. The shot noise is sqrt(16) = 4, so the combined noise is sqrt(16+16) = 5.7. Without the read noise the SNR would be 4; with it, it is 2.8, so the read noise is significant.

Now imagine a highlight with a photoelectron count of 1024: the shot noise is 32. The read noise is still 4, so the combined noise is sqrt(1024+16) = 32.25. In this case the read noise has reduced the SNR from 32 to 31.75. It is insignificant.
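The same two cases in code, for anyone who wants to vary the numbers (independent noise sources add in quadrature):

```python
import math

def snr(signal_e, read_noise_e):
    # Shot noise variance equals the signal (Poisson); read noise adds in quadrature.
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

print(f"shadow, 16 e-:      {snr(16, 4):.1f} (vs {snr(16, 0):.1f} with no read noise)")
print(f"highlight, 1024 e-: {snr(1024, 4):.2f} (vs {snr(1024, 0):.2f} with no read noise)")
# shadow: 2.8 vs 4.0 -- read noise is significant;
# highlight: 31.75 vs 32.00 -- it is insignificant.
```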

--
Bob
 
I am afraid that, now that I see things in terms of variables, it appears that you want to explain away differences in IQ by playing the variable card every time a camera with a lower PD appears to have as good as or better IQ than a camera with the higher PD.
Yeah, imagine that -- variables matter. For example, a low PD system with a sharp lens outresolving a high PD system with a dull lens. A low PD system with a more efficient sensor having lower noise than a high PD system with a less efficient sensor.

Obviously, I'm up to no good taking these variables into account.
I am not accusing you of being up to anything. I am just pointing out that, after I asked for information about how you control for variables, you went to quite a lot of trouble in a few posts to make it clear to me that camera variables are insignificant in the grand scheme of things and can effectively be ignored when running the experiments. The posts are all there; you can check up on what you said if you have forgotten. Naturally I believed you, and that shut down my argument. But here you are using those in-camera variables to explain the differences in another set of images, in a way that implies that the variables are important to such an extent that they account for all the difference in IQ in these images.

Well, this opens back up my original concern that I had as a result of your friend's wide-ranging variable attack on Kim's D300 and D700 comparison, which is that if you can't control these variables and they do indeed matter, then your experiment would lead to incorrect results and conclusions. This is what always happens when an experiment does not control all its variables; such an experiment does not meet scientific experimental standards. Of course, forum members who are laymen like me would not know this.

I am simply asking an easy-to-understand, logically consistent, and very straightforward question about variables, one that has not changed at all but takes into account and accepts the various things you have been saying to me and to others.

All I have done is note your various responses and put them together and raise them with you again, because you appear to have directly contradicted yourself over the issue.

It's not my fault that you are simultaneously holding two positions on whether or not camera variables are important to the resulting images, and that you appear to use whichever one suits the occasion.
 
The F550 arrives soon and many are hoping there will be a noticeable improvement over previous models. I think you mentioned Fuji has a particularly good dual micro-lens set-up. Would you expect the 16 MP BSI sensor and EXR pixel binning technique to break new ground, or be no more than a small incremental step at best?

Sorry if this is only loosely about PD. Thanks,

Nick
I think it's hard to tell. This is a change of tech from CCD to BSI MOS. I don't think that the dual-element microlens can work with BSI, but then BSI doesn't need it. My guess would be that it will be an incremental step, because mostly the first outing of a new technology is released when it can just better the old one. The point about BSI MOS is that it still has somewhere to go, while the old CCD processes were at the end of their development - improving them would have needed new fab lines, which was never going to happen.
--
Bob
 
I am guessing that wavelength of light / photon size (depending on your persuasion) might play a crucial and limiting role here [at 100 MP, the pixel area on a FF sensor of 24x36 mm would be 0.00000864 mm²]
In terms of how much additional IQ higher PDs deliver, that's a whole other game. Long before we get to the HUP (Heisenberg Uncertainty Principle) for pixels smaller than the wavelength of light, we have to consider diffraction, motion blur, camera shake, and lens sharpness.

For example, if we use pixels half the size (quadruple the pixel count), then the shutter speed will have to be twice as fast to have the same amount of motion blur. A shutter speed twice as fast puts half the light on the sensor, which results in 41% more apparent photon noise. Couple this with the fact that each pixel gets 1/4 the light, anyway, well, I think you can see that this extra IQ is already becoming difficult to get.
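The 41% figure is just sqrt(2) - 1, since SNR goes as the square root of the collected photons; a one-liner to verify (assuming shot noise only):

```python
import math

n = 10_000                              # photons at the original shutter speed
snr_full = n / math.sqrt(n)             # SNR = sqrt(N)
snr_half = (n / 2) / math.sqrt(n / 2)   # twice the shutter speed: half the photons

print(f"relative shot noise increase: {snr_full / snr_half - 1:.0%}")  # 41%
```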

But, there's a bright side. A higher PD means a less aggressive AA filter, and means that photos would require less sharpening. And even binning (as opposed to NR) would result in a more accurate representation, since each binned pixel would be made from more than one color, rather than one color per pixel in a Bayer CFA.

So, obviously, halving the pixel size doesn't double the IQ. And halving it again will improve IQ even less. At what point does it become silly? We're well past that point for people who just display pics on the web, as even a 1200 x 900 pic requires only 1 MP.
 
I am afraid that, now that I see things in terms of variables, it appears that you want to explain away differences in IQ by playing the variable card every time a camera with a lower PD appears to have as good as or better IQ than a camera with the higher PD.
Yeah, imagine that -- variables matter. For example, a low PD system with a sharp lens outresolving a high PD system with a dull lens. A low PD system with a more efficient sensor having lower noise than a high PD system with a less efficient sensor.

Obviously, I'm up to no good taking these variables into account.
I am not accusing you of being up to anything. I am just pointing out that, after I asked for information about how you control for variables, you went to quite a lot of trouble in a few posts to make it clear to me that camera variables are insignificant in the grand scheme of things and can effectively be ignored when running the experiments. The posts are all there; you can check up on what you said if you have forgotten. Naturally I believed you, and that shut down my argument. But here you are using those in-camera variables to explain the differences in another set of images, in a way that implies that the variables are important to such an extent that they account for all the difference in IQ in these images.

Well, this opens back up my original concern that I had as a result of your friend's wide-ranging variable attack on Kim's D300 and D700 comparison, which is that if you can't control these variables and they do indeed matter, then your experiment would lead to incorrect results and conclusions. This is what always happens when an experiment does not control all its variables; such an experiment does not meet scientific experimental standards. Of course, forum members who are laymen like me would not know this.

I am simply asking an easy-to-understand, logically consistent, and very straightforward question about variables, one that has not changed at all but takes into account and accepts the various things you have been saying to me and to others.

All I have done is note your various responses and put them together and raise them with you again, because you appear to have directly contradicted yourself over the issue.

It's not my fault that you are simultaneously holding two positions on whether or not camera variables are important to the resulting images, and that you appear to use whichever one suits the occasion.
I think the problem is that you're asking for general rules and principles, whereas each demonstration and experiment has to stand on its own. The confounding factors are different; it depends what the demo is trying to demonstrate. Some variables can change wildly without affecting much, some are very critical. In all experiment design, one has to be clear-minded about precisely what one is trying to show, and I don't think Kim has done this, which is why his demonstrations are open to dispute. If you compare my G10/G11 demo with his D700/D5000 demonstration, you will see that while I was very careful to ensure that both cameras had exactly the same processing, with no noise processing and the same tone curves, he was using ex-camera jpegs, where the application of noise smoothing may be very different, tone curves may be different, etc. All these things affect noise and sharpness, which was what his demo was about comparing.
Let's try another analogy.

Suppose we have an assertion that large cars are generally heavier than smaller ones. I think that is likely true.

I could mount an experiment which showed a particular small car was heavier than a big one, because there are examples, but I would not have shown either that the trend isn't there or that all small cars are heavier than big ones.

Alternatively, I could weigh a small car full of passengers and baggage and compare it with an empty large car. Once again, I haven't shown anything and have introduced confounding variables.

On the other hand, I could plot the weight of a number of cars against their length. If I found a correlation between length and weight, I would show that the generalisation I was making had some truth.
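That third design is easy to sketch with made-up data (the scatter below stands in for the uncontrolled variables such as trim, body style, and load):

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up data: car length (m) vs kerb weight (kg).
length = rng.uniform(3.5, 5.2, size=200)
weight = 400 * length - 600 + rng.normal(0, 150, size=200)

r = np.corrcoef(length, weight)[0, 1]
print(f"correlation r = {r:.2f}")
# A strong r supports the general trend without claiming that every long
# car outweighs every short one.
```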

It all depends on what the assertion is that is being made, and whether one's trying to prove it or disprove it. Disproving things is much easier.

--
Bob
 
I am afraid that, now that I see things in terms of variables, it appears that you want to explain away differences in IQ by playing the variable card every time a camera with a lower PD appears to have as good as or better IQ than a camera with the higher PD.
Yeah, imagine that -- variables matter. For example, a low PD system with a sharp lens outresolving a high PD system with a dull lens. A low PD system with a more efficient sensor having lower noise than a high PD system with a less efficient sensor.

Obviously, I'm up to no good taking these variables into account.
I am not accusing you of being up to anything. I am just pointing out that, after I asked for information about how you control for variables, you went to quite a lot of trouble in a few posts to make it clear to me that camera variables are insignificant in the grand scheme of things...
Link and quote where I said that, please.
...and can effectively be ignored when running the experiments.
Some variables are insignificant for some photos.
The posts are all there; you can check up on what you said if you have forgotten.
Yeah, but you're the one saying I said what I said I didn't say, or taking what I said out of context, so it's up to you to back it up. When you provide the link, I'll quote from it where you misinterpreted.
Naturally I believed you, and that shut down my argument. But here you are using those in-camera variables to explain the differences in another set of images, in a way that implies that the variables are important to such an extent that they account for all the difference in IQ in these images.
Yes or no -- did I not link and quote this for you long ago:

http://www.josephjamesphotography.com/equivalence/#equivalence

In other words, Equivalent images are not "equal", but instead have five equal attributes which all correspond to the visual properties of the final image. So, while equivalent images on different formats will usually have the most similar visual properties, they will not be identical, as other visual elements, such as noise, detail, flare, moiré, distortion, bokeh, etc., will not necessarily be the same, and sometimes, radically different.

http://www.josephjamesphotography.com/equivalence/#shot

There are, of course, other sources of noise, such as thermal noise, which plays a central role in long exposures, PRNU (Pixel Response Non-Uniformity) noise, which plays an important role in the highlights of the image, as well as other sources of noise. So, noise is, of course, even more complicated than this essay makes it appear, and for some specific forms of photography (such as astrophotography) we may find that the noise is very different for equivalent images in some situations, much in the same way that corner sharpness is very different for equivalent images in some situations.

I mean, dude, what the hell?
Well, this opens back up my original concern that I had as a result of your friend's wide-ranging variable attack on Kim's D300 and D700 comparison...
This was directly addressed to you, and you made no comment (I'll quote the whole thing, since the point was not made the first time):

http://forums.dpreview.com/forums/read.asp?forum=1012&message=37566202

On 'not valid'

"We" (the gang of four) have not said that his experiment was "not valid" -- all experiments are "valid" -- it's a matter of the conditions under which the apparent conclusions are valid.

In this case, the experiment was likely more a demonstration of thermal noise, not pixel density. Also, if the pics were OOC (out of the camera) jpgs, well, then, all bets are off because who knows what kind of processing goes on in that case.

However, let's go the whole nine yards and say that particular test showed that the higher pixel density was the culprit for those particular cameras. The fact that it is not generally true, since there are many examples to the contrary, would simply point to the fact that the engineers of that particular sensor made some, shall we say, "unoptimum" choices in its design.

For example, the Sony A900 has horrific low light performance compared to the D3x, and both have the same pixel density. So, what this points to is an "unoptimum" design choice by the Sony team for the sensor design if the camera was intended to be used in low light (the "unoptimum" sensor design may have advantages in other realms, however).

So, you can't point to a single test in a special circumstance (long exposure times) and make the conclusion that higher pixel densities (for a given sensor size) result in worse noise performance.

I am simply asking an easy-to-understand, logically consistent, and very straightforward question about variables, one that has not changed at all but takes into account and accepts the various things you have been saying to me and to others.

All I have done is note your various responses and put them together and raise them with you again, because you appear to have directly contradicted yourself over the issue.

It's not my fault that you are simultaneously holding two positions on whether or not camera variables are important to the resulting images, and that you appear to use whichever one suits the occasion.
It is your fault in that you are not paying attention to much of what is said, and misinterpreting much, if not all, of the rest.

As I said, link and quote anything I said that is at odds with what I am saying now. Anything. Anything at all.

OK, I'm off for a bit. Light me up. I'll be more than happy to address each and every point when I get back.
 
Could you please comment on my query - 8 lines up.
Just did, like to take my challenge?
I think we have crossed over. My above query is about the upcoming F550. Care to speculate?

Nick
Done it, care to take my challenge? Someone has to go first. I know it's hard, that's the point.
--
First of all I am the world's worst candidate because I examine pics this closely infrequently. I'd say it's impossible to know which is which PD. All I will say is pic no6 seemed to be the best to my eyes, the rest indistinguishable. It's getting late, so off to bed.

Nick
 
Anyhow, since small pixels are clearly inferior, it should be easy to order these by pixel size. Off you go.
OK, I'll play. Looking mostly at the shadow noise I think cam7 is clearly the best, cam5 probably the worst and the rest are pretty close (sure hope that cam7 isn't the D3s ;-)).
 
Great Bustard wrote:

I am just pointing out that after I asked for the information about how you control for variables, you went to quite a lot of trouble in a few posts to make it clear to me that camera variables are insignificant in the grand scheme of things...
Link and quote where I said that, please.
However, as I've said, there are many variables that often go unspoken, usually because they are such rare occurrences as to not warrant "bogging down" what I'm saying by mentioning them over and over (e.g. thermal noise in long exposures).

http://forums.dpreview.com/forums/read.asp?forum=1012&message=37565776

I'm not upset. I'm just saying that most of the time many of the variables are insignificant. I don't account for the spin and curvature of the Earth when calculating how far a baseball can be thrown. But I do account for the spin and curvature of the Earth when calculating where an artillery shell will land.

http://forums.dpreview.com/forums/read.asp?forum=1012&message=37572246

Since most, if not all, demonstrations of Equivalence and pixel density (and, by the way, the demonstration in the OP was one of pixel density, not Equivalence) are not at such long exposures, the variable of thermal noise is all but eliminated.

http://forums.dpreview.com/forums/read.asp?forum=1012&message=37563219

OK, I may have misunderstood the nuances contained in these posts; I got the distinct impression that you were minimising the effect of camera variables. If you were not, this simply strengthens my main argument about the variables in the experiment making it unscientific.
Naturally I believed you, and that shut down my argument. But here you are using those in-camera variables to explain the differences in another set of images, in a way that implies that the variables are important to such an extent that they account for all the difference in IQ in these images.
Yes or no -- did I not link and quote this for you long ago:

http://www.josephjamesphotography.com/equivalence/#equivalence

/ So, while equivalent images on different formats will usually have the most similar visual properties, they will not be identical, as other visual elements, such as noise, detail, flare, moiré, distortion, bokeh, etc., will not necessarily be the same, and sometimes, radically different.
So there are lots of variables that can affect the two images. Without controlling these the experiment won’t stand up to scientific scrutiny.
http://www.josephjamesphotography.com/equivalence/#shot

There are other sources of noise, such as thermal noise, which plays a central role in long exposures, PRNU (Pixel Response Non-Uniformity) noise, which plays an important role in the highlights of the image, as well as other sources of noise.
So there are lots of variables that can affect the two images. Without controlling these the experiment won’t stand up to scientific scrutiny.
Well, this opens back up my original concern that I had as a result of your friend's wide-ranging variable attack on Kim's D300 and D700 comparison...
This was directly addressed to you, and you made no comment (I'll quote the whole thing, since the point was not made the first time):

http://forums.dpreview.com/forums/read.asp?forum=1012&message=37566202
That was not the main point; here is the rest:

/ ... which is that if you can't control these variables and they do indeed matter, then your experiment would lead to incorrect results and conclusions. This is what always happens when an experiment does not control all its variables; such an experiment does not meet scientific experimental standards. Of course, forum members who are laymen like me would not know this.

If variables affect the results and they are not controlled, then the results and conclusions are suspect. This is why peer review exists: to weed out this sort of thing. It's a simple issue that transcends your particular experiment.

Take your Poisson distribution of light hitting the sensor example. Somewhere in the literature is an experiment that had all its variables controlled, which proved that that was how light falls on an object. Either an experiment meets these standards, or it's not a scientific experiment. Your uncontrolled variables open the experiment to doubt, in which case people have to choose whether or not to "believe" you.
I am simply asking an easy-to-understand, logically consistent, and very straightforward question about variables, one that has not changed at all but takes into account and accepts the various things you have been saying to me and to others.

All I have done is note your various responses and put them together and raise them with you again, because you appear to have directly contradicted yourself over the issue.
It is your fault in that you are not paying attention to much of what is said, and misinterpreting much, if not all, of the rest.
What you do is reference me back to your theory or your prior explanations of your theory. This is not about your theory, this is about your use of uncontrolled variables in your experiment and the implications that follow from that.

My observation is that with NR the high PD images are far better but NR is a variable in your experiment. Without NR the low PD images seem to be better but you explain this difference away by citing differences (variables) between the cameras.

The entire basis of your conclusions and explanations around IQ in the images is based on uncontrolled variables. This is nothing to do with the theory; it's a higher-order and more general issue. It does not negate the theory, but I don't think this experiment supports it either.
OK, I'm off for a bit. Light me up. I'll be more than happy to address each and every point when I get back.
Goodnight, I am off to sleep now.
 
For example, if we use pixels half the size (quadruple the pixel count), then the shutter speed will have to be twice as fast to have the same amount of motion blur. A shutter speed twice as fast puts half the light on the sensor, which results in 41% more apparent photon noise. Couple this with the fact that each pixel gets 1/4 the light, anyway, well, I think you can see that this extra IQ is already becoming difficult to get.
I was about to agree with the overall message of the above statement but several things have bothered me.
For example, if we use pixels half the size (quadruple the pixel count)
If you halve the pixel size, one would expect the count to double, not quadruple. Now if you meant halving the pixel dimensions, I would concur.
then the shutter speed will have to be twice as fast to have the same amount of motion blur.
This statement seems troubling to me. It is my understanding that motion blur is directly impacted by shutter speed so a shutter speed twice as fast should not result in the same amount of motion blur but half. In order to maintain motion blur, it is ISO that must be raised. This is the real reason photon noise is increased.
Couple this with the fact that each pixel gets 1/4 the light, anyway,
You wrote that shutter speed should be doubled, yet each pixel gets 1/4 the light. Now you got me doubly confused.
 
I think the problem is that you're asking for general rules and principles, whereas each demonstration and experiment has to stand on its own.
I am afraid not; in experiments, the reason variables have to be controlled is so that the experiments are replicable. The best that can be said about the results of one of these experiments (assuming that NR is not used, because that is another story) is that for a given pair of cameras X and Y, X had better IQ than Y. Without controlling all variables, you can't generalise the result to include all pairs of cameras. But generalised conclusions are being implied by the gang of 4 all the time.
The confounding factors are different; it depends what the demo is trying to demonstrate. Some variables can change wildly without affecting much, some are very critical. In all experiment design, one has to be clear-minded about precisely what one is trying to show, and I don't think Kim has done this, which is why his demonstrations are open to dispute. If you compare my G10/G11 demo with his D700/D5000 demonstration, you will see that while I was very careful to ensure that both cameras had exactly the same processing, with no noise processing and the same tone curves, he was using ex-camera jpegs, where the application of noise smoothing may be very different, tone curves may be different, etc. All these things affect noise and sharpness, which was what his demo was about comparing.
Then you have proved that for the G10/G11, camera "X" has higher IQ than Y; nothing else can be concluded or implied about other pairs of cameras, or about lower vs higher PD cameras.
Let's try another analogy.

Suppose we have an assertion that large cars are generally heavier than smaller ones. I think that is likely true.

I could mount an experiment which showed a particular small car was heavier than a big one, because there are examples, but I would not have shown either that the trend isn't there or that all small cars are heavier than big ones.
We are in 100% agreement. This is a new experience for me.
Alternatively, I could weigh a small car full of passengers and baggage and compare it with an empty large car. Once again, I haven't shown anything and have introduced confounding variables.
Again I agree.
On the other hand, I could plot the weight of a number of cars against their length. If I found a correlation between length and weight, I would show that the generalisation I was making had some truth.
I agree.

What is different about all your examples though is that they have no uncontrolled variables at all.
It all depends on what the assertion is that is being made, and whether one's trying to prove it or disprove it. Disproving things is much easier.
True enough. I have already said to GB four threads back that it is possible to do an experiment whose conclusion could be generally true, using the same sort of methods that are used in drug trials, since living things have myriad uncontrolled variables. This involves testing a statistically significant sample and then applying analysis to the data.

It can easily be done and be turned into a paper especially if you can find a research methods boffin to help set it up.

Thanks for the post
 
My opinion on comparing jpgs is well known -- not useful in terms of comparing tech, but the best way to compare if you're a jpg shooter.
It is what was available, and I am a RAW shooter.
Moving right along, the D5000 ISO 1600 crop definitely looks cleaner than the D3s ISO 6400 crop to me, as one would expect.
The hair is clumping a lot more in the D7000 shot ... which of course comes from the NR needed to make it look so clean ...

Yes, the D7000 has a magnificent sensor ... but the D3s does not have to work nearly as hard to retain details. That tells me that it has more than the expected advantage.
Furthermore, the ISO 6400 D700 crop does look slightly cleaner and more detailed than the ISO 3200 D5000 crop, again, as expected, since the D700 would gather 0.2 stops more light for the same f-ratio and shutter speed, and a lens will resolve better on a FF DSLR than a crop DSLR (near the center, anyway, for the same f-ratio).
The chroma noise alone ruins the D5000 image, never mind all the clumping. The difference is a lot more than the tiny amount of extra light you are talking about would do ...
Thus, by taking advantage of the higher resolution, the D700 could extend its lead over the D5000 by applying NR and normalizing the detail.
Higher resolution? Oh ... you are now using resolution of the lens as opposed to pixel count and thus density.

So ... you have now given us yet another reason why larger pixels are superior.

Did you intend to add fuel to the fire? Or was that an accident ... :-)
But, again, if we're talking jpgs, then I really make no predictions, because the variations in the jpg engines could be huge.
Agreed. Except that, if we do nothing else to the image, we can at least make a reasonable assumption that Nikon would do their utmost to produce the best output they could with each given sensor.

Again ... I still find that your minutiae-laden approach is not explaining these differences convincingly ... at all. Of course, the fact that the GoF are totally convinced that all these details add up to something and that everything that does not match the theory (and not much has the last few days) can be explained away makes these discussions almost entirely circular.

At least it amuses us all ...
  1. FF puts over a stop more light on the sensor for any given f-ratio and shutter speed.
Quite true ... but it has a theoretical 1.33 stop advantage according to y'all ... and I showed that the output advantage between the D7000 and D3s is closer to two stops ... so nothing seems to perfectly fit.
  2. FF resolves more detail due to the larger sensor unless the crop camera has a lens that resolves better in proportion to the sensor diagonals
Because it has larger pixels :-) Lenses like larger pixels, and so do we ...
There is a third, but minor advantage for FF as well: sensors are often (but not always) a bit more efficient at higher ISOs. So, with the FF sensor using a stop higher ISO than the crop sensor, it may close the gap if it is less efficient, or expand the gap if it is more efficient.
There ya go .. it may and it might ... but we've already had the equivocation argument ...

--
I am but one opinion in a sea of opinions ... right?
http://kimletkeman.blogspot.com
http://letkeman.net/Photos
 
OK, I may have misunderstood the nuances contained in these posts...
;)
...I got the distinct impression that you were minimising the effect of camera variables. If you were not, this simply strengthens my main argument about the variables in the experiment making it unscientific.
I was simply saying that all else is never equal. Depending on the situation, some variables matter, others don't. For example, in a photo with few shadows, read noise is not an important factor. In a photo with a short exposure, thermal noise is not a factor.

You keep saying because not all variables are accounted for that it's "unscientific". That's like saying not accounting for Jupiter makes calculating the tides "unscientific". A big part of science is knowing when which variables matter and when to account for them.
Yes or no -- did I not link and quote this for you long ago:

http://www.josephjamesphotography.com/equivalence/#equivalence

/ So, while equivalent images on different formats will usually have the most similar visual properties, they will not be identical, as other visual elements, such as noise, detail, flare, moiré, distortion, bokeh, etc., will not necessarily be the same, and sometimes, radically different.
So there are lots of variables that can affect the two images. Without controlling these the experiment won’t stand up to scientific scrutiny.
Indeed. But not all variables have to be accounted for, depending on the experiment. As I said, no one precedes a text on the times of tides by saying why they didn't include Jupiter in the calculations. At some point, you just have to assume that the audience reading science at a certain level understands certain things.

The proper response, for those who aren't at the "proper" level of understanding, is to ask questions, not to say that the theory, or experiment, is "unscientific" because the impossibility of accounting for every single variable was not discussed ahead of time.
http://www.josephjamesphotography.com/equivalence/#shot

There are other sources of noise, such as thermal noise, which plays a central role in long exposures, PRNU (Pixel Response Non-Uniformity) noise, which plays an important role in the highlights of the image, as well as other sources of noise.
So there are lots of variables that can affect the two images. Without controlling these the experiment won’t stand up to scientific scrutiny.
Of course there are "lots of variables that can affect the two images"! What if one camera was using IS and the other wasn't -- that could throw it off. It's assumed that they are either both using IS, or both aren't. It's assumed that if they are using IS, that the IS mechanisms have the same effects. It's assumed that the photos are in focus. It's assumed that neither photo suffers from motion blur and/or camera shake. I mean, I can go on and on and on.

Seriously, one can't go through every single little detail -- that's madness!
Take your Poisson distribution of light hitting the sensor example. Somewhere in the literature is an experiment that had all its variables controlled, that proved that that was how light fell on an object. Either an experiment meets these standards, or its not a scientific experiment. Your uncontrolled variables open the experiment to doubt, in which case people have to choose whether or not to "believe" you.
Look, Kim presented an experiment that violated a condition I had made ridiculously clear: equally efficient sensors. However, that didn't make his experiment useless. It was instead an opportunity to show (and to a 'T', I might add), exactly how the differences in sensor efficiencies translated into the visual properties of the photos.
It is your fault in that you are not paying attention to much of what is said, and misinterpreting much, if not all, of the rest.
What you do is reference me back to your theory or your prior explanations of your theory. This is not about your theory, this is about your use of uncontrolled variables in your experiment and the implications that follow from that.
Look, I have said over and over and over "for equally efficient sensors". No such things exist. But there are sensors that are close in efficiency, and tests from those sensors bear out what I have said. And, for the sensors that do deviate significantly in efficiency, the differences in the photos are entirely predictable, based on what those differences are, from the theory I have presented.
My observation is that with NR the high PD images are far better but NR is a variable in your experiment. Without NR the low PD images seem to be better but you explain this difference away by citing differences (variables) between the cameras.
That's the subjective part. I find the high PD images "superior", NR or not. The whole role of NR is that, for those who prefer clean over detailed, it is an option. But the preference for clean over detailed is subjective.
The entire basis of your conclusions and explanations around IQ in the images is based on uncontrolled variables.
No, it is not. It is based on equally efficient sensors. How many times have I said that? And when sensors are not equally efficient, the differences are predictable, as Kim's tests demonstrated.
Goodnight, I am off to sleep now.
Enjoy your nap. ;)
 
My opinion on comparing jpgs is well known -- not useful in terms of comparing tech, but the best way to compare if you're a jpg shooter.
It is what was available, and I am a RAW shooter.
Moving right along, the D5000 ISO 1600 crop definitely looks cleaner than the D3s ISO 6400 crop to me, as one would expect.
The hair is clumping a lot more in the D7000 shot ... which of course comes from the NR needed to make it look so clean ...
Whereas I say it's because the lens resolves better on the FF sensor (since we are talking about photos made from the whole of the sensor, not crops of equal area).
Yes, the D7000 has a magnificent sensor ... but the D3s does not have to work nearly as hard to retain details. That tells me that it has more than the expected advantage.
Interestingly, the D3s has a more "magnificent sensor" at high ISOs -- the D7000 is just a lot better at base ISO.

But, yes, the larger sensor does have a detail advantage because the lenses for larger sensors almost always resolve better than the lenses for smaller sensors on their respective systems:

http://www.josephjamesphotography.com/equivalence/#lensvssensor
Furthermore, the ISO 6400 D700 crop does look slightly cleaner and more detailed than the ISO 3200 D5000 crop, again, as expected, since the D700 would gather 0.2 stops more light for the same f-ratio and shutter speed, and a lens will resolve better on a FF DSLR than a crop DSLR (near the center, anyway, for the same f-ratio).
The chroma noise alone ruins the D5000 image, never mind all the clumping. The difference is a lot more than the tiny amount of extra light you are talking about would do ...
But it is not necessarily due to the smaller pixels, either. Just because A doesn't explain the phenomenon doesn't mean that B is necessarily correct.
Thus, by taking advantage of the higher resolution, the D700 could extend its lead over the D5000 by applying NR and normalizing the detail.
Higher resolution? Oh ... you are now using resolution of the lens as opposed to pixel count and thus density.
I use both:

http://www.josephjamesphotography.com/equivalence/#lensvssensor

http://www.josephjamesphotography.com/equivalence/#megapixels
So ... you have now given us yet another reason why larger pixels are superior.
I've given no such reason, and have no idea why you said that.
But, again, if we're talking jpgs, then I really make no predictions, because the variations in the jpg engines could be huge.
Agreed. Except that, if we do nothing else to the image, we can at least make a reasonable assumption that Nikon would do their utmost to produce the best output they could with each given sensor.
Oh no -- I would not make that assumption at all.
Again ... I still find that your minutiae-laden approach is not explaining these differences convincingly ... at all. Of course, the fact that the GoF are totally convinced that all these details add up to something and that everything that does not match the theory (and not much has the last few days) can be explained away makes these discussions almost entirely circular.
So, for the record, QE, read noise, and total light are "minutiae"? If not, then what is this "minutiae" you speak of?
At least it amuses us all ...
Define "us".
  2. FF resolves more detail due to the larger sensor unless the crop camera has a lens that resolves better in proportion to the sensor diagonals
Because it has larger pixels :-) Lenses like larger pixels, and so do we ...
Bzzt. Take a read:

http://www.josephjamesphotography.com/equivalence/#lensvssensor

Here's a hint for you: the 5D2 outresolves the 5D, yet the 5D2 has smaller pixels.
There is a third, but minor advantage for FF as well: sensors are often (but not always) a bit more efficient at higher ISOs. So, with the FF sensor using a stop higher ISO than the crop sensor, it may close the gap if it is less efficient, or expand the gap if it is more efficient.
There ya go .. it may and it might ... but we've already had the equivocation argument ...
And we have the data, too:

http://www.sensorgen.info/

Ta.
 
For example, if we use pixels half the size (quadruple the pixel count), then the shutter speed will have to be twice as fast to have the same amount of motion blur. A shutter speed twice as fast puts half the light on the sensor, which results in 41% more apparent photon noise. Couple this with the fact that each pixel gets 1/4 the light, anyway, well, I think you can see that this extra IQ is already becoming difficult to get.
I was about to agree with the overall message of the above statement but several things have bothered me.
For example, if we use pixels half the size (quadruple the pixel count)
If you halve the pixel size, one would expect the count to double, not quadruple. Now if you meant halving the pixel dimensions, I would concur.
Yes -- by "half the size" I mean "half the linear dimensions".
then the shutter speed will have to be twice as fast to have the same amount of motion blur.
This statement seems troubling to me. It is my understanding that motion blur is directly impacted by shutter speed so a shutter speed twice as fast should not result in the same amount of motion blur but half. In order to maintain motion blur, it is ISO that must be raised. This is the real reason photon noise is increased.
The shutter speed needs to be twice as fast for pixels half the size to have the same motion blur at the pixel level. In other words, in order to take advantage of the additional resolution for subjects in motion, you need to use a faster shutter, in proportion to the pixel size.
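A sketch of the pixel-level blur arithmetic, with a hypothetical subject speed:

```python
def blur_in_pixels(speed_px_per_s, exposure_s):
    # Motion blur at the pixel level: pixels crossed while the shutter is open.
    return speed_px_per_s * exposure_s

print(blur_in_pixels(4000, 1/250))  # coarse grid:              16 px of blur
print(blur_in_pixels(8000, 1/250))  # pixels half the pitch:    32 px of blur
print(blur_in_pixels(8000, 1/500))  # double the shutter speed: back to 16 px
```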

As for ISO, that has little to do with anything. Setting the ISO is simply an indirect way to increase the shutter speed and/or close the aperture down. Many sensors are more efficient at higher ISOs, which is why, for example, f/2.8 1/100 ISO 100 pushed four stops is, on many sensors, noisier than f/2.8 1/100 ISO 1600. Read all about it:

http://www.josephjamesphotography.com/equivalence/#iso
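A sketch of why the pushed low-ISO shot can be noisier, assuming (as on many CMOS sensors) that the input-referred read noise falls as ISO rises; the figures are hypothetical:

```python
import math

def shadow_snr(signal_e, read_noise_e):
    # Shot noise variance equals the signal (Poisson); read noise adds in quadrature.
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

signal = 50   # photoelectrons in a deep shadow -- the same light either way

print(f"ISO 100 pushed 4 stops: SNR = {shadow_snr(signal, 12):.1f}")  # ~3.6, 12 e- read noise
print(f"ISO 1600 in camera:     SNR = {shadow_snr(signal, 3):.1f}")   # ~6.5, 3 e- read noise
```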
Couple this with the fact that each pixel gets 1/4 the light, anyway,
You wrote that shutter speed should be doubled, yet each pixel gets 1/4 the light. Now you got me doubly confused.
Doubling the shutter speed results in half the light falling on the pixel. On top of that, the pixel being half the size (one fourth the area) gets 1/4 of that light.

I hope this clears things up satisfactorily.
 
I think the problem is that you're asking for general rules and principles, whereas each demonstration and experiment has to stand on its own.
I am afraid not; in experiments, the reason variables have to be controlled is so that the experiments are replicable.
The scientific method involves giving as much information as is necessary to allow an experiment to be replicated for validation - but just enough. There are many variables, and only the ones that impact the experiment in question are given. For instance, in a chemistry experiment, in some cases it will say heat the mixture to 100C, sometimes heat it to 100C and maintain that temperature for 24 hours, and sometimes raise the temperature from 20C to 100C at a rate of 0.05C per second. If the only thing which impacts the experiment is the absolute temperature, that's all the information that will be given.
The best that can be said about the results of one of these experiments (assuming that NR is not used, because that is another story) is that for a given pair of cameras X and Y, X had better IQ than Y. Without controlling all variables, you can't generalise the result to include all pairs of cameras. But generalised conclusions are being implied by the gang of 4 all the time.
I don't believe so; I think that you need to look back and see what's actually being said. In fact, the gang of 4 rarely needs to generalise, because we are arguing against a generalisation - that small pixels cause poorer IQ - and we just need one counter-example to disprove that, and we've given many.
The confounding factors are different; it depends what the demo is trying to demonstrate. Some variables can change wildly without affecting much, some are very critical. In all experiment design, one has to be clear-minded about precisely what one is trying to show, and I don't think Kim has done this, which is why his demonstrations are open to dispute. If you compare my G10/G11 demo with his D700/D5000 demonstration, you will see that while I was very careful to ensure that both cameras had exactly the same processing, with no noise processing and the same tone curves, he was using ex-camera jpegs, where the application of noise smoothing may be very different, tone curves may be different, etc. All these things affect noise and sharpness, which was what his demo was about comparing.
Then you have proved that for the G10/G11, camera "X" has higher IQ than Y; nothing else can be concluded or implied about other pairs of cameras, or about lower vs higher PD cameras.
Exactly, and I have never claimed anything else. In fact my proposition was finer than that. The G11 does give better low light images (though not as good bright light ones) - I was demonstrating that that betterness does not manifest itself according to the mechanism proposed by the big-is-best brigade. The proposition on both sides was specifically about that camera (although Kim had cited it as the canonical example that big is better, so I'm expecting you to take up the generalisation issue with him).
Let's try another analogy.

Suppose we have an assertion that large cars are generally heavier than smaller ones. I think that is likely true.

I could mount an experiment which showed a particular small car was heavier than a big one, because there are examples, but I would not have shown either that the trend isn't there or that all small cars are heavier than big ones.
We are in 100% agreement. This is a new experience for me.
Good, we shouldn't be making generalised propositions.
Alternatively, I could weigh a small car full of passengers and baggage and compare it with an empty large car. Once again, I haven't shown anything and have introduced confounding variables.
Again I agree.
On the other hand, I could plot the weight of a number of cars against their length. If I found a correlation between length and weight, I would show that the generalisation I was making had some truth.
I agree.

What is different about all your examples though is that they have no uncontrolled variables at all.
There are plenty: how many doors, level of trim, seating fabric, etc, etc. The point is that some of these variables will impact the experiment; others, though variables and uncontrolled, will have negligible effect.
It all depends on what the assertion is that is being made, and whether one's trying to prove it or disprove it. Disproving things is much easier.
True enough. I have already said to GB four threads back that it is possible to do an experiment whose conclusion could be generally true, using the same sort of methods that are used in drug trials, since living things have myriad uncontrolled variables. This involves testing a statistically significant sample and then applying analysis to the data.
But those experiments are phenomenally expensive and time consuming - do you think anyone's going to set that up for a photo forum?
It can easily be done and be turned into a paper especially if you can find a research methods boffin to help set it up.
Well, we have more than one 'research methods boffin' already within the gang, and people with a lot of peer-reviewed research publications. The problem with this one, as I said, is that no-one is going to be interested in a huge, expensive experiment which proves a few already well known physical principles; it might be a research method, but it's not research. The hysterical reaction of a few photo forum diehards to that well understood physics might well make an interesting research paper for sociologists.
Thanks for the post
You're welcome.

--
Bob
 
Could you please comment on my query - 8 lines up.
Just did, like to take my challenge?
I think we have crossed over. My above query is about the upcoming F550. Care to speculate?

Nick
Done it, care to take my challenge? Someone has to go first. I know it's hard, that's the point.
--
First of all I am the world's worst candidate because I examine pics this closely infrequently. I'd say it's impossible to know which is which PD. All I will say is pic no6 seemed to be the best to my eyes, the rest indistinguishable. It's getting late, so off to bed.
Thanks Nick. That is useful data. Despite the apparently glaring differences in IQ that must be caused by the very great differences in pixel size, you find it hard to tell the difference. Data point 1. Anyway, at some stage, when a few more people have had a go, my beautiful assistant will reveal which is which, when she's finished putting on her sparkly leotard and fishnets.
--
Bob
 
