Pixel Density (Thread 5)

Anyhow, since small pixels are clearly inferior, it should be easy to order these by pixel size. Off you go.
OK, I'll play. Looking mostly at the shadow noise I think cam7 is clearly the best, cam5 probably the worst and the rest are pretty close (sure hope that cam7 isn't the D3s ;-)).
I've lost track of which is which. I emailed a list to somebody before I posted, so that I couldn't be accused of cooking the results. At some stage that person will jump out and reveal all, but hopefully not before a few more people have had a go.
--
Bob
 
Anyway, at some stage, when a few more people have had a go, my beautiful assistant will reveal which is which, when she's finished putting on her sparkly leotard and fishnets.
Will you present sample photos as evidence...er, in the interest of completeness?

Nick
 
Anyway, at some stage, when a few more people have had a go, my beautiful assistant will reveal which is which, when she's finished putting on her sparkly leotard and fishnets.
Will you present sample photos as evidence...er, in the interest of completeness?
I would, but the problem is the dressing room's so poorly lit, that I'd need a 1.5MP FF camera to cope, and I don't have one to hand.
--
Bob
 
I am guessing that the wavelength of light / photon size (depending on your persuasion) might play a crucial and limiting role here [at 100MP, the pixel area on a FF sensor (24x36 mm) would be 0.00000864 mm(sq), i.e. a pitch of roughly 2.9 microns]
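For scale, here's a quick back-of-the-envelope check of those numbers; the f/8 aperture and 550 nm (green) wavelength are assumed purely for illustration:

```python
import math

# 100 MP on a full-frame (24 x 36 mm) sensor: area per pixel and pixel pitch.
sensor_area_um2 = 24_000 * 36_000            # sensor area in square microns
pitch_um = math.sqrt(sensor_area_um2 / 100e6)

# Airy disk diameter d = 2.44 * wavelength * f-number (550 nm, f/8 assumed).
airy_um = 2.44 * 0.55 * 8

print(f"pixel pitch: {pitch_um:.2f} um")           # ~2.94 um
print(f"Airy diameter at f/8: {airy_um:.1f} um")   # ~10.7 um
```

So at 100 MP the pitch is still several wavelengths across, but already well inside the Airy disk at moderate apertures; diffraction bites long before the wavelength itself becomes the limit.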
In terms of how much additional IQ higher PDs deliver, that's a whole other game. Long before we get to the HUP (Heisenberg Uncertainty Principle) for pixels smaller than the wavelength of light, we have to consider diffraction, motion blur, camera shake, and lens sharpness.

For example, if we use pixels half the size (quadruple the pixel count), then the shutter speed will have to be twice as fast to have the same amount of motion blur. A shutter speed twice as fast puts half the light on the sensor, which results in 41% more apparent photon noise. Couple this with the fact that each pixel gets 1/4 the light, anyway, well, I think you can see that this extra IQ is already becoming difficult to get.
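That 41% figure falls straight out of Poisson shot-noise statistics; a minimal sketch (the photon count is illustrative):

```python
import math

# Shot noise is Poisson: noise = sqrt(S), so relative noise is 1 / sqrt(S).
full = 10_000              # photons at the original shutter speed (illustrative)
half = full / 2            # twice the shutter speed -> half the light

increase = (1 / math.sqrt(half)) / (1 / math.sqrt(full)) - 1
print(f"relative photon noise increase: {increase:.0%}")  # 41%
```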

But, there's a bright side. A higher PD means a less aggressive AA filter, which means that photos would require less sharpening. And even binning (as opposed to NR) would result in a more accurate representation, since each binned pixel would be made from more than one colour, rather than the one colour per pixel of a Bayer CFA.

So, obviously, halving the pixel size doesn't double the IQ. And halving it again will improve IQ even less. At what point does it become silly? We're well past that point for people who just display pics on the web, as even a 1200 x 900 pic requires only 1 MP.
I've just replaced my old CRT TV with a 37" 16:9 plasma screen (Panasonic X20). Its resolution is only 1024x720 (0.74mp, and the pixels aren't square!), and it's really quite amazing how good it looks with a good 720p/1080i signal, even at 3 feet distance. I'm pretty sure that a 37" (diagonal) 0.74mp print wouldn't look nearly as good at that distance, but not quite sure why that is. Probably something to do with the difference in contrast/DR between a print and a monitor, I guess.
 
bobn2 wrote:
The scientific method involves giving as much information as is necessary to allow an experiment to be replicated for validation - but just enough. There are many variables, and only the ones that impact the experiment in question are given. For instance, in a chemistry experiment, in some cases it will say heat the mixture to 100C, sometimes heat it to 100C and maintain that temperature for 24 hours, and sometimes raise the temperature from 20C to 100C at a rate of 0.05C per second. If the only thing which impacts the experiment is the absolute temperature, that's all the information that will be given.
I would call those controlled variables. In this instance, it's not as though you wait until the experiment is over and then say "oops, it never worked because we don't know the rate at which we heated the mixture to 100C, but if we had heated at 0.05C per second, we would have gotten the right answer." This is not about whether experiments have variables; this is about whether the variables that affect the result are being controlled or not.
I don't believe so; I think that you need to look back and see what's actually being said. In fact, the gang of 4 rarely needs to generalise, because we are arguing against a generalisation, that small pixels cause poorer IQ; we just need one counter-example to disprove that, and we've given many.
That's not really how the gang comes across. People here informally share their own empirical evidence of the same few Fujis they all use. If these people have found empirically that increased PD has reduced IQ, that's all they are sharing. They are not announcing a major experimental breakthrough.
The confounding factors are different, it depends what the demo is trying to demonstrate. Some variables can change wildly without affecting much, some are very critical. In all experiment design, one has to be clear minded about precisely what you are trying to show, and I don't think Kim has done this, which is why his demonstrations are open for dispute.
What I refer to in Kim's experiment is a series of posts from friends of the gang of 4 that listed a selection of variables that could affect IQ. This flagged the issue for me in a general way. The best one was possible differences in the substrata of the sensors.
Suppose we have an assertion that large cars are generally heavier than smaller ones. I think that is likely true.

I could mount an experiment which showed a particular small car was heavier than a big one, because such examples exist, but I would not have shown either that the trend isn't there or that all small cars are heavier than big ones.
What is different about all your examples though is that they have no uncontrolled variables at all.
There are plenty: how many doors, level of trim, seating fabric, etc, etc. The point is that some of these variables will impact the experiment; others, though variable and uncontrolled, will have a negligible effect.
Surely the doors, trim, fabric etc all have weight and are part of the car and that is what you are measuring, or have I misunderstood you? Are you just trying to measure the monocoque?
It all depends on what assertion is being made, and whether one's trying to prove it or disprove it. Disproving things is much easier.
True enough. I have already said to GB, 4 threads back, that it is possible to do an experiment that could be generally true, using the same sort of methods that are used in drug trials, since living things have myriad uncontrolled variables. This involves testing a statistically significant sample and then applying statistical analysis to the data.
But those experiments are phenomenally expensive and time consuming - do you think anyone's going to set that up for a photo forum?
That is not the issue; what that sort of experiment does is allow you to make generalised statements rather than specific ones that relate only to a specific pair of cameras with a specific version of firmware.

The JJP document is a general theory and derivative that applies universally in all cases. Only a general experiment can provide a general proof for a general theory.
Well, we have more than one 'research methods boffin' already within the gang, and people with a lot of peer-reviewed research publications.
Then they can explain to GB why the NR is an uncontrolled variable that can lead to contradictory results and conclusions for the same data set. They can also explain that if the variables between cameras are insignificant to the result, then they can't be used to explain away differences in the result whereas if the variables are significant to the result, then the result can't fairly be compared in the first place.
The hysterical reaction of a few photo forum diehards to that well understood physics might well make an interesting research paper for sociologists.
If this is an oblique reference to me, for having the temerity to point out that the experiment has not been peer reviewed and that variables are not being controlled, which means that wrong conclusions can be reached, all I can say is that this is what can happen if you submit an experiment to a public forum rather than to a respected scientific journal.

You want us to accord your experiment the respect and trust that it deserves as a scientific proof, when it has not been proven, because it has not been accepted as scientific where it matters. You have been playing fast and loose with uncontrolled variables, even basing conclusions on them, and no one has noticed until now. Probably because this is a public forum of amateurs who use cheap Fuji compact cameras, most of which don't even have RAW.

I am sorry if what I have said comes across as hysterical or illogical. I will make an effort to use less emotive language from now on.
 
I would call those controlled variables.
exactly, and to know that, you have to consider the likely impact of different variables on the experiment on a case-by-case basis, and if they turn out potentially to be confounding variables, to control them. But isn't that exactly what the detailed discussion you've been having with Joe is about? You seem to be making a blanket charge that our demonstrations are subject to uncontrolled confounding variables. What you haven't done is enter into a discussion of exactly which variable is confounding, and why, in the context of that experiment. At the same time, you suggest that Kim's analysis of his experiment has been unfairly dismissed on the basis of confounding variables, when the nature of the variables and how they impacted the result was very precisely presented.
In fact, the gang of 4 rarely needs to generalise, because we are arguing against a generalisation, that small pixels cause poorer IQ; we just need one counter-example to disprove that, and we've given many.
That's not really how the gang comes across.
I think you're generalising from your own perception. I've had enough communications since this discussion started to be reasonably satisfied that the 'gang's' intervention has been seen as positive by some forum regulars.
People here informally share their own empirical evidence of the same few Fujis they all use. If these people have found empirically that increased PD has reduced IQ, that's all they are sharing. They are not announcing a major experimental breakthrough.
I would differ from that view. People are presenting this 'empirical' knowledge as absolute and generalised fact, and it isn't. There were days when, if the 'empirical evidence' 'proved' the ugly old woman to be a witch, she got burned. Thankfully, in most places we've moved on from that attitude. The stakes might not be as high when it comes to cameras, but the faulty logic and confused motivation is the same.
What I refer to in Kim's experiment is a series of posts from friends of the gang of 4 that listed a selection of variables that could affect IQ. This flagged the issue for me in a general way. The best one was possible differences in the substrata of the sensors.
Not sure what you mean. The performance parameters of sensors are quite well understood; all that was happening was that well-known and grounded knowledge was being deployed to show why the conclusion being drawn was false.
Surely the doors, trim, fabric etc all have weight and are part of the car and that is what you are measuring, or have I misunderstood you? Are you just trying to measure the monocoque?
I'm not trying to do anything with cars at all. I'm trying to illustrate that when one wants to discuss an experiment, you have to be clear minded about what it is you're trying to prove or disprove before you even begin to discuss what the significant or insignificant variables are. 'We' really need to keep our eye on the ball, and the ball isn't about testing cars, it's about defining propositions before you do experiments to prove or disprove them.
That is not the issue; what that sort of experiment does is allow you to make generalised statements rather than specific ones that relate only to a specific pair of cameras with a specific version of firmware.
It depends on what the generalised statement is, as I said earlier.
The JJP document is a general theory and derivative that applies universally in all cases. Only a general experiment can provide a general proof for a general theory.
There is no such thing as a general experiment; it is an impossible thing to mount. The way science works is that a body of theory is developed which fits together and provides a description of a phenomenon. Then experiments are mounted to show that individual predictions of the theory are borne out. You will find no general proof of relativity or quantum mechanics.
Then they can explain to GB why the NR is an uncontrolled variable that can lead to contradictory results and conclusions for the same data set.
They can't, because it isn't. Which result that is being experimentally tested will NR invalidate, and why?
They can also explain that if the variables between cameras are insignificant to the result,
No-one has said the variables between cameras are insignificant to the result. In fact, Joe has told you several times that sensor efficiency is very significant to observed results.
then they can't be used to explain away differences in the result whereas if the variables are significant to the result, then the result can't fairly be compared in the first place.
You're arguing yourself round in circles. Variables are not universally significant or insignificant; whether they are significant depends on the proposition being put forward. This discussion would fare better if, rather than very vague and ill-defined comments about the significance of variables, you concentrated on specifics: which proposition is under test, which is the confounding variable, and why does it confound the test?
If this is an oblique reference to me
No.. I don't think you've been hysterical. I think you've been a bit woolly-minded, and naive about the genre in which we're operating, but not hysterical.
You have been playing fast and loose with uncontrolled variables, even basing conclusions on them and no has one noticed until now.
That is a false accusation. As I said, be precise and prove your case before accusing people of playing fast and loose. Which uncontrolled variables? How would they affect the claimed analysis? Put up or shut up.
--
Bob
 
I think it has already been shown that higher PD results in higher resolution images given that the same lens is used. See the dxomark and photozone lens tests. The same lens shows more resolution when combined with a higher-MP (and same-sized) sensor in every single case.

http://forums.dpreview.com/forums/readflat.asp?forum=1036&message=37468394
"1/C = sqrt(1/L^2 + 1/S^2)

This uncertainty does not change the general conclusions reached by yourself, Anders and others.

There is no sharp cut-off when the sensor resolution exceeds that of the lens. Image resolution continues to improve as sensor resolution increases.

With the root sum of squares formula, we achieve 70% of the theoretical sensor resolution when the lens and sensor resolutions are equal, 90% when the lens has twice the sensor's resolution, and diminishing returns above 95% when the lens has three times the sensor resolution.

As technology improves it will be more cost effective to improve sensor performance than lens performance."
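The quoted percentages can be reproduced directly from that formula; a quick check (resolution units are arbitrary):

```python
import math

def system_resolution(lens, sensor):
    """Root-sum-of-squares combination: 1/C = sqrt(1/L^2 + 1/S^2)."""
    return 1.0 / math.sqrt(1.0 / lens**2 + 1.0 / sensor**2)

sensor = 100.0
for ratio in (1, 2, 3):    # lens resolution as a multiple of sensor resolution
    c = system_resolution(ratio * sensor, sensor)
    print(f"lens = {ratio}x sensor -> {c / sensor:.1%} of sensor resolution")
# -> 70.7%, 89.4%, 94.9%: the 70 / 90 / 95 figures quoted above.
```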
http://forums.dpreview.com/forums/read.asp?forum=1036&message=37448004

The D700's resolution advantage over the D5000 is due to sensor size. As the D5000's image has to be magnified 1.5 times more (in linear dimension) than the D700's image to print at the same size, the D5000 will always show 1.5 times less resolution given that the same lens is used (and as the D5000 has no more MP than the D700). Imaging Resource uses the same Sigma lens on DX and FX cameras for their studio shots.
Thus, by taking advantage of the higher resolution, the D700 could extend its lead over the D5000 by applying NR and normalizing the detail.
Higher resolution? Oh ... you are now using resolution of the lens as opposed to pixel count and thus density.

So ... you have now given us yet another reason why larger pixels are superior.

Did you intend to add fuel to the fire? Or was that an accident ... :-)
 
So how do you explain it then?

Indeed, I do accept their results without dispute. I also explained, I thought clearly, where that difference comes from - the G11 has a stop better read noise, and since read noise is the lower end of the DR equation, that gives a stop better DR, just as DxO has found and also just as my example illustrated, hardly any difference in the highlights, G11 winning in the shadows.
....
Hi Bob,

Although I came a little late to the party, I started doing some per area calculations here, for equal sensor areas. What it boils down to, for read noise dominated situations in the low light areas of a high ISO picture, is the ratio of the QEs divided by the ratio of per area read noises for any 2 cameras we're comparing. So for example the D700 has a .38/.29 = 1.3 QE advantage, along with a 9 e/6 e = 1.5 advantage due to scaled up read noise of D300 compared to D700; combined this gives about a factor of 2 in SNR.
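That combined factor can be checked mechanically (the QE and read-noise figures are the ones quoted above):

```python
# Read-noise-dominated, equal-area comparison: the SNR advantage is the QE
# ratio times the (per-area) read-noise ratio.
qe_d700, qe_d300 = 0.38, 0.29
rn_d300, rn_d700 = 9.0, 6.0        # electrons, scaled to equal area

advantage = (qe_d700 / qe_d300) * (rn_d300 / rn_d700)
print(f"combined D700 SNR advantage: ~{advantage:.2f}x")  # ~1.97x, about 2
```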

Other examples were shown, but going forward I think QE will do OK for the smaller pixels; Aptina predicts doing OK at 1.7 micron and down to 1 micron using their innovations. So as long as read noise goes down linearly with pixel size, the smaller pixels should do OK.

Chris
 
So how do you explain it then?

Indeed, I do accept their results without dispute. I also explained, I thought clearly, where that difference comes from - the G11 has a stop better read noise, and since read noise is the lower end of the DR equation, that gives a stop better DR, just as DxO has found and also just as my example illustrated, hardly any difference in the highlights, G11 winning in the shadows.
....
Hi Bob,

Although I came a little late to the party, I started doing some per area calculations here, for equal sensor areas. What it boils down to, for read noise dominated situations in the low light areas of a high ISO picture, is the ratio of the QEs divided by the ratio of per area read noises for any 2 cameras we're comparing. So for example the D700 has a .38/.29 = 1.3 QE advantage, along with a 9 e/6 e = 1.5 advantage due to scaled up read noise of D300 compared to D700; combined this gives about a factor of 2 in SNR.

Other examples were shown, but going forward I think QE will do OK for the smaller pixels; Aptina predicts doing OK at 1.7 micron and down to 1 micron using their innovations. So as long as read noise goes down linearly with pixel size, the smaller pixels should do OK.

Chris
Hi Chris,

My own thinking is that the sticking point is likely to be with read noise, not QE. I and several others had a long discussion picking Eric Fossum's brains on the matter (aren't we lucky to have people of that stature posting on these forums?) and he was explaining that at these very small geometries the normal transistor noise models begin to give out, and things like RTS noise become significant. Also, as I pointed out elsewhere, when a pixel has to be 5 microns deep if it's going to collect red light, we need some fairly sophisticated toppings to stop severe red crosstalk. All these things, though, are problems, not limits, and the market demand is producing solutions. For the read noise problem, the solution to the transistor noise issue is to get to the pixel scale where individual photon events are being counted; then the sensor is in a domain of effectively zero read noise. Sometime soon there is a paradigm shift to come.

--
Bob
 

My own thinking is that the sticking point is likely to be with read noise, not QE. I and several others had a long discussion picking Eric Fossum's brains on the matter (aren't we lucky to have people of that stature posting on these forums?)
Yes, it is quite an honor.
and he was explaining that at these very small geometries the normal transistor noise models begin to give out, and things like RTS noise become significant.
This is a question that I've had for some time now, and I hope I ask it OK: as technology improvements bring us to really low read noise levels, are there scale-dependent (transistor sizing, etc.) as well as scale-independent read noise components?
Also, as I pointed out elsewhere, when a pixel has to be 5 microns deep if it's going to collect red light, we need some fairly sophisticated toppings to stop severe red crosstalk. All these things, though, are problems, not limits, and the market demand is producing solutions.
Yes, electrical and optical crosstalk are both potential issues (please see the Lumerical website for cool wave simulations). But it's nice to see how brilliant innovation can help to overcome some of these things.
For the read noise problem, the solution to the transistor noise issue is to get to the pixel scale where individual photon events are being counted; then the sensor is in a domain of effectively zero read noise. Sometime soon there is a paradigm shift to come.
That would be cool!

Chris
 
Doubling the shutter speed results in half the light falling on the pixel. On top of that, the pixel, being half the size (one fourth the area), gets 1/4 of that light.

I hope this clears things up satisfactorily.
Yes, this clears things up.

You have given a reason why low PD is better - to reduce motion blur and increase light. That's all I care for.

Now you also argue that higher PD has an upside when NR is applied or higher detail (albeit with noise) however, I personally do not care for those things. I almost never have use for resolutions above 6-8 mp.

Now if you believe binning will work better at higher resolutions, that gives us hope for the F550 EXR. I am in fact a fan of binning/fusion so perhaps the camera options will allow us to choose from the best of both worlds.
 
This is a question that I've had for some time now, and I hope I ask it OK: as technology improvements bring us to really low read noise levels, are there scale-dependent (transistor sizing, etc.) as well as scale-independent read noise components?
The point about read noise scaling is the mechanism through which it occurs. In terms of relative strength against other noise sources, it needs to be measured in electron equivalents, which means it gets transformed through the input capacitance of the reading circuit, which is actually the gate of a MOSFET and a little attached floating diffusion. Now, any read noise source downstream will effectively be transformed through that capacitance, so as long as the downstream sources are independent of pixel size, the read noise will scale down with the pixel (if the pixel is strictly scaled).

For things like gain amps and ADCs, this seems OK: since they obviously don't have anything to do with pixel size, why would they change? And if they can be improved as time goes on, that's a bonus.

Now, so far as the in-pixel sources are concerned, the major noise source is the read transistor itself, and the usual FET noise model says that FET noise is shape-dependent, not size-dependent, so as long as that FET can be scaled and keep its shape, the in-pixel noise should be constant and the scaling principle still works.

What Eric is saying is that the FET noise model ignores noise sources which become significant when the FET becomes very small. In effect the charge being transferred is so small that the movement of individual charge carriers becomes significant, and this can be affected by nanogeometry, as opposed to the simplistic shape factors that the normal model uses. Quantifying this is a matter of research, and a bit outside my areas of activity, but I think that there are probably some interesting papers in Eric's workshop series.
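A toy illustration of the capacitance-referral argument above; the voltage-noise and capacitance values are invented for the sketch, not taken from any real sensor:

```python
# Downstream voltage noise referred to the sense node as electrons:
# n_e = v_noise * C / q. If C scales down with the pixel while the downstream
# sources stay fixed, the input-referred read noise scales down too.
Q_E = 1.602e-19                    # electron charge, coulombs
v_noise = 200e-6                   # downstream voltage noise, V RMS (assumed)

for c_fF in (2.0, 1.0):            # sense-node capacitance halving with pixel scale
    n_e = v_noise * (c_fF * 1e-15) / Q_E
    print(f"C = {c_fF:.0f} fF -> input-referred noise ~{n_e:.1f} e- RMS")
```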
--
Bob
 
Doubling the shutter speed results in half the light falling on the pixel. On top of that, the pixel, being half the size (one fourth the area), gets 1/4 of that light.

I hope this clears things up satisfactorily.
Yes, this clears things up.
Apparently, not so much as I had hoped (further explanation follows).
You have given a reason why low PD is better - to reduce motion blur and increase light. That's all I care for.
A lower PD does not have less motion blur. For the same shutter speed, the motion blur is the same regardless of pixel count. However, unless the shutter speed scales with the (linear) pixel size, the additional detail that smaller pixels can potentially offer is significantly reduced.

By "significantly reduced", as opposed to "lost", I mean that the smaller pixels will still resolve more, just as smaller pixels still resolve more even when the effects of diffraction softening are dominant. It's just that the additional detail afforded by smaller pixels (for a given sensor size, of course) is "significantly reduced" from what it could have been with the proportionately higher shutter speed.
Now you also argue that higher PD has an upside when NR is applied or higher detail (albeit with noise) however, I personally do not care for those things. I almost never have use for resolutions above 6-8 mp.
The higher PD often has an upside with no NR being applied, as it is more detailed, and a lot of photography is done at base ISO where detail matters significantly more than noise.

Of course, the advantage of the additional detail depends hugely on how large you display the photo and how closely you view it.

As I've said in more than one post, even 1 MP gets you a 1200 x 900 pic for the web, so even 6 MP is well into the domain of overkill. Even 8 MP gets you 300 PPI for an 8x12 inch print (arguably, effectively 150 PPI if you take the Bayer CFA into account), and that is easily more than "good enough" for 99% or more.
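The pixel-count arithmetic behind those figures:

```python
# Web display vs print resolution needs.
web_px = 1200 * 900
print(f"1200 x 900 web image: {web_px / 1e6:.2f} MP")    # 1.08 MP

ppi = 300
print_px = (8 * ppi) * (12 * ppi)                        # 8 x 12 inch print
print(f"8x12 in at {ppi} PPI: {print_px / 1e6:.2f} MP")  # 8.64 MP
```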

So, I don't disagree with you saying that you "almost never have use for resolutions above 6-8 mp". All I'm saying is that more pixels result in more IQ (options), assuming, of course, the sensor is at least as efficient. What I am not making any comment on is the utility of this higher IQ. In fact, you might want to take a read of this:

http://www.luminous-landscape.com/reviews/kidding.shtml

which I cite quite a bit to support the point you just made.

This raises the question of what I'm "banging on" about. Well, quite simply, it's this: more pixels not only don't hurt IQ, they improve it, assuming the sensor efficiency doesn't take a hit, and the evidence is that there is no correlation between sensor efficiency and PD (at least for CMOS).

What I'm not saying is that you, or anyone in particular, "need" more pixels.
Now if you believe binning will work better at higher resolutions, that gives us hope for the F550 EXR. I am in fact a fan of binning/fusion so perhaps the camera options will allow us to choose from the best of both worlds.
Now that is what I am saying! Well, almost -- NR is better than binning (in terms of IQ).
 
Slight correction: the lens resolution on the D5000 will not be exactly 1.5 times less. As the D5000 has a higher PD than the D700, it will make up some of the difference (that is, it will have a higher lens resolution than the sensor-size difference alone indicates, but the PD is not high enough to make up the full size difference in this case)!

Another point: anyone who thinks noisier pixels (that is, smaller pixels) make images noisier should stop using dxomark data for their purposes.

See, according to dxomark data, the 450D and 500D have the same pixel-level noise, and the 550D has noisier pixels than both:

But for image-level noise, the 550D has the least, followed by the 500D and then the 450D:

This is because a greater number of pixels (even though they are individually noisier) add up together to give better SNR at the image level. This is how DxOMark computes their data, and it is part of their methodology. And don't forget that they use image-level data for their scoring and disregard the pixel-level data.
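A sketch of why the image-level and pixel-level rankings can flip (the per-pixel SNR numbers are invented for illustration, not DxOMark's):

```python
import math

# Averaging N pixels down to one improves SNR by sqrt(N), so normalization to
# a common reference size can favour the noisier, higher-MP sensor.
reference_mp = 8.0
cameras = {"cam A (12 MP)": (12.0, 30.0),   # (megapixels, per-pixel SNR)
           "cam B (18 MP)": (18.0, 26.0)}   # noisier pixels, but more of them

for name, (mp, snr_px) in cameras.items():
    snr_img = snr_px * math.sqrt(mp / reference_mp)
    print(f"{name}: pixel SNR {snr_px:.0f} -> image SNR {snr_img:.1f}")
# cam B has the noisier pixels but the cleaner normalized image.
```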
 
You keep saying that because not all variables are accounted for, it's "unscientific". That's like saying that not accounting for Jupiter makes calculating the tides "unscientific". A big part of science is knowing which variables matter and when to account for them.
Precisely, so let me try to qualify better what I mean by uncontrolled variables, so that we can eliminate Jupiter.

I am only talking about variables that cannot be controlled and that can reasonably be expected to have an effect on the experiments large enough to change the outcome of the results.

I have assumed that these variables exist because at the start of this thread you mention some variables that would account for some results that appeared to favour a lower PD image. Similarly, in the last thread, some people listed a number of variables that might invalidate Kim's results, other than the heat variable that you identified as actually causing the problem. So based on what you and your friends have said, I have assumed that there are indeed variables that cannot be controlled for but that can affect the results.

All I am saying is that if this is true, then the results can't be fairly compared and any conclusions drawn would not be sound; but if it is not true, and those variables have no effect on the results, then by definition you can't look at results you don't like and say that they were caused by uncontrolled variables.

Another thing that I have noticed from posts of yours that I have read is that you appear to have a different attitude towards results that confirm your theory and those that don't. It would seem that when results show that a lower PD image appears to have higher IQ than a higher PD image, you bring the full force of your forensic skills, intelligence and encyclopedic knowledge to bear on the problem and you find compelling arguments why the test was flawed, often mentioning uncontrolled variables.

However, in instances where higher PD images have higher IQ, I have not seen any evidence of you bringing your prodigious skills to bear in the same way, to try to establish whether those results were caused by uncontrolled variables that favour the high PD image. Instead it appears that you accept those results as correct. If this is indeed the case, it would indicate that there might be a lack of objectivity and a bias in attitude towards results that support your assertions.
 
You keep saying because not all variables are accounted for that it's "unscientific". That's like saying not accounting for Jupiter makes calculating the tides "unscientific". A big part of science is knowing when which variables matter and when to account for them.
Precisely so let me try to qualify better what I mean by uncontrolled variables so that we can eliminate Jupiter.

I am only talking about variables that cannot be controlled but that can reasonably be expected to have an effect on the results of the experiments, where that effect is great enough to affect the outcome.

I have assumed that these variables exist because at the start of this thread you mention some variables that would account for some results that appeared to favour a lower PD image. Similarly, in the last thread, some people listed a number of variables that might invalidate Kim's results, other than the heat variable that you identified as actually causing the problem. So based on what you and your friends have said, I have assumed that there are indeed variables that cannot be controlled for but that can affect the results.

All I am saying is that if this is true, then the results can't be fairly compared and any conclusions drawn would not be sound. But if it is not true, and those variables have no effect on the results, then by definition you can't look at results you don't like and say that they were caused by uncontrolled variables.

Another thing that I have noticed from posts of yours that I have read is that you appear to have a different attitude towards results that confirm your theory and those that don't. It would seem that when results show that a lower PD image appears to have higher IQ than a higher PD image, you bring the full force of your forensic skills, intelligence and encyclopedic knowledge to bear on the problem and you find compelling arguments why the test was flawed, often mentioning uncontrolled variables.

However, in instances where higher PD images have higher IQ, I have not seen any evidence of you bringing your prodigious skills to bear in the same way, to try to establish whether those results were caused by uncontrolled variables that favour the high PD image. Instead it appears that you accept those results as correct. If this is indeed the case, it would indicate that there might be a lack of objectivity and a bias in attitude towards results that support your assertions.
As I said before, I think you need to give specific examples of where a confounding variable has been ignored, and what effect it would have had on the result and analysis offered. Otherwise we're just dealing with generalised accusations of lack of objectivity and bias, and your 'experimental method' in coming to that conclusion is just as bad as, or worse than, that which you are criticising.

--
Bob
 
There is no sharp cut-off when the sensor resolution exceeds that of the lens. Image resolution continues to improve as sensor resolution increases.

With the root-sum-of-squares formula, we achieve 70% of the theoretical sensor resolution when the lens and sensor resolutions are equal, 90% when the lens has twice the sensor resolution, and 95% when the lens has three times the sensor resolution, with diminishing returns beyond that.
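To make those percentages concrete, here is a minimal sketch of the root-sum-of-squares model (the units and the helper name are mine, not from the post):

```python
import math

def system_resolution(lens_res, sensor_res):
    # Root-sum-of-squares model: 1/R_sys^2 = 1/R_lens^2 + 1/R_sensor^2
    return 1.0 / math.sqrt(1.0 / lens_res**2 + 1.0 / sensor_res**2)

sensor = 100.0  # sensor resolution in arbitrary units (e.g. lp/mm)
for ratio in (1, 2, 3):
    r = system_resolution(ratio * sensor, sensor)
    print(f"lens = {ratio}x sensor -> {100 * r / sensor:.0f}% of sensor resolution")
```

Exact arithmetic gives 71%, 89% and 95%, matching the rounded figures above, and shows why there is no sharp cut-off: the combined resolution keeps rising, just ever more slowly, as either component improves.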
I agree that lens performance does not seem to be the limiting factor ... you can tell the difference a stunning lens makes on the D70s as easily as you can on the D300 and the D7000 ....

The issue that is most commonly raised is that technique must be improved to use the higher resolution sensors because blur has more effect as more pixels are exposed to it ...
The D700's resolution advantage over the D5000 is due to sensor size. As the D5000's image has to be magnified 1.5 times more (in linear dimension) than the D700's image to print at the same size, the D5000 will always show 1.5 times less resolution when the same lens is used (given that the D5000 has no more MP than the D700). imaging-resource uses the same Sigma lens on DX and FX cameras for their studio shots.
1) D700 and D5000 create the same image of the same angle of view.

2) The D700 uses a lens that has more magnification to get the shot, ensuring that each pixel in the same location sees the same area of the image itself. I.e. the only difference is the size of each sample.

so ...

3) Unless the D5000 is sampling beyond the lens's resolution, the two images should be equal in detail.

So where does this logic break down?

--
I am but one opinion in a sea of opinions ... right?
http://kimletkeman.blogspot.com
http://letkeman.net/Photos
 
You keep saying because not all variables are accounted for that it's "unscientific". That's like saying not accounting for Jupiter makes calculating the tides "unscientific". A big part of science is knowing when which variables matter and when to account for them.
Precisely so let me try to qualify better what I mean by uncontrolled variables so that we can eliminate Jupiter.

I am only talking about variables that cannot be controlled but that can reasonably be expected to have an effect on the results of the experiments, where that effect is great enough to affect the outcome.
Remember, I have always given the condition of "same efficiency".
Another thing that I have noticed from posts of yours that I have read is that you appear to have a different attitude towards results that confirm your theory and those that don't. It would seem that when results show that a lower PD image appears to have higher IQ than a higher PD image, you bring the full force of your forensic skills, intelligence and encyclopedic knowledge to bear on the problem and you find compelling arguments why the test was flawed, often mentioning uncontrolled variables.
If an experiment confirms what I've said, then the differences in the variables are not significant enough to warrant further consideration. If an experiment contradicts what I've said, because it violates the "same efficiency" requirement, then a closer look at the variables is required.

I mean, that's how it works. If the tide comes when predicted, then there's no need to wonder why it did so without accounting for Jupiter. It's only when the tide didn't come when predicted, that we have to account for other factors.
However, in instances where higher PD images have higher IQ, I have not seen any evidence of you bringing your prodigious skills to bear in the same way, to try to establish whether those results were caused by uncontrolled variables that favour the high PD image. Instead it appears that you accept those results as correct. If this is indeed the case, it would indicate that there might be a lack of objectivity and a bias in attitude towards results that support your assertions.
I hope my points above have answered those points to your satisfaction.
 
Even the cheapest lens shows an increase in resolution when used on a higher MP body. The links that I posted show the increase in resolution of the two cheapest Canon lenses (cheapest zoom and cheapest prime).

AFAIK, imaging-resource uses the same 70mm Sigma f/2.8 macro lens on FX and DX bodies. So it is not the same AOV; they change camera position to get the same apparent scene.

Assuming all other things constant, the same lens should give 1.5 times less resolution on a DX body, as a DX sensor is 1.5 times smaller in linear dimension. Say an FX sensor's width is x units. We will print an image 30x units wide. So the FX sensor's image will be magnified 30 times (= 30x/x) its sensor's linear dimension. But the DX sensor has a width of x/1.5 units. So to print at 30x units, we have to magnify its output 30x / (x/1.5) = 45 times. This puts the DX output at a resolution disadvantage. See this link: http://www.dpreview.com/lensreviews/nikon_50_1p4g_n15/page4.asp
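The enlargement arithmetic above can be checked directly (the millimetre sensor widths are my assumption; the post only uses abstract x units):

```python
fx_width = 36.0               # FX sensor width in mm (assumed)
dx_width = fx_width / 1.5     # DX sensor width in mm (24.0)
print_width = 30 * fx_width   # a print "30x units" wide

fx_enlargement = print_width / fx_width  # 30.0
dx_enlargement = print_width / dx_width  # 45.0
print(fx_enlargement, dx_enlargement)    # 30.0 45.0
```

The DX frame needs 1.5 times more enlargement, so at the same lp/mm off the lens it delivers 1.5 times fewer line pairs per picture height on the print.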

"FX compared to DX

Eagle-eyed viewers will no doubt have noticed that the MTF50 sharpness data at any particular focal length/aperture combination is distinctly higher on FX when compared to DX. This may at first sight appear unexpected, but in fact is an inevitable consequence of our presentation of the sharpness data in terms of line pairs per picture height (and thus independent of format size).

Quite simply, at any given focal length and aperture, the lens will have a fixed MTF50 profile when expressed in terms of line pairs per millimeter. In order to convert to lp/ph, we have to multiply by the sensor height (in mm); as the full-frame sensor is 1.5x larger, MTF50 should therefore be 1.5x higher.

In practice this is an oversimplification; our tests measure system MTF rather than purely lens MTF, and at higher frequencies the camera's anti-aliasing filter will have a significant effect in attenuating the measured MTF50. In addition, our testing procedure involves shooting a chart of fixed size, which therefore requires a closer shooting distance on full frame, and this will also have some influence on the MTF50 data."

When I said that the D700 will have 1.5 times more resolution, I actually oversimplified and forgot that the D5000 has a PD advantage over the D700, so its output will not be that much less sharp. But still, its output will be less sharp than the D700's.
There is no sharp cut-off when the sensor resolution exceeds that of the lens. Image resolution continues to improve as sensor resolution increases.

With the root-sum-of-squares formula, we achieve 70% of the theoretical sensor resolution when the lens and sensor resolutions are equal, 90% when the lens has twice the sensor resolution, and 95% when the lens has three times the sensor resolution, with diminishing returns beyond that.
I agree that lens performance does not seem to be the limiting factor ... you can tell the difference a stunning lens makes on the D70s as easily as you can on the D300 and the D7000 ....

The issue that is most commonly raised is that technique must be improved to use the higher resolution sensors because blur has more effect as more pixels are exposed to it ...
The D700's resolution advantage over the D5000 is due to sensor size. As the D5000's image has to be magnified 1.5 times more (in linear dimension) than the D700's image to print at the same size, the D5000 will always show 1.5 times less resolution when the same lens is used (given that the D5000 has no more MP than the D700). imaging-resource uses the same Sigma lens on DX and FX cameras for their studio shots.
1) D700 and D5000 create the same image of the same angle of view.

2) The D700 uses a lens that has more magnification to get the shot, ensuring that each pixel in the same location sees the same area of the image itself. I.e. the only difference is the size of each sample.

so ...

3) Unless the D5000 is sampling beyond the lens's resolution, the two images should be equal in detail.

So where does this logic break down?

--
I am but one opinion in a sea of opinions ... right?
http://kimletkeman.blogspot.com
http://letkeman.net/Photos
 
