Light loss on current CMOS sensors at big apertures

Given that the graph (as I read it) gives a thumbs up to large sensors and a thumbs down to small sensors, and that it compares Canon and Nikon sensors, not m4/3, the only one here in the troll suit is you... and it's a mighty big suit, I reckon.
A prime example of a troll, the OP is a m4/3 fanboi posting about Canon and full frame. They show up every once in a while. Do not feed.
 
OK, I tried the test I suggested previously. This is quick and dirty, to get some rough figures. I did the following test on both a Canon 40D and a Panasonic GH1. All shots were done at ISO 1600 using manual lenses, i.e. no electronic connection with the camera, so no automatic boost in sensor gain. Pictures were of a (grey) white balance card. This was under mixed indoor lighting (mostly energy-saving bulbs), which is not ideal, but the results didn't vary much between shots so I'd say the results are fairly meaningful. Maybe I'll repeat the test in daylight tomorrow.

1st shot: 50mm f/4 1/20s
2nd shot: 50mm f/1.4 1/160s

I took both images in RAW and opened them in Lightroom. I set the white balance of the first image using the eyedropper and noted the RGB values at the centre of the image. I then set the white balance of the second image to exactly the same value as the first. Then I adjusted exposure (in 0.1 increments) until the RGB values at the centre were about the same as the first image.

For both cameras I had to increase the exposure of the f/1.4 image by +0.3

I think this demonstrates to me that the issue is a real one, but not all that significant. As I said, daylight results would be more accurate, but it may be difficult to shoot at f/1.4 if the sun is out!
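Since f/4 → f/1.4 is three stops and 1/20s → 1/160s is also three stops, the two shots should nominally match; the +0.3 EV residual is the claimed loss. The matching step amounts to measuring a brightness ratio in stops, something like this sketch (the values are hypothetical, and it assumes linear pixel data, which Lightroom's displayed RGB numbers aren't exactly):

```python
import math

def ev_difference(mean_rgb_ref, mean_rgb_test):
    """EV boost needed to bring the test shot up to the reference shot,
    computed from mean channel values (assumed linear)."""
    ref = sum(mean_rgb_ref) / 3
    test = sum(mean_rgb_test) / 3
    return math.log2(ref / test)

# Hypothetical readings: the f/1.4 frame comes out ~0.3 EV darker than
# the f/4 frame despite the nominally equivalent 3-stop shutter change.
ref = (128.0, 128.0, 128.0)
fast = tuple(v / 2 ** 0.3 for v in ref)
print(round(ev_difference(ref, fast), 2))  # → 0.3
```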
 
Would Foveon sensors not be even more affected by this, by virtue of their much deeper three-layer structure... or would the lower photosite density at the surface, compared to Bayer, compensate for it? Argh... where are those SD1 samples....
 
OK, I tried the test I suggested previously. This is quick and dirty, to get some rough figures. I did the following test on both a Canon 40D and a Panasonic GH1. All shots were done at ISO 1600 using manual lenses, i.e. no electronic connection with the camera, so no automatic boost in sensor gain. Pictures were of a (grey) white balance card. This was under mixed indoor lighting (mostly energy-saving bulbs), which is not ideal, but the results didn't vary much between shots so I'd say the results are fairly meaningful. Maybe I'll repeat the test in daylight tomorrow.

1st shot: 50mm f/4 1/20s
2nd shot: 50mm f/1.4 1/160s

I took both images in RAW and opened them in Lightroom. I set the white balance of the first image using the eyedropper and noted the RGB values at the centre of the image. I then set the white balance of the second image to exactly the same value as the first. Then I adjusted exposure (in 0.1 increments) until the RGB values at the centre were about the same as the first image.
Just curious whether you noticed any differences in the unadjusted WB for the two images. There shouldn't have been any need for WB equalization between the respective shots from the same camera, so I'm assuming that was probably an unnecessary step, right?
For both cameras I had to increase the exposure of the f/1.4 image by +0.3

I think this demonstrates to me that the issue is a real one, but not all that significant. As I said, daylight results would be more accurate, but it may be difficult to shoot at f/1.4 if the sun is out!
If the article linked below is representative, you shouldn't see any differences at f/2.8 or above. I'm bad at the math, but your 0.3-stop adjustment seems worse than the article indicates. Thanks for taking the time to do the experiment.

http://www.pages.drexel.edu/~par24/rawhistogram/CanonRawScaling/CanonRawScaling.html

--
My photos: http://www.pbase.com/imageiseverything/root
 
Yes, it is an interesting article, but I'm not sure I believe it. The angle at which the rays hit the CCD varies from the center to the edge, right? So the edges would be darker than the center. You couldn't compensate by simply boosting the ISO; you'd have to boost the ISO as a function of the distance from the center. The boost would be unique to each lens.
No, you're failing to understand their argument. Look at this diagram from Wikipedia:

[Wikipedia diagram: three rays leaving a single object point, refracted by the lens to converge at one point on the focal plane]
Notice how the point at the object reflects light in multiple directions. Each of these directions is a different ray, so in fact infinitely many rays of light from that object enter at different points of the lens; the diagram depicts three of them. The lens focuses on that point if and only if it bends all those rays so that they coincide at the focal plane; again, the diagram depicts the three rays converging so.

Note that the three rays hit the same point in the focal plane from three different angles. What LL are saying is that the pixel at that point is less sensitive to the ray that comes at it from the edge of the lens (the topmost ray in the diagram). And if the aperture is large enough, the angles get larger and the sensitivity loss is significant enough that with some lens/sensor combinations it can add up to nearly a whole stop.
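For intuition on "the angles get larger": for an idealised thin lens focused at infinity, the marginal ray from the edge of the aperture meets the focal plane at a half-angle of arctan(1/(2N)), where N is the f-number. A quick sketch (simplified geometry, ignoring exit-pupil position and real lens design):

```python
import math

def marginal_ray_angle(f_number):
    """Half-angle in degrees at which the marginal ray (from the edge of
    the aperture) strikes the focal plane, for an idealised thin lens
    focused at infinity: tan(theta) = 1 / (2 * N)."""
    return math.degrees(math.atan(1 / (2 * f_number)))

# Roughly 7 degrees at f/4 but nearly 20 degrees at f/1.4.
for n in (4.0, 2.8, 2.0, 1.4, 0.95):
    print(f"f/{n}: {marginal_ray_angle(n):.1f} deg")
```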
 
Can they explain why the 7D and 550D have a 0.15EV (1/6th stop) difference if they have the same sensor? That seems like a fairly large margin of error in testing.
But if the hypothesis about pixel wells and the obliquity of light were correct, there would be an effect of pixel pitch (that is why they show this graph), and there isn't. LL has simply misinterpreted the data.
The graph reads "T-stop loss depends on sensor"; it doesn't say that it depends on pixel pitch.

The purpose of the letter is to present the facts and ask for explanations.

I was asking for interpretations, and I would like to see some. If the problem is just the microlens design, it is surprising that one of the oldest cameras in the bunch has the least loss, while the newest sensors fare worse.

Does this mean that in creating gapless microlenses, the manufacturers actually worsened their effective f-stop? Or is the problem related to something else?

Does this also mean that, as a result, an APS-C sensor paired with fast lenses will show more DOF than film (assuming 35mm film cropped to match the APS-C format exactly)?

If you have some ideas, please do share.
--
A l'eau, c'est l'heure! (French naval motto)
--
http://www.flickr.com/photos/bogdanmoisuc/
--
-Scott
http://www.flickr.com/photos/redteg94/
 
Great link thank you.
You're welcome.
But now it gets worse. Reading the LL article, I thought that camera manufacturers were increasing the gain in hardware. Simply scaling the data seems like an even more cheesy solution to the problem (resulting in more noise). Am I misunderstanding?
You're right. What's funny in this case is that Canon's braindamage in other areas of camera design happens to cancel out this particular flaw by the luck of circumstance.

In a camera that is properly designed by knowledgeable engineers, scaling the raw data would introduce quantization error (AKA noise, as you said), though only in the deep shadows. Fortunately, this does not happen with Canon cameras, because the designs have flaws introduced by the crack-addled brains of the Marketing Department.

One such coke-inspired flaw is forcing users to always use 14-bit raw files, when in the very best of cases no Canon camera has ever yet taken full advantage of even 12 bits. Of course, this needlessly bloats file size on compact flash cards and reduces the number of buffered shots. But there is one time when this braindamaged misfeature is useful, and that is to cancel out the braindamage in other parts of the camera design, such as raw data scaling for angle-of-incidence response, 1/3-stop ISO, etc.

Now, we'd like to think that at least a hardware gain implementation wouldn't be this bad, right? Unfortunately not. Canon's previous hardware gain implementations, such as in the 1D series, have a variety of problems. They use a secondary gain amplifier which, in half of the 1/3-stop ISO settings, clips a full 1/3 stop of highlight in return for only the slightest decrease in read noise compared to software gain (not a good trade in the vast majority of circumstances). In the other half of the 1/3-stop settings, the hardware has slightly more read noise compared to software gain.

The simple, cheap, and elegant solution to the problem is metadata gain, the same way that white balance and HTP are metadata gain.
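The quantization ("gappy histogram") point can be illustrated numerically: multiplying integer raw code values by a fractional-stop gain makes some output codes unreachable, which is exactly the gap pattern that in-camera software scaling produces, and which metadata gain sidesteps by deferring the multiply to the floating-point raw conversion. A toy sketch (bit depth and range are purely illustrative):

```python
# Scale a run of integer raw code values by +1/3 stop (x 2^(1/3)) and
# count how many distinct output codes are actually produced versus the
# range of codes they span. The shortfall is the histogram gaps.
gain = 2 ** (1 / 3)

inputs = range(256)                        # a slice of raw code values
outputs = {round(v * gain) for v in inputs}

reachable = len(outputs)                   # distinct output codes
span = max(outputs) - min(outputs) + 1     # codes the outputs spread over
print(reachable, span)                     # reachable < span, so gaps exist
```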

Nikon doesn't make these mistakes. Their braindamage is in entirely different areas, though about the same in magnitude (high black clip, white balance scaling, mismatched LUTs for NEF level thinning, etc.).

--
Daniel
 
Just curious whether you noticed any differences in the unadjusted WB for the two images. There shouldn't have been any need for WB equalization between the respective shots from the same camera, so I'm assuming that was probably an unnecessary step, right?
I just wanted to ensure the WB was set the same in both shots. Best to eliminate any other variables.
If the article linked below is representative, you shouldn't see any differences at f/2.8 or above. I'm bad at the math, but your .3 stop adjustment seems worse than the article indicates. Thanks for taking the time to do the experiment.

http://www.pages.drexel.edu/~par24/rawhistogram/CanonRawScaling/CanonRawScaling.html
I see that they only go down to f/1.8 in that article. I'd imagine things would get a lot worse at f/1.4. I've just looked at the LL article again and my results are about the same as they got for the Canon 50D.
 
Just curious whether you noticed any differences in the unadjusted WB for the two images. There shouldn't have been any need for WB equalization between the respective shots from the same camera, so I'm assuming that was probably an unnecessary step, right?
I just wanted to ensure the WB was set the same in both shots. Best to eliminate any other variables.
If the article linked below is representative, you shouldn't see any differences at f/2.8 or above. I'm bad at the math, but your .3 stop adjustment seems worse than the article indicates. Thanks for taking the time to do the experiment.

http://www.pages.drexel.edu/~par24/rawhistogram/CanonRawScaling/CanonRawScaling.html
I see that they only go down to f/1.8 in that article. I'd imagine things would get a lot worse at f/1.4. I've just looked at the LL article again and my results are about the same as they got for the Canon 50D.
Good point. Your findings are consistent with the LL chart. Perhaps you should go into experimental physics ;-)

I'm becoming a believer, but I still think he overstates the case. Plus, the chart would indicate that there's more going on than is explained by aperture alone. How much will improvements in microlens design eliminate the problem?

Thanks again.

--
My photos: http://www.pbase.com/imageiseverything/root
 
I'm wondering whether focal length or lens design (e.g. retrofocus vs. telephoto) is also a factor. It makes sense to me that the effect would be less for a longer focal length.
 
I really question whether or not ISO is being boosted behind the scenes. I don't buy it.
It should be easy enough to verify, based on objective measurements of noise and tonal level, possibly comparing manual exposure with automated, if necessary.

Isn't that an old article, though? Improvements in microlens design may have since obsoleted the need to do things like this (have the camera hide the under-exposure with a boost).

I can't remember the exact conditions, but my Rebel XTi, which officially only has whole-stop ISOs (100, 200, etc.), had 1.25x the ISO 100 read noise, and a gappy histogram, with some shots I took with a fast lens wide open. The XTi has "old school" quantum efficiency, and that may be partly due to poor microlens design.

--
John

 
It's quite likely that the sharpness difference is simply a consequence of the format sizes. If you take a photo with the same lens on both cameras and view them at 100%, the APS-C photo is displaying the lens' output at 1.6x the magnification of the full-frame photo, and that's going to be less sharp.

Here's another way of putting it. Your lens has an optical resolution that's measured in a unit like line pairs per millimeter (lp/mm). Let's assume your lens' lp/mm resolution is 100 lp/mm everywhere in the frame. It follows that:
  • With a camera whose sensor is 16mm tall, the lens can render 1,600 distinct line pairs from the top to the bottom of the frame.
  • For a camera with a sensor that's 24mm tall, however, the lens can render 2,400 distinct line pairs from top to bottom.
This means that if you view the photos at the same output size, you expect the full frame photo to have 50% more resolution than the 1.5x crop camera.
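The arithmetic above, spelled out (assuming, as in the example, a uniform 100 lp/mm across the frame):

```python
def line_pairs(lp_per_mm, sensor_height_mm):
    """Distinct line pairs a lens resolving lp_per_mm can render across
    a sensor of the given height."""
    return lp_per_mm * sensor_height_mm

aps_c = line_pairs(100, 16)  # 1600 line pairs, top to bottom
ff = line_pairs(100, 24)     # 2400 line pairs, top to bottom
print(ff / aps_c - 1)        # → 0.5, i.e. 50% more at equal output size
```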
 
I'm becoming a believer, but I still think he overstates the case. Plus, the chart would indicate that there's more going on than is explained by aperture alone. How much will improvements in microlens design eliminate the problem?
The Canon 7D is supposed to have state-of-the-art microlenses, but according to the LL article, it does pretty poorly. So whatever the optimizations made, they weren't aimed at minimizing this effect.

On the other hand, the Canon 5D with its 5-year-old microlenses seems to be doing just fine.

It seems to me that the fill factor of the microlenses and their f-stop go in opposite directions, but that is, of course, just speculation.
--
http://www.flickr.com/photos/bogdanmoisuc/
 
Nice test, thank you.

I have a few observations.

First off, according to the DxO graphs, the change that you notice happens mostly between f/2 and f/1.4.

You don't seem very upset that your lens is actually f/1.7 at maximum aperture; I would be, but that's a different matter.

The 40D is in the middle of the pack regarding this problem.

The GH1 is also in the middle of the pack, but here we have a new problem. m4/3, naturally having less DOF control, requires faster lenses, so it would be nice to know the size of this problem on the new Nokton f/0.95, or on the older f/1.1 and f/1.2 Noktons.

I'm so glad I decided to stay with the f/1.5 Nokton 50mm; I would've been very disappointed to learn I'd bought a huge f/1.1 chunk of glass only to get f/1.6 "real" light and DOF.
OK, I tried the test I suggested previously. This is quick and dirty, to get some rough figures. I did the following test on both a Canon 40D and a Panasonic GH1. All shots were done at ISO 1600 using manual lenses, i.e. no electronic connection with the camera, so no automatic boost in sensor gain. Pictures were of a (grey) white balance card. This was under mixed indoor lighting (mostly energy-saving bulbs), which is not ideal, but the results didn't vary much between shots so I'd say the results are fairly meaningful. Maybe I'll repeat the test in daylight tomorrow.

1st shot: 50mm f/4 1/20s
2nd shot: 50mm f/1.4 1/160s

I took both images in RAW and opened them in Lightroom. I set the white balance of the first image using the eyedropper and noted the RGB values at the centre of the image. I then set the white balance of the second image to exactly the same value as the first. Then I adjusted exposure (in 0.1 increments) until the RGB values at the centre were about the same as the first image.

For both cameras I had to increase the exposure of the f/1.4 image by +0.3

I think this demonstrates to me that the issue is a real one, but not all that significant. As I said, daylight results would be more accurate, but it may be difficult to shoot at f/1.4 if the sun is out!
--
http://www.flickr.com/photos/bogdanmoisuc/
 
The obvious answer is "they don't have the exact same sensor." It might be the same chip, but the microlenses, AA filter, etc., may still differ.
Can they explain why the 7D and 550D have a 0.15EV (1/6th stop) difference if they have the same sensor? That seems like a fairly large margin of error in testing.
But if the hypothesis about pixel wells and the obliquity of light were correct, there would be an effect of pixel pitch (that is why they show this graph), and there isn't. LL has simply misinterpreted the data.
The graph reads "T-stop loss depends on sensor"; it doesn't say that it depends on pixel pitch.

The purpose of the letter is to present the facts and ask for explanations.

I was asking for interpretations, and I would like to see some. If the problem is just the microlens design, it is surprising that one of the oldest cameras in the bunch has the least loss, while the newest sensors fare worse.

Does this mean that in creating gapless microlenses, the manufacturers actually worsened their effective f-stop? Or is the problem related to something else?

Does this also mean that, as a result, an APS-C sensor paired with fast lenses will show more DOF than film (assuming 35mm film cropped to match the APS-C format exactly)?

If you have some ideas, please do share.
--
A l'eau, c'est l'heure! (French naval motto)
--
http://www.flickr.com/photos/bogdanmoisuc/
 
It's quite likely that the sharpness difference is simply a consequence of the format sizes. If you take a photo with the same lens on both cameras and view them at 100%, the APS-C photo is displaying the lens' output at 1.6x the magnification of the full-frame photo, and that's going to be less sharp.

Here's another way of putting it. Your lens has an optical resolution that's measured in a unit like line pairs per millimeter (lp/mm). Let's assume your lens' lp/mm resolution is 100 lp/mm everywhere in the frame. It follows that:
  • With a camera whose sensor is 16mm tall, the lens can render 1,600 distinct line pairs from the top to the bottom of the frame.
  • For a camera with a sensor that's 24mm tall, however, the lens can render 2,400 distinct line pairs from top to bottom.
This means that if you view the photos at the same output size, you expect the full frame photo to have 50% more resolution than the 1.5x crop camera.
This presumes a lens whose center resolution isn't sufficient for a crop sensor to resolve more detail in its smaller sensor area vs. a full-frame. There are several L lenses where this isn't the case.
--
Kodak Instant Camera
Kyocera 1MP Camera phone (pre-paid phone plan)
http://horshack.smugmug.com/
 
I'm so glad I decided to stay with the f/1.5 Nokton 50mm; I would've been very disappointed to learn I'd bought a huge f/1.1 chunk of glass only to get f/1.6 "real" light and DOF.
This, I think, reflects the most questionable idea put forward in the LL article: that somehow a fast lens is "wasted" due to pixel vignetting, and one that's rather slower will do essentially the same job. The reality is that, because this is a sensor issue, on any given camera the slower lens behaves exactly the same at its available apertures as the faster one, but the faster lens will still deliver benefits at larger apertures (the real "fix" would be to change the camera, but this isn't an option for many users). Yes, there are diminishing returns, but the one thing that's not addressed in the article is what this means in practice.

Here's a little quick-and-dirty experiment I did about 6 months ago investigating the pixel vignetting phenomenon using the Leica Noctilux 50/0.95 on the GF1. This is aimed at looking at DoF and background blur effects, by measuring the blur circle from an out-of-focus point light source (here an LED flashlight about 4m from the camera in a darkened room) in the centre of the frame across a range of apertures. Clearly, the blur circle diameter should be directly proportional to the aperture diameter. Here are the results:

[chart: measured blur circle diameter vs. aperture setting]

What you see immediately from this is that there's no sharp cutoff in blur circle at a specific aperture; however, a little further calculation shows that you don't get the full theoretical benefit of the maximum aperture either. In this I'm assuming pixel vignetting effects are negligible at F2 (which, given the results, seems fair), and calculating what the blur circle diameter should be at larger apertures.

[chart: measured vs. expected (no pixel vignetting) blur circle diameter at large apertures]

This shows that even at F1.2 there's not a huge effect - maybe 1/6 stop loss. At F0.95, though, the blur circle is considerably smaller than expected, corresponding to an apparent aperture of about F1.1, i.e. maybe a half stop loss in light. However, even then you do still get a bit more blur than when the lens is set to F1.2, so the F0.95 lens isn't completely wasted. (However it does reinforce the idea that the Voigtlander Nokton 50/1.1 may just be better value than the Noctilux, as if we didn't know that all along.)
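The back-calculation can be sketched like this. The numbers below are hypothetical (the measured diameters aren't given in the post); they're chosen only to mirror the reported outcome of roughly F1.1 effective at F0.95.

```python
import math

def effective_f_number(n_ref, d_ref, d_meas):
    """The blur circle of a defocused point source scales with aperture
    diameter (focal length / N), so a measured blur diameter implies an
    effective f-number relative to a reference aperture assumed free of
    pixel vignetting (here F2)."""
    return n_ref * d_ref / d_meas

def light_loss_stops(n_nominal, n_effective):
    """Light loss in stops when the lens behaves like n_effective
    instead of its nominal f-number."""
    return 2 * math.log2(n_effective / n_nominal)

# Hypothetical measurements: 40 px blur at f/2; at f/0.95 the circle
# measures ~73 px instead of the ~84 px (40 * 2 / 0.95) geometry predicts.
n_eff = effective_f_number(2.0, 40, 73)
print(round(n_eff, 2), round(light_loss_stops(0.95, n_eff), 2))
```

With these made-up inputs the effective aperture comes out near f/1.1 and the loss near half a stop, matching the shape of the conclusion above without claiming its exact data.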
--
Andy Westlake
dpreview.com
 
