SmilerGrogan: That must be one heck of a curve to get the top image to look the way it does. Would you care to share a screen capture of what your final curve looked like?
@BadScience - I've just checked with Rishi and it was a single conversion from a single file. All done using the Whites/Blacks, Shadows/Highlights sliders and tone curve adjustments: no stacking.
I used the phrase 'tone curve' to represent the net effect of all whole-image adjustments (since that's what defines the Raw number to pixel value mapping), rather than the effects of just the curves tool.
SolidMetal: Great article! Though I still can't really understand why high ISO images have less dynamic range. Wouldn't that orange line, which is over-amplified in the picture, be out of the raw file even if it were straight, not curved? I mean, it's still brighter than the highlights marked in yellow on the low ISO picture.
In terms of why dynamic range falls at higher ISOs: don't forget that dynamic range is the gap between the brightest tone in your Raw file and the darkest *useable* tone (what noise level you consider to be useable ends up playing an important role in assessing DR).
At a high ISO you use amplification to overcome the shorter exposure you used, so that your highlight tone still comes from the same scene brightness.
However, you've amplified away all the additional bright tones that your shorter exposure allowed your sensor to initially capture.
This means that you're pulling your whole image from the bottom of your sensor's capture range (amplified to fill the Raw file), so a lot of those tones will have low SNRs as a result of shot noise, and the contribution of read noise will have more impact, too.
As a result, an increasing number of tones will fall below your acceptable noise threshold, meaning your DR has narrowed (DR is lost in the shadows, not the highlights).
The angle of the lines between the Sensor Capture and Raw File sections of the diagram represents the amplification, so you can't selectively brighten certain tones using hardware amplification. (If you want the mid-tones to be bright enough, you have to throw away the extra highlights that the shorter exposure initially allowed you to capture).
By contrast, if you use the shorter exposure but don't amplify the signal (by shooting at base ISO), you don't throw away those brighter tones. And, unlike with hardware amplification, the tone curve adjustments you make when you process Raw files *do* allow you to selectively boost certain tones.
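To make the geometry of the diagram concrete, here's a minimal numerical sketch of the argument above. It's my own toy model (the full-well, read-noise and threshold figures are made up, not values from the article): clean amplification throws away highlights while the usable shadow floor stays put, so DR narrows by roughly one stop per stop of ISO.

```python
import math

# Toy model (illustrative numbers, not from the article): a sensor with a
# fixed capture range and a noise floor that defines the darkest usable tone.
FULL_WELL_E = 60000       # electrons the sensor can capture before clipping
READ_NOISE_E = 3.0        # read noise, in electrons
ACCEPTABLE_SNR = 1.0      # noise threshold that defines a "usable" shadow tone

def dynamic_range_stops(iso, base_iso=100):
    """DR in stops at a given ISO, assuming clean hardware amplification.

    Raising ISO amplifies the signal before the Raw file is written, so the
    brightest tone that still fits in the file corresponds to fewer captured
    electrons: the extra highlights the short exposure allowed are thrown
    away. The usable floor stays roughly put, so DR narrows from the top
    down, which shows up as lost shadows in the final image.
    """
    gain = iso / base_iso
    brightest_usable = FULL_WELL_E / gain           # highlights clipped by amplification
    darkest_usable = READ_NOISE_E * ACCEPTABLE_SNR  # shadow floor
    return math.log2(brightest_usable / darkest_usable)

for iso in (100, 400, 1600, 6400):
    # DR drops ~1 stop per stop of ISO in this idealized model
    print(iso, round(dynamic_range_stops(iso), 1))
```

In a real camera the loss is usually slightly less than a stop per stop, because read noise also tends to fall a little as analogue gain rises.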
mapgraphs: Richard, Rishi,
When you make statements like "most cameras have low upstream read noise" it would be helpful if you cited your source for the statement. And the same goes for every other statement of implied fact. Your sources should be readily available so that anyone can easily access them.
Otherwise this is just an editorializing opinion piece. If it is intended as an opinion piece, that's fine, but that should be set out at the beginning.
Upstream read noise is a variable value, which can be affected by heat and the software used to interpolate sensor pixels:
"Interpolation works by using known data to estimate values at unknown points."
"Image interpolation works in two directions, and tries to achieve a best approximation of a pixel's color and intensity based on the values at surrounding pixels." [ibid.]
@mapgraphs - I appreciate your point. I was trying to write an accessible overview, rather than a fully referenced scientific paper. And it should be made clear that we ran it by a series of people much more experienced than Rishi or myself, to make sure we weren't getting any of the fundamentals wrong.
Stating facts without reference is not the same as writing opinions but I accept it does require a level of trust that isn't automatic.
For some idea of the numbers, you can look at [Sensorgen](http://www.sensorgen.info/) or [Bill Claff's website](https://home.comcast.net/~NikonD70/). Alternatively check the behaviour set out in this article against DxO's results.
Yes thermal noise can play a role but we're concentrating on the relatively short exposures used for everyday photography in this example.
mpgxsvcd: Will there be a 3rd article discussing exposure duration and noise?
Not in the short term, I'm afraid. There are a few other cases we'd like to discuss, but there are cameras that need reviewing.
Mssimo: Question: As far as shot noise... Would a APS-C camera with a speed booster have the same low noise performance as a full frame assuming the same lens is used?
As Ember42 points out: in principle, yes. But, since the Speed Booster effectively decreases the f-number (by shortening the focal length while leaving the aperture diameter unchanged), you do risk overexposure.
However, it should match the performance up until the point that it overexposes (it's just that a full frame camera is likely to be able to tolerate more exposure and therefore offer improved SNR).
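For what it's worth, the focal-reducer arithmetic can be sketched like this (the 0.71x ratio is the common Metabones figure and is my assumption here, not something stated above):

```python
# Sketch of what a 0.71x focal reducer ("Speed Booster") does to a lens.
# The 0.71x ratio is an assumption (the common Metabones figure).
def with_reducer(focal_length_mm, f_number, reducer=0.71):
    new_focal = focal_length_mm * reducer
    new_f = f_number * reducer    # aperture diameter is unchanged, and
    return new_focal, new_f       # f-number = focal length / diameter

# A 50mm F2.8 full frame lens behaves like a ~35.5mm F2.0 lens on APS-C:
focal, f = with_reducer(50, 2.8)
```

The lower f-number means roughly twice the light per unit area lands on the smaller sensor, which is why overexposure becomes the limiting factor rather than noise.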
steve_hoge: Would appreciate a more complete explanation of (or link to) the Exposure Latitude and ISO invariance tests - I'm not familiar with the specific procedures used for those tests, but I assume they are used to derive the Dynamic Range graphs shown in DPReview's camera reviews?
Also, a discussion of the meaning of "neutral" or "18%" gray in the context of the camera signal path and gain structure would be very useful.
[Here's one of the simpler examples](http://www.dpreview.com/reviews/nikon-d5500/9) of those two tests. [This one includes a more detailed analysis](http://www.dpreview.com/previews/canon-eos-5ds-sr/7).
The Exposure Latitude test takes a series of base ISO images with increasingly low exposure, and pulls them up to show the same image brightness. This mimics the real-world effect of pulling up dark tones in a Raw file but, as this article hopefully makes clear, includes increasing shot noise (because of the exposure change) as well as read noise.
The ISO Invariance test takes a series of shots taken at different ISO settings but the same exposure, then pulls them to the same brightness: taking shot noise out of the equation.
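As a rough sketch of the difference between the two tests described above (the photon and read-noise numbers are illustrative inventions of mine, not DPReview's actual methodology):

```python
import math

# Illustrative model of the two tests (numbers are made up for the sketch).
PHOTONS_BASE = 10000   # photons per pixel at the reference exposure
READ_NOISE = 5.0       # read noise, in photon-equivalent units

def snr_exposure_latitude(stops_under):
    """Base-ISO shot, underexposed then pushed in software.

    Less exposure means fewer photons, so shot noise (the square root of
    the photon count) worsens relative to the signal, on top of the fixed
    read noise: both noise sources are in play.
    """
    photons = PHOTONS_BASE / 2 ** stops_under
    noise = math.sqrt(photons + READ_NOISE ** 2)  # shot + read, in quadrature
    return photons / noise

def snr_iso_invariance(read_noise_at_iso):
    """Same exposure at different ISO settings, pushed to the same brightness.

    The photon count (and hence shot noise) is fixed, so any SNR difference
    comes purely from how read noise behaves at each ISO setting.
    """
    noise = math.sqrt(PHOTONS_BASE + read_noise_at_iso ** 2)
    return PHOTONS_BASE / noise
```

A perfectly ISO-invariant sensor would make `snr_iso_invariance` return the same value at every setting, which is exactly what the test probes for.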
Artpt: To DPR, would ISO invariance then be a standard for ranking sensor performance, considering the variable of electrical noise? I see the studio tests... is there a way to quantify it?
Perhaps I missed that in the article...
Thank you in advance.
Rishi's written a much more comprehensive article about ISO Invariance, which will be coming soon. It's not necessarily something we could put an easy-to-understand number on (and we'd need to include some consideration of efficiency, too).
We'll be increasingly looking for ways to meaningfully present this information. For now our ISO invariance test in our recent reviews is a start.
Photato: Any mention of Readout Rate as a Noise Source ?
And what about Angle of Incidence which makes some sensors especially those with smaller pixels less sensitive (noisier) at the edges of the frame ?
Ultimately pixel count does have an impact on noise. Otherwise Sony wouldn't bother making an a7S, nor would Apple make that official statement: "Less pixels is more".
Most tests for these assertions are based on steady subjects with the camera stabilized on a tripod. But all that methodology is flawed when camera shake and subject motion are introduced.
I would kindly ask DPReview or others to point me to comparison tests done when the camera and/or subjects have even the slightest motion or imperfect focus. I am sure the results will be different.
In terms of how much difference pixel size plays, have a look at [this example](http://bit.ly/pixelsize). Or, for a more in-depth look at the impact, [have a look at this](http://www.dpreview.com/articles/4613822764/).
Readout rate does have an influence (which Rishi is better qualified than me to expand on), but this is meant to be a high-level look at the bigger stuff and it links off to Emil Martinec's more detailed look at the subject.
With regard to the a7S: do you think Sony made a sensor specifically to be the best above ISO 51200 (or wherever you think it steps ahead in the low light examples)? Or do you think they built a sensor that perfectly 2x oversamples its 1080 video region so that they could offer excellent video, the low light benefits merely being a bonus when it came to marketing?
Fewer pixels *might* be better at the very small sensor sizes that smartphones work at (I haven't looked at the results), but again I wouldn't draw conclusions from marketing claims.
Lhermine: Very interesting article ! Thanks very much DPR.
Like many people here, I was wondering if the number of photons can be so low that shot noise would be significant.
Based on the so-called "sunny 16 rule", I've found that for a proper exposure on a 24 MPix FF sensor, around 50,000 photons should hit each pixel. This leads to a shot noise with an amplitude of 1.4 %.
This kind of noise amplitude should be too small to be noticed. However, in darker conditions, you may have to increase the ISO speed, let's say to 6400 for instance. It means that you have fewer photons. The corresponding shot noise amplitude will be around 3.5 %. We reach up to 14 % for ISO 100,000!
And that's the bad news: 1,000,000 ISO speed will never be as good as 100 ISO whatever the quality of the sensor because of the shot noise.
(for those who are interested in the computation, please mail me, you may point out some mistakes ;-) )
Is that figure of 50,000 photons the figure for a highlight tone or a midtone?
Dark regions will receive progressively fewer photons (hence being darker), so the SNR will drop down in darker regions of the image.
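Since shot noise is Poisson-distributed, the relative noise amplitude works out to simply 1/sqrt(N). A quick sketch (the photon counts below are illustrative, not measurements):

```python
import math

# Shot noise follows Poisson statistics: for N captured photons the noise is
# sqrt(N), so the relative noise amplitude is sqrt(N)/N == 1/sqrt(N).
def relative_shot_noise(photons):
    return math.sqrt(photons) / photons

# Illustrative photon counts for tones of different brightness:
for label, n in (("bright highlight", 50000), ("midtone", 5000), ("deep shadow", 50)):
    print(label, f"{100 * relative_shot_noise(n):.1f}%")
```

For what it's worth, 50,000 photons works out to roughly 0.4% relative shot noise, while 1.4% corresponds to around 5,000 photons, which is one reason the highlight-versus-midtone question matters.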
TN Args: I have never seen such obfuscatory writing.
Try reading DuncanDovovan's comments.
Or, take a photo with a D810. Now take the exact same photo again, but before pressing the shutter, place a Four Thirds sized mask over the sensor. According to you, the SN ratio just went up due to random shot noise because only a quarter of the light was captured. Well it couldn't. It was the same exposure on the same sensor.
Trying to make the images the same size is YOUR assumption, and pointless, because it would actually make a lot of sense to buy a full frame camera if you want larger final images than previous practice. But they will only be of the same image quality, not higher.
For my own use, I'd tend to agree.
I've used full frame here as an example, but the general point about bigger *having the potential to be* better is what I was hoping to make.
There's nothing special or optimal about full frame, it's just that it was easy to find examples of full frame lenses that also mount on smaller-sensor formats, etc, etc...
DuncanDovovan: Richard, I still believe your conclusions in this article are not entirely right.
Because you changed 2 things for the D810: The sensor AND the lens.
Thought experiment: Let's take the APS-C lens and sensor as the standard. If there existed a light-neutral optical component that enlarged the projected image to let the APS-C crop cover the FF sensor, the light intensity per square mm would DROP, because you'd have to spread the same amount of light over a larger area, correct?
Assuming the number of pixels per square mm are the same for the APS-C and FF sensors, the FF sensor pixels would now get LESS light per pixel - which would mean a worse S/N ratio.
BUT: You would now have more pixels covering the same area of the photo, because the image is enlarged. So you can now combine the pixels (downscaling the image) to improve the S/N ratio again.
I conclude that by only enlarging the sensor the result would be the same S/N ratio - not better as concluded in your article?
You can think in terms of magnification, enlargement and light fall-off, if you like, but to me that feels like over-complicating matters.
1st example: if you could project an APS-C-sized image across a whole full frame sensor, then yes, your *pixel-level* SNR would fall (though the whole-image SNR would stay essentially the same). If viewed at the same size, the APS-C and full frame images would be the same (or, at least, the differences are likely to be imperceptible).
2nd example: Yes, if you could condense all the light from the full frame region down onto an APS-C sensor then it would increase the light intensity. Your pixel-level SNR would increase but your image-level SNR would match the full frame image.
The optical component (though not light neutral) in the first example would be a teleconverter, and a SpeedBooster in the second.
In each case, these devices are considered to increase or decrease the focal length (and hence change the f-number of the combination, which accounts for the change in light intensity).
If you insist on only ever viewing or printing images in exact proportion to the size of the sensor you shot an image on, that's fine. I explicitly stated my assumption that most people wouldn't want to do this, at which point the reader is welcome to decide whether it's pixel-level or whole-image level performance they care about.
We're in total agreement that a crop of a large sensor is the same as shooting with a sensor the size of that crop. The article explains and illustrates this. However, as soon as you compare at a common size (either on screen or print), an image made up from more light will be better.
Whether people want to pay the size and monetary price for that improvement is a totally separate issue.
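The pixel-level versus whole-image distinction in the reply above can be sketched numerically (the photon and pixel counts are arbitrary, and the model assumes purely shot-noise-limited performance):

```python
import math

# Spread the same total light over more pixels: per-pixel SNR falls, but
# averaging pixels back together (viewing at a common size) recovers it.
TOTAL_PHOTONS = 200_000_000_000   # total light forming the image (arbitrary)

def pixel_snr(n_pixels):
    per_pixel = TOTAL_PHOTONS / n_pixels
    return math.sqrt(per_pixel)   # shot-noise-limited SNR = sqrt(N)

def image_snr(n_pixels, viewed_pixels):
    # Downsampling averages n_pixels/viewed_pixels sensor pixels into each
    # output pixel, which multiplies SNR by the square root of that factor.
    return pixel_snr(n_pixels) * math.sqrt(n_pixels / viewed_pixels)

# A 16MP and a 36MP sensor receiving the same total light:
print(pixel_snr(16_000_000), pixel_snr(36_000_000))    # pixel-level SNR differs
print(image_snr(16_000_000, 8_000_000),
      image_snr(36_000_000, 8_000_000))                # same at a common viewing size
```

The algebra collapses neatly: `image_snr` equals `sqrt(TOTAL_PHOTONS / viewed_pixels)` regardless of the sensor's pixel count, which is the whole-image-SNR point being made.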
mbrobich: Kinda wish there was a Highlight color used to see what was updated in the review. I read the original one and don't want to re-read the whole damn thing....blahhhh
I've added a 'Review Timeline' box on the front page, so that you can see which pages were added and when.
Kerni: Thank you for this great and very interesting article!
When will the other parts of this article be published? I am looking forward to reading them.
I'm just trying to put the finishing touches to part 2 but some of the things that have been raised in the comments have complicated this, so I'm trying to make sure it addresses as many points as possible.
toughluck: Richard, labels in the second chart are the same as in the first and it doesn't make sense. "APS-c crop" (second column in the second chart) should be: "D810 whole frame resized to APS-c size".
They're resized to 16MP - not APS-C size. They still come from the whole, full frame region.
Sucama: Can DPR show/add Exposure Latitude & ISO-invariance tests for the D7100, to compare with the D7200?
We haven't been intentionally avoiding adding the D7100, it's just that we haven't got one and there have been too many other things going on for us to start getting older cameras back in to expand the tests.
However, it is essential for the D7200 review, so we'll try to get hold of one.
DuncanDovovan: You mention pixel size is the same for an FZ sensor and the APS-C sensor. But I assume the FZ sensor has more pixels than the APS-C sensor, right?
Otherwise a better S/N ratio would not be possible?
That situation would be impossible: a larger sensor can't have the same pixel count and same pixel size as a smaller sensor.
sina_hml: I know it's an old article but I hope someone can explain some of my questions. Here is how I understand it: I have a 5D and a 450D and a 50mm 1.8. Both cameras have identical flange distances, so the lens is producing the exact same image at the sensor plane. The 450D captures a smaller part of this image. I think everyone agrees with me so far. The part I don't understand is why some insist that the picture the 450D sees is darker than what the 5D sees. It is a smaller amount of the total light entering the lens, but it is also used to illuminate a smaller area. I assume that the amount of light that each pixel (photo cell etc.) receives is the same between the cameras.
Reasonable *for a bridge camera* is probably fairer than 'almost unusable' - I take it back.
Modern small sensors tend to perform better (in proportion to their sensor size) than large sensors, as a *very* rough rule-of-thumb, which is part of why I shouldn't use 'equivalent ISO' as anything other than a *very* rough guide.
At F2.8 and ISO 100, the FZ200 is creating its image from the same amount of light as a full frame camera at F15.6 and ISO 3086. However, the performance won't necessarily be identical.
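For anyone who wants to reproduce that arithmetic, here's the equivalence calculation. The ~5.57x crop factor for the FZ200's 1/2.3"-type sensor is my approximation, which is why the numbers land near, rather than exactly on, those quoted above:

```python
# Rough "equivalent" aperture and ISO across formats: multiply the f-number
# by the crop factor and the ISO by its square (both follow from comparing
# the total light gathered, not the light per unit area).
def equivalent(f_number, iso, crop_factor):
    return f_number * crop_factor, iso * crop_factor ** 2

# FZ200 at F2.8, ISO 100; ~5.57x crop factor is an assumed approximation:
f_eq, iso_eq = equivalent(2.8, 100, 5.57)
print(round(f_eq, 1), round(iso_eq))
```

The square on the ISO term comes from area scaling: the crop factor is a linear ratio, but light gathering depends on sensor area.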