Why is DR smaller when FF cameras are used in cropped mode?

Therefore, assuming similar sensor tech, and thus similar electronic noise, as well as similar processing (that is, one didn't have more noise filtering applied than the other), the DR for the crop would be greater than for the mFT photo.

Is there something else you wanted to add?
I could add something. Given that the top end of DR is the maximum possible level, and few if any photos (especially if metered to a nominal ISO setting) actually use that maximum, an individual photo can't really be taken as an indicator of DR.

I suspect Don is making the rather common confusion between DR and SNR, which is somewhat incorporated into 'PDR'.
 
When I look at the PhotonsToPhotos site and compare the DR of different cameras, I see that the DR is smaller when a camera is used in cropped mode, e.g.: https://www.photonstophotos.net/Cha...5(APS-C),Sony ILCE-7RM4,Sony ILCE-7RM4(APS-C)

I thought the sensor and the pixels remain the same and just a smaller area is used. How can DR be different?
Both photos were taken with a Canon 6D2 at f/4 1/200 ISO 25600 and converted using the exact same settings:

6D2 + Tamron 35-150 / 2.8-4 VC at 100mm f/4 1/200 ISO 25600 downsampled to the same dimensions of the crop from the 50mm photo below.

6D2 + Tamron 35-150 / 2.8-4 VC at 50mm f/4 1/200 ISO 25600 center crop.

The low exposure was intentionally used to make the differences in noise more visible. The reason the crop of the 50mm photo is so much noisier than the uncropped 100mm photo is that the latter was made with 4x as much light as the former. Thus it is both less noisy and has greater DR.

Some will argue that it is the downsampling of the 100mm photo that reduced the noise, not the fact that it was made with 4x as much light. However, prints of the same size would show the same thing, and displaying the top photo at full size on an 8K monitor and the crop at full size on a same size 4K monitor would also have the same results.
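As a sanity check on the 4x-light claim, here is a toy shot-noise calculation (the photon counts are arbitrary numbers chosen for illustration, not measurements of the 6D2):

```python
import math

# Photon (shot) noise is Poisson: for N photons, SNR = N / sqrt(N) = sqrt(N).
def snr_db(total_photons):
    """SNR of an ideal photon-limited capture, in dB."""
    return 20 * math.log10(math.sqrt(total_photons))

crop_photons = 1_000_000          # arbitrary total for the 50mm centre crop
full_photons = 4 * crop_photons   # the 100mm frame collected 4x as much light

print(snr_db(crop_photons))                          # 60.0
print(snr_db(full_photons) - snr_db(crop_photons))   # ~6.02 dB better
```

Quadrupling the light adds 20·log10(2) ≈ 6 dB of SNR, i.e. a factor-of-two improvement, which is what the pair of photos above is illustrating.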
But when viewed at 1:1, they'll show the exact same amount of noise.
In short, both noise and DR are *highly* dependent on the total amount of light that makes up the photo.
They're highly dependent on the scaling factor. There's no light (photons) in the images; we're dealing with information, and through digital transformations (downscaling) we sacrifice spatial information for better DR. Obviously, the more information we have to start with, the more we can sacrifice.

But having less information due to a lower light density (exposure) is different from having less information due to a smaller area.
Bill Claff's Photographic Dynamic Range, or DxO's Landscape score, yes. Those are good for comparison, but not as an absolute measure of DR.

 
When I look at the PhotonsToPhotos site and compare the DR of different cameras, I see that the DR is smaller when a camera is used in cropped mode, e.g.: https://www.photonstophotos.net/Cha...5(APS-C),Sony ILCE-7RM4,Sony ILCE-7RM4(APS-C)

I thought the sensor and the pixels remain the same and just a smaller area is used. How can DR be different?

Wolfgang
When you put an RP camera in crop mode, the viewfinder will still show a full picture.

You are seeing fewer pixels from the sensor, but they are magnified to fill the viewfinder.

What would happen if you could crop down to just 4 pixels and have them fill the viewfinder?

If the scene had a lot of different bright and dark sections, they would be averaged out across those 4 pixels.

Could those 4 pixels capture the full DR of this scene?

9870c8440b3e48098908f87eebfbf6f3.jpg

--
" It's a virus that hitches a ride on our love and our trust for other people. "
Dr. Celine Gounder
 
PhotonsToPhotos doesn't tell you the dynamic range. It tells you Bill Claff's own metric, which he calls 'Photographic Dynamic Range' but which is not 'Dynamic Range'. It's a kind of mixed metric, the purpose of which is unclear. If it is DR you want to know about, don't use those figures - maybe try DxOMark's instead.
DxOMark also uses a similar normalised metric, but much more obscure.
 
DSLRs have either full frame or crop frame sensors. Whichever you have will make a big difference in how you should go about choosing your lenses. Here, you will learn what the difference is and what it means for your photography.

“I got a new DSLR and I want a new lens. What should I get?” This was my aunt and, after a series of questions, I was able to narrow down what lens would probably make her happy based on her camera model. If you’re unhappy with the lens you have (or don’t have one at all) and you don’t know what you should use, find guidance in my straightforward suggestions below for new Canon and Nikon DSLR shooters. You must be equipped with this knowledge because not all lenses work as intended on all DSLRs, even within the same brand.

Narrowing down what lens you need can be divided into 3 steps:

Step 1: Identify Your Camera
Step 2: Identify Your Lens Type
Step 3: Pick a Lens (With Some Recommendations)

What are full frame sensors?

Every DSLR has an image sensor inside it. It is hiding behind a mirror and looks like a green rectangle. This is what conveys information that results in an image. It is what we popularly use now to make pictures instead of film. In fact, that is what a full frame sensor is – it is a digital version of a 35mm film frame. They are the same size!

What are crop frame sensors?

It’s a smaller sensor – smaller than 35mm. That’s it. That’s all it is. Imagine a 35mm piece of film, crop the edges down, and that’s your crop frame sensor.

Why would anybody crop a sensor?

The cynical answer is money. You can fit more cropped sensors on a silicon wafer during production than full frame-sized sensors, so the yield is higher, making the cost lower. But there are other benefits. Crop sensors are smaller, which means the cameras they go into can be smaller. Crop sensors also have a narrower angle of view (they simply aren’t as wide as full frame sensors), which enhances the telephoto effect while reducing the wide angle effect. We’ll talk more about that later.

If full frame sensors match 35mm film, then exactly how big is a crop frame sensor?

Most crop sensor DSLRs use the “APS-C” format, which is a 3:2 ratio, as is full frame, but approximates the size of Advanced Photo System Classic film, which is closer to 24mm than to 35mm. It was popular in the 90s in point-and-shoot cameras. In the digital age, APS-C sensor cameras occupy a formidable presence among pros and amateurs alike.

I heard crop sensor cameras have crop “factors”. What is a crop factor?

In the digital photography world, the 35mm size is our reference point for all imagery. We have all of these lenses available that are designed to work specifically on a standard 35mm frame size. But not all cameras have 35mm size image sensors! Many DSLRs have the APS-C sized sensor, which is closer to 24mm. When you mount a lens that is built for a 35mm frame in front of a sensor that is only 24mm wide, the edges of your pictures are going to get cropped off. How much they get cropped differs between Nikon and Canon: Nikon APS-C sensors crop your image by 1.5x, and Canon crops it a hair more, by 1.6x. This crop reduces your field of view through a lens by a factor proportional to the ratio between the 24mm size and the 35mm size.

Ok, so I’m going to see less on the edges of my scene through a lens on a crop sensor camera than on a full frame sensor camera. But how does that affect my lens choice?

When you cut off the edges of a scene, your field of view is narrower. If you’re a big fan of wide angle lenses because you like shooting wide scenes, you are going to lose some of that width on a crop sensor camera. How much? Simply multiply the focal length of the lens by the camera’s crop factor. In Nikon’s case, it is 1.5x – for Canon, 1.6x.

Let’s say you want to use a Nikon 16-35mm lens on a Nikon crop sensor DSLR:

16 x 1.5 = 24
35 x 1.5 = 52.5

Your 16-35mm lens will produce imagery on your crop sensor camera that looks more like what 24-52.5mm would look like on a full frame sensor camera. This is your focal length multiplier: take your crop factor (in this case 1.5) and multiply it by the focal length you want to use. The result is how your crop sensor camera sees the scene in a world dominated by lenses designed for full frame fields of view. This will help you better choose a focal length that matches what you intend to see through your camera and not just what’s printed on the lens barrel.
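The steps above can be wrapped in a tiny helper (the function name is mine, just for illustration):

```python
def equivalent_focal_length(focal_mm, crop_factor):
    """Full-frame-equivalent focal length for a lens on a crop sensor body."""
    return focal_mm * crop_factor

# The Nikon 16-35mm example from the text (Nikon APS-C crop factor 1.5):
print(equivalent_focal_length(16, 1.5))  # 24.0
print(equivalent_focal_length(35, 1.5))  # 52.5

# The same lens on a Canon APS-C body (crop factor 1.6):
print(round(equivalent_focal_length(35, 1.6), 1))  # 56.0
```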

I hope it helps :)
 
PhotonsToPhotos doesn't tell you the dynamic range. It tells you Bill Claff's own metric, which he calls 'Photographic Dynamic Range' but which is not 'Dynamic Range'. It's a kind of mixed metric, the purpose of which is unclear. If it is DR you want to know about, don't use those figures - maybe try DxOMark's instead.
DxOMark also uses a similar normalised metric, but much more obscure.
FWIW, pixel level dynamic range (Engineering Dynamic Range (EDR)) can be found at PhotonsToPhotos by clicking on cameras in the legend of the Input-referred Read Noise chart. For example:

eb7495fa8bd24cfe8d0cb1fea992fa74.jpg.png

You can also look at DxOMark Screen dynamic range but they don't test extended and intermediate ISO Settings.

For normalized measures there's PhotonsToPhotos Photographic Dynamic Range (PDR) as opposed to DxOMark Print dynamic range.

So, both sites have both "types" of dynamic range.

--
Bill ( Your trusted source for independent sensor data at PhotonsToPhotos )
 
But having less information due to a lower light density (exposure) is different from having less information due to a smaller area.
is it?

In the picture below, the second frame is a crop to 1/16 the area of the first frame. The third frame is the same size as the first frame, but received 1/16 the exposure. The first and second frames received the same exposure, and the same light per pixel. The second and third frames received the same total light.



690cac8a172b4d3b99d4befdb83917a0.jpg
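A sketch of the bookkeeping behind those three frames (arbitrary units; only the ratios matter):

```python
# Total light collected = sensor area x exposure (light per unit area).
full_area = 16.0
full_exposure = 16.0

frame1 = full_area * full_exposure            # full area, full exposure
frame2 = (full_area / 16) * full_exposure     # 1/16 the area, same exposure
frame3 = full_area * (full_exposure / 16)     # full area, 1/16 the exposure

print(frame1)  # 256.0
print(frame2)  # 16.0 -> same total light as frame 3
print(frame3)  # 16.0
```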
 

PhotonsToPhotos doesn't tell you the dynamic range. It tells you Bill Claff's own metric, which he calls 'Photographic Dynamic Range' but which is not 'Dynamic Range'. It's a kind of mixed metric, the purpose of which is unclear. If it is DR you want to know about, don't use those figures - maybe try DxOMark's instead.
DxOMark also uses a similar normalised metric, but much more obscure.
It is not similar, it is not obscure, and it is not "much" more obscure. On the other hand, their website is a mess, and the original definition is hard to find now.
 
Last edited:
That definition is nonsense. Dynamic range is not something you could appreciate or measure viewing an 8x10" print (or indeed any other size print or display). It rather indicates the whole confusion behind 'PDR'.
This was only a demonstration that PDR is not classic DR and that it's normalized.
So if someone understood it as the definition of PDR, my fault; it isn't the whole definition.
It's mainly a link that leads to Bill Claff's page with more details.

Personally, I consider PDR obscurely defined too, so it's difficult to understand.
The DXOMark score is much easier to understand, with one simple equation.
But on the positive side, PDR is very useful for cross-comparison between cameras, and Bill Claff always has the most current data, as DXOMark almost ignores new cameras today and optyczne.pl is much slower than P2P.
I also admire Bill Claff's dedication to measuring the entire Photon Transfer Curve (PTC). I did some measurements of a few sensors, but only for the central part of the PTC with the 1/2 slope, and even that is pretty time consuming.
And P2P also has data on DR and saturation points (unfortunately they are so wonderfully hidden that I doubt most visitors can even find them! :) ).
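For readers unfamiliar with the PTC mentioned above: it plots noise against signal, and in the shot-noise-limited region the log-log slope is 1/2. A minimal simulated sketch (the sensor numbers are assumed, not measured from any camera):

```python
import numpy as np

read_noise = 2.0                    # electrons RMS (assumed)
signal = np.logspace(1, 4, 50)      # mean signal, 10 to 10,000 electrons
noise = np.sqrt(read_noise**2 + signal)  # read + shot noise in quadrature

# Fit the log-log slope where shot noise dominates (signal >> read_noise^2):
mask = signal > 100
slope = np.polyfit(np.log10(signal[mask]), np.log10(noise[mask]), 1)[0]
print(round(slope, 2))  # close to 0.5
```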
 
PhotonsToPhotos doesn't tell you the dynamic range. It tells you Bill Claff's own metric, which he calls 'Photographic Dynamic Range' but which is not 'Dynamic Range'. It's a kind of mixed metric, the purpose of which is unclear. If it is DR you want to know about, don't use those figures - maybe try DxOMark's instead.
DxOMark also uses a similar normalised metric, but much more obscure.
FWIW, pixel level dynamic range (Engineering Dynamic Range (EDR)) can be found at PhotonsToPhotos by clicking on cameras in the legend of the Input-referred Read Noise chart. For example:
Thank you Bill, that may be quite useful (I mostly used PDR charts for camera comparison).
eb7495fa8bd24cfe8d0cb1fea992fa74.jpg.png

You can also look at DxOMark Screen dynamic range but they don't test extended and intermediate ISO Settings.
I had a feeling your analysis of DXOMark's methods was more informative than what they have on their own site - i.e. I'm not sure one can scientifically reproduce their results and recreate the same setup from the description on their site.

JFYI, a few links from PTP to DxOMark are broken, for example at the bottom of this page https://www.photonstophotos.net/Charts/ReadNoise_ADU.htm

They seem to have removed some explanations.
For normalized measures there's PhotonsToPhotos Photographic Dynamic Range (PDR) as opposed to DxOMark Print dynamic range.

So, both sites have both "types" of dynamic range.

--
Bill ( Your trusted source for independent sensor data at PhotonsToPhotos )


 
That definition is nonsense. Dynamic range is not something you could appreciate or measure viewing an 8x10" print (or indeed any other size print or display). It rather indicates the whole confusion behind 'PDR'.
This was only a demonstration that PDR is not classic DR and that it's normalized.
So if someone understood it as the definition of PDR, my fault; it isn't the whole definition.
It's mainly a link that leads to Bill Claff's page with more details.
One of the problems with PDR is that there is no available definition, nor a rationale, nor any real motivation of what it is for.
Personally, I consider PDR obscurely defined too, so it's difficult to understand.
The DXOMark score is much easier to understand, with one simple equation.
But on the positive side, PDR is very useful for cross-comparison between cameras, and Bill Claff always has the most current data, as DXOMark almost ignores new cameras today and optyczne.pl is much slower than P2P.
The problem is, just what are you 'cross comparing'. PDR puts different cameras in a different order to normalised DR. Whether that is what you want depends on what you're trying to compare. If it is DR, then 'PDR' is entirely unhelpful.
I also admire Bill Claff's dedication to measuring the entire Photon Transfer Curve (PTC). I did some measurements of a few sensors, but only for the central part of the PTC with the 1/2 slope, and even that is pretty time consuming.
I have no argument with the very useful data Bill makes available, including the PTCs. It's a very useful data source, and free as well.
And P2P also has data on DR and saturation points (unfortunately they are so wonderfully hidden that I doubt most visitors can even find them! :) ).
All the useful data plays second fiddle to PDR, which is Bill's pet project. Unfortunately it is misconceived and not very useful.
 
PhotonsToPhotos doesn't tell you the dynamic range. It tells you Bill Claff's own metric, which he calls 'Photographic Dynamic Range' but which is not 'Dynamic Range'. It's a kind of mixed metric, the purpose of which is unclear. If it is DR you want to know about, don't use those figures - maybe try DxOMark's instead.
DxOMark also uses a similar normalised metric, but much more obscure.
DxOMark is simply DR, normalised to 8MP. It's not obscure at all, though as JACS points out, the later versions of the website don't give the full information. The problem with PDR is not the normalisation (though the resolution to which PDR is normalised doesn't make a whole load of sense) but the misguided choice of the lower bound noise criterion. For DR, it should be the irreducible noise floor - then as I said you end up with a measure of information content. PDR doesn't do that, it mixes in an amount of shot noise, again based on a faulty rationale and with no good perceptual reason for the choice. The result is that you end up with something that doesn't measure anything in particular. The choices made would show up as some really strange figure, if they didn't conveniently cancel each other out. The unreasonably high noise floor would result in very low 'DR' figures, which are rescued by the very low normalisation resolution. That's a happy accident which makes the figures seem more reasonable at first sight, but in the process it mixes up and reorders the results, so if you see that camera A had PDR > Camera B, it doesn't mean that is reflected in the proper DR figure.
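The normalisation itself is simple, assuming the noise floor is white so that downsampling averages it down by the square root of the pixel ratio; a sketch (function name and example numbers are mine, not DxOMark's):

```python
import math

def normalized_dr(pixel_dr_stops, sensor_mp, reference_mp=8.0):
    """Per-pixel DR adjusted to a reference resolution, in stops.
    Downsampling N pixels to M averages white noise down by sqrt(N/M),
    which adds 0.5 * log2(N/M) stops of DR."""
    return pixel_dr_stops + 0.5 * math.log2(sensor_mp / reference_mp)

# Hypothetical 45 MP sensor with 11 stops of per-pixel DR, normalised to 8 MP:
print(round(normalized_dr(11.0, 45.0), 2))  # 12.25
```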

--
Is it always wrong
for one to have the hots for
Comrade Kim Yo Jong?
 
PhotonsToPhotos doesn't tell you the dynamic range. It tells you Bill Claff's own metric, which he calls 'Photographic Dynamic Range' but which is not 'Dynamic Range'. It's a kind of mixed metric, the purpose of which is unclear. If it is DR you want to know about, don't use those figures - maybe try DxOMark's instead.
DxOMark also uses a similar normalised metric, but much more obscure.
It is not similar, it is not obscure, and it is not "much" more obscure. On the other hand, their website is a mess, and the original definition is hard to find now.
Is it possible (or was it) to verify and reproduce the same results using the publicly available information from DxO?
 
But having less information due to a lower light density (exposure) is different from having less information due to a smaller area.
is it?

In the picture below, the second frame is a crop to 1/16 the area of the first frame. The third frame is the same size as the first frame, but received 1/16 the exposure. The first and second frames received the same exposure, and the same light per pixel. The second and third frames received the same total light.
The files are again normalised to the same resolution, and it's also not clear whether they're out-of-camera JPEGs. The camera will apply a lot of NR to high-ISO images.


 
PhotonsToPhotos doesn't tell you the dynamic range. It tells you Bill Claff's own metric, which he calls 'Photographic Dynamic Range' but which is not 'Dynamic Range'. It's a kind of mixed metric, the purpose of which is unclear. If it is DR you want to know about, don't use those figures - maybe try DxOMark's instead.
DxOMark also uses a similar normalised metric, but much more obscure.
It is not similar, it is not obscure, and it is not "much" more obscure. On the other hand, their website is a mess, and the original definition is hard to find now.
Is it possible (or was it) to verify and reproduce the same results using the publicly available information from DxO?
Yes. When a new camera is released, many people do that well before DXO publishes its scores. I have done it myself several times. A good proxy is to measure the noise in the masked areas of any shot. The DR is usually reported at pixel level, with a note that you need to add this much to normalize to 8MP or whatever DXO does.

Now, a better way of doing this is to have a dark frame and measure across the whole sensor. Also, the type of shutter, the temperature, etc., could affect the data. The read noise may have significant low-frequency deviations, which may cast doubt on the whole thing (i.e., it's not really white noise). Not to mention pattern noise.

Finally, this is done on the G channel, I presume. The noise of R and B is ignored, and then the effect of the color matrix on the result is lost as well.
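The dark-frame approach described above can be sketched as follows; the numbers (bias offset, read noise, 14-bit saturation) are synthetic, not from any particular camera:

```python
import numpy as np

def engineering_dr_stops(dark_frame, saturation_dn):
    """Per-pixel engineering DR in stops: log2(saturation / read noise).
    dark_frame: 2-D array of raw values from a dark/bias capture
    saturation_dn: clipping level in the same raw units (DN)."""
    read_noise = np.std(dark_frame.astype(np.float64))
    return np.log2(saturation_dn / read_noise)

# Synthetic dark frame: Gaussian read noise of 3 DN around a 512 DN bias offset
rng = np.random.default_rng(0)
dark = rng.normal(loc=512.0, scale=3.0, size=(1000, 1000))

print(round(engineering_dr_stops(dark, 16383), 1))  # ~12.4 stops
```

Note this sketch assumes white Gaussian noise; the pattern noise and low-frequency deviations mentioned in the post would need the full-frame analysis to show up.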
 
PhotonsToPhotos doesn't tell you the dynamic range. It tells you Bill Claff's own metric, which he calls 'Photographic Dynamic Range' but which is not 'Dynamic Range'. It's a kind of mixed metric, the purpose of which is unclear. If it is DR you want to know about, don't use those figures - maybe try DxOMark's instead.
DxOMark also uses a similar normalised metric, but much more obscure.
It is not similar, it is not obscure, and it is not "much" more obscure. On the other hand, their website is a mess, and the original definition is hard to find now.
Is it possible (or was it) to verify and reproduce the same results using the publicly available information from DxO?
Yes, it is, at least if you use the historical documentation that they published at the beginning. The same is not true of Bill's data, because he seems to be invested as the sole author, and doesn't want to publish sufficient information about his methods to allow them to be independently verified, which is kind of a prerequisite for authoritative data.

My own observation is that Bill's data seems to be more reliable than DxOMark's, who are subject to some plain old data errors every so often, but that Bill's data collection method, which doesn't use calibrated light sources, suffers from being inherently less precise and cannot measure some of the important measurands (such as saturation exposure, for instance).
 
PhotonsToPhotos doesn't tell you the dynamic range. It tells you Bill Claff's own metric, which he calls 'Photographic Dynamic Range' but which is not 'Dynamic Range'. It's a kind of mixed metric, the purpose of which is unclear. If it is DR you want to know about, don't use those figures - maybe try DxOMark's instead.
DxOMark also uses a similar normalised metric, but much more obscure.
It is not similar, it is not obscure, and it is not "much" more obscure. On the other hand, their website is a mess, and the original definition is hard to find now.
Is it possible (or was it) to verify and reproduce the same results using the publicly available information from DxO?
It's not hard to reproduce DxOMark screen or print dynamic range.
The relevant pages do seem to have gone missing but the concept is quite simple.

(Nor is it hard to reproduce PhotonsToPhotos PDR. For example Jim Kasson has done it.)
 
PhotonsToPhotos doesn't tell you the dynamic range. It tells you Bill Claff's own metric, which he calls 'Photographic Dynamic Range' but which is not 'Dynamic Range'. It's a kind of mixed metric, the purpose of which is unclear. If it is DR you want to know about, don't use those figures - maybe try DxOMark's instead.
DxOMark also uses a similar normalised metric, but much more obscure.
It is not similar, it is not obscure, and it is not "much" more obscure. On the other hand, their website is a mess, and the original definition is hard to find now.
Is it possible (or was it) to verify and reproduce the same results using the publicly available information from DxO?
Yes. When a new camera is released, many people do that well before DXO publishes its scores. I have done it myself several times.
You've used the 'transmission target' and DxO analyser software? Is the source code open?

A good proxy is to measure the noise in the masked areas of any shot. The DR is usually reported at pixel level, with a note that you need to add this much to normalize to 8MP or whatever DXO does.

Now, a better way of doing this is to have a dark frame and measure across the whole sensor. Also, the type of shutter, the temperature, etc., could affect the data. The read noise may have significant low-frequency deviations, which may cast doubt on the whole thing (i.e., it's not really white noise). Not to mention pattern noise.

Finally, this is done on the G channel, I presume. The noise of R and B is ignored, and then the effect of the color matrix on the result is lost as well.
In the link above, they say they do measure the colour channels separately.

I might guess they changed the approach at some point.
 
