Dynamic range difference of sensor operated in FF vs Crop

Pentaborane

Hello everyone,
Recently I have been looking through tests and various graphs of sensor measurements, and these usually come to the conclusion that a FF sensor has higher dynamic range than its APS-C counterpart, or than itself when operated in crop mode.
I understand that the crop factor applied to the aperture affects DOF but not the amount of light hitting each mm^2 of the sensor, which leaves me confused: why would two sensors made with the same technology, like those in the Sony A7R5 and Sony a6700, with basically the same pixel pitch and back-end electronics, produce different DR values?
Especially since, according to Photons to Photos, the dynamic range of the A7R5 drops when switching to APS-C crop, even though the same pixels are illuminated with the same amount of light.
This part makes no logical sense to me. What part of how sensors work am I missing that causes that difference?
 
Larger area results in more dynamic range. If you want to compare individual pixels or mm^2 you sure can, but what's the point if you want to compare sensor sizes i.e. different areas? That's like comparing two different size lakes and concluding they have the same amount of water since a bucket of water from each is the same size.
 
Hello everyone,
Recently I have been looking through tests and various graphs of sensor measurements, and these usually come to the conclusion that a FF sensor has higher dynamic range than its APS-C counterpart, or than itself when operated in crop mode.
I understand that the crop factor applied to the aperture affects DOF but not the amount of light hitting each mm^2 of the sensor, which leaves me confused: why would two sensors made with the same technology, like those in the Sony A7R5 and Sony a6700, with basically the same pixel pitch and back-end electronics, produce different DR values?
Especially since, according to Photons to Photos, the dynamic range of the A7R5 drops when switching to APS-C crop, even though the same pixels are illuminated with the same amount of light.
This part makes no logical sense to me. What part of how sensors work am I missing that causes that difference?
The assumption behind Bill’s PDR metric is that both images will be viewed at the same size and viewing distance. Thus the smaller sensor requires more enlargement in your example.

As a thought experiment, consider making a 16x20 print in the darkroom from a 35mm negative and an 8x10 negative, using the same film stock in both cameras. Wouldn't you expect the 35mm print to have more visible grain noise?

--
https://blog.kasson.com
 
Larger area results in more dynamic range.
But what mechanism causes that, if a sensor is just an array of pixels that, regardless of total pixel count, should have the same ability to convert light into a range of values above the noise floor before the pixels saturate?
The assumption behind Bill’s PDR metric is that both images will be viewed at the same size and viewing distance.
The grain caused by thermal noise should be more visible when zoomed in, but in the case of cropping the A7RV sensor those noise patterns should have the same size in terms of pixel count, not as a fraction of the total image area.
Wouldn't you expect the 35mm print to have more visible grain noise?
I just don't understand how the range of values resolved by the sensor's pixels could change when a smaller part of the sensor is sampled, as the drop in dynamic range would indicate.
It's the one thing that makes no sense to me.
 
It’s best not to consider pixels at this level of analysis, and it’s unnecessary anyway. A purely geometric approach is both easier to understand and tells you most of what you need to know.

Light falling on a small surface versus the same intensity of light falling on a surface with 2½ times the area means that 2½ times as much total light is being measured.
--
http://therefractedlight.blogspot.com
 
Hello everyone,
Recently I have been looking through tests and various graphs of sensor measurements, and these usually come to the conclusion that a FF sensor has higher dynamic range than its APS-C counterpart, or than itself when operated in crop mode.
I understand that the crop factor applied to the aperture affects DOF but not the amount of light hitting each mm^2 of the sensor, which leaves me confused: why would two sensors made with the same technology, like those in the Sony A7R5 and Sony a6700, with basically the same pixel pitch and back-end electronics, produce different DR values?
Especially since, according to Photons to Photos, the dynamic range of the A7R5 drops when switching to APS-C crop, even though the same pixels are illuminated with the same amount of light.
This part makes no logical sense to me. What part of how sensors work am I missing that causes that difference?
If you look at the pixel level, your thought process is correct. However, as Jim says, the Photons to Photos data shows results that are normalised for a common print size and viewing distance. Bill gives an explanation here: https://www.photonstophotos.net/Gen...r/DX_Crop_Mode_Photographic_Dynamic_Range.htm

If the FF image resolution is, say, 6000x4000, the APS-C resolution is roughly 4000x2666. This means that to compare the two at the same print size, the FF image needs to be downsized more than the APS-C image. The extra downsizing causes more noise reduction and hence better effective S/N and DR.
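To make that concrete, here is a minimal sketch (assuming pure photon-noise-limited pixels, a uniform patch, and simple block averaging; the 6000x4000 and 4000x2666 resolutions are the ones from the paragraph above). It shows that the image that needs more downsizing to reach a common output size ends up with the better measured SNR:

```python
import numpy as np

rng = np.random.default_rng(0)

def snr_after_downsizing(src_width, out_width, mean_signal_e=100):
    """Simulate a uniform patch with photon (Poisson) noise, block-average it
    down to a common output size, and report the resulting SNR.
    Assumes pure shot noise and an integer downsizing factor, for simplicity."""
    factor = src_width // out_width                       # linear downsizing factor
    patch = rng.poisson(mean_signal_e, size=(factor * 64, factor * 64)).astype(float)
    small = patch.reshape(64, factor, 64, factor).mean(axis=(1, 3))
    return small.mean() / small.std()

# FF 6000x4000 and APS-C 4000x2666, both viewed at a 2000-pixel-wide output
print("FF    SNR:", snr_after_downsizing(6000, 2000))   # averages 3x3 input pixels
print("APS-C SNR:", snr_after_downsizing(4000, 2000))   # averages 2x2 input pixels
```

The SNR ratio comes out around 1.5x, i.e. roughly half a stop, which is the kind of difference the print-size normalisation is meant to capture.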

Dave
 
Image quality including dynamic range, noise, color fidelity, etc. is more a function of the total light energy delivered to the sensor than the exposure. If two sensors built around the same tech but of different formats receive the same exposure, the larger sensor with the greater surface area receives more total light energy; more photons. As a result, the image produced will have measurably greater DR, less visible noise, and greater overall quality than the photo made with the smaller format camera working with the same exposure.

Whether or not that difference in DR, noise, etc. is of significance is a subjective question for the photographer to decide.
 
Image quality including dynamic range, noise, color fidelity, etc. is more a function of the total light energy delivered to the sensor than the exposure.
Yes.

Exposure is the total amount of light energy per unit area that reaches a photosensitive surface, such as film or a digital sensor. It is a physical quantity measured in lux-seconds (lx·s), where:
  • Lux is the unit of illuminance (lumens per square meter),
  • Seconds is the duration of the exposure.
Mathematically:

Exposure = E*t

Where:
  • E is the illuminance (in lux),
  • t is the exposure time (in seconds).
So exposure is normalized to area. But as you point out, what's important is exposure times area, i.e. the total light.
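As a quick worked example of that distinction (the exposure value and the nominal sensor dimensions below are assumed figures):

```python
# Same exposure (lux-seconds) on two different sensor areas:
# total light is exposure times area, so the larger sensor collects more.
exposure_lux_s = 0.1            # identical exposure for both formats (assumed)
area_ff_mm2    = 36.0 * 24.0    # ~864 mm^2, full frame
area_apsc_mm2  = 23.5 * 15.6    # ~367 mm^2, Sony APS-C (nominal dimensions)

total_ff   = exposure_lux_s * area_ff_mm2      # proportional to total photons
total_apsc = exposure_lux_s * area_apsc_mm2

print(f"Total light ratio, FF vs APS-C: {total_ff / total_apsc:.2f}")   # ~2.36
```

Same exposure, roughly 2.4 times the total light for the larger format.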
If two sensors built around the same tech but of different formats receive the same exposure, the larger sensor with the greater surface area receives more total light energy; more photons. As a result, the image produced will have measurably greater DR, less visible noise, and greater overall quality than the photo made with the smaller format camera working with the same exposure.

Whether or not that difference in DR, noise, etc. is of significance is a subjective question for the photographer to decide.
 
Larger area results in more dynamic range.
But what mechanism causes that, if a sensor is just an array of pixels that, regardless of total pixel count, should have the same ability to convert light into a range of values above the noise floor before the pixels saturate?
The assumption behind Bill’s PDR metric is that both images will be viewed at the same size and viewing distance.
The grain caused by thermal noise
Why do you say that? Most of the significant noise is photon noise.
should be more visible when zoomed in
That's the point.
but in the case of cropping the A7RV sensor those noise patterns should have the same size in terms of pixel count, not as a fraction of the total image area.
The assumption is that the viewer is looking at same sized images.
Wouldn't you expect the 35mm print to have more visible grain noise?
I just don't understand how the range of values resolved by the sensor's pixels could change when a smaller part of the sensor is sampled, as the drop in dynamic range would indicate.
It's the one thing that makes no sense to me.
Let's go back to basics. Dynamic Range is full scale divided by mean signal that produces a given signal to noise ratio. When viewed by humans, the scale of the image affects the visibility of the noise through the contrast sensitivity function. So very high frequency noise is less visible. That means that the threshold used for computing DR needs to take that into account. Hence greater magnification reduces DR.
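A small sketch of that definition under a simple shot-noise-plus-read-noise model (the full-well and read-noise figures are made-up, illustrative values): DR is full scale divided by the signal whose SNR equals the chosen threshold, so raising the threshold (for example to account for greater magnification) lowers the reported DR.

```python
import math

def dr_stops(full_well_e, read_noise_e, snr_threshold):
    """Dynamic range in stops: full scale over the mean signal whose SNR
    equals the threshold.  Noise model: photon shot noise plus Gaussian
    read noise (illustrative only)."""
    t, r = snr_threshold, read_noise_e
    # Solve S / sqrt(S + r^2) = t for the mean signal S (in electrons)
    s_floor = (t * t + math.sqrt(t ** 4 + 4 * t * t * r * r)) / 2
    return math.log2(full_well_e / s_floor)

# Made-up sensor: 50,000 e- full well, 2 e- read noise
for thr in (1, 4, 8, 16):
    print(f"SNR threshold {thr:2d}: DR = {dr_stops(50_000, 2.0, thr):.2f} stops")
```

The same hardware reports anywhere from about 14 stops (SNR = 1 floor) down to about 7.6 stops (SNR = 16 floor), purely because of where the floor is placed.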
 
... Dynamic Range is full scale divided by mean signal that produces a given signal to noise ratio. When viewed by humans, the scale of the image affects the visibility of the noise through the contrast sensitivity function. So very high frequency noise is less visible. That means that the threshold used for computing DR needs to take that into account. Hence greater magnification reduces DR.
Not fussing about the above - just separating out an enumeration of the ratioed variables:

Dynamic Range is the ratio of the largest measurable signal to the smallest measurable signal.

The smallest measurable signal is typically defined as that equal to the noise level, or alternatively the “Noise Equivalent Exposure” or that point where the Signal-to-noise ratio (SNR) is 1.


Source: https://www.ophiropt.com/en/n/understanding-dynamic-range
 
... Dynamic Range is full scale divided by mean signal that produces a given signal to noise ratio. When viewed by humans, the scale of the image affects the visibility of the noise through the contrast sensitivity function. So very high frequency noise is less visible. That means that the threshold used for computing DR needs to take that into account. Hence greater magnification reduces DR.
Not fussing about the above - just separating out an enumeration of the ratioed variables:

Dynamic Range is the ratio of the largest measurable signal to the smallest measurable signal.

The smallest measurable signal is typically defined as that equal to the noise level, or alternatively the “Noise Equivalent Exposure” or that point where the Signal-to-noise ratio (SNR) is 1.


Source: https://www.ophiropt.com/en/n/understanding-dynamic-range
There are many ways to define dynamic range. The differences boil down to the definition of the floor signal to noise ratio. I wrote the above to encompass all the ways of measuring dynamic range. SNR = 1 for the lowest threshold without scaling is not commonly used for photography, although it constitutes the floor for one of the ways of measuring engineering dynamic range. Bill's photographic dynamic range uses a higher SNR that varies with sensor resolution, with the goal of calculating the SNR at a constant print size.
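To illustrate the difference between those floors (not Bill's exact formula, just the general idea of normalizing to a fixed output resolution; the print-level SNR of 20 and the 8 MP reference below are assumptions for illustration):

```python
import math

FULL_WELL_E  = 50_000   # made-up full-well capacity, electrons
READ_NOISE_E = 2.0      # made-up read noise, electrons

# Engineering DR: floor at the read noise, i.e. a per-pixel SNR of about 1
eng_dr = math.log2(FULL_WELL_E / READ_NOISE_E)
print(f"Engineering DR per pixel: {eng_dr:.1f} stops")

# Print-normalized floor: downsampling N pixels to a reference count N_ref
# averages N/N_ref pixels per output pixel and improves SNR by sqrt(N/N_ref),
# so the per-pixel SNR threshold that corresponds to a fixed print-level SNR
# scales as print_snr / sqrt(N/N_ref).
def pixel_snr_threshold(n_pixels, print_snr=20.0, n_ref=8e6):
    return print_snr / math.sqrt(n_pixels / n_ref)

for mp in (24e6, 61e6):
    print(f"{mp/1e6:.0f} MP sensor: per-pixel SNR floor ~ "
          f"{pixel_snr_threshold(mp):.1f} for the same print-level SNR")
```

The point is only that the floor, and therefore the reported DR, depends on the chosen threshold and on how the result is normalized to an output size.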
 
... Dynamic Range is full scale divided by mean signal that produces a given signal to noise ratio. When viewed by humans, the scale of the image affects the visibility of the noise through the contrast sensitivity function. So very high frequency noise is less visible. That means that the threshold used for computing DR needs to take that into account. Hence greater magnification reduces DR.
Not fussing about the above - just separating out an enumeration of the ratioed variables:

Dynamic Range is the ratio of the largest measurable signal to the smallest measurable signal.

The smallest measurable signal is typically defined as that equal to the noise level, or alternatively the “Noise Equivalent Exposure” or that point where the Signal-to-noise ratio (SNR) is 1.


Source: https://www.ophiropt.com/en/n/understanding-dynamic-range
There are many ways to define dynamic range. The differences boil down to the definition of the floor signal to noise ratio. I wrote the above to encompass all the ways of measuring dynamic range. SNR = 1 for the lowest threshold without scaling is not commonly used for photography, although it constitutes the floor for one of the ways of measuring engineering dynamic range. Bill's photographic dynamic range uses a higher SNR that varies with sensor resolution, with the goal of calculating the SNR at a constant print size.
Camera dynamic range measurements: Measurement of camera dynamic range is closely associated with measurement of camera noise. ISO 15739:2023 defines Digital Still Camera (DSC) dynamic range to be the ratio of the input signal saturation level to the minimum input signal level that can be captured with a signal-to-temporal noise ratio of at least 1. ISO 15739:2023 specifies how to measure and report DSC dynamic range.

The use of SNR = 1 as the denominator (or, alternatively, larger values) appears to be ubiquitous in what might be termed "engineering" DR mathematics (e.g. DxOMark's metrics), whereas, last time I checked, Bill Claff's particular chosen denominator (as others have pointed out, and despite his protests, which I and others find hard to buy) virtually ensures a state of input-referred photon-transduction noise dominance over system readout noise in nearly any modern camera system, such as the camera systems that he has characterized.
 
Hello everyone,
Recently I have been looking through tests and various graphs of sensor measurements, and these usually come to the conclusion that a FF sensor has higher dynamic range than its APS-C counterpart, or than itself when operated in crop mode.
I understand that the crop factor applied to the aperture affects DOF but not the amount of light hitting each mm^2 of the sensor, which leaves me confused: why would two sensors made with the same technology, like those in the Sony A7R5 and Sony a6700, with basically the same pixel pitch and back-end electronics, produce different DR values?
Especially since, according to Photons to Photos, the dynamic range of the A7R5 drops when switching to APS-C crop, even though the same pixels are illuminated with the same amount of light.
This part makes no logical sense to me. What part of how sensors work am I missing that causes that difference?
In general, for the same sensor architecture:

Can a larger-area sensor store more energy than a smaller one?

Can a sensor with more storage sites (pixels) store more energy?

Not modifying the data to meet some downstream requirement, such as the methods Bill uses, should be the first step.

 
Hello everyone,
Recently I have been looking through tests and various graphs of sensor measurements, and these usually come to the conclusion that a FF sensor has higher dynamic range than its APS-C counterpart, or than itself when operated in crop mode.
I understand that the crop factor applied to the aperture affects DOF but not the amount of light hitting each mm^2 of the sensor, which leaves me confused: why would two sensors made with the same technology, like those in the Sony A7R5 and Sony a6700, with basically the same pixel pitch and back-end electronics, produce different DR values?
Especially since, according to Photons to Photos, the dynamic range of the A7R5 drops when switching to APS-C crop, even though the same pixels are illuminated with the same amount of light.
This part makes no logical sense to me. What part of how sensors work am I missing that causes that difference?
In general, for the same sensor architecture:

Can a larger-area sensor store more energy than a smaller one?

Can a sensor with more storage sites (pixels) store more energy?
These days, FWCs seem to hover around 3000 e-/um^2 regardless of the pixel pitch.

--
https://blog.kasson.com
 
Hello everyone,
Recently I have been looking through tests and various graphs of sensor measurements, and these usually come to the conclusion that a FF sensor has higher dynamic range than its APS-C counterpart, or than itself when operated in crop mode.
I understand that the crop factor applied to the aperture affects DOF but not the amount of light hitting each mm^2 of the sensor, which leaves me confused: why would two sensors made with the same technology, like those in the Sony A7R5 and Sony a6700, with basically the same pixel pitch and back-end electronics, produce different DR values?
Especially since, according to Photons to Photos, the dynamic range of the A7R5 drops when switching to APS-C crop, even though the same pixels are illuminated with the same amount of light.
This part makes no logical sense to me. What part of how sensors work am I missing that causes that difference?
In general, for the same sensor architecture:

Can a larger-area sensor store more energy than a smaller one?

Can a sensor with more storage sites (pixels) store more energy?
These days, FWCs seem to hover around 3000 e-/um^2 regardless of the pixel pitch.
Understood and expected. So total storage is, let's say, usually proportional to sensor area.

Thank you.
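Putting rough numbers on that, using the ~3000 e-/um^2 figure quoted above and nominal sensor dimensions (both assumptions for illustration):

```python
# If full-well density is roughly constant, total charge storage scales with area.
FWC_DENSITY_E_PER_UM2 = 3000          # e- per um^2, figure quoted above

areas_um2 = {
    "FF (36 x 24 mm)":        36_000 * 24_000,
    "APS-C (23.5 x 15.6 mm)": 23_500 * 15_600,
}

for name, area_um2 in areas_um2.items():
    total_e = FWC_DENSITY_E_PER_UM2 * area_um2
    print(f"{name}: ~{total_e:.2e} electrons of total storage")
```

The ratio between the two is just the area ratio, about 2.4x, independent of how the area is divided into pixels.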
 
It’s best not to consider pixels at this level of analysis, and it’s unnecessary anyway. A purely geometric approach is both easier to understand and tells you most of what you need to know.

Light falling on a small surface versus the same intensity of light falling on a surface with 2½ times the area means that 2½ times as much total light is being measured.
--
http://therefractedlight.blogspot.com
The problem I see with your analogy and some others is that we are not measuring/comparing total capacitance, are we? 🤔
 
Hello everyone,
Recently I have been looking through tests and various graphs of sensor measurements, and these usually come to the conclusion that a FF sensor has higher dynamic range than its APS-C counterpart, or than itself when operated in crop mode.
I understand that the crop factor applied to the aperture affects DOF but not the amount of light hitting each mm^2 of the sensor, which leaves me confused: why would two sensors made with the same technology, like those in the Sony A7R5 and Sony a6700, with basically the same pixel pitch and back-end electronics, produce different DR values?
Especially since, according to Photons to Photos, the dynamic range of the A7R5 drops when switching to APS-C crop, even though the same pixels are illuminated with the same amount of light.
This part makes no logical sense to me. What part of how sensors work am I missing that causes that difference?
Perhaps you're looking at trees rather than the forest, and trying to characterize the forest by single trees.

Try this: take a photo of something flat, like a painting, photo, or poster on a wall (with a range of tones), that fills a FF sensor up close, with -5 EC in raw mode, in a place where you can step back so the object is tiny in the frame, and take another photo with the same focal length and exposure/ISO settings. Then correct the -5 EC in the converter, take the converted images, look at a 100% crop from the first image in a darker area, and compare the same area in the second photo to it at 500% pixel zoom, or whatever percentage it takes to display the items in the two images at the same scale.

Then, despite the pixels having the same DR and the same low average exposure, you will see more noise damage to the image detail, with large, coarse noise "grain", in the heavily cropped image. It is this practical need for greater magnification of a smaller sensor area to reach a given viewing size that makes "image-level" noise stats interesting, and is why many noise stats are given as image-level metrics.

Most people misconceive what even "pixel-level" DR means, let alone image-level DR, which is a function of the pixel-level DR combined with the square root of the number of pixels. Typically, something like 8MP is considered the standard, where an 8MP sensor's image DR is the same as its pixel-level DR; sensors with fewer than 8MP have a lower image DR than pixel DR, and sensors with more than 8MP have a higher image DR than pixel DR.

The pixel-level "noise floor", or lower delimiter of DR, is not some opaque threshold event where every signal below it is lost, as if it were clipped away by the "noise floor". That is a fictitious model; you can record scenes where 100% white is stops below the noise floor. What you won't have is usable resolution, but you can capture larger shapes, and doing so will reveal the reason that the results are not as good as they could be: the black level is not flat across the sensor, and the dynamics of low-frequency and banded readout noise are not yet good enough for usable imaging below the noise floor.

If you had a computer generate the readout noise in a simulation, it would not generate low-frequency noise beyond what pure chance requires, and then any arbitrarily low exposure, like ISOs in the millions or tens of millions, could produce a fairly-clean-but-tiny image. "Tiny" doesn't work well, though, with the relatively strong low-frequency noise of real digital cameras.
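A sketch of that pixel-to-image conversion, taking the square-root-of-pixel-count combination and the 8 MP reference at face value (the 12-stop pixel DR is a made-up, illustrative number):

```python
import math

def image_dr_stops(pixel_dr_stops, n_pixels, n_ref=8e6):
    """Image-level DR from pixel-level DR, assuming noise averages down with
    the square root of the number of pixels when normalized to an 8 MP view."""
    return pixel_dr_stops + 0.5 * math.log2(n_pixels / n_ref)

pixel_dr = 12.0   # made-up pixel-level DR, in stops
for mp in (2e6, 8e6, 24e6, 61e6):
    print(f"{mp/1e6:>4.0f} MP: image-level DR ~ {image_dr_stops(pixel_dr, mp):.2f} stops")
```

Below 8 MP the image DR comes out lower than the pixel DR, above 8 MP it comes out higher, exactly as described.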
 
Hello everyone,
Recently I have been looking through tests and various graphs of sensor measurements, and these usually come to the conclusion that a FF sensor has higher dynamic range than its APS-C counterpart, or than itself when operated in crop mode.
I understand that the crop factor applied to the aperture affects DOF but not the amount of light hitting each mm^2 of the sensor, which leaves me confused: why would two sensors made with the same technology, like those in the Sony A7R5 and Sony a6700, with basically the same pixel pitch and back-end electronics, produce different DR values?
Especially since, according to Photons to Photos, the dynamic range of the A7R5 drops when switching to APS-C crop, even though the same pixels are illuminated with the same amount of light.
This part makes no logical sense to me. What part of how sensors work am I missing that causes that difference?
Perhaps you're looking at trees rather than the forest, and trying to characterize the forest by single trees.

Try this: take a photo of something flat, like a painting, photo, or poster on a wall (with a range of tones), that fills a FF sensor up close, with -5 EC in raw mode, in a place where you can step back so the object is tiny in the frame, and take another photo with the same focal length and exposure/ISO settings. Then correct the -5 EC in the converter, take the converted images, look at a 100% crop from the first image in a darker area, and compare the same area in the second photo to it at 500% pixel zoom, or whatever percentage it takes to display the items in the two images at the same scale.

Then, despite the pixels having the same DR and the same low average exposure, you will see more noise damage to the image detail, with large, coarse noise "grain", in the heavily cropped image. It is this practical need for greater magnification of a smaller sensor area to reach a given viewing size that makes "image-level" noise stats interesting, and is why many noise stats are given as image-level metrics.

Most people misconceive what even "pixel-level" DR means, let alone image-level DR, which is a function of the pixel-level DR combined with the square root of the number of pixels. Typically, something like 8MP is considered the standard, where an 8MP sensor's image DR is the same as its pixel-level DR; sensors with fewer than 8MP have a lower image DR than pixel DR, and sensors with more than 8MP have a higher image DR than pixel DR.

The pixel-level "noise floor", or lower delimiter of DR, is not some opaque threshold event where every signal below it is lost, as if it were clipped away by the "noise floor". That is a fictitious model; you can record scenes where 100% white is stops below the noise floor. What you won't have is usable resolution, but you can capture larger shapes, and doing so will reveal the reason that the results are not as good as they could be: the black level is not flat across the sensor, and the dynamics of low-frequency and banded readout noise are not yet good enough for usable imaging below the noise floor.

If you had a computer generate the readout noise in a simulation, it would not generate low-frequency noise beyond what pure chance requires, and then any arbitrarily low exposure, like ISOs in the millions or tens of millions, could produce a fairly-clean-but-tiny image. "Tiny" doesn't work well, though, with the relatively strong low-frequency noise of real digital cameras.
All sounds pretty sensible, and these are easy-to-understand concepts.

There is a UK space project on the go in which the camera is going to use lots of small sensors (phone-industry size) to create something bigger, a little like what is done with arrays of telescopes, which supports your description.
--
Beware of correct answers to wrong questions.
John
http://www.pbase.com/image/55384958.jpg
 
The problem I see with your analogy and some others is that we are not measuring/comparing total capacitance, are we? 🤔
Coulombs per volt? No, the significant measurement is total photons captured.
The photodiodes are not connected in series or parallel; they are collecting/storing photons individually, are they not?
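A minimal sketch of how independently collecting wells still combine statistically when the image is viewed or resampled at a common size (the Poisson model, the well counts, and the signal level below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
MEAN_E = 100   # made-up mean signal per photodiode, in electrons

# Two patches of a scene rendered at the same output size: one backed by
# 4 independent photodiodes per output pixel, the other by 9 (larger format).
for n_wells, label in [(4, "smaller format"), (9, "larger format")]:
    wells = rng.poisson(MEAN_E, size=(100_000, n_wells))
    combined = wells.sum(axis=1)        # photons simply add across the wells
    snr = combined.mean() / combined.std()
    print(f"{label}: {n_wells} wells per output pixel -> SNR ~ {snr:.1f}")
```

No electrical connection between the wells is needed; the combination happens in the statistics of the viewed image, which is where the larger format's advantage comes from.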
 
