Perception, reality and a signal below the noise...

There are a lot of people who still (after all this time) cling to the film legacy. Back then, photography jargon was full of meaninglessly subjective nonsense, because no one really understood colour film chemistry.

I don't engage in other hobbies to the same extent that I do with photography, but for non-pros, cameras seem a lot like hi-fi and cars. It's more about bar-room bragging rights than useful differences. Real world performance is subject to uncontrolled variables that are often more significant, such as the driver, the acoustics of a room, or where you got your film processing and printing done.

The medium format look, to me anyway, had far more to do with resolution, or grain/unit area, than anything else. After all, the emulsion was the same, so the only difference between Provia 100F in 35mm, 67 and 4X5 formats was the size of the negative.
I think the crucial factor is the degree of enlargement, rather than film or sensor size as such.
If you mean the ratio between the size of the image and the size of the negative, then yes. But we also have to take human acuity and contrast sensitivity into account. At high angular resolution (ie in small prints) pretty much anything looks good.

But if you make three decent sized prints - say 30X40", from 35mm, 67 and 4X5 all using the same film type, and view them at a reasonably close distance, like 20", the 35mm will look awful, the 67 will look OK, and the 4X5 will look great.

The fact that such prints are easily possible with a good APS-C sensor at low ISO just shows how far we have come.
In film there is an old rule of thumb that says something like: you get a good print if you do not enlarge more than 10-12 times the size of the negative.
About right, except that fine-grained ISO 100 negatives could be enlarged a lot more than ISO 400 negatives.
With digital, the dominant rule of thumb is something like: you get a good quality print if you have 254 or 300 pixels per inch to work with.

(both R-o-T, so don't make too much of the exact numbers)
Again, noise/grain is a factor. Smaller sensors have more noise...
Q: to what extent is 'enlargement' relative to the sensor size still an important factor?
...so you could regard each doubling in sensor area as equivalent to halving the ISO of the film in terms of equivalent grain.

But the threshold of grain tolerance is not the same as the threshold of detail detection, so it can all get rather complicated...
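For what it's worth, both rules of thumb are easy to put into a quick script (a sketch only; the format dimensions and the 24 MP example below are my own illustrative numbers):

```python
# Sketch of the two rules of thumb discussed above.
# Film: a print stays "good" up to roughly 10-12x linear enlargement.
# Digital: a print stays "good" down to roughly 254-300 PPI.

FILM_FORMATS_MM = {"35mm": (24, 36), "6x7": (56, 70), "4x5": (95, 120)}

def max_film_print_in(fmt, factor=12):
    """Largest 'good' print (inches) at a given linear enlargement factor."""
    h, w = FILM_FORMATS_MM[fmt]
    return (h * factor / 25.4, w * factor / 25.4)

def max_digital_print_in(px_h, px_w, ppi=300):
    """Largest 'good' print (inches) at a given output resolution."""
    return (px_h / ppi, px_w / ppi)

for fmt in FILM_FORMATS_MM:
    h, w = max_film_print_in(fmt)
    print(f"{fmt}: about {h:.0f} x {w:.0f} in at 12x enlargement")

# A 24 MP APS-C file (4000 x 6000 px) at 300 PPI:
h, w = max_digital_print_in(4000, 6000)
print(f"24 MP digital: about {h:.1f} x {w:.1f} in at 300 PPI")
```

Which lines up with the thread: a 12x enlargement of 4x5 comfortably exceeds a 30X40" print, while 35mm runs out well before.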
 
Hi,

Having shot film for some 30 years and having scanned thousands of film images, I would say that film was a different animal from digital.

Best regards

Erik
 
I agree, but I was careful to qualify what I said.

In noise/grain terms, doubling ISO is roughly equivalent to halving the sensor area, or halving the negative size on a film. Of course, different film emulsions were hard to compare, so ISO ratings for different film were even less reliable than on digital cameras.

I didn't say the film and digital images will look the same. I can print A2 images from my APS-C camera that look better than A4 prints from 35mm, but the main difference is grain, not resolution.
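That doubling/halving rule of thumb is simple arithmetic (a sketch; the sensor dimensions are the usual nominal figures and purely illustrative):

```python
def equivalent_iso(iso, area_ratio):
    """ISO on the smaller format that gives roughly the same noise/grain
    character as `iso` on the larger one, per the rule of thumb that
    doubling the area is equivalent to halving the ISO."""
    return iso / area_ratio

# Full frame 36x24 mm vs APS-C 23.5x15.6 mm: area ratio ~2.36,
# i.e. a bit more than one stop.
ratio = (36 * 24) / (23.5 * 15.6)
print(f"area ratio: {ratio:.2f}")
print(f"ISO 400 on full frame ~ ISO {equivalent_iso(400, ratio):.0f} on APS-C")
```

So in grain terms, ISO 400 on the larger format behaves roughly like ISO 170 on the smaller one.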
 
Hi,

My intention was not to be negative. It is just that I feel that film grain and noise on digital cameras are different.

Just to say, I have no issue with larger formats having an advantage.

In a way, that advantage is demonstrable. You can take two of DPReview's studio scene shots and measure SNR, DR, MTF or whatever is of interest.

Also, it seems that at least Fujifilm makes truly excellent lenses for the GFX cameras, the Hasselblad X-lenses are probably also very good. But, all MFD lenses are probably not created equal.

What confuses me is rather the claims of better DR in the highlights, which is not a consequence of sensor size but possibly of exposure strategy; the frequent claims of better color; and the old 16-bit claim from Phase One and Hasselblad.

Almost all of that is pretty much counterfactual.

Another small example: Hasselblad/Phase One proponents would always claim that the leaf shutter yields a huge advantage in certain situations, where short shutter speeds are needed in combination with high flash output.

Fujifilm proponents may claim that HSS solves all those problems. But what HSS does is prolong the flash pulse to say 10 ms. If you shoot 1/1000 s, that means that approximately 90% of the light is blocked by the shutter blades.

Most folks don't need high power flash at sync speeds. But, that doesn't mean that leaf shutters are not beneficial for shooters having specific needs.
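Erik's 90% figure checks out with a quick idealised calculation (assuming a roughly constant-output 10 ms HSS burn and a 1 ms effective exposure per point on the sensor):

```python
def hss_light_fraction(shutter_s, burn_s=0.010):
    """Rough fraction of an (assumed constant-output) HSS burn that each
    point on the sensor actually sees, when the effective exposure time
    is shutter_s out of a burn_s-long pulse. Idealised model."""
    return min(shutter_s / burn_s, 1.0)

used = hss_light_fraction(1 / 1000)  # 1/1000 s against a 10 ms burn
print(f"fraction used: {used:.0%}, blocked by the shutter: {1 - used:.0%}")
```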

Many times it is also claimed that MFD images stand out on screen. That may be due to different magical factors, like the 3D-pop, better color, better DR etc.

But, on screen viewing is pretty limited. To me it seems to have been demonstrated that images processed identically are normally very similar. Color rendition may differ of course, but that would mostly depend on color profiles.

Best regards

Erik
 
Another small example: Hasselblad/Phase One proponents would always claim that the leaf shutter yields a huge advantage in certain situations, where short shutter speeds are needed in combination with high flash output.
A leaf shutter certainly makes life easier if you shoot regularly with flash.

One could use a Sigma dp3Q as a much cheaper alternative to a Hasselblad.
Fujifilm proponents may claim that HSS solves all those problems. But what HSS does is prolong the flash pulse to say 10 ms. If you shoot 1/1000 s, that means that approximately 90% of the light is blocked by the shutter blades.
Most folks don't need high power flash at sync speeds. But, that doesn't mean that leaf shutters are not beneficial for shooters having specific needs.
Studio based product and portrait photographers. People who photograph insects or various forest creatures.
 
Yes,

I have seen that when shooting macro. I often want the light from flash to dominate, so I get softer light; that means short shutter times. But that often means a small aperture, too. Extension also reduces the effective aperture.

So, I tend to run out of HSS power with my Godox Witstro AD200. Godox has more powerful gear of course, but the AD200 is nice to carry.

With the Hasselblad 555/ELD I can shoot 1/500s with no problems.

Best regards

Erik
 
Hi,

My intention was not to be negative. It is just that I feel that film grain and noise on digital cameras are different.
Sorry, I came across as a bit tetchy. I agree they look different, sufficiently so that 'grain' filters don't add pixel noise but carefully crafted 'filmic grain'. But AFAIK the main difference is its variable size (larger than sensor noise, and geometrically random).

However, when I scanned negatives, the scanner noise also added to the problem, and of course there is printer dithering on top of that, so in the end the noise is not easy to analyse.
Just to say, I have no issue with larger formats having an advantage.

In a way, that advantage is demonstrable. You can take two of DPReview's studio scene shots and measure SNR, DR, MTF or whatever is of interest.
In other words, it's just physics. It's mostly just resolution vs. noise.
Also, it seems that at least Fujifilm makes truly excellent lenses for the GFX cameras, the Hasselblad X-lenses are probably also very good. But, all MFD lenses are probably not created equal.
What confuses me is rather the claims of better DR in the highlights, which is not a consequence of sensor size but possibly of exposure strategy; the frequent claims of better color; and the old 16-bit claim from Phase One and Hasselblad.
Almost all of that is pretty much counterfactual.

Another small example: Hasselblad/Phase One proponents would always claim that the leaf shutter yields a huge advantage in certain situations, where short shutter speeds are needed in combination with high flash output.

Fujifilm proponents may claim that HSS solves all those problems. But what HSS does is prolong the flash pulse to say 10 ms. If you shoot 1/1000 s, that means that approximately 90% of the light is blocked by the shutter blades.
Most folks don't need high power flash at sync speeds. But, that doesn't mean that leaf shutters are not beneficial for shooters having specific needs.
True - but leaf shutters limit max shutter speed and can cause vignetting at their maximum. Better for flash photography, arguably less good for landscapes...
Many times it is also claimed that MFD images stand out on screen. That may be due to different magical factors, like the 3D-pop, better color, better DR etc.

But, on screen viewing is pretty limited. To me it seems to have been demonstrated that images processed identically are normally very similar. Color rendition may differ of course, but that would mostly depend on color profiles.
Blind tests generally put the lie to this one, but see below...
I agree. There's a lot of pseudoscience and arm waving magical thinking going on.

Material differences derive mostly from resolution, but this does have a significant effect on colour because of Bayer interpolation. If you zoom in on all the cameras at 100% they all look much the same, but there are fewer visible colour demosaicing errors as the angular resolution of the data increases.

In other words, we do see improvements in colour resolution if we over-sample the angular pixel resolution with respect to human sensitivity. Resolutions up to 100 pixels/deg should show some improvement in reproduction of details which have low green luminance (eg red/blue details on a neutral background).

100 pixels/degree at a close viewing distance (say 30 cm or 1 ft) is about 480 PPI. For a 24X18 inch image, that's about 100 megapixels.
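The arithmetic behind those numbers, as a small-angle sketch:

```python
import math

def ppi_for_angular_res(px_per_deg, view_dist_in):
    """Print resolution (PPI) needed so that px_per_deg pixels span one
    degree of visual angle at the given viewing distance."""
    inches_per_deg = view_dist_in * math.tan(math.radians(1.0))
    return px_per_deg / inches_per_deg

ppi = ppi_for_angular_res(100, 30 / 2.54)  # 100 px/deg viewed at 30 cm
mp = (24 * ppi) * (18 * ppi) / 1e6         # 24 x 18 inch print
print(f"{ppi:.0f} PPI, about {mp:.0f} MP for a 24x18 in print")
```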

But this will also show up on a large 4K display at a zoom ratio of about 50%.

It's also possible (though unprovable) that the larger 33X44 format allows for some reduction in QE and SNR to improve colour filter response. Less adjustment during transformation would reduce channel noise.

However, I agree that by the time you account for all the other factors, the influence is likely to be minor.

What does seem to be true is that there is an increasingly conservative metering of midtones and subsequent tone adjustment to improve the overall response curve, again sacrificing SNR but simulating the film response above zone V. In other words, better highlight retention.

This of course is something we can engineer by simply adjusting exposure, but it's easier to do psychologically if we get a better rendition in the viewfinder during composition.
 
Again, noise/grain is a factor. Smaller sensors have more noise...
I don't know of any evidence to support that generalization.

https://www.photonstophotos.net/Cha...n D850_14,Sony ILCE-6500_14,Sony ILCE-7RM3_14
This only measures READ NOISE not total signal to noise ratio, which includes shot noise.

It has very little bearing on image noise except in dark shadows.

Try a proper SNR chart...

https://www.dxomark.com/Cameras/Compare/Side-by-side/Sony-A6500-versus-Sony-A7R-III___1127_1187
 
This only measures READ NOISE not total signal to noise ratio, which includes shot noise.
Correct. But for the same angle of view, shot noise is equal if the aperture diameters are the same. You probably meant to say that if you can accept a larger lens aperture, the larger sensor can admit more light and give a less noisy image. It's the lens that determines shot noise, not the sensor.
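That can be made concrete with a photon-counting sketch (idealised: same scene, same exposure time, same QE; the 50mm f/2.0 full-frame and 33mm f/1.32 APS-C pairing is a hypothetical equivalent pair with the same 25 mm entrance pupil):

```python
import math

def photons_collected(pupil_diameter_mm, scene_flux=1e6):
    """Photons reaching the whole frame scale with entrance pupil AREA for
    a fixed angle of view and exposure time. scene_flux is an arbitrary
    photons-per-mm^2-of-pupil constant (illustrative)."""
    area = math.pi * (pupil_diameter_mm / 2) ** 2
    return scene_flux * area

# Same angle of view, same 25 mm pupil -> same photon count,
# hence the same shot-noise SNR (sqrt of the count), sensor size aside.
n_ff = photons_collected(50 / 2.0)     # 50mm at f/2.0
n_apsc = photons_collected(33 / 1.32)  # 33mm at f/1.32 (hypothetical lens)
print(f"shot-noise SNR ratio: {math.sqrt(n_ff / n_apsc):.2f}")
```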
 
Correct. But for the same angle of view, shot noise is equal if the aperture diameters are the same.
With smaller sensors, the apertures are often smaller, not because it is beneficial but because there is no other choice.
You probably meant to say that if you can accept a larger lens aperture, the larger sensor can admit more light and give a less noisy image. It's the lens that determines shot noise, not the sensor.
It is the sensor as well, since a very small sensor does not allow for very bright lenses.
 
Correct. But for the same angle of view, shot noise is equal if the aperture diameters are the same. You probably meant to say that if you can accept a larger lens aperture, the larger sensor can admit more light and give a less noisy image. It's the lens that determines shot noise, not the sensor.
No, I really didn't, and no it really doesn't.
 
What confuses me is rather the claims of better DR in the highlights, which is not a consequence of sensor size but possibly of exposure strategy.
If you equalize angle of view, entrance pupil size and exposure time, the larger sensor does have more room for highlights. So, from an equivalence point of view, it can be argued that the higher DR of larger sensors indeed comes from a higher upper bound rather than from a lower lower bound.

Of course, you can take advantage of that by exposing for longer or using a larger entrance pupil (up to roughly the same focal plane exposure as the smaller sensor) to increase SNR, and equalizing focal plane exposure in this way might lead you to the opposite conclusion, but I am not really sure that this point of view is more justified than (or even as justified as) the equivalence one.
 
Hi,

The issue I have is with the claim that MFD has more highlight DR. The only way you can increase headroom below clipping is to reduce exposure.

But, highlight DR does not really exist.

On the other hand, correct exposure is also an uncertain term.

In a way, ETTR may be the optimal exposure. But, with ETTR we still need to determine which are important highlights.



[Image: This is an HDR exposure of a church scene, consisting of 2x3 exposures. Three exposures with the lens unshifted and three with the lens shifted.]

[Image: On the left, the 'center exposure' that is 'under exposed' to 'protect the highlights'; the HDR image is on the right. The left image is noisy and lacks detail.]

[Image: The mosaic part of the same images, the 'under exposed' image on the left and the HDR on the right.]

What happens is that to keep highlight detail, we need to expose important highlights below saturation, and process the image 'lifting midtones and shadows'.

If an image has higher DR, it essentially means that we can push the shadows more without obvious noise.

Best regards

Erik

--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic tends to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 
I think it's important to realise that the only way to get more highlight headroom in a digital file is to expose the midtones lower than 18% saturation.

How much lower you go is determined by how well the midtones and shadows can be recovered, which is the limiting factor, because they always end up at 18% in the image. To the human eye, 18% of the brightest highlights appears to be about 50%.

18% is 2.5 stops below 100%. With the zone system, zone V (midtone) was 4 stops below zone IX, or highlights with detail.

So a 1.5 EV underexposure and subsequent adjustment of the tone curve would be required to emulate film.

This is not a sensor property; it is a decision made by the processing and metering system based on how much shadow recovery the sensor is capable of, so it's more of an implementation strategy.

Of course, you can meter the midtones wherever you want and adjust it yourself. But what you are actually doing is digging deeper into the sensor's noise floor to gain more highlight headroom.
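The zone-system arithmetic above, as a quick sketch (using the 18% and four-stop figures quoted in the thread):

```python
import math

def stops_below_full(fraction):
    """How many stops below full scale (100%) a linear fraction sits."""
    return math.log2(1 / fraction)

mid = stops_below_full(0.18)      # ~2.47 stops: where 18% grey lands
film_headroom = 4.0               # zone V to zone IX, per the zone system
extra_ev = film_headroom - mid    # extra underexposure to emulate film
print(f"18% sits {mid:.2f} stops below clipping; "
      f"shift midtones down ~{extra_ev:.1f} EV for 4 stops of headroom")
```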
 
Well, of course, if one meters relative to the sensor’s saturation, then it naturally follows that no amount of increasing saturation capacity would ever give more highlight headroom. But I would like to question whether it is more relevant to look at things like that rather than in terms of number of photons.

Per “Image Sensors and Signal Processing for Digital Stills Camera”, page 308:
Because the capacitance of a pixel depends on its cross-sectional structure and the area of the pixel, pixel capacitance, and thus full-well capacity, generally scales linearly with area for a given technology […]. The maximum number of photoelectrons that a 1 × 1 μm pixel can store is approximately 1/25th the number that a 5 × 5 μm pixel can store.
Therefore, it generally is the case that larger sensors can hold more light before clipping, for the image as a whole or for any fixed fraction of it.
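The scaling quoted from the book is easy to sketch (the electrons-per-square-micron density below is a made-up illustrative figure; only the ratio matters):

```python
def full_well_e(pixel_pitch_um, density_e_per_um2=2000):
    """Full-well capacity under the linear-with-area rule quoted above:
    capacity = (illustrative) electron density x pixel area."""
    return density_e_per_um2 * pixel_pitch_um ** 2

ratio = full_well_e(5.0) / full_well_e(1.0)
print(f"a 5 um pixel holds {ratio:.0f}x the electrons of a 1 um pixel")
```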

So, all in all, I think I would like to turn your last paragraph around, into something along the lines of: by continuing to target your mid-tones to 18% saturation as your sensor size and capacity increase, what you are actually doing is taking advantage of your newfound highlight headroom to get better overall SNR. (In fact, in a sense, that’s true even if the noise floor is correspondingly higher so that the dynamic range is the same in the end.)

(As an aside, I personally don’t really like the term “recovery” in this context. Is it really recovery if it was there all along? The shadows never really went away, unless the JPEG as processed by the camera is your reference.)
 
Well, of course, if one meters relative to the sensor’s saturation, then it naturally follows that no amount of increasing saturation capacity would ever give more highlight headroom. But I would like to question whether it is more relevant to look at things like that rather than in terms of number of photons.

Per “Image Sensors and Signal Processing for Digital Still Cameras”, page 308:
Because the capacitance of a pixel depends on its cross-sectional structure and the area of the pixel, pixel capacitance, and thus full-well capacity, generally scales linearly with area for a given technology […]. The maximum number of photoelectrons that a 1 × 1 μm pixel can store is approximately 1/25th the number that a 5 × 5 μm pixel can store.
Therefore, it generally is the case that larger sensors can hold more light before clipping, for the image as a whole or for any fixed fraction of it.
They may hold more light (at the same ISO) but they still clip at the same intensity, more or less.
 
Well, of course, if one meters relative to the sensor’s saturation, then it naturally follows that no amount of increasing saturation capacity would ever give more highlight headroom. But I would like to question whether it is more relevant to look at things like that rather than in terms of number of photons.

Per “Image Sensors and Signal Processing for Digital Still Cameras”, page 308:
Because the capacitance of a pixel depends on its cross-sectional structure and the area of the pixel, pixel capacitance, and thus full-well capacity, generally scales linearly with area for a given technology […]. The maximum number of photoelectrons that a 1 × 1 μm pixel can store is approximately 1/25th the number that a 5 × 5 μm pixel can store.
Therefore, it generally is the case that larger sensors can hold more light before clipping, for the image as a whole or for any fixed fraction of it.
No. Exposure is signal per unit area. The same exposure will saturate all sensors with the same QE at the same point. The clipping point is the same. On a 14-bit sensor, it's 16383 units, which translates to RGB 255.

A larger signal does not make the exposure brighter, but the larger signal from a larger sensor DOES have less noise. With less noise you can underexpose more to preserve highlights and adjust in processing.
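The "16383 maps to 255" point above can be shown with a minimal sketch. Real raw converters apply a tone curve and colour transform; this assumes a plain linear scaling for clarity:

```python
RAW_MAX = 16383  # 14-bit raw clipping point

def raw_to_rgb8(raw: int) -> int:
    """Linearly scale a 14-bit raw value to 8-bit RGB; clipped raw maps to white."""
    raw = min(max(raw, 0), RAW_MAX)   # clip to the valid raw range
    return round(raw * 255 / RAW_MAX)

print(raw_to_rgb8(16383))  # 255: pure white, regardless of sensor size
print(raw_to_rgb8(8192))   # 128: about mid-scale
```

Whatever the sensor's physical full-well capacity, its raw clipping value ends up at the same output white.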
 
Well, of course, if one meters relative to the sensor’s saturation, then it naturally follows that no amount of increasing saturation capacity would ever give more highlight headroom. But I would like to question whether it is more relevant to look at things like that rather than in terms of number of photons.

Per “Image Sensors and Signal Processing for Digital Still Cameras”, page 308:
Because the capacitance of a pixel depends on its cross-sectional structure and the area of the pixel, pixel capacitance, and thus full-well capacity, generally scales linearly with area for a given technology […]. The maximum number of photoelectrons that a 1 × 1 μm pixel can store is approximately 1/25th the number that a 5 × 5 μm pixel can store.
Therefore, it generally is the case that larger sensors can hold more light before clipping, for the image as a whole or for any fixed fraction of it.
No. Exposure is signal per unit area. The same exposure will saturate all sensors with the same QE at the same point.
I have never denied that. But exposure is not what I meant by “light”; photons are. It stands to reason that looking at light per unit area nullifies the linear advantage of a larger area.

My point is precisely that exposure is probably not the relevant thing to look at for such comparisons, if only because equal exposures on differently-sized sensors do not produce equivalent images.
The clipping point is the same. On a 14-bit sensor, it's 16383 units, which translates to RGB 255.
On most current sensors, it’s 2000-3000 electrons per µm². Does it really mean much that they all use most of their 14-bit range to represent the number of electrons from an individual photosite? Of course they do, why use so many bits otherwise?

(I say “most” of the range because it turns out that the RAWs from my G1 X III only go up to 15871, and those from my K-70, to 16313.)
A larger signal does not make the exposure brighter, but the larger signal from a larger sensor DOES have less noise. With less noise you can underexpose more to preserve highlights and adjust in processing.
“Underexpose” is such a loaded term. If a given exposure gives you the same noise as an equivalent (higher) exposure on a smaller sensor, and that exposure was fine on the small sensor, then what makes the lower (but still equivalent) exposure on the larger sensor “underexposed”? “Under” compared to what?

If the answer is “compared to what the sensor could have held”, then I believe that my point is made.
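To make the "equivalent exposure" idea above concrete, here is a minimal sketch (the 1.5× crop factor is an assumed typical APS-C to full-frame ratio): the larger sensor receives a lower exposure per unit area, yet the same total light, and therefore the same shot noise:

```python
import math

# Equivalent images: same total light over the whole frame.
crop = 1.5                      # assumed APS-C to full-frame linear crop factor
exposure_apsc = 1.0             # reference exposure (photons per unit area)

# The full-frame sensor has crop**2 = 2.25x the area, so the equivalent
# exposure per unit area is 2.25x lower:
exposure_ff_equiv = exposure_apsc / crop ** 2

print(round(exposure_ff_equiv, 3))          # 0.444: the "lower" exposure
print(round(math.log2(crop ** 2), 2))       # 1.17 EV below the APS-C exposure
```

Calling that lower exposure "underexposed" only makes sense relative to the larger sensor's saturation point, which is the point being argued here.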
 
Well, of course, if one meters relative to the sensor’s saturation, then it naturally follows that no amount of increasing saturation capacity would ever give more highlight headroom. But I would like to question whether it is more relevant to look at things like that rather than in terms of number of photons.

Per “Image Sensors and Signal Processing for Digital Still Cameras”, page 308:
Because the capacitance of a pixel depends on its cross-sectional structure and the area of the pixel, pixel capacitance, and thus full-well capacity, generally scales linearly with area for a given technology […]. The maximum number of photoelectrons that a 1 × 1 μm pixel can store is approximately 1/25th the number that a 5 × 5 μm pixel can store.
Therefore, it generally is the case that larger sensors can hold more light before clipping, for the image as a whole or for any fixed fraction of it.
No. Exposure is signal per unit area. The same exposure will saturate all sensors with the same QE at the same point.
I have never denied that. But exposure is not what I meant by “light”; photons are.
But that's not what you see in the image, that's only what's recorded. The image is just a number code that's reproduced by a printer or display. Pure white is the brightest you are going to get, whatever the camera records.
It seems logical that looking at light per unit area would appear to nullify a linear advantage of a larger area.
No, because with a larger area, you record more light overall, so the signal to noise ratio is higher over the total area of the image.
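The whole-image SNR argument above follows from photon shot noise, which is Poisson-distributed, so SNR = N/√N = √N. A minimal sketch, assuming the same exposure on two sensors of typical APS-C and full-frame dimensions (assumed values, and an arbitrary illustrative photon count):

```python
import math

photons_per_um2 = 1000          # assumed exposure, photons per square micrometre
area_apsc_mm2 = 23.5 * 15.6     # assumed typical APS-C sensor area
area_ff_mm2 = 36.0 * 24.0       # assumed full-frame sensor area

def total_snr(area_mm2: float) -> float:
    """Shot-noise-limited SNR over the whole frame: sqrt of total photons."""
    n = photons_per_um2 * area_mm2 * 1e6   # 1 mm^2 = 1e6 um^2
    return math.sqrt(n)

# At equal exposure, the larger sensor collects more photons overall, so its
# whole-image SNR is higher by roughly the ratio of linear dimensions:
print(round(total_snr(area_ff_mm2) / total_snr(area_apsc_mm2), 2))  # about 1.54
```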
My point is precisely that exposure is probably not the relevant thing to look at for such comparisons, if only because equal exposures on differently-sized sensors do not produce equivalent images.
Equivalent in what terms? And what do you mean by 'exposure'?
The clipping point is the same. On a 14-bit sensor, it's 16383 units, which translates to RGB 255.
On most current sensors, it’s 2000-3000 electrons per µm². Does it really mean much that they all use most of their 14-bit range to represent the number of electrons from an individual photosite? Of course they do, why use so many bits otherwise?
They use more bits to reduce quantisation error. But that's bits per pixel, not bits per square micrometer. And some FF sensors have 100,000 electrons per photosite.
(I say “most” of the range because it turns out that the RAWs from my G1 X III only go up to 15871, and those from my K-70, to 16313.)
Black level offset varies between cameras, and Canon's are generally higher, but the difference is not significant. The question is what does the maximum number represent when converted to image RGB?

255. Pure white, max brightness, whether it's a K-70 or a G1 X III.

Same brightness, right?
A larger signal does not make the exposure brighter, but the larger signal from a larger sensor DOES have less noise. With less noise you can underexpose more to preserve highlights and adjust in processing.
“Underexpose” is such a loaded term. If a given exposure gives you the same noise as an equivalent (higher) exposure on a smaller sensor, and that exposure was fine on the small sensor, then what makes the lower (but still equivalent) exposure on the larger sensor “underexposed”? “Under” compared to what?
If the answer is “compared to what the sensor could have held”, then I believe that my point is made.
But that isn't the answer, so what's your point?
 
