Explaining the concepts behind equivalence without using the word

And also, ISO is IMHO a bigger source of confusion than equivalence. The exposure triangle has polluted so many minds to a point where they are almost beyond saving.
This is actually a really interesting point. Can you elucidate?

I've long thought ISO is kinda irrelevant in that some sensors seem largely ISO invariant? Given I live in extreme "high-ISO-land", the exposure triangle nowadays is really irrelevant to me in the field and a decision I make in LR post-processing?

To explain - I take a shot at 1/125th, F5.6 at night in poor lighting in the field.

I don't really care if my camera metering puts the auto-ISO at "ISO6400" or "ISO3200" or "ISO12800". Whatever it is, it's underexposed and I will bring it into LR and Photoshop and with my edits, it will be at what I call (perhaps incorrectly) an "effective ISO of 25,600".

Whether it's by the camera's ISO6400 setting and +2 Exposure in LR

or the camera having been set at ISO12,800 and a +25 Shadows adjustment (or so) in LR

Or whatever variant thereof it takes for the image in the shadow regions to emerge. This is where my image gets to the same place.

Really for me the exposure triangle is now a post-processing thing, where 20 years ago, maybe I could push one stop in post before everything just falls apart.

This is with, for example, the Leica M10-P, where the sensor, at least as it seems to me, is largely (I believe it's called) invariant? Is this what you're referring to?

All I know is the type of photography I do (night time documentary street photography) pushes the edge of what you can do with natural light and current sensor technology, because again, even for me, I lose shots because you push too far and it's just not a photograph any more, but an abstract constellation of pinpricks, blotches of color and brushstrokes.
It is easy to expose optimally in low light. Maximize the exposure (shutter speed and aperture) and keep the ISO low enough so the relevant highlights do not clip. You seem to be doing that. The only problem is the view in the EVF, which may be too dark. The solution is to switch off exposure simulation. That invalidates the histogram, but the exposure indicator may be good enough (it is still accurate).
 
ISO setting does not dictate exposure, unless the photographer has delegated that decision to the camera's metering system.
I see. That's kinda obvious isn't it?

My question is more whether a camera and its sensor's ISO is irrelevant, given it seems - and again, I am totally ignorant here, just a user who's bumbled around a ton in LR - that it doesn't really matter if you adjust your exposure in post.

And, while I fix my DoF and shutter speed in the field, I make my ultimate exposure decision in post.
I've long thought ISO is kinda irrelevant in that some sensors seem largely ISO invariant? Given I live in extreme "high-ISO-land", the exposure triangle nowadays is really irrelevant to me in the field and a decision I make in LR post-processing?
Uh-huh.
So to ask the direct and answerable question:

Should I just set my camera ISO (the Leica M10-P) to something safe and low, let's say ISO1600, for my night time street photography, knowing that every photo will be grossly underexposed and thus will not blow out highlights?

And then manually adjust the exposure in LR, sometimes by up to +5 exposure AND sometimes with shadows adjustments on top of that (which compounds the image quality degradation) to get my image. Because frankly, that's how far I need to make the adjustments to get to my image.

OR

Is it better to let my camera do auto-ISO (usually with a -1 exposure comp to, again, preserve highlights), which often results in higher in-camera auto-ISO settings of ISO6400, 12,500, etc., and fewer LR adjustments to get to the same place?

Ricoh GRIII - in-camera Auto-ISO2500. Exposure adjustment >+1, Shadows adjustment >+15

It seems to me not to matter. But I've been doing the latter because it feels better. Psychologically it feels better because part of me likes to think the engineers who designed this entire ISO thing with the sensor and hardware and software in this camera knew something, a little something, that might not matter a lot, but matters maybe just a little bit on the margins, that I don't know as I adjust these LR sliders.
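Whether the ISO setting matters "on the margins" can be sketched numerically. In a toy sensor model (all numbers here are illustrative assumptions, not measurements of any Leica or Ricoh), read noise added before the ISO gain stage is amplified along with the signal, while read noise added after it is not; a camera is "ISO invariant" exactly when that post-gain term is negligible:

```python
import math

def snr_db(photons, iso_gain, read_pre=2.0, read_post=4.0):
    """Input-referred SNR for a toy sensor model (illustrative numbers only).

    photons:   mean signal in photoelectrons; fixed by the exposure, not by ISO
    iso_gain:  analog gain applied by the ISO setting
    read_pre:  read noise (e-) added before the gain stage, amplified with the signal
    read_post: read noise (e-, at unity gain) added after the gain stage
    """
    # Shot noise variance equals the photon count; post-gain noise is divided
    # down when referred back to the input through a larger analog gain.
    noise = math.sqrt(photons + read_pre**2 + (read_post / iso_gain) ** 2)
    return 20 * math.log10(photons / noise)

# Same exposure (same captured photons), both brightened to the same final image:
low_iso_pushed  = snr_db(200, iso_gain=1)   # low ISO in camera, pushed in post
high_iso_camera = snr_db(200, iso_gain=16)  # ISO raised in camera
```

With these assumed numbers, the in-camera gain wins by a fraction of a dB, matching the "matters a little bit on the margins" intuition; set `read_post` near zero and the two converge, which is what ISO invariance means.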
 
ISO setting does not dictate exposure, unless the photographer has delegated that decision to the camera's metering system.
I see. That's kinda obvious isn't it?
But that's not what the exposure triangle says.
My question is more whether a camera and its sensor's ISO is irrelevant given it seems - and again, I am totally ignorant here - just a user who's bumbled around a ton in LR.
Irrelevant is an overstatement IMHO, especially with dual conversion gain cameras. But it certainly is not as important as exposure, unless you set it too high.
I've long thought ISO is kinda irrelevant in that some sensors seem largely ISO invariant? Given I live in extreme "high-ISO-land", the exposure triangle nowadays is really irrelevant to me in the field and a decision I make in LR post-processing?
Uh-huh.
So to ask the direct and answerable question:

Should I just set my camera ISO (the Leica M10-P) to something safe and low, let's say ISO1600, for my night time street photography, knowing that every photo will be grossly underexposed and thus will not blow out highlights?
I have not tested that camera.
And then manually adjust the exposure in LR, sometimes by up to +5 exposure AND sometimes with shadows adjustments on top of that (which compounds the image quality degradation) to get my image. Because frankly, that's how far I need to make the adjustments to get to my image.

OR

Is it better to let my camera do auto-ISO (usually with a -1 exposure comp to, again, preserve highlights), which often results in higher in-camera auto-ISO settings of ISO6400, 12,500, etc., and fewer LR adjustments to get to the same place?

Ricoh GRIII - in-camera Auto-ISO2500. Exposure adjustment >+1, Shadows adjustment >+15

It seems to me not to matter.
Then to you it does not matter. In those circumstances, I usually set the ISO lower than you do, but the last Leica I used was the M240.
But I've been doing the latter because it feels better. Psychologically it feels better because part of me likes to think the engineers who designed this entire ISO thing with the sensor and hardware and software in this camera knew something,
I'm thinking that the salient features of the camera w.r.t. ISO settings probably came from the product managers, not the engineers.
a little something, that might not matter a lot, but matters maybe just a little bit on the margins, that I don't know as I adjust these LR sliders.
That is a testable assertion.

 
That is a testable assertion.
Thanks.

I have tested it. I already know the answer, at least from the dumb trial-and-error user perspective - it does matter, just a little bit. The colors are a little truer if you use the appropriate in-camera ISO setting, at least with the GRIII and Leica M10-P, than totally underbaking the ISO and abusing the sliders.

I do appreciate the response. I forget where the dual-gain is on my cameras - I looked them up years ago, which is when I decided to go the auto-ISO approach. It's lower than anything I usually use (ISO400 or 800 or something like that).

I ask these questions because these are all things I know I don't really understand, but am trying to learn, and you're a great resource. Thank you.
 
That is a testable assertion.
Thanks.

I have tested it. I already know the answer, at least from the dumb trial-and-error user perspective - it does matter, just a little bit. The colors are a little truer if you use the appropriate in-camera ISO setting, at least with the GRIII and Leica M10-P,
I don't know about the M10-P, but the M240 has big black point calibration issues, which limited one's ability to push the shadows in post.
than totally underbaking the ISO and abusing the sliders.
Use isn't necessarily abuse.
I do appreciate the response. I forget where the dual-gain is on my cameras - I looked them up years ago, which is when I decided to go the auto-ISO approach. It's lower than anything I usually use (ISO400 or 800 or something like that).

I ask these questions because these are all things I know I don't really understand, but am trying to learn, and you're a great resource. Thank you.
Thanks.
 
Framing and Focal Length

To frame the same subject from the same position with different sensor sizes, you must keep the ratio of the focal length to the sensor width, diagonal, or height the same.

So, to maintain the same field of view, a smaller sensor requires a proportionally shorter lens.

For example, if a full-frame camera uses a 50 mm lens, then an APS-C (1.5× crop) needs about a 33 mm lens to frame the same subject from the same distance.
Agree.
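The 50 mm → 33 mm example is just the crop-factor ratio. A minimal sketch (the APS-C sensor dimensions below are typical published values, an assumption rather than any specific camera):

```python
import math

def crop_factor(width_mm, height_mm, ref=(36.0, 24.0)):
    """Crop factor: ratio of the reference (full-frame) diagonal to this sensor's diagonal."""
    return math.hypot(*ref) / math.hypot(width_mm, height_mm)

def equivalent_focal_length(full_frame_mm, crop):
    """Focal length giving the same field of view on the smaller sensor."""
    return full_frame_mm / crop

aps_c = crop_factor(23.6, 15.7)          # ~1.53 for a typical APS-C sensor
fifty_equiv = equivalent_focal_length(50, 1.5)  # ~33.3 mm, as in the example above
```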
Depth of Field (DOF)

Depth of field, for a given subject distance, is approximately proportional to the f-number and the circle of confusion, and inversely proportional to the square of the focal length:

DOF = constant * N * c / f^2

Where:
  • f is focal length
  • N is f-number (focal length / aperture diameter)
  • c is the circle of confusion diameter
When sensor size decreases and focal length is reduced to match framing, the DOF increases unless aperture is opened proportionally.

To keep constant DOF when switching to a smaller sensor:
  • Reduce focal length (to match framing)
  • Open the aperture proportionally (e.g., from f/4 to f/2.8 for a 1.5× crop)
Agree. But one has to define whether c is constant or whether (c/f) is constant. The latter seems more logical to me, and I think that's what was meant.

Also, there may be some special problems with classical macro setups. I'm not sure whether normal photography approximations also apply to something that can be considered a microscope.
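The scaling rules above can be checked numerically with the usual far-field approximation DOF ≈ 2·N·c·s²/f² (so, at a fixed subject distance, DOF ∝ N·c/f²). A minimal sketch, using the constant-(c/f) reading, i.e. the circle of confusion scales with the format:

```python
def dof_proxy(f_number, coc_mm, focal_mm):
    """Relative depth of field at a fixed subject distance: DOF ∝ N·c / f²."""
    return f_number * coc_mm / focal_mm**2

crop = 1.5
full_frame = dof_proxy(4.0, 0.030, 50.0)
# Smaller format: scale the focal length, CoC, *and* f-number by the crop factor.
smaller    = dof_proxy(4.0 / crop, 0.030 / crop, 50.0 / crop)
```

The two values match exactly; the f/2.8 in the example above is simply the nearest standard stop to the exact f/4 ÷ 1.5 ≈ f/2.67.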
Exposure

The exposure (in lux-seconds) required for proper tone mapping is independent of sensor size. But the signal-to-noise ratio (SNR) depends on both the amount of light per unit area and the total collecting area (sensor size):
  • If you open the aperture on a smaller sensor to match DOF, you increase the irradiance, so to maintain the same exposure you need a faster shutter speed.
It's a detail, but where is the exposure measured? I think at the sensor plane, right?
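Taking the exposure at the sensor plane, the bullet point reduces to H ∝ t/N²: opening the aperture by the crop factor multiplies the irradiance by crop², so the shutter time must shrink by the same factor to hold the exposure constant. A sketch:

```python
def exposure_proxy(shutter_s, f_number):
    """Photometric exposure at the sensor plane, up to a scene constant: H ∝ t / N²."""
    return shutter_s / f_number**2

crop = 1.5
h_full  = exposure_proxy(1 / 125, 4.0)
# Wider aperture (N / crop) raises irradiance by crop², so divide t by crop²:
h_small = exposure_proxy((1 / 125) / crop**2, 4.0 / crop)
```

Both calls return the same exposure, which is why matching DOF on the smaller format forces the faster shutter speed.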
Noise Performance

The total number of collected photons at a given exposure is proportional to the sensor area. Thus, larger sensors produce lower noise when:
  • Framed the same
  • With matched DOF (i.e., aperture scaled accordingly)
  • And equal shutter speeds
Smaller sensors can achieve the same noise performance only if:
  • They receive the same number of photons per image, which typically means using wider apertures
I don't understand this from a linguistic point of view. Especially the second point.

At the same exposure (Lux-seconds at the sensor plane), it's logical that the number of photons captured increases with the sensor area.
But if I have the same "exposure" for different formats, I will not have an equal DoF.

To my physical understanding, (at the same FoV and shutter speed) the number of photons is directly correlated to the DoF of the setup.

Sensor size comes into play, as you have no F0.5 lens for small sensors.
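The photon/DoF correlation described above can be made concrete: at fixed framing and shutter speed, total collected light scales as exposure × sensor area ∝ (t/N²)·f², i.e. with the square of the entrance pupil diameter f/N, the same quantity that sets DOF. A sketch:

```python
def total_light_proxy(shutter_s, focal_mm, f_number):
    """Total light per image at fixed framing: ∝ t × (entrance pupil diameter)²."""
    pupil_mm = focal_mm / f_number
    return shutter_s * pupil_mm**2

crop = 1.5
p_full   = total_light_proxy(1 / 125, 50.0, 4.0)               # pupil = 12.5 mm
p_equiv  = total_light_proxy(1 / 125, 50 / crop, 4.0 / crop)   # pupil = 12.5 mm again
p_same_f = total_light_proxy(1 / 125, 50 / crop, 4.0)          # same f-number: smaller pupil
```

Equal pupils collect equal light (and give equal DOF); at the same f-number the smaller format collects crop² times less, which is where its noise penalty comes from, and why at some point an f/0.5 lens would be needed.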
Diffraction

Diffraction blur increases as the f-number increases. One definition of the Airy disk diameter is:

d=2.44*λ*N, where λ is the wavelength of the light.

To keep the same depth of field, smaller formats use wider apertures, which also reduces diffraction. This means that, for a given DOF, smaller formats are diffraction-limited at smaller physical apertures, but not necessarily at smaller f-numbers.
The term "diffraction limited" is misleading, to my understanding.
It can mean that diffraction limits the available resolution of an imaging system (lens + sensor). Let's say F22 on a 35 mm format camera with 60 MP will result in some sort of softness.
But diffraction limited can also mean that all the aberrations of a lens are so well corrected that you are close to the diffraction limit of that lens (Strehl ratio > 0.8). That does not include the resolving power of the sensor.

From an optics point of view DoF and resolution are strictly coupled. Basic laws of microscopy from Ernst Abbe.
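The Airy-disk formula quoted above is easy to evaluate; at DOF-equivalent f-numbers the absolute blur disk shrinks with the format, so the blur relative to the (also smaller) frame stays the same:

```python
def airy_disk_um(f_number, wavelength_nm=550):
    """Airy disk diameter d = 2.44·λ·N, in micrometres (green light by default)."""
    return 2.44 * (wavelength_nm / 1000) * f_number

crop = 1.5
d_full  = airy_disk_um(8.0)         # ≈ 10.7 µm on full frame at f/8
d_small = airy_disk_um(8.0 / crop)  # ≈ 7.2 µm at the DOF-equivalent f/5.3
```

d_small / d_full equals 1/crop, exactly the ratio of the frame sizes, consistent with "diffraction-limited at smaller physical apertures, but not necessarily at smaller f-numbers".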
 
Framing and Focal Length

To frame the same subject from the same position with different sensor sizes, you must keep the ratio of the focal length to the sensor width, diagonal, or height the same.

So, to maintain the same field of view, a smaller sensor requires a proportionally shorter lens.

For example, if a full-frame camera uses a 50 mm lens, then an APS-C (1.5× crop) needs about a 33 mm lens to frame the same subject from the same distance.
Agree.
Depth of Field (DOF)

Depth of field, for a given subject distance, is approximately proportional to the f-number and the circle of confusion, and inversely proportional to the square of the focal length:

DOF = constant * N * c / f^2

Where:
  • f is focal length
  • N is f-number (focal length / aperture diameter)
  • c is the circle of confusion diameter
When sensor size decreases and focal length is reduced to match framing, the DOF increases unless aperture is opened proportionally.

To keep constant DOF when switching to a smaller sensor:
  • Reduce focal length (to match framing)
  • Open the aperture proportionally (e.g., from f/4 to f/2.8 for a 1.5× crop)
Agree. But one has to define whether c is constant or whether (c/f) is constant. The latter seems more logical to me, and I think that's what was meant.
What you want is constant CoC on the print at the same print size.
Also, there may be some special problems with classical macro setups. I'm not sure whether normal photography approximations also apply to something that can be considered a microscope.
True enough.
Exposure

The exposure (in lux-seconds) required for proper tone mapping is independent of sensor size. But the signal-to-noise ratio (SNR) depends on both the amount of light per unit area and the total collecting area (sensor size):
  • If you open the aperture on a smaller sensor to match DOF, you increase the irradiance, so to maintain the same exposure you need a faster shutter speed.
It's a detail, but where is the exposure measured? I think at the sensor plane, right?
That's right.
Noise Performance

The total number of collected photons at a given exposure is proportional to the sensor area. Thus, larger sensors produce lower noise when:
  • Framed the same
  • With matched DOF (i.e., aperture scaled accordingly)
  • And equal shutter speeds
Smaller sensors can achieve the same noise performance only if:
  • They receive the same number of photons per image, which typically means using wider apertures
I don't understand this from a linguistic point of view. Especially the second point.

At the same exposure (Lux-seconds at the sensor plane), it's logical that the number of photons captured increases with the sensor area.
True.
But if I have the same "exposure" for different formats, I will not have an equal DoF.
Also true.
To my physical understanding, (at the same FoV and shutter speed) the number of photons is directly correlated to the DoF of the setup.
As I have said repeatedly on this forum, to get all the advantages of larger formats you need more light or longer exposures.
Sensor size comes into play, as you have no F0.5 lens for small sensors.
Diffraction

Diffraction blur increases as the f-number increases. One definition of the Airy disk diameter is:

d=2.44*λ*N, where λ is the wavelength of the light.

To keep the same depth of field, smaller formats use wider apertures, which also reduces diffraction. This means that, for a given DOF, smaller formats are diffraction-limited at smaller physical apertures, but not necessarily at smaller f-numbers.
The term "diffraction limited" is misleading, to my understanding.
It can mean that diffraction limits the available resolution of an imaging system (lens + sensor).
That's what I mean. There are technical definitions of diffraction limited.
Let's say F22 on a 35 mm format camera with 60 MP will result in some sort of softness.
But diffraction limited can also mean that all the aberrations of a lens are so well corrected that you are close to the diffraction limit of that lens (Strehl ratio > 0.8).
I don't see that as inherently different.
That does not include the resolving power of the sensor.

From an optics point of view DoF and resolution are strictly coupled. Basic laws of microscopy from Ernst Abbe.
 
Framing and Focal Length

To frame the same subject from the same position with different sensor sizes, you must keep the ratio of the focal length to the sensor width, diagonal, or height the same.

So, to maintain the same field of view, a smaller sensor requires a proportionally shorter lens.

For example, if a full-frame camera uses a 50 mm lens, then an APS-C (1.5× crop) needs about a 33 mm lens to frame the same subject from the same distance.
Agree.
Depth of Field (DOF)

Depth of field, for a given subject distance, is approximately proportional to the f-number and the circle of confusion, and inversely proportional to the square of the focal length:

DOF = constant * N * c / f^2

Where:
  • f is focal length
  • N is f-number (focal length / aperture diameter)
  • c is the circle of confusion diameter
When sensor size decreases and focal length is reduced to match framing, the DOF increases unless aperture is opened proportionally.

To keep constant DOF when switching to a smaller sensor:
  • Reduce focal length (to match framing)
  • Open the aperture proportionally (e.g., from f/4 to f/2.8 for a 1.5× crop)
Agree. But one has to define whether c is constant or whether (c/f) is constant. The latter seems more logical to me, and I think that's what was meant.
What you want is constant CoC on the print at the same print size.
Also, there may be some special problems with classical macro setups. I'm not sure whether normal photography approximations also apply to something that can be considered a microscope.
True enough.
Exposure

The exposure (in lux-seconds) required for proper tone mapping is independent of sensor size. But the signal-to-noise ratio (SNR) depends on both the amount of light per unit area and the total collecting area (sensor size):
  • If you open the aperture on a smaller sensor to match DOF, you increase the irradiance, so to maintain the same exposure you need a faster shutter speed.
It's a detail, but where is the exposure measured? I think at the sensor plane, right?
That's right.
Noise Performance

The total number of collected photons at a given exposure is proportional to the sensor area. Thus, larger sensors produce lower noise when:
  • Framed the same
  • With matched DOF (i.e., aperture scaled accordingly)
  • And equal shutter speeds
Smaller sensors can achieve the same noise performance only if:
  • They receive the same number of photons per image, which typically means using wider apertures
I don't understand this from a linguistic point of view. Especially the second point.

At the same exposure (Lux-seconds at the sensor plane), it's logical that the number of photons captured increases with the sensor area.
True.
But if I have the same "exposure" for different formats, I will not have an equal DoF.
Also true.
Just to expand a bit on that bullet point, I believe Jim is using the term "apertures" as a reference to f-stops. For example, f/4 would be a wider aperture than f/5.6 and would allow the smaller sensor to collect "the same number of photons per image" by virtue of working with the same size entrance pupil.

In that context, an alternate way of putting the end phrase of that bullet point would be to say, "which typically means using the same size entrance pupil diameter."
To my physical understanding, (at the same FoV and shutter speed) the number of photons is directly correlated to the DoF of the setup.
As I have said repeatedly on this forum, to get all the advantages of larger formats you need more light or longer exposures.
Sensor size comes into play, as you have no F0.5 lens for small sensors.
Diffraction

Diffraction blur increases as the f-number increases. One definition of the Airy disk diameter is:

d=2.44*λ*N, where λ is the wavelength of the light.

To keep the same depth of field, smaller formats use wider apertures, which also reduces diffraction. This means that, for a given DOF, smaller formats are diffraction-limited at smaller physical apertures, but not necessarily at smaller f-numbers.
The term "diffraction limited" is misleading, to my understanding.
It can mean that diffraction limits the available resolution of an imaging system (lens + sensor).
That's what I mean. There are technical definitions of diffraction limited.
Let's say F22 on a 35 mm format camera with 60 MP will result in some sort of softness.
But diffraction limited can also mean that all the aberrations of a lens are so well corrected that you are close to the diffraction limit of that lens (Strehl ratio > 0.8).
I don't see that as inherently different.
That does not include the resolving power of the sensor.

From an optics point of view DoF and resolution are strictly coupled. Basic laws of microscopy from Ernst Abbe.
 
Framing and Focal Length

To frame the same subject from the same position with different sensor sizes, you must keep the ratio of the focal length to the sensor width, diagonal, or height the same.

So, to maintain the same field of view, a smaller sensor requires a proportionally shorter lens.

For example, if a full-frame camera uses a 50 mm lens, then an APS-C (1.5× crop) needs about a 33 mm lens to frame the same subject from the same distance.
Agree.
Depth of Field (DOF)

Depth of field, for a given subject distance, is approximately proportional to the f-number and the circle of confusion, and inversely proportional to the square of the focal length:

DOF = constant * N * c / f^2

Where:
  • f is focal length
  • N is f-number (focal length / aperture diameter)
  • c is the circle of confusion diameter
When sensor size decreases and focal length is reduced to match framing, the DOF increases unless aperture is opened proportionally.

To keep constant DOF when switching to a smaller sensor:
  • Reduce focal length (to match framing)
  • Open the aperture proportionally (e.g., from f/4 to f/2.8 for a 1.5× crop)
Agree. But one has to define whether c is constant or whether (c/f) is constant. The latter seems more logical to me, and I think that's what was meant.
What you want is constant CoC on the print at the same print size.
Also, there may be some special problems with classical macro setups. I'm not sure whether normal photography approximations also apply to something that can be considered a microscope.
True enough.
Exposure

The exposure (in lux-seconds) required for proper tone mapping is independent of sensor size. But the signal-to-noise ratio (SNR) depends on both the amount of light per unit area and the total collecting area (sensor size):
  • If you open the aperture on a smaller sensor to match DOF, you increase the irradiance, so to maintain the same exposure you need a faster shutter speed.
It's a detail, but where is the exposure measured? I think at the sensor plane, right?
That's right.
Noise Performance

The total number of collected photons at a given exposure is proportional to the sensor area. Thus, larger sensors produce lower noise when:
  • Framed the same
  • With matched DOF (i.e., aperture scaled accordingly)
  • And equal shutter speeds
Smaller sensors can achieve the same noise performance only if:
  • They receive the same number of photons per image, which typically means using wider apertures
I don't understand this from a linguistic point of view. Especially the second point.

At the same exposure (Lux-seconds at the sensor plane), it's logical that the number of photons captured increases with the sensor area.
True.
But if I have the same "exposure" for different formats, I will not have an equal DoF.
Also true.
Just to expand a bit on that bullet point, I believe Jim is using the term "apertures" as a reference to f-stops. For example, f/4 would be a wider aperture than f/5.6 and would allow the smaller sensor to collect "the same number of photons per image" by virtue of working with the same size entrance pupil.

In that context, an alternate way of putting the end phrase of that bullet point would be to say, "which typically means using the same size entrance pupil diameter."
True enough, but I was afraid I'd lose a lot of people if I said that.
 
And also, ISO is IMHO a bigger source of confusion than equivalence. The exposure triangle has polluted so many minds to a point where they are almost beyond saving.
It is easy to expose optimally in low light. Maximize the exposure (shutter speed and aperture) and keep the ISO low enough so the relevant highlights do not clip. You seem to be doing that. The only problem is the view in the EVF, which may be too dark. The solution is to switch off exposure simulation. That invalidates the histogram, but the exposure indicator may be good enough (it is still accurate).
The exposure preview in the viewfinder can't be disabled for some burst shooting modes. There may also arise problems with AF accuracy (/subject detection) if you underexpose a lot, as the algorithms usually don't search for faces in the deep shadows of a scene. At least OM-System cameras are limited in that scope, and using less/more(?) than -2 EV compensation is sometimes troublesome.

For erratically moving subjects (humans, animals), it's also a viable solution to increase the exposure time beyond what is normally acceptable for preventing motion blur (you need image stabilisation for that!) and just take a burst of images. You can then simply scroll through the 100 or 200 images in the camera, write-protect the few with the least motion blur (and the right facial expression), and delete the rest.
 
And also, ISO is IMHO a bigger source of confusion than equivalence. The exposure triangle has polluted so many minds to a point where they are almost beyond saving.
It is easy to expose optimally in low light. Maximize the exposure (shutter speed and aperture) and keep the ISO low enough so the relevant highlights do not clip. You seem to be doing that. The only problem is the view in the EVF, which may be too dark. The solution is to switch off exposure simulation. That invalidates the histogram, but the exposure indicator may be good enough (it is still accurate).
The exposure preview in the viewfinder can't be disabled for some burst shooting modes. There may also arise problems with AF accuracy (/subject detection) if you underexpose a lot, as the algorithms usually don't search for faces in the deep shadows of a scene.
That is the advantage of turning exposure simulation off: no faces are in deep shadows. Some cameras quickly brighten the EVF display so that the camera can grab focus. The brightness of the subject in the EVF influences the accuracy of focusing.
At least OM-System cameras are limited in that scope, and using less/more(?) than -2 EV compensation is sometimes troublesome.

For erratically moving subjects (humans, animals), it's also a viable solution to increase the exposure time beyond what is normally acceptable for preventing motion blur (you need image stabilisation for that!) and just take a burst of images. You can then simply scroll through the 100 or 200 images in the camera, write-protect the few with the least motion blur (and the right facial expression), and delete the rest.
 
Framing and Focal Length

To frame the same subject from the same position with different sensor sizes, you must keep the ratio of the focal length to the sensor width, diagonal, or height the same.

So, to maintain the same field of view, a smaller sensor requires a proportionally shorter lens.

For example, if a full-frame camera uses a 50 mm lens, then an APS-C (1.5× crop) needs about a 33 mm lens to frame the same subject from the same distance.
Agree.
Depth of Field (DOF)

Depth of field, for a given subject distance, is approximately proportional to the f-number and the circle of confusion, and inversely proportional to the square of the focal length:

DOF = constant * N * c / f^2

Where:
  • f is focal length
  • N is f-number (focal length / aperture diameter)
  • c is the circle of confusion diameter
When sensor size decreases and focal length is reduced to match framing, the DOF increases unless aperture is opened proportionally.

To keep constant DOF when switching to a smaller sensor:
  • Reduce focal length (to match framing)
  • Open the aperture proportionally (e.g., from f/4 to f/2.8 for a 1.5× crop)
Agree. But one has to define whether c is constant or whether (c/f) is constant. The latter seems more logical to me, and I think that's what was meant.
What you want is constant CoC on the print at the same print size.
Also, there may be some special problems with classical macro setups. I'm not sure whether normal photography approximations also apply to something that can be considered a microscope.
True enough.
Exposure

The exposure (in lux-seconds) required for proper tone mapping is independent of sensor size. But the signal-to-noise ratio (SNR) depends on both the amount of light per unit area and the total collecting area (sensor size):
  • If you open the aperture on a smaller sensor to match DOF, you increase the irradiance, so to maintain the same exposure you need a faster shutter speed.
It's a detail, but where is the exposure measured? I think at the sensor plane, right?
That's right.
Noise Performance

The total number of collected photons at a given exposure is proportional to the sensor area. Thus, larger sensors produce lower noise when:
  • Framed the same
  • With matched DOF (i.e., aperture scaled accordingly)
  • And equal shutter speeds
Smaller sensors can achieve the same noise performance only if:
  • They receive the same number of photons per image, which typically means using wider apertures
I don't understand this from a linguistic point of view. Especially the second point.

At the same exposure (Lux-seconds at the sensor plane), it's logical that the number of photons captured increases with the sensor area.
True.
But if I have the same "exposure" for different formats, I will not have an equal DoF.
Also true.
Just to expand a bit on that bullet point, I believe Jim is using the term "apertures" as a reference to f-stops. For example, f/4 would be a wider aperture than f/5.6 and would allow the smaller sensor to collect "the same number of photons per image" by virtue of working with the same size entrance pupil.

In that context, an alternate way of putting the end phrase of that bullet point would be to say, "which typically means using the same size entrance pupil diameter."
In the early-to-mid 1970s, I got a call from Charlie Pugh, who was at UCB at the time, asking if I, along with our common thesis advisor, Phil Hartman, would be interested in proofreading a manuscript for an upcoming book by his colleagues Sachs and Wu called "General Relativity for Mathematicians". At that time GR was a convoluted subject because it was expressed in the mathematical concepts of the early 1900s. Wu's specialty was modern differential geometry, and he and Sachs rewrote the General Theory using the machinery of coordinate-free differential geometry. The first thing they did was to change the units so that the speed of light was 1 and dimensionless. The concept had been around for quite a while in mathematical physics. That made the theory much more elegant and enhanced the geometric aspects of the subject. The same approach was what mathematician Roger Penrose used to show that the theory of GR implied the existence of black holes. This approach has become a standard in mathematical physics.

When the book first appeared, much of the physics community was not very flattering. One would hear comments about a "bastardization of physics" by a bunch of mathematicians. Today the book is still a classic. So sometimes normalizing physical concepts can work well.

Does anyone know the history of how the dimensionless concept known as the f-number came to be the standard in geometric optics, versus using the actual area of the entrance pupil?
To my physical understanding (at the same FoV and shutter speed), the number of photons is directly correlated with the DoF of the setup.
As I have said repeatedly on this forum, to get all the advantages of larger formats you need more light or longer exposures.
Sensor size comes into play, as there is no f/0.5 lens for small sensors.
Diffraction

Diffraction blur increases as the f-number increases. One definition of the Airy disk diameter is:

d = 2.44 * λ * N, where λ is the wavelength of the light.

To keep the same depth of field, smaller formats use wider apertures, which also reduces diffraction. This means that, for a given DOF, smaller formats are diffraction-limited at smaller physical apertures, but not necessarily at smaller f-numbers.
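A quick numerical check of the formula above (the f-numbers are illustrative, and a mid-spectrum wavelength of 550 nm is assumed):

```python
# Airy disk diameter d = 2.44 * lambda * N, as in the text.
# Compare full frame at f/8 with a 2x-crop format at the DOF-equivalent f/4.

LAMBDA_NM = 550  # assumed green/mid-spectrum wavelength

def airy_diameter_um(f_number, wavelength_nm=LAMBDA_NM):
    return 2.44 * wavelength_nm * 1e-3 * f_number  # nm -> um

d_ff = airy_diameter_um(8.0)    # ~10.7 um on full frame
d_crop = airy_diameter_um(4.0)  # ~5.4 um on the 2x-crop format

# The crop format's Airy disk is half the size in microns, but its sensor is
# also half the size, so the blur is the same fraction of the frame.
print(d_ff, d_crop, d_ff / d_crop)
```

This is the sense in which equivalent apertures give equivalent diffraction: the absolute blur spot shrinks with the format, but its share of the image does not.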
The term "diffraction limited" is misleading, to my understanding.
It can mean that diffraction limits the available resolution of an imaging system (lens + sensor).
That's what I mean. There are technical definitions of diffraction limited.
Let's say F22 on a 35 mm format camera with 60 MP will result in some sort of softness.
But "diffraction limited" can also mean that all aberrations of a lens are so well corrected that you are close to the diffraction limit of that lens (Strehl ratio > 0.8).
I don't see that as inherently different.
That does not include the resolving power of the sensor.

From an optics point of view DoF and resolution are strictly coupled. Basic laws of microscopy from Ernst Abbe.
 
Framing and Focal Length

To frame the same subject from the same position with different sensor sizes, you must keep the ratio of the focal length to the sensor width, diagonal, or height the same.

So, to maintain the same field of view, a smaller sensor requires a proportionally shorter lens.

For example, if a full-frame camera uses a 50 mm lens, then an APS-C (1.5× crop) needs about a 33 mm lens to frame the same subject from the same distance.
Agree.
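The crop-factor arithmetic above can be sketched in a couple of lines (the crop factors are the usual illustrative ones):

```python
# Equivalent focal length for the same framing from the same position:
# f_small = f_full_frame / crop_factor.

def equivalent_focal_mm(ff_focal_mm, crop_factor):
    return ff_focal_mm / crop_factor

print(equivalent_focal_mm(50.0, 1.5))  # ~33.3 mm on APS-C (1.5x crop)
print(equivalent_focal_mm(50.0, 2.0))  # 25.0 mm on a 2x-crop format
```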
Depth of Field (DOF)

Depth of field is inversely proportional to the square of the aperture diameter for a given subject distance. In terms of f-number and focal length:

DOF ≈ constant * N * c / f^2

Where:
  • f is focal length
  • N is f-number (focal length / aperture diameter)
  • c is the circle of confusion diameter
When sensor size decreases and focal length is reduced to match framing, the DOF increases unless aperture is opened proportionally.

To keep constant DOF when switching to a smaller sensor:
  • Reduce focal length (to match framing)
  • Open the aperture proportionally (e.g., from f/4 to f/2.8 for a 1.5× crop)
Agree. But one has to define whether c is constant or whether (c/f) is constant. The latter seems more logical to me, and I think that's what was meant.
What you want is constant CoC on the print at the same print size.
Also, there may be some special problems with classical macro setups. I'm not sure whether normal photography approximations also apply to something that can be considered a microscope.
True enough.
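Assuming DOF scales as N·c/f² at a fixed subject distance, and that c scales with the sensor size (constant CoC on the print at the same print size), the bullet-point scaling can be checked numerically (illustrative values):

```python
# DOF is proportional to N * c / f^2 at a fixed subject distance.
# Scale f and c by 1/crop to match framing and the print-referred CoC,
# then N_small = N_ff / crop leaves the DOF unchanged.

def dof_term(n, c, f):
    """Quantity proportional to DOF at a fixed subject distance."""
    return n * c / f**2

crop = 1.5
ff = dof_term(n=4.0, c=0.030, f=50.0)                          # FF, f/4, c = 30 um
small = dof_term(n=4.0 / crop, c=0.030 / crop, f=50.0 / crop)  # APS-C, ~f/2.67

print(ff, small)  # equal: same DOF with the proportionally wider aperture
```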
Exposure

The exposure (in lux-seconds) required for proper tone mapping is independent of sensor size. But the signal-to-noise ratio (SNR) depends on both the amount of light per unit area and the total collecting area (sensor size):
  • If you open the aperture on a smaller sensor to match DOF, you increase irradiance, so to maintain the same exposure level you need a faster shutter speed (a shorter exposure time).
It's a detail, but where is the exposure measured? At the sensor plane, right?
That's right.
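As a numerical sanity check of that trade-off (the settings are illustrative): irradiance at the sensor scales as 1/N², so opening up by one crop factor and shortening the exposure time by the crop factor squared leaves the exposure in lux-seconds unchanged.

```python
# Exposure H ~ irradiance * time, with irradiance proportional to 1/N^2.

def relative_exposure(f_number, time_s):
    """Quantity proportional to exposure H in lux-seconds."""
    return time_s / f_number**2

crop = 1.5
h_ff = relative_exposure(4.0, 1 / 125)                      # FF: f/4, 1/125 s
h_crop = relative_exposure(4.0 / crop, 1 / 125 / crop**2)   # ~f/2.67, ~1/281 s

print(h_ff, h_crop)  # equal exposures at the sensor plane
```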
Noise Performance

The total number of collected photons at a given exposure is proportional to the sensor area. Thus, larger sensors produce lower noise when:
  • Framed the same
  • With matched DOF (i.e., aperture scaled accordingly)
  • And equal shutter speeds
I've traced f-stop as far back as 1904 to an early edition of the Ilford "Manual of Photography". On page 45, author C. H. Bothamley walks the reader through the lens aperture (diameter of the stop) determining how much light is collected from the scene and the f-stop or ratio of focal length to aperture determining the efficiency of the lens at transmitting light to the plate. To paraphrase, all lenses with the same ratio have the same efficiency, barring differences due to glass thickness, at transmitting light. On pages 44 and 45, Bothamley explains that the time of exposure and f-stop determine the exposure (light energy per unit area of the light-sensitive medium) used to make the photo.

Though I've not found a copy, I suspect the 1890 first edition is probably the first reference to this in the Ilford manual series. Bothamley was the author. I have not sought out and, therefore, do not know of earlier published references to an f-stop in photography.
Bill, I started poking around after I proposed this question. I found this article fairly informative about how the f-number is not a "natural" physical quantity.


I also found a discussion of the f-number on Photography Stack Exchange. Consider the equation for the hyperfocal distance H below, where f is the focal length, N the f-number, c the circle of confusion, and D the diameter of the entrance pupil.

H = f^2/(N*c) + f = f(f/(N*c) + 1) = f(D/c + 1) ≈ f*D/c (because D ≫ c)

The f^2 term gives the false impression that the hyperfocal distance for a camera lens somehow depends on the focal length squared. That is not real, but a relic of the f-number. In reality it is proportional to the product of the focal length and the entrance pupil diameter, and inversely proportional to c.

The discussion there states that the f-number was important in the development of light meters, ensuring that an exposure measured by a light meter was valid for any camera: 1/60 s at f/5.6 gives the same exposure regardless of the lens.
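The two forms of the hyperfocal equation can be checked against each other numerically (the focal length, f-number, and CoC below are illustrative):

```python
# Hyperfocal distance: H = f^2/(N*c) + f, and equivalently f*(D/c + 1)
# with D = f/N the entrance pupil diameter. All lengths in mm.

def hyperfocal_mm(f, n, c):
    return f**2 / (n * c) + f

def hyperfocal_via_pupil_mm(f, n, c):
    d = f / n  # entrance pupil diameter
    return f * (d / c + 1)

f, n, c = 50.0, 8.0, 0.030  # 50 mm lens, f/8, c = 30 um

print(hyperfocal_mm(f, n, c))            # ~10467 mm
print(hyperfocal_via_pupil_mm(f, n, c))  # same value, pupil-based form
print(f * (f / n) / c)                   # approximation f*D/c, ~10417 mm
```

The exact forms agree, and the f*D/c approximation is off only by the focal length itself, which is negligible since D ≫ c.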
 
That is the advantage of turning exposure simulation off: no faces are in deep shadows. Some cameras quickly brighten the EVF display so that the camera can grab focus. The brightness of the subject in the EVF influences the accuracy of focusing.
You can disable the exposure preview; in Olympus terms that is called "S-OVF mode". No problem.

But at least there it comes with a hefty speed penalty in terms of possible fps in burst mode. With S-OVF the exposure of the sensor is different between a live-view image and a captured image. At least in the OM System world, with S-OVF the lens aperture is opened to the maximum between exposures, the exposure time or gain of the sensor is changed for EVF display and focusing, and then the aperture is closed again, with a different exposure time or gain, for capturing one image. That's quite slow.

There is another burst mode (SH2) with blackout-free live view, which is much faster. There the aperture is closed to the desired value at the first shot, and then the sensor just reads at about 100 or 120 Hz (every second frame used for C-AF measuring). From that stream of images you can then save about 50, 25, 16, or 12 fps of raw files to your card. 12 fps is close to endless; at 16 fps your buffer fills in 20 seconds, or about 330 frames.

Besides the loss of burst speed there is also a wear problem with S-OVF. Just as mechanical shutters are limited to a maximum number of actuations, the same applies to the aperture mechanics. And lenses typically stay in use longer than cameras.

Of course it can be different in the Nikon, Sony, Canon, or Panasonic world. But in principle the sensor's exposure has to change between the captured image and the live-view image, which suggests less speed.
 
Yes, I have always assigned S-OVF to a function button to disable/enable it quickly when I need to frame rather than when I need to set the exposure. I was unaware of the limitation when shooting burst (I do not use it). Thanks for letting me know.

The Olympus implementation is the best I know. Other cameras (Leica, Hasselblad) allow disabling exposure simulation only in M mode or, like Fuji, require you to cycle through several useless modes before turning it on or off.
 
