Because so many atrocities and outrages against photography have been committed in its name, it is time to abandon the term “f/stop” and with it all the useless emotional and linguistic baggage it has accumulated in the last few years on these forums.
Now, with f/stop permanently in the rearview mirror, what would be a good name for the adjustable-sized hole in a lens that regulates the amount of energy passing through?
If you are asking a serious question, the logical replacements for f/stop and focal length would be aperture diameter and angle of view.
Knowing the focal length and your sensor size allows you to quickly calculate, in your head, the answers to two questions:
- What is the required distance Z from camera to subject to get a framing size of X * Y?
- What is the required focal length to get a framing size of X * Y at distance Z?
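Both questions above reduce to one proportion. A minimal sketch, assuming the thin-lens approximation for subjects far beyond the focal length (framing width / distance ≈ sensor width / focal length) and hypothetical example numbers:

```python
# Thin-lens approximation for distant subjects:
# frame_w / Z ≈ sensor_w / f, so each unknown is a one-line rearrangement.

def distance_for_framing(focal_mm, sensor_w_mm, frame_w_mm):
    """Camera-to-subject distance Z needed to frame a subject frame_w wide."""
    return focal_mm * frame_w_mm / sensor_w_mm

def focal_for_framing(distance_mm, sensor_w_mm, frame_w_mm):
    """Focal length needed to frame a subject frame_w wide at distance Z."""
    return distance_mm * sensor_w_mm / frame_w_mm

# Example: 50 mm lens, 36 mm wide full-frame sensor, framing 1.8 m wide
print(distance_for_framing(50, 36, 1800))  # 2500.0 mm, i.e. 2.5 m
print(focal_for_framing(2500, 36, 1800))   # 50.0 mm
```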
Yes, if you know the focal length and sensor size, you can compute the angle of view. However, focal length and sensor size are implementation details. For a photographer, angle of view and aperture diameter are actually more important.
If we are going to start with the assumption that we want to continue to use conventions developed in the days of film, prior to the invention of computers, then f/stop and focal length are the answer.
If we are going to allow the computer in the digital camera to do some simple calculations, there is no reason why it can't display aperture diameter and angle of view.
As the camera knows the subject distance, it can also tell us the field of view (how wide it is at the subject) and the depth of field.
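Both quantities follow from standard approximations the camera could evaluate; a minimal sketch, assuming hypothetical numbers, the thin-lens field-width formula, and the common depth-of-field approximation for subject distances well below hyperfocal:

```python
import math

def field_width(distance_mm, aov_deg):
    """Width of the scene at the subject distance, for a horizontal angle of view."""
    return 2 * distance_mm * math.tan(math.radians(aov_deg) / 2)

def depth_of_field(focal_mm, f_number, distance_mm, coc_mm=0.03):
    """Approximate total DOF (valid when distance is well below hyperfocal)."""
    return 2 * f_number * coc_mm * distance_mm ** 2 / focal_mm ** 2

# 50 mm at f/4, subject at 3 m, full-frame circle of confusion 0.03 mm (assumed)
print(field_width(3000, 40))        # ≈ 2184 mm of scene width at the subject
print(depth_of_field(50, 4, 3000))  # ≈ 864 mm of total depth of field
```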
Knowing the field of view is not helpful in that case, as it is much more difficult to translate, for example, 73° x 53° into distances. The same thing is otherwise easy:
- You change from 50 mm to 100 mm but want the same framing; how much do you need to change your camera-to-subject distance?
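That one really is mental arithmetic; as a sketch (under the thin-lens approximation, keeping the same framing means distance scales in proportion to focal length):

```python
def new_distance(old_distance_m, old_focal_mm, new_focal_mm):
    """Distance that keeps the same framing after a focal length change."""
    return old_distance_m * new_focal_mm / old_focal_mm

print(new_distance(3.0, 50, 100))  # 6.0 m: double the focal length, double the distance
```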
Yes, if you are going to use lenses labeled by focal length rather than angle of view, then it makes sense to think in focal lengths.
However, we are talking about what system we would use if we were starting from scratch. In that case, lenses would be labeled by angle of view. Yes, the angle of view changes with sensor size, but that's the same issue as "effective focal length".
Modern cameras contain very powerful computers. They really can compute and display angle of view.
- You change from 39° x 27° to 20° x 13° but want the same framing; how much do you need to change your camera-to-subject distance?
Or see how easily it goes this way:
- Your current lens says 24° x 16° and you mount it on an APS-C body; now what distance do you need to get the same framing after the 1.5x crop-factor change?
- Your aperture is 12.5 mm and you change from a 24° x 16° lens to a 73° x 53° lens; do you need to change your ISO and/or shutter speed from 200 and 1/60 to get the same exposure?
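For what it's worth, these also reduce to one-liners once the camera (or you) uses W = 2·Z·tan(θ/2) for the field width at distance Z, and N = focal length / aperture diameter for the f-number. A sketch with the numbers from the bullets above, assuming a 36 mm wide full-frame sensor for the exposure question:

```python
import math

def distance_for_same_framing(old_distance, old_aov_deg, new_aov_deg):
    """Distance that keeps the same field width after an angle-of-view change."""
    return (old_distance * math.tan(math.radians(old_aov_deg) / 2)
                         / math.tan(math.radians(new_aov_deg) / 2))

def focal_from_aov(sensor_w_mm, aov_deg):
    """Focal length giving horizontal angle of view aov_deg on a sensor sensor_w wide."""
    return sensor_w_mm / (2 * math.tan(math.radians(aov_deg) / 2))

# 39° -> 20° horizontal with the same framing: step back by about 2x
print(distance_for_same_framing(3.0, 39, 20))  # ≈ 6.02 m

# Aperture diameter 12.5 mm, 24° lens vs 73° lens on a 36 mm wide sensor
n1 = focal_from_aov(36, 24) / 12.5             # ≈ f/6.8
n2 = focal_from_aov(36, 73) / 12.5             # ≈ f/1.9
stops = 2 * math.log2(n1 / n2)                 # ≈ 3.6 stops brighter at 73°
```

So yes, with a fixed aperture diameter the exposure per unit area does change with angle of view; the point under debate is whether the camera should do this arithmetic for you.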
If two cameras are set to the same aperture diameter, angle of view, and shutter speed, they will produce similar results in terms of framing, depth of field, and visible image noise.
And different exposure, which is a critical point of photography, so that is not sensible. And the amount of noise does not depend on exposure as much as it does on how the image is viewed: the final image size, the viewing distance, the viewer's eyesight, and even the subject.
Of course exposure is important, but not in the same way it was in the days of film.
The response curve of film is "S" shaped. You really need to hit a certain point on the curve in order to get a high quality negative. That means that if you are shooting Tri-X, you need the same exposure per unit area whether you are shooting half-frame 35mm or 8x10 sheet film. Now the quality of the results from Tri-X varies with film size, but you have to live with that. The entire film workflow is built around hitting a small target range for your light per unit area on the film.
Digital sensors work very differently. There is a much wider latitude of exposures that will give you a good image. Put your camera in Auto-ISO mode, and you can vary the exposure by quite a few stops, and still get a great image.
You can argue that if the exposure is too low, the image looks noisy. However, the criterion for "too low" varies with sensor size. An exposure (light per unit area) that is too low for a 2X crop camera may yield a very usable image on a full frame.
With digital, we don't need to build our workflow around light per unit area; it makes more sense to build it around total light captured. If the angle of view, aperture diameter, and shutter speed are the same, we capture about the same total light, no matter what the sensor size.
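A small numeric sketch of that claim, with hypothetical full-frame vs. 2x-crop numbers (same angle of view, same 12.5 mm aperture diameter, same shutter speed):

```python
# Exposure per unit area scales as 1/N^2, with N = focal / aperture_diameter.
# Total light is exposure per unit area times sensor area (arbitrary units).

def total_light(sensor_w_mm, sensor_h_mm, focal_mm, aperture_d_mm):
    n = focal_mm / aperture_d_mm            # f-number
    exposure_per_area = 1.0 / n ** 2        # light per unit area, arbitrary units
    return exposure_per_area * sensor_w_mm * sensor_h_mm

full_frame = total_light(36, 24, 50, 12.5)  # 50 mm f/4 on full frame
crop_2x    = total_light(18, 12, 25, 12.5)  # 25 mm f/2 on 2x crop, same angle of view
print(full_frame, crop_2x)                  # equal, despite 2 stops difference per unit area
```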
With a modern car, I don't need to worry about what gear I am in, or the engine RPM. I can worry about what speed I want the car to go, and the car will figure out the implementation detail of gear ratio and engine RPM.
With a modern digital camera, I shouldn't have to worry about sensor size and ISO. Those are implementation details. If I know angle of view and aperture diameter, I don't need to know sensor size to know how the image will look.
The key is to get away from worrying about trying to get the same light per unit area no matter what the sensor size.
The key is to stop worrying about sensor area affecting exposure, DOF, etc., and to focus on photography (perspective, composition, framing, timing, time control, exposure).
If you also want to think about overall image noise and depth of field, then angle of view and aperture diameter are important. Angle of view is critical for framing and perspective.
That's a holdover from the days of film. f/stops are very helpful for getting uniform light per unit area no matter what the sensor size, independent of the results that will be produced.
It is a ratio; it doesn't matter whether you make a cake that serves 8 people or just two. If the recipe says you need a 1:4 ratio of chocolate to flour, then you use a 1:4 ratio. If a serving for two takes 200 g of flour and 50 g of chocolate, then for 8 people it is 800 g and 200 g. Making something bigger doesn't mean the ratio suddenly changes to something else like 1:8.
I think a better analogy is that if the recipe calls for 8 cups of flour, it doesn't matter whether you fill a 2 cup container 4 times, or a 1/2 cup container 16 times. Those are implementation details. With a digital camera that's light per unit area and sensor total area. What we really care about is how much total flour was measured, or how much total light was captured. How it was sliced up is an implementation detail.
Want to make the same quality print as an A2-size print, but at A0 size? You have the ratio, distance, size, etc. But it is not about changing the exposure.
Actually, the same exposure (light per unit area) might produce visibly different prints with different sensor sizes. A smaller sensor may produce a noticeably noisy print. A larger sensor may produce a creamy smooth, noise-free print. Now if the exposures (light per unit area) were different, but both sensors captured the same total light, then the prints will look very similar.