Fast lenses, and High ISO

Started 5 months ago | Discussions
MFog
Forum Member | Posts: 75
Re: Fast lenses, and High ISO
In reply to hotdog321, 5 months ago

Chikoo wrote:

Fast lenses, as they are called, allow more light to hit the sensor and in turn allow for fast(er) shutter speeds. The f-number provides a relative measure of this ability.

In this age of ever increasing ISO, are fast lenses needed anymore? The only ability I see the fast lenses provide was actually a disadvantage that happened to become a feature, and that is shallow DoF, allowing for separation of subject from the background.

That said, should they be called Fast Lenses or Shallow Lenses?

Since they've always been called fast lenses, I think we should stick with established tradition in the interest of simplicity and to avoid confusion. Why borrow trouble?

Your opening question is more interesting. Back in the film days, we would eagerly pay hundreds or thousands of dollars more for a lens that was 1/3 to 1 stop faster. This minuscule difference often spelled the difference between getting the picture or missing it altogether in sports or news photography. 3200 film, for instance, was often so "crunchy" that it was unsuitable even for newsprint.

But these days we use those wonderful modern digital sensors that allow us to shoot at 3200-6400 or higher and still capture really excellent images. Bokeh aficionados might still desire really fast lenses, but most of us can probably do without.

For instance, I recently bought Canon's 16-35 f/4L IS lens because of massive improvements in edge sharpness. My old 16-35 f/2.8 (version 1) is going into a drawer as backup. I'm a photojournalist, but I'm not worried about the f/4 speed. I wish it was f/2.8, but it's not a deal-breaker.

Agreed. My go-to zoom is the 24-105. The f/4 aperture works well with the high ISO of a 6D; I sold the Sigma 24-70 2.8 I used on my 5D. I complement the f/4 with fast primes, and it is quite an effective rig.

 MFog's gear list:
Canon EOS 5D Canon EOS 7D Nikon 1 J1 Canon EOS M Canon EOS 6D +15 more
jonas ar
Regular Member | Posts: 314
Re: You missed the question.
In reply to EinsteinsGhost, 5 months ago

EinsteinsGhost wrote:

Because a competent photographer knows exposure is not dependent on format. It is dependent on: ISO, Aperture and Shutter values for a given scene brightness. That is it.

According to wikipedia photographic exposure is the amount of light (the image plane illuminance times the exposure time) reaching a photographic film (and you may substitute film with "sensor" if you like).

I can see how the aperture and shutter speed (and the scene brightness) affect the exposure in this definition, but please do explain how ISO affects the amount of light reaching the sensor. Or do you not agree with wikipedia on the definition of exposure?

Can you not see that changing the ISO is fundamentally different from changing the amount of light recorded?

Do you really think that we change the exposure when we use the exposure slider in raw processors (or modulate the image brightness of the developed picture in other ways by software)?

bobn2
Forum Pro | Posts: 32,245
Re: Exposure and brightness.
In reply to Great Bustard, 5 months ago

Great Bustard wrote:

bobn2 wrote:

Great Bustard wrote:

bobn2 wrote:

Not exactly. Luminous exposure, which is what we use in photography, is measured in luminous flux times time, which is luminous energy. Not quite the same thing as 'photons/mm²'. If we were radiologists using radiometric exposure, then we'd be measuring that in W/m², and it would become clearer that it is a power density, and integrated over time it becomes an energy density, which can then be related to a photon count, given some assumed distribution of the photon energies. The relationship between radiometric and luminous exposure is that luminous exposure is weighted by the luminosity function, that is, it includes only visible energy. So, while our cameras do in fact work as photon counters, that is an approximation to what they should be doing photographically (film cameras are also photon counters, they just count photons in pairs and their QE is rather low).

We discussed this a while back:

http://www.dpreview.com/forums/post/39448295

So are you saying, for example, that one billion red photons falling on the sensor does not result in the same exposure as one billion blue photons falling on the sensor? If a room were illuminated by a 100W red LED, would the camera meter differently than if the room were illuminated by a 100W blue LED?

It should. Exposure is, as you know, measured in lux seconds. Read about what the 'lux' is here: http://en.wikipedia.org/wiki/Lux. Specifically it is lumens per square meter. Lumens here: http://en.wikipedia.org/wiki/Lumen_(unit)

Luminous flux differs from power (radiant flux) in that luminous flux measurements reflect the varying sensitivity of the human eye to different wavelengths of light, while radiant flux measurements indicate the total power of all electromagnetic waves emitted, independent of the eye's ability to perceive it.

The number of candelas or lumens from a source also depends on its spectrum, via the nominal response of the human eye as represented in the luminosity function.

http://en.wikipedia.org/wiki/Luminosity_function

[Figure: photopic (black) and scotopic (green) luminosity functions; the photopic curves are the CIE 1931 standard (solid), the Judd–Vos 1978 modified data (dashed), and the Sharpe, Stockman, Jagla & Jägle 2005 data (dotted). The horizontal axis is wavelength in nm.]

So, indeed green and red (or blue) light is differently weighted with respect to exposure.
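
A rough numeric sketch of that weighting (the V(λ) values below are approximate photopic figures, used purely for illustration):

```python
# Rough sketch: luminous flux from equal radiant power at different wavelengths.
# V(lambda) values are approximate photopic figures, for illustration only.
V = {450: 0.038, 555: 1.000, 650: 0.107}   # blue, green, red (approx.)

def lumens(power_watts, wavelength_nm):
    """Luminous flux = 683 lm/W x V(lambda) x radiant power."""
    return 683.0 * V[wavelength_nm] * power_watts

for wl in (450, 555, 650):
    print(f"100 W at {wl} nm -> {lumens(100, wl):.0f} lm")
# 100 W of green light contributes far more luminous exposure (lux-seconds,
# for a given time and area) than 100 W of blue or red light.
```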

Would it be correct to say that the density of light falling on the sensor (photons / mm²) is mapped into the exposure via the luminosity function?

  • Brightness (photons / mm²) = Exposure (photons / mm²) · Amplification (unitless)

No, no, no. You don't get 'brightness' by 'amplifying' exposure. The luminance of a viewed image depends on what you're using to view it: the luminosity of your TV, monitor or projector, or the strength of the light that you're using to view a print. It also continues to provide light as long as you view it; its luminous energy is not limited by the luminous energy in the exposure. So, not 'amplification' at all. This is the root of the 'ISO' misunderstanding. The output of the photo isn't a luminance. In film days it was a 'density'; in digital days it's a file value denoting a grey scale. It simply represents the value from black to whiter than white that we want this exposure to represent (white is set a bit grey so we can have convincing light sources and specular reflections in our photos; if they were mapped to the same value as white, they'd just look white). Generally, rather than 'amplification' it is a 'mapping', which maps the set of exposures which the sensor measures to a set of grey-scale values, and which will include gamma correction and very likely a film-like S-curve.

When I talk about "brightness", I was thinking in terms of the nominal values for the color channels in the image file. For example, (200, 200, 200) is "brighter" than (100, 100, 100). Now, I'm sure the mapping is non-linear, so I'm not saying that (200, 200, 200) is twice as bright as (100, 100, 100). But that's what I meant by "brightness". (Note that "amplification" does not necessarily mean "gain", although an analog gain could be part of the amplification).

The word 'amplification' implies that it is linear (unless you qualify that it isn't), and the proposition that it is unitless implies that you are getting the same thing out as goes in. That is false. The photographic reproduction chain falls short of the viewer's eye. It isn't a case of reproducing the light that was incident on the sensor - that is the mistaken thinking that makes people believe that somehow ISO is a magic light amplifier. Best to steer clear of any such misleading terminology. There is no 'amplification' needed - because the output from the photographic process is the analogue of what was in film days a 'density' - simply a filter value from letting no light through (or reflecting no light) to letting some arbitrary amount greater than zero through. Film ISOs are defined in terms of the exposure required to produce a given density, and digital ISOs in terms of the exposure required to produce a given file value. It absolutely is not either 'gain' or 'amplification'. As I said, once you start thinking that way, confusion results, so best not to propagate such confusing terminology.

For example, if we took a photo of a white wall at f/4 1/100 ISO 400 on mFT and f/8 1/100 ISO 1600 on FF, and the ISOs were calibrated the same, then if a pixel for mFT read (100, 100, 100), it would read the same on FF. On the other hand, if the FF photo were taken at f/8 1/100 ISO 400, then the nominal values for the color channels in the image file would be lower -- "less bright".

Yes, because ISO 1600 maps one quarter of the exposure to the same output value (12.7% or 100%, depending on the ISO definition you want to use) as ISO 400 does. There is no 'amplification', just a different mapping, or 'scaling' if you like. The important thing is that what's coming out isn't light, so the idea that there is some unitless 'gain' or 'amplification' in there is dead wrong.
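
To put numbers on 'a different mapping', here is a toy version of such an ISO engine; it loosely follows the idea that ISO S ties the exposure H = 10/S lux-seconds to a mid-grey output, but the 1/2.2 gamma and the 8-bit mid-grey value of 118 are illustrative choices, not the actual standard:

```python
# Toy "ISO engine": maps photometric exposure (lux-seconds) to an 8-bit value.
# Loosely modelled on the idea that ISO S ties exposure H = 10/S to mid-grey;
# the 1/2.2 gamma and the 118 mid-grey value are illustrative, not the standard.
def iso_engine(H_lux_s, iso):
    relative = H_lux_s * iso / 10.0        # 1.0 at the "mid-grey" exposure
    value = 118 * relative ** (1 / 2.2)    # simple gamma-style tone mapping
    return min(255, round(value))

print(iso_engine(10 / 400, 400))     # 118
print(iso_engine(10 / 1600, 1600))   # 118: one quarter of the exposure is
                                     # mapped to the same output value, i.e. a
                                     # different mapping, not amplified light.
```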

OK, this is a little more work. Tell me how you like this:

A certain number of photons fall on a pixel, releasing a certain number of electrons, which generates a charge. A gain may, or may not, be applied to this charge as a function of the ISO setting on the camera. The charge is then converted into a digital number by the ADC (Analog to Digital Converter). The RAW converter (or in-camera JPG engine) processes groups of digital numbers into RGB values for the image file, where the ISO setting on the camera maps these values so that they have the brightness that corresponds to the exposure and ISO setting.

You're making it too complex, getting bogged down on the detail of the technology, rather than what is the process being undertaken. The camera counts the photons, in three different bags (red, green or blue). Then it translates those photon counts into grey scale values. The exact translation done depends on the ISO setting, or to reverse that the translation done determines the ISO. The rest is implementation details. If we could engineer a sensor that directly output a count from each pixel, the basic process would be the same, and no-one would get bogged down in issues of 'ADC' or 'gain' or any of that stuff. That is just the way we engineer a photon counter at present. Film was also a photon counter. A pair of photons would reduce a reactive spot in a silver grain which catalysed its chemical reduction to silver. Each grain represented two photons collected.


Bob

jrtrent
Veteran Member | Posts: 3,411
Re: You missed the question.
In reply to jonas ar, 5 months ago

jonas ar wrote:

EinsteinsGhost wrote:

Because a competent photographer knows exposure is not dependent on format. It is dependent on: ISO, Aperture and Shutter values for a given scene brightness. That is it.

According to wikipedia photographic exposure is the amount of light (the image plane illuminance times the exposure time) reaching a photographic film (and you may substitute film with "sensor" if you like).

I can see how the aperture and shutter speed (and the scene brightness) affect the exposure in this definition, but please do explain how ISO affects the amount of light reaching the sensor.

The light falling on the scene and the speed of the film are conditions that help us determine what exposure settings (aperture and shutter speed combinations) are needed. The sunny f/16 rule tells us that for a front-lit subject on a bright, sunny day, f/16 plus a shutter speed that is the reciprocal of the film speed should result in a good exposure. ISO therefore affects the amount of light we want to reach the film (or sensor).  For Kodachrome 200, you might use f/16 and 1/200 (or some equivalent pair such as f/8 and 1/800); for Fuji Astia, f/16 and 1/100 would be correct (or, again, some other equivalent pair such as f/8 and 1/400).
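
As a small illustrative sketch of that arithmetic (the equivalent pairs are generated by trading whole stops of aperture against shutter speed):

```python
# Sunny f/16 sketch: start at f/16 and 1/ISO, then generate equivalent pairs
# by opening up one stop while halving the exposure time.
def sunny_16_pairs(iso, stops=3):
    f_number, shutter = 16.0, 1.0 / iso
    pairs = []
    for _ in range(stops + 1):
        pairs.append((round(f_number, 1), round(1 / shutter)))
        f_number /= 2 ** 0.5   # one stop wider aperture (2x light)
        shutter /= 2           # one stop shorter time (0.5x light)
    return pairs

print(sunny_16_pairs(200))   # [(16.0, 200), (11.3, 400), (8.0, 800), (5.7, 1600)]
print(sunny_16_pairs(100))   # [(16.0, 100), (11.3, 200), (8.0, 400), (5.7, 800)]
```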

Austinian
Senior Member | Posts: 1,746
Re: You missed the question.
In reply to jonas ar, 5 months ago

jonas ar wrote:

EinsteinsGhost wrote:

Because a competent photographer knows exposure is not dependent on format. It is dependent on: ISO, Aperture and Shutter values for a given scene brightness. That is it.

According to wikipedia photographic exposure is the amount of light (the image plane illuminance times the exposure time) reaching a photographic film (and you may substitute film with "sensor" if you like).

I can see how the aperture and shutter speed (and the scene brightness) affect the exposure in this definition, but please do explain how ISO affects the amount of light reaching the sensor. Or do you not agree with wikipedia on the definition of exposure?

Can you not see that changing the ISO is fundamentally different from changing the amount of light recorded?

Do you really think that we change the exposure when we use the exposure slider in raw processors (or modulate the image brightness of the developed picture in other ways by software)?

I've been waiting for somebody to bring this up. If history is any guide, now the real fun begins.

 Austinian's gear list:
Sony a77 II Sigma 10-20mm F4-5.6 EX DC HSM Sony DT 55-300mm F4.5-5.6 SAM Sony DT 35mm F1.8 SAM Sony DT 16-50mm F2.8 SSM +3 more
jonas ar
Regular Member | Posts: 314
Re: You missed the question.
In reply to jrtrent, 5 months ago

jrtrent wrote:

jonas ar wrote:

EinsteinsGhost wrote:

Because a competent photographer knows exposure is not dependent on format. It is dependent on: ISO, Aperture and Shutter values for a given scene brightness. That is it.

According to wikipedia photographic exposure is the amount of light (the image plane illuminance times the exposure time) reaching a photographic film (and you may substitute film with "sensor" if you like).

I can see how the aperture and shutter speed (and the scene brightness) affect the exposure in this definition, but please do explain how ISO affects the amount of light reaching the sensor.

The light falling on the scene and the speed of the film are conditions that help us determine what exposure settings (aperture and shutter speed combinations) are needed. The sunny f/16 rule tells us that for a front-lit subject on a bright, sunny day, f/16 plus a shutter speed that is the reciprocal of the film speed should result in a good exposure. ISO therefore affects the amount of light we want to reach the film (or sensor). For Kodachrome 200, you might use f/16 and 1/200 (or some equivalent pair such as f/8 and 1/800); for Fuji Astia, f/16 and 1/100 would be correct (or, again, some other equivalent pair such as f/8 and 1/400).

The ISO rating of a film was only specified for a very specific process. If you changed the processing you would change the ISO. Precisely as is the case for digital processing of raw files.

As bobn2 has patiently explained, it may not be a very good practice to let the exposure follow the ISO (instead of letting the ISO follow the exposure), because it will often lead to a lower exposure (and hence more noise) than strictly required.

jrtrent
Veteran Member | Posts: 3,411
Re: You missed the question.
In reply to jonas ar, 5 months ago

jonas ar wrote:

jrtrent wrote:

jonas ar wrote:

EinsteinsGhost wrote:

Because a competent photographer knows exposure is not dependent on format. It is dependent on: ISO, Aperture and Shutter values for a given scene brightness. That is it.

According to wikipedia photographic exposure is the amount of light (the image plane illuminance times the exposure time) reaching a photographic film (and you may substitute film with "sensor" if you like).

I can see how the aperture and shutter speed (and the scene brightness) affect the exposure in this definition, but please do explain how ISO affects the amount of light reaching the sensor.

The light falling on the scene and the speed of the film are conditions that help us determine what exposure settings (aperture and shutter speed combinations) are needed. The sunny f/16 rule tells us that for a front-lit subject on a bright, sunny day, f/16 plus a shutter speed that is the reciprocal of the film speed should result in a good exposure. ISO therefore affects the amount of light we want to reach the film (or sensor). For Kodachrome 200, you might use f/16 and 1/200 (or some equivalent pair such as f/8 and 1/800); for Fuji Astia, f/16 and 1/100 would be correct (or, again, some other equivalent pair such as f/8 and 1/400).

The ISO rating of a film was only specified for a very specific process. If you changed the processing you would change the ISO.

This is true; for example, in dark conditions I would often shoot 400 speed Tri-X at ISO 800, then have the film push-processed.  However, I would set my meter for 800, knowing the very specific process that would follow, and my exposures therefore were still based on a particular ISO.

Precisely as is the case for digital processing of raw files.

I don't currently shoot raw, preferring out-of-camera JPEG's, but when I did, I still set my meter for a particular ISO value and exposed accordingly.  Perhaps you can expand on the role ISO plays, if any, in your shooting and processing procedures.

As bobn2 has patiently explained, it may not be a very good practice to let the exposure follow the ISO (instead of letting the ISO follow the exposure), because it will often lead to a lower exposure (and hence more noise) than strictly required.

On my digital cameras, I have typically found one ISO setting that best balances highlight detail retention and low noise, and I leave my cameras at those respective "best" settings.  On my GX-1S, I like ISO 400 best, so that's what I always shoot that camera at; for an E-450, I liked ISO 200 best; the SD14 was kept at ISO 100 (for color) or ISO 400 (for black and white under low light conditions).  Since the ISO is established, and constant, exposure is based on that.

bobn2
Forum Pro | Posts: 32,245
Re: You missed the question.
In reply to jonas ar, 5 months ago

jonas ar wrote:

jrtrent wrote:

jonas ar wrote:

EinsteinsGhost wrote:

Because a competent photographer knows exposure is not dependent on format. It is dependent on: ISO, Aperture and Shutter values for a given scene brightness. That is it.

According to wikipedia photographic exposure is the amount of light (the image plane illuminance times the exposure time) reaching a photographic film (and you may substitute film with "sensor" if you like).

I can see how the aperture and shutter speed (and the scene brightness) affect the exposure in this definition, but please do explain how ISO affects the amount of light reaching the sensor.

The light falling on the scene and the speed of the film are conditions that help us determine what exposure settings (aperture and shutter speed combinations) are needed. The sunny f/16 rule tells us that for a front-lit subject on a bright, sunny day, f/16 plus a shutter speed that is the reciprocal of the film speed should result in a good exposure. ISO therefore affects the amount of light we want to reach the film (or sensor). For Kodachrome 200, you might use f/16 and 1/200 (or some equivalent pair such as f/8 and 1/800); for Fuji Astia, f/16 and 1/100 would be correct (or, again, some other equivalent pair such as f/8 and 1/400).

The ISO rating of a film was only specified for a very specific process. If you changed the processing you would change the ISO. Precisely as is the case for digital processing of raw files.

As bobn2 has patiently explained, it may not be a very good practice to let the exposure follow the ISO (instead of letting the ISO follow the exposure), because it will often lead to a lower exposure (and hence more noise) than strictly required.

I'm glad there's someone here who gets it.


Bob

bobn2
Forum Pro | Posts: 32,245
Re: You missed the question.
In reply to jrtrent, 5 months ago

jrtrent wrote:

jonas ar wrote:

jrtrent wrote:

jonas ar wrote:

EinsteinsGhost wrote:

Because a competent photographer knows exposure is not dependent on format. It is dependent on: ISO, Aperture and Shutter values for a given scene brightness. That is it.

According to wikipedia photographic exposure is the amount of light (the image plane illuminance times the exposure time) reaching a photographic film (and you may substitute film with "sensor" if you like).

I can see how the aperture and shutter speed (and the scene brightness) affect the exposure in this definition, but please do explain how ISO affects the amount of light reaching the sensor.

The light falling on the scene and the speed of the film are conditions that help us determine what exposure settings (aperture and shutter speed combinations) are needed. The sunny f/16 rule tells us that for a front-lit subject on a bright, sunny day, f/16 plus a shutter speed that is the reciprocal of the film speed should result in a good exposure. ISO therefore affects the amount of light we want to reach the film (or sensor). For Kodachrome 200, you might use f/16 and 1/200 (or some equivalent pair such as f/8 and 1/800); for Fuji Astia, f/16 and 1/100 would be correct (or, again, some other equivalent pair such as f/8 and 1/400).

The ISO rating of a film was only specified for a very specific process. If you changed the processing you would change the ISO.

This is true; for example, in dark conditions I would often shoot 400 speed Tri-X at ISO 800, then have the film push-processed. However, I would set my meter for 800, knowing the very specific process that would follow, and my exposures therefore were still based on a particular ISO.

Precisely as is the case for digital processing of raw files.

I don't currently shoot raw, preferring out-of-camera JPEG's, but when I did, I still set my meter for a particular ISO value and exposed accordingly. Perhaps you can expand on the role ISO plays, if any, in your shooting and processing procedures.

As bobn2 has patiently explained, it may not be a very good practice to let the exposure follow the ISO (instead of letting the ISO follow the exposure), because it will often lead to a lower exposure (and hence more noise) than strictly required.

On my digital cameras, I have typically found one ISO setting that best balances highlight detail retention and low noise, and I leave my cameras at those respective "best" settings. On my GX-1S, I like ISO 400 best, so that's what I always shoot that camera at; for an E-450, I liked ISO 200 best; the SD14 was kept at ISO 100 (for color) or ISO 400 (for black and white under low light conditions). Since the ISO is established, and constant, exposure is based on that.

You are pretty firmly wedded to the film emulation methodology, particularly if using in-camera JPEGs, because that is what they are based on. That means effectively that you preselect the exposure that you want to use with your film or ISO selection. Once you do that, what you're aiming to do when you set the exposure is ensure that the output tonality is 'correct'. However, that way of working is based on the idea that once you make the film and processing decision, that decision is fixed for the roll. Simply not true for digital, so you can choose to say that the fine tuning of tonality will be done in processing, given that we have tools that allow just that, and you'll set exposure to give you the most information for the processing to work on. This means maximising SNR, which means maximising exposure (the real exposure, not 'brightness' or ISO). One method, but not the only one, is ETTR. The other is simply to set your exposure to the maximum subject to any other constraints that you are giving yourself (DOF, motion blur, camera FWC) and set the ISO control so you don't lose the highlights.
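
A sketch of that decision order; all the limit values below are hypothetical examples rather than settings from any real camera:

```python
import math

# Sketch of "maximise exposure first, then let the ISO follow": every limit
# below is a hypothetical example value, not taken from any real camera.
def choose_settings(dof_limit_f=4.0,       # widest f-number the wanted DOF allows
                    motion_limit_s=1/60,   # longest shutter the subject allows
                    scene_ev100=6.0):      # metered scene EV at ISO 100
    f_number = dof_limit_f                 # open up as far as DOF permits
    shutter = motion_limit_s               # expose as long as motion permits
    ev_of_settings = math.log2(f_number ** 2 / shutter)
    # ISO only makes up the shortfall so the highlights land where we want;
    # the exposure itself (light on the sensor) is already as large as allowed.
    iso = 100 * 2 ** (ev_of_settings - scene_ev100)
    return f_number, shutter, round(max(100, iso))

print(choose_settings())   # roughly (4.0, 1/60, 1500) for this dim example scene
```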


Bob

Lee Jay
Forum Pro | Posts: 45,253
Why worship exposure?
In reply to EinsteinsGhost, 5 months ago

EinsteinsGhost wrote:

bobn2 wrote:

EinsteinsGhost wrote:

bobn2 wrote:

Albert Silver wrote:

tko wrote:

Remember that F4.0 is considered kind of slow on FF, but is equal to F2.0 on M43rds, which is considered "fast."

That's not entirely accurate. You are describing the depth of field equivalence, from one sensor to the next, not the light. f/2 on a m43rds may have the depth of field of f/4 on a full-frame, but the light will still be f/2.

The 'light' of an FF f/4 and an FT f/2 will be the same, which is the point he is making. In the end, given equally efficient sensors, you can achieve the same result at the same shutter speed using an f/4 on FF as you can using an f/2 on FT. The density of the light of the f/4 is one quarter, but there is a sensor four times the area to collect it, so it ends up the same.


Bob

The point tko is making is wrong. DOF equivalence applies, exposure equivalence does not (for the reason you state above).

Whether the point tko is making is wrong or not depends entirely on what you think is the definition of 'fast'. If you think that 'fast' is to do with exposure when comparing between formats, then he is wrong. However, that's not a very sensible point of view, so if you assume that he thinks that 'fast' means 'puts more light on the sensor' then he is right.

Fast has only one meaning: Shutter Speed, as in exposure time, the time value part of exposure. And that makes you just as much wrong as tko.

Photographic exposure is independent of media size.

The fastest speed at which I can set my shutter speed is limited by the amount of noise I'll accept in the final image.

The noise is driven by two things - the technology used in the sensor and the rest of the camera pipeline, and the TOTAL amount of light that falls on the sensor.

It makes no difference if that light is thinly spread out (slow f-stop, large sensor) or concentrated down (fast f-stop, small sensor). This is why f-stop equivalence holds across sensor sizes and why, say, ISO 400 and f/2 on 4/3 is equivalent to ISO 1600 and f/4 on full frame - they both produce the same image with the same noise from the same shutter speed given the same technology.
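
A quick numeric check of that claim; the sensor areas are approximate and the base photon density is an arbitrary illustrative figure:

```python
# Total-light comparison: f/2 on Four Thirds vs f/4 on full frame, same scene
# and shutter speed. Areas are approximate; the base photon density is made up.
FT_AREA, FF_AREA = 225.0, 864.0            # sensor areas in mm^2 (approx.)

def total_photons(area_mm2, f_number, density_at_f2=1000.0):
    density = density_at_f2 * (2.0 / f_number) ** 2   # density ~ 1/f_number^2
    return area_mm2 * density

print(total_photons(FT_AREA, 2.0))   # 225,000 photons
print(total_photons(FF_AREA, 4.0))   # 216,000 photons (the small gap is because
                                     # the crop factor is ~1.96, not exactly 2)
# Nearly equal total light means nearly equal shot noise, which is why
# ISO 400 f/2 on 4/3 behaves like ISO 1600 f/4 on full frame.
```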


Lee Jay

 Lee Jay's gear list:
Canon IXUS 310 HS Canon PowerShot SX260 HS Canon EOS 5D Canon EOS 20D Canon EOS 550D +23 more
John Sheehy
Forum Pro | Posts: 16,299
Re: Understanding ISO
In reply to Great Bustard, 5 months ago

Great Bustard wrote:

I've always wondered what the difference in QE is between the different color channels -- do you know?

If you shoot a gray card under mid-day sun, you get 1/2 stop less light captured in the blue pixels, and 1 stop less in the red ones, typically. And of course, if our ideal is a wavelength-sorting sensor rather than a CFA, only 1/4 of the pixels are that sensitive to red, and the other 3/4 are far less sensitive to red.

2-3 electrons is not a big issue at base ISO, with big pixels, but for small pixels and high ISOs it is tremendous read noise.

This is why I said below:

For sure, the read noise represents a kind of floor beyond which the noise in the photo quickly diminishes.

Did you mean "above which"?

Well, yes, statistically, read noise diminishes greatly above the so-called noise floor at base ISO, but it can still be quite visible when it is correlated or has banding. Otherwise, ideal Poisson noise and Gaussian noise are indistinguishable as photon populations get above 12 (12 is the threshold that many Poisson noise generators use; above 12 mean photons, they typically use a Gaussian noise generator because it is faster). I've seen banding and blotching in highlight areas of base ISO when using a lot of sharpening, and contrast or saturation boosts.
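
That cross-over is easy to check numerically; a small sketch (assumes numpy is available):

```python
import numpy as np

# Shot-noise sketch: Poisson photon counts vs a Gaussian with matching mean
# and variance, at a few mean photon counts.
rng = np.random.default_rng(0)
for mean in (3, 12, 100):
    poisson = rng.poisson(mean, 100_000).astype(float)
    gauss = rng.normal(mean, mean ** 0.5, 100_000)
    skew = float(((poisson - poisson.mean()) ** 3).mean() / poisson.std() ** 3)
    print(f"mean={mean:3d}  sd_poisson={poisson.std():.2f}  "
          f"sd_gauss={gauss.std():.2f}  poisson_skew={skew:.2f}")
# By a mean of about 12 photons the Poisson skew is already small and the two
# distributions become hard to tell apart, as described above.
```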

And of course, real world read noise does not look like gaussian noise created by a computer; it is highly correlated, visually, even if not significantly so, as a statistic.

I'd be interested in more info on that. I've heard others say that read noise is pretty much gaussian (assuming we don't count banding as noise, since it is systematic).

This is one of the bad sides of knowledge, IMO. Lots of folks have learned about signal acquisition and processing in school or in their jobs, but a lot of times knowledge makes people too quick to deal with phenomena and close the case. Yes, the histogram of read noise is often close to pure Gaussian, but that tells you nothing about how the noise interferes with our perception of images. We don't see monolithic histograms of noise; we see noise particles of various shapes and sizes interfering with the discerning of detail in the subject, or creating distractions.

Imagine that you had a black frame from a camera with crazy low-frequency noise blotches and thick, heavy banding that caused different color casts in different parts of the frame. Now, imagine that the same exact pixels were in another frame, with the same exact histogram, but you randomly scrambled the pixels in the image so that all of the patterns of the original were gone. They have exactly the same histogram, possibly even near-perfectly Gaussian, but would they look anything alike? They would look different enough zoomed into 100% pixel view, but reduce them in size to the size of a business card, and the original is just a bunch of lumps and bands, while the scrambled/randomized one with exactly the same population of pixels and histograms is much closer to black. Histograms are only meaningful when the character of noise is already known and accounted for. This is why I mentioned to Jim Kasson that generating Gaussian noise in simulations is overly optimistic; it is more practical to use real-world black frames to simulate read noise.
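
The scrambling argument can be reproduced directly; a minimal sketch with made-up banding (assumes numpy):

```python
import numpy as np

# Same histogram, different picture: a "black frame" with banding versus the
# same pixel values randomly shuffled. All values are made up for illustration.
rng = np.random.default_rng(1)
h, w = 200, 300
banded = rng.normal(0, 2, (h, w)) + 4 * np.sin(np.arange(h) / 3)[:, None]
shuffled = rng.permutation(banded.ravel()).reshape(h, w)

# Identical histograms (same multiset of pixel values)...
print(np.allclose(np.sort(banded.ravel()), np.sort(shuffled.ravel())))  # True
# ...but very different low-frequency structure, e.g. row-to-row swings:
print(float(banded.mean(axis=1).std()))    # ~2.8: strong banding survives downsizing
print(float(shuffled.mean(axis=1).std()))  # ~0.2: the structure is gone
# Only the first frame shows bands when viewed small, which is the point
# being made about the "character" of noise versus its histogram.
```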

That limit is much farther away, and is only limited in regard to how much we want to magnify the capture in display. Pure photon captures can work very well with very, very tiny signals if you do not need to display them large. They have no contrast issues in near-blacks, and no linearity problems down there, either.

The thing is, though, while lower read noise will certainly push the noise floor further out, the photo will still be rather noisy due to low signal. For example, while we may be able to shift the "noise floor cliff" from ISO 25600 to 102400 with lower read noise, we're never going to make ISO 25600 look as good as ISO 3200.

ISO 25600 will never have as many photons as ISO 3200, except maybe in the red or blue channel with an ideal wavelength-sorting filter system. The shadows of ISO 25600, however, would be extremely usable on a FF camera with no read noise.

Any read noise standard deviation higher than about 0.15 interferes with very high ISOs. 0.15 would be visible at very high ISOs, but since at 0.15 each photon count has its own virtually distinct histogram not overlapping the higher and lower photon counts' histograms, one could reduce all the values in the bell curve to a single value with a very high level of confidence. And of course, your digital gain must be high enough to render the individual bell-curves. I'd prefer a magical black-box photon counter and wavelength recorder, of course.
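
A quick simulation of that threshold, with hypothetical signal and read-noise figures (assumes numpy):

```python
import numpy as np

# Sketch: recovering exact photon counts by rounding, given sub-electron read
# noise. The signal level and the noise values are hypothetical.
rng = np.random.default_rng(2)
true_counts = rng.poisson(2.0, 1_000_000)              # a very dark exposure
for sigma in (0.15, 0.5):                              # read noise in electrons
    measured = true_counts + rng.normal(0, sigma, true_counts.size)
    wrong = float(np.mean(np.rint(measured) != true_counts))
    print(f"read noise {sigma} e-: {wrong:.3%} of pixels mis-counted")
# Roughly 0.09% of pixels land on the wrong count at 0.15 e-, versus ~32% at
# 0.5 e-: below about 0.15 e- the per-count histograms barely overlap and
# quantising back to whole photons becomes reliable.
```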

IMO, it is futile to try to predict the usability of extremely high ISOs with only photon noise based on monolithic SNR values as would exist in gray patches of various intensity, and assuming that photon noise is just like read noise but is proportional to the square root of the signal. The differences go far beyond that, because image subjects are not big gray squares. Images are modulating signals, where a thin white hair can exist against a black background, and with pure shot noise, there is no loud noise in the black part next to the white hair to obscure the hair. The white may not even be contiguous with a low signal level, but it still clearly suggests the line. I don't know where it is now, but a few years back I took a shot into an abandoned warehouse through a hole in a piece of wood in a window, with spiderwebs beyond the hole, and I decided when I got home to simulate, based on the camera's QE, what it would look like at ISO 6 million with only photon noise (of course, the simulation also included the noise from the actual capture, so it was worse than it would be with pure shot noise). Guess what; I could see the threads in the spiderwebs at 100%, against a dark background. At 6x4 inches or so on my monitor, you had to stare at the image to notice any noise, and it looked more like very fine sand art than it did a typical high-ISO digital capture.

Great Bustard
Forum Pro | Posts: 24,681
Re: Exposure and brightness.
In reply to bobn2, 5 months ago

bobn2 wrote:

Great Bustard wrote:

OK, this is a little more work. Tell me how you like this:

A certain number of photons fall on a pixel, releasing a certain number of electrons, which generates a charge. A gain may, or may not, be applied to this charge as a function of the ISO setting on the camera. The charge is then converted into a digital number by the ADC (Analog to Digital Converter). The RAW converter (or in-camera JPG engine) processes groups of digital numbers into RGB values for the image file, where the ISO setting on the camera maps these values so that they have the brightness that corresponds to the exposure and ISO setting.

You're making it too complex, getting bogged down on the detail of the technology, rather than what is the process being undertaken.

Is that not necessary, however?  I mean, if people are to understand what an ISOless sensor is all about, for example, then the ADC needs to be discussed, as well as understanding why, for example, a photo at f/2.8 1/100 ISO 1600 is less noisy than a photo of the same scene at f/2.8 1/100 ISO 100 pushed four stops for a sensor that is not ISOless.
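
One way to see that difference is with a toy model of a non-ISO-invariant sensor, i.e. one with noise both before and after the ISO-dependent gain; all the signal, gain and noise figures below are hypothetical:

```python
import numpy as np

# Toy non-ISO-invariant sensor: Poisson shot noise, upstream read noise,
# ISO-dependent analogue gain, then a fixed amount of downstream (post-gain)
# noise. Every number here is hypothetical.
rng = np.random.default_rng(3)
SIGNAL_E, PRE_NOISE_E, POST_NOISE_DN = 20.0, 2.0, 8.0

def capture(iso, n=200_000):
    gain = iso / 100.0                                   # DN per electron (toy)
    electrons = rng.poisson(SIGNAL_E, n) + rng.normal(0, PRE_NOISE_E, n)
    dn = electrons * gain + rng.normal(0, POST_NOISE_DN, n)
    return dn / gain     # refer back to electrons; a later software push scales
                         # signal and noise together, so this comparison is fair

print(f"ISO 1600 directly:       std = {capture(1600).std():.2f} e-")
print(f"ISO 100, pushed 4 stops: std = {capture(100).std():.2f} e-")
# The pushed ISO 100 frame is noisier because the fixed downstream noise is
# 16x larger once referred back to the signal; on a truly ISO-invariant
# ("ISOless") sensor the two results would match.
```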

The camera counts the photons, in three different bags (red, green or blue). Then it translates those photon counts into grey scale values. The exact translation done depends on the ISO setting, or to reverse that the translation done determines the ISO. The rest is implementation details.

Sure, but at least some of the details are important, methinks.

If we could engineer a sensor that directly output a count from each pixel, the basic process would be the same, and no-one would get bogged down in issues of 'ADC' or 'gain' or any of that stuff. That is just the way we engineer a photon counter at present. Film was also a photon counter. A pair of photons would reduce a reactive spot in a silver grain which catalysed its chemical reduction to silver. Each grain represented two photons collected.

Ideally, of course, we would have a sensor that recorded every photon that landed on it along with the wavelength (and momentum) of each photon, and that added no additional electronic noise.

However, in my opinion, I think the processing chain, from the capture to the image file, is necessary to describe as it does apply to how current cameras are used.  Like I said, there is a reason, other than operational convenience, to use ISO 1600 rather than ISO 100 and push, on many cameras, and that shouldn't be glossed over.

Great Bustard
Forum Pro | Posts: 24,681
Re: Understanding ISO
In reply to John Sheehy, 5 months ago

John Sheehy wrote:

Great Bustard wrote:

I've always wondered what the difference in QE is between the different color channels -- do you know?

If you shoot a gray card under mid-day sun, you get 1/2 stop less light captured in the blue pixels, and 1 stop less in the red ones, typically.

But is this due to differences in the transmission of the colors in the CFA, or due to the differences in the amount of different wavelengths of light falling on the sensor?

For example, the QE in the green channel is 50% on most modern sensors.  My understanding is that means that half the light falling on the green dye over a pixel is recorded.  Is that the same for pixels under the red and blue dyes?

And of course, if our ideal is a wavelength-sorting sensor rather than a CFA, only 1/4 of the pixels are that sensitive to red, and the other 3/4 are far less sensitive to red.

2-3 electrons is not a big issue at base ISO, with big pixels, but for small pixels and high ISOs it is tremendous read noise.

This is why I said below:

For sure, the read noise represents a kind of floor beyond which the noise in the photo quickly diminishes.

Did you mean "above which"?

Well, yes, statistically, read noise diminishes greatly above the so-called noise floor at base ISO, but it can still be quite visible when it is correlated or has banding. Otherwise, ideal Poisson noise and Gaussian noise are indistinguishable as photon populations get above 12 (12 is the threshold that many Poisson noise generators use; above 12 mean photons, they typically use a Gaussian noise generator because it is faster). I've seen banding and blotching in highlight areas of base ISO when using a lot of sharpening, and contrast or saturation boosts.

Again, I'm not discussing banding.  This is a separate issue from noise as it is systematic as opposed to random.

And of course, real world read noise does not look like gaussian noise created by a computer; it is highly correlated, visually, even if not significantly so, as a statistic.

I'd be interested in more info on that. I've heard others say that read noise is pretty much gaussian (assuming we don't count banding as noise, since it is systematic).

This is one of the bad sides of knowledge, IMO. Lots of folks have learned about signal acquisition and processing in school or in their jobs, but a lot of times knowledge makes people too quick to deal with phenomena and close the case. Yes, the histogram of read noise is often close to pure Gaussian, but that tells you nothing about how the noise interferes with our perception of images. We don't see monolithic histograms of noise; we see noise particles of various shapes and sizes interfering with the discerning of detail in the subject, or creating distractions.

If read noise is Gaussian, then how does that differ, qualitatively, from photon noise which is Poissonian?

Imagine that you had a black frame from a camera with crazy low-frequency noise blotches and thick, heavy banding that caused different color casts in different parts of the frame.

Does that represent Gaussian noise, though?

Now, imagine that the same exact pixels were in another frame, with the same exact histogram, but you randomly scrambled the pixels in the image so that all of the patterns of the original were gone. They have exactly the same histogram, possibly even near-perfectly Gaussian, but would they look anything alike? They would look different enough zoomed into 100% pixel view, but reduce them in size to the size of a business card, and the original is just a bunch of lumps and bands, while the scrambled/randomized one with exactly the same population of pixels and histograms is much closer to black. Histograms are only meaningful when the character of noise is already known and accounted for. This is why I mentioned to Jim Kasson that generating Gaussian noise in simulations is overly optimistic; it is more practical to use real-world black frames to simulate read noise.

I agree that the theory must correspond to reality.  However, as I said, banding is a separate issue from read noise.  So, taking banding out of the equation, what is bad about read noise that is not also bad about photon noise?

That limit is much farther away, and is only limited in regard to how much we want to magnify the capture in display. Pure photon captures can work very well with very, very tiny signals if you do not need to display them large. They have no contrast issues in near-blacks, and no linearity problems down there, either.

The thing is, though, while lower read noise will certainly push the noise floor further out, the photo will still be rather noisy due to low signal. For example, while we may be able to shift the "noise floor cliff" from ISO 25600 to 102400 with lower read noise, we're never going to make ISO 25600 look as good as ISO 3200.

ISO 25600 will never have as many photons as ISO 3200, except maybe in the red or blue channel with an ideal wavelength-sorting filter system.

You mean it's possible to record 8x more red and blue light than now?

The shadows of ISO 25600, however, would be extremely usable on a FF camera with no read noise.

As I said, the read noise represents a kind of floor for the highest "usable" ISO setting.

Any read noise standard deviation higher than about 0.15 interferes with very high ISOs. 0.15 would be visible at very high ISOs, but since at 0.15 each photon count has its own virtually distinct histogram not overlapping the higher and lower photon counts' histograms, one could reduce all the values in the bell curve to a single value with a very high level of confidence. And of course, your digital gain must be high enough to render the individual bell-curves. I'd prefer a magical black-box photon counter and wavelength recorder, of course.

I don't understand any of that.  Could you explain in more detail, please?

IMO, it is futile to try to predict the usability of extremely high ISOs with only photon noise based on monolithic SNR values as would exist in gray patches of various intensity, and assuming that photon noise is just like read noise but is proportional to the square root of the signal. The differences go far beyond that, because image subjects are not big gray squares. Images are modulating signals, where a thin white hair can exist against a black background, and with pure shot noise, there is no loud noise in the black part next to the white hair to obscure the hair. The white may not even be contiguous with a low signal level, but it still clearly suggests the line. I don't know where it is now, but a few years back I took a shot into an abandoned warehouse through a hole in a piece of wood in a window, with spiderwebs beyond the hole, and I decided when I got home to simulate, based on the camera's QE, what it would look like at ISO 6 million with only photon noise (of course, the simulation also included the noise from the actual capture, so it was worse than it would be with pure shot noise). Guess what; I could see the threads in the spiderwebs at 100%, against a dark background. At 6x4 inches or so on my monitor, you had to stare at the image to notice any noise, and it looked more like very fine sand art than it did a typical high-ISO digital capture.

I like to think of read noise as like the CMB (Cosmic Microwave Background) -- it represents a limit as to how low the light can be before you can see structure, just as the CMB limits how far back in time we can see.

However, that structure will still be rather noisy due to the low signal, just as, even if there were no CMB (or our own galaxy in the way), we still could not sharply resolve the structures we could then see.

bobn2
Forum Pro | Posts: 32,245
Re: Exposure and brightness.
In reply to Great Bustard, 5 months ago

Great Bustard wrote:

bobn2 wrote:

Great Bustard wrote:

OK, this is a little more work. Tell me how you like this:

A certain number of photons fall on a pixel, releasing a certain number of electrons, which generates a charge. A gain may, or may not, be applied to this charge as a function of the ISO setting on the camera. The charge is then converted into a digital number by the ADC (Analog to Digital Converter). The RAW converter (or in-camera JPG engine) processes groups of digital numbers into RGB values for the image file, where the ISO setting on the camera maps these values so that they have the brightness that corresponds to the exposure and ISO setting.

You're making it too complex, getting bogged down on the detail of the technology, rather than what is the process being undertaken.

Is that not necessary, however? I mean, if people are to understand what an ISOless sensor is all about, for example, then the ADC needs to be discussed, as well as understanding why, for example, a photo at f/2.8 1/100 ISO 1600 is less noisy than a photo of the same scene at f/2.8 1/100 ISO 100 pushed four stops for a sensor that is not ISOless.

They are important when they are important, but not before they are important. The problem we have is that people think that ISO is the implementation details. That misunderstanding is not their fault, it is because a great many sources, including this site have told them that it is so. So, the first thing people need to get is what ISO actually is. Once they have that, they might want to know about how the ISO control works on their camera, because it may have side effects, which are not ISO but are worth knowing. I would say that those side effects are not relevant in terms of this discussion. The difference really is between there being no ill-effect of 'turning up the ISO' and it being beneficial. The mindset equivalence deniers tend to have is that turning up the ISO as of itself causes 'ISO noise'.

The camera counts the photons, in three different bags (red, green or blue). Then it translates those photon counts into grey scale values. The exact translation done depends on the ISO setting, or to reverse that the translation done determines the ISO. The rest is implementation details.

Sure, but at least some of the details are important, methinks.

Depends on the discussion whether or not they are important. In terms of this discussion, I'd think they add unnecessary detail, not needed unless someone raises the old saw about 'amplifier noise'. As I said, first get across what ISO actually is, then look at implementations details on a case by case basis. Lumping them together just makes people think that ISO is 'amplification', which is where we came in.

If we could engineer a sensor that directly output a count from each pixel, the basic process would be the same, and no-one would get bogged down in issues of 'ADC' or 'gain' or any of that stuff. That is just the way we engineer a photon counter at present. Film was also a photon counter. A pair of photons would reduce a reactive spot in a silver grain which catalysed its chemical reduction to silver. Each grain represented two photons collected.

Ideally, of course, we would have a sensor that recorded every photon that landed on it along with the wavelength (and momentum) of each photon, and that added no additional electronic noise.

However, in my opinion, I think the processing chain, from the capture to the image file, is necessary to describe as it does apply to how current cameras are used. Like I said, there is a reason, other than operational convenience, to use ISO 1600 rather than ISO 100 and push, on many cameras, and that shouldn't be glossed over.

You have to be very sure not to mix the what with the how to the extent that people think that 'ISO' is amplification, because that is where a lot of these misunderstandings come from. In my experience as an educator, I would say, try to teach one concept at a time. First let's get an understanding of the black box 'ISO Engine', which takes a photon count and translates it to a grey scale value. That is something most people simply don't have. They think ISO magnifies the light. When people are clear what the ISO engine does, they can learn about the innards as they are today. It's like telling a learner driver what the engine does and thinking that then you have to go straight into pistons and camshafts and stuff.


Bob

bobn2
Forum Pro | Posts: 32,245
Re: Understanding ISO
In reply to Great Bustard, 5 months ago

Great Bustard wrote:

I agree that the theory must correspond to reality. However, as I said, banding is a separate issue from read noise. So, taking banding out of the equation, what is bad about read noise that is not also bad about photon noise?

That much is quite straight forward. Photon noise is less as the signal is less, while read noise is there whatever the signal level. So, with zero read noise a sensor has an infinite dynamic range, whatever the signal level. That dynamic range applies even if the signal level is very low (very low exposures). However, the result of very low exposures is that the brighter parts will still appear noisy, even if the darker parts have no noise (with in between parts having in between noise). Essentially it's like printing the image through a random texture mask, which is not unattractive. Read noise adds a layer of random grey (or worse, mixed colours) over that, which messes up the shadows and leads to a very low SNR.
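
Putting rough numbers on that; the 3 e- read noise is just an example value:

```python
# SNR sketch: shot noise grows as the square root of the signal, read noise
# does not. Signal in electrons; the 3 e- read noise is just an example value.
READ_NOISE = 3.0
for signal in (1, 4, 16, 100, 1000):
    snr_shot_only = signal / signal ** 0.5                       # sqrt(signal)
    snr_with_read = signal / (signal + READ_NOISE ** 2) ** 0.5
    print(f"{signal:5d} e-:  SNR {snr_shot_only:6.1f} (no read noise)   "
          f"{snr_with_read:6.1f} (with read noise)")
# With zero read noise even a 1 e- signal still has SNR 1 and deep shadows
# remain usable; with read noise the shadows collapse while the brighter
# parts are barely affected, exactly the behaviour described above.
```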


Bob

Great Bustard
Forum Pro | Posts: 24,681
Re: Exposure and brightness.
In reply to bobn2, 5 months ago

bobn2 wrote:

Great Bustard wrote:

bobn2 wrote:

Great Bustard wrote:

OK, this is a little more work. Tell me how you like this:

A certain number of photons fall on a pixel, releasing a certain number of electrons, which generates a charge. A gain may, or may not, be applied to this charge as a function of the ISO setting on the camera. The charge is then converted into a digital number by the ADC (Analog to Digital Converter). The RAW converter (or in-camera JPG engine) processes groups of digital numbers into RGB values for the image file, where the ISO setting on the camera maps these values so that they have the brightness that corresponds to the exposure and ISO setting.

You're making it too complex, getting bogged down on the detail of the technology, rather than what is the process being undertaken.

Is that not necessary, however? I mean, if people are to understand what an ISOless sensor is all about, for example, then the ADC needs to be discussed, as well as understanding why, for example, a photo at f/2.8 1/100 ISO 1600 is less noisy than a photo of the same scene at f/2.8 1/100 ISO 100 pushed four stops for a sensor that is not ISOless.

They are important when they are important, but not before they are important.

The problem we have is that people think that ISO is the implementation details. That misunderstanding is not their fault, it is because a great many sources, including this site have told them that it is so. So, the first thing people need to get is what ISO actually is. Once they have that, they might want to know about how the ISO control works on their camera, because it may have side effects, which are not ISO but are worth knowing.

OK -- I can absolutely agree with that.

I would say that those side effects are not relevant in terms of this discussion. The difference really is between there being no ill-effect of 'turning up the ISO' and it being beneficial. The mindset equivalence deniers tend to have is that turning up the ISO as of itself causes 'ISO noise'.

Sure. But after that, I do think it's important to discuss the role the camera's ISO setting plays in read noise via the ADC.

The camera counts the photons, in three different bags (red, green or blue). Then it translates those photon counts into grey scale values. The exact translation done depends on the ISO setting, or to reverse that the translation done determines the ISO. The rest is implementation details.

Sure, but at least some of the details are important, methinks.

Depends on the discussion whether or not they are important. In terms of this discussion, I'd think they add unnecessary detail, not needed unless someone raises the old saw about 'amplifier noise'. As I said, first get across what ISO actually is, then look at implementations details on a case by case basis. Lumping them together just makes people think that ISO is 'amplification', which is where we came in.

OK, sure.

If we could engineer a sensor that directly output a count from each pixel, the basic process would be the same, and no-one would get bogged down in issues of 'ADC' or 'gain' or any of that stuff. That is just the way we engineer a photon counter at present. Film was also a photon counter. A pair of photons would reduce a reactive spot in a silver grain which catalysed its chemical reduction to silver. Each grain represented two photons collected.

Ideally, of course, we would have a sensor that recorded every photon that landed on it along with the wavelength (and momentum) of each photon, and that added no additional electronic noise.

However, in my opinion, I think the processing chain, from the capture to the image file, is necessary to describe as it does apply to how current cameras are used. Like I said, there is a reason, other than operational convenience, to use ISO 1600 rather than ISO 100 and push, on many cameras, and that shouldn't be glossed over.

You have to be very sure not to mix the what with the how to the extent that people think that 'ISO' is amplification, because that is where a lot of these misunderstandings come from. In my experience as an educator, I would say, try to teach one concept at a time. First let's get an understanding of the black box 'ISO Engine', which takes a photon count and translates it to a grey scale value. That is something most people simply don't have. They think ISO magnifies the light. When people are clear what the ISO engine does, they can learn about the innards as they are today. It's like telling a learner driver what the engine does and thinking that then you have to go straight into pistons and camshafts and stuff.

I hear what you're saying, and it appears that I was using the term "amplification" in a misleading manner. I simply meant that higher ISOs would result in higher values in the image file, not that there was a linear mapping or that higher ISO settings increased the light.

Do you think it's correct to say that the density of light falling on the sensor (photons / mm²) is mapped into the exposure via the luminosity function which is then mapped into the image file by an ISO function?

Great Bustard
Forum Pro | Posts: 24,681
Re: Understanding ISO
In reply to bobn2, 5 months ago

bobn2 wrote:

Great Bustard wrote:

I agree that the theory must correspond to reality. However, as I said, banding is a separate issue from read noise. So, taking banding out of the equation, what is bad about read noise that is not also bad about photon noise?

That much is quite straight forward. Photon noise is less as the signal is less, while read noise is there whatever the signal level. So, with zero read noise a sensor has an infinite dynamic range, whatever the signal level. That dynamic range applies even if the signal level is very low (very low exposures). However, the result of very low exposures is that the brighter parts will still appear noisy, even if the darker parts have no noise (with in between parts having in between noise). Essentially it's like printing the image through a random texture mask, which is not unattractive. Read noise adds a layer of random grey (or worse, mixed colours) over that, which messes up the shadows and leads to a very low SNR.

Yes!  That's exactly what I was trying to say when I said read noise represents a kind of noise floor and my analogy to the CMB.

brian
Senior Member | Posts: 1,002
Re: Fast lenses, and High ISO
In reply to 67gtonr, 5 months ago

67gtonr wrote:

67gtonr wrote:

EinsteinsGhost wrote:

67gtonr wrote:

EinsteinsGhost wrote:

Fast lenses, as they are called, allow more light to hit the sensor and in turn allow for fast(er) shutter speeds. The f-number provides a relative measure of this ability.

In this age of ever increasing ISO, are fast lenses needed anymore? The only ability I see the fast lenses provide was actually a disadvantage that happened to become a feature, and that is shallow DoF, allowing for separation of subject from the background.

That said, should they be called Fast Lenses or Shallow Lenses

Now, I took some available-light family portraits 750 ft below ground at Carlsbad Caverns recently using a Sony NEX-6. There isn't much light there (in this case, there was some light from the cafe, but it was still too dark to see in person). I had a Minolta 50/1.4 on a Speedbooster, which gave me an effective 35mm f/1 lens. I used ISO 3200 and still had only 1/30s for shutter speed (I could have improved it a bit with spot metering, but some of the ambiance would be lost). The scene brightness value was -5 EV. This is an example of where a superfast lens and high ISO capability worked better combined than either alone.

Would I have loved having Sony a7s instead? You bet! Would have gotten same exposure at ISO 6400 which is impressively clean and composed in that camera.

I am confused by this, I thought when a Speedbooster is used on an APS-C sensored camera it essentially negated the crop factor, so instead of your 50/1.4 shooting as a 75/1.8 it shoots 50/1.4, approximately?

It is not an exact compensation (IIRC, 0.71x, so almost). So, using 50/1.4 via speed booster is like using 35/1 without it (the resulting FOV and DOF will be that of about 50/1.5 on FF).

So the Speedbooster gave you a 50/1.5 not a 35/1, if you had used the same lens and Speedbooster on a full frame camera it would have given you 35/1.

I see what you're saying: that the lens & booster gave you the same results as using a 35/1 lens on your NEX.

A 50mm f/1.4 lens plus a 0.71x focal reducer *is* a 35mm f/1.0 lens.  Pure and simple, no ifs, ands, or buts.
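
The arithmetic behind that statement, as a tiny sketch:

```python
# Focal reducer arithmetic: a 0.71x reducer shortens the focal length while
# the entrance pupil diameter stays the same, so the f-number drops too.
def reduce(focal_mm, f_number, factor=0.71):
    pupil_mm = focal_mm / f_number
    new_focal = focal_mm * factor
    return new_focal, new_focal / pupil_mm

print(reduce(50, 1.4))   # ~(35.5, 0.99): effectively a 35mm f/1.0 lens
```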


Brian Caldwell

EinsteinsGhost
Forum Pro | Posts: 11,977
Re: Aside from DOF and noise...
In reply to Great Bustard, 5 months ago

Great Bustard wrote:

EinsteinsGhost wrote:

I don't care about different people. I only care about why I would call a lens fast.

...why do you care about the "speed" of a lens?

Exposure Values. You might learn, if you tried, maybe.

 EinsteinsGhost's gear list:
Sony Cyber-shot DSC-F828 Sony SLT-A55 Sony Alpha NEX-6 Sigma 18-250mm F3.5-6.3 DC OS HSM Sony 135mm F2.8 (T4.5) STF +12 more
EinsteinsGhost
Forum Pro | Posts: 11,977
Re: You missed the question.
In reply to Great Bustard, 5 months ago

Great Bustard wrote:

EinsteinsGhost wrote:

Great Bustard wrote:

EinsteinsGhost wrote:

Great Bustard wrote:

A competent mFT photographer with an EM10 + 12-40 / 2.8 shoots a scene at 25mm f/5.6 1/100 ISO 400. What settings would result in the "correct exposure" if they had instead been using FF with a 6D + 24-70 / 2.8 VC?

For same exposure...

I didn't ask about the setting for the same exposure -- I asked about the settings that resulted in the "correct exposure".

Tell me what you mean by "correct exposure".

That's what I was hoping you would answer.

...any competent photographer would use f/5.6, 1/100 and ISO 400 on ANY format. This, of course, assumes the same shooting conditions, transmissive properties of the lens, and metering in the respective cameras.

Why would a competent photographer necessarily use the same exposure with different formats?

Because a competent photographer knows exposure is not dependent on format. It is dependent on: ISO, Aperture and Shutter values for a given scene brightness. That is it.

So, for a given scene luminance, f/2 1/100 ISO 400 and f/4 1/100 ISO 1600 have the same exposure? That's not what you said in another post. You said that the f/4 photo had two stops lower exposure.

No. You're using a lower exposure by 2-stops, and using 2-stops of "sensitivity" increase to overcome it. Why else would you think ISO 1600?

So, in what way does the density of light falling on the sensor matter more than the total amount of light falling on the sensor?

The answer to the "correct exposure" question should make it easier to answer the question immediately above.

Only if you knew what "correct exposure" is. Can you quantify and describe it?

That's what I was asking you. Here, I'll repeat the question:

A competent mFT photographer with an EM10 + 12-40 / 2.8 shoots a scene at 25mm f/5.6 1/100 ISO 400. What settings would result in the "correct exposure" if they had instead been using FF with a 6D + 24-70 / 2.8 VC?

Note that I am asking for the settings for the "correct exposure" (whatever you think makes that exposure "correct" -- that's the reason for the question), not the settings for the same exposure.

Correct exposure in a comparison would entail the same exposure.

Ah -- so if f/2 1/100 ISO 400 on mFT gives the "correct exposure" then f/2 1/100 ISO 400 on FF would also give the "correct exposure"? So, if you used a different exposure on FF, then it would be "incorrect"?

"Correct" or "Incorrect" is your fantasy. I speak of exposure, same or different. In other words, f/2 and 1/100s on FF or m43 or whatever is: Same exposure.

Whether it is correct or incorrect by sensor size, I will let you dwell in that since exposure values are independent of format (something you don't know).

 EinsteinsGhost's gear list:
Sony Cyber-shot DSC-F828 Sony SLT-A55 Sony Alpha NEX-6 Sigma 18-250mm F3.5-6.3 DC OS HSM Sony 135mm F2.8 (T4.5) STF +12 more