
Help me understand the role of pixel size

Started Feb 22, 2022 | Questions
Muster Mark Contributing Member • Posts: 691
Help me understand the role of pixel size

I am primarily concerned with dynamic range.

I think I understand the basics of the definition for DR: ratio of highest signal value to lowest, where lowest is determined by the value at which SNR = 1.

Now photonstophotos insists that DR is purely related to sensor area, all else being equal. And they have the data to back this up. This doesn't make sense to me. (Note I am not saying it's wrong).

There are a lot of places on the internet that say full well capacity is a function of pixel size, and this capacity determines the highest signal value in our definition of DR. For example, this Stanford paper: Farrell_Feng_Kavusi_SPIE06 (stanford.edu)

Is the paper wrong? Is there actually no tradeoff (with a given technology) to arbitrarily large pixel counts? If someone can point me to something that explains this, I would appreciate it.


Cheers,
-Ian

ANSWER:
kenw Veteran Member • Posts: 7,095
Try this article:

There is a halfway decent article on this here on DPR:

https://www.dpreview.com/articles/5365920428/the-effect-of-pixel-and-sensor-sizes-on-noise

Give that a read and see if it helps.

What you’ve read about larger pixels having more DR is absolutely correct at the pixel level. But in general we don’t stare at photos at the pixel level, we view them as a whole image at some given output size (an A3 print, or a 27” monitor or whatever). Resizing the image to fit that output largely negates the effects of pixel size on DR and noise and instead what matters is largely sensor size. Again, try the linked article.
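The resizing argument can be checked with a toy Monte Carlo simulation (the photon counts and pixel counts below are made-up round numbers for illustration, not real sensor data):

```python
# Toy Monte Carlo: per-pixel noise vs. noise after resizing to a common
# output size, for shot-noise-limited (Poisson) signals.
import numpy as np

rng = np.random.default_rng(0)
mean_photons_small = 1000          # photons hitting each small pixel (assumed)
n_small = 1_000_000                # number of small pixels (assumed)

# Small-pixel sensor: per-pixel SNR
small = rng.poisson(mean_photons_small, n_small).astype(float)
snr_per_pixel = small.mean() / small.std()

# Resize: sum each group of 4 small pixels (same area as one big pixel)
binned = small.reshape(-1, 4).sum(axis=1)
snr_binned = binned.mean() / binned.std()

# Big-pixel sensor covering the same total area: 4x the photons per pixel
big = rng.poisson(4 * mean_photons_small, n_small // 4).astype(float)
snr_big = big.mean() / big.std()

print(f"per-pixel SNR (small pixels): {snr_per_pixel:.1f}")  # ~ sqrt(1000)
print(f"small pixels after binning:   {snr_binned:.1f}")     # ~ sqrt(4000)
print(f"big pixels:                   {snr_big:.1f}")        # ~ sqrt(4000)
```

Per pixel the small-pixel sensor is indeed noisier, but once four small pixels are combined to cover the same area as one big pixel, the SNR comes out the same.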


Ken W
See profile for equipment list

OP Muster Mark Contributing Member • Posts: 691
Re: Try this article:

thanks for the link! I think I understand why noise is basically independent of pixel size, but that's only one half of the DR equation. The other is full well capacity, which should change from binning (unless you have a mix of less sensitive pixels involved as I think are done with quad bayer sensors?). Am I wrong here?


Cheers,
-Ian

kenw Veteran Member • Posts: 7,095
Re: Try this article:

Muster Mark wrote:

thanks for the link! I think I understand why noise is basically independent of pixel size, but that's only one half of the DR equation. The other is full well capacity, which should change from binning (unless you have a mix of less sensitive pixels involved as I think are done with quad bayer sensors?). Am I wrong here?

So, for a given process/sensor technology, the full well capacity is mostly down to the area of the pixel. A 2um pixel will have a FWC four times larger than a 1um pixel because it has four times the area. But of course we have four of the 1um pixels, and when we add up their full well capacities it comes out just about the same as the full well capacity of the single 2um pixel.

So much like noise in the shadows it turns out that highlight clipping (FWC) is also mostly independent of pixel size when we look at the final image as a whole.
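As a back-of-envelope check (the electron density here is an assumed round number, in the same ballpark as figures quoted later in this thread):

```python
# FWC scales with pixel area for a given process (assumed density figure)
density = 3000                   # electrons per um^2 of pixel area (assumed)

fwc_1um = density * 1.0 ** 2     # FWC of one 1um pixel
fwc_2um = density * 2.0 ** 2     # FWC of one 2um pixel

print(fwc_2um / fwc_1um)         # 4.0: four times the area, four times the FWC
print(4 * fwc_1um == fwc_2um)    # True: four small wells sum to one big well
```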

So far, if we look at Sony FF sensors, it appears the higher resolution sensors (40-60MP) are usually designed such that they support a somewhat lower base ISO and "per area" FWC than their near-equivalent lower resolution sensors (24MP). So these higher resolution, smaller pixel sensors do end up with a bit more base ISO DR than the lower resolution sensors. I haven't a clue whether that is just a consequence of the lower resolution sensors usually being optimized for better high ISO noise performance, which then compromises their base ISO DR a bit, or whether it is some fundamental consequence of the relationship between pixel size and FWC not being strictly linear with area.


Ken W
See profile for equipment list

selected answer This post was selected as the answer by the original poster.
Erik Kaffehr Veteran Member • Posts: 7,154
Re: Help me understand the role of pixel size

Hi,

Engineering DR is full well capacity divided by readout noise. Halving the pixel area halves the FWC. Readout noise also decreases with pixel size, but not proportionally.

So making the pixels smaller, Engineering DR goes down per pixel.

But you have more pixels. That compensates for the reduction of DR per pixel, but not fully.

Bill Claff’s Photographic DR takes that into account.

Now, readout noise is pixel design dependent and newer designs have less readout noise than older designs. This is an area that improves rapidly.

The other point is that DR only affects the darkest parts of the image; in most parts of the image, shot noise dominates over readout noise, and that noise depends only on the signal collected, which scales with FWC (full well capacity). With shot noise, increasing the number of pixels almost fully compensates for the reduction of FWC per pixel.
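A small numeric sketch of this bookkeeping (all electron counts here are assumed for illustration, not measurements of any real sensor):

```python
import math

def edr_stops(fwc, read_noise):
    """Engineering DR in stops: log2(FWC / read noise)."""
    return math.log2(fwc / read_noise)

# Big pixel with 4x the area of the small one (assumed values)
fwc_big, rn_big = 40_000, 3.0        # e-, e- RMS
# Small pixel: 1/4 the FWC; read noise lower, but not 4x lower
fwc_small, rn_small = 10_000, 2.0    # e-, e- RMS

dr_big   = edr_stops(fwc_big, rn_big)
dr_small = edr_stops(fwc_small, rn_small)
# Combining four small pixels: signal sums (4x), read noise adds in
# quadrature (2x), so at the image level:
dr_binned = edr_stops(4 * fwc_small, 2 * rn_small)

print(f"per-pixel DR, big pixel:   {dr_big:.2f} stops")
print(f"per-pixel DR, small pixel: {dr_small:.2f} stops")
print(f"four small pixels binned:  {dr_binned:.2f} stops")
```

With these numbers the small pixels lose about 1.4 stops per pixel but recover most of it when combined, ending up about 0.4 stops behind the big pixel: compensated, but not fully.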

This can be illustrated quite well by a technology developed by DALSA for Phase One.

Some Phase One sensors have something called Sensor+. This bins four pixels into one at some ISO setting. So, say, a 60 MP sensor now delivers 15 MP, but with increased DR.

In screen mode, Sensor+ has a huge effect. But screen mode means that we view the image at half the linear size.

Switching to print mode, the image is viewed at the same size, and Sensor+ has a smaller effect.

Looking at SNR at the pixel level Sensor+ has a huge effect.

But, looking at same image size, the advantage of Sensor+ is mostly gone.

Now, consider that readout noise mostly affects the very darkest parts of the image, especially with modern sensors that have low readout noise. Those parts of the image will probably be near black in a real-world image.

Many modern sensors have 'dual gain' conversion. This works by reducing the full well capacity at some ISO. Making the FWC smaller increases the voltage swing per electron on the photodiode, leading to a reduction of input-referred readout noise.

Per-pixel read noise at base ISO is a bit lower on the A7rIV compared to the older A7II, which has larger pixels. At ISO 320 the A7rIV sensor switches to high-gain conversion, which halves the readout noise. Normally, we would reduce exposure when increasing ISO, which means we don't use the full well capacity anyway. So giving up FWC in order to improve readout noise makes sense.
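To make that concrete, here is an idealized dual-conversion-gain sketch (these electron counts are invented for illustration; they are not A7rIV measurements):

```python
import math

# Low-gain mode: big usable well, higher input-referred read noise (assumed)
fwc_low, rn_low = 50_000, 3.0      # e-, e- RMS
# High-gain mode: well effectively halved, read noise halved too (assumed)
fwc_high, rn_high = 25_000, 1.5    # e-, e- RMS

dr_low  = math.log2(fwc_low / rn_low)
dr_high = math.log2(fwc_high / rn_high)
print(f"low gain:  {dr_low:.2f} stops")
print(f"high gain: {dr_high:.2f} stops")
```

In this idealized case the engineering DR is identical, but at raised ISO the exposure no longer fills the big well anyway, so the halved read noise of the high-gain mode is a net win.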

Best regards

Erik


Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic uses to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles

Jack Hogan Veteran Member • Posts: 8,199
Clipping and Gain

Muster Mark wrote:

thanks for the link! I think I understand why noise is basically independent of pixel size, but that's only one half of the DR equation. The other is full well capacity, which should change from binning (unless you have a mix of less sensitive pixels involved as I think are done with quad bayer sensors?). Am I wrong here?

Some good answers above. At its simplest, DR is a function of FWC and read noise. Two important parameters to keep in mind in such discussions are clipping and gain.

What we informally call Full Well Count around here is in practice clipping. We typically never see actual Full Well Counts in consumer sensors even at base ISO - because near full well the response of the pixel becomes non linear. Therefore manufacturers ensure that the ADC reaches full scale lower than that, in the linear region, clipping the signal below FWC. Different manufacturers may choose different criteria for maximum deviation from linearity (For instance I have heard it said that the first ISO64 FF camera, the Nikon D810, could be almost 10% off linear at clipping).

Clipping in recent sensors tends to occur in the range of 2500-3500 e-/um^2 of pixel area. Near the top of the range and beyond we suspect noise reduction.

Gain, say in DN/e-, also plays an important role in DR because the higher the gain, the lower the input-referred electronic (read) noise, all other things being equal - but also the sooner clipping is reached, hence the range above.

So gain affects both the denominator and the numerator in DR. Different applications require different compromises, with different gains. Think Video vs Landscape or low-light vs daylight.
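A sketch of that trade-off (the noise model and every number here are assumptions for illustration): input-referred read noise has a pre-gain component plus a downstream component that shrinks as gain rises, while clipping happens at whichever comes first, the well or ADC full scale.

```python
import math

fwc_e = 50_000                # assumed full well (e-)
adc_full_scale = 16383        # 14-bit ADC full scale (DN)
rn_pre_e = 1.5                # pre-gain read noise, e- RMS (assumed)
rn_post_dn = 3.0              # downstream electronic noise in DN (assumed)

results = []
for gain in (0.3, 1.0, 4.0):                        # DN per e-
    clip_e = min(fwc_e, adc_full_scale / gain)      # whichever clips first
    rn_e = math.hypot(rn_pre_e, rn_post_dn / gain)  # input-referred read noise
    dr = math.log2(clip_e / rn_e)
    results.append((gain, clip_e, rn_e, dr))
    print(f"gain {gain:>3} DN/e-: clip {clip_e:8.0f} e-, "
          f"read noise {rn_e:5.2f} e-, DR {dr:5.2f} stops")
```

Higher gain buys a lower denominator (read noise) at the price of a lower numerator (earlier clipping), which is why video/low-light and landscape/daylight settings end up with different compromises.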

What's your camera aimed at? Mine at landscapes, so it carries its compromises.

Jack

Entropy512 Veteran Member • Posts: 6,019
Re: Help me understand the role of pixel size

Erik Kaffehr wrote:

Now, readout noise is pixel design dependent and newer designs have less readout noise than older designs. This is an area that improves rapidly.

I would disagree on rapid improvement. There used to be rapid improvement, but we've been in an era of diminishing returns since (roughly) the time of the A7RII.

Also, historically, smaller pixels led to reduced PDR due to reduced quantum efficiency (area shadowed by circuitry insensitive to photons). Gapless microlenses, finer-pitch manufacturing processes, and/or BSI have brought us to what appears to be the limit of QE for Bayer-on-silicon, so again we're seeing diminishing returns since (roughly) the A7RII.

For modern sensors, the only significant benefit I've seen in the past 5-7 years for larger pixels is a reduction in bandwidth/throughput/readout rate requirements.


Context is key. If I have quoted someone else's post when replying, please do not reply to something I say without reading text that I have quoted, and understanding the reason the quote function exists.

OP Muster Mark Contributing Member • Posts: 691
This makes a lot of sense

For some reason I wasn't thinking about per-area FWC and only per pixel FWC, which is where the disconnect was. Assuming FWC is in a linear relationship with pixel area, then this all makes sense. Thanks so much.


Cheers,
-Ian

hjulenissen Senior Member • Posts: 2,442
Re: Clipping and Gain

Jack Hogan wrote:

...near full well the response of the pixel becomes non linear. Therefore manufacturers ensure that the ADC reaches full scale lower than that, in the linear region, clipping the signal below FWC. Different manufacturers may choose different criteria for maximum deviation from linearity (For instance I have heard it said that the first ISO64 FF camera, the Nikon D810, could be almost 10% off linear at clipping).

Why are manufacturers doing this? Are they being economical with ADC bits (rather than clipping prior to saturation, I would guess that 1-2 more bits could capture whatever signal is there)? Or do they think that raw pipelines cannot handle a signal that is not linear-light at its top end? Or is there some analog electronics reason why clipping the signal early on is a better option?

If this non linearity is at all predictable, I would assume that highlight recovery would be a lot easier for a softclipped signal than a hard clipped one. Perhaps even look nicer with minimal processing (as long as white point/tint is ensured).

-h

Erik Kaffehr Veteran Member • Posts: 7,154
Re: Clipping and Gain

hjulenissen wrote:

Jack Hogan wrote:

...near full well the response of the pixel becomes non linear. Therefore manufacturers ensure that the ADC reaches full scale lower than that, in the linear region, clipping the signal below FWC. Different manufacturers may choose different criteria for maximum deviation from linearity (For instance I have heard it said that the first ISO64 FF camera, the Nikon D810, could be almost 10% off linear at clipping).

Why are manufacturers doing this? Are they being economical with ADC bits (rather than clipping prior to saturation, I would guess that 1-2 more bits could capture whatever signal is there)? Or do they think that raw pipelines cannot handle a signal that is not linear-light at its top end? Or is there some analog electronics reason why clipping the signal early on is a better option?

If this non linearity is at all predictable, I would assume that highlight recovery would be a lot easier for a softclipped signal than a hard clipped one. Perhaps even look nicer with minimal processing (as long as white point/tint is ensured).

-h

Because there would be severe distortion in color if channels were allowed to go into the nonlinear region.

Nikon probably achieves its ISO 64 setting by utilizing a longer part of the photon transfer curve.

Best regards

Erik


hjulenissen Senior Member • Posts: 2,442
Re: Clipping and Gain

Erik Kaffehr wrote:

Because there would be severe distortion in color if channels were allowed to go into nonlinear.

Nikon probably achieves it's ISO 64 setting by utilizing a longer part of the 'photon transfer curve.

Best regards

Erik

Worst case, you could always just clip the raw file at 2^14 and be where we are today.

Best case, there would be some scene-related information in that MSB that could allow for better rendering, at the cost of one more ADC bit and the corresponding bandwidth and storage.

You could say that having raw files at all means horrible color rendition. That it would be safer to bake in color correction. But the philosophy of raw is to give everything the sensor offers in the hope that later processing is going to make better use of it.

-h

Entropy512 Veteran Member • Posts: 6,019
Re: Clipping and Gain

Erik Kaffehr wrote:

[...]

Because there would be severe distortion in color if channels were allowed to go into the nonlinear region.

Nikon probably achieves its ISO 64 setting by utilizing a longer part of the photon transfer curve.

Best regards

Erik

Or simply using a different JPEG tone curve.

Remember, ISO ratings are derived from JPEG performance characteristics (specifically what lands at code value 128 in the JPEG, or was it 127?) and have little to no connection to how the raw sensor data behaves.

Mark S Abeln Forum Pro • Posts: 19,548
Quad Bayer

Muster Mark wrote:

(unless you have a mix of less sensitive pixels involved as I think are done with quad bayer sensors?). Am I wrong here?

From what I understand, Quad Bayer sensors (also known as Tetracell sensors), with 2×2 adjacent pixels under the same color filter, and the similar Nonacell sensors with 3×3 adjacent pixels, are used to avoid the problems with crosstalk, where photons initially impinging on the surface of one pixel instead get accidentally recorded by a neighboring pixel. With these technologies, you increase the odds that a photon still gets detected by a pixel that has the intended color.

While this seems to reduce color resolution, at least some luminance signal can be estimated from each pixel, no matter the color, so the increase of resolution is at least partly preserved. And luma detail is typically more important than chroma detail.

Another, perhaps better system, is having secondary color filters between the primaries, such as magenta pixels between red and blue pixels, yellow between red and green, and cyan pixels between green and blue pixels. This would complicate processing a bit, and I'm not sure if this has other problems associated with it.

This kind of system offers no advantages other than making the most of a bad situation. You're much better off reducing crosstalk if you can.

SrMi Veteran Member • Posts: 4,377
Re: Quad Bayer

Mark Scott Abeln wrote:

Muster Mark wrote:

(unless you have a mix of less sensitive pixels involved as I think are done with quad bayer sensors?). Am I wrong here?

[...]

Olympus has launched a new camera (OM-1) that incorporates a Quad Bayer sensor. The sub-pixels are apparently used mainly for AF.

Interceptor121 Veteran Member • Posts: 8,691
it is about millions not single pixels

While the models refer to one pixel and are of course correct, when you model a sensor with several million of them you are in the domain of statistics, especially as light itself follows a statistical distribution.

Think about light as rain and pixels as buckets

What the evidence suggests is that 1,000 10-liter buckets collect the same rain as 10,000 1-liter buckets.

What matters is the sum of the components, not how big each one is.

You can spend some time understanding the real reasons behind all the factors at play; while that will greatly improve your knowledge, it won't move the needle much from the layman's statement above.

There are of course issues with having too few pixels (zero resolution) and too many (other effects come into play), but in general terms there is nothing to overthink.

Brian Kimball Contributing Member • Posts: 756
Re: Clipping and Gain

Entropy512 wrote:

[...]

Or simply using a different JPEG tone curve.

Remember, ISO ratings are derived from JPEG performance characteristics (specifically what lands at code value 128 in the JPEG, or was it 127?)

Trivia time!

The current ISO standard provides roughly 4 entirely different methods for defining/specifying a camera's sensitivity at a particular camera sensitivity setting:

  1. ISO Speed (saturation based).  Measured by determining the minimum exposure at the focal plane that causes final image saturation.
  2. ISO Speed (noise based).  Measured by determining the focal plane exposure that produces a final image with either a SNR of 40, or alternatively a SNR of 10.
  3. SOS, or Standard Output Sensitivity.  Measured by determining what focal plane exposure produces a final image signal equal to Omax*461/1000, where Omax is the maximum output value allowed in the final image format.  For example, in an 8-bit image, Omax is 255, so SOS is determined by the focal plane exposure that achieves a final image value of 118.  This is likely what you were referring to.
  4. REI, or Recommended Exposure Index.  Nothing is measured, determined, nor estimated.  The camera maker just "Recommends" a quantity that when fed into a formula produces their desired ISO (REI).

and have little to no connection to how the raw sensor data behaves.

Personally I think there is an exceptionally strong connection between ISO values and raw sensor data in the vast majority of cameras.  Is a connection between the two required by the standard?  No, it doesn't address raw data at all.  Is a connection almost always there anyway?  Yes.

Hope that helps.
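The SOS arithmetic above is easy to verify; the 461/1000 factor is, to a good approximation, the sRGB encoding of 18% middle grey:

```python
# Check the SOS target value for an 8-bit image: Omax * 461/1000
omax_8bit = 255
sos_target = round(omax_8bit * 461 / 1000)
print(sos_target)                         # 118

# sRGB transfer function applied to 18% grey lands at ~0.461
srgb_18 = 1.055 * 0.18 ** (1 / 2.4) - 0.055
print(round(srgb_18, 3))                  # 0.461
```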

robert1955 Veteran Member • Posts: 7,312
Re: Clipping and Gain

Brian Kimball wrote:

Entropy512 wrote:

[...]

Or simply using a different JPEG tone curve.

Remember, ISO ratings are derived from JPEG performance characteristics (specifically what lands at code value 128 in the JPEG, or was it 127?)

Trivia time!

The current ISO standard provides roughly 4 entirely different methods for defining/specifying a camera's sensitivity at a particular camera sensitivity setting:

  1. ISO Speed (saturation based). Measured by determining the minimum exposure at the focal plane that causes final image saturation.
  2. ISO Speed (noise based). Measured by determining the focal plane exposure that produces a final image with either a SNR of 40, or alternatively a SNR of 10.
  3. SOS, or Standard Output Sensitivity. Measured by determining what focal plane exposure produces a final image signal equal to Omax*461/1000, where Omax is the maximum output value allowed in the final image format. For example, in an 8-bit image, Omax is 255, so SOS is determined by the focal plane exposure that achieves a final image value of 118. This is likely what you were referring to.
  4. REI, or Recommended Exposure Index. Nothing is measured, determined, nor estimated. The camera maker just "Recommends" a quantity that when fed into a formula produces their desired ISO (REI).

Note that CIPA, the Japanese camera makers' association, prescribes REI for its members.

and have little to no connection to how the raw sensor data behaves.

Personally I think there is an exceptionally strong connection between ISO values and raw sensor data in the vast majority of cameras. Is a connection between the two required by the standard? No, it doesn't address raw data at all. Is a connection almost always there anyway? Yes.

Hope that helps.

Brian Kimball Contributing Member • Posts: 756
Re: Clipping and Gain

robert1955 wrote:

Brian Kimball wrote:

[...]

REI, or Recommended Exposure Index. Nothing is measured, determined, nor estimated. The camera maker just "Recommends" a quantity that when fed into a formula produces their desired ISO (REI).

note that CIPA the Japanese camera makers association prescribes REI for its members

That's actually really interesting.  Do you know why they favored REI over SOS?  And yet some Japanese manufacturers still use SOS, like Fujifilm and Olympus.  Fascinating.

Entropy512 Veteran Member • Posts: 6,019
Re: Clipping and Gain

Brian Kimball wrote:

  1. SOS, or Standard Output Sensitivity. Measured by determining what focal plane exposure produces a final image signal equal to Omax*461/1000, where Omax is the maximum output value allowed in the final image format. For example, in an 8-bit image, Omax is 255, so SOS is determined by the focal plane exposure that achieves a final image value of 118. This is likely what you were referring to.

Yup. Whoops, 118 not 127 or 128. But this is what nearly every camera manufacturer uses.

and have little to no connection to how the raw sensor data behaves.

Personally I think there is an exceptionally strong connection between ISO values and raw sensor data in the vast majority of cameras.

There's no connection whatsoever in, at least, Pentax and Sony.

Pentax: Set "DRO" - JPEG tone curve is shifted, RAW exposure shifts by an entire stop for a given ISO

Sony: Change from "normal" to S-Log3, ISO changes by 3 stops, RAW exposure doesn't shift at all. As to why HLG only shifts by 25% - that's because the "default" transfer function is not sRGB, it is an S-curve followed by the sRGB transfer function

They are both clearly using SOS, or maaaybe REI - but it is telling that the actual implemented S-Log2 and S-Log3 curves intersect at roughly the midpoint code value.  Although S-Log3 plays fast and loose with Omax here - the "limit" is 255 for the format (when shooting JPEG), but when S-Log3 is active, the real maximum is somewhere in the 220s or 230s.  (I forget at the moment).  Have fun when the format switches to H.264 in an MP4 container and that pesky luma range flag comes into play...  Let's just say have fun guessing what Sony is actually doing for a given setting.

I believe Fuji is the same - they have some form of "dynamic range optimization" that disconnects raw exposure from the ISO rating by something on the order of one stop.

Brian Kimball Contributing Member • Posts: 756
Re: Clipping and Gain

Entropy512 wrote:

Brian Kimball wrote:

[...]

and have little to no connection to how the raw sensor data behaves.

Personally I think there is an exceptionally strong connection between ISO values and raw sensor data in the vast majority of cameras. Is a connection between the two required by the standard? No, it doesn't address raw data at all. Is a connection almost always there anyway? Yes.

There's no connection whatsoever in, at least, Pentax and Sony.

Better tell bclaff to delete all his Pentax and Sony data then. No point in graphing Sony & Pentax data vs ISO setting if there's no connection whatsoever!

It's interesting to explore examples where the connection or correlation gets shifted or doesn't hold, but stating those examples demonstrate there's no connection whatsoever is just silly exaggeration. You're overstating your case.

Plus, what raw shooter is going to turn on a jpg HDR function, or set their camera profile to a log-style video-centric setting?

Finally, note that ISO is only defined at the camera's default settings, so turning on any HDR or log-style video-centric profile leads to results outside the scope of the ISO standard, just like shooting raw. Which I suspect is why bclaff graphs his data vs. ISO setting rather than ISO value.

Pentax: Set "DRO" - JPEG tone curve is shifted, RAW exposure shifts by an entire stop for a given ISO

For a given ISO setting, but the actual ISO value is undefined here.

Sony: Change from "normal" to S-Log3, ISO changes by 3 stops, RAW exposure doesn't shift at all. As to why HLG only shifts by 25% - that's because the "default" transfer function is not sRGB, it is an S-curve followed by the sRGB transfer function

Same as above.

They are both clearly using SOS, or maaaybe REI - but it is telling that the actual implemented S-Log2 and S-Log3 curves intersect at roughly the midpoint code value. Although S-Log3 plays fast and loose with Omax here - the "limit" is 255 for the format (when shooting JPEG), but when S-Log3 is active, the real maximum is somewhere in the 220s or 230s. (I forget at the moment). Have fun when the format switches to H.264 in an MP4 container and that pesky luma range flag comes into play... Let's just say have fun guessing what Sony is actually doing for a given setting.

The ISO standard I've been discussing is for DSCs, digital still cameras. It does not address video output in any way. As mentioned above, a DSC shooting video may still offer sensitivity settings labeled as ISO, but they are mere setting labels, not actual values that need to conform to ISO 12232:2019.

I believe Fuji is the same - they have some form of "dynamic range optimization" that disconnects raw exposure from the ISO rating by something on the order of one stop.

Yup, HDR200 or HDR400 IIRC. And once you turn it on and leave it on, you'll still see the expected connection between raw values and ISO settings.

I think we're straying too far from the original topic. Feel free to start a new thread if you want to continue; I don't want to derail this one.
