18% Gray code value in 16 bit Linear RAW?

Mandem wrote:

spider-mario wrote:

If you have measured that the raw file clips at 2000% diffuse reflectance then 18% reflectance would presumably be 18/2000 of the way between the black point and the clipping point, so at “black point + (18/2000) × (maximum value − black point)”. Assuming that the full 16-bit range is used, the maximum value would be 65 535, and so the code value for 18% would be “black point + (18/2000) × (65 535 − black point)”.

Thanks for that. You put the black point instead of 0 because you're assuming that code value 0 isn't necessarily black, correct? If it were, it would just be 18/2000 × 65 535?

Yup.  For various reasons, many sensors (most modern ones) have black correspond to a nonzero ADC value.

(This way, noise around the black level is recorded symmetrically instead of being clipped at zero, which makes it somewhat easier to handle noise and/or black point shifts/offsets in postprocessing. And yes, black points often shift slightly depending on the exact sensor configuration.)
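As a minimal sketch of the arithmetic above (the 2000% clipping point is the measured example from this thread; the 512 black level is just an illustrative number, since the actual value depends on the camera):

```python
def code_value(reflectance_pct, clip_pct=2000.0, black=512, max_code=65535):
    """Linearly interpolate between the black level and the clipping point.

    black=512 and clip_pct=2000 are example numbers, not measured values.
    """
    return black + (reflectance_pct / clip_pct) * (max_code - black)

print(round(code_value(18)))           # 1097 with a 512 black level
print(round(code_value(18, black=0)))  # 590 if code value 0 were black
```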

Another question (unrelated, but perhaps you could help, as I'm a bit lost). When discussing color space transforms, it is said that the encoded image/video is "linearised" (I'm mainly talking about video). Is this what they mean, or is it something else entirely? Words like "linear", "floating point operations", etc. start popping up. I'm mainly trying to understand the difference between shooting linear RAW and linearizing for a color space transform (I'm very confused about this). This is all from DaVinci Resolve's point of view.

Fully raw sensor data is almost always linear (nonlinear ADCs exist but are extremely rare; a logarithmic/nonlinear representation is often used as a basic form of lossy compression), in what is often referred to as the "camera native" gamut (which is a function of the spectral sensitivities of the CFA filters and the underlying photosites), and Bayer-mosaiced (each pixel value corresponds to only an R, G, or B sample).
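To illustrate the Bayer-mosaic point, here's a tiny sketch of an RGGB layout (one common pattern; actual layouts vary by sensor):

```python
def bayer_channel(row, col):
    """Which color an RGGB-pattern photosite samples (illustrative only)."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Each photosite records just one channel; demosaicing later
# interpolates the missing two for every pixel.
for r in range(2):
    print([bayer_channel(r, c) for c in range(4)])
# ['R', 'G', 'R', 'G']
# ['G', 'B', 'G', 'B']
```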

If you linearize something with the CST plugin, you will wind up with:

Linear data (some operations are best performed on linear data - for example exposure compensation is a simple multiplication on linear data)

Some gamut other than camera-native (manufacturer-specific gamuts like S-Gamut and the like aren't camera native; sometimes they're close to camera native for one particular model, but they are not native for all models. Alternatively, the target may be a more standardized gamut like Rec. 2020 or Rec. 709)

The image data will be demosaiced - each pixel has R, G, and B values (which may have been generated by interpolation/demosaicing of a Bayer CFA)

(Side note - they are now rather rare, but video cameras that use three sensors and dichroic prisms to split colors do exist - in fact they were typical/standard back in the early days of color television but have largely disappeared.  These don't need to be demosaiced but do sometimes need R, G, and B channels to be slightly scaled or aligned.  It's extremely unlikely you'll run into video from such a source nowadays though...)
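To make "linearising" concrete: the CST inverts the encoding transfer function so that values are again proportional to scene light, and only then is exposure compensation a plain multiply. Here's a sketch using the standard sRGB curve as a stand-in for a camera log curve (real footage would use the camera's own curve, e.g. S-Log3):

```python
def srgb_to_linear(c):
    """Standard sRGB decode (IEC 61966-2-1), standing in for a log curve."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Inverse of the above: re-encode linear light for display."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

encoded = 0.5                      # an sRGB-encoded pixel value
lin = srgb_to_linear(encoded)      # ~0.214, proportional to scene light
brighter = lin * 2.0               # +1 EV: a simple multiply on linear data
print(linear_to_srgb(brighter))    # ~0.686 after re-encoding for display
```

Note that multiplying the encoded value directly would not give +1 EV; that's why exposure-type operations are best done after linearizing.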
