I’ve been trying to improve my understanding of how raw photo processing works, but I think I’m getting hung up on black point subtraction. RawDigger tells me that the black point for my A7IV is 512. I’m assuming that this is 512 out of 16383 (≈0.03125), and not after scaling to fill the full 16-bit range (≈0.0078125), correct?
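Just to show the mental model I’m working from, here’s a rough NumPy sketch of how I assume the subtraction/normalization goes (the 512 and 16383 are what RawDigger reports for the A7IV; the sample DNs are made up, and this isn’t pulled from any real converter’s code):

```python
import numpy as np

# Values RawDigger reports for my A7IV (assumed, not from any spec sheet)
black_level = 512
white_level = 16383  # 14-bit max, i.e. not the 16-bit 65535

# Made-up raw digital numbers (DNs) just for illustration
raw = np.array([512, 600, 2048, 16383], dtype=np.float64)

# Subtract the black level, clip anything that goes negative,
# then normalize so 0..1 spans black point to white level
linear = np.clip(raw - black_level, 0, None) / (white_level - black_level)
print(linear)  # [0.0, 0.0055..., 0.0968..., 1.0]
```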
So, if 512 is 9 bits of data, how is the camera encoding 14 stops of linear dynamic range in only the upper 5 bits of data? Is the data encoding only linear after black point subtraction is performed?
And if the camera black point is 9 log2 DNs (512/16383), why does the photon transfer curve show that noise maxes out around 6 log2 DNs (64/16383)?
Sorry in advance, I know these are elementary questions. Just trying to locate the misunderstanding they might stem from. Thanks!

