Fuji RAW/RAF - understanding Bits..

andrejubert

Hey,

I'm struggling to get my head round this..

I think (think) my Fuji X-H1 shoots 14 bit RAW still images(?) Is that colour depth?

My MacBook Pro's GPU can output 10-bit colour (as opposed to 8-bit), and my external BenQ can display 10-bit (albeit 8-bit + FRC).

So in theory, am I viewing 10 bit colours in my images?

I did notice yesterday when I opened an image in Photoshop that it was described as 8-bit. Confused. Can anyone clarify?

Thanks
 
Hey,

I'm struggling to get my head round this..

I think (think) my Fuji X-H1 shoots 14 bit RAW still images(?) Is that colour depth?
Yes
My MacBook Pro's GPU can output 10-bit colour (as opposed to 8-bit), and my external BenQ can display 10-bit (albeit 8-bit + FRC).

So in theory, am I viewing 10 bit colours in my images?
Yes, but it depends on the image format. E.g. JPEG is 8-bit; TIFF can be higher.
I did notice yesterday when I opened an image in Photoshop that it was described as 8 Bit.. confused. Can anyone clarify.
Not familiar with Fuji RAW, but generally PS cannot open a RAW file by itself; it lets ACR (Adobe Camera Raw) convert the RAW into an image for PS to edit.

This might be the reason you found an 8-bit file.
 
Hey, thanks for the info.

So when I open a RAW (Fuji RAF) file in Lr, should it be at its full 14-bit potential? I mean, is that what I am seeing, but in 10-bit (8-bit + FRC) as facilitated by my hardware setup?

I haven't been able to find that level of info about images within Lr Classic before.
 
This is pretty much irrelevant.

The latest MacBook Pro has a screen resolution of 3024 x 1964. To simplify the math I will call that 3000 x 2000.

Your Fuji has a sensor resolution of 6000 x 4000.

That means it takes at least 4 FujiPixels to produce 1 MacDot. There is a whole lot of processing that takes place between those two instantiations of the data.
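The ratio described above is simple arithmetic; as a trivial sketch using the rounded figures from this post (not the exact panel dimensions):

```python
# Sensor pixels per display dot, using the rounded figures above.
sensor_pixels = 6000 * 4000     # Fuji X-H1 sensor, simplified
display_dots = 3000 * 2000      # MacBook Pro panel, simplified
print(sensor_pixels // display_dots)  # 4 sensor pixels feed each display dot
```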

The developers at Fuji will develop software that works in a 14-bit space to produce a raw file with their best possible data.

The developers at Apple will develop software that produces the best possible 10-bit data from the file they receive. If you are starting with a .jpeg produced in the camera, that 14-bit data has already ended up as 8-bit, but the Mac output will still be 10-bit.

In the middle of these two processes is whatever you use to convert your raw files. You may use whatever software is provided by your operating system, a specialized program from Adobe or Affinity, one of the many programs built on SilkyPix, or a proprietary program provided by your camera maker.

All of this stuff is just data until you decide how you want to present it. At that point you have to consider whether it will be printed, directly viewed on a screen or projected.

All of this is bothersome nitpicking over what will finally be minor differences.

Since you are on a Mac, I'm sure you have seen some absolutely wonderful images produced with a different camera on a PC with software that won't work on your Mac. I know I've seen some produced with non-Nikon cameras on a Mac.
 
Hey,

I'm struggling to get my head round this..

I think (think) my Fuji X-H1 shoots 14 bit RAW still images(?) Is that colour depth?
That's raw bit depth.
My MacBook Pro's GPU can output 10-bit colour (as opposed to 8-bit), and my external BenQ can display 10-bit (albeit 8-bit + FRC).
That's display colour bit depth.
So in theory, am I viewing 10 bit colours in my images?
14-bit/12-bit raw does not correlate directly to your monitor's colour bit depth.
 
Hey, thanks for the info.

So when I open a RAW (Fuji RAF) file in Lr, should it be at its full 14-bit potential? I mean, is that what I am seeing, but in 10-bit (8-bit + FRC) as facilitated by my hardware setup?

I haven't been able to find that level of info about images within Lr Classic before.
This might help.


I have been involved in enough data processing projects over the years that I have an understanding of the complexities of this stuff. In this situation, without a whiteboard and multiple, multicolor erasable markers I am not prepared to go any further with this.
 
The raw data captured by your camera is 14 bits.

By default, LR will export an opened raw file to Photoshop as a 16-bit TIFF (see Edit > Preferences > External Editing). So it expands the bit depth, which means you get fewer rounding errors when editing.

However, if you export a file from LR as a JPEG, it will be 8-bit because that's all JPEG can support. So JPEG images from your camera are 8-bit.

To export files directly with more than 8 bits, you would need to export them as 16-bit PSD, TIFF or PNG files.
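The arithmetic behind those bit depths can be sketched in plain Python. The round-trip function below is a hypothetical illustration of why editing in a wider space loses fewer levels; it is not Photoshop's actual pipeline:

```python
# Distinct levels per channel at common bit depths.
for bits in (8, 14, 16):
    print(f"{bits}-bit: {2 ** bits} levels")   # 8-bit: 256 ... 16-bit: 65536

# A toy edit: darken by two stops, then brighten back, quantising to
# integer codes at each step (as an 8-bit editing space would force).
def round_trip_8bit(value):
    darkened = round(value / 4)            # -2 stops, stored as an 8-bit code
    return min(255, darkened * 4)          # +2 stops back

survivors = len({round_trip_8bit(v) for v in range(256)})
print(survivors)   # about a quarter of the 256 levels remain -> banding
```

Doing the same round trip in a 16-bit space would leave all 256 original levels intact, which is the point of LR's 16-bit TIFF hand-off.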
 
I'm struggling to get my head round this..
Yeah, it's complicated.
I think (think) my Fuji X-H1 shoots 14 bit RAW still images(?)
I guess so?
Is that colour depth?
No, it is not. Raw data has to be processed to convert the raw values into specific, humanly-identifiable colors. That processing requires lots of arithmetic, which in effect lowers the precision of the final results. The less color-accurate the camera inherently happens to be, the more processing is required, so the more bits are lost and the noisier the final result (or the more noise reduction will be needed).

And not all of those 14 bits per color channel are actually used all of the time. For example, if you aren't "exposing to the right", you may not be using all the bits. Furthermore, the red and blue pixels typically aren't as sensitive to light as the green pixels, so in daylight both are relatively underexposed by one or two stops, and not all of the bits for those two channels are used. Under extreme color temperatures, one or another of the color channels is going to be extremely underexposed, like the blue channel under incandescent lighting, or the red channel at dusk, which might be many stops underexposed relative to the green channel.

Cameras have a largely linear response to light, so doubling the exposure doubles the signal value recorded. This is rather wasteful, since human vision is more sensitive to changes in dark tones than in light tones: it's frequently said that half of the image data in raw files is allocated to just the brightest stop; half of what's left is then allocated to the second-brightest stop, and so on, so the bulk of the tonality in an image uses only a small fraction of the data.
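The "half the data in the brightest stop" claim is easy to check from the code-value ranges of a linear 14-bit file. This is a back-of-the-envelope sketch that ignores sensor noise and black level:

```python
# In a linear encoding, each stop spans half the code values of the one above.
total = 2 ** 14                     # 16384 codes in a 14-bit raw file
remaining = total
for stop in range(1, 6):
    codes = remaining // 2          # this stop gets half of what's left
    print(f"stop {stop} below clipping: {codes} codes ({codes / total:.1%})")
    remaining -= codes              # pass the rest down to darker stops
```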

Compressed encoding schemes, where more data values are allocated to darker tones than lighter, can allow fewer bits to encode an equal range of tones. This compression is sometimes referred to as a gamma curve.
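As a sketch of that idea, the snippet below counts how many 8-bit codes land in the deep shadows (five or more stops below clipping) under a linear encoding versus a simple power-law gamma of 2.2 — an assumed curve chosen for illustration, not the exact sRGB transfer function:

```python
# Count 8-bit codes whose *decoded* (linear) value sits at least five
# stops below clipping, for two encodings of the same tonal range.
LEVELS = 256
THRESHOLD = 1 / 2 ** 5              # five stops below clipping

def deep_shadow_codes(decode):
    return sum(1 for c in range(LEVELS) if decode(c / (LEVELS - 1)) <= THRESHOLD)

linear_codes = deep_shadow_codes(lambda x: x)        # codes stored linearly
gamma_codes = deep_shadow_codes(lambda x: x ** 2.2)  # gamma-2.2 encoded codes

print(linear_codes, gamma_codes)   # far more codes cover the shadows with gamma
```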
My MacBook Pro's GPU can output 10-bit colour (as opposed to 8-bit), and my external BenQ can display 10-bit (albeit 8-bit + FRC).

So in theory, am I viewing 10 bit colours in my images?
Not just in theory; in actuality. The display driver will convert whatever bits are in your image, whatever format it happens to be in, to the 10-bit format provided by the display. That doesn't mean the images will necessarily look better on this display than on lower-bit-depth ones. It's possible to see banding in the shadows of low-bit-depth images; however, I have a 10-bit display and I've never seen any of those kinds of artifacts.
I did notice yesterday when I opened an image in Photoshop that it was described as 8 Bit.. confused. Can anyone clarify.
JPEGs are gamma-encoded images that simplify and compress high-frequency textures and reduce colour resolution, typically using 8 bits per colour channel.
 
Hey,

I'm struggling to get my head round this..

I think (think) my Fuji X-H1 shoots 14 bit RAW still images(?) Is that colour depth?

My MacBook Pro's GPU can output 10-bit colour (as opposed to 8-bit), and my external BenQ can display 10-bit (albeit 8-bit + FRC).

So in theory, am I viewing 10 bit colours in my images?

I did notice yesterday when I opened an image in Photoshop that it was described as 8 Bit.. confused. Can anyone clarify.

Thanks
Raw bit depth is about dynamic range, not the number of colors you get to capture.
 
