I'm struggling to get my head round this..
Yeah, it's complicated.
I think (think) my Fuji X-H1 shoots 14-bit RAW still images(?)
I guess so?
No, it is not that simple. Raw data has to be processed to convert the raw values into specific humanly identifiable colors; that processing requires a lot of arithmetic, and the arithmetic, in effect, lowers the precision of the final result. The less color-accurate the camera inherently is, the more processing is required, the more bits are lost, and the noisier the final result (or the more noise reduction will be needed).
And not all of those 14 bits per color channel are actually used all of the time. For example, if you aren't "exposing to the right," you may not be using all the bits. Furthermore, the red and blue pixels are typically less sensitive to light than the green pixels, so in daylight both are effectively underexposed by one or two stops, and not all of the bits in those two channels are used. Under extreme color temperatures, one channel or another is going to be severely underexposed relative to the green channel, like the blue channel under incandescent lighting, or the red channel at dusk, which might be many stops down.
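Made-up numbers, but here's a rough sketch of what that underexposure costs: a channel sitting two stops below full scale only ever uses a quarter of the 14-bit code values, and the white-balance gain applied during raw conversion just spreads those same values apart rather than creating new ones.

```python
# Illustrative sketch with hypothetical numbers: a 14-bit channel that is
# two stops underexposed only records values in the bottom quarter of its
# range, and white-balancing it back up can't recover the missing bits.
import numpy as np

BIT_DEPTH = 14
FULL_SCALE = 2**BIT_DEPTH            # 16384 possible raw levels

# Pretend the blue channel sits 2 stops below the green channel
# (e.g. under incandescent light), so it only reaches 1/4 of full scale.
stops_under = 2
blue_raw = np.arange(0, FULL_SCALE // 2**stops_under)   # levels 0..4095 actually used

# White balance multiplies the channel back up to match green.
wb_gain = 2.0**stops_under
blue_balanced = np.round(blue_raw * wb_gain).astype(int)

print("distinct raw levels:      ", len(np.unique(blue_raw)))        # 4096
print("distinct balanced levels: ", len(np.unique(blue_balanced)))   # still 4096
print("effective bits:           ", int(np.log2(len(np.unique(blue_balanced)))))  # 12
```

So the "balanced" blue channel still carries only about 12 bits of real information in its 14-bit container, which is part of why underexposed channels come out noisier.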
Cameras have a largely linear response to light, so doubling the exposure doubles the signal value recorded. This is rather wasteful, since human vision is more sensitive to changes in dark tones than in light ones: it's frequently said that half of the data values in a raw file are allocated to just the brightest stop, half of what remains goes to the second-brightest stop, and so on, so the bulk of the tonality in an image uses only a small fraction of the data.
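To put numbers on the "half the data goes to the brightest stop" claim, here's a small sketch counting how many code values each stop occupies in a hypothetical 14-bit linear encoding:

```python
# Sketch: how many of a 14-bit linear raw file's code values land in each stop.
# The brightest stop spans half of full scale, the next stop half of what's
# left, and so on down into the shadows.
FULL_SCALE = 2**14   # 16384 levels

top = FULL_SCALE
for stop in range(1, 8):
    low, high = top // 2, top
    print(f"stop {stop} below clipping: values {low}..{high - 1} "
          f"({high - low} levels, {100 * (high - low) / FULL_SCALE:.1f}% of all values)")
    top = low
```

The brightest stop gets 8192 of the 16384 levels, the second-brightest 4096, and by the seventh stop down you're working with only a couple of hundred.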
Compressed encoding schemes, where more code values are allocated to darker tones than to lighter ones, allow fewer bits to encode the same range of tones. This kind of compression is sometimes referred to as a gamma curve.
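As a rough illustration (a plain 1/2.2 power curve, not any particular camera's or standard's actual tone curve), compare how many 8-bit code values each stop receives under linear versus gamma encoding:

```python
# Sketch: with a gamma-style curve, 8 bits stretch much further than linear
# encoding does. Count how many 8-bit codes each stop gets when a normalized
# signal (0.0..1.0) is encoded linearly vs. with a 1/2.2 power curve.
GAMMA = 2.2
LEVELS = 256   # 8-bit output

def codes_in_stop(encode, low, high):
    """Count the 8-bit codes spanned by signal values in [low, high)."""
    return round(encode(high) * (LEVELS - 1)) - round(encode(low) * (LEVELS - 1))

linear = lambda x: x
gamma  = lambda x: x ** (1.0 / GAMMA)

top = 1.0
for stop in range(1, 8):
    low = top / 2
    print(f"stop {stop}: linear {codes_in_stop(linear, low, top):3d} codes, "
          f"gamma {codes_in_stop(gamma, low, top):3d} codes")
    top = low
```

Under the power curve the darker stops get far more code values than linear encoding would give them, which is how 8 bits can still cover a usable tonal range.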
My MacBook Pro's GPU can output 10-bit (as opposed to 8-bit) and my external BenQ can display 10-bit (albeit 8-bit + FRC).
So, in theory, am I viewing 10-bit colours in my images?
Not just in theory, but in actuality. The display driver will convert the data in your image, whatever bit depth and format it happens to be in, to the 10-bit format the display provides. That doesn't mean the images will necessarily look better on this display than on a lower-bit-depth one. It's possible to see banding in the shadows of low-bit-depth images; however, I have a 10-bit display and I've never seen that kind of artifact.
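For intuition only: one common way an 8-bit code value gets expanded to a 10-bit one is bit replication (the exact method depends on the OS, driver, and display pipeline), and either way no new tonal steps are created from 8-bit source material.

```python
# Sketch of what a display pipeline conceptually does when an 8-bit value is
# shown on a 10-bit panel: the value is rescaled (here by bit replication),
# which changes the container size but adds no new tonal information.
def expand_8_to_10(v8: int) -> int:
    """Map an 8-bit code (0..255) to a 10-bit code (0..1023) by bit replication."""
    return (v8 << 2) | (v8 >> 6)

for v in (0, 1, 128, 254, 255):
    print(f"8-bit {v:3d} -> 10-bit {expand_8_to_10(v):4d}")
```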
I did notice yesterday, when I opened an image in Photoshop, that it was described as 8-bit... I'm confused. Can anyone clarify?
JPEGs store gamma-encoded image data that simplifies and compresses high-frequency textures and reduces color resolution, typically using 8 bits per color channel.
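If you want to confirm what Photoshop is reporting, a quick check with the Pillow library (the file path here is just a placeholder) shows a typical JPEG opening as an 8-bit-per-channel image:

```python
# Sketch: a baseline JPEG is 8 bits per channel regardless of the bit depth
# of the raw file it was developed from. "photo.jpg" is a placeholder path.
from PIL import Image

with Image.open("photo.jpg") as img:
    print(img.format)      # e.g. "JPEG"
    print(img.mode)        # "RGB" -> 8 bits per channel
    print(img.getbands())  # ("R", "G", "B")
```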