RAW to TIF - any data loss? Any disadvantage?

 
Perhaps I misunderstood the OP, but I don't think he is confused about NEF and TIF files. The fact is, you can do edits with your RAW converter or you can do the same thing on a TIF file that has already been converted. These edits include things like color balance, highlight recovery, shadow detail recovery and that sort of thing. I have always done as much as possible with the NEF converter before converting to 16 bit TIF and then finishing the job. The only exception is I don't sharpen NEFs. I think the OP asked an intelligent question about whether to do it like I do, or to convert to TIF in some default fashion and then do most of the editing on the TIF file. A 16 bit TIF is certainly able to represent the color space of a 12 or 14 bit sensor. The same obscured shadow detail ought to be buried in a 16 bit TIF just like it is in the original NEF and you ought to be able to recover it either way.
My original point exactly and very clearly stated.

Whether NEF and TIF files are technically similar or different doesn't interest or concern me. The point is, how different is the result of post processing TIFs in PS3 rather than NEFs in CNX2?

I've tried brief tests since and seem to find I get results I like better using NEFs in CNX2.
--
Rens
 
I guess "file structure" was the wrong term. As you correctly described (above), the Channel Mode shows the 4 pixels in each "block". I think that is quite different than what PS shows, which is after Bayer De-mosaicing.
correct. but it's not completely different -- in principle it's doing the same thing, and only displaying that particular channel. it's just doing it before bayer interpolation has been applied, whereas photoshop does it after. however, it's still decoding the color information, which is not actually contained within the raw file itself.
Not B&W as much as mono-chromatic. Each of the 4 photosites in a "block" has a colored filter on it, so each "represents" a specific part of the visible spectrum. That wavelength info is not conveyed in the photosite data, but it's elsewhere. That you decide to visualize that channel data as a B&W (which is a common method) doesn't change the fact that each photosite position, within the Bayer "block" represents a specific color. Bayer de-mosaicing doesn't magically "add" color...it just makes SWAGs about what other color values would have been, if the photosite was poly-chromatic.
correct. however, it's important to note that it doesn't record this data as color. it records it as shades of gray, in 8-16 bits, depending on your camera. it is the bayer interpolation algorithm that applies the color.
Thom writes in that linked article:

" Raw files do not store color. A photosite (individual unit in the sensor) collects light photons and converts them into electrons. The number of electrons that get collected this way are an analog value (as in "I've converted 62,138 photons into electrons"). This analog value is made into a digital value by an ADC. However, that value is simply a count of electrons. The "color" comes about because each individual photosite has one of three different color filters over it. "

I think he's making a silly distinction. The same statement could be made about a JPEG or TIFF file! If you print out the contents of any image file, regardless of the format, you just get ones and zeros... ;-)
no no, that's not silly at all. the difference is that, at any particular pixel, a color JPEG records three values, RGB, whereas a RAW file records one. it's not about it being ones and zeros, it's about how many bits of data are there and what they are describing. in the case of sensor data, it's only recording a single set of luminance values, in a single image, which the software then has to interpolate into color images. this is wholly different than simple decoding of color values which are actually stored in the file.
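A rough sketch of that storage difference (the array shapes and the RGGB layout below are illustrative assumptions for the sketch, not any particular camera's actual file layout):

```python
import numpy as np

h, w = 4, 4
rgb_image = np.zeros((h, w, 3), dtype=np.uint8)    # RGB/JPEG/TIFF style: three values per pixel
bayer_mosaic = np.zeros((h, w), dtype=np.uint16)   # raw style: one value per photosite

def cfa_color(y, x):
    """The color meaning of a mosaic sample comes only from its position (RGGB assumed)."""
    return [['R', 'G'],
            ['G', 'B']][y % 2][x % 2]

print(rgb_image.size, bayer_mosaic.size)           # 48 stored values vs 16 for the same 4x4 area
print([cfa_color(0, x) for x in range(2)],
      [cfa_color(1, x) for x in range(2)])         # ['R', 'G'] ['G', 'B']
```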
 
If I open a 16 MB Nikon NEF file with either PS3 or Capture NX2, I can then open it in PS as a 69MB TIF (or psd, whatever this is).

If I do my PP on this file in PS, will I have lost any original data? If not, is there any reason to manipulate files as NEFs rather than as TIFs?
When you convert a RAW file to a 16 bit TIFF you lose absolutely no image information. The image file contains all the image data even though it is compressed - this is because the compression is "lossless".
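For the narrow container-size part of that claim, a quick sanity check (this only shows that a 16 bit word can hold a 12 or 14 bit value exactly; what the converter's rendering choices do to the data first is a separate question, argued about further down the thread):

```python
# 14-bit sample values round-trip through a 16-bit container with no truncation.
values_14bit = list(range(2**14))
stored_16bit = [v << 2 for v in values_14bit]   # left-justify 14 bits in a 16-bit word
recovered    = [v >> 2 for v in stored_16bit]
print(recovered == values_14bit)                # True
```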

However, and it is a BIG "however", what you have "lost" is the original data itself (I understand that the data isn't really lost as such because you get to keep the RAW file) which means that you cannot "undo" any processing you have done in the same way as you can with the RAW file. Also, any subsequent changes to a TIFF are changes to the image itself and not changes to the instructions attached to the RAW data on how to convert the data into an image.

The way I work, even though it is with Canon and DPP, is to make any changes to the RAW file and save it as a 16 bit TIFF. Any subsequent changes are made to the RAW file and resaved by over-writing the original TIFF or, if it is a different version of the image (monochrome, for example), resaved as a different TIFF.
I ask this because I'm familiar with PS and mostly prefer using it.
--
Rens

PS Also posted in the Nikon D700 forum
 
If not, the TIFF will be a generic RGB file for graphics programs. If yes, programs can use that data to fine-tune their algorithms; this is clear for illumination correction, but it may be used for most editing algorithms like sharpness, contrast, or even white balance adjustments.
If I open a 16 MB Nikon NEF file with either PS3 or Capture NX2, I can then open it in PS as a 69MB TIF (or psd, whatever this is).

If I do my PP on this file in PS, will I have lost any original data? If not, is there any reason to manipulate files as NEFs rather than as TIFs?

I ask this because I'm familiar with PS and mostly prefer using it.
--
Rens

PS Also posted in the Nikon D700 forum
 
Our discussion has collapsed into a semantic one, it seems... :-(
I guess "file structure" was the wrong term. As you correctly described (above), the Channel Mode shows the 4 pixels in each "block". I think that is quite different than what PS shows, which is after Bayer De-mosaicing.
correct. but it's not completely different -- in principle it's doing the same thing, and only displaying that particular channel. it's just doing it before bayer interpolation has been applied, whereas photoshop does it after. however, it's still decoding the color information, which is not actually contained within the raw file itself.
That seems patently incorrect. If the RAW file didn't contain color information, then there would be no possibility of rendering it to a RGB version (w/o just making up random colors!). My contention is that every color image format, including all the versions of RAW, encodes color by position within the file format. There are (at least) two ways to do this that I can think of...
  1. Locate all the similar color values together...like have an area that contains only the red photosite data and another that contains only the green photosite data. These would probably be called the "red block" and the "green block". Each one would have identical structure, representing the X-Y sensor dimensions. The header of the file would probably contain 1) index values to the start of each color block and 2) a value representing the size of each block.
  2. Locate all photosite data elements exactly as received from the sensor. This will be in strings of RGRGRGRGRGRGRGRGR... and GBGBGBGBGBGBGBGBG... Each row of this data represents a physical row of photosites. Again, the header will have values that tell a reading application where all this is located and how big it is.
In both these, the color information is a location, either which block or which row/odd-even value.

In RGB files, the color information is still encoded by position. Each 24-bit or 32-bit or 48-bit word has the colors in R-G-B order. By knowing this order and the size of the word, applications can access each color "channel" separately or use all three to blend a color at that position in the image.
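A small sketch of that "color encoded by position" idea for a packed RGB word (the 0xRRGGBB layout here is just one common convention, assumed for illustration):

```python
pixel_word = 0x3A7FC2            # one 24-bit pixel, channels packed in R-G-B order

r = (pixel_word >> 16) & 0xFF    # first 8 bits  -> red
g = (pixel_word >> 8) & 0xFF     # middle 8 bits -> green
b = pixel_word & 0xFF            # last 8 bits   -> blue

print(r, g, b)                   # 58 127 194
```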
Not B&W as much as mono-chromatic. Each of the 4 photosites in a "block" has a colored filter on it, so each "represents" a specific part of the visible spectrum. That wavelength info is not conveyed in the photosite data, but it's elsewhere. That you decide to visualize that channel data as a B&W (which is a common method) doesn't change the fact that each photosite position, within the Bayer "block" represents a specific color. Bayer de-mosaicing doesn't magically "add" color...it just makes SWAGs about what other color values would have been, if the photosite was poly-chromatic.
correct. however, it's important to note that it doesn't record this data as color. it records it as shades of gray, in 8-16 bits, depending on your camera. it is the bayer interpolation algorithm that applies the color.
Again, this is terribly semantic. In an RGB file, the word that has the 3 color values is just like the three shades of grey that you think are in a RAW file. There is no difference.

As an overly simple example, let's take 4 photosites in an RGGB array and record their "RAW" data:

R G
G B

6 4
2 8

The RAW data is 6,4,2,8...the color code is R,G,G,B.

Let's surround these 4 photosites with 12 zero photosites (because in order to perform matrix operations, we must have some data there). Now the 4X4 array looks like:

0 0 0 0
0 6 4 0
0 2 8 0
0 0 0 0

The code for this data is:

B G B G
G R G R
B G B G
G R G R

We can demosaic this data and generate four RGB words by scanning a 3X3 window over it, keeping the measured value at each position and averaging all the same-colored photosites in the window for the two missing channels:

6/1.5/2, 3/4/4, 3/2/4, 1.5/1.5/8

Thus, in this overly simple demosaicing scheme, the 2X2 matrix becomes:

6/1.5/2 | 3/4/4
3/2/4 | 1.5/1.5/8

Each of those values, such as the "6", is what you call a "shade of grey". If you and I didn't know that the first value in each grouping represented "red", we would not be able to tell by looking at the value itself!
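For anyone who wants to play with the toy example, here is a minimal sketch of that 3X3-window scheme (the arrays and the simple averaging rule are exactly the ones used above; real converters use far more elaborate interpolation):

```python
import numpy as np

raw = np.array([[0, 0, 0, 0],
                [0, 6, 4, 0],
                [0, 2, 8, 0],
                [0, 0, 0, 0]], dtype=float)

cfa = np.array([['B', 'G', 'B', 'G'],
                ['G', 'R', 'G', 'R'],
                ['B', 'G', 'B', 'G'],
                ['G', 'R', 'G', 'R']])

def demosaic_pixel(y, x):
    """Keep the measured value for this photosite's own color; for the two
    missing colors, average every same-colored photosite in the 3x3 window."""
    rgb = {}
    for color in 'RGB':
        if cfa[y, x] == color:
            rgb[color] = raw[y, x]
        else:
            vals = [raw[j, i]
                    for j in range(y - 1, y + 2)
                    for i in range(x - 1, x + 2)
                    if cfa[j, i] == color]
            rgb[color] = sum(vals) / len(vals)
    return rgb['R'], rgb['G'], rgb['B']

for y, x in [(1, 1), (1, 2), (2, 1), (2, 2)]:
    print((y, x), demosaic_pixel(y, x))
# (1, 1) (6.0, 1.5, 2.0)
# (1, 2) (3.0, 4.0, 4.0)
# (2, 1) (3.0, 2.0, 4.0)
# (2, 2) (1.5, 1.5, 8.0)
```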

--
Charlie Davis
Nikon 5700, Sony R1, Nikon D50, Nikon D300
HomePage: http://www.1derful.info
“...photography for and of itself – photographs taken
from the world as it is – are misunderstood as a
collection of random observations and lucky moments...
Paul Graham
 
Thom writes in that linked article:

" Raw files do not store color. A photosite (individual unit in the sensor) collects light photons and converts them into electrons. The number of electrons that get collected this way are an analog value (as in "I've converted 62,138 photons into electrons"). This analog value is made into a digital value by an ADC. However, that value is simply a count of electrons. The "color" comes about because each individual photosite has one of three different color filters over it. "

I think he's making a silly distinction. The same statement could be made about a JPEG or TIFF file! If you print out the contents of any image file, regardless of the format, you just get ones and zeros... ;-)
no no, that's not silly at all. the difference is that, at any particular pixel, a color JPEG records three values, RGB, whereas a RAW file records one. it's not about it being ones and zeros, it's about how many bits of data are there and what they are describing. in the case of sensor data, it's only recording a single set of luminance values, in a single image, which the software then has to interpolate into color images. this is wholly different than simple decoding of color values which are actually stored in the file.
Again that's a semantic argument. Consider this example:

"173540913652586754002..." represents some image data...an unknown number of pixels...an unknown bit-depth...an unknown format. We don't know if it's a B&W image or a color image...if it's an RGB format of some type or a RAW format of some type. We don't know how to "parse" it!

It could be:
  • 173540,913652,586754,002...
  • 17354,09136,52586,75400,2...
  • 1735,4091,3652,5867,5400,2...
  • 173,540,913,652,586,754,002
  • 17,35,40,91,36,52,58,67,54,00,2...
Only by knowing the position of the delimiters and the meaning of the sequences can we know the "color".

The first line could represent RGB data, such that 17=R, 35=G, and 40=B of the first pixel (where that "first" pixel is located, we can't tell, just by inspection of the data). The last line could represent RAW data, such that 17=R, 35=G, 40=G, and 91=B. I can't see that the first parsing scheme is more "colorful" than the last!
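The same digit string, parsed both ways (nothing here is a real file format; it just mirrors the two interpretations above):

```python
digits = "173540913652586754002"

# Parsed as packed RGB pixels: each 6-digit group is one pixel, 2 digits per channel.
rgb_pixels = [(g[0:2], g[2:4], g[4:6])
              for g in (digits[i:i + 6] for i in range(0, 18, 6))]

# Parsed as a Bayer mosaic: each 2-digit value is one photosite, and its color
# (R, G, G, B, ...) is known only from its position in the pattern.
photosites = [digits[i:i + 2] for i in range(0, 20, 2)]

print(rgb_pixels[0])     # ('17', '35', '40')        -> R=17, G=35, B=40
print(photosites[:4])    # ['17', '35', '40', '91']  -> R, G, G, B by position
```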

--
Charlie Davis
Nikon 5700, Sony R1, Nikon D50, Nikon D300
HomePage: http://www.1derful.info
“...photography for and of itself – photographs taken
from the world as it is – are misunderstood as a
collection of random observations and lucky moments...
Paul Graham
 
If not, the TIFF will be a generic RGB file for graphics programs. If yes, programs can use that data to fine-tune their algorithms; this is clear for illumination correction, but it may be used for most editing algorithms like sharpness, contrast, or even white balance adjustments.
Yes, but once you have made any assumptions regarding sharpness, contrast, WB, etc, you can't go back, even with a TIFF. AND here is the kicker...when you render a RAW file to produce a TIFF file, all programs must make some assumptions...they either have their own defaults, your defaults, or read the defaults that the camera used to render JPEG files (which are stored in the RAW file). Sure, a 16-bit TIFF has "all" the data, but it's been fiddled with, sometimes heavily.

Take one example...headroom. A given camera might establish a white "threshold" about 1-stop from the absolute max data values and use this to create the in-camera JPEGs it produces. The curve that creates that 1-stop threshold is stored in the RAW format. When you launch the manufacturer's RAW editor (DPP or NX2, for example), it reads those settings, including the curve. You now have an RGB image on your screen that looks "exactly" like a JPEG of that same image. When you save the image as a TIFF, it has that same curve applied to the data as was applied to the JPEG...this truncates the white data. You no longer have about 1-stop more white data in that TIFF file...the RAW file still has it (if you haven't destroyed it).
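A rough numerical sketch of that headroom point (the numbers and the hard clip stand in for a real rendering curve; actual cameras and converters obviously differ):

```python
import numpy as np

full_scale = 2**14 - 1                                    # 14-bit raw full scale
raw_values = np.array([8000, 9000, 12000, 16383])         # four distinct raw highlight levels

white_point = full_scale / 2                              # treat ~1 stop below max as "white"
rendered = np.clip(raw_values / white_point, 0.0, 1.0)    # default-style rendering curve
tiff16 = np.round(rendered * 65535).astype(np.uint16)

print(tiff16)      # the three brightest raw levels all land on 65535 (pure white)
print(raw_values)  # ...while the raw data still tells them apart
```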

Subsequent edits of the 16-bit TIFF image are little better than they would be with a 16-bit JPEG 2000 image...the only difference is that the JPEG file is compressed more (lossy) and the TIFF was compressed less (non-lossy). But with mild JPEG compression, you can't see the difference. ;-)

Bottom Line: Do as much editing of your RAW images as possible before rendering an RGB file (of any sort), because you DO lose some information...

--
Charlie Davis
Nikon 5700, Sony R1, Nikon D50, Nikon D300
HomePage: http://www.1derful.info
“...photography for and of itself – photographs taken
from the world as it is – are misunderstood as a
collection of random observations and lucky moments...
Paul Graham
 
That seems patently incorrect. If the RAW file didn't contain color information, then there would be no possibility of rendering it to a RGB version (w/o just making up random colors!).
sure there is. it's not that hard, actually. the software just has to know which pixel has which color filter over it, and interpolate the other values. the first part is why the specific camera has to be supported in your RAW program or plugin.
In both these, the color information is a location, either which block or which row/odd-even value.
yes. more like number 2.
In RGB files, the color information is still encoded by position. Each 24-bit or 32-bit or 48-bit word has the colors in R-G-B order. By knowing this order and the size of the word, applications can access each color "channel" separately or use all three to blend a color at that position in the image.
sure. the difference is, as i stated above, that you're recording three colors for each pixel in any RGB format, such as JPEG, and only one in RAW. i'm not sure why this difference is so hard to understand.
Again, this is terribly semantic. In an RGB file, the word that has the 3 color values is just like the three shades of grey that you think are in a RAW file. There is no difference.
no, "3 is different than 1" is not semantics.

arguing that color = monochrome, on the other hand, is.
Each of those values, such as the "6", is what you call a "shade of grey". If you and I didn't know that the first value in each grouping represented "red", we would not be able to tell by looking at the value itself!
that's not exactly the point. the point is that a RAW file only stores luminance values at each pixel, as opposed to color values. it's not storing things in RGB.
 
We're talking here about two possible representations of RAW files:
  1. One monochrome image at full sensor resolution, with some function that tells us the primary color recorded by each pixel.
  2. Four monochrome images at half sensor resolution, each tagged with a primary color and a shift relative to one of the others.
Those two alternative representations are completely equivalent.
they are, yes. however, one is actually the way it's stored, and one is not.
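A quick sketch of those two representations and why they carry identical information (RGGB layout assumed; the numbers are just stand-ins for sensor data):

```python
import numpy as np

mosaic = np.arange(16, dtype=np.uint16).reshape(4, 4)   # representation 1: one full-res mosaic

# Representation 2: four half-resolution single-color planes, each offset
# ("shifted") relative to the others within the repeating 2x2 pattern.
planes = {'R':  mosaic[0::2, 0::2],
          'G1': mosaic[0::2, 1::2],
          'G2': mosaic[1::2, 0::2],
          'B':  mosaic[1::2, 1::2]}

# Reassembling the planes reproduces the original mosaic exactly.
rebuilt = np.empty_like(mosaic)
rebuilt[0::2, 0::2] = planes['R']
rebuilt[0::2, 1::2] = planes['G1']
rebuilt[1::2, 0::2] = planes['G2']
rebuilt[1::2, 1::2] = planes['B']
print(np.array_equal(rebuilt, mosaic))                  # True
```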
 
Again that's a semantic argument. Consider this example:
this is not semantics. you are attempting to confound a perfectly simple point with numbers here. the fact that some knowledge is needed to decode any file does not mean that all files are encoded in the same manner.

if they were effectively the same, what purpose do you suppose the bayer interpolation serves, exactly?
 
if they were effectively the same, what purpose do you suppose the bayer interpolation serves, exactly?
Here's another way to express that:

A TIFF image represents exactly one image. To change it properly, the data has to be changed. The decoding necessary to view the image is supposed to be fixed to produce the correct result.

The data in a RAW file is fixed, is never changed, and represents an infinite number of different possible images that depend on how the data is decoded. There is no one single image, there is no one single way to decode an image from the data. And the most significant point is that the data is never edited and saved as a new set.

In essence, a RAW file contains sensor data and a TIFF image file contains one single image. They are not even close to being the same thing.
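A toy illustration of that "sensor data plus a decode" idea (the function and parameter names are made up for the sketch; they are not any real converter's API):

```python
import numpy as np

raw_data = np.array([1200, 300, 640, 2100], dtype=np.uint16)   # never modified

def render(raw, exposure_comp=0.0, white_point=2048):
    """One of many possible decodes of the same raw samples."""
    scaled = (raw.astype(float) / white_point) * (2.0 ** exposure_comp)
    return np.clip(scaled, 0.0, 1.0)

image_a = render(raw_data)                      # one rendering
image_b = render(raw_data, exposure_comp=1.0)   # a brighter, different image
print(image_a)
print(image_b)
print(raw_data)                                 # the raw samples are unchanged either way
```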
 
Perhaps I misunderstood the OP, but I don't think he is confused about NEF and TIF files. The fact is, you can do edits with your RAW converter or you can do the same thing on a TIF file that has already been converted. These edits include things like color balance, highlight recovery, shadow detail recovery and that sort of thing. I have always done as much as possible with the NEF converter before converting to 16 bit TIF and then finishing the job. The only exception is I don't sharpen NEFs. I think the OP asked an intelligent question about whether to do it like I do, or to convert to TIF in some default fashion and then do most of the editing on the TIF file. A 16 bit TIF is certainly able to represent the color space of a 12 or 14 bit sensor. The same obscured shadow detail ought to be buried in a 16 bit TIF just like it is in the original NEF and you ought to be able to recover it either way.
My original point exactly and very clearly stated.

Whether NEF and TIF files are technically similar or different doesn't interest or concern me. The point is, how different is the result of post processing TIFs in PS3 rather than NEFs in CNX2?

I've tried brief tests since and seem to find I get results I like better using NEFs in CNX2.
--
Rens
  • NEF and TIF are different and it should concern and interest you. .NEF is cooking instructions and ingredients in one file, .TIF is a mixed batter and all ingredients blended. With .NEF, you can avoid sugar if you don't like sugar, with .TIF, the sugar has already been mixed in, you have to find some other way to change the sweetness of the cake - you cannot take out the sugar.
  • Post processing in PS3 and CNX2 (CNX2 is alien to me, PS I have taken for a drive) is completely different from the file itself. I would expect that each program has a different calculation engine, different sliders, different behaviour. By usage and habit, you may prefer to use one program more than another just because you like how one program has made their sliders work. The analogy used in the first point is a little crude - it may be that your favouring one program over another is a more powerful tool / effect / result than whether the two files are different.
--



Ananda
http://anandasim.blogspot.com

'There are a whole range of greys and colours - from
the photographer who shoots everything in iA / green
AUTO to the one who shoots Manual Everything. There
is no right or wrong - there are just instances of
individuality and individual choice.'
 
When you convert a RAW file to a 16 bit TIFF you lose absolutely no image information. The image file contains all the image data even though it is compressed - this is because the compression is "lossless".
That is incorrect. I won't bore you with the math unless you or someone else asks, but here are some numbers for the color variations available at different points in the processing of an image:

14-bit Bayer encoded camera raw data:
8.5e+37 colors per pixel (minimum) of the final image

Computed RGB 32-bit values within the software before saving to some given image file format:
7.9e+28 colors per pixel of the final image

RGB 16-bit values saved to a TIFF image format:
2.8e+14 colors per pixel of the final image

RGB 8-bit values saved to a JPEG image format:
1.67e+7 colors per pixel of the final image

Virtually all computer monitors, with RGB 6-bit depth:
262,144 colors per pixel of the final image.

Rather clearly at each stage along the way there is a significant "loss" of possible variation. To make a TIFF image from the RAW data, only a very tiny subset of the data is retained. Originally the RAW data can have 85 billion billion billion billion variations per pixel, but the TIFF image format can only save some 280 trillion of those variations, which is a very tiny fraction of what the RAW data was able to represent.
The way I work, even though it is with Canon and DPP, is to make any changes to the RAW file and save it as a 16 bit TIFF.
You can't really do that. What you are actually doing is converting the RAW data to an RGB image, each and every time. You are not making changes to the RAW file at all. You might be generating the same RGB image, but more likely you are changing the way it is generated and are then, each time, editing an entirely different RGB image. Regardless there are no changes to the RAW data, only to RGB data.
Any subsequent changes are made to the RAW file and resaved by over-writing the original TIFF or, if it is a different version of the image (monochrome, for example), resaved as a different TIFF.
But only the TIFF is ever changed. Not the RAW file or the RAW data.
 
That seems patently incorrect. If the RAW file didn't contain color information, then there would be no possibility of rendering it to a RGB version (w/o just making up random colors!).
sure there is. it's not that hard, actually. the software just has to know which pixel has which color filter over it, and interpolate the other values. the first part is why the specific camera has to be supported in your RAW program or plugin.
He has an excellent point! The RAW data does in fact contain color information encoded in the data.

The mistaken concept is that each 12 or 14 bit value in the RAW file represents one pixel in the final image. In fact each image pixel is the result of decoding data from at least a 3x3 matrix of those 14-bit values in the RAW data.

Just as an RGB pixel has 3 channels, a RAW pixel has at least 9 channels. So a 16 bit RGB pixel has 2^(3*16) possible color values, an 8 bit RGB pixel has 2^(3*8) possible color values... and both are far far outsized by a 14-bit depth raw file with 2^(9*14) possible variations per pixel in the final image.
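The powers of two behind those counts are easy to check (this only verifies the arithmetic quoted in this sub-thread; whether "possible variations" is the same thing as "image information" is exactly what is being argued about):

```python
for label, bits in [("14-bit Bayer, 3x3 window: 2^(9*14)", 9 * 14),
                    ("32-bit RGB working values: 2^(3*32)", 3 * 32),
                    ("16-bit TIFF: 2^(3*16)", 3 * 16),
                    ("8-bit JPEG: 2^(3*8)", 3 * 8),
                    ("6-bit monitor: 2^(3*6)", 3 * 6)]:
    print(f"{label} = {2 ** bits:.3g}")
# 14-bit Bayer, 3x3 window: 2^(9*14) = 8.51e+37
# 32-bit RGB working values: 2^(3*32) = 7.92e+28
# 16-bit TIFF: 2^(3*16) = 2.81e+14
# 8-bit JPEG: 2^(3*8) = 1.68e+07
# 6-bit monitor: 2^(3*6) = 2.62e+05
```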
 
When you convert a RAW file to a 16 bit TIFF you lose absolutely no image information. The image file contains all the image data even though it is compressed - this is because the compression is "lossless".
That is incorrect. I won't bore you with the math unless you or someone else asks, but here are some numbers for the color variations available at different points in the processing of an image:

14-bit Bayer encoded camera raw data:
8.5e+37 colors per pixel (minimum) of the final image

Computed RGB 32-bit values within the software before saving to some given image file format:
7.9e+28 colors per pixel of the final image

RGB 16-bit values saved to a TIFF image format:
2.8e+14 colors per pixel of the final image

RGB 8-bit values saved to a JPEG image format:
1.67e+7 colors per pixel of the final image

Virtually all computer monitors, with RGB 6-bit depth:
262,144 colors per pixel of the final image.

Rather clearly at each stage along the way there is a significant "loss" of possible variation. To make a TIFF image from the RAW data, only a very tiny subset of the data is retained. Originally the RAW data can have 85 billion billion billion billion variations per pixel, but the TIFF image format can only save some 280 trillion of those variations, which is a very tiny fraction of what the RAW data was able to represent.
No, it is correct - no image information is lost when converting to TIFF. Certainly the TIFF does not contain all the information of the RAW data because the RAW data is not an image file. All 14 bits of each pixel colour are retained in a TIFF image. We are not talking about subsequent losses in the monitor or printing process, which I agree can be substantial.
The way I work, even though it is with Canon and DPP, is to make any changes to the RAW file and save it as a 16 bit TIFF.
You can't really do that. What you are actually doing is converting the RAW data to an RGB image, each and every time. You are not making changes to the RAW file at all. You might be generating the same RGB image, but more likely you are changing the way it is generated and are then, each time, editing an entirely different RGB image. Regardless there are no changes to the RAW data, only to RGB data.
I think you misunderstand. I am making changes to the RAW file but not the RAW data. I make changes to the metafile, which is a part of the RAW file and which, as you know, acts as a "recipe" for making a JPEG or TIFF - even though no changes are made to the RAW data itself, changes to the metafile are indeed changes to the RAW file.
Any subsequent changes are made to the RAW file and resaved by over-writing the original TIFF or, if it is a different version of the image (monochrome, for example), resaved as a different TIFF.
But only the TIFF is ever changed. Not the RAW file or the RAW data.
Sorry, the RAW file is changed, the RAW data is not.
 
Here's another way to express that:

A TIFF image represents exactly one image. To change it properly, the data has to be changed. The decoding necessary to view the image is supposed to be fixed to produce the correct result.

The data in a RAW file is fixed, is never changed, and represents an infinite number of different possible images that depend on how the data is decoded. There is no one single image, there is no one single way to decode an image from the data. And the most significant point is that the data is never edited and saved as a new set.

In essence, a RAW file contains sensor data and a TIFF image file contains one single image. They are not even close to being the same thing.
This is clearly put.
NEF and TIF are different and it should concern and interest you.
Yes, of course it interests me. But only in the same way that working my computer interests me. I need to know the different effects of different actions and programs but not how these are achieved.

Similarly I want to know the effects of using either TIFs or NEFs to PP, without needing to know the mechanism by which they achieve these effects.

Chris59 seems to be saying (my summary):
I would lose no information PPing a TIF file as opposed to a NEF file. The disadvantage would be that, having saved this, the changes would be non-reversible; I'd need to go back to the NEF and start again to make changes without losing information.
This was my guess from the start.

--
Rens
 
When you convert a RAW file to a 16 bit TIFF you lose absolutely no image information. The image file contains all the image data even though it is compressed - this is because the compression is "lossless".
That is incorrect. I won't bore you with the math unless you or someone else asks, but here are some numbers for the color variations available at different points in the processing of an image:

14-bit Bayer encoded camera raw data:
8.5e+37 colors per pixel (minimum) of the final image
...
RGB 16-bit values saved to a TIFF image format:
2.8e+14 colors per pixel of the final image
...
Rather clearly at each stage along the way there is a significant "loss" of possible variation. To make a TIFF image from the RAW data, only a very tiny subset of the data is retained. Originally the RAW data can have 85 billion billion billion billion variations per pixel, but the TIFF image format can only save some 280 trillion of those variations, which is a very tiny fraction of what the RAW data was able to represent.
No, it is correct - no image information is lost when converting to TIFF. Certainly the TIFF does not contain all the information of the RAW data because the RAW data is not an image file.
You need to rethink that. It contradicts itself.

...
But only the TIFF is ever changed. Not the RAW file or the RAW data.
Sorry, the RAW file is changed, the RAW data is not.
Ok, that is true if your editor actually edits the metadata. It is still an obfuscation of the fact that you do convert the exact same RAW data to a new RGB image every single time you "edit" it.
 
...the point is that a RAW file only stores luminance values at each pixel, as opposed to color values. it's not storing things in RGB.
The luminance values recorded in a RAW file are color luminance values. An RGB file also stores color luminance values, but stored in different places. In both cases, there is color information (otherwise, we'd see no color), but it's organized differently. In the case of an RGB file, most of the colors are SWAGs due to the Bayer de-mosaicing process.

--
Charlie Davis
Nikon 5700, Sony R1, Nikon D50, Nikon D300
HomePage: http://www.1derful.info
“...photography for and of itself – photographs taken
from the world as it is – are misunderstood as a
collection of random observations and lucky moments...
Paul Graham
 
That seems patently incorrect. If the RAW file didn't contain color information, then there would be no possibility of rendering it to a RGB version (w/o just making up random colors!).
sure there is. it's not that hard, actually. the software just has to know which pixel has which color filter over it, and interpolate the other values. the first part is why the specific camera has to be supported in your RAW program or plugin.
He has an excellent point! The RAW data does in fact contain color information encoded in the data.

The mistaken concept is that each 12 or 14 bit value in the RAW file represents one pixel in the final image. In fact each image pixel is the result of decoding data from at least a 3x3 matrix of those 14-bit values in the RAW data.
Another mistaken concept is that a Bayer sensor has "pixels"...it doesn't. We are loose with our language! A Bayer sensor has photosites, which, after demosaicing, become pixels. But my argument with arachnophilia is whether the photosite data recorded in a RAW file has color information. It didn't matter what we called things. Heck, even the sensor technologists around these parts use "pixel" and "photosite" interchangeably. :-(

--
Charlie Davis
Nikon 5700, Sony R1, Nikon D50, Nikon D300
HomePage: http://www.1derful.info
“...photography for and of itself – photographs taken
from the world as it is – are misunderstood as a
collection of random observations and lucky moments...
Paul Graham
 
