Sensor technical question

Boyers238870

Reading through Bruce Fraser's 'Real World Camera Raw with Adobe PS CS', I was surprised to learn that RAW data from 'mosaic' sensors are still essentially monochrome!

Am I right in assuming that X3F files are therefore the only RAW files to have colour embedded?
 
Bayer raw data is in an array where half the data are green measurements, 25% are red measurements, and 50% are blue measurements. You could display that as a gray-scale image, but it wouldn't make much sense. You could think of that as the simplest possible de-mosaic algorithm, one that uses the assumption of zero chroma to try to get extra luminance resolution out of the color samples. It wouldn't make a very good grayscale image, though, because it would alias all the actual color into apparent fine grayscale patterns.

The Foveon sensor's data is exactly the same. Each pixel sensor provides a measurement through one spectral filter. The difference is in the physical locations of the sensors and the work that has to be done to arrive at a color image in which each pixel has three components. The Foveon starts a lot closer to what you want to get out.
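
To make that concrete, here's a rough toy sketch in Python/NumPy (my own illustration, not anything from Fraser's book; the numbers are made up) of what a flat blue sky looks like if you display RGGB Bayer raw data directly as grayscale:

import numpy as np

# Made-up reflectances for a flat blue sky: dim in red, brighter in green,
# bright in blue.
scene_r, scene_g, scene_b = 0.2, 0.4, 0.9

h, w = 4, 4
raw = np.empty((h, w))
raw[0::2, 0::2] = scene_r   # red photosites
raw[0::2, 1::2] = scene_g   # green photosites (half of all sites)
raw[1::2, 0::2] = scene_g
raw[1::2, 1::2] = scene_b   # blue photosites

print(raw)
# Even though the scene is one flat colour, the 'grayscale' raw values form a
# repeating 2x2 pattern: the colour has aliased into fake luminance detail.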

j
 
Reading through Bruce Fraser's 'Real World Camera Raw with Adobe PS
CS', I was surprised to learn that RAW data from 'mosaic' sensors
are still essentially monochrome!
Am I right in assuming that X3F files are therefore the only RAW
files to have colour embedded?
Kinda :) To a computer, there is no such thing as color, so an RGB image is made up of three monochrome images. With a Bayer sensor, you don't get a true monochrome image (like what we would think of from B+W photography) directly from the sensor because each photosite is filtered for a different color. Those filters work pretty much the same as colored filters work in B+W photography. With the Foveon, you get one blue-filtered image, one green-filtered image and one red-filtered image, just as though you had taken three B+W photographs with different filters. If you look at a picture of the sky from a Sigma's red channel, it'll be dark - in the blue channel it'll be light. With a Bayer sensor, one dark pixel will be next to a light pixel and the sky will look like a checkerboard.

So yes, the RAW data from a Bayer sensor is essentially monochrome, but essentially unusable as such, whereas the RAW data from a Sigma is truly monochrome and very usable. It kicks a$$ :)
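
If it helps to see the 'three B+W photographs through different filters' idea in code, here's a toy Python/NumPy sketch (values invented purely for illustration):

import numpy as np

# Toy full-colour image of a flat blue sky: shape (height, width, 3) = R, G, B.
sky_rgb = np.dstack([np.full((4, 4), v) for v in (0.2, 0.4, 0.9)])

red_channel = sky_rgb[..., 0]    # dark (~0.2), like B+W film behind a red filter
blue_channel = sky_rgb[..., 2]   # light (~0.9), like B+W film behind a blue filter

# A Foveon-style sensor records all three of these planes at every pixel;
# a Bayer sensor records only one of them per photosite, so a flat blue sky
# comes out as alternating ~0.2 / ~0.4 / ~0.9 values - the checkerboard look.
print(red_channel.mean(), blue_channel.mean())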
 
Where do people come up with this stuff?
Reading through Bruce Fraser's 'Real World Camera Raw with Adobe PS
CS', I was surprised to learn that RAW data from 'mosaic' sensors
are still essentially monochrome!
I guess SG will be over to jump in on this one.
Bayer raw data is in an array where half the data are green
measurements, 25% are red measurements, and 50% are blue
measurements.
I don't mean to nitpick, but that adds up to 125%.

Isn't it 50% G, 25% R, 25% B?
--
http://www.troyammons.com
http://www.pbase.com/tammons
http://www.troyammons.deviantart.com
 
Okay, I think I see what he is driving at if you think of each channel as if it were split, with the 4th channel being the RGB composite.

Still, it does not make a whole lot of sense to me, especially when considering raw data.
--
http://www.troyammons.com
http://www.pbase.com/tammons
http://www.troyammons.deviantart.com
 
Not to argue or anything, but it does kind of make sense with the 3 channels and the fourth, the RGB channel, but computers don't even see B+W, much less color, only binary code.

Without a ton of interpolation so that our simple brains and eyes can understand it, it's just a bunch of numbers in its most basic computer form.

Funny to really think that our eyes and brains are already doing that. Continuous picture taking, coding into memory, recall later, er um sometimes. Okay, the old noggin is doing a pretty good job.

I guess in about 5,000 years, if the human race is still around, maybe our descendants will be able to look at binary code and see a photo. Of course we could always take a shortcut and implant a micro supercomputer interface.

Geez, I must be getting bored coming up with this stuff.
--
http://www.troyammons.com
http://www.pbase.com/tammons
http://www.troyammons.deviantart.com
 
The answer to your question is yes, Bayer sensors are monochrome. The reason is that they sense monochrome data at every photosite. The sensor is just a grayscale device with a colored plastic mosaic glued in front of it.

The way you get color out of a Bayer sensor is by digitally interpolating the missing color channels at each monochrome photosite, using neighboring data. The magic of digital lets you borrow the color from the pixel next door. The end result of the borrowing is that the recorded pixels are no longer optically discrete entities, which is why it is incorrect to think that an "8MP" Bayer can produce 8M optically discrete full color pixels. It can't. In fact, it can't even come close.
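
To make the 'borrow from the pixel next door' idea concrete, here is a minimal Python/NumPy sketch, assuming an RGGB layout and the crudest possible averaging (real raw converters use far smarter, edge-aware algorithms):

import numpy as np

def demosaic_green(raw):
    # Toy reconstruction of just the green channel from an RGGB mosaic.
    # Measured green values are kept; at red and blue photosites the four
    # green neighbours are averaged (zero padding at the edges, so borders
    # are handled crudely).
    h, w = raw.shape
    green = np.zeros((h, w), dtype=float)
    green_mask = np.zeros((h, w), dtype=bool)
    green_mask[0::2, 1::2] = True   # green sites on the red rows
    green_mask[1::2, 0::2] = True   # green sites on the blue rows
    green[green_mask] = raw[green_mask]

    padded = np.pad(green, 1)
    for y in range(h):
        for x in range(w):
            if not green_mask[y, x]:
                # up, down, left, right neighbours - all green sites in a Bayer grid
                neighbours = (padded[y, x + 1] + padded[y + 2, x + 1] +
                              padded[y + 1, x] + padded[y + 1, x + 2])
                green[y, x] = neighbours / 4.0   # borrowed, not measured
    return green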

If you've ever upscaled an image using a program like Photoshop, you'll understand what is going on. When you upscale, the computer interpolates digital RGB placeholders and inserts them between the original pixels, using various digital algorithms. Some algorithms look better than others, but in all cases, the amount of core data is unchanged by digital upscaling. Bayer cameras have already upscaled their images when they are output, using the same concept.

Foveon colors aren't the result of digital interpolation, but are rather optical, as each photosite has complete RGB data. No borrowing of things sensed elsewhere on the sensor is required. So every pixel in the output image is an optically discrete entity; it's not an upscaled image derived from lesser core data.

Although the RAW files themselves don't really drive that, the physical sensor geometry does.
 
I really didn't want to mention anything, I mean everybody makes mistakes sooner or later, but I could just see somebody coming back later on with the idea that Bayer sensors have 50% blue pixels.

JL is dead on all the time and has a very good understanding of digital camera technology.
Bayer raw data is in an array where half the data are green
measurements, 25% are red measurements, and 50% are blue
measurements.
I don't mean to nitpick, but that adds up to 125%.

Isn't it 50% G, 25% R, 25% B?
Oops.

j
--
http://www.troyammons.com
http://www.pbase.com/tammons
http://www.troyammons.deviantart.com
 
Thanks everyone - all comments very useful, especially the last from SigmaSD9!
 
One point that nobody else has mentioned: the RAW file from Bayer cameras only represents a single array of pixels, i.e. those of the sensor behind the Bayer filter.

With the Foveon cameras, the data contains the representation of 3 arrays of pixels, Red, Green and Blue, one from each layer of the Foveon chip.

In that respect the structure of the RAW file is very different.

For that reason the (uncompressed) RAW file from my 5 Mpixel Minolta is 10 Megabytes, whereas the RAW file from the 3.43 Mpixel SD9 would be over 20 Megabytes, were it not for the file compression.

The structure of an Adobe DNG file is closest to that of the Foveon RAW structure as the file converter can separate the single layer structure of a Bayer RAW file into 3 separate layers whilst converting to DNG.

--
Thanks,
Gary.
 
It is amazing to see how you just quote pages and documents that
contradict your claim to support your claim, this one is even more
obvious than the X3 pixel page last time.
I couldn't see how it even had any relationship to his point. Slide 11 does have the word "monochrome" in it with respect to how one can sense color using a monochrome sensor with filters over it, but does that bear on the question of whether the raw file represents colors? Here's what it says as one of the color separation methods:

Color Filter Array
Monochrome sensor
Each pixel has a specific bandpass filter on it

Yeah, I guess I can see how that's a contradiction of the assertion that the raw file contains only monochrome data. At best (worst) it's irrelevant to the question.

j
 
One point that nobody else has mentioned, the RAW file from Bayer
cameras only represents a single array of pixels, i.e. those of the
sensor behind the Bayer filter.

With the Foveon cameras, the data contains the representation of 3
arrays of pixels, Red, Green and Blue, one from each layer of the
Foveon chip.

In that respect the structure of the RAW file is very different.
That's a difference mostly in how you are thinking about the data. Is it necessarily a difference in how the data are stored or represented? I'm not so sure.

The Bayer data can certainly be interpreted as three arrays, one for each color, if you want to do so. The details of how the numbers are laid out in the file are perhaps more like you describe, but I'm not so sure. The three arrays of the Foveon data may be interleaved RGBRGB just as the Bayer are interleaved RGRGRG...GBGBGB...
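
For what it's worth, that reinterpretation is trivial, e.g. in Python/NumPy (a sketch assuming an RGGB layout; how the bytes are actually ordered inside any particular raw format is a separate question):

import numpy as np

def split_bayer_rggb(raw):
    # Reinterpret a single RGGB mosaic as three sparse colour arrays of the
    # same shape, with NaN where that colour was not measured. Nothing is
    # interpolated - it is just a different way of indexing the same numbers.
    r = np.full(raw.shape, np.nan)
    g = np.full(raw.shape, np.nan)
    b = np.full(raw.shape, np.nan)
    r[0::2, 0::2] = raw[0::2, 0::2]
    g[0::2, 1::2] = raw[0::2, 1::2]
    g[1::2, 0::2] = raw[1::2, 0::2]
    b[1::2, 1::2] = raw[1::2, 1::2]
    return r, g, b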
For that reason the (uncompressed) RAW file from my 5 Mpixel
Minolta is 10 Megabytes, whereas the RAW file from the 3.43 Mpixel
SD9 would be over 20 Megabytes, were it not for the file
compression.
At 1.5 bytes (12 bits) per sample, the 5 MP camera needs about 7.5 megabytes and the SD9's roughly 10 million samples (3 x 3.43 MP) need about 15 megabytes. That's a simpler way to look at it. (Your numbers are based on 2 bytes per sample, which is overkill, but same idea: all you need is the sample count.)
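
In other words, a quick back-of-the-envelope calculation in Python (ignoring headers, metadata and compression):

def raw_megabytes(samples_millions, bytes_per_sample=1.5):
    # 12-bit samples packed at 1.5 bytes each; use bytes_per_sample=2.0
    # for unpacked 16-bit storage, as in your figures.
    return samples_millions * bytes_per_sample

print(raw_megabytes(5.0))        # ~7.5 MB for a 5 MP Bayer sensor (1 sample/pixel)
print(raw_megabytes(3 * 3.43))   # ~15.4 MB for the SD9 (3 samples/pixel)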
The structure of an Adobe DNG file is closest to that of the Foveon
RAW structure as the file converter can separate the single layer
structure of a Bayer RAW file into 3 separate layers whilst
converting to DNG.
I've read some of the DNG spec, and I recall that it specifically provides for both fully-sampled (X3-like) and various forms of CFA data. Its structure is not closer to one or the other.
Thanks,
Gary.
 
Color Filter Array
Monochrome sensor
Each pixel has a specific bandpass filter on it

Yeah, I guess I can see how that's a contradiction of the assertion
that the raw file contains only monochrome data. At best (worst)
it's irrelevant to the question.
I think this pretty much explains Steve's misunderstanding.

How a "monochrome" sensor can still have colored data before the demosaicing... even more interesting in this regard is slide 14:

Color data is offset geometrically

--
http://www.pbase.com/dgross (work in progress)
http://www.pbase.com/sigmasd9/dominic_gross_sd10

 
