Canon's small sensors cannot capture reds and purples accurately.

For instance, a flower might have raw values of R 1024, G 64, B 64.
When underexposing two stops to bring the red down to about 256, the
G and B channel values drop to roughly 16 each.
JPEGs normally have only 256 levels per channel to play with, so even
if you can do something by underexposing, there is going to be a hue
shift, because the proportions of R, G and B have been altered to fit
into the narrower camera/sRGB space.
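
As a rough illustration of that ratio problem, here is a minimal sketch in plain Python; the numbers are the hypothetical linear raw values quoted above, not data from any real camera, and the scaling is deliberately simplistic:

def to_8bit(raw, gain=1.0):
    # Apply a linear exposure gain, then clip each channel to 8-bit (0-255).
    return tuple(min(255, round(c * gain)) for c in raw)

flower = (1024, 64, 64)            # red far above what 8 bits can hold

print(to_8bit(flower))             # (255, 64, 64): red clips, R:G drops from 16:1 to about 4:1
print(to_8bit(flower, gain=0.25))  # (255, 16, 16): two stops down, the ratio survives but the pixel is dark

Either way something gives: clip and the hue desaturates, or underexpose and the green/blue channels end up with so few levels that brightening them back posterizes.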
Could someone explain what a value of 1024 for the red channel means?

I always thought that 1 and 256 in a particular channel indicated the absolute minimum and maximum intensity of the relevant colour.

However, from the above it would appear that I am mistaken and that 1 to 256 represents only part of the possible intensity range of that colour - for some reason, only a part of the absolute range of possible colour intensities can be measured/recorded.

Many thanks
 
Saturation occurs at the pixel level and the gamut is determined by
the absorption/transmission spectra of the filters. Changing
ratios of red/green/blue doesn't change these things.
So the issue here is that the camera simply isn't "seeing" the flower the same way its user does: the camera and our eyes have different response curves to the frequencies of light.
I agree color transitions are a problem, but I don't think that's
what renders purple flowers blue. A purple flower does not contain
red and blue stripes.
Exactly, so the conclusion is that the Bayer pattern is not the
source of the problem.
I have to thank gkl for pointing me to: http://www.cambridgeincolour.com/tutorials/

I hadn't really explored Sean McHugh's website. I did see it once and looked through his galleries, but some of the tutorials challenged my thinking and explained concepts that simply haven't been presented to me in a meaningful way before...

So let me use one of his tutorials (this one I already understood from taking 2 courses in college in the topics of human perception and computational vision):
http://www.cambridgeincolour.com/tutorials/color-perception.htm

Think about it this way: There is a whole frequency spectrum describing how light reflects off a surface such as a flower at every given point. Your eye has 3 different types of cones which have 3 different response curves (Short, Medium, and Long). Now, imagine for a moment if your eye had 4 different kinds of cone. Imagine that you could differentiate more of the color spectrum than anyone else on earth, because you have 4 different cones (one kind of shrimp actually has something like 16 different kinds of cones). In this case, you would be able to tell the difference between two reds that I can't distinguish because the spectrum is completely different.

To you with 4 cones, it would be like I'm color-blind (and everyone else would be, too). Computer screens have different Red, Green, and Blue pixels that humans in general see as Red, Green, and Blue. Cameras have 3 color filters for Red, Green, and Blue. People who are actually color blind are missing one or two wavelengths/kind of cone. But if you had the ability to see in more dimensions of color than the rest of humans, then you would look at the world and see color, and look at the computer screen and photographic images and see what would amount to a poor almost black-and-white copy.

Sony and Kodak (and Canon) have produced CCDs with four filter colors instead of the usual three-color Bayer pattern to better capture color. People still use filters like polarizers and color filters (and color polarizers) to adjust the colors going into their camera sensors. People also adjust white balance and the curves on individual color channels to try to make the picture that comes out of the camera look more like what they saw when taking it (or for special effects).

So my answer to the original poster is:

It is not a problem with the camera. You just need to get more creative to capture what you see.

Maybe I need to write some software to render an image with a full color spectrum at each pixel. Then the user can supply 3 response curves representing red, green, and blue. The computer would integrate the response curves against the per-pixel spectra to produce an output image of what someone with those 3 response curves would "see" when looking at the scene. Maybe someone has already done this. A picture is worth a thousand words, and it would take significant time to do what I'm suggesting, so for the time being, I hope my description is sufficient to describe the process.
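
For what it's worth, the core of that computation would only be a few lines. Here is a rough sketch in Python/NumPy; the data layout (a full spectrum stored per pixel) and the random test data are purely illustrative assumptions:

import numpy as np

def render_for_observer(spectral_image, response_curves):
    # spectral_image:  (H, W, N) array - spectral power at N sampled wavelengths per pixel
    # response_curves: (3, N) array - sensitivity of the three receptors at those wavelengths
    # Returns an (H, W, 3) image of what an observer with those curves would "see".
    out = np.tensordot(spectral_image, response_curves, axes=([2], [1]))  # discrete integral over wavelength
    return out / out.max()                                                # normalize to 0..1 for display

# Toy usage: a 2x2 "image" sampled at 31 wavelengths (say 400-700 nm in 10 nm steps).
spectra = np.random.rand(2, 2, 31)
curves = np.random.rand(3, 31)   # in practice these would be measured cone or filter responses
print(render_for_observer(spectra, curves).shape)   # (2, 2, 3)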

Note: the red, green, and blue filters/pigments/inks used on your LCD, CRT, Plasma monitor, printers/photo printers, and projectors all have different transmission and reflectance curves. Digital images are simply complicated - it's a mix of art and science.

-Mike
http://demosaic.blogspot.com
 
Could someone explain what a value of 1024 for the red channel means?

I always thought that 1 and 256 in a particular channel indicated
the absolute minimum and maximum intensity of the relevant colour.

However, from the above it would appear that I am mistaken and that
1 to 256 represents only part of the possible intensity range of that
colour - for some reason, only a part of the absolute range of
possible colour intensities can be measured/recorded.

Many thanks
Cameras with RAW capability can record such high channel values.
If you look at the sun, for example, it is certainly brighter than 4096, 4096, 4096.

My Pro1 is able to capture 4096 levels per channel in theory; in practice it's not quite that many, but certainly more than 256 levels.

You can 'RE-PRESENT' a good slice of visual reality with 256 levels per channel (8-bit graphics).
The problem is that:

1. The display medium (paper or a computer monitor) can't even display the whole range of colors possible with 8-bit graphics.

2. There are some colour hues that need more than a 1:256 ratio to be represented accurately.

When this happens, it means that some colors have fallen out of gamut, which is why it is almost impossible to reproduce them exactly.

The trick is in scaling down and distributing a larger tonal range into the narrower color space of your specific display medium.
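
As a very rough illustration of one such scale-down (assuming 12-bit linear raw values and a simple 1/2.2 gamma; a real raw converter also applies white balance, colour matrices and tone curves):

def raw12_to_8bit(value, gamma=1.0 / 2.2):
    # Map a linear 12-bit value (0-4095) to a gamma-encoded 8-bit value (0-255).
    linear = min(max(value, 0), 4095) / 4095.0
    return round(255 * linear ** gamma)

for v in (0, 64, 256, 1024, 4095):
    print(v, '->', raw12_to_8bit(v))

The gamma step spends more of the 256 output levels on the shadows and midtones, which is one way a bigger tonal range gets squeezed into the display space without posterizing the dark end.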

Color accuracy and management is one of the most complex subjects you will ever encounter in digital photography.

--
 
Even though humans are (usually) trichromats, there are reasons for using more than three colors in your capture and reproduction system. The main reasons are a) the mismatch between the sensor response and the eye's response, and b) the non-existence of three visible colors whose triangle encloses the entire visible gamut.

To understand b, look at a typical plot of the visible color gamut and notice that, rather than being a triangle, it bulges out at the sides. Any color that can be reproduced by a 3-color system must lie within the triangle formed by the three primaries, but no three points chosen inside that bulging shape can form a triangle that encloses its entire area.
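
To make b concrete, here is a tiny check in Python. The sRGB primary chromaticities are standard values; the xy coordinate for monochromatic 520 nm light is approximate:

def inside_triangle(p, a, b, c):
    # Same-side sign test: True if point p lies inside (or on) triangle abc.
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

red, green, blue = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)   # sRGB primaries in CIE xy
spectral_green = (0.07, 0.83)                                  # roughly monochromatic 520 nm light

print(inside_triangle(spectral_green, red, green, blue))       # False: outside the sRGB triangle

The same test fails for much of the spectral locus, which is exactly the bulge that no three real primaries can enclose.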

--
Ron Parr
Digital Photography FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
From the little reading I have done, it appears that purple and violet are not the same colour at all. We all learned about the visible colour spectrum going from red, orange, yellow, green, blue, indigo, violet. If cameras use red, green and blue sensors, how are they going to capture violet? We can ignore orange, yellow and indigo because they are supposed to be composite and not pure colours; even though the eye senses them as pure, I think we can accept that they are composite.

The brain indeed 'sees' violet as a pure colour. If no sensor exists to record that colour, how can it be recorded? That means indigo will not be possible either. Someone suggested we use sensors with four colours, not the standard RGB but VEYR. His theory was that having violet and emerald plus yellow and red would give those sensors the capacity to record and compose the whole colour spectrum. I haven't seen any discussion of this theory, but he could be right. That is something for sensor theorists to debate. In the meantime, tweaking and all manner of things have to be done to try to capture violets in flowers. If computer screens have the same RGB filters, then how is violet going to be depicted there? It seems to be a big problem across the board. Somehow we have to trick the eye into believing it is seeing violet.

Even though we have red sensors, it seems that the saturation and intensity of flower colours can cause sensors to record those colours incorrectly. I must say that reds cause huge problems for me even when focusing: the sensors do not detect the petals, so the camera cannot focus clearly on them. I tried unsuccessfully to get images of bright orange-red flowers that look like sunflowers but are not. What an intense colour that was to the eye. When I looked at the images it was clear those colours were not right; there was too much missing. Now I will have to return and try tweaking the white balance and exposure settings to see if anything changes. I will have to experiment with other settings to see if they make a difference. It should be interesting. One lady at the garden store suggested I buy the plants to take home! I don't have anywhere to put them.

I really enjoyed reading the posts here. Lots of interesting concepts and points.
 
If sensors use red, green and blue sensors
how are they going to capture violet?
You have to realize that the colored dyes in front of the sensor elements have spectral responses much wider than their names would indicate. The blue sensors can sense violet and beyond, and the red can sense near infrared.
The brain indeed 'sees' violet as a pure colour.
No, our retinas work much as I described above. Our brains interpret those signals, just as RAW-to-JPEG converters interpret our RAW files into colors for us.

Our eyes are horrible cameras in many ways (CA, corner sharpness), but they have astonishing dynamic range and they are connected to a very powerful image processor. Current sensors and lenses easily demolish our eyes on CA and corner sharpness, but they fall badly short in dynamic range, and the image processors (like DIGIC II) are way inferior to your visual cortex.

--
Lee Jay
(see profile for equipment)
 
Thanks for those observations, Lee Jay. You sure have the time to post on these forums! I have been interested in how the brain processes information. I did a course in neurophilosophy which is about theories on how the brain works. As far as vision goes it is clear that the brain constructs reality out of sensation and sometimes there are things it 'sees' that don't really exist. A simple demonstration of this is to look at a CD with a light on it. If you look at the rainbow of colours you see one thing. Now close each eye and you will see different colours of the spectrum being dominant. Neither eye is seeing what the other eye sees but the brain somehow combines those sensations and constructs an image composed of both of those inputs. It really is amazing. What the brain now 'sees' is not what either eye is seeing! You are literally seeing a construct.

I am glad to hear that camera sensors are sensitive to a broader range of the spectrum than the designated colours indicate. If we look at a colour chart on our monitors, most of the colours seem to be depicted. There is no doubt our eyes see colours slightly differently from what sensors record, and the various pieces of software try to bridge this gap. When taking images, it really is a shock to see that the colours of flowers are not recorded properly. That we then have to do things to overcome this deficiency suggests that the camera companies have a way to go before delivering products that will do what some of us photo nuts require.
 
I would estimate that more than half of my closeups of flowers are
not sharp. Especially if the light is low and I am using 5.6 in an
attempt to get more depth of field.
Well, if it's camera shake, a tripod is your answer, even if it's inconvenient. But if it's softness you're talking about and you're using f/5.6, it's probably down to diffraction due to the tiny, tiny photosites of your sensor. An SLR would perform much better if that's what's causing your problem.
 
The Pro1 takes accurate photos of most colours and I am quite happy with what it does. Flowers are another matter and I suspect flowers give off lots of ultraviolet and that 'contaminates' the sensors in some way so that they do not capture what the eye sees.

Here is a sample of colours. Sometimes you have to be lucky and find colours that will frame like this.



http://ovincez.smugmug.com
 
The S3 warns when there is too much shaking. I think many of my images last time were blurry because the light was not that bright and I used f/5.6 instead of auto. I was seeing sharp images on the LCD but not when I got back to the computer.

Most of my photos would be difficult to take with a tripod because shop owners are doing me a favour letting me take photos. I don't want to impede others who are customers. Also, many of the shots I take are difficult to set up with a tripod. I would need one of those unusual but expensive tripods that allow you to change angles in 3 degrees of freedom all at once.

Some of the blurring is caused by the inability of the camera to focus on some colours such as red.
 
It's a technology limitation that's not unique to Canon. Everyone using a Bayer-pattern sensor has this limitation.
 
From a brief reading of how the eye-brain constructs colours it appears that cameras literally see reality differently from what the eye does. So my charge that they do not reproduce colours accurately is relative. However, the problem about getting cameras to depict flowers as the eyes see them remains and is surely a challenge. I wonder if in 5 years time this will still be a problem? I think someone will solve this problem and that will be a clever thing to achieve.
 
I have looked at a colour chart online for purple/violet:

http://www.december.com/html/spec/color4.html

The flower that would not photograph accurately beside the dark red rose was indigo. Well, that is the closest match to the colour patches on the site.

If you go to their Swatch control you can play around and create your own colour. I tried to depict the indigo flower that I saw but without much success. The closest I could come there was to have the colour mostly blue with a bit of red but then darken the colour. Close but not right. The intensity of the indigo from the flower exceeds anything I can see on those charts.

http://www.december.com/html/spec/swatchcontrol.html

At least I am improving my colour vocabulary.
 
I found this photo online. This is the indigo-purple I saw or close to it. The one I saw was darker still.



photo by kjm45
 
Couple of questions:

1. You didn't mention if you have color calibrated your monitor. I know mine is shifted to blue, so I got a Huey. As an amateur, I don't need the nth degree of exactness, I just want my purples to look appropriately bluish or pinkish.

2. Lighting. The first two are very bright and flattened, and that will affect the color. Are you using a flash? If so, can you reduce the output or use some type of diffuser? If sunlight, try early/late in the day when the light is softer.

3. Tripods. Just the act of pressing the button can move your camera and create blur. Try one of those minis or a beanbag and set your shutter release to a 2-second delay. Press it, let go, wait for the shutter. It always worked for my A95.

Hope that helps.
 
I think that's oversaturation in the red channel (on the Canon, that is).

A pretty similar rose, with neutral settings, on an A620:

 
