The Top Layer

Ted

It is what it is - the wavelength response curve given by Sigma. Your point is that there is a definition of Luminance that relates to the combined response of the receptors in the human eye, in fact several definitions, depending on what standard is used.
Yes.
Given the panchromatic nature of the Top Layer it does rather perform a similar function to the add up of the three rods in the eye.
It does not. BTW, the eye has rods and cones, not just rods.
I think he meant cones. The rods can be ignored for purposes of photography.
 
Ted

I have been watching these discussions with continued interest but with not much extra to offer.

However, it may be that we are getting lost in the difference between technical definitions and the ability of the sensor plus algorithms to produce attractive images.
From which, should I conclude that, if someone calls luminance 'Lightness', it doesn't matter and is not relevant to a discussion about the top layer?
I don't think it matters much. The point is that the top layer on its own cannot distinguish one colour from another. To do this, you have to compare one layer with another, or one layer with the sum of the two others. No one layer gives colours.

But the top layer differs from the other two in being more evenly sensitive through the spectrum, and much less noisy.

Once you get colour differences in various parts of the image, and they are in order along the spectrum, you can transform these differences into RGB for display on a monitor. It doesn't matter that the spectrum sensitivity curves don't match those of human cones (although those of a Foveon are more like those of cones than are those of a Bayer mosaic) because the transforms deal with that.
 
I wonder if there is less noise reduction going on by default with the Quattro cameras vs. the Merrill cameras, and that's why we see some noise in some of the photos. As they get better at designing the processes for producing images from the raw data, Sigma just might get the Quattro down to a level where it is not only producing an image with more detail and better color than the Merrill, but less noise too. That will be interesting to see. Already I think I've witnessed a major improvement in the images with regard to micro-contrast. I don't remember seeing noise in a lot of the early photos though . . . just some of them. I look forward to seeing the results from the Quattro DSLR.
 
Roland

I hope you don't mind me joining in here, but how the monochromatic image is painted in on the Quattro is what interests me.

The brain pays more attention to edges and patterns than it does to the detail of smooth transitions. Could it be that the Quattro algorithm looks at colour and luminosity gradients across pixels that neighbour edges and tries to reassign the spectrum of the 5Mpix data on either side of the 20Mpix resolved edges?

My thought is that might explain some of the differences between the M and Q, or maybe not...

Andrew
 
I have seen three theories about how the Quattro might work.
  1. Expand the lower layers to 20 Mpixel and use ordinary old-fashioned Foveon algorithms. Personally I did not think this would work - you would get color artefacts. I have tested, and I did get color artefacts.
  2. Bin the upper layer to 5 Mpixel, use an ordinary old-fashioned Foveon algorithm, then use the upper layer to add back the extra detail. This was my first thought, and I saw that Arvo thought so too.
  3. The third one - the one I described here. The reason I think this is possible is the Merrill sensor. It is very noisy in the two lower layers, yet the resulting color image that SPP makes is surprisingly clean. So I think the upper layer has to play a huge role here - somehow.
Regarding us paying attention to detail etc - I am not sure that this is such a good idea to use in a camera. I know it is the basis for JPEG - and also for Bayer. So - as an end result it is probably just fine.

But, as a starting point I assume a solid RGB image with three good sharp channels is probably best. It is much more robust against editing. Maybe you want to increase saturation - then ouch if color sharpness is low.
 
Roland

Apologies for my poor use of language.

My proposal would give good colour sharpness in the sense of having sharp transitions from one area of colour to another. It would give good colour accuracy through the larger effective pixel size used by the colour algorithm.

It would only fall down in three areas, maybe:

1) For small complex shapes, where the foreground happens to give a similar top layer response to the background, even though the colours are very different. The algorithm would struggle to find the point of colour transition.

2) For wispy objects with visible structure but no defined edges or gradient clues to the colour transitions.

3) For areas of colour with subtle variations that interact with the 5Mpix scale to give artefacts - small-scale steps in the top layer response at the 20Mpix scale - a sort of fine-grain effect that looks like noise but is actually an artefact of how the algorithm distributes the colour information.

Probably barking up the wrong tree...

Andrew
 
Yes, we see the word "Luminance" here a lot since the Quattro came out, but rarely do we see its definition. So it has become a word similar to "resolution", which means many things to many people.

So. In the world of imaging, luminance Y = 0.2126R + 0.7152G + 0.0722B (the Rec. 709 weights, often rounded to 0.2, 0.7 and 0.07), where we're talking linear RGB decoded from a camera. The coefficients are called weights or weightings. The weighting of roughly 0.7 for green tells us that green is by far the most important channel when determining the luminance of a pixel's-worth of a scene. And we note that blue (decoded blue, not the blue layer) carries hardly any weight at all, i.e. 0.07 is not a typo. And that is why JPEG compression carries the color info in the red- and blue-difference channels, I reckon.
Regarding luminosity: I have done lots of monochrome film-based photography, and back then I used filters. Those filters changed the spectral response, and so did the lighting/illuminant. Still, I would absolutely call all of those responses luminosity. I mean - monochrome - what is there more than luminosity?
A difficulty in using only one channel to determine Y (as Foveon implies) is that the top layer signal does not tell us what wavelength it detected. Go back to the diagram and it can be seen that there are many incident spectra that would give the same output. A narrow spectrum at, say, 450nm could easily give the same output as a wider spectrum at, say, 630nm (think "area under the curve").

So all this talk of glibly using the top layer as "the luminance channel" carries little weight, lol. Before someone rushes to disagree: provision of Sigma's exact algorithm would be necessary to convince anyone here. So far, no such algorithm has been provided, even after all this time.
Yes, we do not know the algorithm. We can guess its nature though.

For Merrill and Quattro it has to put very high weight on the top layer, for different reasons. Merrill has lousy lower layers and Quattro has lower layers of lower resolution.

So this is my (maybe faulty) guess.

They use the top layer in order to make the original monochrome image. This image has max quality and resolution. But, it has the wrong luminosity - it is too blue sensitive.

Then they use the noisy information from the lower layers. Exactly how they do it - I do not know. They cannot simply denoise and smooth them and do some mixing, because then you would get halos and also decoloring of small details.

I guess they use some AI-style thinking or an adaptive method. This method is used to correct the faulty luminance and also to color the image, without getting too many blotches.

Can be wrong - maybe they just do some clever filtering. Cannot understand how though.
It seems pretty straightforward to me. They make the image the same way they would with a Merrill, but with the exact same green and red values in four adjacent locations, based on the information from the green and red layers of the sensor. Each and every pixel will still be slightly different, because the differences from one blue "pixel" to the next have an effect on the "pixels" made from green and red. This would actually require less processing, because the computer or camera should only have to process 1/4 as much information from each of the green and red layers. In fact, we're talking about roughly 1/3 as much information compared to each of those two layers of a Merrill sensor, but with just 1/3 more information in the top layer. That means that overall, there would be less processing required . . . or maybe about the same amount, compared to what is necessary for a Merrill camera.

As far as knowing the algorithm - why do we need to know that? Can't we just use our own algorithm? Or couldn't we use a modification of the old one that just doubles the "pixels" of the green and red layers in the x and y direction?

Maybe I'm just too simple-minded to understand. I've been told that I "just don't understand" before. (Somehow it turns out that most of the time I actually DO understand, and people are either over-complicating things in their own minds, or they don't realize the complexities involved with something I understand. I really don't believe all this stuff is rocket science though. Of course, I could be wrong. It's happened before.)
 
Roland, maybe you can help me understand the complexities that make it a really difficult thing to make an image from a raw file. Here is how I imagine the data in a raw file to look, after it has been deciphered:

(This is of course a small sample of a much larger quantity of data, and Layer 1 would have twice as many rows and columns as layers 2 and 3 . . . in a Quattro raw file. And of course, there would be more information included - meta-data and the embedded jpeg, and maybe some other stuff.)

Layer 1

10110010010110 10110110010110 10110001010110 10110110010010 10110011010100

10110010010100 10110010010110 10110010010110 10110010010110 10110010010101

10110010010110 10010110010110 10101010010110 00110010010110 11010110010110

10110010010110 10111010010110 10100110110100 10110010010110 10110010010110

10110010010110 10110010010110 10110110010101 00110010010110 11010010010110

Layer 2

10110010010110 10110110010110 10110001010110 10110110010010 10110011010100

10110010010100 10110010010110 10110010010110 10110010010110 10110010010101

10110010010110 10010110010110 10101010010110 00110010010110 11010110010110

10110010010110 10111010010110 10100110110100 10110010010110 10110010010110

10110010010110 10110010010110 10110110010101 00110010010110 11010010010110

Layer 3

10110010010110 10110110010110 10110001010110 10110110010010 10110011010100

10110010010100 10110010010110 10110010010110 10110010010110 10110010010101

10110010010110 10010110010110 10101010010110 00110010010110 11010110010110

10110010010110 10111010010110 10100110110100 10110010010110 10110010010110

10110010010110 10110010010110 10110110010101 00110010010110 11010010010110

Is this about right?
 
And my experience is that blotching doesn't show up when there is no light.
That is an interesting observation. Yes, blotches are mostly in the mid or mid-low tones.

Personally I think the blotches are created by the difference between the middle and the lower layer. And I guess that the dark tones are mostly taken from the top layer. That makes sense if the blotches are not found in the dark tones.
We're only saved by SPP - the best de-blotcher on the planet :-) .
That is probably the scary truth.
 
iso 1600
 
There's no blotching on the various colors shown, however dark they are.
That is also an interesting observation.

So - blotches are normally found in certain areas.
Mid tones - and normally grayish.

Hmmmm ... I have seen it on a reddish brown surface also.

So ... I am not sure.

But - it seems like it is very hue/darkness dependent. Looking at an image, the blotches are not found everywhere, but rather in certain areas.
 
There's no blotching on the various colors shown, however dark they are.
That is also an interesting observation.

So - blotches are normally found in certain areas.
Mid tones - and normally grayish.
In other words, where the differences between the 3 layers are minimal, and minor differences from e.g. microlenses/incident angles, play a larger part?

Hmmmm ... I have seen it on a reddish brown surface also.

So ... I am not sure.

But - it seems like it is very hue/darkness dependent. Looking at an image, the blotches are not found everywhere, but rather in certain areas.
 
There's no blotching on the various colors shown, however dark they are.
There are blotches all over the CC; the most evident and strongest is on the rightmost green patch of the left-hand panel. Other blotches can be seen on the rims of almost every patch on both panels. But that is obviously OK if you're after B&W prints. Having said that, my new-to-me, very much used DP2M will be here on Sunday, and I can do my own blotch tests, or just take some photos around.
 
There's no blotching on the various colors shown, however dark they are.
. . . very much used DP2M will be here on Sunday, and I can do my own blotch tests, or just take some photos around.
Once, I lit a letter-size white card (in portrait orientation) from about a foot above with a small light bulb, so as to use the 'inverse square law' to get a gradient of illuminance down the paper. With the right exposure, the onset and fade-out of the infamous blotching are quite evident.
 
Roland

I hope you don't mind me joining in here, but how the monochromatic image is painted in on the Quattro is what interests me.

The brain pays more attention to edges and patterns than it does to the detail of smooth transitions. Could it be that the Quattro algorithm looks at colour and luminosity gradients across pixels that neighbour edges and tries to reassign the spectrum of the 5Mpix data on either side of the 20Mpix resolved edges?
This is a different way of describing a theory I've been trying to get across to people for some time now, perhaps your way they can understand better why it might work...

I don't know that it really works that way, just that it's a technique that could easily account for most fine detail being resolved in the Quattro, most especially things like fine detail against a blue sky.

The way I had described the "reassignment" was that every adjoining pixel had the potential to give you information for reducing the averaging caused by the lower 5Mpix layer combining four pixels' worth of data. So for instance, if one neighboring 2x2 quad was all blue, and a single top-sensor value on the next quad over had the same value as the top-layer values on the blue quad, you could subtract that blue quad's per-pixel lower-layer values from the quad with the single matching pixel, and only be averaging three potential pixels' worth of data, not four. That would increase color accuracy no matter how you then calculated the remaining three pixels.
 
I thought about this for a while and yes, of course, it is ISO 1600. And yet still, there is the blotching, right in a textured area in the lower to mid-tone. And we could very likely expect the same blotching in an ISO 400 image of the scene.

I do not see obvious blotching in the image on the scale presented in the solid colors. Could blotching somehow, in some circumstance, be induced, perhaps by insufficient lighting on the solid colors in the image? Possibly, but I do not see it as an obvious issue in the image as presented.

Blotching is not some random occurrence. We seem to be looking at noise and frequency - the intersection and interaction of the noise from the various sensor layers and (if what I am seeing is part of it) of image detail resolution (a luminance function).

Richard



 
We seem to have the same thought. Something must explain the usually excellent edge sharpness of colour transitions on the Q.

It might be slightly more sophisticated - sort of a voting system near the top layer identified edge.

If you think about when this might not work or even produce artefacts, does it start to suggest what to look for between M and Q?
I like the idea of thinking of it as a "voting" process, since I imagined it looking at all neighboring pixels to determine how it thought it should break apart the lower layers...

This has been described as Bayer-like in concept, but it's not, because it would only use the data present in the quad - it would just use other quads to determine the distribution of the lower-layer data among the four output pixels. A Bayer system would use the actual data from spatially removed sensors.

About how to look for that processing difference between an M and a Q - I'm not really sure how you could tell if this process were in use. The only reason I think it might be is that you can see single-pixel-level detail in a Q image, and overall a Q image pretty much always looks a little sharper than a Merrill image. So you know they have to have some way to distinguish more than 15MP worth of data, which means some system that makes use of what the top layer records must be in use - and the system we are thinking of is the only way I can imagine the processing having that effect.
Wish I had more time to learn how to use Roland's tools.
I wish I had more time period...

--
---> Kendall
http://www.flickr.com/photos/kigiphoto/
http://www.pbase.com/kgelner
http://www.pbase.com/sigmadslr/user_home
 
Wish I had more time to learn how to use Roland's tools.
I wish I had more time period...
Time? Yeah ... that is one of the scarcer resources, it seems.

Nice that some do try to think :)

Here is another try at thinking.

Let's say that you first duplicate the top layer to all three layers. Then you get a monochrome image of high quality, which in the case of the Merrill will be a saturated red after conversion.

Then you use some secret and clever algorithm to modify the middle and low layers, without adding noise, so that they look similar to the actual middle and low layers. And - here comes the big problem - without adding halos and without desaturating small details.

What do you say?
 
