What the imager has

Started 5 months ago | Discussions
Laurence Matson
Forum Pro | Posts: 11,081
What the imager has
5 months ago

What the imager has is 19 million spatial locations. How the pixels are counted is once again a big deal for those discussion types. I am guessing that the G and R layers each have around 5 million pixels and the top B, 19 million. Or thereabouts.

Of course, some of our favorite negativists will argue that this is not really an X3 imager. That also, is nonsense. There are 3 layers (X3) each of which collects stuff to yield a full-color reading at each spatial location. The oh-so obvious - at least to Ricardo - interpolation that has to be going on is a moot point at best. Moot on, if you want.

Finally, on the critical point of pixel-level sharpness - acuity - it is there, as always. The question really is how one could conceive it not being there. Acuity or mush, depending on the technology, is defined by the top layer. In the case of the single-layer Bayer process in all of its iterations, the mush comes from the fact that none of the pixels is acute from its neighbors. The Foveon imager pixels are. The 19 million plus acute blue pixels in the top layer define the spatial locations.

To my mind, this imager is true to the process. Just a different solution.

It may be a bit less straightforward and thus harder for some to get their heads around. Just wait for the images and duct-tape your jaws for support ahead of time.
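
For the count-the-pixels crowd, here is the arithmetic as a short Python sketch. The grid dimensions are my own guess, chosen only to land on the announced ~19.6 MP top layer with the two lower layers binned 2x2:

# Back-of-the-envelope photosite arithmetic for a 1:1:4 layered design.
# The grid dimensions are an assumption for illustration, not official specs.
top_w, top_h = 5424, 3616                 # assumed top ("blue") layer grid

top = top_w * top_h                       # one photosite per spatial location
mid = (top_w // 2) * (top_h // 2)         # "green" layer: one per 2x2 block
bot = (top_w // 2) * (top_h // 2)         # "red" layer: one per 2x2 block

print(round(top / 1e6, 1))                # ~19.6 million spatial locations
print(round(mid / 1e6, 1), round(bot / 1e6, 1))   # ~4.9 million each
print(round((top + mid + bot) / 1e6, 1))  # ~29.4 million photosites in total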


Laurence
laurence at appledore-farm dot com
"I thought: I read something in a book, I dream, I imagine, and it comes true. And it is exactly like this in life.
"You can dream, and it comes true, as long as you can get out of the certitudes. As long as you can get a pioneering spirit, as long as you can explore, as long as you can think off the grid. So much time we spend in our education, in our lives is spent learning certitudes, learning habits, trying to fight against the unknown, to avoid the doubts and question marks. As soon as you start to love the unknown, to love the doubts, to love the question marks, life becomes an absolutely fabulous adventure."
Bertrand Piccard, a Swiss person
http://www.pbase.com/lmatson
http://www.pbase.com/sigmadslr
http://www.howardmyerslaw.com

unknown member
And you have already seen DP Quattro images?
In reply to Laurence Matson, 5 months ago

Laurence Matson wrote:

What the imager has is 19 million spatial locations. How the pixels are counted is once again a big deal for those discussion types. I am guessing that the G and R layers each have around 5 million pixels and the top B, 19 million. Or thereabouts.

Of course, some of our favorite negativists will argue that this is not really an X3 imager. That also, is nonsense. There are 3 layers (X3) each of which collects stuff to yield a full-color reading at each spatial location. The oh-so obvious - at least to Ricardo - interpolation that has to be going on is a moot point at best. Moot on, if you want.

Finally, on the critical point of pixel-level sharpness - acuity - it is there, as always. The question really is how one could conceive it not being there. Acuity or mush, depending on the technology, is defined by the top layer. In the case of the single-layer Bayer process in all of its iterations, the mush comes from the fact that none of the pixels is acute from its neighbors. The Foveon imager pixels are. The 19 million plus acute blue pixels in the top layer define the spatial locations.

To my mind, this imager is true to the process. Just a different solution.

It may be a bit less straightforward and thus harder for some to get their heads around. Just wait for the images and duct-tape your jaws for support ahead of time.

Otherwise your 'jaw-dropping' promise would be just another speculation, like so much else on this forum about the new DP Quattro.

Ceistinne
Regular Member | Posts: 415
Re: What the imager has
In reply to Laurence Matson, 5 months ago

Laurence Matson wrote:

What the imager has is 19 million spatial locations. How the pixels are counted is once again a big deal for those discussion types. I am guessing that the G and R layers each have around 5 million pixels and the top B, 19 million. Or thereabouts.

Of course, some of our favorite negativists will argue that this is not really an X3 imager. That also, is nonsense. There are 3 layers (X3) each of which collects stuff to yield a full-color reading at each spatial location. The oh-so obvious - at least to Ricardo - interpolation that has to be going on is a moot point at best. Moot on, if you want.

Finally, on the critical point of pixel-level sharpness - acuity - it is there, as always. The question really is how one could conceive it not being there. Acuity or mush, depending on the technology, is defined by the top layer. In the case of the single-layer Bayer process in all of its iterations, the mush comes from the fact that none of the pixels is acute from its neighbors. The Foveon imager pixels are. The 19 million plus acute blue pixels in the top layer define the spatial locations.

To my mind, this imager is true to the process. Just a different solution.

It may be a bit less straightforward and thus harder for some to get their heads around. Just wait for the images and duct-tape your jaws for support ahead of time.


Laurence,

Best explanation yet.

S

ilsiu
Regular Member | Posts: 197
Re: What the imager has
In reply to Laurence Matson, 5 months ago

The new sensor has 19 + 5 + 5 MP; I think it would be a good guess to say that it would resolve less than a hypothetical 19 + 19 + 19 MP sensor, but how much less?  Will it be equivalent, better, or worse than the current 15 + 15 + 15 sensor?

Erik Magnuson
Forum Pro | Posts: 11,671
low contrast
In reply to ilsiu, 5 months ago

ilsiu wrote:

The new sensor has 19 + 5 + 5 MP; I think it would be a good guess to say that it would resolve less than a hypothetical 19 + 19 + 19 MP sensor, but how much less? Will it be equivalent, better, or worse than the current 15 + 15 + 15 sensor?

I suspect you would mainly see a difference with low contrast in certain colors.  And as Ilia Borg has pointed out, this was already an issue  due to focus, diffraction,  diffusion, and noise reduction.  For most real world images, it will likely be better but you could find some corner cases where it will be worse.  Interestingly, some of the "color resolution" charts may be in those corners.


Erik

unknown member
The usual
In reply to Erik Magnuson, 5 months ago

Erik Magnuson wrote:

ilsiu wrote:

The new sensor has 19 + 5 + 5 MP; I think it would be a good guess to say that it would resolve less than a hypothetical 19 + 19 + 19 MP sensor, but how much less? Will it be equivalent, better, or worse than the current 15 + 15 + 15 sensor?

I suspect you would mainly see a difference with low contrast in certain colors. And as Ilia Borg has pointed out, this was already an issue due to focus, diffraction, diffusion, and noise reduction. For most real world images, it will likely be better but you could find some corner cases where it will be worse. Interestingly, some of the "color resolution" charts may be in those corners.


suspect, would, could and may.

Why not wait till RAWs are available - better yet, till the DP Quattros are in the hands of people shooting actual pictures with them and not just bricks and test charts?

Erik Magnuson
Forum Pro | Posts: 11,671
Re: The usual
In reply to mrkr, 5 months ago

mrkr wrote:

Why not wait till RAWs are available

Because Sigma did not announce the camera with raws available.


Erik

unknown member
Flawed logic
In reply to Erik Magnuson, 5 months ago

Erik Magnuson wrote:

mrkr wrote:

Why not wait till RAWs are available

Because Sigma did not announce the camera with raws available.


Because you want to speculate.

Fine by me, by the way: my speculation, for all it's worth, is that the DP Quattro will surpass the Merrills in IQ (and also the Leica Monochrome, easily, when it comes to BW).

Just silly, these fora, most of the time, aren't they?

dr.noise
Veteran Member | Posts: 3,447
Re: What the imager has
In reply to Laurence Matson, 5 months ago

Laurence Matson wrote:

There are 3 layers (X3) each of which collects stuff to yield a full-color reading at each spatial location.

That is true, but in a rather useless way.

I have read quite a bit of "this sensor still gathers full color info".

But I probably miss one thing. Let's clarify that.

Yes, the sensor gathers full color info.

But does it RESTORE full color info? If it doesn't, then there's no advantage in gathering it.

For example, you have four jewels, each of them is either black or white. They are ordered in a random 2x2 block. You throw them one by one into a box. So that means you gathered the full color info - because none of the jewels were lost.

Now if someone tries to extract those jewels from the box and put them back into 2x2 block, he will never know in what order they were thrown or what place they should take. So the info cannot be restored.

Now imagine shooting a completely red target which doesn't have blue in it. Will the blue layer record zeroes, thus giving no spatial info? How can the red layer be restored from 5 MP to 20 MP?

I don't understand.
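
The only way I can see it working is if the quarter-resolution layers are not upsampled on their own but redistributed using the full-resolution top layer as a detail guide. A toy Python sketch of that idea (my own guess, not Sigma's actual pipeline):

import numpy as np

# Spread a quarter-resolution layer back to full resolution, using the
# full-resolution top layer to supply the within-block detail.
# Illustrative only; not Sigma's actual processing.
rng = np.random.default_rng(0)
top = rng.random((4, 4)) + 0.1            # full-res top-layer signal (kept non-zero)
red_low = rng.random((2, 2))              # quarter-res "red" layer

red_up = np.kron(red_low, np.ones((2, 2)))   # each 2x2 block shares one red value

# Modulate the shared value by each pixel's brightness relative to its
# 2x2 block average, so the fine detail comes from the top layer.
block_mean = np.kron(top.reshape(2, 2, 2, 2).mean(axis=(1, 3)), np.ones((2, 2)))
red_guided = red_up * top / block_mean

print(red_up)       # blocky: no detail inside any 2x2 block
print(red_guided)   # same block averages, per-pixel variation from the top layer

For a pure red target this only works because the top layer still responds to red light; its spectral band is broad, not purely blue.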

ilsiu
Regular Member | Posts: 197
Re: Flawed logic
In reply to mrkr, 5 months ago

mrkr wrote:

Erik Magnuson wrote:

mrkr wrote:

Why not wait till RAWs are available

Because Sigma did not announce the camera with raws available.


Because you want to speculate.

Fine by me, by the way: my speculation, for all it's worth, is that the DP Quattro will surpass the Merrills in IQ (and also the Leica Monochrome, easily, when it comes to BW).

Just silly, these fora, most of the time, aren't they?

I agree that speculation is silly, but is there anyone whose life is totally devoid of silliness? I don't rebuke anyone for indulging in some harmless* silliness.

*I'm totally against harmful silliness.

xthfloor
Regular Member | Posts: 188
Re: What the imager has
In reply to dr.noise, 5 months ago

dr.noise wrote:

Laurence Matson wrote:

There are 3 layers (X3) each of which collects stuff to yield a full-color reading at each spatial location.

That is true, but in a rather useless way.

I have read quite a bit of "this sensor still gathers full color info".

But I probably miss one thing. Let's clarify that.

Yes, the sensor gathers full color info.

But does it RESTORE full color info? If it doesn't, then there's no advantage in gathering it.

For example, you have four jewels, each of them is either black or white. They are ordered in a random 2x2 block. You throw them one by one into a box. So that means you gathered the full color info - because none of the jewels were lost.

Now if someone tries to extract those jewels from the box and put them back into 2x2 block, he will never know in what order they were thrown or what place they should take. So the info cannot be restored.

Now imagine shooting a completely red target which doesn't have blue in it. Will the blue layer record zeroes, thus giving no spatial info? How can the red layer be restored from 5 MP to 20 MP?

I don't understand.

"Blue" layer records luminance (blue, green and red), so there should be enough data to decide, which pixel is darker and which is lighter.

JohnLindroth
Senior Member | Posts: 2,285
Re: What the imager has
In reply to dr.noise, 5 months ago

dr.noise wrote:

Laurence Matson wrote:

There are 3 layers (X3) each of which collects stuff to yield a full-color reading at each spatial location.

That is true, but in a rather useless way.

I have read quite a bit of "this sensor still gathers full color info".

But I probably miss one thing. Let's clarify that.

Yes, the sensor gathers full color info.

But does it RESTORE full color info? If it doesn't, then there's no advantage in gathering it.

For example, you have four jewels, each of them is either black or white. They are ordered in a random 2x2 block. You throw them one by one into a box. So that means you gathered the full color info - because none of the jewels were lost.

Now if someone tries to extract those jewels from the box and put them back into 2x2 block, he will never know in what order they were thrown or what place they should take. So the info cannot be restored.

Now imagine shooting a completely red target which doesn't have blue in it. Will the blue layer record zeroes, thus giving no spatial info? How can the red layer be restored from 5 MP to 20 MP?

I don't understand.

Hey Doc. I agree with some of what you are saying, but none of us, or perhaps only a few who are sworn to secrecy, really know what the TRUE processors do with the three layers that we conveniently label RGB. If you look at the specifications on what wavelengths are recorded in each layer, it becomes obvious (at least to me) that there is something more going on in the sensor -> pixel conversion. I'm really looking forward to the images. It may not be the same, but maybe it will be better!

Page 2 of this document has an example of wavelength sensitivity:

http://www.eso.org/sci/meetings/2009/dfa2009/Writeups/WR-Lesage.pdf
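
Those curves are a good hint at why the per-layer readings are not literally R, G and B: each layer integrates a broad band, so the sensor -> pixel step is, at minimum, a colour-matrix conversion. A rough Python sketch of that idea, with an arbitrary made-up matrix standing in for the real spectral responses:

import numpy as np

# Arbitrary made-up 3x3 mix standing in for the real layer responses.
mix = np.array([[0.20, 0.30, 0.50],   # top layer response to scene R, G, B
                [0.30, 0.50, 0.20],   # middle layer
                [0.60, 0.30, 0.10]])  # bottom layer

scene_rgb = np.array([0.8, 0.4, 0.1])
layer_readings = mix @ scene_rgb              # what the three layers would record
recovered = np.linalg.solve(mix, layer_readings)
print(recovered)                              # ~[0.8, 0.4, 0.1]

In practice the conversion also has to cope with noise and the unequal layer resolutions, which is presumably where the interesting part of the Quattro processing lives.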


http://www.johnlindroth.com/
john@johnlindroth.com
My future starts when I wake up every morning ...
Every day I find something creative to do with my life.
--Miles Davis

Erik Magnuson
Forum Pro | Posts: 11,671
Re: Flawed logic
In reply to mrkr, 5 months ago

mrkr wrote:

Erik Magnuson wrote:

mrkr wrote:

Why not wait till RAWs are available

Because Sigma did not announce the camera with raws available.


Because you want to speculate.

Naturally - that's why I posted in a speculative thread. You don't have to participate if you abhor speculation.

Fine by me, by the way: my speculation, for all it's worth, is that the DP Quattro will surpass the Merrills in IQ (and also the Leica Monochrome, easily, when it comes to BW).

If you read what I wrote, you'd see that I agreed with this speculation. I only added the caveat that you could probably find a couple of specific conditions where that might not be the case.


Erik

Raist3d
Forum Pro | Posts: 32,689
Re: What the imager has
In reply to Laurence Matson, 5 months ago

Laurence Matson wrote:

What the imager has is 19 million spatial locations. How the pixels are counted is once again a big deal for those discussion types. I am guessing that the G and R layers each have around 5 million pixels and the top B, 19 million. Or thereabouts.

Of course, some of our favorite negativists will argue that this is not really an X3 imager. That also, is nonsense. There are 3 layers (X3) each of which collects stuff to yield a full-color reading at each spatial location. The oh-so obvious - at least to Ricardo - interpolation that has to be going on is a moot point at best. Moot on, if you want.

Why is it a moot point? It's the truth. Of course, the tradeoff is expected to be a better sensor, otherwise Sigma wouldn't bother.

You don't get an independent full-color reading at each spatial location like the previous sensor, because what you get now for red and green at each of the 19.6 MP spatial locations is effectively an average over a 2x2 block.

But it should certainly still be better, information-wise, than what a Bayer CFA delivers, setting aside noise and other variables (everything else being equal).

Finally, on the critical point of pixel-level sharpness - acuity - it is there, as always. The question really is how one could conceive it not being there. Acuity or mush, depending on the technology, is defined by the top layer.

Because full color resolution is lower, since two of the layers are 1/4 the resolution of the blue layer. But the perception of acuity may be kept well enough by using the blue layer to drive the luminance aspect of the image. As pointed out elsewhere, this is similar to some graphics texture-compression techniques: you lose some color resolution, but you keep luminance detail, giving the impression of more detail.

Depending on the situation, it may look more or less close to having had all three layers at full resolution (taking out noise issues).
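
The analogy is essentially chroma subsampling: keep luminance at full resolution and colour at quarter resolution. A rough 4:2:0-style Python sketch (an illustration of the general idea, not the camera's actual processing):

import numpy as np

# Keep full-resolution luma and quarter-resolution chroma, then recombine.
rng = np.random.default_rng(1)
rgb = rng.random((4, 4, 3))                      # toy full-colour 4x4 image

w = np.array([0.299, 0.587, 0.114])              # luma weights (sum to 1)
luma = rgb @ w                                   # full-resolution luminance
chroma = rgb - luma[..., None]                   # colour-difference signal

chroma_low = chroma.reshape(2, 2, 2, 2, 3).mean(axis=(1, 3))  # quarter-res colour
chroma_up = np.kron(chroma_low, np.ones((2, 2, 1)))           # spread back to 4x4
approx = luma[..., None] + chroma_up

print(abs(approx @ w - luma).max())   # ~0: luminance detail is fully preserved
print(abs(approx - rgb).max())        # > 0: colour detail inside 2x2 blocks is lost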

In the case of the single-layer Bayer process in all of its iterations, the mush comes from the fact that none of the pixels is acute from its neighbors. The Foveon imager pixels are. The 19 million plus acute blue pixels in the top layer define the spatial locations.

Yes the "blue" pixels in the top layer are more detailed, so define and carry the detail. But because you do not have the other layers with the same resolution, full color is not at the same resolution, you get an approximation. But the approach certainly allows for the perception of more detail.

In B&W this really shouldn't be a major issue. In color it will vary depending on the situation, but it may not be a big issue versus the previous sensor's real-world implementation.

To my mind, this imager is true to the process. Just a different solution.

It's a bit of a hybrid solution. You can't drive full color detail completely, in full RGB data space with just one layer. But I am expecting a better result in more situations vs the previous approach.

It's pretty unequivocal that in an ideal world where, say, the Merrill had no noise issues, that approach gathers more data and would work better than this one. But it's the real-world implementation issues of the previous sensor that would make this one the better solution overall.

That doesn't mean it won't carry its own tradeoffs, but I certainly expect it to be a better solution overall; otherwise Sigma wouldn't have bothered.

It may be a bit less straightforward and thus harder for some to get their heads around. Just wait for the images and duct-tape your jaws for support ahead of time.



Raist3d/Ricardo (Photographer, software dev.)- I photograph black cats in coal mines at night...
“The further a society drifts from truth the more it will hate those who speak it.” - George Orwell

Erik Magnuson
Forum Pro | Posts: 11,671
Re: What the imager has
In reply to dr.noise, 5 months ago

dr.noise wrote:

But does it RESTORE full color info? If it doesn't, then there's no advantage in gathering it.

Does it have to restore full color info or just more color info than other competitive sensors?

BTW, the advantage of gathering full info (and even whether full color info could be reliably restored) has been debated here for over 10 years.


Erik

ilsiu
Regular Member | Posts: 197
Re: What the imager has
In reply to xthfloor, 5 months ago

xthfloor wrote:

dr.noise wrote:

Laurence Matson wrote:

There are 3 layers (X3) each of which collects stuff to yield a full-color reading at each spatial location.

That is true, but in a rather useless way.

I have read quite a bit of "this sensor still gathers full color info".

But I probably miss one thing. Let's clarify that.

Yes, the sensor gathers full color info.

But does it RESTORE full color info? If it doesn't, then there's no advantage in gathering it.

For example, you have four jewels, each of them is either black or white. They are ordered in a random 2x2 block. You throw them one by one into a box. So that means you gathered the full color info - because none of the jewels were lost.

Now if someone tries to extract those jewels from the box and put them back into 2x2 block, he will never know in what order they were thrown or what place they should take. So the info cannot be restored.

Now imagine shooting a completely red target which doesn't have blue in it. Will the blue layer record zeroes, thus giving no spatial info? How can the red layer be restored from 5 MP to 20 MP?

I don't understand.

"Blue" layer records luminance (blue, green and red), so there should be enough data to decide, which pixel is darker and which is lighter.

How do the underlying "green" and "red" layers contribute more spatial info? Based on the diagrams, each of the four blue pixels "sees" the same green and red values - all four pixels will get offset the same amount.

If the "blue" layer already has blue, green, red info, then why are the other layers needed?  Why can't the full color image be entirely reconstructed from the blue layer?

Raist3d
Forum Pro | Posts: 32,689
Re: What the imager has
In reply to ilsiu, 5 months ago

ilsiu wrote:

xthfloor wrote:

dr.noise wrote:

Laurence Matson wrote:

There are 3 layers (X3) each of which collects stuff to yield a full-color reading at each spatial location.

That is true, but in a rather useless way.

I have read quite a bit of "this sensor still gathers full color info".

But I probably miss one thing. Let's clarify that.

Yes, the sensor gathers full color info.

But does it RESTORE full color info? If it doesn't, then there's no advantage in gathering it.

For example, you have four jewels, each of them is either black or white. They are ordered in a random 2x2 block. You throw them one by one into a box. So that means you gathered the full color info - because none of the jewels were lost.

Now if someone tries to extract those jewels from the box and put them back into 2x2 block, he will never know in what order they were thrown or what place they should take. So the info cannot be restored.

Now imagine shooting a completely red target which doesn't have blue in it. Will the blue layer record zeroes, thus giving no spatial info? How can the red layer be restored from 5 MP to 20 MP?

I don't understand.

"Blue" layer records luminance (blue, green and red), so there should be enough data to decide, which pixel is darker and which is lighter.

How do the underlying "green" and "red" layers contribute more spatial info? Based on the diagrams, each of the four blue pixels "sees" the same green and red values - all four pixels will get offset the same amount.

If the "blue" layer already has blue, green, red info, then why are the other layers needed? Why can't the full color image be entirely reconstructed from the blue layer?

It can't. It is an approximation at that point. It's pretty obvious the previous design would capture more data (it's right there in the photosite specs for both sensors), but real-world implementation issues should make this approach pull ahead of the previous one overall.
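
A quick way to see why the top layer alone cannot carry full colour: it folds everything into one number per location, so very different colours can produce identical readings, and only the lower (quarter-resolution) layers can tell them apart. The weights below are made up, purely for illustration:

import numpy as np

# Made-up top-layer weights for scene R, G, B; illustrative only.
top_w = np.array([0.20, 0.30, 0.50])

reddish = np.array([1.00, 0.00, 0.20])
greenish = np.array([0.00, 0.40, 0.36])

print(top_w @ reddish, top_w @ greenish)   # both ~0.3: identical top-layer readings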


Raist3d/Ricardo (Photographer, software dev.)- I photograph black cats in coal mines at night...
“The further a society drifts from truth the more it will hate those who speak it.” - George Orwell

dr.noise
Veteran Member | Posts: 3,447
Re: What the imager has
In reply to JohnLindroth, 5 months ago

JohnLindroth wrote:

Page 2 of this document has an example of wavelength sensitivity:

http://www.eso.org/sci/meetings/2009/dfa2009/Writeups/WR-Lesage.pdf

That kinda explains. I'm not really a math person, but ok, we'll see.

dr.noise
Veteran Member | Posts: 3,447
Re: What the imager has
In reply to Erik Magnuson, 5 months ago

Erik Magnuson wrote:

Does it have to restore full color info or just more color info than other competitive sensors?

Only if it says so in the advertisement.

D Cox
Senior Member | Posts: 6,951
Re: What the imager has
In reply to ilsiu, 5 months ago

ilsiu wrote:

The new sensor has 19 + 5 + 5 MP; I think it would be a good guess to say that it would resolve less than a hypothetical 19 + 19 + 19 MP sensor, but how much less? Will it be equivalent, better, or worse than the current 15 + 15 + 15 sensor?

How much less will depend on the test texture. For small patches of red and green of equal brightness, there will be some loss. For fine black and white lines, there will be no loss -- the resolution will be the full 20 (real) Megapixels.

This compares with the 18 green Mpix in a D800 or a Sony A7r. These cameras have 9 Mpix for blue and red, as opposed to 5 in the Quattro, so they should give slightly better results for chroma-only test charts.

A test chart would have to be lit by light of the right colour for the red, green or blue patches or lines to match in brightness.
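
For the chroma comparison, the per-channel sample counts fall straight out of the two layouts. Rough Python arithmetic, taking a nominal 36 MP for the D800/A7r class of Bayer sensor:

# Per-channel sample counts, in millions (rough arithmetic).
bayer_total = 36.0                        # nominal D800 / A7r class sensor
bayer_green = bayer_total / 2             # half the CFA sites are green
bayer_red = bayer_blue = bayer_total / 4  # a quarter each for red and blue

quattro_top = 19.6                        # full-resolution top layer
quattro_mid = quattro_bot = 19.6 / 4      # one site per 2x2 block below

print(bayer_green, bayer_red, bayer_blue)     # 18.0 9.0 9.0
print(quattro_top, quattro_mid, quattro_bot)  # ~19.6 4.9 4.9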
