The dp2Q's price has been unveiled at Yodobashi in Japan. Also an SPP6 screenshot from SIGMA

Well... about the Quattro... It will bring nothing to photography - only consumerism. I hope SIGMA will not continue down this path, because they are actually losing their magic.

If only SIGMA had worked on this "parallel universe camera"... every single Nikon user in love with the D700 (I'll let you do the maths) would have jumped to the SIGMA system, especially with the new lenses!

My argument on this point is not false, and you know it if you are honest.

--
Kind regards - http://www.hulyssbowman.com
I have no idea what you are talking about ('bring nothing to photography'?). Since you are appealing to honesty when you should actually explain your train of thought rationally (again: it's not clear to me whether there is a thought at all), I guess we suddenly find ourselves discussing ethics when discussing the Quattro?

PS:

Do you already have the DP2 Quattro? Otherwise you can only make assumptions. As they say: 'to assume is to make an ass of you and me'.
 
Once I had my own darkroom, I considered a 10 x 8 inch print to be big ;-)

I think Ansel Adams said that a 4x enlargement from the negative was ideal. A 10x8 print is about an 8x enlargement from 35mm, so you would expect it to look worse than the same print from a larger format.
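A quick sanity check on that enlargement arithmetic (nominal frame sizes; I assume the print is cropped to fill, so the tighter dimension governs):

```python
# Rough linear enlargement factor needed to fill a 10x8 inch (254x203 mm)
# print from a given negative. Frame sizes below are nominal.
def enlargement(neg_w_mm, neg_h_mm, print_w_mm=254.0, print_h_mm=203.0):
    """Linear enlargement factor, limited by the tighter dimension."""
    return max(print_w_mm / neg_w_mm, print_h_mm / neg_h_mm)

print(round(enlargement(36, 24), 1))  # 35mm full frame -> ~8.5x
print(round(enlargement(60, 45), 1))  # 6x4.5 medium format -> ~4.5x
```

So 35mm sits at roughly twice Adams's ideal for a 10x8, while medium format is right around it.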
 
That's how any technology must work, given that perception is an invention of the mind; the tech just exploits how the mind works by feeding it input it can interpret as reality. No, 'direct imaging' is not my assumption, merely the boastful claim of people who thought they were following a different path by aligning with Foveon.
 
Maybe not fully direct--just more direct. Whatever the Foveon technique is, you don't get colour moiré, which I get now and then with my D800--not that it matters. But that shows something different is going on. Bayer still reminds me of Stalin's quote: quantity has a quality all its own.
For me the most interesting property of Foveon is that each pixel is treated equally. Maybe there is noise. Maybe there is aliasing. Maybe there are other problems. But those problems are evenly spread over all pixels.
 
It's all about results. B&W is somewhat inferior to colour for simulating reality but that never stopped people creating wonderful art with it.
 
Ah, but that should lead to another piece of hypocrisy, no? Hasn't the argument often been that Foveon's big file sizes mean there is more data = better? Bayer's compact file sizes and speed are just conveniences. Real photographers praise quality above all other considerations...
 
You miss the point, I think: I'm not arguing that it won't work successfully, I'm arguing that Bayer cameras work successfully but that was never good enough for some.

To be any good, a camera had to not only work, but also be based on some kind of "pure" process of operation.

Now we have a new approach that is every bit as pragmatic as Bayer in that all that matters is whether it works well. Faster operation, smaller files, less storage, efficient. It is no longer in any way consistent with the "pure" operating principles of the original ideals, but I predict that will be conveniently forgotten on pragmatic grounds - it works, who cares!

The superiority of 3 stacked pixels at every location and no sign of any tricks like interpolation is so yesterday! Trouble is, all those thousands of cackling posts over the years claiming it required 3 stacked pixels at every spatial location, big data-packed files, and nothing else won't go away.

Bah, humbug!

--
"...while I am tempted to bludgeon you, I would rather have you come away with an improved understanding of how these sensors work" ---- Eric Fossum
Galleries and website: http://www.whisperingcat.co.uk/
Flickr: http://www.flickr.com/photos/davidmillier/
 
Well... about the Quattro... It will bring nothing to photography - only consumerism. I hope SIGMA will not continue down this path, because they are actually losing their magic.

If only SIGMA had worked on this "parallel universe camera"... every single Nikon user in love with the D700 (I'll let you do the maths) would have jumped to the SIGMA system, especially with the new lenses!

My argument on this point is not false, and you know it if you are honest.
 
Someone on Twitter tweeted that he found a pre-order advertisement for the dp2Q at Yodobashi Camera. The price is 109,620 yen including 8% tax, which is 1,071.12 USD at the moment.
If it delivers what it promises, that's quite reasonable.
 
I would also expect it to be an improvement, else I can't see why they would go to the effort of developing it.

One quibble: if the resolution bump truly is the equivalent of a pixel-count increase from 15 to 20 MP, then it is a ~33% increase in pixels, not a 30% increase in resolution. The increase in resolution will be about 15%, i.e. the increase in horizontal pixels will be ~15%.

That will be barely detectable, I would say, as it usually takes something like a 100% increase in pixel count to make a significant visual difference. But it should score enough better on resolution charts that Sigma can claim to match or perhaps exceed the D800, which may be very important for marketing reasons.
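The pixels-vs-resolution point is easy to verify: linear resolution scales with the square root of pixel count:

```python
import math

# Linear resolution scales with pixels per edge, so a pixel-count jump
# from 15 MP to 20 MP (~33% more pixels) only buys ~15% more linear
# resolution, and doubling the pixel count buys ~41%.
def linear_resolution_gain(old_mp, new_mp):
    """Fractional gain in linear (per-edge) resolution from a pixel-count change."""
    return math.sqrt(new_mp / old_mp) - 1.0

print(f"{linear_resolution_gain(15, 20):.0%}")  # 15%
print(f"{linear_resolution_gain(15, 30):.0%}")  # 41%
```

This is why a doubling of pixel count is usually quoted as the threshold for a clearly visible difference.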
 
The superiority of 3 stacked pixels at every location and no sign of any tricks like interpolation is so yesterday! Trouble is, all those thousands of cackling posts over the years claiming it required 3 stacked pixels at every spatial location, big data-packed files, and nothing else won't go away.

Bah, humbug!
I am one of those humbugs then :)

I actually believe that three stacked high-quality RGB values would be totally fantastic - if anyone could make such a sensor. A sensor that detects most photons and efficiently sorts the detected photons into RGB values, with the low noise of a B&W sensor.

If I have not got it wrong, the main reason for the Quattro solution is that the lower layers are no better than 4 MP in the Merrill cameras. So the Merrill sensor is unnecessarily complex for what it gives.

And if I am not wrong again, the sensor in SD15 does not have that problem. There all three layers are as good as it gets.

So - making the lower layers the resolution of the SD15 and then making the upper layer 4 times that is an optimal solution.
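The arithmetic behind that can be sketched quickly; the per-layer megapixel figures below are approximate numbers as reported around the announcement, not official specs:

```python
# Approximate per-layer photosite counts in megapixels (ballpark figures
# from reported specs, not official numbers).
merrill_layers = [15.4, 15.4, 15.4]  # three full-resolution layers
quattro_layers = [19.6, 4.9, 4.9]    # full-res top, quarter-res lower layers

print(round(sum(merrill_layers), 1))  # 46.2 M photosites read out
print(round(sum(quattro_layers), 1))  # 29.4 M, yet with a higher-res top layer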
 
I've no problem with any solution that works, Roland, just an issue with those who said there is only one way to do things.
 
I've no problem with any solution that works, Roland, just an issue with those who said there is only one way to do things.
Only one way to do things that we knew of at the time. But if it works, it works. Though until we actually see one we won't know for sure.
--
"...while I am tempted to bludgeon you, I would rather have you come away with an improved understanding of how these sensors work" ---- Eric Fossum
Galleries and website: http://www.whisperingcat.co.uk/
Flickr: http://www.flickr.com/photos/davidmillier/
 
The superiority of 3 stacked pixels at every location and no sign of any tricks like interpolation is so, yesterday! Trouble is, all those thousands of cackling posts over the years claiming it required 3 stacked pixels at every spatial location and big data-packed files and nothing else, won't go away.

Bah, humbug!
I am one of those humbugs then :)

I actually believe that three stacked high quality RGB values would be totally fantastic. If anyone could make such a sensor. A sensor that detects most photons and efficiently sorts the detected photons in RGB values, with the low noise of a B&W sensor.

If I have not got it wrong, the main reason for the Quattro solution is that the lower layers are not better than 4 MP in the Merrill cameras. So, the Merrill sensor is unnecessary complex for what it gives.

And if I am not wrong again, the sensor in SD15 does not have that problem. There all three layers are as good as it gets.

So - making the lower layers the resolution of SD15 and then make the upper layer 4 times more is an optimal solution.
 
in short, removing unnecessary amount of data to have best quality. this is reason why most amount is put on top layer for maximum resolution output and lower layers as support for color.

if this is true, then expect better linear resolution at recommended/advisable amount of color. I say advisable because as you said, too much data will be put to waste.

however, I not know how this would impact pictures as far as how it would look as no samples are available yet.
 
Some years ago, when Foveon was doing their initial stuff, then three measures for each pixel location was the mantra. Something that was needed for accurate imaging.

But today, I can see two kind of reactions.
  1. It is good as long as the result is good (and we trust Sigma).
  2. It is still a Foveon sensor with three samples for each pixel.
Number 1 might mean that you see some kind of maturity since the initial days. Number 2 is just strange IMHO, I do not understand it at all.
There are still three measures at each pixel location, the bottom two merely compressed after reception.

People using the cameras care way more about what it captures rather than an implementation detail of storage.

--
---> Kendall
http://www.flickr.com/photos/kigiphoto/
http://www.pbase.com/kgelner
http://www.pbase.com/sigmadslr/user_home
 
Last edited:
There are still three measures at each pixel location, the bottom two merely compressed after reception.

People using the cameras care way more about what it captures rather than an implementation detail of storage.
The point is that in the lower layers, all of the four pixels under four top pixels are now one and will always give the same reading.

Imagine extending the idea so that there were only half a dozen pixels altogether in each of the lower layers. Now would you expect full resolution of colours ?

The question is not whether chroma resolution is lost: it certainly is. The question is whether the loss will be perceptible on a big print, especially in a picture of textiles. Or a distant poppy field.

We will not know until the camera is out and critical big prints of difficult subjects have been made. Until then it is guesswork. Items such as shells with low colour saturation will not show any loss of detail. Worsteds may.
 
The point is that in the lower layers, all of the four pixels under four top pixels are now one and will always give the same reading.

Imagine extending the idea so that there were only half a dozen pixels altogether in each of the lower layers. Now would you expect full resolution of colours ?

The question is not whether chroma resolution is lost: it certainly is. The question is whether the loss will be perceptible on a big print, especially in a picture of textiles. Or a distant poppy field.

We will not know until the camera is out and critical big prints of difficult subjects have been made. Until then it is guesswork. Items such as shells with low colour saturation will not show any loss of detail. Worsteds may.
Exactly.
 
There are still three measures at each pixel location, the bottom two merely compressed after reception.

People using the cameras care way more about what it captures rather than an implementation detail of storage.
The point is that in the lower layers, all of the four pixels under four top pixels are now one and will always give the same reading.
But the lower layers received essentially four different values in capture. The values received ALSO adjusted the top layers to some extent for each of the four locations. And the lower layer values are not NECESSARILY always the same value during (certainly not after) processing, as in some cases (perhaps most) you can distinguish what made up the lower layer values because of the variance in top values.
Imagine extending the idea so that there were only half a dozen pixels altogether in each of the lower layers. Now would you expect full resolution of colours ?
I also do not save images at JPG Q2 for that same reason. Instead I use Q10-Q12. It's not a lossless compression but it is nearly so, and compression effects are isolated to a group of four pixels.

You are also making the mistake in thinking of the area in isolation instead of realizing you can use adjacent pixel quads as lookup values for decompression of lower layer values.
The question is not whether chroma resolution is lost: it certainly is.
It MAY BE. Mull over that point and the MANY cases for which that is true, as it is important to understand why the worst case is not at all the common case.
The question is whether the loss will be perceptible on a big print, especially in a picture of textiles. Or a distant poppy field.
We will not know until the camera is out and critical big prints of difficult subjects have been made.
Such big prints have already been seen; the verdict is in on that front.
Until then it is guesswork. Items such as shells with low colour saturation will not show any loss of detail. Worsteds may.
There's guesswork, and there is educated guesswork. We already know how the Foveon sensor works so most of the guesswork CAN be educated. But all of the technical people appear to have decided to throw out the understanding of overlapping response to different wavelengths in proclaiming what the sensor will be able to do.
 

Keyboard shortcuts

Back
Top