All my lenses just doubled in length ....

Started Dec 19, 2012 | Discussions thread
Bart Hickman
Veteran Member | Posts: 7,111
Re: This shouldn't be controversial--CIZ is inferior to up-sampling raw...
In reply to Rehabdoc, Dec 20, 2012

Rehabdoc wrote:

Bart Hickman wrote:

Rehabdoc wrote:

Bart Hickman wrote:

Rehabdoc wrote:

I have a question:

Since we're converting from one form of pixel arrangement to a completely different form, is there really any such thing as "lossless conversion" from RAW to JPG? There isn't even "lossless conversion" from RAW to BMP... unless you use 14 bit BMPs that use monochromatic pixels arranged just like in the sensor.

The RAW data was not originally corrected in the way it's represented in a photographic file.

The upper left red pixel is most definitely NOT where the upper left blue pixel or where either of the two upper left-most green pixels are on a sensor.

So what would represent a "Perfect" conversion process? I'm sure there's wicked hard math behind this all, but what is the minimum number of BMP style pixels (24 bit color) that provides enough "oversampling" to "accurately" portray what the Bayer pattern cluster of sensor pixels "saw" when you shot the picture?

A 16 mpixel 24-bit RGB bitmap obviously has sufficient capacity to store a 16 mpixel 14-bit Bayer RAW losslessly (24 bits per pixel versus 14 is roughly 70% excess information-holding ability). It's not a matter of preserving the information losslessly. It's all about displaying it in a pleasing fashion.

Well I understand that the file size obviously has sufficient information capacity to store all the data.

But I'm asking whether the REMAPPING of the 14 bit RAW file to a 24 bit RGB bitmap can be said to be "optimal", because it's certainly not "lossless".

It can be lossless. It's perfectly possible to do a lossless transformation from the 14 bit RAW to 24 bit RGB. That's academic though because there may be loss involved if your primary goal is luminance resolution in which case you may intentionally sacrifice color resolution in order to get more luminance resolution. In the case of the Fuji SuperCCD, they pushed the limits on horizontal and vertical resolution only--diagonal resolution was not as good. So there are all sorts of tradeoffs in the pursuit of an optimal answer.

First off, thanks for the very informative answers, I'm finding them very interesting.

Sure!  I'm not an imaging expert.  I'm just an engineer and I know my share of network theory and DSP.

How exactly can you do a lossless transformation going from 14 bit RAW to 24 bit RGB, preserving all the color resolution, given there is nowhere to actually put 14 bits of red data?

If we simplify our example to converting a 2x2-pixel, 14-bit GRGB sensor to a 4-pixel, 24-bit bitmap file:

If the values of the pixels are: G1: 12,234. R: 8,232. B: 873. G2: 12,325.

What values could we possibly assign to P1, P2, P3, P4 that would actually preserve the color resolution of, say, the RED pixel?

G1 R        P1 P2
B  G2  ->   P3 P4

It seems to me that by its very nature, it has to be a lossy conversion, no matter what math you do, because of the nature of the transformation. It seems that there is more than one "optimal" answer, and that increasing the resolution of the bitmap allows you to more easily retain the spatial resolution for a given level of color resolution.

I suppose there's *always* some loss (in this case, loss is in the form of quasi-random noise) going from one integer representation to another integer representation even if the 2nd integer has more bits.  But in your example,

[R1 R2 R3 R4] = [A] * [G1 R B G2], where A is the 4 x 4 transformation matrix, R1..R4 is the red output-pixel vector, and [G1 R B G2] is the RAW pixel vector.  You repeat this process (with different matrices) for the G and B planes.

I don't think there's inherently any loss in this transformation.  The loss could be zero if the R1..R4 integers were large enough.  But for 8 bits, yeah, you're right, there will be round-off error of as much as +/-0.5 LSB (the rms across the 2 x 2 quad is only 1/1773, however, which IMO is negligible.)
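To make that concrete, here's a minimal NumPy sketch of the matrix transform and the 8-bit round-off it incurs. The matrix A here is a made-up toy (it just replicates the single R sample across the quad); a real demosaic kernel would draw on neighboring quads as well:

```python
import numpy as np

# Hypothetical 4x4 transform A mapping the RAW quad [G1, R, B, G2]
# to the four red output pixels R1..R4.  This toy A simply copies the
# lone R sample to all four output positions.
A = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
])

raw_quad = np.array([12234.0, 8232.0, 873.0, 12325.0])  # G1, R, B, G2 (14-bit)

red = A @ raw_quad                      # R1..R4, still full precision
red_8bit = np.round(red * 255 / 16383)  # quantize the 14-bit range to 8 bits

# Round-off error is bounded by +/- 0.5 LSB of the 8-bit scale
err = red_8bit * 16383 / 255 - red
assert np.all(np.abs(err) <= 0.5 * 16383 / 255)
```

The full-precision step is exactly reversible (A is invertible here); the only loss appears at the final integer rounding, which is the +/-0.5 LSB mentioned above.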

Again, like I've been mentioning, this is all academic--the real loss is happening with the abbreviated JPEG conversion that I believe Sony is using.  And this is the real reason CIZ seems to work like magic: it up-samples prior to the damage caused by their JPEG converter.


But I have not been able to understand it well enough to be convinced that a calculation using a 1:1 bitmap:sensor pixel ratio preserves as much luminance and color resolution as a process using a higher than 1:1 bitmap:sensor pixel ratio...

You'd be guaranteed a lossless RAW conversion if you simply had 4x the integer depth in each pixel (56 bits)--that's without any cleverness on the hardware math.  Then it's computationally easy to round off to whatever final bit depth is required.  Perhaps this is equivalent, in a way, to what you're talking about with over-sampling.  The A matrix could be chosen such that you wouldn't need so many bits.
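As an illustration of that 4x-depth point (a sketch, not anything Sony actually does): four 14-bit RAW samples pack into one 56-bit integer and come back out bit-exact, so any loss has to come from a later rounding step, never from the container itself:

```python
# Pack a 2x2 quad of 14-bit RAW samples (G1, R, B, G2) into a single
# 56-bit integer and unpack it again -- a lossless round trip.
quad = [12234, 8232, 873, 12325]  # each value fits in 14 bits (< 16384)

packed = 0
for v in quad:
    packed = (packed << 14) | v   # shift in 14 bits per sample

unpacked = []
for _ in range(4):
    unpacked.append(packed & 0x3FFF)  # peel off the low 14 bits
    packed >>= 14
unpacked.reverse()

assert unpacked == quad  # bit-exact: the container loses nothing
```

Python's arbitrary-precision integers make the 56-bit value trivial to hold; in fixed-width hardware you'd use a 64-bit word the same way.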

I guess I'm asking the same question as:

If a NEX 5N/5R/6 were to use ACR or software of choice to take the 16MP RAW files to produce 64MP bitmap files instead of 16MP bitmap files... could the 64MP bitmap files be SUPERIOR in luminance or color resolution (or in any other way) compared to the usual 16MP bitmap files?

I don't think so.  It might be slightly less noisy (see above), but only negligibly so.  Even base ISO generates more noise.

