RAW Vs. SRAW..

Started Dec 27, 2007 | Discussions
rrcphoto Veteran Member • Posts: 6,173
Re: I tend to suspect that they don't.

JimH wrote:

I imagine that most RAW converters don't do the downsampling at the
RAW conversion stage.

Instead, I am betting that most do the conversion to full resolution
and then simply downsample the resulting image using one of the more
familiar strategies designed for use on color bitmaps.

So I'd like to see a RAW converter that explored this a bit and
perhaps offered some alternative strategies for the
binning/downsizing. But I'm not offering to write one just now

Astrophotography RAW converters do neat things like that, e.g. superpixel interpolation and drizzle algorithms; however, they simply take the luminance value. PhotoAcute does something very similar to drizzle for its "super resolution".
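
A bare-bones sketch of the superpixel idea, in Python (my own illustration, assuming an RGGB mosaic; not any converter's actual code):

import numpy as np

def superpixel_bin(cfa):
    """Collapse an RGGB Bayer mosaic into one RGB pixel per 2x2 block.

    Illustration only: no demosaicing, each output pixel is built solely
    from the four sensels that physically sit under it. Assumes the
    pattern starts with R at (0, 0); a real converter would read the
    CFA pattern from the file's metadata.
    """
    h, w = cfa.shape
    h, w = h - h % 2, w - w % 2          # drop any odd edge row/column
    r  = cfa[0:h:2, 0:w:2]               # red sensels
    g1 = cfa[0:h:2, 1:w:2]               # greens on the red rows
    g2 = cfa[1:h:2, 0:w:2]               # greens on the blue rows
    b  = cfa[1:h:2, 1:w:2]               # blue sensels
    g  = (g1.astype(np.float64) + g2) / 2.0
    return np.dstack([r, g, b])          # quarter-resolution RGB image

# e.g. a 3888 x 2592 mosaic comes out as a 1944 x 1296 x 3 RGB image.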

herr_bob Contributing Member • Posts: 607
Re: RAW Vs. SRAW..

rrcphoto wrote:

the sRAW pixel is the sensel data accumulated and binned directly in
hardware, and output as a 3-channel RGB value, which is why there's a
variation in size between the two formats (1/4 the resolution, roughly
1/2 the file size) = RGB 24-bit * 1936 * 1288 = 7.1MB, versus RAW at
3888 x 2592 * 16-bit.

If sRAW is only 24-bit RGB, i.e. 8 bits per channel, where are the extra 6 bits per channel? I think sRAW is also a losslessly compressed format.

GaborSch Veteran Member • Posts: 7,203
To the understanding of sRaw

sRaw is not a quarter of the original raw. Some people have already noticed that the file size is closer to half than to a quarter.

The sRaw image is "super quality in small size". Although there are only 1/4 as many pixels, there are 30 bits of data per pixel instead of 14. Each sRaw pixel carries a green and either a red or a blue component, 15 bits each.
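
A quick back-of-the-envelope check with the sizes quoted in this thread (uncompressed upper bounds only; the CR2 container also compresses losslessly):

# Rough sanity check of the "half, not a quarter" observation, using only
# the dimensions and bit depths quoted in this thread (illustrative figures).
sraw_bits = 1936 * 1288 * 30          # 1/4 the pixels, ~30 bits per pixel
raw_bits  = 3888 * 2592 * 14          # full-resolution mosaic, 14 bits per sensel

print(sraw_bits / 8 / 2**20)          # ~8.9 MB uncompressed
print(raw_bits  / 8 / 2**20)          # ~16.8 MB uncompressed
print(sraw_bits / raw_bits)           # ~0.53 -- closer to 1/2 than to 1/4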

John Sheehy Forum Pro • Posts: 21,743
Re: I tend to suspect that they don't.

JimH wrote:

I imagine that most RAW converters don't do the downsampling at the
RAW conversion stage.

Instead, I am betting that most do the conversion to full resolution
and then simply downsample the resulting image using one of the more
familiar strategies designed for use on color bitmaps.

So I'd like to see a RAW converter that explored this a bit and
perhaps offered some alternative strategies for the
binning/downsizing. But I'm not offering to write one just now

I tend to do things that are very critical in the deep shadows with my own manual conversions; dealing with blackpoints, subtracting banding artifacts, choosing the precision, the order of operations, etc., are all possible this way, but I don't know what to do about colorimetric issues, color spaces, etc., and those are things I'd rather trust to someone who knows more about them. Having plugins in RAW converters could take care of a lot of these things. We are still living in the dark ages of digital, though; I don't think most converter developers are aware of most of the issues with RAW data; each one seems to deal with some issues well and handle others poorly. Short of that, a program that let you load RAW data and work on it (or import edited RAW data) and then write out a DNG (CFA or linear) would be helpful, although you would forfeit immediate feedback with this approach.

There are so many things that could be done to improve RAW files, IMO, before they are converted. Uneven amplification of odd and even lines could be addressed simply by taking the mean of the green pixels in each line, looking for patterns in those means, and scaling the lines accordingly; an algorithm could look for banding offsets, "fix" black-clipped data, and even out the uneven highlight clipping by line that is seen in the 40D, 10D, D200, etc.
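
For example, a bare-bones sketch of the odd/even line idea (my illustration only, assuming an RGGB mosaic with the black level already subtracted; a real implementation would look at per-row patterns rather than just the two parity means):

import numpy as np

def equalize_odd_even_gain(cfa):
    """Balance odd/even line gain using green-sensel means.

    If odd and even lines are amplified slightly differently, the mean of
    the green sensels on odd rows differs systematically from the mean on
    even rows. Scaling one parity onto the other flattens that pattern
    without touching real scene gradients. Assumes an RGGB mosaic.
    """
    cfa = cfa.astype(np.float64)
    even_green = cfa[0::2, 1::2].mean()      # greens on the R-G rows
    odd_green  = cfa[1::2, 0::2].mean()      # greens on the G-B rows
    out = cfa.copy()
    out[1::2, :] *= even_green / odd_green   # pull odd rows onto even-row gain
    return out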

-- hide signature --

John

John Sheehy Forum Pro • Posts: 21,743
Re: To the understanding of sRaw

GaborSch wrote:

sRaw is not a quarter of the original raw. Some people have already
noticed that the file size is closer to half than to a quarter.

The sRaw image is "super quality in small size". Although there are
only 1/4 as many pixels, there are 30 bits of data per pixel instead
of 14. Each sRaw pixel carries a green and either a red or a blue
component, 15 bits each.

That doesn't make any sense (not that it would stop it from being true!). There is no way to reduce the number of samples by 50% for each color without all kinds of spatial redistribution problems.

I would suspect that sRAW then has a lot of pixel-level color artifacts. How is it on sharp B&W edges?

If Canon wants to make smaller files, they can start by dropping worthless, marketing-gimmick bits from the RAWs. ISO 100 only needs 12 bits; ISO 3200 only 9 bits without losing any shadow detail. The dropped bits are less compressible than the ones that remain.

I had been guessing that there were 1/4 as many pixels, with reduced bit depth, each one having red, green, and blue components. Sort of like Adobe's linear DNG, but at 1/4 resolution.

Does sRAW maintain the blackpoint offset?

-- hide signature --

John

rrcphoto Veteran Member • Posts: 6,173
Re: To the understanding of sRaw

John Sheehy wrote:

If Canon wants to make smaller files, they can start by dropping
worthless, marketing-gimmick bits from the RAWs. ISO 100 only needs
12 bits; ISO 3200 only 9 bits without losing any shadow detail. The
dropped bits are less compressible than the ones that remain.

Well, in the case of the 40D there's a provable increase in DR of around 1/2 stop in the shadows (as per dpreview), and they are assuming it's because of 14-bit mode. Also, the overall latitude of a 40D RAW file is greater than that of a 30D RAW file ... and that's with an increased pixel density.

So I'm not quite sure it's "worthless". It's a YMMV thing, a question of whether or not what it captures is important to your photography and what you do with the RAW data.

I have a feeling we're going to be seeing a lot of sRAW, as it has been whispered that Canon's next generation of sensor technology will really focus on hardware binning as a method of reducing noise (purportedly a 2-stop gain in high-ISO noise), and I would presume they could also increase DR with clever binning, since 40-60MP low-voltage CMOS sensors are the next significant generation of Canon sensors.

GaborSch Veteran Member • Posts: 7,203
Black level correction in sRaw

John Sheehy wrote:

That doesn't make any sense (not that it would stop it from being
true!). There is no way to reduce the number of samples by 50% for
each color without all kinds of spatial redistribution problems

I think (though I am not sure) that the following happens:

R-G-R-G
G-B-G-B

this octet will be substituted by G&R-G&B.
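
In code, my guess would look something like this (only my guess, not Canon's documented encoding; the function and the averaging choices are mine):

import numpy as np

def octet_to_gr_gb(cfa):
    """Collapse each 2x4 RGGB octet into one (G,R) and one (G,B) pixel.

    A sketch of the guess above. Within each 2-row by 4-column block, the
    two reds and the two greens on the R-G line are averaged into a (G, R)
    pair, and the two blues and the two greens on the G-B line into a
    (G, B) pair, giving 1/4 as many pixels with two components each.
    """
    h, w = cfa.shape
    h, w = h - h % 2, w - w % 4                  # trim to whole octets
    blk = cfa[:h, :w].astype(np.float64)
    blk = blk.reshape(h // 2, 2, w // 4, 4)      # (octet row, row in octet, octet col, col in octet)
    g_top = blk[:, 0, :, 1::2].mean(axis=-1)     # greens on the R-G line
    r     = blk[:, 0, :, 0::2].mean(axis=-1)     # reds
    g_bot = blk[:, 1, :, 0::2].mean(axis=-1)     # greens on the G-B line
    b     = blk[:, 1, :, 1::2].mean(axis=-1)     # blues
    gr = np.stack([g_top, r], axis=-1)           # the (G, R) pixels
    gb = np.stack([g_bot, b], axis=-1)           # the (G, B) pixels
    return gr, gb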

Does sRAW maintain the blackpoint offset?

1. There is no "blackpoint offset" in Canon cameras, but arrays of black level corrections (masked areas at two, three or four edges of the image).

2. Because the black level correction depends on the position of the pixel (column-row crossing), there is no way to apply the corrections gained from the masked pixels. That has to happen in-camera, before creating these "composite" pixels.

AmirNasher Regular Member • Posts: 116
2.5Mp is 1/4th of pixels, but size only 1/2 !!?

WHY?

-- hide signature --

my photo site - http://sight.com.ua
my mobile stuff - http://mobilemodding.info

John Sheehy Forum Pro • Posts: 21,743
Re: To the understanding of sRaw

rrcphoto wrote:

John Sheehy wrote:

If Canon wants to make smaller files, they can start by dropping
worthless, marketing-gimmick bits from the RAWs. ISO 100 only needs
12 bits; ISO 3200 only 9 bits without losing any shadow detail. The
dropped bits are less compressible than the ones that remain.

well, in the case of the 40D, there's provable increase in DR of
around 1/2 stop in the shadows (as per dpreview), and they are
assuming it's because of 14 bit mode.

No, it's because there is less analog noise. Read noise at ISO 100 is about 1.4 12-bit ADUs in the 40D, while it is about 2.1 ADUs in the 30D. That's 1/2 stop. If you quantize the 14-bit 40D RAW to twelve bits, it is still limited by the analog noise, and the quantization has no impact on IQ.
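
The arithmetic, using the numbers above (a trivial check, nothing more):

import math

rn_40d = 1.4    # 40D read noise at ISO 100, in 12-bit ADU
rn_30d = 2.1    # 30D read noise at ISO 100, in 12-bit ADU

# DR gain in stops from the lower read noise alone (same full scale):
print(math.log2(rn_30d / rn_40d))   # ~0.58 stops -- the "about 1/2 stop"

# Truncating 14-bit data to 12 bits adds a quantization step of 1 ADU
# (12-bit), well under the 1.4 ADU analog noise floor, so the extra two
# bits change nothing visible on a single frame.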

You CAN'T judge the value of 14 bits against 12 bits by comparing them on different cameras. Even cameras like the D300 that offer both have other differences besides bit depth involved (readout/burst speed).

also the overall latitude of
a 40D RAW file is greater than that of a 30D RAW file ... and that's
with an increased pixel density.

Well, DR and exposure latitude go hand in hand. The only difference is that you may accept more shadow noise for a DR standard than a latitude standard.

so I'm not quite sure it's "worthless".

I know it's worthless, unless you're an astrophotographer stacking 16 images, in which case the tiny bit of signal in the extra bits may wind up becoming visible.

it's a YMMV thing on whether
or not what it captures is important to your photography and what you
do with the RAW data.

I think even astrophotography only benefits ever so slightly from the extra bits; the person shooting a single frame on their camera gets nothing out of the extra two bits that they're ever going to see, because the difference is trivial compared to the dynamics of the analog read noise.

I have a feeling we're going to be seeing a lot of sRAW, as it has
been whispered that Canon's next generation of sensor technology will
really focus on hardware binning as a method of reducing noise
(purportedly a 2-stop gain in high-ISO noise), and I would presume
they could also increase DR with clever binning, since 40-60MP
low-voltage CMOS sensors are the next significant generation of Canon
sensors.

Hardware binning is only going to reduce image read noise (software binning never does; it only reduces pixel noise); shot noise is a big part of the high-ISO issue, too, and binning doesn't have any benefit whatsoever for image shot noise. Binning is a big hype that will never deliver what it is supposed to, IMO, except as a marketing gimmick for people naive enough to think that looking at a reduced-resolution image with fewer pixels but the same image noise is a good thing. If you have Photoshop, you can simulate at least software and shot-noise binning (IOW, everything but hardware read-noise binning) with the Pixelate > Mosaic filter. Unless the noise is patterned at the same frequency as your pixelation, pixelating (binning) only makes the noise grains bigger as it makes them shallower, and trashes subject detail at the same time. You can see less of what you tried to capture. It's a totally false economy, and I can't believe that so many people believe in it.
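
You can show the pixel-noise vs. image-noise point without Photoshop, too; here is a toy demonstration with purely synthetic noise (illustration only, not real sensor data):

import numpy as np

rng = np.random.default_rng(0)

# A flat grey frame with independent random noise in every pixel, standing
# in for the pixel-level noise of a real capture.
frame = 1000.0 + rng.normal(0.0, 10.0, size=(1024, 1024))

# 2x2 block average: a software stand-in for the Pixelate > Mosaic test.
binned = frame.reshape(512, 2, 512, 2).mean(axis=(1, 3))

# Per-pixel noise is indeed halved by the averaging...
print(round(frame.std(), 2), round(binned.std(), 2))    # ~10.0 vs ~5.0

# ...but re-measure the noise at any coarser scale (the scales that survive
# at print size) and nothing has changed: the binning removed the finest
# noise scale along with the finest detail, and left every other scale alone.
coarse_orig   = frame.reshape(64, 16, 64, 16).mean(axis=(1, 3))
coarse_binned = binned.reshape(64, 8, 64, 8).mean(axis=(1, 3))
print(round(coarse_orig.std(), 3), round(coarse_binned.std(), 3))  # identical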

I have my doubts about hardware binning, too, at least in the immediate future. The biggest obstacle with sensor technology and read noise, as it affects DR, is the inability to get read noise below a certain level, relative to the full DR of the readout. All of the highest-DR cameras right now are stuck at the same max_signal-to-readnoise level; 1.25 ADU for 12-bit systems, 5 ADU for 14-bit systems, and 20 ADU for 16-bit systems.

The cameras that keep RAW read noise at bay at higher ISOs can do so ONLY BECAUSE THEY ARE READING A LOWER NUMBER OF MAXIMUM ELECTRONS. They cannot maintain the same low read noise, as measured in electrons, when counting more of them, and that is exactly what binning will entail. The read noise of the binned pixel will be similar to the read noise of a single pixel at 1/4 the ISO.

I don't believe a word Canon or any other manufacturer says about their upcoming products. It's almost always baseless hype.

-- hide signature --

John

John Sheehy Forum Pro • Posts: 21,743
Re: Black level correction in sRaw

GaborSch wrote:

John Sheehy wrote:

That doesn't make any sense (not that it would stop it from being
true!). There is no way to reduce the number of samples by 50% for
each color without all kinds of spatial redistribution problems

I think (though I am not sure) that the following happens:

R-G-R-G
G-B-G-B

this octet will be substituted by G&R-G&B.

Does sRAW maintain the blackpoint offset?

1. There is no "blackpoint offset" in Canon cameras, but arrays of
black level corrections (masked areas at two, three or four edges of
the image).

Blackpoint offsets. They are offset from zero in the RAW data, and the RAW data has what would be negative values (negative noise in black and near-black areas) if the true blackpoint is zeroed.

2. Because the black level correction depends on the position of the
pixel (column-row crossing), there is no way to apply the corrections
gained from the masked pixels. That has to happen in-camera, before
creating these "composite" pixels.

You didn't answer my question at all, and you're quite wrong in what you say here. I've used this data quite successfully to remove banding from RAW data. The interference between the horizontal and vertical means is trivial.

-- hide signature --

John

GaborSch Veteran Member • Posts: 7,203
Re: Black level correction in sRaw

John Sheehy wrote:

You didn't answer my question at all, and you're quite wrong in what
you say here

The answer was "it is not possible". The relationship between pixels and the respective black level corrections is nonexistent in the sRaw. Therefore it had to be done in-camera.

John Sheehy Forum Pro • Posts: 21,743
Re: Black level correction in sRaw

GaborSch wrote:

John Sheehy wrote:

You didn't answer my question at all, and you're quite wrong in what
you say here

The answer was "it is not possible". The relationship between pixels
and the respective black level corrections is nonexistent in the
sRaw. Therefore it had to be done in-camera.

Nonsense. You can just sRAW the black pixels, too. No difference necessary.

And they don't even have to be present in a file to leave "negative" exposure/noise space in the file.

-- hide signature --

John

Gao Gao Contributing Member • Posts: 561
A look at DNG files converted from sRAW and RAW

I compiled the dng_sdk from Adobe and ran it in debug mode. Both the sRAW and the RAW were converted using the standard Adobe DNG Converter 4.3.1, keeping the raw data raw.

The observation is that at Stage 3 (interpolation), the DNG from RAW provides valid CFA mosaic information, including CFA size, pattern order, etc., while the DNG from sRAW does not provide this information and the Stage 2 image is copied directly to Stage 3's output. This means the DNG converter is doing whatever interpolation is needed, and the DNGs it spits out do not have any CFA pattern. This perhaps explains the file size of the uncompressed DNG (slightly larger than 1936*1288*3*16/8 bytes, meaning full RGB sampling); also, the DNG converter itself has become much fatter than before, if I'm not wrong, meaning some extra code specially written (or ported) for sRAW might have been added to handle the interpolation at conversion time.
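
For reference, the arithmetic behind that size figure (nothing more than the quoted dimensions multiplied out):

# Uncompressed payload for a 1936 x 1288, 3-channel, 16-bit-per-channel image:
payload_bytes = 1936 * 1288 * 3 * 16 // 8
print(payload_bytes, payload_bytes / 2**20)   # 14,961,408 bytes, ~14.3 MB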

So my attempt to crack sRAW through DNG has pretty much hit a dead end.

Cheers...

-- hide signature --

. 。o O o 。 . 。o O o 。 . 。o O o 。 .

Gao Gao Contributing Member • Posts: 561
A guess about what's inside sRAW

Since the spatial resolution is halved in both dimensions (1/4 the total pixels), the file size is about halved, and the per-pixel sharpness seems higher, here's my guess:

1. sRAW has full green information: for every block of 4 pixels in sRAW, there are 4 green samples.

2. sRAW has half the red and half the blue information: for every block of 4, there are 2 samples of red and 2 of blue (see the sketch after this list).

3. The in-camera interpolation can be tested by measuring pixel variance - but that would only distinguish naive bilinear from more sophisticated methods (which are more likely).
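
Here is the guessed layout in code, for one 2x2 sRAW block (purely illustrative; the real CR2 layout is unknown to me):

import numpy as np

# Sample-presence masks for one 2x2 block of the guessed sRAW layout
# (1 = a real sample exists at that output pixel, 0 = must be interpolated).
green = np.array([[1, 1],
                  [1, 1]])
red   = np.array([[1, 0],
                  [0, 1]])
blue  = np.array([[0, 1],
                  [1, 0]])

samples_per_block = int(green.sum() + red.sum() + blue.sum())
print(samples_per_block)   # 8 samples kept, versus 16 sensels in a 4x4 CFA tile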

-- hide signature --

. 。o O o 。 . 。o O o 。 . 。o O o 。 .

Rich Turk Regular Member • Posts: 283
Re: A guess about what's inside sRAW

imho, "S-RAW" is the least useful of any new feature on the 40D.

Now if you gave me an "M-RAW" that I could get an A4 print out of, I might use it to save space on my card, etc.
--
http://www.richturkphotos.com

Gao Gao Contributing Member • Posts: 561
Sorry, the picture is up now - and about mid-RAW

Take a look at the hypothesis - it is indeed a mid-RAW, IMHO. It retains half the information of the raw data in terms of spatial resolution, while the averaging may reduce the noise by almost a stop.

Well...for overall good SNR, the naive method should probably sample at Red instead of Blue.

-- hide signature --

. 。o O o 。 . 。o O o 。 . 。o O o 。 .

John Sheehy Forum Pro • Posts: 21,743
Re: A look at DNG files converted from sRAW and RAW

Gao Gao wrote:

So my attempt to crack sRAW through DNG has pretty much hit a dead end.

Does DCRAW support sRAW yet? If you use document mode "-D" it may give a color image (it usually outputs greyscale in "-D" mode) that is just the decoded RAW. I'd try it myself, but I don't have any sRAWs.

-- hide signature --

John

John Sheehy Forum Pro • Posts: 21,743
Re: A guess about what's inside sRAW

Gao Gao wrote:

Scheme 1 does cause a shift in alignment of the color planes, but it is almost totally reversible, in the sense that you know exactly where the data came from, and can shift it back in the RAW converter (working at a higher internal resolution). The only thing lost, resolution-wise, is that you have 1 green representing 2 greens from the original capture, so there will be a softening of green with a slight diagonal blur. To have R, G, and B for each pixel, however, requires a dropping of bits to meet the file size. I thought Canon claimed 14 bits for sRAW? That would suggest only two colors per pixel.

Scheme 2 has a little problem; every original red and green pixel is used in 4 different output pixels. This loses resolution in the red and green, and makes noise shallower, but coarser. That's the price of trying to maintain alignment and distribution.

Sort of like one of my quick'n'dirty pseudo-demosaicing routines; I lose one column and one row of pixels and create new pixels in the corners between all the original pixels by combining the two greens and the single red and blue found at each intersection. There is no overall alignment problem, because each new pixel pulls the alignment in a different way than the others around it, but the results are softer than real demosaicing.
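
In rough code, that corner trick looks something like this (a simplified sketch of my own reading of it, assuming an RGGB pattern starting at the top-left; not the actual routine):

import numpy as np

def corner_pseudo_demosaic(cfa):
    """Place one RGB pixel at the corner between each 2x2 group of sensels.

    Every 2x2 window of an RGGB Bayer mosaic contains exactly one red, one
    blue and two green sensels, so red and blue are taken as-is and the two
    greens are averaged. Output is (h-1) x (w-1) x 3, one row and one
    column smaller than the mosaic, soft but without an overall alignment bias.
    """
    cfa = cfa.astype(np.float64)
    h, w = cfa.shape
    tl, tr = cfa[:-1, :-1], cfa[:-1, 1:]   # the four sensels of each window
    bl, br = cfa[1:, :-1],  cfa[1:, 1:]
    yy, xx = np.mgrid[0:h - 1, 0:w - 1]
    even_row, even_col = (yy % 2 == 0), (xx % 2 == 0)
    # Which corner holds red or blue depends on the window's position parity.
    red   = np.where(even_row, np.where(even_col, tl, tr),
                               np.where(even_col, bl, br))
    blue  = np.where(even_row, np.where(even_col, br, bl),
                               np.where(even_col, tr, tl))
    green = np.where(even_row == even_col, (tr + bl) / 2, (tl + br) / 2)
    return np.dstack([red, green, blue])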

Again, I don't think that three colors per pixel is likely unless they are reducing bit depth (which, I suppose, is possible, as sRAW is not supposed to be full quality).

-- hide signature --

John

Gao Gao Contributing Member • Posts: 561
I used someone else's file...

There was a post in this forum very recently with sRAW in the title...

-- hide signature --

. 。o O o 。 . 。o O o 。 . 。o O o 。 .

Gao Gao Contributing Member • Posts: 561
Err... no, I didn't say 3 colors per pixel in sRAW

Scheme 2 will give, from every 4x4 block of RAW, a 2x2 sRAW block with 4 green samples, 2 blues and 2 reds - altogether 8 samples versus 16 in the original raw.

Green Plane:
1 1
1 1

Blue Plane:
1 x
x 1

Red Plane:
1 x
x 1

or

x 1
1 x

Where x indicates a missing sample. Well, I was seeing 3 colors per pixel in the DNG file, but not in the sRAW CR2 (whose byte-stream structure I have no idea about).

Also, the interpolation of the values may not be simply bilinear. The densely sampled green can be used to estimate the local gradient and to guide the interpolation of both green itself and red. The problem with Scheme 1 (under the assumption of R:G:B = 1:2:1) is that the distances between valid samples are unbalanced and can be rather far.
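
A toy sketch of what I mean by green-guided interpolation, filling the missing reds of the checkerboard layout above (illustration only, with names and choices of my own; certainly not the camera's algorithm):

import numpy as np

def fill_red_green_guided(red, green):
    """Fill missing red samples using the dense green plane as a guide.

    Assumes the guessed layout: red present on one diagonal of every 2x2
    block (a checkerboard), green present everywhere. At each missing red
    position the horizontal and vertical green gradients are compared and
    the two red neighbours along the flatter direction are averaged.
    """
    red, green = red.astype(np.float64), green.astype(np.float64)
    h, w = red.shape
    yy, xx = np.mgrid[0:h, 0:w]
    have = (yy + xx) % 2 == 0                 # checkerboard of real red samples

    gp = np.pad(green, 1, mode='reflect')     # reflect so edge neighbours are real samples
    rp = np.pad(red,   1, mode='reflect')
    grad_h = np.abs(gp[1:-1, 2:] - gp[1:-1, :-2])   # horizontal green gradient
    grad_v = np.abs(gp[2:, 1:-1] - gp[:-2, 1:-1])   # vertical green gradient
    red_h  = (rp[1:-1, 2:] + rp[1:-1, :-2]) / 2     # left/right red average
    red_v  = (rp[2:, 1:-1] + rp[:-2, 1:-1]) / 2     # up/down red average

    out = red.copy()
    out[~have] = np.where(grad_h <= grad_v, red_h, red_v)[~have]
    return out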

-- hide signature --

. 。o O o 。 . 。o O o 。 . 。o O o 。 .
