Sony A99 (and RX1) raw file issue? An investigative report...

Started Dec 27, 2012 | Discussions
Iliah Borg
Forum Pro • Posts: 16,083
Re: Sony A99 (and RX1) raw file issue? An investigative report...
In reply to Michaels7, Dec 29, 2012

Michaels7 wrote:

From Photoalpha.com

"It took me a few days and some digging to find out that the 14-bit readout on the A99 only applies one shooting mode – single shot. Any other mode you select, including Lo 2.5fps continuous and all multishot or JPEG only modes, uses 12-bit readout from the sensor. This was already documented in the literature about the A99, but what Sony omit to say is that the 14-bit mode causes a noticeable pause between the shot being captured and the restoration of EVF viewing. This pause is around 1/10th of a second longer in single shot mode than the blackout which happens during 2.5fps or the first frame of any faster sequence, and totals 200ms or 1/5th of a second."

Is it by the same gentleman who decides on the accuracy of Sony camera exposures based on the ACR histogram at default settings? If so, his explanations should be taken very cautiously.

crosstype
Regular Member • Posts: 162
Re: Sony A99 (and RX1) raw file scandal? An investigative report...
In reply to Ralf B, 9 months ago

Ralf B wrote:

Eric Perez wrote:

Wow, just read the entire thread. I am glad I didn't go the Sony FF route. I do a lot of underexposing and HDR post work that needs DR. My K5 can shoot 7 FPS at 14 bits without an LPF, not bad for $1300. I find it odd that Sony does not disclose any of this info. Scandal indeed...

According to DXO, DPR and Photoclubalpha, there is no shortage of DR in the a99 at low ISO. Your reply misses the factual point being discussed here.

You have a PM, Eric.


Cheers,
Ralf
www.ralfralph.smugmug.com

Great dynamic range performance does not mean smooth tonal transitions.

crosstype's gear list:
Sony RX1R Nikon D4s Nikon AF-S Nikkor 200mm f/2G ED VR II Carl Zeiss Distagon T* 2,8/15
crosstype
Regular Member • Posts: 162
Re: Sony A99 (and RX1) raw file issue? An investigative report...
In reply to tesilab, 9 months ago

According to diglloyd:

Sony’s marketing material makes a false claim:

The BIONZ® image processor enables ... 14-bit RAW image data recording.

The claim should be understood as “14 bit image data from the sensor processed in full bit resolution then tonally compressed to an 8-bit representation and stored as 8 bits per pixel”. (It is not entirely clear if the pipeline is entirely 14-bit or partially 12-bit).

The actual recorded files are a fixed 8 bits per pixel, by simple math (6024 × 4024 × 1 byte ≈ 24.24 MB; actual file sizes are a fixed 25.2 MB total including overhead). With real 14-bit files (Nikon D800/D800E), file sizes are continuously variable because a lossless compression method is used. The 'lossless' aspect is why they are variable in size; no lossless compression algorithm can ever guarantee a fixed size.

It is not possible to achieve a stored file size of 8 bits per pixel without using a form of lossy compression. That lossy compression is a tone-mapping curve that “sags” in the middle and high key areas.

[Screenshot: Sony RX1 ARW file size — 8 bits per pixel]
The Sony ARW raw files store 8 bits per pixel using a tone curve that is linear for darker tones and non-linear for brighter tones, essentially compressing a wider dynamic range into 8 bits. The 8 bits also involve a delta compression (differences between adjacent values), which for gradual transitions is effectively lossless thanks to the delta approach, and allows more distinct values than the 256 that 8 bits would imply. However, there is no getting around the fact that the approach is, on the whole, a form of lossy compression. Compare that to the Nikon D800/D800E, which records a high-precision 14 bits per pixel.

The 8-bit files of the Sony RX1 do not imply a limited dynamic range (range, and precision within that range, are two distinct ideas).

The Sony RX1 operates at 12 bits internally and is thus capable of recording a wide dynamic range. But since the stored file contains a fixed 8 bits per pixel, there cannot (by simple math) be more than 256 gradations per pixel. Thus one might find that tonal transitions are not as smooth as with a camera that records 14 bits (or 12 bits) per pixel, such as the Nikon D800E or Nikon D600. My assumption is that this would appear mostly in colored gradients: skies, sunsets, water, etc.
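
For anyone who wants to sanity-check the quoted numbers, here is a minimal Python sketch; the 6024 x 4024 dimensions and the 25.2 MB file size are assumptions taken from the quote above rather than independent measurements:

    # Quick check of the arithmetic in the quoted passage: if the RX1 raw dimensions
    # are 6024 x 4024 and the .ARW files are a fixed ~25.2 MB, the stored data works
    # out to roughly 8 bits per pixel. Both numbers come from the quote, not from a
    # measurement made here.

    width, height = 6024, 4024
    file_size_mb = 25.2                     # quoted .ARW size, including metadata/preview overhead

    pixels = width * height                 # ~24.2 million photosites
    payload_mb = pixels / 1_000_000         # size at exactly 1 byte (8 bits) per pixel
    bits_per_pixel = file_size_mb * 1_000_000 * 8 / pixels

    print(f"{pixels:,} pixels -> {payload_mb:.2f} MB at 8 bits/pixel")
    print(f"{file_size_mb} MB file -> {bits_per_pixel:.2f} bits/pixel including overhead")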

crosstype's gear list:
Sony RX1R Nikon D4s Nikon AF-S Nikkor 200mm f/2G ED VR II Carl Zeiss Distagon T* 2,8/15
EdnaBambrick
Regular Member • Posts: 268
Re: Sony A99 (and RX1) raw file issue? An investigative report...
In reply to tesilab, 9 months ago

Sony's compression is rearing its ugly head in A7r files. There are tons of tonal transition issues, artifacts and aliasing.

It's a mystery why Sony (other than rushing the camera to market) would choose not to offer compression-free RAW files, at least as an option.

EdnaBambrick's gear list:
Sony Alpha NEX-7 Carl Zeiss Distagon T* 2,8/21 Carl Zeiss Distagon T* 2/28 Carl Zeiss Distagon T* 1,4/35 Carl Zeiss Planar T* 1,4/50 +6 more
K E Hoffman
Senior Member • Posts: 2,898
Don't Panic and Take More Pictures...
In reply to EdnaBambrick, 9 months ago

EdnaBambrick wrote:

Sony's compression is rearing its ugly head in A7r files. There are tons of tonal transition issues, artifacts and aliasing.

It's a mystery why Sony (other than rushing the camera to market) would choose not to offer compression-free RAW files, at least as an option.

Why no option for uncompressed RAW? Because the files would be gigantic and take forever to record: an uncompressed 36MP file of 14-bit values would be around 70 MB each. So you compress, and even if it is not 100% lossless, keep in mind that the debayering process, followed by cramming it into a JPEG for the web or even a larger color space, is in itself a very lossy process. And the problem people run into in this game is that a 14-bit file is 14 bits of red, 14 bits of blue and 14 bits of green, or 16K cubed, roughly 4 trillion possible RGB color values, which not only can't be seen, but there is not a device on the planet that can render it.

Of course if we are looking at something painted in pure red light only, then you have maybe 16K levels of red to encode. If A7R files are coming up with tonal transition issues, it's because the post-processing software is not working with the full RAW file, or is making mistakes and mapping colors to the wrong color space for its work, etc. And people are looking at them on monitors or printers that have long since tossed out many times more data than the RAW compression ever did. Most monitors and printers we use can't even pull off a full sRGB color space, so the systems will remap colors, etc.

If you think the 1:1 pixel peeping is crazy, start to learn about color spaces, etc., and at some point you will get that in most situations you will never see the difference between a 12-bit file and a 14-bit file, because none of your tools can render the differences.

This panic over compression of 12-bit or even 14-bit RAW files is silly, since 90% of all images are eventually mapped into an 8-bit JPEG with 16 million colors. Though if you are using good software to process and alter the images, having the full data keeps the software from creating artifacts until it is turned into a JPEG, after which it should never be touched again.

Then there is the chance that many people will open a JPEG, tweak it, save it, then open it and tweak it again. The loss in that process, and the chance of artifacts, is terrible. Then many web sites recompress posted images, and can your browser render them well, etc. It's one reason I process in Lightroom: it never changes the RAW file; it renders the changes you make and then converts to JPEG. That also allows work on JPEGs to be better, because I always get a new, one-generation-offset JPEG when I make changes.

Just reading this thread, I suspect part of the problem here is not having any understanding of the compression system, or even of the massive amounts of data in a RAW file that are lost just turning it into an image.

I seem to remember that CRAW was a system of encoding delta values from key pixels in a row. So you can't possibly do a statistical analysis of the values without expanding the RAW into a 50 MB file of values.

If you don't understand the compression, you can't determine the range of values in a file.

For example, take Unicode, the character encoding that all modern OSs use: originally all the glyphs for all the alphabets were encoded in 16-bit words, giving a possible set of 65,536 code points covering ASCII, Thai, even the Inuktitut syllabics used in the most northern part of Canada, etc.

65K characters sounds like more than enough for all the common alphabets, until you get to the ideographic languages of Asia. And some of them keep growing, as new characters are created for names and new words, etc. The possible characters in the variations of Chinese alone could fill multiple 65K pages of characters if you tried to represent every Chinese character, plus the variations of Korean and Japanese that use Chinese-like characters (though both now use a smaller set of characters in most modern writing).

But the Unicode system is only 16 bits, so it must be impossible for it to represent the tens of thousands of additional Asian characters needed, right? And I can prove it by counting the number of different unique values, which will never be more than 65K and always 16 bits, even though we might need 17 or 18 bits to cover everything. Right? Wrong.

Because a whole range of the Unicode space is actually 16-bit value pairs that combine to encode both a new page of 65K glyphs and an offset into that page. That means a file that will never contain more than 65K unique 16-bit values can actually encode much more (currently about 110K glyphs). But if I just looked at the values in the file and didn't know this, Unicode would look incapable of covering the typographic range it does.

http://unicode.org/charts/

http://en.wikipedia.org/wiki/Unicode
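
The mechanism being described here is UTF-16 surrogate pairs: characters beyond the 65,536-value Basic Multilingual Plane are stored as a pair of reserved 16-bit code units. A minimal Python illustration, using only the standard library (the example character is arbitrary):

    ch = "𠜎"                               # a CJK ideograph outside the BMP, code point U+2070E
    units = ch.encode("utf-16-be")           # big-endian UTF-16, no byte-order mark

    high = int.from_bytes(units[0:2], "big") # first 16-bit code unit (high surrogate)
    low = int.from_bytes(units[2:4], "big")  # second 16-bit code unit (low surrogate)

    print(hex(ord(ch)))                      # 0x2070e: the code point needs more than 16 bits
    print(hex(high), hex(low))               # 0xd841 0xdf0e: two 16-bit values encode it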

Earlier in this thread there is talk of a distribution of values, with some ranges having very few. This tells me the compression scheme cannot be quantified by just doing a simple statistical analysis of the values; one must understand how the values interact with each other to get a real read of how many there are.


K.E.H. >> Shooting between raindrops in WA<<

K E Hoffman's gear list:
Canon EOS 450D Sony SLT-A77 Nikon 1 J1 Sony a77 II Sony 70-300mm F4.5-5.6 G SSM +7 more
Allan Olesen
Senior Member • Posts: 2,255
Nothing new discovered here
In reply to tesilab, 9 months ago

This has been known since the a77 was new.

The new Sony compression method is a non-linear representation of the available range of sensor output values. At low raw values, each step in the raw file represents a small change in the sensor output value. At high raw values, each step represents a larger change. So you have the full range from dark to bright, but not the full resolution between values. But as far as I can see, you have already realized this yourself.
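
As a toy illustration of that idea (the square-root curve and the 2048-code budget below are made-up assumptions, not Sony's actual curve), a non-linear mapping keeps the full range while spending fine steps on the dark end and coarse steps on the bright end:

    SENSOR_MAX = 16383   # full scale of a hypothetical 14-bit sensor
    CODES = 2048         # number of stored codes (an 11-bit budget)

    def encode(v):
        """Map a sensor value to a stored code: fine steps in shadows, coarse in highlights."""
        return round((v / SENSOR_MAX) ** 0.5 * (CODES - 1))

    def decode(c):
        """Map a stored code back to an approximate sensor value."""
        return round((c / (CODES - 1)) ** 2 * SENSOR_MAX)

    for v in (10, 100, 1000, 16000):
        c = encode(v)
        print(f"sensor {v:>5} -> code {c:>4} -> back to {decode(c):>5}")
    # Dark values round-trip almost exactly; bright values land on much coarser steps.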

I don't understand why you say that this is not about reducing the size of the file because the file is the same size as Nikon's. If you have a background in digital imaging as you wrote in another post, you must be able to easily understand that any 24 MP 14 bit file of this small size has undergone some kind of compression. So the Nikon files are of course also compressed. And of course the goal is to reduce the file size.

The fact is that a lot of raw files are compressed. Some use lossless compression, and some use lossy compression.

Most of us would probably think "Lossless compression? I want that!". But then you should search Google for "Lightroom corrupted my photos". You will find a lot of threads on different forums, and they have one thing in common: Canon .CR2 raw files. Those files use lossless compression, and the result is that a single bit error can render the photo unusable.

The advantage of using lossy compression instead of lossless is that your files become much more robust to bit errors. Since each pixel (or group of pixels) has the same size in the file (at least with the formats I know), flipping a bit will only affect that pixel, and the raw converter will still know where to find the next pixel in the file. With the type of lossless compression Canon uses, each pixel has a different size in the file, and the raw converter will not know where in the file to find the next pixel until it has correctly decoded the current one. I have explained this in more detail in posts 27 and 28 of a thread in Adobe's forums.

Before the new lossy compression scheme in the a77 and a99 Sony files, Sony used another lossy compression scheme which was really clever. They took a sequence of pixels (16, I think) and found the darkest and the brightest pixel in the sequence. These two pixels were stored with 12 bit precision while the other 14 pixels were stored as a 7 bit number which represented the relative brightness compared to the darkest and brightest pixel. The result was that the compression was in fact lossless in areas with small brightness variation, and only lossy in areas with high brightness variation.

As far as I know, the Nikon compression is also lossy, but I don't know the compression method.

Nordstjernen
Veteran Member • Posts: 6,222
Wrong starting point
In reply to tesilab, 9 months ago

I would recommend starting with how many bits of data the sensor is capable of recording, and then looking at how many bits of data are needed to make a smooth and nice representation of the subject photographed.

Then follow the processing path to find out what data is lost, and try to figure out whether that data is needed to make a 'perfect' end result.

16, 14 and 12 bits are just containers for image data. You don't get more data by putting 10 bits of data into a 16-bit container.

Nordstjernen's gear list:
Sony SLT-A99 Sony Alpha 7
tesilab
Senior Member • Posts: 1,990
Re: Don't Panic and Take More Pictures...
In reply to K E Hoffman, 9 months ago

K E Hoffman wrote:

EdnaBambrick wrote:

Sony's compression is rearing its ugly head in A7r files. There are tons of tonal transition issues, artifacts and aliasing.

It's a mystery why Sony (other than rushing the camera to market) would choose not to offer compression-free RAW files, at least as an option.

Why no option for uncompressed RAW? Because the files would be gigantic and take forever to record: an uncompressed 36MP file of 14-bit values would be around 70 MB each.

Nikon provides options for how big of a raw file you want.

So you compress, and even if it is not 100% lossless, keep in mind that the debayering process, followed by cramming it into a JPEG for the web or even a larger color space, is in itself a very lossy process.

This has nothing to do with raw files. Once you are finished processing an image it is fine to apply whatever compression preserves the visual integrity of the image rendering. But when you are performing a variety of post-processing operations on the raw file, you need the extra precision during the interim steps.

And the problem people run into in this game is that a 14-bit file is 14 bits of red, 14 bits of blue and 14 bits of green, or 16K cubed, roughly 4 trillion possible RGB color values, which not only can't be seen, but there is not a device on the planet that can render it.

The green channel that carries nearly 60% of the luminance data would certainly benefit from more levels.

Of course if we are looking at something painted in pure red light only, then you have maybe 16K levels of red to encode. If A7R files are coming up with tonal transition issues, it's because the post-processing software is not working with the full RAW file, or is making mistakes and mapping colors to the wrong color space for its work, etc.

Saying the PP is not working with the full raw is poppycock. Either it knows how to decode the raw file or it doesn't. Proprietary metadata is one thing; pixel values are something else.

And people are looking at them on monitors or printers that have long since tossed out many times more data than the RAW compression ever did. Most monitors and printers we use can't even pull off a full sRGB color space, so the systems will remap colors, etc.

I produced an image of the sky that was purposely exposed to capture very many levels. There is obvious banding in the gray sky after only a small amount of tonal adjustment.

If you think the 1:1 pixel peeping is crazy, start to learn about color spaces, etc., and at some point you will get that in most situations you will never see the difference between a 12-bit file and a 14-bit file, because none of your tools can render the differences.

This is an issue of interim precision to be able to make tonal adjustments to a file without multiplying errors. No one argues with producing a compressed final result. The image isn't done at the time of capture. If it was, there would be no need for raw files, we'd all be just fine with jpeg.

This panic over compression of 12-bit or even 14-bit RAW files is silly, since 90% of all images are eventually mapped into an 8-bit JPEG with 16 million colors. Though if you are using good software to process and alter the images, having the full data keeps the software from creating artifacts until it is turned into a JPEG, after which it should never be touched again.

See above explanation. I believe LR is reasonable software developed by competent engineers. You may prefer another Raw processor for any particular task--but do you dispute that?

Then there is the chance that many people will open a JPEG, tweak it, save it, then open it and tweak it again. The loss in that process, and the chance of artifacts, is terrible. Then many web sites recompress posted images, and can your browser render them well, etc. It's one reason I process in Lightroom: it never changes the RAW file; it renders the changes you make and then converts to JPEG. That also allows work on JPEGs to be better, because I always get a new, one-generation-offset JPEG when I make changes.

You are making my point. The Sony files appear fragile to me in the midtones when post-processing.

Just reading this thread, I suspect part of the problem here is not having any understanding of the compression system, or even of the massive amounts of data in a RAW file that are lost just turning it into an image.

I have stepped through every line of the publicly available LibRaw ARW code, and know how every bit of the 128 bits of a 16-pixel block is expanded.

I seem to remember that CRAW was a system of encoding delta values from key pixels in a row. So you can't possibly do a statistical analysis of the values without expanding the RAW into a 50 MB file of values.

Welcome to RawDigger. That's exactly what the software does. I started by performing an analysis of the expanded data. No demosaicing involved. Just arrays of R,G,B,G data. I followed that up by reading the code.

If you don't understand the compression, you can't determine the range of values in a file.

Let me explain the Sony compression to you right now. It is actually very clever and relies on a simple visual phenomenon to achieve its encoding: values with strong contrast in close proximity prevent you from appreciating the least significant bits of information. If there is a very slow gradation, no precision is lost, since the 7-bit deltas are sufficient. If the gradation is steeper, so that the deltas would need 8 bits, then discard one bit; if steeper still, discard another bit, and so on:

  • Each channel encodes blocks of 16 pixel values; call them 0-15.
  • Loop through pixels 0 through 15 and find the index of the lowest value.
  • Encode the lowest pixel value in 11 bits, and the index of that value (0-15) in the next 4 bits.
  • Loop through pixels 0 through 15 and find the index of the highest value.
  • In the case where all 16 pixels have the same value, don't choose the same index as the lowest one.
  • Encode the highest pixel value in 11 bits, and its index in the next 4 bits.
  • Subtract the minimum from the maximum value to see how many bits are required to store the remaining values as deltas over the minimum. Use this to determine how many least significant bits will be shifted out (between zero and four bits).
  • Encode the remaining fourteen pixels as 7-bit delta values.

Expansion is simple: read the minimum and maximum 11-bit values and put them in their respective places as determined by the indices. Fill in the missing pixels by reading the 7-bit values and shifting them left by a number of bits determined by the difference between min and max. These values can then be expanded to 12 bits by applying a curve that is also provided in the raw file. (This is why RawDigger shows values in the range 0-4095 rather than 0-2047; their distribution in that space is determined by the curve.)
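
Purely as a sketch of the decode side of that description (the decode_block helper, its argument order and the bit-unpacking details are my own assumptions for illustration; the authoritative bitstream layout is in LibRaw's ARW decoder), it might look something like this in Python:

    def decode_block(vmax, imax, vmin, imin, deltas):
        """Expand one 16-pixel block from its unpacked fields (plain ints here)."""
        # How coarse the 7-bit deltas are depends on the min..max spread.
        spread = vmax - vmin
        shift = 0
        while shift < 4 and (spread >> shift) > 0x7F:   # deltas must fit in 7 bits
            shift += 1

        out = [0] * 16
        out[imax] = vmax
        out[imin] = vmin
        remaining = iter(deltas)
        for i in range(16):
            if i in (imax, imin):
                continue
            out[i] = vmin + (next(remaining) << shift)  # `shift` low bits were discarded
        return out                                      # 11-bit values, before the tone curve

    # Toy round trip: a gentle gradient needs no shift, so it survives exactly;
    # a high-contrast block would come back with its low bits zeroed.
    pixels = list(range(100, 116))
    vmin, vmax = min(pixels), max(pixels)
    imin, imax = pixels.index(vmin), pixels.index(vmax)
    deltas = [p - vmin for i, p in enumerate(pixels) if i not in (imin, imax)]
    assert decode_block(vmax, imax, vmin, imin, deltas) == pixels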

tesilab's gear list:
Sony RX1 Sony Alpha NEX-5 Carl Zeiss Makro-Planar T* 2/100 Sony E 50mm F1.8 OSS Sigma 19mm F2.8 EX DN +11 more
tesilab
Senior Member • Posts: 1,990
Re: Nothing new discovered here
In reply to Allan Olesen, 9 months ago

You are describing the same scheme they are still using. I agree that it is clever. I agree that huge raw files of 14-bit data would contain a lot of noise in the lower bits. Still, I think their compression is too aggressive, and it can certainly be made to produce artifacts, just not in most common scenarios, which is why they get away with it.

I do believe I'd have more latitude for post-processing with fatter, less compressed files. I also believe Sony's marketing materials are simply lying when they claim 14-bit output (as opposed to an internal imaging chain). You can debate whether we need the 14 bits, but it simply isn't available the way Sony's literature claims.

tesilab's gear list:
Sony RX1 Sony Alpha NEX-5 Carl Zeiss Makro-Planar T* 2/100 Sony E 50mm F1.8 OSS Sigma 19mm F2.8 EX DN +11 more
tesilab
Senior Member • Posts: 1,990
Re: Wrong starting point
In reply to Nordstjernen, 9 months ago

Nordstjernen wrote:

I would recommend starting with how many bits of data the sensor is capable of recording, and then looking at how many bits of data are needed to make a smooth and nice representation of the subject photographed.

Then follow the processing path to find out what data is lost, and try to figure out whether that data is needed to make a 'perfect' end result.

16, 14 and 12 bits are just containers for image data. You don't get more data by putting 10 bits of data into a 16-bit container.

I assume Sony's literature mentions 14 bits for a reason. Some marketing guy heard the engineers say it. There's 14 bits in there somewhere, just not in the raw output.

Let's agree that it makes a difference only in a fraction of cases: post-processors trying to get the best tonal mapping for their output. Let Sony show us the bits if we want to burn the space on our hard drives for those images.

tesilab's gear list:
Sony RX1 Sony Alpha NEX-5 Carl Zeiss Makro-Planar T* 2/100 Sony E 50mm F1.8 OSS Sigma 19mm F2.8 EX DN +11 more
K E Hoffman
Senior Member • Posts: 2,898
Re: Don't Panic and Take More Pictures...
In reply to tesilab, 9 months ago

tesilab wrote:

Let me explain the Sony compression to you right now. It is actually very clever and relies on a simple visual phenomenon to achieve its encoding: values with strong contrast in close proximity prevent you from appreciating the least significant bits of information. If there is a very slow gradation, no precision is lost, since the 7-bit deltas are sufficient. If the gradation is steeper, so that the deltas would need 8 bits, then discard one bit; if steeper still, discard another bit, and so on:

  • Each channel encodes blocks of 16 pixel values; call them 0-15.
  • Loop through pixels 0 through 15 and find the index of the lowest value.
  • Encode the lowest pixel value in 11 bits, and the index of that value (0-15) in the next 4 bits.
  • Loop through pixels 0 through 15 and find the index of the highest value.
  • In the case where all 16 pixels have the same value, don't choose the same index as the lowest one.
  • Encode the highest pixel value in 11 bits, and its index in the next 4 bits.
  • Subtract the minimum from the maximum value to see how many bits are required to store the remaining values as deltas over the minimum. Use this to determine how many least significant bits will be shifted out (between zero and four bits).
  • Encode the remaining fourteen pixels as 7-bit delta values.

Expansion is simple: read the minimum and maximum 11-bit values and put them in their respective places as determined by the indices. Fill in the missing pixels by reading the 7-bit values and shifting them left by a number of bits determined by the difference between min and max. These values can then be expanded to 12 bits by applying a curve that is also provided in the raw file. (This is why RawDigger shows values in the range 0-4095 rather than 0-2047; their distribution in that space is determined by the curve.)

That is the best description of the process I have seen. I am close to being able to diagram it, but not quite. Thanks!

I wonder if, with a very carefully set-up test target on the A7R, you could start to find real artifacts from this. Because along with the compression scheme, which should encode data accurately enough for most imaging, the AA filter [not present on the A7R] and the act of debayering should introduce enough smoothing of transitions that it would be very rare to find something it could not encode.

A star test might be good.

I would say that if you can't find it in a properly imaged star field, where the star shapes and light falloff should follow known patterns, then the compression is near perfect. That's how I would test it: how does it encode a point source of light against a dark background?

But in the end, for 99% of all shots there are so many other places in the imaging process where precision is removed from the system to map it into a new format (debayering, color space conversion, JPEG output) that I think one would have to look very hard to find any place where compression of this kind really affects anything. And yet people obsess about it.


K.E.H. >> Shooting between raindrops in WA<<

K E Hoffman's gear list:
Canon EOS 450D Sony SLT-A77 Nikon 1 J1 Sony a77 II Sony 70-300mm F4.5-5.6 G SSM +7 more
tesilab
Senior Member • Posts: 1,990
Re: Don't jump the gun... I'm just asking about the data
In reply to mick232, 9 months ago

This is almost certainly not fixable in firmware.

tesilab's gear list:
Sony RX1 Sony Alpha NEX-5 Carl Zeiss Makro-Planar T* 2/100 Sony E 50mm F1.8 OSS Sigma 19mm F2.8 EX DN +11 more
Allan Olesen
Senior Member • Posts: 2,255
Re: Nothing new discovered here
In reply to tesilab, 9 months ago

tesilab wrote:

You are describing the same scheme they are still using.

I described the scheme Sony use now, the scheme Sony used before, and the scheme Canon use now. Which one are you referring to?

I agree that it is clever.

Hm. I only said that about the scheme Sony used before. So perhaps you are referring to that.

So you are saying that they have put the new compression scheme on top of the old scheme instead of replacing it? Do you have any links?

I am not saying that you are wrong. It is just the first time I have heard of it.

But I have been wondering if something like that was happening, because when I look at clipped areas with RawDigger, there are groups of pixels at the border of the clipped area, where the pixel values are actually somewhat higher than in the middle of the clipped area. I have never counted the number of pixels in these groups, but they could be 1 pixel high and 16 pixels wide like the grouping in the old scheme.

Edit:

I believe you are right. This will also explain two other things I have been wondering about:

1. Only 11 bits (I wrote 12, which was wrong) were used for storing the min. and max. values in the 128-bit sequence, but the old Sony cameras were assumed to be 12-bit.

2. The a77 raw files are only large enough for an average of 8 bits per pixel, but there are more than 256 unique values.

So the old and new schemes are the same, probably just with a more aggressive curve for the 14-bit a99.

Nordstjernen
Veteran Member • Posts: 6,222
Re: Wrong starting point
In reply to tesilab, 9 months ago

tesilab wrote:

Nordstjernen wrote:

I would recommend starting with how many bits of data the sensor is capable of recording, and then looking at how many bits of data are needed to make a smooth and nice representation of the subject photographed.

Then follow the processing path to find out what data is lost, and try to figure out whether that data is needed to make a 'perfect' end result.

16, 14 and 12 bits are just containers for image data. You don't get more data by putting 10 bits of data into a 16-bit container.

I assume Sony's literature mentions 14 bits for a reason. Some marketing guy heard the engineers say it. There's 14 bits in there somewhere, just not in the raw output.

Let's agree that it makes a difference only in a fraction of cases: post-processors trying to get the best tonal mapping for their output. Let Sony show us the bits if we want to burn the space on our hard drives for those images.

There is for sure NOT full 14 bit input from the sensor!

Nordstjernen's gear list:
Sony SLT-A99 Sony Alpha 7
tesilab
Senior Member • Posts: 1,990
Re: Nothing new discovered here
In reply to Allan Olesen, 9 months ago

Allan Olesen wrote:

tesilab wrote:

You are describing the same scheme they are still using.

I described the scheme Sony use now, the scheme Sony used before, and the scheme Canon use now. Which one are you referring to?

Sony's scheme hasn't changed, if that is what they were doing before. I only looked into the raw format for the NEX, A99, and RX1 cameras.

I agree that it is clever.

Hm. I only said that about the scheme Sony used before. So perhaps you are referring to that.

So you are saying that they have put the new compression scheme on top of the old scheme instead of replacing it? Do you have any links?

Nope, same scheme. Try getting the code at https://github.com/LibRaw/LibRaw

...

Edit:

I believe you are right. This will also explain two other things I have been wondering about:

1. Only 11 bits (I wrote 12, which was wrong) were used for storing the min. and max. values in the 128-bit sequence, but the old Sony cameras were assumed to be 12-bit.

Right. 11 bits of unique values, distributed along a curve that fits those values into a 12-bit space.

2. The a77 raw files are only large enough for an average of 8 bits per pixel, but there are more than 256 unique values.

So the old and new schemes are the same, probably just with a more aggressive curve for the 14-bit a99.

The output still covers only a 12-bit space. Any 14-bit goodness in the cameras must be entirely internal and not available as output.

tesilab's gear list:
Sony RX1 Sony Alpha NEX-5 Carl Zeiss Makro-Planar T* 2/100 Sony E 50mm F1.8 OSS Sigma 19mm F2.8 EX DN +11 more
Allan Olesen
Senior Member • Posts: 2,255
Re: Nothing new discovered here
In reply to tesilab, 9 months ago

tesilab wrote:

The output still covers only a 12-bit space. Any 14-bit goodness in the cameras must be entirely internal and not available as output.

You mean the output after applying the curve, right?

Are you saying that the lowest non-zero value (and the value increment in the low end of the range) is approx. 1/4000 of the highest value, instead of approx. 1/16000 as should be expected for a 14 bit file?

This description would fit the output from my a77 as seen in RawDigger. The lowest values are 0, 4, 8, etc., and the highest value is somewhere around 16k. So that is only a 12-bit range which has, for some reason, had all the values multiplied by 4. But this camera is only marketed as 12-bit, so that is OK.
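
Restating that arithmetic as a tiny Python check (these numbers only illustrate the reasoning; they are not measured a77 data):

    codes = [v * 4 for v in range(4096)]            # 0, 4, 8, ... 16380
    print(len(codes), min(codes), max(codes))       # 4096 distinct values spanning 0..16380
    print((len(codes) - 1).bit_length(), "bits of precision presented in a",
          max(codes).bit_length(), "bit numeric range")   # 12 bits of precision in a 14-bit range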
