High res in studio scenes

Started 6 months ago | Discussions
evan ts Forum Member • Posts: 64
Re: High res in studio scenes

knickerhawk wrote:

evan ts wrote:

knickerhawk wrote:

evan ts wrote:

knickerhawk wrote:

evan ts wrote:

There are many algorithms that can combine many low-res images into a single hi-res image.
(This one was announced 15 years ago: https://users.soe.ucsc.edu/~milanfar/publications/journal/SR-challengesIJIST.pdf )

I think that Silkypix just uses a newer and better hi-res algorithm than ACR.

But that's not what's going on at the raw converter stage to a HiRes raw file. The camera internally merges the subframes using a fixed algorithm (perhaps similar to what's discussed in the linked paper) and outputs an ordinary looking Bayer-style raw file, albeit a much larger one constructed from the subsampling of each normal-sized pixel position. The raw converters are not called upon to do anything different with one of these HiRes raws than they do with normal raws.

The hi-res takes 8 Bayer-shots (2 RGB-shots). The key point is, pixel size is still the same as a normal shot. When all shots are stacked together, pixels are overlapped. So we need a good algorithm to separate the information of overlapped pixels.

Yes, that's understood and implicit in my reference to "subsampling of each normal-sized pixel position." What you're not addressing is my point that the "good algorithm" you're referencing is applied in camera during the construction of the Bayer-style raw file, not later in the raw converter. Consider this: When the S1R generates a HiRes image does it output 8 individual/interim raw image files or just one? If it's the former, then you would be correct that the raw converter would have to have a built-in capability of handling the 8 samples per subpixel position. But in fact it's the latter, which means that the camera has already done the heavy algorithmic lifting for merging the subsamples into a specific R,G1,G2 or B value for each of the subpixels. This allows the raw converter of choice to simply see the raw file as a normal Bayer style RGGB raw file.

I think the hi-res raw file just contains 8 copies of RGGB values. (and embedded thumb jpeg)

In which case the HiRes raw file would be approximately 8x the size of a normal S1R image, but it isn't. For instance, the DPR studio scene normal ISO 100 S1R raw image is 67 MB. 67x8=536 MB, which is what we would expect if the 8 sets of RGGB values are maintained in the raw container file. However, the actual size of the DPR ISO 100 S1R HiRes raw files is 337 MB. None of the mFT Panny or Oly HiRes files work the way you're thinking. They work the way I've explained.

8 RGGB shots can be stacked into 2 RGB shots. The size is reduced to 6 times that of a normal raw. Store it in a size-efficient format and subtract the space of the embedded JPEGs. I think 5-6 times the size is reasonable.

If you were right, the hi-res file would have 4x the pixels, with RGB (3 colors) in each pixel. The size should be 12 times.

evan ts's gear list:
Canon EOS M Fujifilm X70 Olympus PEN E-P5 Panasonic Lumix DMC-GM1 Panasonic Lumix DMC-GX85 +8 more
evan ts Forum Member • Posts: 64
Re: High res in studio scenes

knickerhawk wrote:

evan ts wrote:

twilsonstudiolab wrote:

But that's all done in the camera before the RAW is written. Doesn't have anything to do with ACR or Silkypix.

No, the raw file just contains the source data (layer1 and layer2). Just as a Bayer raw needs a demosaicing algorithm, the hi-res raw needs a combining algorithm too.

You can check the m43 hi-res case: http://bit.ly/2UB5QID
The algorithm of the G9 OOC JPEG is obviously better than ACR's.

The G9's in-camera JPEG engine is more aggressive with sharpening and contrast than ACR is at default settings. That's very typical. The same difference is evident in the normal G9 JPEG vs. ACR renderings in the DPR comparison.

No, that's because Pana uses a new demosaicing algorithm in GH5/G9.

knickerhawk Veteran Member • Posts: 6,374
Re: High res in studio scenes

evan ts wrote:

knickerhawk wrote:

evan ts wrote:

knickerhawk wrote:

evan ts wrote:

knickerhawk wrote:

evan ts wrote:

There are many algorithms that can combine many low-res images into a single hi-res image.
(This one was announced 15 years ago: https://users.soe.ucsc.edu/~milanfar/publications/journal/SR-challengesIJIST.pdf )

I think that Silkypix just uses a newer and better hi-res algorithm than ACR.

But that's not what's going on at the raw converter stage to a HiRes raw file. The camera internally merges the subframes using a fixed algorithm (perhaps similar to what's discussed in the linked paper) and outputs an ordinary looking Bayer-style raw file, albeit a much larger one constructed from the subsampling of each normal-sized pixel position. The raw converters are not called upon to do anything different with one of these HiRes raws than they do with normal raws.

The hi-res takes 8 Bayer-shots (2 RGB-shots). The key point is, pixel size is still the same as a normal shot. When all shots are stacked together, pixels are overlapped. So we need a good algorithm to separate the information of overlapped pixels.

Yes, that's understood and implicit in my reference to "subsampling of each normal-sized pixel position." What you're not addressing is my point that the "good algorithm" you're referencing is applied in camera during the construction of the Bayer-style raw file, not later in the raw converter. Consider this: When the S1R generates a HiRes image does it output 8 individual/interim raw image files or just one? If it's the former, then you would be correct that the raw converter would have to have a built-in capability of handling the 8 samples per subpixel position. But in fact it's the latter, which means that the camera has already done the heavy algorithmic lifting for merging the subsamples into a specific R,G1,G2 or B value for each of the subpixels. This allows the raw converter of choice to simply see the raw file as a normal Bayer style RGGB raw file.

I think the hi-res raw file just contains 8 copies of RGGB values. (and embedded thumb jpeg)

In which case the HiRes raw file would be approximately 8x the size of a normal S1R image, but it isn't. For instance, the DPR studio scene normal ISO 100 S1R raw image is 67 MB. 67x8=536 MB, which is what we would expect if the 8 sets of RGGB values are maintained in the raw container file. However, the actual size of the DPR ISO 100 S1R HiRes raw files is 337 MB. None of the mFT Panny or Oly HiRes files work the way you're thinking. They work the way I've explained.

8 RGGB shots can be stacked into 2 RGB shots.

Not without throwing away a LOT of information, which defeats the whole purpose of creating a HiRes image in the first place.

The size is reduced to 6 times that of a normal raw. Store it in a size-efficient format and subtract the space of the embedded JPEGs.

I think 5-6 times the size is reasonable.

If you were right, the hi-res file would have 4x the pixels,

Correct (for the HiRes method used by Panny and Oly)

with RGB (3 colors) in each pixel.

But it's not a TIFF or other RGB format. It's a raw! Therefore, it's either R, G, G1 or B for each individual pixel, instead of R+G+B for each pixel.

The size should be 12 times.
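The file-size arithmetic the two posters are disputing can be put into a quick back-of-envelope sketch (figures taken from the thread; this is an illustration of the hypotheses, not a claim about the actual RW2 layout):

```python
# Back-of-envelope check of the competing size hypotheses.
# Figures from the thread: normal S1R raw ~67 MB, HiRes raw ~337 MB.
normal_mb = 67
hires_mb = 337

eight_copies_mb = 8 * normal_mb      # "8 stored RGGB frames" hypothesis
remosaiced_mb = 4 * normal_mb        # single Bayer file with 4x the pixels, 1 sample per site
full_rgb_mb = 4 * 3 * normal_mb      # demosaiced RGB at 4x the pixels ("12 times")

print(eight_copies_mb, remosaiced_mb, full_rgb_mb)   # 536 268 804
print(round(hires_mb / normal_mb, 2))                # 5.03 -- between the 4x and 8x predictions
```

The observed ~5x sits above the plain-Bayer prediction but well below 8x or 12x, which is the gap the thread goes on to explain.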

twilsonstudiolab Regular Member • Posts: 261
Re: High res in studio scenes

evan ts wrote:

knickerhawk wrote:

evan ts wrote:

knickerhawk wrote:

evan ts wrote:

There are many algorithms that can combine many low-res images into a single hi-res image.
(This one was announced 15 years ago: https://users.soe.ucsc.edu/~milanfar/publications/journal/SR-challengesIJIST.pdf )

I think that Silkypix just uses a newer and better hi-res algorithm than ACR.

But that's not what's going on at the raw converter stage to a HiRes raw file. The camera internally merges the subframes using a fixed algorithm (perhaps similar to what's discussed in the linked paper) and outputs an ordinary looking Bayer-style raw file, albeit a much larger one constructed from the subsampling of each normal-sized pixel position. The raw converters are not called upon to do anything different with one of these HiRes raws than they do with normal raws.

The hi-res takes 8 Bayer-shots (2 RGB-shots). The key point is, pixel size is still the same as a normal shot. When all shots are stacked together, pixels are overlapped. So we need a good algorithm to separate the information of overlapped pixels.

Yes, that's understood and implicit in my reference to "subsampling of each normal-sized pixel position." What you're not addressing is my point that the "good algorithm" you're referencing is applied in camera during the construction of the Bayer-style raw file, not later in the raw converter. Consider this: When the S1R generates a HiRes image does it output 8 individual/interim raw image files or just one? If it's the former, then you would be correct that the raw converter would have to have a built-in capability of handling the 8 samples per subpixel position. But in fact it's the latter, which means that the camera has already done the heavy algorithmic lifting for merging the subsamples into a specific R,G1,G2 or B value for each of the subpixels. This allows the raw converter of choice to simply see the raw file as a normal Bayer style RGGB raw file.

I think the hi-res raw file just contains 8 copies of RGGB values. (and embedded thumb jpeg)

I think it's probably more complicated than that. I think it's very possible that the assembly and 'demosaicing' may be done in camera and baked into the RAW file, leaving delinearization, white balance, and tone & color for the various processing applications to do normally. I put demosaicing in quotes because pixel shifting IS a kind of demosaicing. In 4-step pixel-shifted systems, color information for all 4 channels is obtained directly, so no need for algorithmic demosaicing. I think with 8 exposures, you get effectively 2 layers, like the diagram above, but each layer was never really mosaiced, having gotten its channel information directly. Weaving the two layers together by placing the pixels of one diagonally between the pixels of the other doubles the number of pixels, but doesn't quadruple it because it creates another set of holes to fill. Filling those is easier than demosaicing, though, in that all of the existing pixels have 4 channels of info already. But I'd bet that that is not a process that Panasonic would trust to anyone. My guess is they do all this in camera, and generate a file that is pre-demosaiced, and with 4 times the pixels, but otherwise the same as a normal RW2.
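Tim's "weaving" step can be sketched with two full-RGB layers placed diagonally on a doubled grid (a toy illustration of the geometry, not Panasonic's actual pipeline):

```python
import numpy as np

h, w = 4, 4  # toy layer size
layer1 = np.random.rand(h, w, 3)   # full-RGB layer built from one set of shots
layer2 = np.random.rand(h, w, 3)   # second layer, half-pixel diagonal offset

woven = np.full((2 * h, 2 * w, 3), np.nan)
woven[0::2, 0::2] = layer1         # layer 1 occupies one diagonal set of sites
woven[1::2, 1::2] = layer2         # layer 2 sits diagonally between them

filled = np.count_nonzero(~np.isnan(woven[..., 0]))
holes = np.count_nonzero(np.isnan(woven[..., 0]))
print(filled, holes)   # 32 32 -- pixel count doubles; the other half are holes to fill
```

Exactly as described: weaving doubles the number of real pixels while the grid quadruples, leaving an equal number of holes whose neighbors already carry full color information.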


Tim Wilson
Studio/lab
Chicago

knickerhawk Veteran Member • Posts: 6,374
Re: High res in studio scenes

evan ts wrote:

knickerhawk wrote:

evan ts wrote:

twilsonstudiolab wrote:

But that's all done in the camera before the RAW is written. Doesn't have anything to do with ACR or Silkypix.

No, the raw file just contains the source data (layer1 and layer2). Just as a Bayer raw needs a demosaicing algorithm, the hi-res raw needs a combining algorithm too.

You can check the m43 hi-res case: http://bit.ly/2UB5QID
The algorithm of the G9 OOC JPEG is obviously better than ACR's.

The G9's in-camera JPEG engine is more aggressive with sharpening and contrast than ACR is at default settings. That's very typical. The same difference is evident in the normal G9 JPEG vs. ACR renderings in the DPR comparison.

No, that's because Pana uses a new demosaicing algorithm in GH5/G9.

That reads a lot like marketing BS, but it doesn't rebut my point that the normal G9 JPEG in the DPR studio scene is (just like the G9 HiRes) significantly more detailed/contrasty than the corresponding raw-from-ACR renderings.

evan ts Forum Member • Posts: 64
Re: High res in studio scenes

knickerhawk wrote:

evan ts wrote:

8 RGGB shots can be stacked into 2 RGB shots.

Not without throwing away a LOT of information, which defeats the whole purpose of creating a HiRes image in the first place.

The size is reduced to 6 times that of a normal raw. Store it in a size-efficient format and subtract the space of the embedded JPEGs.

I think 5-6 times the size is reasonable.

If you were right, the hi-res file would have 4x the pixels,

Correct (for the HiRes method used by Panny and Oly)

with RGB (3 colors) in each pixel.

But it's not a TIFF or other RGB format. It's a raw! Therefore, it's either R, G, G1 or B for each individual pixel, instead of R+G+B for each pixel.

The size should be 12 times.

Well, I don't want to argue this with you. You believe your hypothesis, I believe mine.

Maybe another day someone can decode the RW2 format.

knickerhawk Veteran Member • Posts: 6,374
Re: High res in studio scenes

twilsonstudiolab wrote:

evan ts wrote:

knickerhawk wrote:

evan ts wrote:

knickerhawk wrote:

evan ts wrote:

There are many algorithms that can combine many low-res images into a single hi-res image.
(This one was announced 15 years ago: https://users.soe.ucsc.edu/~milanfar/publications/journal/SR-challengesIJIST.pdf )

I think that Silkypix just uses a newer and better hi-res algorithm than ACR.

But that's not what's going on at the raw converter stage to a HiRes raw file. The camera internally merges the subframes using a fixed algorithm (perhaps similar to what's discussed in the linked paper) and outputs an ordinary looking Bayer-style raw file, albeit a much larger one constructed from the subsampling of each normal-sized pixel position. The raw converters are not called upon to do anything different with one of these HiRes raws than they do with normal raws.

The hi-res takes 8 Bayer-shots (2 RGB-shots). The key point is, pixel size is still the same as a normal shot. When all shots are stacked together, pixels are overlapped. So we need a good algorithm to separate the information of overlapped pixels.

Yes, that's understood and implicit in my reference to "subsampling of each normal-sized pixel position." What you're not addressing is my point that the "good algorithm" you're referencing is applied in camera during the construction of the Bayer-style raw file, not later in the raw converter. Consider this: When the S1R generates a HiRes image does it output 8 individual/interim raw image files or just one? If it's the former, then you would be correct that the raw converter would have to have a built-in capability of handling the 8 samples per subpixel position. But in fact it's the latter, which means that the camera has already done the heavy algorithmic lifting for merging the subsamples into a specific R,G1,G2 or B value for each of the subpixels. This allows the raw converter of choice to simply see the raw file as a normal Bayer style RGGB raw file.

I think the hi-res raw file just contains 8 copies of RGGB values. (and embedded thumb jpeg)

I think it's probably more complicated than that. I think it's very possible that the assembly and 'demosaicing' may be done in camera and baked into the RAW file, leaving delinearization, white balance, and tone & color for the various processing applications to do normally. I put demosaicing in quotes because pixel shifting IS a kind of demosaicing. In 4-step pixel-shifted systems, color information for all 4 channels is obtained directly, so no need for algorithmic demosaicing. I think with 8 exposures, you get effectively 2 layers, like the diagram above, but each layer was never really mosaiced, having gotten its channel information directly. Weaving the two layers together by placing the pixels of one diagonally between the pixels of the other doubles the number of pixels, but doesn't quadruple it because it creates another set of holes to fill. Filling those is easier than demosaicing, though, in that all of the existing pixels have 4 channels of info already. But I'd bet that that is not a process that Panasonic would trust to anyone. My guess is they do all this in camera, and generate a file that is pre-demosaiced, and with 4 times the pixels, but otherwise the same as a normal RW2.

When Oly came out with its original HiRes solution, a number of us spent a lot of time and effort examining what was going on under the covers. It turns out that Oly actually demosaics each of the 8 interim images and then remosaics into the final raw, probably with some differences as to how it applies the 8 samples per subpixel based on the "raw" color of the output subpixel. This trick reduces the need to be absolutely spot on with sensor repositioning with each shot. I suspect that Panny does something similar, but there are a few interesting differences: Panny HiRes files produce artifacts that don't seem to be present in Oly HiRes files, while Oly HiRes images are noticeably less sharp straight from the raw conversion and require added sharpening to achieve the same sharpness.
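The demosaic-then-remosaic step described above can be sketched in a few lines (a toy, assuming a plain RGGB output pattern; the cameras' actual per-site weighting is not public):

```python
import numpy as np

def remosaic(rgb):
    """Sample an RGGB mosaic back out of a full-RGB image: keep only the
    channel that belongs to each photosite's position in the Bayer grid."""
    h, w, _ = rgb.shape
    mosaic = np.empty((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G1 sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G2 sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B sites
    return mosaic

# A flat test image: R=0.1, G=0.5, B=0.9 everywhere.
rgb = np.empty((4, 4, 3))
rgb[..., 0], rgb[..., 1], rgb[..., 2] = 0.1, 0.5, 0.9
m = remosaic(rgb)
print(m[0, 0], m[0, 1], m[1, 1])   # 0.1 0.5 0.9
```

Because every output site had full RGB available before remosaicing, small registration errors between shots can be absorbed during the demosaic stage, which is the tolerance benefit described above.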

knickerhawk Veteran Member • Posts: 6,374
Re: High res in studio scenes

evan ts wrote:

knickerhawk wrote:

evan ts wrote:

8 RGGB shots can be stacked into 2 RGB shots.

Not without throwing away a LOT of information, which defeats the whole purpose of creating a HiRes image in the first place.

The size is reduced to 6 times that of a normal raw. Store it in a size-efficient format and subtract the space of the embedded JPEGs.

I think 5-6 times the size is reasonable.

If you were right, the hi-res file would have 4x the pixels,

Correct (for the HiRes method used by Panny and Oly)

with RGB (3 colors) in each pixel.

But it's not a TIFF or other RGB format. It's a raw! Therefore, it's either R, G, G1 or B for each individual pixel, instead of R+G+B for each pixel.

The size should be 12 times.

Well, I don't want to argue this with you. You believe your hypothesis, I believe mine.

Maybe another day someone can decode the RW2 format.

Might be worth reaching out to Iliah Borg. He and his LibRaw/RawDigger/FastRawViewer colleagues might have already done this.

evan ts Forum Member • Posts: 64
Re: High res in studio scenes

knickerhawk wrote:

evan ts wrote:

knickerhawk wrote:

evan ts wrote:

twilsonstudiolab wrote:

But that's all done in the camera before the RAW is written. Doesn't have anything to do with ACR or Silkypix.

No, the raw file just contains the source data (layer1 and layer2). Just as a Bayer raw needs a demosaicing algorithm, the hi-res raw needs a combining algorithm too.

You can check the m43 hi-res case: http://bit.ly/2UB5QID
The algorithm of the G9 OOC JPEG is obviously better than ACR's.

The G9's in-camera JPEG engine is more aggressive with sharpening and contrast than ACR is at default settings. That's very typical. The same difference is evident in the normal G9 JPEG vs. ACR renderings in the DPR comparison.

No, that's because Pana uses a new demosaicing algorithm in GH5/G9.

That reads a lot like marketing BS, but it doesn't rebut my point that the normal G9 JPEG in the DPR studio scene is (just like the G9 HiRes) significantly more detailed/contrasty than the corresponding raw-from-ACR renderings.

At least there is no moiré or false color in the OOC JPEG.

twilsonstudiolab Regular Member • Posts: 261
Re: High res in studio scenes

knickerhawk wrote:

twilsonstudiolab wrote:

evan ts wrote:

knickerhawk wrote:

evan ts wrote:

knickerhawk wrote:

evan ts wrote:

There are many algorithms that can combine many low-res images into a single hi-res image.
(This one was announced 15 years ago: https://users.soe.ucsc.edu/~milanfar/publications/journal/SR-challengesIJIST.pdf )

I think that Silkypix just uses a newer and better hi-res algorithm than ACR.

But that's not what's going on at the raw converter stage to a HiRes raw file. The camera internally merges the subframes using a fixed algorithm (perhaps similar to what's discussed in the linked paper) and outputs an ordinary looking Bayer-style raw file, albeit a much larger one constructed from the subsampling of each normal-sized pixel position. The raw converters are not called upon to do anything different with one of these HiRes raws than they do with normal raws.

The hi-res takes 8 Bayer-shots (2 RGB-shots). The key point is, pixel size is still the same as a normal shot. When all shots are stacked together, pixels are overlapped. So we need a good algorithm to separate the information of overlapped pixels.

Yes, that's understood and implicit in my reference to "subsampling of each normal-sized pixel position." What you're not addressing is my point that the "good algorithm" you're referencing is applied in camera during the construction of the Bayer-style raw file, not later in the raw converter. Consider this: When the S1R generates a HiRes image does it output 8 individual/interim raw image files or just one? If it's the former, then you would be correct that the raw converter would have to have a built-in capability of handling the 8 samples per subpixel position. But in fact it's the latter, which means that the camera has already done the heavy algorithmic lifting for merging the subsamples into a specific R,G1,G2 or B value for each of the subpixels. This allows the raw converter of choice to simply see the raw file as a normal Bayer style RGGB raw file.

I think the hi-res raw file just contains 8 copies of RGGB values. (and embedded thumb jpeg)

I think it's probably more complicated than that. I think it's very possible that the assembly and 'demosaicing' may be done in camera and baked into the RAW file, leaving delinearization, white balance, and tone & color for the various processing applications to do normally. I put demosaicing in quotes because pixel shifting IS a kind of demosaicing. In 4-step pixel-shifted systems, color information for all 4 channels is obtained directly, so no need for algorithmic demosaicing. I think with 8 exposures, you get effectively 2 layers, like the diagram above, but each layer was never really mosaiced, having gotten its channel information directly. Weaving the two layers together by placing the pixels of one diagonally between the pixels of the other doubles the number of pixels, but doesn't quadruple it because it creates another set of holes to fill. Filling those is easier than demosaicing, though, in that all of the existing pixels have 4 channels of info already. But I'd bet that that is not a process that Panasonic would trust to anyone. My guess is they do all this in camera, and generate a file that is pre-demosaiced, and with 4 times the pixels, but otherwise the same as a normal RW2.

When Oly came out with its original HiRes solution, a number of us spent a lot of time and effort examining what was going on under the covers. It turns out that Oly actually demosaics each of the 8 interim images and then remosaics into the final raw, probably with some differences as to how it applies the 8 samples per subpixel based on the "raw" color of the output subpixel. This trick reduces the need to be absolutely spot on with sensor repositioning with each shot. I suspect that Panny does something similar, but there are a few interesting differences: Panny HiRes files produce artifacts that don't seem to be present in Oly HiRes files, while Oly HiRes images are noticeably less sharp straight from the raw conversion and require added sharpening to achieve the same sharpness.

Remosaicing might also explain the file size inconsistency in my theory.


Tim Wilson
Studio/lab
Chicago

Iliah Borg Forum Pro • Posts: 26,078
Re: High res in studio scenes

evan ts wrote:

Maybe another day someone can decode the RW2 format.

This should do the trick

https://www.rawdigger.com/news/rawdigger-1-2-27-beta-panasonic-S1R

The Davinator Forum Pro • Posts: 22,577
Re: High res in studio scenes

Iliah Borg wrote:

evan ts wrote:

Maybe another day someone can decode the RW2 format.

This should do the trick

https://www.rawdigger.com/news/rawdigger-1-2-27-beta-panasonic-S1R

As always...thank you

The Davinator's gear list:
Canon EOS D30 Canon EOS 10D Nikon D2X Fujifilm X-Pro1 Fujifilm X-T1 +17 more
Iliah Borg Forum Pro • Posts: 26,078
Re: High res in studio scenes

knickerhawk wrote:

But it's not a TIFF or other RGB format. It's a raw!

Technically, raw can be RGB for each pixel.

it's either R, G, G1 or B for each individual pixel

Yes, that's what they are in this case, for Panasonic S1 and S1R hires shots published by DPR Team.

Iliah Borg Forum Pro • Posts: 26,078
Always a pleasure -=nt=-

The Davinator wrote:

Iliah Borg wrote:

evan ts wrote:

Maybe another day someone can decode the RW2 format.

This should do the trick

https://www.rawdigger.com/news/rawdigger-1-2-27-beta-panasonic-S1R

As always...thank you

knickerhawk Veteran Member • Posts: 6,374
Re: High res in studio scenes

Iliah Borg wrote:

knickerhawk wrote:

But it's not a TIFF or other RGB format. It's a raw!

Technically, raw can be RGB for each pixel.

Foveon raws, for instance? Would you also include linear DNGs in this category?

it's either R, G, G1 or B for each individual pixel

Yes, that's what they are in this case, for Panasonic S1 and S1R hires shots published by DPR Team.

Thanks for that confirmation and also for the link to the RawDigger beta that supports the new Pannys.

Iliah Borg Forum Pro • Posts: 26,078
Re: High res in studio scenes

knickerhawk wrote:

Iliah Borg wrote:

knickerhawk wrote:

But it's not a TIFF or other RGB format. It's a raw!

Technically, raw can be RGB for each pixel.

Foveon raws, for instance? Would you also include linear DNGs in this category?

Yes. There are also some technical cameras and scanning backs that record full-colour RGB, as well as hi-res modes in some cameras.

it's either R, G, G1 or B for each individual pixel

Yes, that's what they are in this case, for Panasonic S1 and S1R hires shots published by DPR Team.

Thanks for that confirmation and also for the link to the RawDigger beta that supports the new Pannys.

evan ts Forum Member • Posts: 64
Re: High res in studio scenes

Iliah Borg wrote:

knickerhawk wrote:

But it's not a TIFF or other RGB format. It's a raw!

Technically, raw can be RGB for each pixel.

it's either R, G, G1 or B for each individual pixel

Yes, that's what they are in this case, for Panasonic S1 and S1R hires shots published by DPR Team.

Okay, I'm starting to believe you guys.
I have another question. Pana's hi-res raw uses the same Bayer pattern, so it should be around 4 times the size of a normal raw. However, the S1/S1R files are both 5+ times the size. What information is contained in the extra data?

evan ts Forum Member • Posts: 64
Re: High res in studio scenes

knickerhawk wrote:

evan ts wrote:

The hi-res takes 8 Bayer-shots (2 RGB-shots). The key point is, pixel size is still the same as a normal shot. When all shots are stacked together, pixels are overlapped. So we need a good algorithm to separate the information of overlapped pixels.

Yes, that's understood and implicit in my reference to "subsampling of each normal-sized pixel position." What you're not addressing is my point that the "good algorithm" you're referencing is applied in camera during the construction of the Bayer-style raw file, not later in the raw converter. Consider this: When the S1R generates a HiRes image does it output 8 individual/interim raw image files or just one? If it's the former, then you would be correct that the raw converter would have to have a built-in capability of handling the 8 samples per subpixel position. But in fact it's the latter, which means that the camera has already done the heavy algorithmic lifting for merging the subsamples into a specific R,G1,G2 or B value for each of the subpixels. This allows the raw converter of choice to simply see the raw file as a normal Bayer style RGGB raw file.

What is a good hi-res combination algorithm?
In theory, the yellow cell is the intersection of the two low-res pixels. However, the camera actually generates the union of the two low-res pixels. I think ACR just takes the union value to produce the hi-res image. That's why it looks so blurry.

If we want to get a really crisp, detailed hi-res image, sharpening is not a good choice. The better choice is to exclude the green area from each yellow cell.

To achieve that, we have to process the data of at least the 4 pixels of layer1 and the 4 pixels of layer2 closest to the yellow cell. Of course, processing 9+9 or 16+16 pixels will be more precise. More source data means higher precision, but also more complex calculation and lower efficiency.
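The idea of separating overlapped pixels using neighboring samples can be illustrated with a 1-D toy (my own sketch, not any converter's actual algorithm): each coarse pixel averages two adjacent fine cells, and two half-shifted exposures give a linear system that is still underdetermined at the edges, which is exactly why using more neighbors gives more precision.

```python
import numpy as np

fine = np.array([1., 5., 2., 8., 3., 7., 4., 6.])   # unknown fine detail (8 cells)
n = fine.size

# Two half-pixel-shifted coarse exposures: each coarse pixel is the mean
# ("union") of two overlapping fine cells.
shot_a = (fine[0::2] + fine[1::2]) / 2      # covers cells (0,1), (2,3), ...
shot_b = (fine[1:-1:2] + fine[2::2]) / 2    # shifted: covers cells (1,2), (3,4), ...

rows, meas = [], []
for i, v in enumerate(shot_a):
    r = np.zeros(n); r[2 * i] = r[2 * i + 1] = 0.5
    rows.append(r); meas.append(v)
for i, v in enumerate(shot_b):
    r = np.zeros(n); r[2 * i + 1] = r[2 * i + 2] = 0.5
    rows.append(r); meas.append(v)

A, b = np.array(rows), np.array(meas)
# 7 equations, 8 unknowns: underdetermined, so priors or more neighbors are
# needed for a unique answer; least squares picks the minimum-norm estimate.
est, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(A @ est, b))   # True: the estimate reproduces every coarse sample
```

The estimate matches all the overlapped measurements but is not unique, which mirrors the point above: larger neighborhoods (9+9, 16+16) constrain the solution better at higher computational cost.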

Sorry, English is not my mother tongue. I don't know if I described it clearly enough above.

Iliah Borg Forum Pro • Posts: 26,078
Re: High res in studio scenes

evan ts wrote:

Iliah Borg wrote:

knickerhawk wrote:

But it's not a TIFF or other RGB format. It's a raw!

Technically, raw can be RGB for each pixel.

it's either R, G, G1 or B for each individual pixel

Yes, that's what they are in this case, for Panasonic S1 and S1R hires shots published by DPR Team.

Okay, I'm starting to believe you guys.

You can open a highres file in RawDigger, export it as a raw composite single layer (that's essentially a pixel map), open the resulting TIFF in Photoshop, zoom in, and see the Bayer pattern.
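The raw-composite view is easy to emulate: place each raw sample into the RGB channel matching its Bayer position, so the RGGB grid becomes visible when you zoom in (a sketch of the idea, not RawDigger's actual export code):

```python
import numpy as np

def bayer_composite(mosaic):
    """Visualize a Bayer mosaic: each photosite lights up only its own channel."""
    h, w = mosaic.shape
    out = np.zeros((h, w, 3), dtype=mosaic.dtype)
    out[0::2, 0::2, 0] = mosaic[0::2, 0::2]   # R sites -> red channel
    out[0::2, 1::2, 1] = mosaic[0::2, 1::2]   # G1 sites -> green channel
    out[1::2, 0::2, 1] = mosaic[1::2, 0::2]   # G2 sites -> green channel
    out[1::2, 1::2, 2] = mosaic[1::2, 1::2]   # B sites -> blue channel
    return out

comp = bayer_composite(np.ones((4, 4)))
print(comp[0, 0], comp[1, 1])   # [1. 0. 0.] [0. 0. 1.]
```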

I have another question. Pana's hi-res raw uses the same Bayer pattern, so it should be around 4 times the size of a normal raw. However, the S1/S1R files are both 5+ times the size. What information is contained in the extra data?

Regular raws are lossy-compressed; hires raws are lossless.

Before anybody starts with the sky-is-falling, the compression Panasonic is using (1 delta scaling value is shared by 3 pixels, which in some extreme cases may result in artifacts on very small contrast details, 1 or 2 pixels in size, if only those are resolved with high contrast) is, in our opinion, still visually lossless even after heavy editing. The compression reduces the file size by about 20%.
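A toy illustration of why one scale value shared across a 3-pixel group is lossless for smooth data but can clip tiny high-contrast details (my own simplified scheme, not Panasonic's actual bitstream):

```python
def encode_group(p0, p1, p2, mantissa_bits=10):
    """Share one scale (bit shift) across 3 pixels; drop low bits only when
    the brightest pixel in the group forces a nonzero shift."""
    shift = max(max(p0, p1, p2).bit_length() - mantissa_bits, 0)
    return shift, [p >> shift for p in (p0, p1, p2)]

def decode_group(shift, mantissas):
    return [m << shift for m in mantissas]

# A smooth group round-trips exactly...
print(decode_group(*encode_group(500, 512, 498)))     # [500, 512, 498]
# ...but a 1-pixel bright detail forces a shift that crushes its dim neighbor.
print(decode_group(*encode_group(16000, 5, 15900)))   # [16000, 0, 15888]
```

The second group shows the failure mode described above: loss appears only on 1-2 pixel details that combine extreme contrast within one group.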

evan ts Forum Member • Posts: 64
Re: High res in studio scenes

Iliah Borg wrote:

evan ts wrote:

Iliah Borg wrote:

knickerhawk wrote:

But it's not a TIFF or other RGB format. It's a raw!

Technically, raw can be RGB for each pixel.

it's either R, G, G1 or B for each individual pixel

Yes, that's what they are in this case, for Panasonic S1 and S1R hires shots published by DPR Team.

Okay, I'm starting to believe you guys.

You can open a highres file in RawDigger, export it as raw composite single layer (that's essentially a pixel map), open the resulting TIFF in Photoshop, zoom in and see the Bayer pattern.

I have another question. Pana's hi-res raw uses the same Bayer pattern, so it should be around 4 times the size of a normal raw. However, the S1/S1R files are both 5+ times the size. What information is contained in the extra data?

Regular raws are lossy-compressed; hires raws are lossless.

Before anybody starts with the sky-is-falling, the compression Panasonic is using (1 delta scaling value is shared by 3 pixels, which in some extreme cases may result in artifacts on very small contrast details, 1 or 2 pixels in size, if only those are resolved with high contrast) is, in our opinion, still visually lossless even after heavy editing. The compression reduces the file size by about 20%.

Okay, I see. Thank you!
