High res in studio scenes

Started 5 months ago | Discussions thread
twilsonstudiolab Regular Member • Posts: 261
Re: High res in studio scenes

evan ts wrote:

knickerhawk wrote:

evan ts wrote:

knickerhawk wrote:

evan ts wrote:

There are many algorithms that can combine multiple low-res images into a single hi-res image.
(This one was announced 15 years ago: https://users.soe.ucsc.edu/~milanfar/publications/journal/SR-challengesIJIST.pdf )
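To make the idea concrete, here is a minimal "shift-and-add" sketch in Python, one of the simplest multi-frame super-resolution schemes. This is my own toy example, not the specific algorithm from the linked paper: it assumes the sub-pixel shifts are known exactly and simply interleaves the samples of each low-res frame onto a finer grid.

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Toy multi-frame super-resolution by interleaving.

    frames: list of HxW low-res arrays.
    shifts: (dy, dx) offsets in low-res pixels, multiples of 1/factor.
    Returns a (H*factor) x (W*factor) high-res image.
    """
    h, w = frames[0].shape
    hi = np.zeros((h * factor, w * factor))
    count = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, shifts):
        # Each frame's samples land on one phase of the fine grid.
        oy, ox = int(round(dy * factor)), int(round(dx * factor))
        hi[oy::factor, ox::factor] += frame
        count[oy::factor, ox::factor] += 1
    return hi / np.maximum(count, 1)

# Synthetic check: sample a known hi-res image at 4 half-pixel-shifted
# positions, then reconstruct it exactly from the low-res frames.
truth = np.arange(64, dtype=float).reshape(8, 8)
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
frames = [truth[int(dy * 2)::2, int(dx * 2)::2] for dy, dx in shifts]
restored = shift_and_add(frames, shifts)
```

Real algorithms have to estimate the shifts and cope with noise and motion, which is where the hard work in the literature goes; this sketch only shows the interleaving idea.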

I think that Silkypix just uses a newer and better hi-res algorithm than ACR.

But that's not what happens at the raw-converter stage with a HiRes raw file. The camera internally merges the subframes using a fixed algorithm (perhaps similar to what's discussed in the linked paper) and outputs an ordinary-looking Bayer-style raw file, albeit a much larger one constructed from the subsampling of each normal-sized pixel position. The raw converters are not called upon to do anything different with one of these HiRes raws than they do with normal raws.

The hi-res mode takes 8 Bayer shots (equivalent to 2 full-RGB sets). The key point is that the pixel size is still the same as in a normal shot. When all the shots are stacked together, the pixels overlap, so we need a good algorithm to separate the information in the overlapping pixels.
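As a toy illustration of why shifted Bayer exposures yield direct color information (my own Python sketch under idealized, noiseless assumptions, not any camera's actual firmware): with an RGGB pattern, four one-pixel shifts put every filter color over every scene point, so each site ends up with all channels and no demosaicing is needed.

```python
import numpy as np

H, W = 4, 4
rng = np.random.default_rng(0)
scene = rng.random((H, W, 3))      # ground-truth RGB scene (toy data)
CFA = [[0, 1], [1, 2]]             # RGGB: channel index per CFA cell

# Four exposures; in each, the CFA is effectively shifted by (dy, dx),
# so every scene point is eventually sampled through every filter.
full = np.zeros((H, W, 3))
gsum = np.zeros((H, W))
gcount = np.zeros((H, W))
for dy in (0, 1):
    for dx in (0, 1):
        for y in range(H):
            for x in range(W):
                ch = CFA[(y + dy) % 2][(x + dx) % 2]
                v = scene[y, x, ch]      # value the sensor records here
                if ch == 1:              # green is sampled twice: average
                    gsum[y, x] += v
                    gcount[y, x] += 1
                else:
                    full[y, x, ch] = v
full[:, :, 1] = gsum / gcount
```

In this noiseless toy, `full` reproduces `scene` exactly; with 8 shots and additional half-pixel shifts, the samples overlap spatially, which is where the separation algorithm mentioned above comes in.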

Yes, that's understood and implicit in my reference to "subsampling of each normal-sized pixel position." What you're not addressing is my point that the "good algorithm" you're referencing is applied in camera during the construction of the Bayer-style raw file, not later in the raw converter. Consider this: when the S1R generates a HiRes image, does it output 8 individual/interim raw image files or just one? If it's the former, then you would be correct that the raw converter would need a built-in capability to handle the 8 samples per subpixel position. But in fact it's the latter, which means the camera has already done the heavy algorithmic lifting of merging the subsamples into a specific R, G1, G2, or B value for each of the subpixels. This allows the raw converter of choice to simply treat the file as a normal Bayer-style RGGB raw.

I think the hi-res raw file just contains 8 sets of RGGB values (plus an embedded thumbnail JPEG).

I think it's probably more complicated than that. It's very possible that the assembly and 'demosaicing' are done in camera and baked into the raw file, leaving delinearization, white balance, and tone and color for the various processing applications to handle normally. I put demosaicing in quotes because pixel shifting IS a kind of demosaicing. In 4-step pixel-shifted systems, color information for all 4 channels is obtained directly, so there's no need for algorithmic demosaicing.

With 8 exposures, I think you effectively get 2 layers, like the diagram above, but each layer was never really mosaiced, having gotten its channel information directly. Weaving the two layers together by placing the pixels of one diagonally between the pixels of the other doubles the number of pixels, but doesn't quadruple it, because it creates another set of holes to fill. Filling those is easier than demosaicing, though, in that all of the existing pixels already have 4 channels of info.

But I'd bet that is not a process Panasonic would trust to anyone else. My guess is they do all this in camera and generate a file that is pre-demosaiced, with 4 times the pixels, but otherwise the same as a normal RW2.
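The weaving-and-hole-filling step described above can be sketched in Python. This is my own guess at the geometry, not Panasonic's actual pipeline: two full-color layers are interleaved diagonally on a 2x grid, and each remaining hole is filled from its horizontal and vertical neighbors, all of which already carry full RGB.

```python
import numpy as np

def weave(layer_a, layer_b):
    """Weave two HxWx3 full-color layers, with layer_b offset half a
    pixel diagonally, into a 2Hx2Wx3 image (quincunx interleave)."""
    h, w, _ = layer_a.shape
    hi = np.full((2 * h, 2 * w, 3), np.nan)
    hi[0::2, 0::2] = layer_a      # layer A on "even" sites
    hi[1::2, 1::2] = layer_b      # layer B diagonally between them
    # The remaining holes sit between filled sites: each hole's
    # up/down/left/right neighbors already have full RGB, so average
    # them (edge padding replicates the border; nanmean skips NaNs).
    pad = np.pad(hi, ((1, 1), (1, 1), (0, 0)), mode="edge")
    neighbors = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1],
                          pad[1:-1, :-2], pad[1:-1, 2:]])
    fill = np.nanmean(neighbors, axis=0)
    holes = np.isnan(hi)
    hi[holes] = fill[holes]
    return hi

# Flat-field sanity check: two uniform layers weave into a uniform
# image with 4x the pixel count.
a = np.full((2, 2, 3), 0.25)
b = np.full((2, 2, 3), 0.25)
woven = weave(a, b)
```

Note how this matches the point above: the interleave doubles the pixel count, and the hole fill accounts for the rest of the 4x total, but with far less guesswork than Bayer demosaicing since every known pixel is already full-color.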


Tim Wilson
