Pentax K-1 pixel shift

Iliah Borg

Please update your version with the new beta:

Windows/x64: http://updates.rawdigger.com/data/beta/RawDigger-1.2.11.476-x64-Setup.exe

Windows/x32: http://updates.rawdigger.com/data/beta/RawDigger-1.2.11.476-Setup.exe

OS X: http://updates.rawdigger.com/data/beta/RawDigger-1.2.11.476.dmg

Unfortunately, in RawDigger build 475 the assembly from sub-frames worked incorrectly for PEF files (DNG files were processed correctly). Sorry about that; it is fixed now.

Many thanks to Nick and Jack for forcing us to look into it.
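For anyone wondering what the assembly involves: in numpy terms the merge amounts to something like the sketch below. It is a minimal illustration only, assuming an RGGB mosaic, a particular one-photosite shift order, and sub-frames already registered to a common grid - the K-1's actual ordering and normalization differ in detail.

    import numpy as np

    def merge_pixel_shift(subframes):
        """Combine 4 pixel-shift Bayer sub-frames (H x W, even H and W)
        into full RGB with no demosaicing: every position was sampled
        once under R, twice under G, and once under B."""
        h, w = subframes[0].shape
        rgb = np.zeros((h, w, 3), np.float32)
        # Assumed shift order; R=0, G=1, B=2 in an RGGB pattern.
        shifts = [(0, 0), (0, 1), (1, 1), (1, 0)]
        colors = [[0, 1], [1, 2]]
        for frame, (dy, dx) in zip(subframes, shifts):
            for y in range(2):
                for x in range(2):
                    c = colors[(y + dy) % 2][(x + dx) % 2]
                    rgb[y::2, x::2, c] += frame[y::2, x::2]
        rgb[..., 1] /= 2  # green was sampled twice per position
        return rgb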
 
I loaded the OS X beta, and the DPR studio scene PEF for the K-1 looks great now. There's no more movement when cycling between channels, either (i.e., it's now stable, like the RGB rendering from ACR).
The check was: export 4 channels from all 4 sub-frames, assemble them manually in Photoshop, and subtract the result from the RawDigger export in "merge" mode. The resulting image is now void. One of our colleagues sent us pairs of PEF/DNG files, which made things easier.
Glad I was of some assistance.
We are very grateful.

--
http://www.libraw.org/
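For anyone who wants to repeat that subtraction check numerically rather than in Photoshop, here is a minimal sketch, assuming both the manual assembly and the RawDigger "merge" export were saved as 16-bit TIFFs (the file names are hypothetical):

    import numpy as np
    import imageio.v2 as imageio

    manual = imageio.imread('manual_assembly.tiff').astype(np.int32)
    merged = imageio.imread('rawdigger_merge.tiff').astype(np.int32)

    diff = np.abs(manual - merged)
    print('max difference:', diff.max())  # 0 everywhere if the merge is exact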
 
A clever "proof".
 
Thanks, Iliah; great response from the RawDigger team.

So back to the question: how much of a benefit does Pixel Shift technology offer for the quadrupled storage space and 3 s+ of extra processing time it demands? Here is one example of shifted vs. unshifted from the DPR studio scene's 'lowlight' raw captures: which is shifted and which is demosaiced?

For those interested in testing their forensic abilities, click on 'Original Size' below and view the image at 100%, making sure the browser is not zoomed (Ctrl+0 typically resets it). Or download it and look at it in the trusted viewer of your choice.

[Image: shifted vs. unshifted comparison crops]

And here is the 'aliasing hell' portion of DPR's studio scene. One is the K-1 pixel-shifted image (IMGP0551.PEF) as-is in linear raw, and the other two are the unshifted Bayer array (IMGP0431.DNG) converted by state-of-the-art algorithms, one blind, one not. They come from different programs with everything off, except that in one case I was not able to stop the first color correction from raw. Which one is the sharpest?

[Image: three 'aliasing hell' crops]

No processing, no sharpening, no noise reduction. There is a difference and I can see it*.

Jack

*And more
 
Jack, are the spools from RawDigger? Do we know the conversion tools you used are extracting raw data correctly? Have you tried SILKYPIX Developer Studio for PENTAX / Digital Camera Utility?
 
Here is one example of shifted vs. unshifted from the DPR studio scene's 'lowlight' raw captures: which is shifted and which is demosaiced?
The right crop is visibly superior. Virtually no artifacts and slightly more textural detail. The right absorbs smart sharpening in PS nicely. The left reveals itself immediately upon the application of sharpening. The blue channel of the left is completely f'd.
And here is the 'aliasing hell' portion of DPR's studio scene. [...] Which one is the sharpest?
The left crop is crap. Aside from the obvious color moire, there's also clear luminance moire (for example in the girl's cheek). Generally, the texture in the clothes is muddy and indistinct. By far the least successful rendering. The right is definitely the shifted version, but it's not hugely better than the center crop. You can just make out the improvements by looking at the grooves in the clothing. The contrasts are cleaner in the right. There's also noticeably less checkerboarding/false detail. Several of the vertical lines on the door are cleaner on the right. I suspected the right was the pixel shift by looking at 100%. At 200% I was very confident. The surefire confirmation came when I added smart sharpening. The right absorbed it better than the center. The center became "digital" looking much sooner. This difference is important to those of us who like to do what the experts here might characterize as "poor subjective processing."

Of course, there are other portions of the studio scene in which it would have been immediately obvious to even a casual viewer which was which. I'm talking about the Siemens stars, the text, the half circles, the converging lines. Perhaps more important in real-world terms is the texture of the paper. The single-frame versions don't really show it. The pixel-shift version pretty clearly shows the texture in the paper.
 
To avoid differences in focusing, light, etc., the better way is to extract the first sub-frame of the pixel-shift stack, demosaic it, and compare it with the full stack.
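A minimal sketch of that comparison, assuming the first sub-frame is available as its own file and the merged stack was exported as a linear 16-bit TIFF (both file names hypothetical):

    import numpy as np
    import rawpy
    import imageio.v2 as imageio

    with rawpy.imread('first_subframe.dng') as raw:
        # Demosaic the single frame; gamma=(1, 1) and no_auto_bright keep
        # the rendering linear, output_bps=16 keeps full precision.
        single = raw.postprocess(gamma=(1, 1), no_auto_bright=True,
                                 use_camera_wb=True, output_bps=16)

    stack = imageio.imread('pixel_shift_merge.tiff')

    # Like-for-like scaling and white balance are assumed here.
    err = np.abs(single.astype(np.float32) - stack.astype(np.float32))
    print('mean abs difference per channel:', err.reshape(-1, 3).mean(axis=0))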
 
To avoid differences in focusing, light, etc., the better way is to extract the first sub-frame of the pixel-shift stack, demosaic it, and compare it with the full stack.
Yes. That's what I did the other night in response to Jack's objection to the use of the IR resolution charts. He didn't respond to my post here.
 
I just noticed that in the lowlight DPR studio scene shot (shutter speed was 3 seconds) there is visible motion-based artifacting in some of the green feather fuzz in the upper right corner of the pixel shift version. The artifacting pattern looks identical in the rendering generated by RawDigger and ACR. (More proof that RawDigger has things working correctly now.) Below are crops that show the issue. The left crop is from ACR/PS (upsized to 300% with nearest neighbor to preserve the hard edges). The center crop is a screen grab of RawDigger at 300%. The right crop is the single frame version processed in ACR/PS (also upsized with nearest neighbor). Of course, one might want to also comment on which is sharpest, but I suspect that "bad processing" objections may be raised.

[Images: three crops as described above; best viewed full size in the DPR viewer]
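For reference, the 300% nearest-neighbor upsize is easy to reproduce; here is a minimal sketch with a hypothetical file name. NEAREST preserves the hard single-pixel edges that bilinear or bicubic resampling would smear.

    from PIL import Image

    crop = Image.open('artifact_crop.png')
    big = crop.resize((crop.width * 3, crop.height * 3), Image.NEAREST)
    big.save('artifact_crop_300.png')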
 
I guess the K-1's Motion Correction sub-mode for Pixel Shift could be of some help here. It looks like in this mode VR data is recorded in the Makernotes while the sensor is busy shifting. To my understanding, Pentax incorporated processing of this sub-mode in their own converter, but ACR does not interpret this data.

There are more things here waiting to be reverse-engineered.
 
Thanks for your contribution, Nick. I'll wait another day or so to see if anybody else wants to chime in before revealing the order.
 
To avoid differences in focusing, light, etc., the better way is to extract the first sub-frame of the pixel-shift stack, demosaic it, and compare it with the full stack.
Good point, Iliah. Matlab will not read PEFs, and it chokes on the DNG sub-frames because they are stored at 14 bits instead of 16 (tsk tsk). DNG Converter produces 3 fully populated 'LinearRaw' channels (I assume it averages the two greens?), so I never get to extract the individual sub-frames. If you are able to share the first raw sub-frame of file IMGP0551.PEF in a format that can be read by RT, I can perform the comparison.

Jack
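One possible workaround for the 14-bit issue, sketched in Python: LibRaw (via rawpy) unpacks the DNG regardless of bit depth, and the mosaic can be re-saved as a plain 16-bit TIFF that Matlab or RT will read. The file names here are hypothetical.

    import numpy as np
    import rawpy
    import imageio.v2 as imageio

    with rawpy.imread('subframe.dng') as raw:
        cfa = raw.raw_image_visible.astype(np.uint16)  # copies the mosaic

    # Optionally shift the 14-bit values into the top of the 16-bit range
    # so viewers scale them sensibly.
    imageio.imwrite('subframe_16bit.tiff', cfa << 2)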
 
Jack, are the spools from RawDigger? Do we know the conversion tools you used are extracting raw data correctly? Have you tried SILKYPIX Developer Studio for PENTAX / Digital Camera Utility?
No, they are from RT, straight from 'LinearRaw' after DNG conversion of the corresponding PEF. I'll also try it with the latest RawDigger beta when I have some time.
 
Thanks for your contribution, Nick. I'll wait another day or so to see if anybody else wants to chime in before revealing the order.
Fair enough. I'm actually quite curious about the demosaicing algorithm used on the center image. It's extraordinarily artifact free for a single frame. The left is very much in line with what I'd expect from a single frame, and the right is the pixel shift, but the center is a bit of a mystery to me.

P.S. How do you perform a blind deconvolution/demosaicing process on a single source image?
 
P.S. How do you perform a blind deconvolution/demosaicing process on a single source image?
Right, there was no sharpening or other processing of any kind. By blind I meant that one raw conversion made no assumptions whatsoever, while the other was aware of a neutral-target prior. I guess it's what we can expect in the future from really smart algorithms.
 
Right, there was no sharpening or other processing of any kind. By blind I meant that one raw conversion made no assumptions whatsoever, while the other was aware of a neutral-target prior.
I understand now.
I guess it's what we can expect in the future from really smart algorithms.
Yes, it will be almost as good as what we can expect today from well-implemented pixel shift. :-D
 
No, they are from RT, straight from 'LinearRaw' after DNG conversion of the corresponding PEF. I'll also try it with the latest RawDigger beta when I have some time.
I still think the Pentax software should be used as a kind of reference.
 
Yes, it will be almost as good as what we can expect today from well-implemented pixel shift.
Good point. And without further ado, given the great interest demonstrated for the K-1 in this thread ;-), congratulations, Nick! Your analysis was spot on (applause); my observations were very similar.

The demosaiced file, the first on the left, was rendered by RawTherapee with the DCB algorithm, its enhancement steps, and CA correction enabled. It typically does a great job without photographer intervention, as it did in this case. I think I was not able to defeat the initial color matrix adjustment in RT (though I was able to kill the profile and tone curves), which desaturates blues in the current strategy of DNG v1.4; that's why the blue channel is, err, f'd.

The image on the right is produced straight from the fully populated, DNG-converted 'LinearRaw' channels with no processing whatsoever other than white balancing and application of sRGB gamma. Impressive.

The grayscale image in the center is simply the unshifted CFA raw data as-is, except for channel normalization (white balance) and application of sRGB gamma. I am glad it gave you (and at least one other, nameless poster in the other thread) pause, and that you'd consider it extraordinarily artifact-free for a single frame. In a double-blind test I would expect it to be virtually indistinguishable, statistically, from the pixel-shifted 4-capture version when viewed by the unalerted typical photographer pixel-peeping at 100% - unless clearly put under a forensic microscope by someone skilled in the art, like you. Of course, this approach only works in the neutral portions of an image.
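A minimal sketch of that center rendering, assuming an RGGB mosaic and hypothetical white-balance multipliers: normalize each Bayer channel, apply the sRGB transfer curve, and view the mosaic itself as a grayscale image.

    import numpy as np

    def render_cfa_gray(cfa, wb=(2.0, 1.0, 1.0, 1.5), white=16383):
        """cfa: H x W mosaic; wb: multipliers for the R, G1, G2, B sites."""
        out = cfa.astype(np.float32) / white
        for (y, x), m in zip([(0, 0), (0, 1), (1, 0), (1, 1)], wb):
            out[y::2, x::2] *= m
        out = np.clip(out, 0, 1)
        # sRGB transfer curve: linear toe plus a 1/2.4-power segment.
        srgb = np.where(out <= 0.0031308, 12.92 * out,
                        1.055 * out ** (1 / 2.4) - 0.055)
        return (srgb * 255).astype(np.uint8)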

If someone can provide a separate single frame from the 4-frame pixel-shifted sequence of IMGP0551.PEF in the next couple of days, I will pit the two versions against each other using the new RawDigger beta and DNG Converter.

In the meantime I think I have now been able to touch first-hand the advantages of pixel-shift technology. It seems to work; good job, Pentax. As to whether I would use it on my tripod-mounted landscape captures, given the massive files and the additional 3-second in-camera rendering time, I am not sure yet. I would probably want to compare these results to 4 unshifted captures assembled by something like PhotoAcute, after a bit of subjective processing, before making up my mind.

And good job, Nick!

Jack
 