The Pentax K-1 II has drawn a great deal of attention lately because of the strong Noise Reduction (NR) it applies starting at ISO 640.
Does the processing applied by the KP appear to differ (in nature and/or strength) from what the K-1 II is applying?
DPReview praised the KP for its "excellent high ISO performance in both Raw and JPEG" and only mentioned suspected RAW baking in passing. The K-1 II, on the other hand, got slapped for its processing. I'm wondering whether that's due to different processing or a different assessment of the same situation.
FWIW, I'm against any in-camera destructive RAW baking in any shape or form.
Of course there are uncontentious system-noise reduction strategies performed within the sensor, but post-A/D "denoising" should be left to out-of-camera processing, AFAIC.
Am I right in assuming that the "accelerator" chip Pentax has been employing could most likely be replaced by out-of-camera processing? Given the "closed black boxes" modern Sony sensors appear to be, I have difficulty believing that the "accelerator" chip leverages any sensor-internal data or processes.
Potentially, the "accelerator" chip could make use of the equivalent of "dark frames", etc., i.e., data that is normally not available outside a camera (but could be provided as secondary data). However, from what I've seen in the analyses conducted so far, it seems that the "accelerator" attempts "beautification" with a nearest-neighbour smoothing component as part of its data massaging.
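To illustrate why such a smoothing component would be detectable even in "lens cap" data, here's a minimal NumPy sketch. The 5-pixel cross kernel is purely my assumption for illustration, not Pentax's actual algorithm: the point is that any such averaging lowers the apparent noise floor but leaves a correlation between adjacent pixels that white read noise doesn't have.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a "lens cap" frame: pure, uncorrelated read noise.
frame = rng.normal(0.0, 10.0, size=(256, 256))

# Hypothetical nearest-neighbour smoothing: average each pixel with
# its four direct neighbours (a stand-in for whatever the
# "accelerator" actually does).
smoothed = (
    frame
    + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
    + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1)
) / 5.0

def neighbour_corr(img):
    """Correlation between horizontally adjacent pixels."""
    a, b = img[:, :-1].ravel(), img[:, 1:].ravel()
    return np.corrcoef(a, b)[0, 1]

# Smoothing lowers the measured noise level...
print(frame.std(), smoothed.std())
# ...but the tell-tale is the induced neighbour correlation
# (near 0 for raw read noise, clearly positive after smoothing).
print(neighbour_corr(frame), neighbour_corr(smoothed))
```

So even without access to the chip, the signature of this kind of processing should show up in the pixel-to-pixel correlation of dark frames.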
I hope this is not considered to be an off-topic post.
BTW, @bclaff, have you noticed my proposal to use "image stacking" for analysis purposes?
While FT plots and power spectra are useful for detecting image manipulation, when applied to images of pure noise they say nothing about the retention of signal. In other words, if some image manipulation did a miraculous job of combating noise while hardly harming signal, it may deserve less scolding than an alternative that simply attenuates high spatial frequencies.
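For concreteness, here's a toy NumPy sketch of the kind of spectral check I mean (the neighbour-averaging "denoiser" is again my own stand-in, not any camera's actual processing). It detects high-frequency attenuation just fine, but it would flag any low-pass operation the same way, regardless of how much scene information was preserved:

```python
import numpy as np

rng = np.random.default_rng(1)
noise = rng.normal(size=(256, 256))

# A generic low-pass "denoise": average with the 4 direct neighbours.
lowpassed = (noise + np.roll(noise, 1, 0) + np.roll(noise, -1, 0)
             + np.roll(noise, 1, 1) + np.roll(noise, -1, 1)) / 5.0

def high_low_power_ratio(img):
    """Mean power in a highest-frequency corner of the (shifted) 2D
    spectrum divided by mean power in the lowest-frequency block."""
    p = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    n = img.shape[0]
    c = n // 2
    low = p[c - n // 8: c + n // 8, c - n // 8: c + n // 8].mean()
    high = p[:n // 8, :n // 8].mean()  # far corner = near Nyquist
    return high / low

print(high_low_power_ratio(noise))      # roughly 1 for white noise
print(high_low_power_ratio(lowpassed))  # clearly below 1
```

A flat ratio is what untouched read noise should give; a depressed ratio reveals smoothing, but says nothing about what happened to the signal.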
I'll be the first to argue that, in general, it is impossible to distinguish signal (here meant to be "information" present in a scene) from noise (given the stochastic nature of light itself) by evaluating a single image. But could it still be useful to compare denoising strategies by averaging over many images (an image stack) and then evaluating which strategy best supports the recovery of signal through averaging?
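A rough NumPy sketch of what I have in mind (the grating target, noise level, and strawman denoiser are all hypothetical choices of mine): if a denoiser destroys per-frame detail, averaging cannot bring that detail back, so the "denoised" stack hits an error floor that the untouched stack does not.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ground truth with fine detail: a 4-pixel grating.
n = 128
x = np.arange(n)
truth = np.sin(2 * np.pi * x / 4.0)[None, :] * np.ones((n, 1))

def smooth(img):
    """Strawman per-frame denoiser: horizontal neighbour averaging."""
    return (img + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 3.0

# 200 noisy exposures of the same scene.
frames = [truth + rng.normal(0.0, 2.0, size=truth.shape)
          for _ in range(200)]

# Strategy A: stack the untouched frames.
stack_plain = np.mean(frames, axis=0)
# Strategy B: "denoise" each frame first, then stack.
stack_smoothed = np.mean([smooth(f) for f in frames], axis=0)

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

print(rmse(stack_plain, truth))     # shrinks like 1/sqrt(N)
print(rmse(stack_smoothed, truth))  # floor from detail lost per frame
```

The comparison of the two residual errors against the known target is exactly the "signal retention" metric that a power spectrum of a single noise frame cannot provide.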
BTW, attempting to work with scenes that have content (as opposed to "lens cap" shots which I'm assuming you are using) could circumvent potential optimisations by some manufacturers.
Sony, for instance, had a line of CD players that switched off analogue circuitry upon detecting a "zero" stream of signal. This led to phenomenal dynamic range measurements which were, of course, of no practical relevance. As soon as any information was fed to the A/D converters, the noise floor was significantly raised by the re-activated circuitry.
Are we sure something similar is not happening with "lens cap" images?
--
http://www.flickr.com/photos/class_a/