Well, maybe you should dispute my conclusion because I'm now reversing it. (See below.)
And now, based on the response Iwaddo received from OM System, I should reverse my reversal. I'm feeling dizzy.
On a slightly more serious note, I'm a little skeptical of reading much into the response provided by OM System. These things have a tendency to lose validity in translation. We've seen "official" and supposedly reliable sources get things mixed up. I will treat the response as "interesting" but not definitive until a detailed explanation of what's actually happening under the covers is offered up. Oly/OM has treated this "feature" as an enigma wrapped in a riddle from the get-go. Why?
Also, I want to make very clear, if my prior posts fail to adequately convey it, that my testing is limited and shouldn't be relied on (as if my flip-flopping speculations haven't made that clear already). Generally speaking, though, I'm not aware of a better or more reliable starting point for detecting and evaluating digital signal processing in raw files than looking at blackframe FFTs. Of course, evaluating blackframe FFTs has its limitations, and particularities of the feature in question here (specifically, that the setting only applies to bursts) add time-consuming complexity. Nevertheless, based on the limited and fairly casual testing I've done, and compared to the obvious differences in the JPEGs, I'm just not seeing much to get worked up about with respect to raws. The lens correction anomaly is intriguing, but not enough to tempt me into pursuing this further. Perhaps you'll pick up the scent trail and do your own testing???
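In case anyone wants to try this at home, here's roughly what I mean by "looking at blackframe FFTs," sketched in Python. It assumes numpy and rawpy (the LibRaw bindings); the file name and the RGGB layout are placeholders you'd adjust for your own files:

```python
# Minimal sketch: log-magnitude FFT of one raw CFA channel from a blackframe.
# Assumes numpy + rawpy; "blackframe.orf" is a placeholder file name.
import numpy as np
import rawpy

with rawpy.imread("blackframe.orf") as raw:
    # Copy the CFA data and pull one plane (e.g., the red sites)
    # so we aren't mixing channels with different statistics.
    cfa = raw.raw_image.astype(np.float64)
red = cfa[0::2, 0::2]  # assumes an RGGB layout; check raw.raw_pattern

# Remove the mean so the DC spike doesn't swamp the display.
spectrum = np.fft.fftshift(np.fft.fft2(red - red.mean()))
log_mag = np.log1p(np.abs(spectrum))

# Display log_mag (e.g., with plt.imshow). Untouched read noise should look
# flat and featureless; ridges, combs, or "halos" suggest filtering/resampling.
print(f"spectrum mean={log_mag.mean():.3f}, max={log_mag.max():.3f}")
```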
Every raw is going to be different because the noise is going to be different each time; additionally, subsequent shots will be noisier than earlier shots as the sensor heats up.
With respect to noise, other than some fixed pattern noise, which should be generally similar from one blackframe to the next, the only noise present should be read noise (remember, they're blackframes, so no photon shot noise).
No, there's also dark current noise, which is quite distinct from read noise. It can consist of both fixed-pattern and temporal components. It's an inescapable property of a real-world sensor system. It's also impacted by temperature, both ambient (which varies) and operation-induced heat (which is why I commented about the sensor heating up for subsequent shots). This is why imaging sensors in critical or demanding environments (like astronomy or astrophotography, where images are mostly black) are actively cooled. By leaving the lens cap on, you're reducing the signal to zero, so noise dominates. You can read more about it here.
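If you want to see the fixed-pattern/temporal split for yourself, it's a quick exercise: averaging a stack of blackframes suppresses the temporal part by roughly sqrt(N), so the mean approximates the fixed pattern, and the residual of any single frame against it approximates the temporal noise. A rough sketch (numpy + rawpy assumed; file names are hypothetical):

```python
# Sketch: separate fixed-pattern from temporal noise in a blackframe stack.
# Assumes numpy + rawpy; "burst_0.orf" ... "burst_4.orf" are placeholder names.
import numpy as np
import rawpy

def load_cfa(path):
    with rawpy.imread(path) as raw:
        return raw.raw_image.astype(np.float64)  # copy; safe after close

stack = np.stack([load_cfa(f"burst_{i}.orf") for i in range(5)])

# The mean frame keeps the fixed pattern; temporal noise shrinks ~sqrt(5).
fixed_pattern = stack.mean(axis=0)
# Residual of one frame against the mean is (mostly) the temporal part.
temporal = stack[0] - fixed_pattern

print(f"fixed-pattern std: {fixed_pattern.std():.3f} DN")
print(f"temporal std:      {temporal.std():.3f} DN")
```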
I concede that what I wrote oversimplified by not acknowledging contributions from DSNU and DCNU. However, the impact of DCNU in particular on the testing I did is surely next to nothing. We're talking about a five-shot burst at a shutter speed of 1/4000 (taken in my cool, dark basement utility room). Unlike the astrophotography use case you refer to, where thermal noise is a significant factor, the impact here when (for instance) comparing the first and last shots in a burst, or between the separate bursts, is really de minimis.
Besides, I already accounted for it by looking at all of the frames in both bursts and FFTing all of them, and also by averaging each set and looking at those FFTs. The highest difference in the raw standard deviations between the first shots in each burst and the last shots was approx. 0.01 in the red and blue channels and 0.02 in the average of the green channels. Again, feel free to do your own testing and calculate the statistical significance of the DCNU impact. I'm just not seeing how this is an obstacle or unavoidable confounding factor in my (admittedly casual and preliminary) speculations as between the bursts. Are you saying we should expect different amounts of thermal noise in one mode vs. the other? I just don't see it being a factor, at least when comparing like to like (i.e., corresponding frames from each burst, or averages of all of them).
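If anyone wants to replicate that first-vs-last comparison, it amounts to something like this (numpy + rawpy assumed; the RGGB layout and file names are placeholders to adjust):

```python
# Sketch: per-CFA-channel standard deviations for the first and last frames
# of a burst, to put a number on any warm-up drift between shots.
import numpy as np
import rawpy

def channel_stds(path):
    with rawpy.imread(path) as raw:
        cfa = raw.raw_image.astype(np.float64)  # copy; safe after close
    return {
        "R":  cfa[0::2, 0::2].std(),  # assumes RGGB; check raw.raw_pattern
        "G1": cfa[0::2, 1::2].std(),
        "G2": cfa[1::2, 0::2].std(),
        "B":  cfa[1::2, 1::2].std(),
    }

first, last = channel_stds("burst_0.orf"), channel_stds("burst_4.orf")
for ch in ("R", "G1", "G2", "B"):
    print(f"{ch}: first={first[ch]:.3f}  last={last[ch]:.3f}  "
          f"delta={last[ch] - first[ch]:+.3f}")
```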
I'm not aware of any claim ever made for Oly/OM raws having varying levels of NR applied based on image-specific content as opposed to fixed (known) conditions such as ISO. I would be shocked if that's a factor here given the large processing impact it would likely have and the fact that it has never (to my knowledge) been identified as a feature.
Firstly, there are lots of things that are true that Oly has not publicly stated.

But more importantly, you seem to think I'm talking about some sort of content recognition, but I'm not. Many noise reduction and compression algorithms are adaptive; they perform quantization and frequency analysis on the data to optimize performance. Even rudimentary frequency analysis of an image will reveal a photo of a test target or a tree is very different from a photo of noise, because they have structure. We do not know that this sort of thing is not going on inside the camera, and we know it absolutely is happening in the case of generating jpegs.
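To make that concrete, even a toy spectral statistic cleanly separates structure from noise. A self-contained sketch with synthetic data (numpy only, nothing camera-specific; the threshold and radius are arbitrary illustration values):

```python
# Sketch: a crude "structure vs noise" discriminator via the FFT.
# A structured image concentrates power at low frequencies; white noise
# spreads it evenly across the spectrum.
import numpy as np

def low_freq_fraction(img, radius=0.1):
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return power[r < radius].sum() / power.sum()

rng = np.random.default_rng(0)
noise = rng.normal(size=(256, 256))
# A "scene": smooth gradient plus a few bars, like a crude test target.
x = np.linspace(0, 1, 256)
scene = np.outer(x, x) + (np.sin(40 * np.pi * x)[None, :] > 0) * 0.5

print(f"noise: {low_freq_fraction(noise):.3f}")  # ~ area fraction of the disk
print(f"scene: {low_freq_fraction(scene):.3f}")  # much higher
```

White noise should land near the disk's area fraction (~0.03 here) while the structured frame lands far higher; that's the kind of cheap statistic an adaptive algorithm could key on without any "content recognition."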
Sure, it's going on with JPEGs, but please point me to evidence of the kind of image-specific adaptive DSP (as opposed to other adaptive DSP based on things like ISO) that is currently being utilized at the raw level in any of the cameras that show up on DPReview. (I'm not interested in responses about the compressed raw options that some cameras have. They aren't relevant here.)
If one mode wants to preserve "details," then it stands to reason it might be imbued with some simple definition of what constitutes a "detail" that distinguishes detail from noise.
I fear that the conversation is swerving into the wildly speculative sphere here. Is it possible that Oly/OM developed some super-secret, fast and smart adaptive DSP that somehow improves raw "detail" while also evading detection in blackframe FFTs? Well, yes, I suppose it's possible. Is it possible that Oly/OM's marketing department and technical documentation team chose not to tout this feature? I suppose that's possible. Is it possible that in the six-or-so years it's been available, none of us ordinary Oly/OM users have picked up on how this feature visibly impacts IQ rendered from raws? Again, I suppose it's possible. Is it possible DPReview staff got it wrong when it wrote, "A new 'Detail Priority' mode reduces noise (at the expense of burst speed) at low ISOs in JPEGs"? Well, it wouldn't be the first time DPR was wrong!
My point before was that when we don't know for sure what's happening inside a system, it's best not to assume that nothing is happening. See relevance of this thought below...
See also: Occam's Razor...
When the raws were processed in RawTherapee, the distinct FFT "signature" was present when lens distortion correction was turned on and completely gone when it was turned off. (This proves that the distinct signature visible in the FFTs was due to lens correction applied by the raw converters and not something irreversibly built into the raw data itself.)
Good catch on your part. Lens correction was not something I specifically thought of, but it goes back to the principle of not assuming nothing's going on. There was indeed something going on that we didn't realize, that left a reproducible fingerprint in the data, and you found it.
With the lens distortion correction turned off in RawTherapee, the FFTs from both modes were very clean looking and indistinguishable from each other. Nor was there any obvious sign of raw noise reduction in either mode. I also averaged the 5 shots from each mode, FFT'd the averages, and compared them. Again, there was no meaningful difference at all in the FFTs between the two modes, either visibly or when compared in PS using the "Difference" blending mode.
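For anyone who'd rather repeat that comparison numerically instead of eyeballing it in PS, the gist is something like this (numpy + rawpy assumed; the per-mode file names are placeholders):

```python
# Sketch: average each burst, FFT the averages, and difference the
# log-magnitude spectra -- a numeric stand-in for PS's "Difference" blend.
import numpy as np
import rawpy

def load_cfa(path):
    with rawpy.imread(path) as raw:
        return raw.raw_image.astype(np.float64)  # copy; safe after close

def log_spectrum(img):
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    return np.log1p(np.abs(f))

# Placeholder names: 5-shot bursts from each mode.
detail = log_spectrum(np.mean([load_cfa(f"detail_{i}.orf") for i in range(5)], axis=0))
normal = log_spectrum(np.mean([load_cfa(f"normal_{i}.orf") for i in range(5)], axis=0))

diff = np.abs(detail - normal)
print(f"max/mean spectral difference: {diff.max():.4f} / {diff.mean():.4f}")
```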
Again, good catch. And I feel vindicated in my skepticism.

Even though the hypothesis from your images wasn't proven, it was a good discussion.
I'll just add one last thought before exiting this discussion. I bought a Nikon D300 when it came out. One of the touted features was the option to switch between 12-bit and 14-bit raw capture. Somewhat unlike the situation here, the visible IQ superiority of the 14-bit raws was hotly and frequently debated early on, but the consensus around that conclusion solidified pretty quickly. There was just too much evidence supporting the claim. It turned out that most of the reason for the better IQ was not the greater bit depth but, rather, that the read-out speed was much slower and, therefore, more accurate (less noisy). Something along the same lines could be going on here, given the difference in burst speeds between the two modes. In other words, there might be a [very] modest SNR improvement in the raws generated from Detail Priority mode (hence the references to improved raw IQ). There might ALSO be some DSP voodoo differences between the two modes at the JPEG level - e.g., a splash of extra NR on SOOC JPEGs generated in Detail Priority mode (hence the claims promoted by DPR and elsewhere). To me, this is one way of reconciling the seemingly at-odds information and evidence we're grappling with here.
I could be wrong...it's been known to happen from time to time. ;-)