However, when photos are compared at the same size, the increased resolution of the H2 (and other 40MP sensors) is on display.
No, it really isn’t. Now you’ve thrown away 14MP of the 40MP sensor’s resolution advantage: downscaled, only 26MP of resolution is on display, the same as from the 26MP sensor. What is on display, however, is the finer rendering of detail that comes with more original image data to work with and fewer resulting demosaicing errors.
Bravo, you nailed it perfectly. Thank you for the clarity of your formulation.
A remark on demosaicing. Even the in-camera demosaicing part of the imaging pipeline is good enough, despite being constrained by real-time processing and the available computing resources (it trades algorithm sophistication and result quality for speed and efficiency). I can't recall any artifacts in my Fuji SOOC JPEGs that I'd attribute specifically to demosaicing.
The only case of wrong demosaicing I recall is the infamous "X-Trans worms" saga, which turned out to be specific to Adobe products: to Adobe's malicious negligence in the choice and under-the-hood implementation of the demosaicing algorithm, compounded by the dubious, non-switchable default USM (over)sharpening.
The free darktable software has offered a choice of X-Trans demosaicing algorithms since its early days (it is now at version 5.2); it never performs unsolicited sharpening, and thanks to this I have never had any problems with RAFs. Moreover, because desktop computers have "unlimited" computational resources, it is easy to demosaic a RAF at a quality level far beyond the in-camera CPU's capabilities. Doing so is my standard practice.
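For illustration only, here is a minimal sketch of such offline, higher-quality processing in Python, using the rawpy (LibRaw) bindings rather than darktable itself, since darktable is driven through its GUI or darktable-cli; the file names are hypothetical:

```python
import rawpy
import imageio.v3 as iio

# Hypothetical input file; LibRaw applies its X-Trans-aware
# interpolation to RAF files.
with rawpy.imread("DSCF0001.RAF") as raw:
    rgb = raw.postprocess(
        output_bps=16,        # 16-bit output instead of an 8-bit JPEG
        no_auto_bright=True,  # no automatic brightening
        use_camera_wb=True,   # start from the camera's white balance
    )

iio.imwrite("DSCF0001.tiff", rgb)  # no sharpening applied anywhere
```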
It still looks better despite there no longer being any resolution advantage.
Yes, exactly. The stronger downsampling is what helps here.
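For anyone who wants to reproduce that same-size comparison, it is just a downscale of the 40MP file to the 26MP file's dimensions before viewing both at 100%. A minimal Python sketch, with hypothetical file names:

```python
from PIL import Image

img_40mp = Image.open("x-h2_40mp.jpg")   # hypothetical file names
img_26mp = Image.open("26mp_body.jpg")

# Downscale the 40MP image to the 26MP image's dimensions (a ~0.81x
# linear scale, since sqrt(26/40) ~= 0.81). The averaging done by good
# resampling is what suppresses per-pixel noise and demosaicing errors.
img_40mp_small = img_40mp.resize(img_26mp.size, Image.LANCZOS)
img_40mp_small.save("x-h2_downscaled.jpg")
```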
BTW, this reminds me of the smartphone-specific wiles and tricks that are common nowadays. Let's take the Samsung S23 as an example. It uses Samsung's 50MP ISOCELL GN3 sensor (~10.19 mm diagonal, 4:3 aspect, ~4.2 crop, 1.00 μm pixel size), which works out to ~1.0 megapixels/mm². Crazy dense, yeah?
The Apple iPhone 14/15 Pro is a bit less extreme: 48MP, 1.22 μm pixel size, ~0.66 megapixels/mm², ~3.5 crop.
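Those densities are easy to sanity-check from the megapixel count and pixel pitch alone (a rough sketch; the spec figures are approximate):

```python
# Back-of-the-envelope check of the quoted pixel densities.
specs = {
    # name: (megapixels, pixel pitch in micrometers)
    "Samsung ISOCELL GN3 (S23)": (50, 1.00),
    "Apple iPhone 14/15 Pro": (48, 1.22),
}

for name, (mp, pitch_um) in specs.items():
    area_mm2 = mp * 1e6 * (pitch_um / 1000) ** 2  # active area in mm^2
    print(f"{name}: ~{area_mm2:.0f} mm^2, ~{mp / area_mm2:.2f} MP/mm^2")

# Samsung ISOCELL GN3 (S23): ~50 mm^2, ~1.00 MP/mm^2
# Apple iPhone 14/15 Pro: ~71 mm^2, ~0.67 MP/mm^2
```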
But have you ever seen a 50MP RAW file from the S23? I have, out of curiosity. It looks completely horrible, even when taken in good light. Postprocessing this mess by hand is far beyond both my skills and my desire.
So what did these smart guys do? They both implemented 4-to-1 pixel binning (a 2x2 quad maps to 1 pixel in the resulting image) in hardware under different trade names (“Quad” / Tetrapixel binning). Add some supplementary computational imaging tricks, such as invisibly merging multiple exposures (or multiple gains), and we effectively get a 12MP image that is quite good, and 12-bit, AFAIK.
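To make the binning step concrete, here is a minimal NumPy sketch on a single-channel toy readout (an assumption for illustration: the real binning happens per color channel, in the sensor hardware, before readout):

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Sum each 2x2 quad into one output pixel (H and W must be even)."""
    h, w = raw.shape
    quads = raw.reshape(h // 2, 2, w // 2, 2)
    # Summing four pixels trades resolution for SNR and adds ~2 bits of
    # precision, which is how 10-bit quads can yield a 12-bit result.
    return quads.sum(axis=(1, 3))

# ~50MP of simulated 10-bit sensor values (8160 x 6144 = 50.1MP)
sensor = np.random.randint(0, 1024, size=(6144, 8160), dtype=np.uint16)
binned = bin_2x2(sensor)
print(sensor.shape, "->", binned.shape)  # (6144, 8160) -> (3072, 4080), ~12.5MP
print(binned.max() < 4096)               # True: the sums fit in 12 bits
```

Four summed 10-bit samples top out at 4 x 1023 = 4092, which fits exactly in 12 bits, consistent with the 12-bit output mentioned above.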
For comparison, the Fuji 40MP sensor is only ≈0.11 MP/mm², so Fujifilm has ample room for future improvement before its cameras approach smartphone pixel densities. Imagine a 360MP physical APS-C sensor, made by Samsung with all its underlying smart tech, tetrabinned to a 90MP pseudo-RAW? That's what would make a real difference for 4K or 8K screen viewing.
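The 360MP figure follows directly from the densities above, in the same back-of-the-envelope spirit:

```python
# APS-C area at GN3-like pixel density, then 4-to-1 binned.
apsc_area_mm2 = 23.5 * 15.6        # ~366.6 mm^2
gn3_density_mp_mm2 = 1.0           # from the S23 figures above
full_mp = apsc_area_mm2 * gn3_density_mp_mm2
print(f"physical: ~{full_mp:.0f} MP, tetrabinned: ~{full_mp / 4:.0f} MP")
# physical: ~367 MP, tetrabinned: ~92 MP (roughly the 360MP / 90MP above)
```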
--
https://www.viewbug.com/member/stesinou