EXR Confusion not fully resolved

Started 6 months ago | Discussions thread
Timur Born
Senior Member | Posts: 3,765
Re: EXR Confusion not fully resolved
In reply to photoreddi, 6 months ago

photoreddi wrote:

> No, M size pixels are not the same size as L size pixels. Pixels are (as you indicate) mathematical creations of a complex demosaicing algorithm. The photosites used to create the M size pixels are the same size as the photosites used to create L size pixels, but there are twice as many L size pixels as M size pixels so they can't be the same size.

I guess this is more a question of how we define things than of how we understand them. If you take two consecutive shots on any camera and then combine them, does that increase the pixel size or just the sample size? In my definition there is a difference.

> You seem to think that to create an M size pixel, the demosaicing process has to create individual L size pixels and then average them down to M size pixels.

No, I even described it the way you describe it in the paragraph below; language barriers surely add to the confusion. What happens is that the raw data of all L-sized pixels is saved in the RAW file, and then adjacent pixels of the same filter color are preferably averaged *before* demosaicing. With certain settings the averaging happens in-camera; with others the RAW converter software has to handle it (some do better than others).
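
To make the order of operations concrete, here is a rough numpy sketch of pair-averaging before demosaicing. The layout is hypothetical (same-color photosites assumed to sit in neighboring columns); it is meant to show the principle, not Fuji's actual pipeline:

import numpy as np

# Sketch only: average adjacent same-color photosites of the raw
# mosaic *before* any demosaicing, halving the pixel count (L -> M).
# Hypothetical layout: each same-color pair sits in adjacent columns.
def bin_pairs(raw):
    return (raw[:, 0::2].astype(np.float64) + raw[:, 1::2]) / 2.0

raw_l = np.random.randint(0, 4096, size=(8, 8))  # stand-in 12-bit raw data
raw_m = bin_pairs(raw_l)                         # M-size data, still un-demosaiced
print(raw_l.shape, "->", raw_m.shape)            # (8, 8) -> (8, 4)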

> That's one way that Fuji could have done it, but it's more likely that Fuji starts by combining pixel pairs and then uses a more straightforward Bayer demosaic on the averaged pairs. This would produce higher resolution M size images than what you've suggested.

This is how I described it before, and I even underlined that the supposed advantage of this EXR approach is that downsizing/averaging can be done before demosaicing on a (mostly) Bayer-type pattern. People seem to agree that this also resolves detail better (detail, not resolution as in pixel count) compared to standard Bayer-pattern cameras of the same MP count (SN mode, not HR mode). My personal jury is still out on that.

> Unless you have a link to a Fuji document that explicitly states how the pixels are created, I'd have to think that the Occam's Razor principle suggests that the simpler explanation (or in this case, process), while not foolproof, provides more support for my theory than yours.

The simple explanation is that collecting the same information twice and then averaging the data leads to a higher signal-to-noise ratio, because noise is random: the chance of noise hitting the same (pair of) pixel(s) in the same way twice is smaller than with a single sample. For uncorrelated noise, averaging two samples improves the SNR by a factor of about sqrt(2).
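
A quick numerical sanity check of that sqrt(2) figure (simulated numbers, not camera data):

import numpy as np

# Averaging two independent noisy samples of the same signal cuts
# random noise by a factor of sqrt(2).
rng = np.random.default_rng(0)
signal = 100.0
a = signal + rng.normal(0, 10, 1_000_000)  # first sample, noise sigma = 10
b = signal + rng.normal(0, 10, 1_000_000)  # second, independent sample
print(a.std())              # ~10.0
print(((a + b) / 2).std())  # ~7.07, i.e. 10 / sqrt(2)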

But my understanding, or interpretation, is that averaging two samples is *not* the same as having larger pixels (of the same tech level) with larger photon wells. Averaging a 1-100 pixel with another 1-100 pixel is not the same as reading a single 1-200 pixel. Which of the two provides better results is something I don't know enough about at this time.
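
For illustration only, here is a toy model under idealized assumptions (Poisson shot noise, plus Gaussian read noise paid once per readout; real sensors are messier):

import numpy as np

# Toy comparison: two half-size wells read out separately and combined
# vs. one full-size well read out once. Idealized, not real sensor data.
rng = np.random.default_rng(1)
photons, read_sigma, n = 1000, 5.0, 1_000_000

# Two half wells: each catches ~half the photons, each pays read noise.
pair = (rng.poisson(photons / 2, n) + rng.normal(0, read_sigma, n)
      + rng.poisson(photons / 2, n) + rng.normal(0, read_sigma, n))

# One full well: all photons in one well, read noise paid only once.
full = rng.poisson(photons, n) + rng.normal(0, read_sigma, n)

print(pair.mean() / pair.std())  # SNR of the combined pair
print(full.mean() / full.std())  # slightly higher: one readout, not two

In this idealized model the single large well comes out slightly ahead because it pays the read noise only once; whether and by how much that holds for real sensors is exactly the part I don't know enough about.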

> Could you post a link to the page that has the "mp pixel" scores along with the controls that allow Screen vs Print to be selected, or describe how to get to it? All I see are DxO's bar charts showing Overall Score, Color Depth Score, Dynamic Range Score and Low Light ISO Score. This is on the Scores page. The Measurements page makes charts available that show ISO Sensitivity, SNR 18%, Dynamic Range, Tonal Range, Color Sensitivity, Full SNR, Color Response and Full CS (Color Sensitivity), but nothing for sensor resolution.

In the measurement graphs there are two buttons in the upper left corner labeled "Screen" and "Print". "Screen" shows the data at the native resolution of the measured sensor; "Print" is always downsampled to 8 MP.
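
As far as I understand it, the SNR gain from that normalization follows the usual downsampling rule; this is my back-of-envelope reading, not an official DxO formula:

import math

# Approximate SNR gain (in dB) from downsampling a sensor's native
# pixel count to the 8 MP "Print" reference. My assumption: noise is
# uncorrelated between pixels, so SNR scales with sqrt(pixel ratio).
def print_snr_gain_db(native_mp, ref_mp=8.0):
    return 10 * math.log10(native_mp / ref_mp)

print(round(print_snr_gain_db(12), 2))  # ~1.76 dB for a 12 MP sensor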

> I've seen that sort of thing from you as well, or at least I think that I have.

Actually I gave an example in the very sentence you replied to in this paragraph. Two can play that game, but mine may or may not have been more subtle.

> If not, I probably was conflating you and Trevor. Speaking of whom, I wonder if you agree with his opinion that EXR sensors have no more dynamic range than non-EXR sensors, even when EXR DR mode is considered?

Is that what Trevor says, that EXR DR does not provide more dynamic range than a non-EXR sensor of the *same* size? Anyway, when two sensors of the same size are based on roughly the same technology, they more or less offer the same dynamic range. It's all about collecting photons, turning photons efficiently into electrons, and keeping the read noise (of the electronic circuits) down.

What EXR DR does is in essence nothing other than taking two consecutive exposures at lower resolution and combining them into one. You can do the very same thing with every camera out there and likely get better results at the same sensor size and (HR) pixel count than what EXR DR gives. The benefit of EXR is (only) that it works handheld with perfect pixel alignment and doesn't require post-processing to merge the two exposures.

Drawbacks compared to truly consecutive shots are increased noise in highlights and shadows (only half the sensor area is used to capture each exposure) and a lack of control over the blending process (every RAW converter does it differently, and there are blending artifacts and not-so-well-done blending even in-camera). Both methods struggle with moving targets if the longer-exposed half uses too slow a shutter speed, but EXR has a slight advantage in that part of both exposures happens at the same time (on the X10 one sensor half starts later and then both stop at the same time).
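
To illustrate what I mean by combining the two exposures, here is a generic exposure-merge sketch in numpy. This is a simple clip-and-substitute blend under my own assumptions; Fuji has not published the actual in-camera algorithm, so don't read this as their method:

import numpy as np

# Generic merge of two pixel-aligned exposures (not Fuji's algorithm):
# use the longer exposure everywhere it hasn't clipped, and fall back
# to the shorter exposure scaled up by the exposure ratio where it has.
def merge_exposures(long_exp, short_exp, ratio, clip=4095):
    long_exp = long_exp.astype(np.float64)
    lifted = short_exp.astype(np.float64) * ratio
    return np.where(long_exp >= clip, lifted, long_exp)

rng = np.random.default_rng(2)
long_half = rng.integers(0, 4096, (4, 6))  # stand-in 12-bit raw half
short_half = long_half // 4                # stand-in: 2 stops less exposure
hdr = merge_exposures(long_half, short_half, ratio=4)

A hard threshold like this produces exactly the kind of blending artifacts mentioned above; real converters use smoother transition zones, each in their own way.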

--

Red flash eyes save lives and eye-sight!
http://en.wikipedia.org/wiki/Retinoblastoma
