E-P1 dynamic range - DxO says 10 EV

Jay's test simply confirms that some cameras use different filters for the two G pixels in the Bayer array, i.e. they become R(G1)B(G2). This isn't some great secret; for example, not so long ago, Adobe issued an ACR/LR update that specifically improved the handling of such cameras.
Makes one wonder what took Adobe so long.
Another possibility could be that ACR is treating cameras differently in this regard.
Another great argument. It's a possibility, sure, but again it's one bandied around often without a shred of evidence.
Take the same raw data, pack it into CR2 and ARW, and you will have plenty of evidence.
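For anyone who wants to try the comparison themselves, here is a minimal sketch using rawpy (Python bindings to LibRaw). The file name and the RGGB layout are assumptions for illustration; check raw_pattern for the actual layout of a given camera.

```python
import numpy as np
import rawpy

# Shoot a defocused, evenly lit flat field and load the raw mosaic.
with rawpy.imread("flatfield.CR2") as raw:
    bayer = raw.raw_image_visible.astype(np.float64)

# Assuming an RGGB layout: G1 at (row 0, col 1), G2 at (row 1, col 0).
g1 = bayer[0::2, 1::2]
g2 = bayer[1::2, 0::2]

# A consistent offset between the two green planes suggests different
# filters (or different gains) on G1 and G2.
ratio = g1.mean() / g2.mean()
print(f"G1/G2 = {ratio:.4f} ({np.log2(ratio):+.3f} EV)")
```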

--
http://www.libraw.org/
 
For "JPEG DR", if one wants to go that way,
After reading the first page of this thread, I had an idea:

What if DPReview (or another big website) set up a studio scene for dynamic range measurement in 'real' terms, i.e. a scene with an enormous lighting range, from jet-black velvet in shadow
Yes, like a cavity acting as a light trap. DPR's new test scene has a bit of that, which is good. Lots of varying low-contrast fine textures to trick the NR algorithm.

Then I'd throw in a handful of differently-coloured sewing threads (not on their spools) in a dark place so the camera's JPEG engine has some fine colour detail to deal with.
to bright spotlight reflections and all the stuff in between.
A bright, slow gradient to match the clipping against. As small as possible, so as not to needlessly reduce overall scene contrast through lens flare.
Each camera/sensor takes the same picture and we can easily see the clipping areas since ALL cameras and films will clip at different points in this extreme scene.
Which tone curve are you going to apply? How are you going to set exposure?
Do you calibrate to a highlight, a shadow, or a midtone?
I'd use a sequence of exposures 1/3 stop apart and pick the one where 255,255,255 lands closest to the middle of the clipping gradient reference (or, better, also pick a few around that). I'd also put 1/3-stop reference marks on the gradient.
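A sketch of how that pick could be automated, assuming each JPEG is cropped to the gradient strip running left (dark) to right (bright). The file names and orientation are placeholders, not part of any existing test.

```python
import numpy as np
from PIL import Image

def clip_position(path):
    """Fraction along the strip where the first 255,255,255 clip appears."""
    strip = np.asarray(Image.open(path).convert("RGB"))
    # Columns containing at least one fully clipped (pure white) pixel.
    clipped_cols = np.nonzero((strip == 255).all(axis=2).any(axis=0))[0]
    return clipped_cols[0] / strip.shape[1] if clipped_cols.size else 1.0

brackets = ["em0.7.jpg", "em0.3.jpg", "e0.0.jpg", "ep0.3.jpg", "ep0.7.jpg"]
best = min(brackets, key=lambda f: abs(clip_position(f) - 0.5))
print("closest to mid-gradient clipping:", best)
```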
How do you compensate for ISO variances between cameras?
Use the lowest ISO (or the best-performing ISO if the lowest one is compromised).

Then darken the bright gradient and brighten the darkest parts of the scene to some predefined average levels using a standardised method (e.g. a gamma adjustment) so that visual inspection can be done. Compare to a well-exposed reference. Everyone can then make up their own mind.
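One way to pin down "a standardised method": fit the gamma that brings a crop's mean pixel value to a predefined target, then inspect visually. The 0.5 target and the plain power curve here are assumptions, not an agreed standard.

```python
import numpy as np

def normalize_gamma(crop, target=0.5):
    """Power curve chosen so a pixel at the crop's mean value maps to `target`."""
    crop = np.clip(crop, 1e-4, 1.0 - 1e-4)   # crop in [0,1], avoid log(0)
    gamma = np.log(target) / np.log(crop.mean())
    return crop ** gamma

# e.g. shadow_view = normalize_gamma(shadow_crop); highlight crops likewise.
```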

It would still be affected by light spectrum and things like that (which the test should point out), but at least the effect of NR, sharpening, JPEG compression and pattern noise would be factored in.
The T4110 step wedge (and other techniques) works pretty well at dealing with these issues and makes it easy to get standardized results.
The problem with the wedge and Imatest, or DPR, or even DxO, is that none of them registers pattern noise such as banding. That can often be the factor that limits the practically useful DR. Unlike random noise, it also doesn't average away well when downsizing.

Hopefully, at some point, there will be software capable of, in the vast majority of cases, reducing line noise at the source without noticeably damaging the real image.
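A minimal sketch of one way such software might work: estimate each row's offset from optically masked border pixels (where the camera provides them) and subtract it before demosaicing. The masked-column range is a placeholder.

```python
import numpy as np

def suppress_row_banding(raw, masked=slice(0, 16)):
    """Subtract per-row offsets estimated from masked border pixels."""
    offsets = np.median(raw[:, masked].astype(np.float64), axis=1, keepdims=True)
    # Remove only the row-to-row variation, preserving the global black level.
    return raw - (offsets - offsets.mean())
```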

The GH1 has the unique problem of clipping 99.4% of its raw data at black at ISO 100, which, I'd expect, would limit the amount of subject details you could salvage from the deepest shadows, as well as fool the wedge/Imatest method.
 
It's possible to test by shooting something with another spectrum, like a tungsten-illuminated white screen or a patch of even blue sky. If the relative differences between the G channels change (particularly if they change so much that the order is reversed), we know it's related to the dyes.
It does change.
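For what it's worth, a sketch of that comparison, reusing the G1/G2 ratio idea from the earlier rawpy snippet. File names and the RGGB layout are placeholders.

```python
import numpy as np
import rawpy

for name in ("screen_daylight.CR2", "screen_tungsten.CR2"):
    with rawpy.imread(name) as raw:
        bayer = raw.raw_image_visible.astype(np.float64)
    g1, g2 = bayer[0::2, 1::2], bayer[1::2, 0::2]   # RGGB assumed
    print(name, f"G1/G2 = {g1.mean() / g2.mean():.4f}")

# If the ratio shifts with the illuminant (or the order reverses), the
# G1/G2 difference is spectral, i.e. in the dyes, not just a gain mismatch.
```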

--
http://www.libraw.org/
 
Do you have an explanation of the possible advantage of such a design for image processing?
Better colour, less noise.
I have been scratching my head on that for some time, because it seems to entail a substantial loss of resolution.
You need to apply 4-channel white balance at the very least, but interpolating in 4 colours helps even more. Both RML and RPP take advantage of the green channels being unequal.
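A minimal sketch of 4-channel white balance on the raw mosaic, with separate multipliers for the two greens. The multiplier values are made up; in practice they'd come from measuring a neutral patch in each channel.

```python
import numpy as np

def white_balance_4ch(bayer, m_r=2.0, m_g1=1.0, m_g2=1.03, m_b=1.5):
    """Apply per-channel gains to an RGGB mosaic (float array)."""
    out = bayer.astype(np.float64).copy()
    out[0::2, 0::2] *= m_r     # R
    out[0::2, 1::2] *= m_g1    # G1
    out[1::2, 0::2] *= m_g2    # G2 (note: not forced equal to G1)
    out[1::2, 1::2] *= m_b     # B
    return out
```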
Interesting. Do you generate and use a four-channel color profile?

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
Do you generate and use a four-channel color profile?
Profiles can be applied to demosaiced data, right?
Yes. But one also has the option of interpolating an additional bit of chrominance information, so I was asking whether that was used or not.
Demosaicing "4 to 3" does not need 4-channel profiles. But pre-demosaicing linearization needs to be done in 4 independent channels.
What do you mean by linearization? More than just normalization of color channels? Again, what is puzzling me is what use could be made of the extra channel data that one doesn't have with the two greens being spectrally identical.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
one also has the option of interpolating an additional bit of chrominance information, so I was asking whether that was used or not.
It is used.
What do you mean by linearization? More than just normalization of color channels?
Flare and vignetting correction at bare minimum.
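A guess at what per-channel flare and vignetting correction could look like, modelling flare as a constant offset and vignetting as a per-pixel gain map, each measured independently for R, G1, G2 and B. The names and the simple offset-plus-gain model are assumptions, not how any particular converter does it.

```python
import numpy as np

def linearize(plane, flare_offset, vignette_gain):
    """Per-channel linearization: remove the offset, then undo falloff."""
    return (plane.astype(np.float64) - flare_offset) * vignette_gain

# Applied separately to each Bayer plane, e.g.:
#   g1_lin = linearize(g1_plane, flare_g1, gain_g1)
#   g2_lin = linearize(g2_plane, flare_g2, gain_g2)
```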
what is puzzling me is what use could be made of the extra channel data that one doesn't have with the two greens being spectrally identical.
You pointed at metamerism, right? A subtle difference in the spectral responses of the green channels makes it possible to significantly improve the differentiation of colours.

If you look at the EXIF of, say, the E-330, you will see that the sensitivity coefficients for both green channels are stated as equal. But in fact, if you use 4-channel WB, the difference can be as large as 0.05 EV (a factor of 2^0.05 ≈ 1.035, roughly 3.5%), depending on the light spectrum.

--
http://www.libraw.org/
 
The GH1 has the unique problem of clipping 99.4% of its raw data at black at ISO 100, which, I'd expect, would limit the amount of subject details you could salvage from the deepest shadows, as well as fool the wedge/Imatest method.
Fool it how? That should simply create a final step where you can no longer distinguish the next step. You may get an odd graph where the last step is less noisy than the next-to-last step, but I don't think that would affect the reported DR at specific noise thresholds.

--
Jay Turberville
http://www.jayandwanda.com
 
Well, it might be by design if there were some metamerism degeneracy that was resolved by the use of different filters.
Emil, I think you read that bit too quickly. Jay (as I understood it) was suggesting using the same dye but of different "thickness", as an alternative to using different gains (in case it was deliberate).
I was offering that as a possibility that would make it difficult for it to show up. But if there is no good design reason for such an approach, then I agree it is unlikely.

--
Jay Turberville
http://www.jayandwanda.com
 
The GH1 has the unique problem of clipping 99.4% of its raw data at black at ISO 100, which, I'd expect, would limit the amount of subject details you could salvage from the deepest shadows, as well as fool the wedge/Imatest method.
Fool it how?
It would reduce the noise amplitude (std. dev.), so the noise curves Imatest puts out would not be correct towards black. But maybe the wedge test never comes close to real black anyway?
That should simply create a final step where you can no longer distinguish the next step. You may get an odd graph where the last step is less noisy than the next-to-last step, but I don't think that would affect the reported DR at specific noise thresholds.
Yes, I suppose if the thresholds are conservative enough the effect will be outside their realm.
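A quick simulation of the effect: Gaussian read noise around a near-black signal, with negative excursions clipped to zero as described for the GH1. The standard deviation measured after clipping understates the true noise; all numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
signal, sigma = 2.0, 5.0                    # DN near black, read noise in DN
samples = signal + sigma * rng.normal(size=1_000_000)
clipped = np.clip(samples, 0, None)         # negative excursions lost at black

print(f"true sigma: {samples.std():.2f}   after black clipping: {clipped.std():.2f}")
```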

One thing I've been thinking about is that when we view the image at a smaller scale, tonal levels that were too noisy at 100% become acceptable. But how much of the subject detail that was clipped can we retrieve? The noise will help push some of it above the clipping threshold, but how much does that help?
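A toy experiment for that question: a set of levels entirely below the black-clip point, plus noise, clipped at zero, then averaged (as heavy downsizing would do). The averages still track the underlying levels, so noise does dither some sub-threshold detail through, though compressed and biased. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
true = np.linspace(-4.0, 0.0, 5)[:, None]             # 5 levels, all <= 0 DN
noisy = np.clip(true + 3.0 * rng.normal(size=(5, 200_000)), 0, None)
recovered = noisy.mean(axis=1)                        # per-level "downsized" value

for t, r in zip(true[:, 0], recovered):
    print(f"true {t:+.2f} -> recovered mean {r:.3f}")  # monotone, but nonlinear
```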
 
