Panasonic G9 vs G9M2 Normalised PDR Chart

Dark frames for read noise must be taken at the highest shutter speed in a dark room

either lens or body cap

shutter type needs to be the same
That requirement is pretty clearly shown by the variability in my posted crops due to light leakage! Obviously, shutter mode differences might be a significant factor as well, depending on the camera. I tested at 1/500 and 1/8000 for the mechanical shutter mode with my EM1iii and didn't see any consistent difference based on that variable alone.
 
Dark frames for read noise must be taken at the highest shutter speed in a dark room

either lens or body cap

shutter type needs to be the same
Why the requirement on shutter type? Didn't Cliff recently show no difference with the G9ii? Prior charts for the G9 also showed no difference… or it was not indicated.
 
Below (#1) is a dark frame of linear data from the G9MII, raised by +13 stops after black-level subtraction (a digital gain of 8192). Even after this amplification, the following DR headroom remains before saturation:

For the red channel: 1.78 EV

For the green channel: 2.43 EV

For the blue channel: 1.67 EV

Thus the scientific dynamic range for the G9MII has the following values (the sum of 13 EV and the remaining DR, which one can read in the Data Statistics panel of iWE in #1):

For the red channel: 13 EV + 1.78 EV = 14.78 EV

For the green channel: 13 EV + 2.43 EV = 15.43 EV

For the blue channel: 13 EV + 1.67 EV = 14.67 EV

The same for the G9, but with only +10 stops of amplification (a gain of 1024), is shown in #2.

The values of the dynamic range for the G9 are as follows:

For the red channel: 10 EV + 1.99 EV = 11.99 EV

For the green channel: 10 EV + 2.91 EV = 12.91 EV

For the blue channel: 10 EV + 2.00 EV = 12.00 EV
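
For anyone who wants to re-run the arithmetic, here is a minimal Python sketch that just reproduces the sums above (the push amounts and per-channel headroom values are the ones quoted from the iWE Statistics panel):

# Scientific DR = stops of digital push + EV headroom left before saturation,
# using the figures quoted above from the iWE Statistics panel.
push = {"G9II": 13, "G9": 10}  # stops of gain applied (2**13 = 8192, 2**10 = 1024)
headroom = {
    "G9II": {"R": 1.78, "G": 2.43, "B": 1.67},
    "G9":   {"R": 1.99, "G": 2.91, "B": 2.00},
}
for cam, channels in headroom.items():
    for ch, ev in channels.items():
        print(f"{cam} {ch}: {push[cam]} + {ev} = {push[cam] + ev:.2f} EV")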

Thus, the difference in dynamic range between the G9II and the G9 is about 2.5 stops for all the RGB channels. The huge DR of the G9II justifies the new 16-bit-per-channel raw format introduced by Panasonic.

The data you show cannot be considered reliable.

#1. Dark-frame linear data from the G9MII, compensated by +13 stops (EV)

#2. Dark-frame linear data from the G9, compensated by +10 stops (EV).
Thanks for the link and sorry for the delayed response. My request was prompted by the pretty significant difference in the appearance of the dark frames in your respective screenshots for the G9ii and G9 confirmed by my examination of the raws you kindly provided (especially considering that the G9ii rendering is pushed +3 EV relative to the G9 rendering). I wondered whether there might be some uncontrolled variables in play. My initial suspects:
  • ISO (100 for the G9ii and 200 for the G9, but that only accounts for a possible 1 EV difference at most);
  • Shutter mode (ES for the G9ii and mechanical for the G9);
  • Shutter speed (1/32000 for the G9ii and 1/500 for the G9, but both are fast enough to rule out meaningful impact from heat buildup);
  • Lens-related auto-distortion correction (the G9ii had no lens mounted and presumably was shot with the body cap on, while the G9 was shot with the Oly 12-100mm mounted and presumably with a lens cap on, but there are no visible signs of correlated noise patterns in the G9 screenshot indicative of distortion correction);
  • Possible difference in external light leak, most likely due to viewfinders not being consistently covered.
Not having the two cameras in question here to run my own tests and not really having had much personal experience with dark frames, I decided to experiment with my Oly EM1iii with changes to ISO, shutter mode, shutter speed and lens mounted/removed. Aside from the expected visible difference between ISO speeds, the impact of the other variables was minor and certainly not enough to account for such a large visible difference, with the exception of one variable: light leak. I was rather surprised by how large a difference it made when I failed to cover the viewfinder and shot a dark frame in my relatively darkened studio with only shaded natural light present. In retrospect, I should have realized that even a tiny amount of external light pollution would be quite meaningful when the dark frame is pushed in processing by 10 EV or more. What's more, the impact on the individual channel behavior is surprisingly variable. I'd be interested in your thoughts on the cause but I wonder whether it might have something to do with the relative position of the viewfinder to the light source sometimes causing a filtering effect on the light that leaks in via the viewfinder. Just a wild guess...

Back to the main question, can you confirm that the possibility of light leakage was truly excluded for both of the dark frames used in your comparison?

Below is a composite of crops from my EM1iii dark frame experiment. All were processed in ACR at default settings, except that the exposure slider was set to +5 EV for all, and sharpening and all noise reduction were zeroed out. In Photoshop, an additional +5 EV exposure adjustment layer was applied as well.

Top Left = ISO 200, 1/8000, no lens attached, viewfinder NOT shielded; Top Middle = ISO 200, 1/8000, no lens attached, viewfinder shielded; Top Right = ISO 200, 1/8000, lens attached, viewfinder shielded; Bottom Left = ISO 100, 1/500, no lens attached, viewfinder shielded; Bottom Middle = ISO 200, 1/32000 (ES), no lens attached, viewfinder shielded; Bottom Right = ISO 200, 1/500, no lens attached, viewfinder NOT shielded.
The data I have shown are reliable.

The lens cap with a shutter speed of 1/500 s is good enough to prevent light penetration, which is clearly visible from the data reported in the Statistics panel. As you can see from the Statistics panel of my previous analysis, the Mean value is 2056 for the G channel. Now, taking into account that +10 stops means a digital gain of 1024, we come to the iWE Normalized Mean value 2056/1024=2.01. Taking into account that the iWE normalized maximum is 8192, while the G9 ADC maximum, on account of the black-line shift (142), is 4098-142=3956, we get the iWE normalization coefficient for the G9 ADC: 8192/3956=2.07. Thus, the Normalized Mean value corresponds to a G9 ADC value of 2.01/2.07=0.97.

The 0.97 is less than the weight of the lowest bit of the ADC, and since the noise also contributes to the Mean value, it can be neglected (no light penetration!)
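
For readers following the arithmetic, here is a minimal Python sketch of this sanity check, using the figures quoted above (a later reply corrects 4098 to 4096, which does not change the conclusion):

# Light-leak sanity check using the figures quoted above.
mean_pushed = 2056            # G-channel Mean after the +10-stop push
gain = 2 ** 10                # +10 stops = digital gain of 1024
norm_max = 8192               # iWE normalized maximum
adc_max = 4098 - 142          # G9 ADC maximum minus black-line shift, as quoted
mean_norm = mean_pushed / gain        # ~2.01 on the iWE normalized scale
coeff = norm_max / adc_max            # ~2.07 normalization coefficient
mean_adc = mean_norm / coeff          # ~0.97 DN in original ADC counts
print(f"Residual mean = {mean_adc:.2f} DN (< 1 LSB, so no light penetration)")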

To illustrate the reproducibility and accuracy of iWE results, below is another example at a shutter speed of 1/8000 s, F-number 22, with the cap on a Lumix 12-35 lens (a different lens this time :) ), and the whole camera in a dark place.

As you can see from the statistics data:

R-channel dynamic range is 10+1.99=11.99

G-channel dynamic range is 10+2.91=12.91

B-channel dynamic range is 10+1.99=11.99

These data are extremely close to those I have already reported. The DR results are reproduced within an inaccuracy of about 0.01 EV, which is excellent!

BTW, the G-channel DR of 12.9 is already close to the theoretical maximum set by the ADC quantization noise caused by fluctuation of the lowest-weight bit. It is impossible to get more than 13.7 EV from a 12-bit ADC without narrowing the passband (averaging or using another filter). Averaging or filtering, of course, is forbidden in scientific measurements of DR, because it is a way to manipulate the DR.
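
That ceiling can be sanity-checked with a short sketch, assuming the limiting noise is the quantization noise of an ideal 12-bit converter (standard deviation 1/sqrt(12) LSB); the result lands near the figure quoted above, with the exact value depending on the assumed quantizer model:

import math
bits = 12
full_scale = 2 ** bits               # 4096 LSB
sigma_q = 1 / math.sqrt(12)          # ~0.289 LSB quantization noise
print(f"Quantization-limited DR ~ {math.log2(full_scale / sigma_q):.1f} EV")  # ~13.8 EV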

Adobe LR, ACR, and Photoshop cannot be used for the data analysis because they apply their own profiles with additional digital amplification, which can be nonlinear.

If you indeed want to work with the sensor data, then there is only one way: learning iWE :)

iWE allows controlling the whole process. For example, for correct DR measurements it is important either to exclude demosaicing or to use bilinear interpolation, in order to minimize additional noise (yes, demosaicing adds noise). To find the correct DR for the R and B channels, the white balance must be switched off to leave the original values as they were registered in the R and B channels. Of course, no lens correction or color-space matrix should be applied to the data.
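
As an illustration of measuring the channels without demosaicing or white balance, here is a minimal sketch that slices a Bayer mosaic into its four planes and reports per-channel statistics. The synthetic array stands in for real sensor data (with a real file, something like rawpy's raw_image could supply the mosaic), and an RGGB layout is assumed:

import numpy as np
rng = np.random.default_rng(0)
mosaic = rng.normal(254.0, 0.5, size=(1000, 1500))  # synthetic dark frame
planes = {                     # RGGB layout assumed for illustration
    "R":  mosaic[0::2, 0::2],
    "G1": mosaic[0::2, 1::2],
    "G2": mosaic[1::2, 0::2],
    "B":  mosaic[1::2, 1::2],
}
for name, p in planes.items():
    print(f"{name}: mean = {p.mean():.2f}, sigma = {p.std():.3f}")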



#1. SS=1/8000 s + cap on the lens. The camera is in a dark room.
 
[snipped out embedded text from prior responses]
The data I have shown are reliable.

The lens cap with a shutter speed of 1/500 s is good enough to prevent light penetration, which is clearly visible from the data reported in the Statistics panel. As you can see from the Statistics panel of my previous analysis, the Mean value is 2056 for the G channel. Now, taking into account that +10 stops means a digital gain of 1024, we come to the iWE Normalized Mean value 2056/1024=2.01. Taking into account that the iWE normalized maximum is 8192, while the G9 ADC maximum, on account of the black-line shift (142), is 4098-142=3956,
A quibble, but it should be 4096-142=3954 (although the G9 EXIF data also reports a linear limit of 4095 for whatever that's worth).
we get the iWE normalization coefficient for the G9 ADC: 8192/3956=2.07. Thus, the Normalized Mean value corresponds to a G9 ADC value of 2.01/2.07=0.97.

The 0.97 is less than the weight of the lowest bit of the ADC, and since the noise also contributes to the Mean value, it can be neglected (no light penetration!)

To illustrate the reproducibility and accuracy of iWE results, below is another example at a shutter speed of 1/8000 s, F-number 22, with the cap on a Lumix 12-35 lens (a different lens this time :) ), and the whole camera in a dark place.

As you can see from the statistics data:

R-channel dynamic range is 10+1.99=11.99

G-channel dynamic range is 10+2.91=12.91

B-channel dynamic range is 10+1.99=11.99

These data are extremely close to those I have already reported. The DR results are reproduced within an inaccuracy of about 0.01 EV, which is excellent!

BTW, the G-channel DR of 12.9 is already close to the theoretical maximum set by the ADC quantization noise caused by fluctuation of the lowest-weight bit. It is impossible to get more than 13.7 EV from a 12-bit ADC without narrowing the passband (averaging or using another filter). Averaging or filtering, of course, is forbidden in scientific measurements of DR, because it is a way to manipulate the DR.

Adobe LR, ACR, and Photoshop cannot be used for the data analysis because they apply their own profiles with additional digital amplification, which can be nonlinear.
I wasn't relying on anything I did in ACR and Photoshop for any "data analysis" I considered. Rather, I was using those tools to visualize the differences caused by different variables in the dark frame captures. Since all examples shown were processed with identical settings in ACR+PS, the visual differences between the displayed crops from my EM1iii are informative both for analytical purposes and, more importantly, from a photographic imaging perspective.

I'm well aware of what Adobe does to manipulate raws, so when I do camera comparisons of raws processed in ACR, I usually ensure that I've reset ACR to eliminate the hidden baseline exposure compensation and hidden non-linear tone curve. I also often utilize linear profiles I've generated for the specific cameras being compared. That's what I did in this instance, and the discrepancy between your G9 dark frame and an EM1iii dark frame I produced with identical ISO, shutter speed and lens-on settings was baffling. The processing results pretty clearly favored the EM1iii. That was unexpected because I didn't think the read noise levels of these two cameras actually favored the EM1iii. (I know you don't believe in PhotonstoPhotos because of how Claff calculates Photographic Dynamic Range, but his separate read noise charts show a slight advantage to the G9.) Below is a side-by-side of a 10 EV boost in PS applied to your G9 and G9ii dark frames plus my EM1iii dark frame, all three processed in ACR with the noted adjustments to minimize non-linear processing differences (Adobe Standard Profile applied to all three):

Left=G9; Middle=EM1iii; Right=G9ii. All three dark frames were processed in ACR with sharpening and noise reduction zeroed out.

Suffice it to say, the difference between the G9ii and the other two supports your proposition of better DR for the G9ii, but what got me scratching my head was the significant difference between the EM1iii and the G9.

To be clear here, I've contended all along (including in several exchanges with you) that a magenta color cast by itself is not indicative of increased noise levels at the raw level compared to other color casts or no color cast at all. (This can be confirmed by looking at the standard deviations for the respective color channels in Rawdigger, which in this instance are virtually identical in all 4 raw channels for the G9 dark frame.) As I've maintained, the color cast for the G9 can be corrected in processing and the visual effect significantly ameliorated, but even if the necessary curve adjustments are applied to the red and blue channels to bring them down to the green channel level, the overall noise and lightness level is still higher than the EM1iii's.

One more thing I didn't mention in my previous response is that I also did a "data analysis" comparing the raw DR for your G9 dark frame vs the raw DR for my EM1iii (also with a lens on, plus ISO 200 and 1/500 and carefully controlled to minimize light leakage). Using the standard deviation amounts reported by Rawdigger and calculating DR based on your engineering/scientific DR formula, the DR for the G9 was lower than the EM1iii's in all four channels by approximately 0.75 EV.
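
For concreteness, this is the calculation I mean, expressed as a small Python helper (the sigma value below is a placeholder, not a measurement from this thread; substitute the per-channel standard deviations Rawdigger reports):

import math
def dr_ev(white_level, black_level, sigma):
    # Engineering/scientific DR: log2 of (usable signal range / read-noise sigma)
    return math.log2((white_level - black_level) / sigma)
sigma_example = 0.9   # placeholder sigma in DN, for illustration only
print(f"Example: {dr_ev(4096, 142, sigma_example):.2f} EV")  # G9-style 12-bit levels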

All of these considerations are why I speculated there might be some relevant difference in how the two dark frames you used for your analysis were prepared. Based on my own quick experiment my speculation was that light leakage was a factor. You've now shot down that hypothesis. Care to speculate on why there's the difference between the G9 dark frame and my EM1iii? I'm genuinely perplexed.
If you indeed want to work with the sensor data, then there is only one way: learning iWE :)
It's not my fault you've been too lazy to release a Mac version. :-) Meanwhile, I'll continue to muddle through with Rawdigger, which doesn't entail any complications of interpolation to RGB, with or without application of white balance coefficients, etc.
iWE allows controlling the whole process. For example, for correct DR measurements it is important either to exclude demosaicing or to use bilinear interpolation, in order to minimize additional noise (yes, demosaicing adds noise). To find the correct DR for the R and B channels, the white balance must be switched off to leave the original values as they were registered in the R and B channels. Of course, no lens correction or color-space matrix should be applied to the data.

#1. SS=1/8000 s + cap on the lens. The camera is in a dark room.
 
While I am pleased to see the deep technical details considered in this thread, I just want to add what I believe to be important factors that seem to have been missed... though perhaps they were considered, just not mentioned...

For this data comparison to be reliable, you cannot base the comparison on data from a single camera, but need several identical ones (I would say an absolute minimum of 3, with serial numbers far apart from each other); by lab standards you would ideally need 25 identical cameras... but I understand if you do not test that many...

You would probably be surprised by the difference in the results...

And several shots should have been taken on each camera and compared under identical conditions.

Moreover, another factor I have not seen referred to is the ambient temperature of the test shoot, and whether both cameras were used to take shots cold, just after boot (with a rest of at least 6 hours in the room between shoots), and acclimated to the test-room temperature for at least 12 hours if they came from somewhere with a big temperature difference... to ensure comparable results.
 
[snipped out embedded text from prior responses]
The data I have shown are reliable.

The lens cap with a shutter speed of 1/500 s is good enough to prevent light penetration, which is clearly visible from the data reported in the Statistics panel. As you can see from the Statistics panel of my previous analysis, the Mean value is 2056 for the G channel. Now, taking into account that +10 stops means a digital gain of 1024, we come to the iWE Normalized Mean value 2056/1024=2.01. Taking into account that the iWE normalized maximum is 8192, while the G9 ADC maximum, on account of the black-line shift (142), is 4098-142=3956,
A quibble, but it should be 4096-142=3954 (although the G9 EXIF data also reports a linear limit of 4095 for whatever that's worth).
we get the iWE normalization coefficient for the G9 ADC: 8192/3956=2.07. Thus, the Normalized Mean value corresponds to a G9 ADC value of 2.01/2.07=0.97.

The 0.97 is less than the weight of the lowest bit of the ADC, and since the noise also contributes to the Mean value, it can be neglected (no light penetration!)

To illustrate the reproducibility and accuracy of iWE results, below is another example at a shutter speed of 1/8000 s, F-number 22, with the cap on a Lumix 12-35 lens (a different lens this time :) ), and the whole camera in a dark place.

As you can see from the statistics data:

R-channel dynamic range is 10+1.99=11.99

G-channel dynamic range is 10+2.91=12.91

B-channel dynamic range is 10+1.99=11.99

These data are extremely close to those I have already reported. The DR results are reproduced within an inaccuracy of about 0.01 EV, which is excellent!

BTW, the G-channel DR of 12.9 is already close to the theoretical maximum set by the ADC quantization noise caused by fluctuation of the lowest-weight bit. It is impossible to get more than 13.7 EV from a 12-bit ADC without narrowing the passband (averaging or using another filter). Averaging or filtering, of course, is forbidden in scientific measurements of DR, because it is a way to manipulate the DR.

Adobe LR, ACR, and Photoshop cannot be used for the data analysis because they apply their own profiles with additional digital amplification, which can be nonlinear.
I wasn't relying on anything I did in ACR and Photoshop for any "data analysis" I considered. Rather, I was using those tools to visualize the differences caused by different variables in the dark frame captures. Since all examples shown were processed with identical settings in ACR+PS, the visual differences between the displayed crops from my EM1iii are informative both for analytical purposes and, more importantly, from a photographic imaging perspective.
Bear in mind that even the correct visualization of dark frames is problematic with ACR and Photoshop. Nobody knows what happens inside this "black box".
I'm well aware of what Adobe does to manipulate raws, so when I do camera comparisons of raws processed in ACR, I usually ensure that I've reset ACR to eliminate the hidden baseline exposure compensation and hidden non-linear tone curve. I also often utilize linear profiles I've generated for the specific cameras being compared. That's what I did in this instance, and the discrepancy between your G9 dark frame and an EM1iii dark frame I produced with identical ISO, shutter speed and lens-on settings was baffling. The processing results pretty clearly favored the EM1iii.
If you give me the black frames from your EM1III, then I can quickly compare its DR with the G9's.

I need black frames at SS=1/500 s and maximal F-number + black cap (lens type is not important; you can cover the viewfinder, but I think it is important only for DSLRs). Go to manual focus and take shots at the following ISOs.

200, 400, 800, 1600, 3200, 6400, 12800, 25600

I already have the same data for the G9.
That was unexpected because I didn't think the read noise levels of these two cameras actually favored the EM1iii. (I know you don't believe in PhotonstoPhotos because of how Claff calculates Photographic Dynamic Range, but his separate read noise charts show a slight advantage to the G9.) Below is a side-by-side of a 10 EV boost in PS applied to your G9 and G9ii dark frames plus my EM1iii dark frame, all three processed in ACR with the noted adjustments to minimize non-linear processing differences (Adobe Standard Profile applied to all three):
Left=G9; Middle=EM1iii; Right=G9ii. All three dark frames were processed in ACR with sharpening and noise reduction zeroed out.

Suffice it to say, the difference between the G9ii and the other two supports your proposition of better DR for the G9ii, but what got me scratching my head was the significant difference between the EM1iii and the G9.

To be clear here, I've contended all along (including in several exchanges with you) that a magenta color cast by itself is not indicative of increased noise levels at the raw level compared to other color casts or no color cast at all. (This can be confirmed by looking at the standard deviations for the respective color channels in Rawdigger, which in this instance are virtually identical in all 4 raw channels for the G9 dark frame.)
My study shows that the magenta indeed comes from higher mean values in the R and B channels. These higher non-zero mean values come from higher read noise. Of course, one can apply an additional offset to the R and B channels and subtract these mean levels, removing the magenta. You will get the visual effect, but this will not improve the true DR. Higher DR gives more room for light, when the "light-line" level is above the "noise-line" level. In the given case you are simply modifying the black-line level and performing a specific color-noise-reduction procedure known as the discrimination method. But in the presence of light you will also shift down the "light-line" level, which can end up even lower than the mean level you push down.

The black-line level is measured by special masked pixels, and I have found that the black-line level measurements are reliable and should not be touched if you are measuring the DR.

BTW, the better DR can be visualized on real-life images taken in low light with sensors of different pixel sizes, if the condition of the same exposure per pixel is satisfied.

As I've maintained, the color cast for the G9 can be corrected in processing and the visual effect significantly ameliorated, but even if the necessary curve adjustments are applied to the red and blue channels to bring them down to the green channel level, the overall noise and lightness level is still higher than the EM1iii's.

One more thing I didn't mention in my previous response is that I also did a "data analysis" comparing the raw DR for your G9 dark frame vs the raw DR for my EM1iii (also with a lens on, plus ISO 200 and 1/500 and carefully controlled to minimize light leakage). Using the standard deviation amounts reported by Rawdigger and calculating DR based on your engineering/scientific DR formula, the DR for the G9 was lower than the EM1iii's in all four channels by approximately 0.75 EV.

All of these considerations are why I speculated there might be some relevant difference in how the two dark frames you used for your analysis were prepared. Based on my own quick experiment my speculation was that light leakage was a factor. You've now shot down that hypothesis. Care to speculate on why there's the difference between the G9 dark frame and my EM1iii? I'm genuinely perplexed.
If you indeed want to work with the sensor data, then there is only one way: learning iWE :)
It's not my fault you've been too lazy to release a Mac version. :-) Meanwhile, I'll continue to muddle through with Rawdigger, which doesn't entail any complications of interpolation to RGB, with or without application of white balance coefficients, etc.
iWE allows controlling the whole process. For example, for correct DR measurements it is important either to exclude demosaicing or to use bilinear interpolation, in order to minimize additional noise (yes, demosaicing adds noise). To find the correct DR for the R and B channels, the white balance must be switched off to leave the original values as they were registered in the R and B channels. Of course, no lens correction or color-space matrix should be applied to the data.

#1. SS=1/8000 s + cap on the lens. The camera is in a dark room.
 
I wasn't relying on anything I did in ACR and Photoshop for any "data analysis" I considered. Rather, I was using those tools to visualize the differences caused by different variables in the dark frame captures. Since all examples shown were processed with identical settings in ACR+PS, the visual differences between the displayed crops from my EM1iii are informative both for analytical purposes and, more importantly, from a photographic imaging perspective.
Bear in mind that even the correct visualization of dark frames is problematic with ACR and Photoshop. Nobody knows what happens inside this "black box".
We don't know the exact demosaicing algorithm, but otherwise we know a lot about the internals of ACR and PS. We can linearize output from ACR. We can also substitute our own profiles if preferred; but regardless of that option, the "standard" Adobe profiles (Adobe Standard and Adobe Color) are sufficiently consistent between cameras to make them adequate for the kind of visualization-based comparisons we're doing here. Remember, I'm not basing my observations on these visualizations alone. However, to humor you, I'm switching to Rawdigger for the visualizations below. That's in addition to continuing to rely on Rawdigger for the unadulterated raw data analysis, which I contend is less "black box" than the interpolated and white balanced data displayed and used for DR calculations in IWE.
I'm well aware of what Adobe does to manipulate raws, so when I do camera comparisons of raws processed in ACR, I usually ensure that I've reset ACR to eliminate the hidden baseline exposure compensation and hidden non-linear tone curve. I also often utilize linear profiles I've generated for the specific cameras being compared. That's what I did in this instance, and the discrepancy between your G9 dark frame and an EM1iii dark frame I produced with identical ISO, shutter speed and lens-on settings was baffling. The processing results pretty clearly favored the EM1iii.
If you give me the black frames from your EM1III, then I can quickly compare its DR with the G9's.
No need for that. See below.
I need black frames at SS=1/500 s and maximal F-number + black cap (lens type is not important; you can cover the viewfinder, but I think it is important only for DSLRs).
That observation about OVFs vs. EVFs is a fair point and it got me thinking that maybe I was missing something in my initial black frame testing with my EM1iii. I wondered whether the source of light leakage could be the EVF itself when turned on. I repeated the black frame tests and eventually stumbled on the real reason I was seeing radically different color casts with my EM1iii. The bottom line is that it was just a coincidence that the blue-tinted and magenta-tinted black frame shots from my initial test happened to occur when I blocked the viewfinder. In my subsequent testing done in a windowless pitch-black room with the EVF off, shutter speed set to 1/8000 (mechanical) and body cap on, about half the test black frames I shot were tinted green, about a quarter were tinted magenta and the other quarter were tinted blue when visualized in either Rawdigger or ACR! There was no correlation to any possible light leakage.

At first, I was really baffled and worried that there was something seriously wrong with my camera. Then I dug into the shots more deeply using Rawdigger to see what was really going on with these varying black frames. I think it will be easier for readers to conceptualize the issue by including Rawdigger screen grabs from one of the "magenta" EM1iii black frames. [Sergeui, please note that I'm sure you fully understand the math and related details to follow. The explanation is aimed at other readers and feel free to correct me if I stumble.] In all of the Rawdigger screen grabs below, I've set brightness to +3 to make things easier to visualize. Just bear in mind that the brightness setting does not affect at all any of the data reported in the header part of the screen shots. Also, no raw profile is selected. Black level setting and white balance setting for each screen shot is specified just below the image. Let's start with this one:

EM1iii; black level applied as per image EXIF (253, 254, 253, 254); white balance set to "As Shot"

Looks pretty magenta, doesn't it? This is confirmed by the mean ("Avg") values shown at the top for the four channels. Clearly, the red and blue mean values are higher than the two green values. That alone would explain a magenta cast, but the story is more complicated than that. In fact, when no black level is applied to the file, the unadulterated mean values for all four channels are extremely close:

EM1iii; black level NOT applied; WB is still set to As Shot

As you can see, there is no longer a significant imbalance between the R and B channels vs the G channels in either mean (AVG) or standard deviation (SD) values. Clearly, then, the problem must have been introduced by the application of the black levels seen in the preceding screen grab. And if we look a bit closer at the specific black levels that are assigned, we can see that they are set to 254 for the G channels and 253 for the R and B channels. Those values are determined by the camera itself and written to the EXIF header. (Note: if you're wondering why the color has shifted to pink rather than gray despite all channel AVGs being virtually identical, please pin that question until I show and discuss the G9ii screen grabs below.)

Back to the EM1iii varying color cast issue, look what happens when I manually set the black level for all four channels to 254:

Same EM1iii black frame as above; black level applied uniformly to 254 for all 4 channels; WB set to As Shot

It turns out that all of the green-tinted EM1iii black frames that I shot had black levels set to 254 for all four channels. The blue-tinted ones had the B channel black level set to 253 and the magenta-tinted ones had both the R and B channel black level set to 253. The green channels in all of my test black frames were always auto-set to 254.

Exactly why the camera sometimes switches away from the all-254 setting is unclear to me. I know that it's derived somehow from the readouts of the optical black pixels, but what camera-specific or environmental conditions cause the fluctuation in the very controlled settings that were present during my second round of testing is a mystery to me. Perhaps it's as simple as random variation in the readout of the optical black pixels, combined with the bit-depth limitation of the camera, causing it to toggle between 254 and 253 when more precision (something "between" 253 and 254) is what's actually needed. If so, this is just a consequence of the EM1iii being a 12-bit camera. Probably 14-bit and certainly 16-bit cameras like the G9ii wouldn't run into this issue (assuming quantization error is really a factor here). Bear in mind, though, that the impact of this problem only becomes visible when dealing with very dark images (or "black" frames like this one) that require significant shadow pushes.
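
To make that quantization hypothesis concrete, here is a small simulation sketch (all values are illustrative assumptions, not measurements): if the true black point sits between two integer DNs, subtracting 253 instead of 254 leaves a residual mean that is a fraction of a DN higher, which becomes hundreds of counts after a +10-stop push.

import numpy as np
rng = np.random.default_rng(1)
true_black, sigma = 253.5, 0.5          # assumed true black point and read noise
raw = np.round(rng.normal(true_black, sigma, size=1_000_000))
for black_level in (253, 254):
    residual = np.clip(raw - black_level, 0, None)   # negatives clip to 0
    print(f"black = {black_level}: residual mean = {residual.mean():.3f} DN")
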
[snip]

My study shows that the magenta indeed comes from higher mean values in the R and B channels. These higher non-zero mean values come from higher read noise. Of course, one can apply an additional offset to the R and B channels and subtract these mean levels, removing the magenta. You will get the visual effect, but this will not improve the true DR.
This is where we part ways once again. The magenta color cast is, of course, nominally due to the higher mean values in the R and B channels. That's not in dispute. The question is: WHY ARE THE R AND B DN VALUES HIGHER TO BEGIN WITH? I showed above one scenario that causes an imbalance among the four channels - namely, less than ideally set black levels. Let's return to the hypothetical question I "pinned" earlier in this post by looking at your G9ii black frame. First the rendering with black levels applied:

G9ii black frame; black level applied as per image EXIF (2048); white balance set to "As Shot"

The channel AVGs are better balanced than the EM1iii with black levels applied, but they are tilted toward the blue and the rendering shows a pretty obvious purplish/magenta cast. Your own screen shot from IWE shows the same purplish/magenta cast. So what gives? Let's check what happens when we look at the unadulterated version with no black level adjustment applied:

Same G9ii black frame; black level OFF; white balance set to "As Shot"

What's interesting here is - just like we saw with the EM1iii with no black level adjust - all four channel AVG values are very closely balanced (as are the SDs, of course). Yet, despite nearly identical AVGs for all channels, the rendering appears pinkish (just like the EM1iii). Now, let's look at one more rendering of the same G9ii black frame:

Same G9ii; black level applied as per image EXIF (2048); white balance set to AUTO

The ONLY difference between this more neutral rendering and the purplish/magenta one shown above is that the white balance has been switched from "As Shot" to Rawdigger's "Auto" WB. So, obviously, WB has a critical impact on the presence of an apparent color cast, but what exactly is going on?

To begin with, unless some kind of masking is utilized, the R-specific and B-specific WB coefficients are applied to all R and B pixel values in the image, regardless of how light or dark the individual R and B pixels are. It makes sense to apply the coefficients to correct for the differences in color responsivity caused by the different wavelengths of light passing through the color filters on top of the pixels. Since the R and B pixels are less responsive to light than G pixels, they end up generating lower DNs in the raw files, so the WB operation multiplies these DNs by the WB coefficients in order to prevent the RGB image output by the raw converter from having an overly strong green color cast. However, what does NOT make sense is to apply this WB multiplying effect to pixels that received no light. Ideally, there would be a tapering off of the WB multiplier for any very dark pixels in which read noise plays a significant role in setting the DN value of the pixel. The failure to taper the WB effect on read-noise-dominated pixels will cause the overall average of these very dark pixels to have an inappropriate magenta color cast. The actual saturation and hue of the cast will depend on several factors:
  • How much of a role read noise plays in establishing the average lightness of these dark pixels.
  • How strong the WB effect is.
  • How well-optimized the black level setting is that gets applied to these darkest pixels (e.g., the black levels for my EM1iii aren't particularly well optimized, since the starting mean values after black levels are applied are already unbalanced and tilted toward green, blue or magenta.)*
The bottom line here is that the pinkish color cast seen in the non-black leveled renderings is caused by application of WB to the black frame raws, NOT because the R and B raw channels are inherently more noisy than the G channels. Similarly, huge exposure pushes to very deep shadows/very underexposed shots will have the perverse effect of adding an inappropriate magenta cast to the image. The corrective action to take is to back out the effect by a tone curve adjustment targeted at just these inappropriately affected deep shadows. ACR and LR include a special adjustment slider (in the Calibration tab) that may suffice to ameliorate the problem, but my experience is that it's often necessary to use that adjustment very modestly and to supplement it with curve adjustments in the red and blue channels to re-equalize them with the green channel (plus also address any other errors introduced by the black level subtraction step).
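
The mechanism is easy to demonstrate with a toy simulation (the WB multipliers below are assumed, daylight-ish values for illustration only): identical clipped noise in all channels, yet after white balance the R and B averages rise above G, which is exactly a magenta shift.

import numpy as np
rng = np.random.default_rng(2)
# Identical read noise in every channel, negatives clipped at the black level
noise = {ch: np.clip(rng.normal(0.0, 0.6, 500_000), 0, None) for ch in "RGB"}
wb = {"R": 2.0, "G": 1.0, "B": 1.6}     # assumed WB multipliers
for ch in "RGB":
    print(f"{ch}: raw mean = {noise[ch].mean():.3f}, after WB = {noise[ch].mean() * wb[ch]:.3f}")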

The foregoing is what I've been hammering at now for months whenever the magenta cast issue and its real proximate cause is brought up. THERE REALLY IS NO MEANINGFUL DIFFERENCE IN READ NOISE LEVELS AT THE RAW LEVEL (with the possible exception of some PDAF pixel implementations) because there is no relevant difference in how CMOS pixels are designed and fabbed for each of the four Bayer raw color channels. The circuitry and silicon for one pixel should be identical to every other active pixel on the sensor, regardless of which color channel it is associated with. Remember: we're talking about read noise, which is noise added by the electronics, not any noise associated with light hitting the sensor. Since there is no light involved here (we're talking now just about black frames generated in pitch-black conditions), there are no complications and no channel-specific variability generated by differing responsivity to specific wavelengths of light absorbed by the pixels based on the color filters that sit atop them. I'm not aware of any reason to expect read noise behavior to be correlated with color channel at this level. The correlation is introduced later in the processing chain, as has been demonstrated with the visualizations and corresponding raw data above.

Furthermore, by waiting until later in the processing chain to extract the standard deviation data needed to calculate DR, you're adding your own version of what you've called a "black box". IWE appears to perform some kind of white balance operation in addition to the interpolation of the four raw channel data into three channels. This is bound to be more confounding than performing the measurements at the front end (as can be done with Rawdigger). For instance, any reasonable type of interpolation of two green channels into one is bound to reduce the standard deviation for the single green channel relative to the red and blue channels. Of course, the red and blue channels will seem to be at least slightly more noisy as a result. And that's before we get to the undesirable WB effect on the red and blue channels of a black frame, which also increases their apparent noisiness relative to the green channel.

________________________

*Beyond the specific black level problem noted for the EM1iii, there's another way in which application of black levels can adversely affect the post-subtraction DN averages. Black level subtraction is performed on the raw digital numbers (DNs) using simple arithmetic: a whole number is subtracted from every DN in the raw file. Since some DNs will have starting values less than the black level value (e.g., less than 254 or 253 for the EM1iii, 142 for the G9, and 2048 for the G9ii), fully subtracting the black level from these smaller DNs would result in negative values (DNs less than 0), which isn't allowed. Instead, these smaller DNs are all set to 0, which means that after the subtraction they are lighter relative to other values than they were before it, which isn't ideal.
Higher DR gives more room for light, when the "light-line" level is above the "noise-line" level. In the given case you are simply modifying the black-line level and performing a specific color-noise-reduction procedure known as the discrimination method. But in the presence of light you will also shift down the "light-line" level, which can end up even lower than the mean level you push down.

The black-line level is measured by special masked pixels, and I have found that the black-line level measurements are reliable and should not be touched if you are measuring the DR.
Based on my findings - at least, with respect to my personal EM1iii as described above - in-camera black level settings aren't always reliable. I have read posts by others complaining about inconsistent black level settings in their cameras, so I rather doubt that the problem is unique to me.
BTW, the better DR can be visualized on real-life images taken in low light with sensors of different pixel sizes, if the condition of the same exposure per pixel is satisfied.
Admittedly, based on my DR measurements derived from the raw data reported by Rawdigger, I really don't have that much of a dispute with your DR measurements. The difference in our respective calculations is only about 1/3 EV. I do hope, however, we can get past the continued promotion of a confused and simplistic correlation of color cast and read noise and a dismissiveness of methods and tools that don't exactly match your preferred DR metric and tool. Unfortunately, this post is already way too long and detailed, so I'll stop here, catch my breath and post separately a comparison of very low light G9 and G9ii test shots helpfully provided by jrsforums and hopefully enlightening about which DR metric will be of more practical use to photographers interested in comparing these cameras.
 
I wasn't relying on anything I did in ACR and Photoshop for any "data analysis" I considered. Rather, I was using those tools to visualize the differences caused by different variables in the dark frame captures. Since all examples shown were processed with identical settings in ACR+PS, the visual differences between the displayed crops from my EM1iii are informative both for analytical purposes and, more importantly, from a photographic imaging perspective.
Bear in mind that even the correct visualization of dark frames is problematic with ACR and Photoshop. Nobody knows what happens inside this "black box".
We don't know the exact demosaicing algorithm, but otherwise we know a lot about the internals of ACR and PS. We can linearize output from ACR. We can also substitute our own profiles if preferred; but regardless of that option, the "standard" Adobe profiles (Adobe Standard and Adobe Color) are sufficiently consistent between cameras to make them adequate for the kind of visualization-based comparisons we're doing here. Remember, I'm not basing my observations on these visualizations alone. However, to humor you, I'm switching to Rawdigger for the visualizations below. That's in addition to continuing to rely on Rawdigger for the unadulterated raw data analysis, which I contend is less "black box" than the interpolated and white balanced data displayed and used for DR calculations in IWE.
I'm well aware of what Adobe does to manipulate raws, so when I do camera comparisons of raws processed in ACR, I usually ensure that I've reset ACR to eliminate the hidden baseline exposure compensation and hidden non-linear tone curve. I also often utilize linear profiles I've generated for the specific cameras being compared. That's what I did in this instance, and the discrepancy between your G9 dark frame and an EM1iii dark frame I produced with identical ISO, shutter speed and lens-on settings was baffling. The processing results pretty clearly favored the EM1iii.
If you are aware of what Adobe does to manipulate raws, then try to answer a simple question: "What are the data registered by your EM1III sensor to display gray-white (R, G, B)=(128, 128, 128) on your display?"

As the RawDigger statistics show (your data below), the EM1III DR is very close to that of the G9 (of course, within the iWE inaccuracy (0.02 EV) I have already reported and, unfortunately, the unknown inaccuracy of the RawDigger statistics).
If you give me the black frames from your EM1III, then I can quickly compare its DR with the G9's.
No need for that. See below.
I need black frames at SS=1/500 s and maximal F-number + black cap (lens type is not important; you can cover the viewfinder, but I think it is important only for DSLRs).
That observation about OVFs vs. EVFs is a fair point and it got me thinking that maybe I was missing something in my initial black frame testing with my EM1iii. I wondered whether the source of light leakage could be the EVF itself when turned on. I repeated the black frame tests and eventually stumbled on the real reason I was seeing radically different color casts with my EM1iii. The bottom line is that it was just a coincidence that the blue-tinted and magenta-tinted black frame shots from my initial test happened to occur when I blocked the viewfinder. In my subsequent testing done in a windowless pitch-black room with the EVF off, shutter speed set to 1/8000 (mechanical) and body cap on, about half the test black frames I shot were tinted green, about a quarter were tinted magenta and the other quarter were tinted blue when visualized in either Rawdigger or ACR! There was no correlation to any possible light leakage.

At first, I was really baffled and worried that there was something seriously wrong with my camera. Then I dug into the shots more deeply using Rawdigger to see what was really going on with these varying black frames. I think it will be easier for readers to conceptualize the issue by including Rawdigger screen grabs from one of the "magenta" EM1iii black frames. [Sergeui, please note that I'm sure you fully understand the math and related details to follow. The explanation is aimed at other readers and feel free to correct me if I stumble.] In all of the Rawdigger screen grabs below, I've set brightness to +3 to make things easier to visualize. Just bear in mind that the brightness setting does not affect at all any of the data reported in the header part of the screen shots. Also, no raw profile is selected. Black level setting and white balance setting for each screen shot is specified just below the image. Let's start with this one:

EM1iii; black level applied as per image EXIF (253, 254, 253, 254); white balance set to "As Shot"
Well, let us look at the data reported by Rawdigger.

Standard deviation (sigma) is as follows:

For the R channel, sigma is 0.684

For the G channels (G and G2), sigma is 0.440 and 0.479, respectively; thus the average is 0.460.

For the B channel, sigma is 0.668.

Because the sigma for the R and B channels is higher than for the G channel, the magenta shown by RawDigger is the correct black-frame visualization. Bear in mind that the mean values are of secondary significance after the black-level subtraction, because these non-zero mean values arise from the noise, which has only a positive sign after the black-level subtraction (negative values are assigned to zero, while the positive fluctuations (noise) form the mean value).

Let us calculate the DR of your EM1 and compare it with my G9 data.

For the R channel we have log2((4096-253)/0.684)=12.46 EV (R-channel G9 DR = 11.99 EV)

For the G channel, log2((4096-254)/0.460)=13.0 EV (G9 DR = 12.91 EV)

For the B channel, log2((4096-253)/0.668)=12.49 EV (B-channel G9 DR = 12.0 EV)

As we can see, for the green channel the EM1 DR is 13.0 EV vs 12.9 EV, which is almost the same. For R and B, the EM1III shows an advantage of about 0.5 EV compared to the G9. Unfortunately, we know nothing about RawDigger's demosaicing (whether demosaicing was applied by RawDigger before the statistics measurements; in the case of iWE, linear interpolation was used, and if, for example, the linear interpolation (demosaicing) is turned off in iWE, then the G-channel G9 DR is about 13.1 EV).
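
For transparency, the per-channel arithmetic above can be reproduced directly in Python (the levels and sigmas are the ones quoted from the RawDigger statistics):

import math
channels = {
    "R": (253, 0.684),
    "G": (254, 0.460),   # average of G (0.440) and G2 (0.479)
    "B": (253, 0.668),
}
for ch, (black, sigma) in channels.items():
    print(f"{ch}-channel DR = {math.log2((4096 - black) / sigma):.2f} EV")
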
Looks pretty magenta, doesn't it? This is confirmed by the mean ("Avg") values shown at the top for the four channels. Clearly, the red and blue mean values are higher than the two green values. That alone would explain a magenta cast, but the story is more complicated than that. In fact, when no black level is applied to the file, the unadulterated mean values for all four channels are extremely close:
It is you who complicates the story :)

It is important to understand the role of the black level and how its inaccuracy influences the black-frame (BF) visualization.

For the BF visualization, a very strong digital amplification is applied (more than +10 stops is necessary when the DR is about 13 EV). +10 stops means that the data are multiplied by 1024. Now, imagine that you have an error of just one lowest bit (1). After the amplification, this one-bit error becomes 1024 counts on the linear data scale. To visualize these extra 1024 counts, the value is then converted to the nonlinear 0-255 RGB scale. This is exactly what I have already tried to explain to you.
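
The scale of the effect is easy to verify with a two-line sketch (a one-LSB error is a quarter of the full 12-bit range after a +10-stop push):

error_dn = 1                   # one lowest-bit error
pushed = error_dn * 2 ** 10    # +10 stops -> 1024 counts on the linear scale
print(pushed, pushed / 4096)   # 1024 counts = 25% of full scale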

Sure, the four channels should be close, because the mean values in this case are just the black-level values measured by RawDigger. And, of course, there is some measurement error in this case as well. Let us assume that this error corresponds to the lowest decimal digit in the shown data.
EM1iii; black level NOT applied; WB is still set to As Shot

As you can see, there is no longer a significant imbalance between the R and B channels vs the G channels in either mean (AVG) or standard deviation (SD) values. Clearly, then, the problem must have been introduced by the application of the black levels seen in the preceding screen grab.
Clearly, you are drawing an incorrect conclusion. Why have you decided that the black levels from the previous RawDigger screen are problematic?

Let us look at what the statistics in this screen tell us.

First, the measured black-level offset for the G and G2 channels is 253.9 and 254.0, which is in very good agreement with the in-camera-measured value of 254. The mean values for the R and B channels are 253.7 and 253.5, respectively. What is of crucial importance for the black-frame visualization is that the R and B mean values are lower than the G mean values. This difference of about 0.4, after subtraction and a digital amplification of about +10 stops, results in a linear-data difference of about 400 counts and a magenta color, because the noise in the R and B channels is higher.
And if we look a bit closer at the specific black levels that are assigned, we can see that they are set to 254 for the G channels and 253 for the R and B channels. Those values are determined by the camera itself and written to the EXIF header. (Note: if you're wondering why the color has shifted to pink rather than gray despite all channel AVGs being virtually identical, please pin that question until I show and discuss the G9ii screen grabs below.)
Exactly! This means that the RawDigger black-level measurements are in good agreement with the in-camera measurements.
Back to the EM1iii varying color cast issue, look what happens when I manually set the black level for all four channels to 254:
But using 254 for all the channels is a huge error for the black-frame visualization. As I pointed out above, an error of about 0.4 becomes 400 counts after a +10-stop digital amplification of the linear data. What you are doing is just "discriminative color denoising": you are pushing the R and B noise further toward negative values and zero.

In fact, unfortunately, you neglected or did not understand my comments in the previous post. Everything you say below with respect to black-frame visualization relates only to this "discriminative color denoising" and has no relation to the true value of the noise, which indeed defines the color of the visualized black frame.
Same EM1iii black frame as above; black level applied uniformly to 254 for all 4 channels; WB set to As Shot

It turns out that all of the green-tinted EM1iii black frames that I shot had black levels set to 254 for all four channels. The blue-tinted ones had the B channel black level set to 253 and the magenta-tinted ones had both the R and B channel black level set to 253. The green channels in all of my test black frames were always auto-set to 254.

Exactly why the camera sometimes switches away from the all-254 setting is unclear to me. I know that it's derived somehow from the readouts of the optical black pixels, but what camera-specific or environmental conditions cause the fluctuation in the very controlled settings that were present during my second round of testing is a mystery to me. Perhaps it's as simple as random variation in the readout of the optical black pixels, combined with the bit-depth limitation of the camera, causing it to toggle between 254 and 253 when more precision (something "between" 253 and 254) is what's actually needed. If so, this is just a consequence of the EM1iii being a 12-bit camera. Probably 14-bit and certainly 16-bit cameras like the G9ii wouldn't run into this issue (assuming quantization error is really a factor here). Bear in mind, though, that the impact of this problem only becomes visible when dealing with very dark images (or "black" frames like this one) that require significant shadow pushes.
There is no issue. The "issue" is created by you. One needs to properly understand the path the linear fluctuating data (noise) must pass through before they are visualized. Also, is the visualization itself really what matters for DR measurements? It is just a qualitative characteristic.

What must be used for DR are the statistics! An accuracy of 254-253=1 is good enough for statistics and DR measurements. Also, in iWE all calculations are in floating-point format. There is no absolute accuracy; all measurements have some inaccuracy.
[snip]

My study shows that the magenta indeed comes from higher mean values in the R and B channels. These higher non-zero mean values come from higher read noise. Of course, one can apply an additional offset to the R and B channels and subtract these mean levels, removing the magenta. You will get the visual effect, but this will not improve the true DR.
This is where we part ways once again. The magenta color cast is, of course, nominally due to the higher mean values in the R and B channels. That's not in dispute. The question is: WHY ARE THE R AND B DN VALUES HIGHER TO BEGIN WITH?
It is simple and has been explained many times. After subtracting the black level we have just the noise near the zero line. These noise fluctuations are always positive and generate a positive mean value. The higher the noise, the higher the mean value.
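
A quick simulation sketch of this point, using the G- and R-channel sigmas quoted earlier (for zero-mean Gaussian noise clipped at zero, the surviving mean works out to sigma/sqrt(2*pi), i.e. about 0.4*sigma):

import numpy as np
rng = np.random.default_rng(3)
for sigma in (0.460, 0.684):   # G- and R-channel sigmas quoted above
    clipped = np.clip(rng.normal(0.0, sigma, 1_000_000), 0, None)
    print(f"sigma = {sigma}: clipped mean = {clipped.mean():.3f} DN")
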
I showed above one scenario that causes an imbalance among the four channels - namely, less than ideally set black levels. Let's return to the hypothetical question I "pinned" earlier in this post by looking at your G9ii black frame. First the rendering with black levels applied:

G9ii black frame; black level applied as per image EXIF (2048); white balance set to "As Shot"
OK. Let us find the DR of the G9II from the RawDigger statistics.

From the screenshot, the standard deviations (sigma) for the G9II are as follows:

R-channel: sigma_R = 1.79

G-channel: sigma_G = 1.84, sigma_G2 = 2.05; the average value is 1.95.

B-channel: sigma_B = 2.02

Now, taking into account that the maximum value measured by the G9II photodiodes is 65536-2048=63488, let us calculate the dynamic range of the G9II.

R-channel: log2(63488/1.79)=15.1 EV (iWE value 14.8 EV)

G-channel: log2(63488/1.95)=15.0 EV (iWE value 15.4 EV)

B-channel: log2(63488/2.02)=14.9 EV (iWE value 14.7 EV)

The results from RawDigger are indeed very close to those from iWE. The maximal difference, which reaches about 0.4 EV, can be associated with the demosaicing method (or the absence of demosaicing) used in RawDigger. Also, the accuracy of the statistics themselves is unknown in the case of RawDigger.

I do not want to discuss your text below (it contains misleading conclusions for already mentioned reasons - for example, your conclusions that in-camera black-level measurements are not reliable because of your "findings" is wrong). All important I have already said above.

The principal conclusion is that your RawDigger data just confirmed the iWE results. The DR of the G9II is significantly higher than the DR-G9 and DR-EM1III.

The channel AVGs are better balanced than the EM1iii with black levels applied, but they are tilted toward the blue and the rendering shows a pretty obvious purplish/magenta cast. Your own screen shot from IWE shows the same purplish/magenta cast. So what gives? Let's check what happens when we look at the unadulterated version with no black level adjustment applied:

Same G9ii black frame; black level OFF; white balance set to "As Shot"
Same G9ii black frame; black level OFF; white balance set to "As Shot"

What's interesting here is - just like we saw with the EM1iii with no black level adjust - all four channel AVG values are very closely balanced (as are the SDs, of course). Yet, despite nearly identical AVGs for all channels, the rendering appears pinkish (just like the EM1iii). Now, let's look at one more rendering of the same G9ii black frame:

Same G9ii; black level applied as per image exif (2048); white balance set to AUTO
Same G9ii; black level applied as per image exif (2048); white balance set to AUTO

The ONLY difference between this more neutral rendering and the purplish/magenta one shown above is that the white balance has been switched from "As Shot" to Rawdigger's "Auto" WB. So, obviously, WB has a critical impact on the presence of an apparent color cast, but what exactly is going on?

To begin with, unless some kind of masking is utilized, the R-specific and B-specific WB coefficients are applied to all R and B pixel values in the image, regardless of how light or dark the individual R and B pixels are. It makes sense to apply the co-efficients to correct for the differences in color responsivity caused by the different wavelengths of photoelectrons passing through the color filters on top of pixels. Since the R and B pixels are less responsive to light than G pixels, they end up generating lower DNs in the raw files, so the WB operation multiplies these DNs by the amount of the WB co-efficients in order to prevent the RGB image output by the raw converter from having an overly strong green color cast. However, what does NOT make sense is to apply this WB multiplying effect to pixels that received no light. Ideally, there would be a tapering off of the WB multiplier with respect to to any very dark pixels in which read noise plays a significant role in setting the DN value of the pixel. The failure to taper the WB effect on read noise-dominated pixels will cause the overall average of these very dark pixels to have an inappropriate magenta color cast. The actual saturation and hue of the cast will depend on several factors:
  • How much of a role read noise plays in establishing the average lightness of these dark pixels.
  • How strong the WB effect is.
  • How well-optimized the black level setting is that gets applied to these darkest pixels (e.g., the black levels for my EM1iii aren't particularly well optimized since the starting mean values, after black levels are applied are already unbalanced and either tilted toward the green, blue or magenta.)*
The bottom line here is that the pinkish color cast seen in the non-black leveled renderings is caused by application of WB to the black frame raws, NOT because the R and B raw channels are inherently more noisy than the G channels. Similarly, huge exposure pushes to very deep shadows/very underexposed shots will have the perverse effect of adding an inappropriate magenta cast to the image. The corrective action to take is to back out the effect by a tone curve adjustment targeted at just these inappropriately affected deep shadows. ACR and LR include a special adjustment slider (in the Calibration tab) that may suffice to ameliorate the problem, but my experience is that it's often necessary to use that adjustment very modestly and to supplement it with curve adjustments in the red and blue channels to re-equalize them with the green channel (plus also address any other errors introduced by the black level subtraction step).

The foregoing is what I've been hammering at now for months whenever the magenta cast issue and its real proximate cause is brought up. THERE REALLY IS NO MEANINGFUL DIFFERENCE IN READ NOISE LEVELS AT THE RAW LEVEL (with the possible exception of some PDAF pixel implementations) because there is no relevant difference in how CMOS pixels are designed and fabbed for each of the four Bayer raw color channels. The circuitry and silicon for one pixel should be identical to every other active pixel on the sensor, regardless of which color channel it is associated with. Remember: we're talking about read noise, which is noise added by the electronics, not any noise associated with light hitting the sensor. Since there is no light involved here (we're talking now just about black frames generated in pitch black conditions), there are no complications and channel-specific variability generated by differing responsivity to specific wavelengths of photoelectrons absorbed by the pixels based on the color filters that sit atop them. I'm not aware of any reason to expect read noise behavior to be correlated by color channel at this level. The correlation is introduced later in the processing chain as has been demonstrated with visualizations and corresponding raw data above.

Furthermore, by waiting until later in the processing chain to extract the standard deviation data needed to calculate DR, you're adding your own version of what you've called a "black box". IWE appears to perform some kind of white balance operation in addition to the interpolation of the four raw channel data into three channels. This is bound to be more confounding than performing the measurements at the front end (as can be done with Rawdigger). For instance, any reasonable type of interpolation of two green channels into one is bound to reduce the standard deviation for the single green channel relative to the red and blue channels. Of course, the red and blue channels will seem to be at least slightly more noisy as a result. And that's before we get to the undesirable WB effect on the red and blue channels of a black frame, which also increases their apparent noisiness relative to the green channel.

________________________

*Beyond the specific black level problem noted for the EM1iii, there's another way in which application of black levels can adversely affect the post-black level subtraction DN averages. Black level subtraction is performed on the raw digital numbers (DNs) using simple arithmetic. A whole number is subtracted from every DN in the raw file. Since some DNs will have starting values less than the black level value (e.g., less than 254 or 253 for the EM1iii, 142 for the G9, and 2048 for the G9ii), fully subtracting the black level value from these smaller DNs would result in negative values (DNs of less than 0), which isn't allowed. Instead, these smaller DNs will all be set to 0, which means they are now lighter relative to other values after the black level subtraction than they were prior to the subtraction operation, which isn't ideal.
Higher DR gives more room for light, when the "light-line" level is above the "noise-line" level. In the given case you are simply modify black-line level and make specific procedure of the color noise reduction known as discrimination method. But in presence of light you will also shift down the "light-line" level, which can be even lower than this mean level you push down .

The black-line level is measured by special masked pixels, and I have found that the black-line level measurements are reliable and should not be touched in case you are measuring the DR.
Based on my findings - at least, with respect to my personal EM1iii as described above - in-camera black level settings aren't always reliable. I have read posts by others complaining about inconsistent black level settings in their cameras, so I rather doubt that the problem is unique to me.
BTW, the better DR can be visualized on real-life images taken at low light for sensors with different pixel size if the conditions of same exposure per pixel is satisfied.
Admittedly, based on my DR measurements derived from the raw data reported by Rawdigger, I really don't have that much of a dispute with your DR measurements. The difference in our respective calculations is only about 1/3 Ev. I do hope, however, we can get past the continued promotion of a confused and simplistic correlation of color cast and read noise and a dismissiveness of methods and tools that don't exactly match your preferred DR metric and tool. Unfortunately, this post is already way too long and detailed, so I'll stop here, catch my breath and post separately a comparison of very low light G9 and G9ii test shots helpfully provided by jrsforums and hopefully enlightening about which DR metric will be of more practical use to photographers interested in comparing these cameras.
 
I wasn't relying on anything I did in ACR and Photoshop for any "data analysis" I considered. Rather, I was using those tools to visualize the differences caused by different variables in the dark frame captures. Since all examples shown were processed with identical settings in ACR+PS, the visual differences between the displayed crops from my EM1iii are informative both for analytical purposes and, more importantly, from a photographic imaging perspective.
Bear in mind that even correct visualization of dark frames is problematic with ACR and Photoshop. Nobody knows what happens inside that "black box".
We don't know the exact demosaicing algorithm, but otherwise we know a lot about the internals of ACR and PS. We can linearize output from ACR. We can also substitute our own profiles if preferred; but regardless of that option, the "standard" Adobe profiles (Adobe Standard and Adobe Color) are sufficiently consistent between cameras to make them adequate for the kind of visualization-based comparisons we're doing here. Remember, I'm not basing my observations on these visualizations alone. However, to humor you, I'm switching to Rawdigger for the visualizations below. That's in addition to continuing to rely on Rawdigger for the unadulterated raw data analysis, which I contend is less "black box" than the interpolated and white balanced data displayed and used for DR calculations in IWE.
I'm well aware of what Adobe does to manipulate raws, so when I do camera comparisons of raws processed in ACR, I usually ensure that I've reset ACR to eliminate the hidden baseline exposure compensation and hidden non-linear tone curve. I also often utilize linear profiles I've generated for the specific cameras being compared. That's what I did in this instance, and the discrepancy between your G9 dark frame and an EM1iii dark frame I produced with identical ISO, shutter speed and lens-on settings was baffling. The processing results pretty clearly favored the EM1iii.
If you are aware of what Adobe does to manipulate raws, then try to answer a simple question: "What data are registered by your EM1III sensor to display grey (R, G, B) = (128, 128, 128) on your display?"
As noted already, to humor you, I've moved away from any reliance whatsoever on the Adobe tools as the source for the visualizations being used. The text you've highlighted above is also missing the context of the qualifications I provided in the following paragraphs. More importantly, my subsequent responses (especially the one you're responding to now) have acknowledged that the sources of the differences in lightness and color were not due to light leakage or other possible problems with your G9 black frame. I've moved on from that erroneous speculation. You should too.
As the RawDigger statistics show (your data below), the EM1III DR is very close to that of the G9 (of course, within the iWE inaccuracy (0.02 EV) I have already reported and, unfortunately, the unknown inaccuracy of the RawDigger statistics).
LOL. RawDigger is coded and supported by the same team that codes and supports LibRaw. Since the LibRaw libraries that underlie RawDigger are used by iWE, you probably shouldn't be insinuating anything negative about the "inaccuracy of the RawDigger statistics." FYI, per the RawDigger documentation, internal calculations are performed using a 16-bit unsigned integer data representation. I'm pretty sure the raw formats of the cameras being discussed here don't contain floating-point data, so we don't need to get into RawDigger's options for handling floating-point raws.
If you give me the black frames from your EM1III then I can quickly compare the DR with the G9.
No need for that. See below.
I need black frames at SS=1/500 s and maximal F-number, with the cap on (lens type is not important; you can cover the viewfinder, but I think that matters only for DSLRs).
That observation about OVFs vs. EVFs is a fair point and it got me thinking that maybe I was missing something in my initial black frame testing with my EM1iii. I wondered whether the source of light leakage could be the EVF itself when turned on. I repeated the black frame tests and eventually stumbled on the real reason I was seeing radically different color casts with my EM1iii. The bottom line is that it was just a coincidence that the blue-tinted and magenta-tinted black frame shots from my initial test happened to occur when I blocked the viewfinder. In my subsequent testing done in a windowless pitch-black room with the EVF off, shutter speed set to 1/8000 (mechanical) and body cap on, about half the test black frames I shot were tinted green, about a quarter were tinted magenta and the other quarter were tinted blue when visualized in either Rawdigger or ACR! There was no correlation to any possible light leakage.

At first, I was really baffled and worried that there was something seriously wrong with my camera. Then I dug into the shots more deeply using RawDigger to see what was really going on with these varying black frames. I think it will be easier for readers to conceptualize the issue by including RawDigger screen grabs from one of the "magenta" EM1iii black frames. [Sergeui, please note that I'm sure you fully understand the math and related details to follow. The explanation is aimed at other readers, so feel free to correct me if I stumble.] In all of the RawDigger screen grabs below, I've set brightness to +3 to make things easier to visualize. Just bear in mind that the brightness setting does not affect any of the data reported in the header part of the screen shots. Also, no raw profile is selected. The black level and white balance settings for each screen shot are specified just below the image. Let's start with this one:

EM1iii; black level applied as per image exif (253,254,253,254); white balance set to "As Shot"
Well, let us look at the data reported by Rawdigger.

Standard deviation (sigma) is as follows:

For R-channel sigma is 0.684

For G-channels G, G2 sigma is 0.440 and 0.479, respectively. Thus, the average is 0.46.

For the B-channel sigma is 0.668.

Because the sigma for the R and B channels is higher than for the G channels, the magenta shown by RawDigger is the correct black-frame visualization. Bear in mind that the mean values are of secondary significance after the black-level subtraction, because these non-zero means arise from the noise, which is purely positive after the black-level subtraction (negative values are assigned to zero, while the positive fluctuations (noise) form the mean value).
None of the foregoing is in dispute, including the point about negative black subtraction values being assigned to zero, which I also noted.
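For readers who want to see this clipping effect concretely, here is a minimal sketch (assuming pure Gaussian read noise and the simple clip-at-zero subtraction described above; the sigmas are the ones reported in this exchange, everything else is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def clipped_mean(sigma, black_level=254.0, n=1_000_000):
    """Simulate raw DNs as black level + Gaussian read noise, then
    subtract the black level and clip negatives to zero, as a raw
    pipeline does."""
    dn = black_level + rng.normal(0.0, sigma, n)
    return np.clip(dn - black_level, 0.0, None).mean()

for sigma in (0.46, 0.68):   # ~G vs ~R/B sigmas reported above
    print(f"sigma={sigma}: post-subtraction mean = {clipped_mean(sigma):.3f}")
```

The channel with the larger sigma ends up with the larger positive mean (for a perfectly set black level the clipped mean works out to roughly 0.4 × sigma), which is why higher read noise alone would produce a magenta-leaning mean in a pushed black frame.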
Let us calculate the DR of your EM1 and compare with my G9-data.

For R-channel we have log2 ((4096-253)/0.684)=12.46 EV (R-channel G9-DR=11.99 EV)

For G-channel log2 ((4096-254)/0.460)=13.0 EV (G9-DR=12.91 EV)

For B-channel log2 ((4096-253)/0.668)=12.49 (B-channel G9-DR=12.0 EV)

As we can see, for the green channel the EM1 DR is 13.0 EV vs 12.9 EV, which is almost the same. For R and B the EM1III shows an advantage of about 0.5 EV compared to the G9. Unfortunately, we know nothing about the RawDigger demosaicing (was demosaicing applied or not by RawDigger before the statistics measurements?
It should be obvious that no demosaicing/interpolation is applied by RawDigger, given the fact that the statistics are shown for four channels (R, G, B, G2), not three. I also explained that one reason I was relying on RawDigger for the statistics is because it doesn't interpolate. Let me put it bluntly: because RawDigger works directly with the raw DNs AND enables the user to explicitly control black level subtraction (camera specified, manual or none) AND it does NOT apply white balance or other display-related parameters to any of the DR-related statistics, it's simply cleaner and less "black box" than what your iWE-generated DR-related statistics are based on.
in the case of iWE linear interpolation is used, and if, for example, the linear interpolation (demosaicing) is turned off in iWE, then the G-channel G9 DR is about 13.1 EV).
FYI, per RawDigger, the sigmas for the raw G9 green channels (with black subtraction applied) are:

G= 0.660, therefore log2((4096-142)/.660)=12.55 EV

G2= 0.682, therefore log2((4096-142)/.682)=12.50 EV

Averaging the two raw G channels yields 12.525 EV
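Since the same engineering-DR formula keeps coming up, here is a minimal script that reproduces the figures quoted in this exchange (the clip points and sigmas are taken from the posts above; "DR" here is the scientific log2(full scale / read-noise sigma) metric, not a photographic one):

```python
from math import log2

def dr_ev(max_dn, black_level, sigma):
    """Scientific dynamic range in EV: log2 of usable full scale over read-noise sigma."""
    return log2((max_dn - black_level) / sigma)

# EM1iii (12-bit), per the RawDigger sigmas above
print(f"EM1iii R: {dr_ev(4096, 253, 0.684):.2f} EV")   # 12.46
print(f"EM1iii G: {dr_ev(4096, 254, 0.460):.2f} EV")   # 13.03
print(f"EM1iii B: {dr_ev(4096, 253, 0.668):.2f} EV")   # 12.49
# G9 raw green channels (black level 142)
print(f"G9 G:     {dr_ev(4096, 142, 0.660):.2f} EV")   # 12.55
print(f"G9 G2:    {dr_ev(4096, 142, 0.682):.2f} EV")   # 12.50
```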

By the way, I'm still struggling to understand why interpolating together two more-or-less Gaussian sets of samples would increase SD (hence, decrease DR). Please point me to an explanation (hopefully one that doesn't require more than rudimentary math skills to understand).
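For what it's worth, the textbook expectation points the other way: averaging two independent, equal-sigma Gaussian samples reduces the standard deviation by a factor of √2. A minimal sketch (assuming the two green planes are statistically independent, which a real sensor only approximates):

```python
import numpy as np

rng = np.random.default_rng(1)
g1 = rng.normal(0.0, 0.67, 1_000_000)   # two green planes with equal sigma
g2 = rng.normal(0.0, 0.67, 1_000_000)

print(f"sigma of G1:        {g1.std():.3f}")               # ~0.670
print(f"sigma of (G1+G2)/2: {((g1 + g2) / 2).std():.3f}")  # ~0.670/sqrt(2) = 0.474
```

So plain averaging should flatter the green channel's sigma, not inflate it; a result in the other direction would suggest something other than simple averaging is happening inside the interpolation.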
Looks pretty magenta doesn't it? This is confirmed by the mean ("Avg") values shown at the top for the four channels. Clearly, the red and blue mean values are higher than the two green values. That alone would explain a magenta cast, but the story is more complicated than that. In fact, when no black level is applied to the file, the unadulterated mean values for all four channels are extremely close:
It is you who complicates the story :)

It is important to understand the black level role and how its inaccuracy influences the black-frame (BF) visualization.

For the BF visualization a very strong digital amplification is applied (more than +10 stops is necessary when the DR is about 13 EV). +10 stops means the data are multiplied by 1024. Now, imagine you have an error of just one lowest bit (1 DN). After the amplification this one-bit error becomes 1024 counts on the linear data scale. To visualize these extra 1024 counts, the value must then be converted to the nonlinear 0-255 RGB scale. This is exactly what I have already tried to explain to you.
The only thing I don't understand is why you think I don't understand! The imbalance (which doesn't exist before black subtraction) is evident in RawDigger when black subtraction is toggled on. The mean numbers pre- and post-black subtraction are clear. It isn't even necessary to use a visualization (with or without digital amplification) to "see" the imbalance since the min, max, avg and sigma statistics are easily readable. The visualizations are just supporting evidence to assist readers in absorbing the implications of the presented statistics.
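To make the amplification arithmetic in the quoted explanation concrete (a +10 EV push is a multiply by 2^10; the 0.4 DN figure is the residual channel offset discussed in this exchange):

```python
push_ev = 10
gain = 2 ** push_ev            # 1024
offset_error_dn = 0.4          # residual R/B-vs-G offset after black subtraction
print(offset_error_dn * gain)  # ~410 counts on the linear scale after the push
```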
Sure, the four channels should be close, because the mean values in this case are just the black-level values measured by RawDigger. And, of course, there is some measurement error in this case as well. Let us assume that this error corresponds to the lowest decimal digit in the shown data.
EM1iii; black level NOT applied; WB is still set to As Shot

As you can see, there is no longer a significant imbalance between the R and B channels vs the G channels in either mean (AVG) or standard deviation (SD) values. Clearly, then, the problem must have been introduced by the application of the black levels seen in the preceding screen grab.
Clearly, you are drawing an incorrect conclusion.
Or you are making an incorrect assumption.
Why have you decided that the black levels from the previous RawDigger screen are problematic?
The "problem" I'm referring to is that black subtraction (253,254,253,254) is what causes the closely balanced averages before black subtraction to become relatively unbalanced after subtraction. Simply by setting the R and B values to the same 254 setting as are applied to the G values the "problem" is reduced (albeit swung toward green rather than magenta). The "problem" is also the inconsistent behavior of the camera. Under identical conditions, the camera often - but definitely not always - sets all values to 254, which is a more acceptable compromise than when it sets both R and B to 253 and sometimes just B to 253. Call it whatever you want, but I call that unwelcome phenomenon a "problem".
Let us look at what the statistics on this screen tell us.

First, the measured black-level offsets for the G and G2 channels are 253.9 and 254.0, in very good agreement with the in-camera-measured value of 254. The mean values for the R and B channels are 253.7 and 253.5, respectively. What is of crucial importance for the black-frame visualization is that the R and B means are lower than the G means. This difference of about 0.4, after subtraction and a digital magnification of about +10 stops, results in a linear data difference of about 400 counts and a magenta color,
That's not in dispute. You are simply restating the obvious results of simple math. The question (and the source of the "problem") is: why did the camera choose to set the black values for the R and B channels lower this time (as opposed to the majority of times, when it set them to 254)?
because the noise in the R and B channels is higher.
More precisely, the less-than-ideal black subtraction that was applied causes the virtually equal amounts of pre-subtraction read noise to appear significantly unequal post-subtraction.
And if we look a bit closer at the specific black levels that are assigned, we can see that they are set to 254 for the G channels and 253 for the R and B channels. Those values are determined by the camera itself and written to the EXIF header. (Note: if you're wondering why the color has shifted to pink rather than gray despite all channel AVGs being virtually identical, please pin that question until I show and discuss the G9ii screen grabs below.)
Exactly! Which means that RawDigger black-level measurements are in good agreement with the in-camera measurements.
You make it sound like RawDigger is independently determining the black values. It isn't. It's simply applying the values specified by the camera in the EXIF header (unless the RawDigger user manually overrides this default).
Back to the EM1iii varying color cast issue, look what happens when I manually set the black level for all four channels to 254:
But using 254 for all the channels is a huge error for the black-frame visualization. As I have pointed out above, an error of about 0.4 becomes 400 counts after a +10 stop digital magnification of the linear data.
Sure, but what you fail to acknowledge is that the error in the other direction is even smaller if the black subtraction amounts applied to the R and B channels are set to 254.
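A minimal sketch of that trade-off (the "true" analogue black point of 253.6 DN and the sigma are illustrative guesses, chosen only to sit between the camera's two candidate integer levels):

```python
import numpy as np

rng = np.random.default_rng(2)
true_black, sigma, n = 253.6, 0.67, 1_000_000
dn = true_black + rng.normal(0.0, sigma, n)      # simulated raw DNs

for black_level in (253, 254):
    residual = np.clip(dn - black_level, 0.0, None).mean()
    print(f"subtract {black_level}: mean residual = {residual:.3f}")
```

Subtracting 253 leaves the larger positive residual (the channel drifts lighter/magenta after a push); subtracting 254 clips more values to zero and leaves a smaller residual (drifting toward green). Neither integer choice is exact, which is the nub of the disagreement here.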
What you are doing is just "discriminative color denoising" - you are pushing the R and B noise more toward negative values and zero.
Which is a good thing. Look at the histograms for screen grabs below of the magenta-cast version (R and B black subtractions set to 253) and the green-cast version (all four channels set to 254):

Screen grab from magenta cast rendering in RawDigger

Screen grab from green cast rendering in RawDigger

I don't know how much experience you have correcting deep shadow color casts with raw converters/editors other than your own iWE, but I have considerable experience with a number of them - particularly the Adobe products. Trust me when I say that (just as one would expect from the distance between the peaks in the two histograms above) the green cast is more easily correctable and the better starting point for any subsequent processing/editing.
In fact, unfortunately, you neglected or haven't understood my comments in the previous post. Everything you say below with respect to the black-frame visualization relates only to "discriminative color denoising" and has no relation to the true value of the noise, which is what actually defines the color of the visualized black frame.
I am not familiar with the term "discriminative color denoising". As best I can determine from a quick Google search, it's related to denoising work done in the wavelet domain, making it relevant to iWE, I suppose, but not a very useful conceptual framework for the tools I work with (and that the vast majority of readers here would be working with). In my world, correcting errors or suboptimal settings generated upstream by applying curve adjustments is not "denoising" and is a good thing as long as it reverses the original error and doesn't over-correct.
Same EM1iii black frame as above; black level applied uniformly to 254 for all 4 channels; WB set to As Shot

It turns out that all of the green-tinted EM1iii black frames that I shot had black levels set to 254 for all four channels. The blue-tinted ones had the B channel black level set to 253 and the magenta-tinted ones had both the R and B channel black level set to 253. The green channels in all of my test black frames were always auto-set to 254.

Exactly why the camera sometimes switches away from the all-254 setting is unclear to me. I know it's derived somehow from the readouts of the optical black pixels, but what camera-specific or environmental conditions cause the fluctuation under settings as controlled as those in my second round of testing is a mystery to me. Perhaps it's as simple as random variation in the readout of the optical black pixels, combined with the bit-depth limitation of the camera, causing it to toggle between 254 and 253 when more precision (something "between" 253 and 254) is what's actually needed. If so, this is just a consequence of the EM1iii being a 12-bit camera. Probably 14-bit and certainly 16-bit cameras like the G9ii wouldn't run into this issue (assuming quantization error is really a factor here). Bear in mind, though, that the impact of this problem only becomes visible when dealing with very dark images (or "black" frames like this one) that require significant shadow pushes.
There is no issue. The "issue" is created by you. One needs to properly understand the path the linear fluctuating data (noise) must pass through before they are visualized. Also, is the visualization itself really what matters for the DR measurements? It is just a qualitative characteristic.

What must be used for DR is the statistical data! An accuracy of 254-253=1 is good enough for statistics and DR measurements. Also, in iWE all the calculations are in floating-point format. There is no absolute accuracy; all measurements have some inaccuracy.
Already addressed at the beginning of my reply.
[snip]

My study shows that the magenta indeed comes from higher mean values in the R and B channels. These higher non-zero mean values come from higher read noise. Of course, one can apply an additional offset to the R and B channels and subtract these mean levels, removing the magenta. You will get the visual effect, but it will not improve the true DR.
This is where we part ways once again. The magenta color cast is, of course, nominally due to the higher mean values in the R and B channels. That's not in dispute. The question is: WHY ARE THE R AND B DN VALUES HIGHER TO BEGIN WITH?
It is simple and has been explained many times. After subtracting the black level we have just the noise near the zero line. This noise (the fluctuations) is always positive and generates a positive mean value. The higher the noise, the higher the mean value.
That was a rhetorical question, intended to focus the reader on the underlying cause of the higher DNs rather than just blindly accepting the NOMINAL color cast as definitive proof of more noise relative to images generated by other sensors that don't display the same amount of magenta color cast.
I showed above one scenario that causes an imbalance among the four channels - namely, less than ideally set black levels. Let's return to the hypothetical question I "pinned" earlier in this post by looking at your G9ii black frame. First the rendering with black levels applied:

G9ii black frame; black level applied as per image exif (2048); white balance set to "As Shot"
Ok. Let us find the DR of G9II from RawDigger statistics.

Thus, from the screenshot, the standard deviations (sigma) for the G9II are as follows:

R- channel: sigma_R=1.79

G-channel: sigma_G=1.84, sigma_G2=2.05; the average value is 1.95.

B-channel: sigma_B=2.02

Now, taking into account that the maximum measured value from the G9II photodiodes is 65536-2048=63488, let us calculate the dynamic range of the G9II.

R-channel: log2(63488/1.79)=15.1 EV (iWE value 14.8 EV)

G-channel: log2(63488/1.95)=15.0 EV (iWE value 15.4 EV)

B-channel: log2(63488/2.02)=14.9 EV (iWE value 14.7 EV)
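Plugging these into the same dr_ev sketch shown earlier reproduces the figures (using 65536-2048 = 63488 as the usable full scale):

```python
from math import log2

def dr_ev(max_dn, black_level, sigma):
    return log2((max_dn - black_level) / sigma)

for ch, sigma in (("R", 1.79), ("G", 1.95), ("B", 2.02)):
    print(f"G9II {ch}: {dr_ev(65536, 2048, sigma):.1f} EV")   # 15.1, 15.0, 14.9
```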

The results from RawDigger are indeed very close to those from iWE.
The point of my post wasn't to evaluate the differences in the DR calculations generated by the two apps. I specifically noted at the end of my post that there appears to be about a 1/3 EV difference in the results, which is in line with your calculations above.
The maximal difference, which reaches about 0.4 EV, can be associated with the demosaicing method (or the absence of demosaicing) used in RawDigger. Also, the accuracy of the statistical data itself is unknown in the case of RawDigger.
Already addressed above. Unlike your IWE DR calculations, RawDigger doesn't rely on processed data that's been interpolated into three channels and white balanced. It's calculating directly from the raw DNs and allows you to calculate before and after black level subtraction.
I do not want to discuss your text below (it contains misleading conclusions, for reasons already mentioned - for example, your conclusion that in-camera black-level measurements are not reliable because of your "findings" is wrong). Everything important I have already said above.
Actually, all you've done is completely skirt the question of why my EM1iii - under identical conditions - varies the black level settings, which results in the dramatically different color casts for extremely pushed dark pixels. You've chosen not to respond to my speculation about the possible role of the camera's bit depth being insufficiently precise for optimal black level setting.
The principal conclusion is that your RawDigger data just confirmed the iWE results. The DR of the G9II is significantly higher than that of the G9 and the EM1III.
That topic was not the issue being addressed by my post. Please re-read the end of my post (still embedded below). The usefulness of the "scientific" DR metric vs. a "photographic" DR metric is a topic for another post, which I plan to submit as soon as I receive confirmation from jrsforums that it's ok to use his test shots (previously shared publicly on this forum).
The channel AVGs are better balanced than the EM1iii with black levels applied, but they are tilted toward the blue and the rendering shows a pretty obvious purplish/magenta cast. Your own screen shot from IWE shows the same purplish/magenta cast. So what gives? Let's check what happens when we look at the unadulterated version with no black level adjustment applied:

Same G9ii black frame; black level OFF; white balance set to "As Shot"

What's interesting here is - just like we saw with the EM1iii with no black level adjust - all four channel AVG values are very closely balanced (as are the SDs, of course). Yet, despite nearly identical AVGs for all channels, the rendering appears pinkish (just like the EM1iii). Now, let's look at one more rendering of the same G9ii black frame:

Same G9ii; black level applied as per image exif (2048); white balance set to AUTO

The ONLY difference between this more neutral rendering and the purplish/magenta one shown above is that the white balance has been switched from "As Shot" to Rawdigger's "Auto" WB. So, obviously, WB has a critical impact on the presence of an apparent color cast, but what exactly is going on?

To begin with, unless some kind of masking is utilized, the R-specific and B-specific WB coefficients are applied to all R and B pixel values in the image, regardless of how light or dark the individual R and B pixels are. It makes sense to apply the coefficients to correct for the differences in channel responsivity to the different wavelengths of light passing through the color filters on top of the pixels. Since the R and B pixels are less responsive to light than the G pixels, they end up generating lower DNs in the raw files, so the WB operation multiplies these DNs by the WB coefficients in order to prevent the RGB image output by the raw converter from having an overly strong green color cast. However, what does NOT make sense is to apply this WB multiplying effect to pixels that received no light. Ideally, there would be a tapering off of the WB multiplier for any very dark pixels in which read noise plays a significant role in setting the DN value of the pixel. The failure to taper the WB effect on read-noise-dominated pixels will cause the overall average of these very dark pixels to have an inappropriate magenta color cast. The actual saturation and hue of the cast will depend on several factors:
  • How much of a role read noise plays in establishing the average lightness of these dark pixels.
  • How strong the WB effect is.
  • How well-optimized the black level setting is that gets applied to these darkest pixels (e.g., the black levels for my EM1iii aren't particularly well optimized, since the starting mean values after black levels are applied are already unbalanced and tilted toward green, blue or magenta.)*
The bottom line here is that the pinkish color cast seen in the non-black leveled renderings is caused by application of WB to the black frame raws, NOT because the R and B raw channels are inherently more noisy than the G channels. Similarly, huge exposure pushes to very deep shadows/very underexposed shots will have the perverse effect of adding an inappropriate magenta cast to the image. The corrective action to take is to back out the effect by a tone curve adjustment targeted at just these inappropriately affected deep shadows. ACR and LR include a special adjustment slider (in the Calibration tab) that may suffice to ameliorate the problem, but my experience is that it's often necessary to use that adjustment very modestly and to supplement it with curve adjustments in the red and blue channels to re-equalize them with the green channel (plus also address any other errors introduced by the black level subtraction step).
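A minimal sketch of the effect described above (the WB multipliers are typical daylight-ish values picked purely for illustration, not taken from either camera):

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 1_000_000, 0.67        # identical read noise for every channel

# Post-black-subtraction black-frame channels: equal, zero-clipped noise.
r, g, b = (np.clip(rng.normal(0.0, sigma, n), 0.0, None) for _ in range(3))

wb_r, wb_b = 2.0, 1.5             # illustrative WB multipliers for R and B
print(f"means before WB: R={r.mean():.3f} G={g.mean():.3f} B={b.mean():.3f}")
print(f"means after WB:  R={(wb_r*r).mean():.3f} G={g.mean():.3f} B={(wb_b*b).mean():.3f}")
```

Equal raw read noise in all channels, yet after WB the R and B means sit above G, so a pushed rendering drifts magenta even though nothing about the sensor's noise is channel-dependent.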

The foregoing is what I've been hammering at now for months whenever the magenta cast issue and its real proximate cause is brought up. THERE REALLY IS NO MEANINGFUL DIFFERENCE IN READ NOISE LEVELS AT THE RAW LEVEL (with the possible exception of some PDAF pixel implementations) because there is no relevant difference in how CMOS pixels are designed and fabbed for each of the four Bayer raw color channels. The circuitry and silicon for one pixel should be identical to every other active pixel on the sensor, regardless of which color channel it is associated with. Remember: we're talking about read noise, which is noise added by the electronics, not any noise associated with light hitting the sensor. Since there is no light involved here (we're talking now just about black frames generated in pitch-black conditions), there are none of the complications and channel-specific variability generated by differing responsivity to the specific wavelengths of light passed by the color filters that sit atop the pixels. I'm not aware of any reason to expect read noise behavior to be correlated with color channel at this level. The correlation is introduced later in the processing chain, as has been demonstrated with the visualizations and corresponding raw data above.

Furthermore, by waiting until later in the processing chain to extract the standard deviation data needed to calculate DR, you're adding your own version of what you've called a "black box". IWE appears to perform some kind of white balance operation in addition to the interpolation of the four raw channel data into three channels. This is bound to be more confounding than performing the measurements at the front end (as can be done with Rawdigger). For instance, any reasonable type of interpolation of two green channels into one is bound to reduce the standard deviation for the single green channel relative to the red and blue channels. Of course, the red and blue channels will seem to be at least slightly more noisy as a result. And that's before we get to the undesirable WB effect on the red and blue channels of a black frame, which also increases their apparent noisiness relative to the green channel.

________________________

*Beyond the specific black level problem noted for the EM1iii, there's another way in which application of black levels can adversely affect the post-black level subtraction DN averages. Black level subtraction is performed on the raw digital numbers (DNs) using simple arithmetic. A whole number is subtracted from every DN in the raw file. Since some DNs will have starting values less than the black level value (e.g., less than 254 or 253 for the EM1iii, 142 for the G9, and 2048 for the G9ii), fully subtracting the black level value from these smaller DNs would result in negative values (DNs of less than 0), which isn't allowed. Instead, these smaller DNs will all be set to 0, which means they are now lighter relative to other values after the black level subtraction than they were prior to the subtraction operation, which isn't ideal.
Higher DR gives more room for light, when the "light-line" level is above the "noise-line" level. In the given case you simply modify the black-line level and perform a specific color-noise-reduction procedure known as the discrimination method. But in the presence of light you will also shift down the "light-line" level, which can end up even lower than the mean level you push down.

The black-line level is measured by special masked pixels, and I have found that the black-line level measurements are reliable and should not be touched if you are measuring the DR.
Based on my findings - at least, with respect to my personal EM1iii as described above - in-camera black level settings aren't always reliable. I have read posts by others complaining about inconsistent black level settings in their cameras, so I rather doubt that the problem is unique to me.
BTW, the better DR can be visualized with real-life images taken in low light for sensors with different pixel sizes, if the condition of the same exposure per pixel is satisfied.
Admittedly, based on my DR measurements derived from the raw data reported by RawDigger, I really don't have that much of a dispute with your DR measurements. The difference in our respective calculations is only about 1/3 EV. I do hope, however, we can get past the continued promotion of a confused and simplistic correlation of color cast and read noise, and the dismissiveness of methods and tools that don't exactly match your preferred DR metric and tool. Unfortunately, this post is already way too long and detailed, so I'll stop here, catch my breath and separately post a comparison of very low light G9 and G9ii test shots helpfully provided by jrsforums, which will hopefully shed light on which DR metric is of more practical use to photographers interested in comparing these cameras.
 
Massimo (Interceptor) loves his 'Photons to Photos' charts, which are interesting but of limited use in the real world. Back in December, I posted lots of G9ii vs G9 raw images, which clearly showed the G9ii recovered deep shadows at least 2 stops better than the G9 (if I remember correctly, someone (Lothar?) showed 1+ stop better than the OM-1).
That chimes with my experience but I guess there’s another agenda at play 😎
Seems to be….but not a surprise….quite consistent…beats me, can’t figure it out 😀
The endless bashing posts about supposed bad dynamic range during the launch are really something in retrospect…
 
No bashing; if I recall, the discussion was about the electronic shutter, for which there were no good answers for a while.

In terms of dynamic range, if you look at this chart the situation has stayed unchanged, as this is mechanical shutter only.

Some people here have gone off on a tangent to say the G9M2 was amazingly better; however, the best-case data point is opyczne saying that at SNR=1 the DR would be 13.3 EV, mostly due to increased bit depth, which is 1.3 EV more than any other MFT camera.

And there is no agenda; this forum is just full of people with some inferiority complex, or endlessly defending turf.

The G9M2 at base ISO is marginally better than other MFT cameras, and mostly due to bit depth. At higher ISO it is worse; that is what the chart says, same as before.
 
A perfect answer came from the manufacturer last year, very fast: there was/is no effect from the electronic shutter. That "electronic shutter effect" was just haluzination originating in this forum.
 
Nobody was allucinating. They sent the data to the same source. Bill relies on users to do the test, and the procedure is clear. If someone does not follow it, that's a different issue. Or maybe something else went on and the camera got an update before the shots were retaken.

On the other hand, the camera is a very mild improvement, only at base ISO and mostly due to the 14-bit depth. At higher values it is not better, and the noise reduction will no doubt have consequences.

Long exposures are also compromised so all of this comes at a high cost

It does not bother me, as that is no longer my use case; however, if I were still doing night photography on Micro Four Thirds I would avoid this camera (and the GH7, for that matter).
 
I understand what you say. I have learned in the last 6 months that the only reliable source of information is the manufacturer. He tells us every week and answers our questions, and this way we can make the best photos. If you follow the info from some websites or this forum you get worse images - just because you use the camera in a wrong/less optimal way, misled by the wrong information: some may have avoided the electronic shutter, or even the purchase of a superior camera, due to such wrong information. I personally had a hard time working through that misinformation, and this forum was not very helpful in sorting it out - rather the opposite. Still today some people try to blame Pana for the wrong information that resulted from this forum (not Bill; he was not the source of the images delivered with wrong data/info, e.g. the "electronic shutter" property while actually something else was manipulated - what it was is still hidden, we can all only guess here...).

Conclusion: trust the manufacturer - only he can be punished for wrong information. And so far no wrong info has been delivered from Panasonic; they are now the single trustworthy source.

Concerning the G9M2 - the DR is very nice, as is the image quality - but the camera is full of compromises, since it is at the limit in all directions. You may choose as you need: both high pixel density and high DR at low ISO, but not max speed; low rolling shutter at high speed, but with much less DR. I like it, since I can always get the maximum that I need at the cost of something else I'd need less in that moment. If I need everything at the same time I am lost with MFT anyway...
 
I think the word you want is "hallucinating"…which, I agree, was not happening. However, there was blind following of and absolute belief in the charts, when there was clear evidence that the charts were wrong.

There was an ample supply of users and images which showed no difference between MS and ES. I personally provided you with a set of test images for your review and testing…in which you found no difference, yet you still continued to make negative comments based on Bill's charts.

You reported that Bill had found "weird findings" (or similar wording) in the G9ii sensor. I don't know if it was the ES difference or (I suspect) something else. Some of the world's greatest discoveries have been made because someone said "that's strange"…..and followed up on it. If I find any fault with P2P in this, it is not following up on strange/weird behavior….the ES and whatever else he saw/sees. It might explain why people are seeing greater quality than your precious charts show.
On the other hand, the camera is a very mild improvement, only at base ISO and mostly due to the 14-bit depth. At higher values it is not better, and the noise reduction will no doubt have consequences.
again, the ‘chart view’ vs what real users are seeing in their images. I’m sure 14bit helps, but, I suspect, the DR Boost blending of low/high images does more to improving the midrange quality than just somewhat improving DR.

‘consequences’ of NR? Reminds me of a discussion in another thread where one person is claiming all AI noise reduction is garbage and not real because it is creating image parts which were not there. Probably true from a ‘purist’ view…but, from a practical basis, how do I handle the decreased SNR of higher ISO, which, without some change makes the image unusable? Here come the tradeoffs….consequences, yes….but acceptable ones…??
Long exposures are also compromised so all of this comes at a high cost
cup half empty vs half full 😀 So, long exposure (more than 1/15 depending on ISO) lacks DR Boost. It is interesting that you label this as ‘compromised’, yet give no benefit to DR Boost.
It does not bother me, as that is no longer my use case; however, if I were still doing night photography on Micro Four Thirds I would avoid this camera (and the GH7, for that matter).
that’s why you have (and keep mentioning) the Sony FF, isn’t it? If I were focusing on night photography, I’d go FF or larger. Regardless, lots are getting results they are happy with on these cameras.
 
Massimo (Interceptor) loves his ‘Photons to Photos’ chart, which are interesting, but limited to the real world. Back in December, I posted lots of G9ii vs G9 raw images, which clearly showed the G9ii recovered deep shadows at least 2 stops better than the G9 (if I remember correctly, someone (Lothar?) showed 1+ stop better than Om-1).
That chimes with my experience but I guess there’s another agenda at play 😎
Seems to be….but not a surprise….quite consistent…beats me, can’t figure it out 😀
The endless bashing posts about supposed bad dynamic range during the launch are really something in retrospect…
No bashing if I recall the discussion was about the electronic shutter to which there were no good answers for a while
Perfect answer came from manufacturer last year very fast: There was/is no effect about electronic shutter. That "electronic shutter effect" was just haluzination with origin in this forum.
Nobody was allucinating. They sent the data to the same source. Bill relies on users to do the test the procedure is clear. If someone does not follow it that's a different issue. Or maybe somethine else went on and the camera got an update when the shots were retaken.
i think the word you want is “hallucinating”…which I agree, was not happening. However, there was blind following and absolute belief in charts, when there was clear evidence that the charts were wrong

there were ample supply of users and images which showed no difference between MS & ES. I personally provided you with a set of test images for your review and testing…which you found no difference in, yet still continued to make negative comments based on Bill’s charts.

you reported that Bill had found “weird findings” (or similar wording) in the G9ii sensor. I don’t know if it was the ES difference of (I suspect) something else. Some of the world’s greatest discoveries have been found because someone said “that’s strange”…..and followed up on it. If I find any fault with P2P in this, it is not following up on strange/weird behavior….the ES and whatever else he saw/sees. It might explain why people are seeing greater quality than your precious charts show.
On the other hand, the camera is a very mild improvement, only at base ISO and mostly due to the 14-bit depth. At higher ISO values it is not better, and the noise reduction will no doubt have consequences.
Again, the ‘chart view’ vs what real users are seeing in their images. I'm sure 14-bit helps, but I suspect the DR Boost blending of low- and high-gain readouts does more to improve midrange quality than merely nudging the measured DR.

‘Consequences’ of NR? Reminds me of a discussion in another thread where one person claimed all AI noise reduction is garbage and not real because it creates image parts that were not there. Probably true from a ‘purist’ view… but, on a practical basis, how do I handle the decreased SNR at higher ISO, which, without some intervention, makes the image unusable? Here come the tradeoffs… consequences, yes, but acceptable ones?
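To put a number on that decreased SNR: in the shot-noise limit, SNR scales with the square root of the photon count, and each stop of ISO (at a fixed aperture and a correspondingly shorter exposure) halves the light reaching the sensor. A back-of-the-envelope illustration with a made-up base count:

import math

base_photons = 10_000  # hypothetical mid-grey photon count at base ISO
for stops in range(7):
    snr = math.sqrt(base_photons / 2 ** stops)  # shot-noise-limited SNR
    print(f"+{stops} stops of ISO: SNR ~ {snr:.0f}")

Six stops up, the mid-grey SNR has dropped eightfold; that is the gap noise reduction, AI or otherwise, is being asked to paper over.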
Long exposures are also compromised, so all of this comes at a high cost.
Cup half empty vs half full 😀 So, long exposures (longer than 1/15 s, depending on ISO) lack DR Boost. It is interesting that you label this as ‘compromised’, yet give no credit to DR Boost.
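On what DR Boost is actually doing: Panasonic has not published the algorithm, but a generic dual-conversion-gain blend, taking shadows from the high-gain readout and highlights from the low-gain one, looks roughly like this (the gain ratio, knee placement, and function name are all illustrative):

import numpy as np

def blend_dual_gain(low_gain, high_gain, gain_ratio=8.0):
    # low_gain: linear frame normalised to [0, 1]
    # high_gain: same scene read with gain_ratio x more analogue gain, so it
    # clips at 1/gain_ratio of the scene range but has cleaner shadows
    high_scaled = high_gain / gain_ratio      # back into low-gain units
    hi_clip = 1.0 / gain_ratio
    knee = 0.8 * hi_clip                      # start fading before the clip
    w = np.clip((hi_clip - low_gain) / (hi_clip - knee), 0.0, 1.0)
    return w * high_scaled + (1.0 - w) * low_gain

A blend like this mostly cleans up shadows and midtones rather than extending the highlight end, which would be consistent with the camera looking better in real images than it measures on a DR chart.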
It does not bother me, as that is no longer my use case; however, if I were still doing night photography on Micro Four Thirds, I would avoid this camera (and the GH7, for that matter).
That's why you have (and keep mentioning) the Sony FF, isn't it? If I were focusing on night photography, I'd go FF or larger. Regardless, lots of people are getting results they are happy with on these cameras.
In terms of dynamic range, if you look at this chart the situation has stayed unchanged, though this chart covers the mechanical shutter only.

Some people here have gone on a tangent to say the G9M2 was amazingly better; however, the best-case data point is opyczne's, saying that at SNR = 1 the DR would be 13.3 EV, mostly due to the increased bit depth, which is 1.3 EV more than any other MFT camera (see the bit-depth sketch below).

And there is no agenda; this forum is just full of people with an inferiority complex or endlessly defending their turf.

The G9M2 at base ISO is marginally better than other MFT cameras, and mostly due to bit depth. At higher ISO it is worse; that is what the chart says, same as before.
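On the bit-depth point, a quick sketch of why the container itself sets a ceiling: an N-bit linear raw can encode at most about N stops between one count and full scale, so going from a 12-bit to a 14-bit readout raises the best-case chart DR by roughly the margin being discussed (engineering definition assumed; this is arithmetic, not a sensor measurement):

import math

def adc_dr_ceiling_ev(bits):
    # stops from the smallest nonzero code (1 LSB) to full scale
    return math.log2(2 ** bits - 1)

for bits in (12, 14, 16):
    print(f"{bits}-bit raw: ceiling ~ {adc_dr_ceiling_ev(bits):.1f} EV")

Real cameras land below these ceilings because read noise usually sits above 1 LSB; the point is only that a 12-bit file cannot show 13.3 EV no matter how clean the sensor is.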
You are dreaming; I concluded nothing. The charts were an anomaly, and I wasn't convinced myself.

However, the camera and the DR Boost don't do much, and generally the IQ hasn't been a step forward.

What I say here is the same as what I said then; I just wanted to remove the exposure bias from the mix.

Panasonic could have kept using the better-performing Sony sensors and paid to get 14-bit depth; instead they took a risk and went their own path.

For video it works; for photos the use case is smaller, so I no longer use any MFT cameras for landscape, portraits, events, or street, as there is no real reason to, even looking at costs.

The edge is in specific use cases: video, and the deep depth of field that I need underwater.

Hopefully the autofocus is improved, but I am pretty sure that when it comes to birds and sports it is far behind many brands.

Nobody wants to say it, but even Olympus/OM System autofocus is far from amazing.
 
Unfortunately, your… I guess it's best called ‘arm waving’… is a bit difficult to understand. I will say: if I wanted AMAZING AF, I'd get a Nikon Z9 like my son has at $5,500… but, trade-offs: $$, size, etc.
 
Last edited:
My Sony A7C II is small and has great autofocus; it costs just a bit more than the G9M2 and less than the OM-1.

Panasonic killed the all-purpose MFT camera with the GH6, and it is not coming back any time soon.
 
As I said, still negative on all things Panasonic! Stay with your Sony…. Oh, why did you post that you were buying a GH7 if you are so down on Panasonic? Just to justify posting here with more negativity? Beats me what you are up to.
 
