7D Maze/low ISO artifacts UPDATE

It would be a sad affair if the banding remained in the rendered images.

Anyway, I don't understand how you created this. Is this ACR? Which version?

I don't have that ISO 200 image of yours (it was unusable in the ZIP package), but I converted the ISO 100 version, #4059. I am converting the CR2 to DNG and processing it with ACR 4.6.

Luminance NR=0, chrominance NR=25, sharpening 25/0.7, blacks=0, exposure=+4, everything else 0. You upresed the crop; I left it at the original size.



--
Gabor

http://www.panopeeper.com/panorama/pano.htm
 
That seems consistent with most of the 7Ds I've checked, although most of those samples aren't as narrowband as a pure LED light. But the orange patch in the CC24 is very low in blue content (a full decade down), and the cyan patch is down in red content by a factor of approx. 5-7. Both contain enough green to show the difference.

This slight inconsistency in channel colour sensitivity, combined with a small channel amplification difference, would explain why the "mazing" effect is quite a lot stronger in some colours (with the same green content and total luminosity/brightness) than in others - in the same camera.
 
http://www.ojodigital.com/foro/perfectraw-perfectblend/257378-labyrinth-artefacts-green-equilibration.html

I invite the folks who are getting so stressed about 7D mazing to pay particular attention to the sky shots in post 7 of this thread.

Look familiar?

And the solution is a better demosaicing algorithm...
I am very familiar with that discussion; I am the developer of an advanced demosaic algorithm (AMaZE) which should appear in a future release of PerfectRAW.

The workaround is an averaging of the green channels, with a concomitant loss of resolution, as I have been saying all along.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
http://www.ojodigital.com/foro/perfectraw-perfectblend/257378-labyrinth-artefacts-green-equilibration.html

I invite the folks who are getting so stressed about 7D mazing to pay particular attention to the sky shots in post 7 of this thread.

Look familiar?

And the solution is a better demosaicing algorithm...
I am very familiar with that discussion; I am the developer of an advanced demosaic algorithm (AMaZE) which should appear in a future release of PerfectRAW.

The workaround is an averaging of the green channels, with a concomitant loss of resolution, as I have been saying all along.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
If it was a 3-color CFA (but they simply didn't balance the green gains very carefully), couldn't you just average each channel over the whole image, then equalize them, and only then go on and do all the rest? Over the entire image, maybe that could get the balance to 0.2% or less?

If it was a 4-color CFA, then you'd have to try very tricky stuff that might not work out.
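
For what it's worth, a minimal Python/numpy sketch of that whole-frame idea (my own illustration, not anything from the thread): it assumes an RGGB Bayer layout with G1 on the red rows and G2 on the blue rows, and raw data with the black level already subtracted.

import numpy as np

def equalize_greens_globally(cfa):
    """Whole-frame green equalization on a Bayer CFA (assumed RGGB,
    black level already subtracted): scale the two green sub-mosaics
    so their global means match."""
    g1 = cfa[0::2, 1::2]           # greens on the red rows
    g2 = cfa[1::2, 0::2]           # greens on the blue rows
    ratio = g1.mean() / g2.mean()  # single global gain-mismatch estimate
    out = cfa.astype(np.float64)
    out[0::2, 1::2] /= np.sqrt(ratio)   # split the correction symmetrically:
    out[1::2, 0::2] *= np.sqrt(ratio)   # pull G1 down, push G2 up
    return out

As comes up later in the thread, this only works cleanly if the mismatch really is a single fixed gain; if the G1/G2 ratio changes with subject colour, one global factor won't remove the mazing everywhere.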
 
It would be a sad affair if the banding remained in the rendered images.

Anyway, I don't understand how you created this. Is this ACR? Which version?

I don't have that ISO 200 image of yours (it was unusable in the ZIP package), but I converted the ISO 100 version, #4059. I am converting the CR2 to DNG and processing it with ACR 4.6.

Luminance NR=0, chrominance NR=25, sharpening 25/0.7, blacks=0, exposure=+4, everything else 0. You upresed the crop; I left it at the original size.

--
Gabor

http://www.panopeeper.com/panorama/pano.htm
Mine is a 100% crop of different files.

http://www.mediafire.com/?gtcrmzqllnj
 
Interesting indeed...

What is this Green Equilibrium all about?

Sounds like making both greens, G1 and G2, equal?
Averaging?
Averaging (from pixel to pixel) is probably a 'no-no' because it would destroy detail and sharpness.

I believe this would be the better, and most likely, solution:

1. Take the average, over the whole frame, of each of the two green channels.

2. Where there is a significant difference between the two green channels' averages, apply a scaling correction to both green channels (one channel up, one channel down) to balance them, i.e. bring them into 'equilibrium'.

...that's my guess, and what I think would be done.
Except that the ratio of the two green channels varies with the spectral content of the image. John's examples confirm what I've been seeing, that the difference between the two channels varies with the light spectrum.

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
Interesting indeed...

What is this Green Equilibrium all about?

Sounds like making both greens, G1 and G2, equal?
Averaging?
Averaging (from pixel to pixel) is probably a 'no-no' because it would destroy detail and sharpness.

I believe this would be the better, and most likely, solution:

1. Take the average, over the whole frame, of each of the two green channels.

2. Where there is a significant difference between the two green channels' averages, apply a scaling correction to both green channels (one channel up, one channel down) to balance them, i.e. bring them into 'equilibrium'.

...that's my guess, and what I think would be done.
Except that the ratio of the two green channels varies with the spectral content of the image. John's examples confirm what I've been seeing, that the difference between the two channels varies with the light spectrum.
I know - I previously detailed and demonstrated one mechanism (albeit in a different camera) whereby the green channels' balance could vary according to subject colour, but certain people weren't interested and/or it didn't appear to fit some examples.

Even though it varies with subject colour, an average for the whole frame may still be a good start.
 
I don't see any problem on the uploaded checker card shot, but you should make a more useful one:

1. Defocus only a tiny bit, or not at all. The shot you uploaded is killed by defocusing.

2. Underexpose it by at least two stops. The darkest patch in the one you presented is in the seventh stop of the DR, not where the banding becomes visible.

--
Gabor

http://www.panopeeper.com/panorama/pano.htm
Side-tracking back to the vertical banding in deep shadows: I don't really notice that DPP does anything to fight this; ACR and DPP both show it about the same (the different tone curves and other things they do can make things look a bit different, but overall I would say they show it about the same).

Back to mazing: DPP does seem to show the mazing artifacts less readily, though.

I guess I will eventually have to look carefully into the G1 and G2 response for different shades and under different lighting.

It would be nice to get a ColorChecker or even a white-wall shot from a camera that is said to have them balanced under white light.
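
If anyone wants to try that measurement, here is a rough Python/numpy sketch of the kind of per-patch G1/G2 comparison that could show it. It assumes the raw CFA has already been loaded into a 2-D array (e.g. from a dcraw -D -4 -T dump), an RGGB layout, and a placeholder black level; the patch rectangles would come from your own ColorChecker or white-wall shot.

import numpy as np

def green_ratio(cfa, top, left, height, width, black=2048):
    """Mean G1/G2 ratio inside one rectangular patch of an RGGB Bayer
    mosaic. 'black' is the camera's black level (2048 is a placeholder)."""
    top, left = top & ~1, left & ~1   # snap to even coordinates to keep the Bayer phase
    patch = cfa[top:top + height, left:left + width].astype(np.float64) - black
    g1 = patch[0::2, 1::2]   # greens on the red rows
    g2 = patch[1::2, 0::2]   # greens on the blue rows
    return g1.mean() / g2.mean()

# Hypothetical usage: measure the patch rectangles off the frame, e.g.
# patches = {"orange": (top, left, height, width), ...}, then print
# green_ratio(cfa, *rect) for each patch.

A ratio that stays the same from patch to patch would point to a simple gain mismatch; a ratio that moves with patch colour would fit what emil and John describe.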
 
If it was a 3-color CFA (but they simply didn't balance the green gains very carefully), couldn't you just average each channel over the whole image, then equalize them, and only then go on and do all the rest? Over the entire image, maybe that could get the balance to 0.2% or less?
There's no need for any loss of green resolution if the issue were simply different gains. You'd just multiply one green channel by a factor. 14-bit readout should be sufficient to avoid visible quantization issues. It seems the data is scaled anyway in the camera; it could just use a different factor, with no speed or quantization impact at all.

--
John

 
That seems consistent with most of the 7Ds I've checked, although most of those samples aren't as narrowband as a pure LED light. But the orange patch in the CC24 is very low in blue content (a full decade down), and the cyan patch is down in red content by a factor of approx. 5-7. Both contain enough green to show the difference.

This slight inconsistency in channel colour sensitivity, combined with a small channel amplification difference, would explain why the "mazing" effect is quite a lot stronger in some colours (with the same green content and total luminosity/brightness) than in others - in the same camera.
Well, then, maybe a converter should optimally apply high-pass versions of each green channel to a smoothed bi-channel version. That would get rid of the difference, without losing details. Or, use the smoothed version for color, and use the high-passes only for luminance.

--
John

 
Well, then, maybe a converter should optimally apply high-pass versions of each green channel to a smoothed bi-channel version. That would get rid of the difference, without losing details. Or, use the smoothed version for color, and use the high-passes only for luminance.
How does the converter distinguish Nyquist scale texture in the scene being imaged from mismatched gain garbage? One can smooth this stuff away, but how does one recover the Nyquist scale resolution?

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
Well, then, maybe a converter should optimally apply high-pass versions of each green channel to a smoothed bi-channel version. That would get rid of the difference, without losing details. Or, use the smoothed version for color, and use the high-passes only for luminance.
How does the converter distinguish Nyquist scale texture in the scene being imaged from mismatched gain garbage? One can smooth this stuff away, but how does one recover the Nyquist scale resolution?
You wouldn't correct anything unless the local means were off. What is the chance that the subject aligns with the lines?

Another variation on what I wrote would be to get the smoothed versions of each channel, average them together, and then add back the difference between the literal channels and their individually smoothed versions, so effectively, all you are doing is pulling the low-pass versions of the two green channels together, without affecting the high-pass versions of the individual channels.

If the response is consistent across all specimens, the local color could be used to force a local adjustment. There are a number of ways to deal with this.

Obviously, correction is not going to be easy in areas of high chromatic frequency, but then again, you probably would not notice the mazing there anyway.
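
For what it's worth, here is a rough Python/numpy/scipy sketch of the "pull the low-pass versions together" variation described a couple of paragraphs up, as I read it. It treats the two green sub-mosaics as separate half-resolution planes; the Gaussian blur and its radius are my own assumptions, not anything specified in the thread.

import numpy as np
from scipy.ndimage import gaussian_filter

def pull_green_lowpass_together(g1, g2, sigma=2.0):
    """g1, g2: the two green sub-mosaics as equal-size 2-D float arrays.
    Replace each plane's low-pass content with the shared (averaged)
    low-pass, leaving each plane's own high-pass detail untouched."""
    lp1 = gaussian_filter(g1, sigma)
    lp2 = gaussian_filter(g2, sigma)
    lp_shared = 0.5 * (lp1 + lp2)       # common low-frequency base
    g1_out = lp_shared + (g1 - lp1)     # add back G1's high-pass residual
    g2_out = lp_shared + (g2 - lp2)     # add back G2's high-pass residual
    return g1_out, g2_out

As emil's D70 example below shows, though, real texture at the pixel pitch can itself appear as a low-frequency offset between the two green planes, so a scheme like this can smooth away genuine detail along with the gain mismatch.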

--
John

 
I just peeked at a few shots (not necessarily the most banding prone) that I took when testing AF and processed three of them in both DPP and ACR.

It seems that DPP does not give up any fine detail in really detailed areas, but it almost seems smoother in the smooth areas, with a somewhat smoothed-over look rather than a natural one, although it is hard to say. You can actually see hints of mazing in all the exact same spots, to the pixel, as with ACR; only, in the less detailed areas DPP seems to have them looking sort of smoothed over, almost filtered over, hiding the effect a good deal. Sometimes you can still see the same line just the same, and sometimes only a starting dot or two. I think it still leaves things a bit uglier compared to my old 50D files, since those don't seem to have those traces of the lines or the dark starting dots so much and look naturally smooth.

I didn't really balance the tone curves and contrast well enough, so in some cases it probably hides things a bit more in DPP just because of that.

DPP is also more prone to a zipper sort of artifact, and it seems like that might actually help with mazing since it breaks some solid lines into dotted lines.

I should go back and compare with a spot that has major mazing. Will do that later, or maybe Thursday or Friday.

It looks like even with DPP there are some leftover issues; at the very least it makes smooth areas a bit more dotted and noisy than maybe they could be (similar areas from my old 40D/50D look cleaner), if not as much as with ACR.

ACR 20D:

ACR 7D 1:

ACR vs. DPP 7D (not all of these are in super maze prone areas):
 
I'd really like to see that image of the red berry (I'm guessing that's what it is) processed through DPP for comparison with the ACR version you show.

Any chance you could do that?
 
Well, then, maybe a converter should optimally apply high-pass versions of each green channel to a smoothed bi-channel version. That would get rid of the difference, without losing details. Or, use the smoothed version for color, and use the high-passes only for luminance.
How does the converter distinguish Nyquist scale texture in the scene being imaged from mismatched gain garbage? One can smooth this stuff away, but how does one recover the Nyquist scale resolution?
You wouldn't correct anything unless the local means were off. What is the chance that the subject aligns with the lines?
You mean like this (D70 shot)?



The columns between the windows are Nyquist level texture. If they are to be resolved, the greens must be matched. Any averaging of the greens gives mush. While this is an extreme example, it illustrates the fact that averaging the greens is going to drop the resolution of the camera, no matter how the texture is oriented.
Another variation on what I wrote would be to get the smoothed versions of each channel, average them together, and then add back the difference between the literal channels and their individually smoothed versions, so effectively, all you are doing is pulling the low-pass versions of the two green channels together, without affecting the high-pass versions of the individual channels.
Any such averaging throws away data that allows the highest resolution that the sensor array is capable of, if the greens were matched.
If the response is consistent across all specimens, the local color could be used to force a local adjustment. There are a number of ways to deal with this.

Obviously, correction is not going to be easy in areas of high chromatic frequency, but then again, you probably would not notice the mazing there anyway.
Look, there are ways of dealing with the mismatch. The point is that to do so means turning an 18MP camera into a substantially lower MP camera. The green channels carry the highest spatial frequency luminance information, and to have to average them amounts to low pass filtering that information. It is lost to the demosaic process.
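
A tiny synthetic illustration of that point (my own toy example, not emil's data): a vertical stripe pattern at the pixel pitch lands entirely in one green phase of an RGGB mosaic, so it looks exactly like a large green imbalance, and averaging the two greens erases it.

import numpy as np

# Scene: vertical stripes at the pixel pitch (dark even columns, bright odd columns)
h, w = 8, 8
scene = np.tile(np.array([0.2, 1.0]), (h, w // 2))

# Sample with an RGGB mosaic: G1 on even rows/odd cols, G2 on odd rows/even cols
g1 = scene[0::2, 1::2]   # sees only the bright columns
g2 = scene[1::2, 0::2]   # sees only the dark columns

print(g1.mean(), g2.mean())   # 1.0 vs 0.2 -- indistinguishable from a big "green imbalance"
print(0.5 * (g1 + g2))        # averaged greens: a flat 0.6, the stripes are gone

With matched gains a demosaicer can in principle reconstruct the stripes from the two phases; once the greens are averaged, that information simply isn't there any more.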

--
emil
--



http://theory.uchicago.edu/~ejm/pix/20d/
 
ejmartin wrote:

It would be interesting to compare the new one with the old one shot under identical conditions -- I imagine the indoor tungsten would be the one you could come closest to reproducing the exact lighting of the previous test.

Were these processed identically? It looks like the 5D result is at least a stop less imbalanced, and more uniform overall, if so.

I think I just slid highlights in the Levels setting down to 27 for all of them and did nothing more, so I believe the processing was the same for all. I didn't run it on my old 20D/40D/50D files yet, but I bet it might even be more uniform with a couple of those than with the 5D2.

I have a tungsten version from the first 7D stashed somewhere on my HD, and sometime on Thursday I should be able to get a cloudy version from copy #2 so they can be directly compared.

Edit: found the ISO 100 tungsten from the first 7D (it definitely shows clearer mazing and more vertical bands, but on the other hand that one patch is a heck of a lot more even; not sure why the corresponding patch with the new 7D has a weird gradient across it):

Tungsten lighting, ISO 100, the dcraw processing and then the blending in CS4, followed by dragging highlights in the Levels tool down to 27.

new 7D:

old 7D:

larger version of old 7D:

100% crop old 7D:

same but blended opposite direction:

100% crop new 7D:
 
I try to read & understand what you are saying, but I don't have the background or training to begin to understand most of it, so this is probably a really stupid question. I downloaded Raw Photo Processor & it has red, green, blue & green lines, with a slider that changes their values to the positive or negative side & a readout for each one telling you what is happening when the sliders are moved. When the application opens a raw file, some of the sliders already show a value. Should the two green sliders show the same number? If they don't, does that mean that the greens are imbalanced? Bab
 
I try to read & understand what you are saying, but I don't have the background or training to begin to understand most of it, so this is probably a really stupid question. I downloaded Raw Photo Processor & it has red, green, blue & green lines, with a slider that changes their values to the positive or negative side & a readout for each one telling you what is happening when the sliders are moved. When the application opens a raw file, some of the sliders already show a value. Should the two green sliders show the same number? If they don't, does that mean that the greens are imbalanced? Bab
Hmm, I'm not familiar with that... what program is it, or do you have a screenshot?
 
I think this is nothing new. If memory serves me correctly, while ISO 200 has more noise, it tends to have better DR and colour.
This is a contradiction, for the dynamic range is limited by noise, as long as the bit depth is enough.

--
Gabor

http://www.panopeeper.com/panorama/pano.htm
Well, I looked at the reviews of past DSLRs, and it seems my memory was correct: not a huge difference, but there is a slight increase in DR at ISO 200 for these cameras:

50D (shadow range / highlight range / usable range):
ISO 100: -4.8 EV / 3.5 EV / 8.3 EV
ISO 200: -4.9 EV / 3.6 EV / 8.5 EV
http://www.dpreview.com/reviews/canoneos50d/page19.asp

1000D (shadow range / highlight range / usable range):
ISO 100: -5.2 EV / 3.5 EV / 8.7 EV
ISO 200: -5.2 EV / 3.6 EV / 8.8 EV
http://www.dpreview.com/reviews/CanonEOS1000D/page21.asp

500D (shadow range / highlight range / usable range):
ISO 100: -5.1 EV / 3.4 EV / 8.5 EV
ISO 200: -5.1 EV / 3.5 EV / 8.6 EV
http://www.dpreview.com/reviews/CanonEOS500D/page17.asp

Looks like older cameras with lower pixel density do about the same between ISO 100 and 200, but there is definitely a 'history' here. So the 7D having an ISO 100 DR problem is not a new issue; it's just different enough that people can notice it.

-

 
