Canon and ETTR

Captive18

Is it just me, or do Canon sensors not seem to pull back highlight detail in RAW? Obviously I strive to expose correctly, but when I shot other digital cameras (Nikon, Olympus, etc.) in the past, it seemed like I could pull back images over-exposed by about 2 stops. I haven't found that to be true with my Canon R5. It actually seems to be able to lift shadows better than bring down highlights, which doesn't make sense to me with ETTR.

To be clear, I'm not dissing Canon. I love my camera. I'm just a little confused about the sensor differences. Has anyone else experienced this? Are there specific settings that restrict highlight detail but preserve shadows more?



TIA
 
I had reset the files, so they had no lens corrections.
It looks so much like the lens profile correction. Also, why DNG? Do you convert to DNG? Lightroom has a bug where it applies lens profile correction on import regardless of whether it's actually ticked. Any chance you import CR3s as DNG and Lightroom produces DNGs with the lens correction baked in?

I wouldn't be surprised if it's something related to your workflow and bugs in Lightroom/ACR.
The last one is my fiddling about in Photoshop to try and make it more interesting. Obviously did not succeed!
 
Can certainly be the case. That said, when I shoot dedicated ETTR I'm normally using UNI-WB
How are you doing that?
I have a dedicated UNI-WB file for the cameras I do ETTR with. If memory serves, Jim Kasson had a good article on his blog on how to make one.
I got a magenta filter back when I used a Canon 10D in order to even out the channels. I stopped using it, though I don't recall why.
I can see the rationale - but surely not a good idea as you are simply losing light, which risks increasing the noise, rather than normalising the colour channels.
The physical channels don't need normalisation. Not through a magenta filter anyway. It looks like a strange attempt to fix RGB white balance coefficients (purely a software thing) through a physical filter. Sounds like a bad idea. UniWB does that in a non-destructive way.
Filters change per-channel exposure; UniWB is there to tame the in-camera histogram a bit. Different things.
Exactly, and magenta filter was mentioned in the context of ETTR and UniWB, so I was wondering how it would be related.
Very simple. ETTR is a technique to improve SNR by increasing the signal in all the channels without clipping.
Using a magenta filter is a technique to improve SNR by increasing the signal in the blue and red channels without affecting the green channel. I thought this was clear when I said, "I got a magenta filter back when I used a Canon 10D in order to even out the channels." Evening out the channels is essentially applying ETTR to the channels that are not the one being filtered out. It's a per-channel ETTR.
The "without clipping" is relevant here: you apply the magenta filter to the green channel to keep it from clipping.
Why do you think (or why did the inventors of magenta filter think) that the green channel will have greater max signal than the other two?
By looking at the data. And note that I don't just claim this, I posted screen shots that demonstrate it clearly. It depends upon the lighting, of course. Light the scene with magenta lights and green won't be the dominant channel, but I'm talking about daylight, using the sensor on my camera. And the histogram in the screen shots I gave included both raw histograms and rendered histograms. They are quite different.
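For anyone who wants to check this on their own files, here is a minimal sketch of how to compare per-channel raw levels, assuming the Python rawpy package and a hypothetical daylight raw file called sample.cr3 (the filename and percentile threshold are placeholders, not anything from this thread):

```python
import numpy as np
import rawpy  # assumed installed, e.g. pip install rawpy

# 'sample.cr3' is a placeholder filename
with rawpy.imread("sample.cr3") as raw:
    data = raw.raw_image_visible.astype(np.float32)
    colors = raw.raw_colors_visible        # per-photosite CFA index: 0=R, 1=G, 2=B, 3=G2
    white = float(raw.white_level)
    black = float(np.mean(raw.black_level_per_channel))

    for idx, name in enumerate(["R", "G", "B", "G2"]):
        chan = data[colors == idx] - black
        peak = np.percentile(chan, 99.9)   # robust "max" that ignores a few hot pixels
        headroom_ev = np.log2((white - black) / max(peak, 1.0))
        print(f"{name}: peak {peak:.0f} of {white - black:.0f}, headroom {headroom_ev:.2f} EV")
```

On a typical daylight shot this kind of raw check shows green peaking well above red and blue, which is the raw-vs-rendered histogram difference mentioned above; UniWB just makes the in-camera histogram track these raw levels more closely so you can ETTR against them.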
 
Why do you think ... that the green channel will have greater max signal than the other two?
Check raw histograms, you will see when.
Again, it's the WB coefficients applied before RGB conversion that may make the green channel look more saturated.
Check the coefficients for the green channels; you will find them very close to one.
OK, so we suppress the green channels (50% of all pixels) and get less light in them, therefore more noise. Then we get more light in the R and B channels, also 50% of all pixels.

Is it a win?
You are missing a critical element. The time the shutter is open increases to compensate so that the exposure to the green channel remains the same. When you add a filter to a camera, you adjust the exposure to compensate.
PS. Not necessarily more light in R and B.
Yes, necessarily.
Given the same exposure as without the filter - less light overall.
Now we get to definitions of exposure. If it's the same exposure, by definition, it's the same overall light. What you meant was exposure time. Yes, with the same exposure time, you are right. But nobody using this technique would use the same exposure time. That's the whole point. You are adding the filter in order to be able to increase the exposure time of red and blue without overexposing green.
PPS. OK, I understand that it can be compensated for by a corresponding increase in the exposure time.
Given your answers, sometimes I wonder. :)
If the filter takes -1 EV from the green channels, we need to increase the exposure by +1 EV. But that only works when the difference between G and RB is 1 EV.
It works when the difference is anything > 0 EV, which it is in daylight.
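To put rough numbers on this exchange, here is a toy calculation (the per-channel headroom figures below are made up for illustration, not measured from any camera). The usable exposure increase is capped by whichever channel runs out of headroom first, so the red/blue gain is the smaller of the filter strength and the green-to-red/blue gap:

```python
# Hypothetical headroom below raw clipping, in EV, at the unfiltered ETTR
# exposure (green already touching clipping).
headroom = {"G": 0.0, "R": 1.2, "B": 0.8}
filter_ev = {"G": 1.0, "R": 0.0, "B": 0.0}   # ~1 EV magenta filter acting on green only

# the filter adds headroom to the channel it attenuates
after = {ch: headroom[ch] + filter_ev[ch] for ch in headroom}

# exposure can then be increased until the least-headroom channel clips
extra_exposure = min(after.values())

for ch in after:
    net_gain = extra_exposure - filter_ev[ch]   # signal change vs. the unfiltered shot
    print(f"{ch}: {net_gain:+.1f} EV signal, {after[ch] - extra_exposure:.1f} EV headroom left")
print(f"exposure increased by +{extra_exposure:.1f} EV")
```

With these numbers, red and blue gain 0.8 EV of signal while green gives up 0.2 EV; if the green-to-red/blue gap were larger than the filter, the gain would instead be capped by the filter strength.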
 
I use Lightroom 6, so I convert CR3 to DNG. So you are saying that the lens profile is baked in even if I don't select it in the Lightroom corrections tab, and then I get twice the correction when I select lens profiles?

That means all my R5 images in DNG have a profile for my 100-400 II baked in, but not all the images have that circular pattern. The circular pattern goes away when I invoke correction in Lightroom, perhaps for the second time.



Lens profile (see ticked boxes)


No profile (no ticked boxes)




Lens profile




No lens profile




Perhaps it's an Adobe DNG bug?

Only my seagull images with the bird in the centre seem to have the pattern, which goes away when a profile is applied.
 
It works when the difference is anything > 0 EV, which it is in daylight.
Not exactly, see my calculations above. But I agree it may work in a certain range of conditions.
 
I use Lightroom 6, so I convert CR3 to DNG. So you are saying that the lens profile is baked in even if I don't select it in the Lightroom corrections tab, and then I get twice the correction when I select lens profiles?

That means all my R5 images in DNG have a profile for my 100-400 II baked in, but not all the images have that circular pattern. The circular pattern goes away when I invoke correction in Lightroom, perhaps for the second time.
I just tried this (less the DNG step) on one of my images also taken with the 100-400 II. I think what you're seeing is the vignetting caused by the lens. The concentric lines in the second image are at the boundaries of specific brightness levels. Profile correction adjusts these levels to eliminate vignetting, so you don't see them when that item is ticked.

If I'm right about this, these bands will be more pronounced in high speed mode, because only 12 bits are used (4,096 levels instead of 16,384), making the boundaries between levels coarser and more visible.

The white spots in this view show where the value of a pixel differs from the value of neighboring pixels. By how much depends on the value of the slider.

You'll probably see it most pronounced when most of the image is an even deep blue sky. That scenario has caused issues with digital photography for as long as I can remember and is exacerbated by noise removal. It can be reduced by adding a small amount of noise if, for example, you want to eliminate banding in blue skies.

--
Victor Engel
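A minimal sketch of the banding and dithering described above, using a synthetic smooth sky-plus-vignetting gradient rather than real R5 data (the falloff, bit depth and noise level are assumptions):

```python
import numpy as np

h, w = 512, 768
y, x = np.mgrid[0:h, 0:w]
r = np.hypot(x - w / 2, y - h / 2) / np.hypot(w / 2, h / 2)   # 0 at centre, 1 in the corners

sky = 0.25 * (1.0 - 0.3 * r**2)   # even mid-tone sky with a smooth ~0.5 EV corner falloff
levels = 4096                     # 12-bit mode; 16384 for 14-bit

banded = np.round(sky * levels)   # plain quantisation: concentric contour lines appear
rng = np.random.default_rng(0)
dithered = np.round((sky + rng.normal(0.0, 1.0 / levels, sky.shape)) * levels)

def step_fraction(img):
    # fraction of pixels that differ from their right-hand neighbour; a low value
    # means large flat regions separated by sharp level boundaries (banding)
    return float(np.mean(img[:, 1:] != img[:, :-1]))

print(f"banded:   {step_fraction(banded):.3f} of neighbours differ")
print(f"dithered: {step_fraction(dithered):.3f} of neighbours differ")
```

It's only a toy model of the pipeline, but it shows why the contours cluster along brightness-level boundaries in a smooth sky and why a small amount of added noise hides them.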
 
It works when the difference is anything > 0 EV, which it is in daylight.
Not exactly, see my calculations above. But I agree it may work in a certain range of conditions.
I was being simplistic. In any case, from the data I've seen the difference between G and the other two channels is almost always more than that in daylight conditions.
 
I use Lightroom 6, so I convert CR3 to DNG. So you are saying that the lens profile is baked in
It was a theory...
even if I don't select it in the Lightroom corrections tab, and then I get twice the correction when I select lens profiles?
The bug is in LR Classic, the latest version. You're on LR6, but perhaps the DNG Converter has some issues. Maybe it really does bake the lens correction into DNGs, or writes incorrect metadata related to lens corrections.
That means all my R5 images in DNG have a profile for my 100-400 II baked in, but not all the images have that circular pattern. The circular pattern goes away when I invoke correction in Lightroom, perhaps for the second time.
It's very odd that the pattern vanishes with the correction enabled, but now it looks even more likely that there are some bugs in the Adobe DNG Converter, or compatibility issues between converted DNGs and the old LR version.

Maybe LR6 doesn't work well with the noise reduction in those files, so you're having those noise issues too. Or, as your last message suggests, the extra noise may be linked to the use of the electronic shutter and 12-bit raws.

--
https://www.instagram.com/quarkcharmed/
https://500px.com/quarkcharmed
 
I generally only use single shot and these images are just single shot.

Interestingly, some DNG conversions seem to have embedded lens geometry but not vignetting correction, and others have none embedded. I can see what changes when I apply a lens profile.

I have just updated Adobe DNG Converter so I can see R7 files, which the older converter could not.

Applying vignetting correction removes the pattern and just turns it into noise.

All rather weird!
 
I also use DPP and the R5 noise at lower ISOs is still pretty apparent.

It is why I never go beyond ISO 1600 if any cropping is needed. More and more, the R5 reminds me of the 7D. Fair-weather cameras, both of them.
 
I'm trying to figure out how you get those screen shots. The way I get the view is to hold the Option key and then move the slider. If I remove my finger from the Option key or from the trackpad, the view goes away, so how do you get a screen shot? I could record a video of the screen, but then I'd have to extract a frame from the video.
 
I wondered when you would ask.

First I asked my wife to hold down the ALT key while I pressed Print Screen.

Then I discovered that the AltGr key will lock the view. Hold it down and click the masking slider.

I assume you are an Apple user, so I don't know if they have such a key.
 
I also use DPP and the R5 noise at lower ISOs is still pretty apparent.

It is why I never go beyond ISO 1600 if any cropping is needed. More and more, the R5 reminds me of the 7D. Fair-weather cameras, both of them.
I notice on your screen shots you have noise reduction set to 0. I almost always use color noise reduction set to something like 25. That doesn't affect the detail of the image, and for human vision, color is generally blurred anyway. I'm more careful with luminance noise reduction, using it only when necessary. For some images, like aircraft, for example, Topaz Sharpen AI is effective both for sharpening and reducing noise. It works well for subjects like aircraft where there are clean, predictable lines. For things with textures, like feathers, though, it often introduces artifacts, so despite all the ads I see for it on Facebook, I don't use it much for birds. I do use it for macro photography (1x-5x magnification) where it's often very effective at recovering detail. But that's a whole separate conversation.

Anyway, my point is that I wouldn't be afraid to dial in some color noise reduction.
 
I wondered when you would ask.

First I asked my wife to hold down the ALT key while I pressed Print Screen.

Then I discovered that the AltGr key will lock the view. Hold it down and click the masking slider.

I assume you are an Apple user, so I don't know if they have such a key.
I had to google that. https://en.wikipedia.org/wiki/AltGr_key

Yes, I'm a Mac user. There's probably some similar function.
 
OK, for simplicity let's say the R and B channels are the same, so we equalise GG vs RB.
Let's skip simplicity and turn to WB coefficients for different camera presets like flash, shade, cloudy, daylight.
I think the idea of that magenta filter was to equalise the RGGB channels in raw, before applying the WB. That implies one would need to apply a custom WB later on, either in camera or in post. That WB would need to be fine tuned for the specific camera and specific filter, and one may need multiple correcting WBs for different conditions.

I've never used magenta filters and from this discussion they look like a big hassle if used just for the sake of better ETTR (let's call it per-channel ETTR).
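As a rough illustration of the coefficient argument (the multipliers below are made-up ballpark daylight values, not taken from any specific camera): the green multiplier sits near 1 while red and blue get boosted, which is another way of saying red and blue sit below green in the raw data, and a filter that cost green about 1 EV would roughly halve the multipliers needed for red and blue:

```python
import math

# Hypothetical daylight WB multipliers, green normalised to 1.0
daylight_wb = {"R": 2.0, "G": 1.0, "B": 1.5}

for ch, mult in daylight_wb.items():
    # a multiplier of m means this channel records a neutral patch ~log2(m) EV below green
    print(f"{ch}: multiplier {mult:.2f} -> {math.log2(mult):.2f} EV below green in raw")

# With a hypothetical magenta filter cutting green by 1 EV, green halves in raw for the
# same scene, so the multipliers needed to rebalance red and blue drop by the same factor.
filtered_wb = {ch: (m / 2.0 if ch != "G" else 1.0) for ch, m in daylight_wb.items()}
print("multipliers with the filter:", filtered_wb)
```

With these made-up numbers the blue multiplier drops below 1, i.e. blue would now clip before green, which is the same mismatch between filter strength and channel gap that came up earlier in the thread.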
 
OK, for simplicity let's say the R and B channels are the same, so we equalise GG vs RB.
Let's skip simplicity and turn to WB coefficients for different camera presets like flash, shade, cloudy, daylight.
I think the idea of that magenta filter was to equalise the RGGB channels in raw,
Yes, more or less.
That implies one would need to apply a custom WB later on
Yes, but weaker coeffs.
That WB would need to be fine tuned for the specific camera and specific filter
and specific light.
, and one may need multiple correcting WBs for different conditions.
As always.
I've never used magenta filters and from this discussion they look like a big hassle
"I haven't read Pasternak, but I condemn him".
 
I was just being lazy. I set the amount to 1, as that is the minimum needed to make the mask slider work. I did not need the other parameters for the screen shots.

I never set the colour noise reduction above 16, as it does very little after that.

I always start from 0, but I do have a preset

I do have Neat Image and have tried the others. They should win prizes for fiction. Neat Image is honest.
 

I've never used magenta filters and from this discussion they look like a big hassle if used just for the sake of better ETTR (let's call it per-channel ETTR).
When I did it, it was only for sun illumination. I shot a photo of the blank overcast sky or a white card to use for custom white balance. That's what the magenta colored shot I posted earlier was for. That custom WB could then be used for subsequent shots in daylight. If memory serves, I also created a custom profile using a color checker passport. No other special processing was required. This is normal handling anyway, so not really a big deal.

The other main scenario with different lighting involved incandescent lighting, which has strong enough reds that using a magenta filter was pointless. Besides, in the incandescent light situations, it was typically dark enough that I wanted as much light as I could get, so adding a filter was not something I wanted to do.

So really, there was just one scenario where I used it - outdoor shooting by natural light.
 
