Workaround for highlight blinkies in stills mode

Too bad we can't make this thread sticky.
 
On Nikons, the tone curve of the target JPEG is set by the picture control, and there is always a picture control in effect. I think the camera comes with 20 or so (Vivid, Neutral, etc.). There is also demosaicing going on, and it is the combination of the two (plus saturation, clarity, sharpness, noise reduction, etc. in the picture control) that maps the raw data to the pixels that go to in-camera JPEG compression to create the SOOC JPEG and the raw preview.
I haven't done the RawDigger check to see exactly how close to (or over) clipping the exposure is when the picture control turns an area black. I do know that in extreme cases I have needed to back off 1/3 of a stop beyond what the picture control indicates in order to avoid highlight clipping on pixels near the limit. An example of such a case is a moonrise over the ocean, where I wanted to avoid blowing out the moon while maximizing the DR in the dim reflections on the waves.
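To make that pipeline concrete, here is a toy sketch of the raw-to-JPEG value mapping (the curve shape and numbers are illustrative assumptions, not Nikon's actual picture-control math):

```python
# Toy model of the raw -> SOOC JPEG value mapping described above.
# The gamma-style curve is a stand-in; real picture controls use more
# complex tone curves plus saturation, sharpening, noise reduction, etc.
RAW_MAX = 2 ** 14 - 1  # 14-bit raw saturation point

def tone_curve(x: float) -> float:
    """Illustrative tone curve on a normalized [0, 1] input."""
    return x ** (1 / 2.2)

def raw_to_jpeg(raw: int, wb_coeff: float = 1.0) -> int:
    """Map one raw sample to an 8-bit value bound for JPEG compression."""
    x = min(raw * wb_coeff / RAW_MAX, 1.0)   # white balance, then clamp
    return round(tone_curve(x) * 255)        # apply the curve, scale to 8-bit

print(raw_to_jpeg(RAW_MAX))       # saturated raw -> 255
print(raw_to_jpeg(RAW_MAX // 2))  # half-scale raw -> ~186 with this toy curve
```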
Todd
BTW, here is a comparison of RawDigger to a different picture control I made. The picture controls are calculated to 16-bit precision and then scaled to 8-bit, so it is possible to find and highlight (roughly) the raw clipping point.
 
BTW, here is a comparison of RawDigger to a different picture control I made. The picture controls are calculated to 16-bit precision and then scaled to 8-bit, so it is possible to find and highlight (roughly) the raw clipping point.
Both the OP's approach and yours can be very useful, but they also have limitations: they may indicate that an area is clipped when in reality it is not. This happens when the red channel, the blue channel, or both clip only after the white balance coefficients are applied; depending on the light, those coefficients can be larger than 1.

For example:
  • A red pixel with a value of 12,288 (assuming 14-bit raw), which maps to 192 in 8-bit
  • A white balance coefficient of 1.6 (like in daylight) for the red channel
The resulting value of ~307 is above 255 and will be displayed as overloaded, even though the raw data are not actually clipped.
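A quick numeric check of that example (the 8-bit value and the coefficient come straight from the bullets above; the 14-bit saturation point is the only added assumption):

```python
# The false positive described above: the raw sample is below
# saturation, but after the white-balance coefficient the rendered
# 8-bit value exceeds 255 and gets flagged as clipped.
RAW_MAX = 2 ** 14 - 1            # 14-bit raw saturation

raw_red = 12_288
print(raw_red < RAW_MAX)         # True: the raw data are NOT clipped

eight_bit = 192                  # 12,288 mapped to 8-bit, per the example
wb_red = 1.6                     # daylight-ish red WB coefficient
rendered = eight_bit * wb_red
print(rendered, rendered > 255)  # 307.2 True: displayed as overloaded
```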
 
BTW, here is a comparison of RawDigger to a different picture control I made. The picture controls are calculated to 16-bit precision and then scaled to 8-bit, so it is possible to find and highlight (roughly) the raw clipping point.
Both the OP's approach and yours can be very useful, but they also have limitations: they may indicate that an area is clipped when in reality it is not. This happens when the red channel, the blue channel, or both clip only after the white balance coefficients are applied; depending on the light, those coefficients can be larger than 1.
I can't think of a reason that this technique couldn't be used in conjunction with UniWB.

 
BTW, here is a comparison of RawDigger to a different picture control I made. The picture controls are calculated to 16-bit precision and then scaled to 8-bit, so it is possible to find and highlight (roughly) the raw clipping point.
Both the OP's approach and yours can be very useful, but they also have limitations: they may indicate that an area is clipped when in reality it is not. This happens when the red channel, the blue channel, or both clip only after the white balance coefficients are applied; depending on the light, those coefficients can be larger than 1.
I can't think of a reason that this technique couldn't be used in conjunction with UniWB.

https://blog.kasson.com/using-in-caera-histograms-for-ettr/
Good point, solves the problem.
 
I will see if something like this can also be done with Canon... as Canon in its infinite wisdom gives us (at least with the R5) neither blinkies nor zebras in real time for stills shooting (only blinkies in post-shot review), I resort (with UniWB) to watching whether any area in the EVF turns pure white... if it's any shade of green, there's no clipping, but if it's "white," then there's clipping in a raw channel (then I give it 1/3 to 1/2 stop more exposure and roll back, just to be sure I'm not mistaken about the white)... Canon supposedly has picture controls as well (though I've never tried them) - it would be interesting to see if something like this can be done.
 
BTW, here is a comparison of RawDigger to a different picture control I made. The picture controls are calculated to 16-bit precision and then scaled to 8-bit, so it is possible to find and highlight (roughly) the raw clipping point.
Both the OP's approach and yours can be very useful, but they also have limitations: they may indicate that an area is clipped when in reality it is not. This happens when the red channel, the blue channel, or both clip only after the white balance coefficients are applied; depending on the light, those coefficients can be larger than 1.
I can't think of a reason that this technique couldn't be used in conjunction with UniWB.

https://blog.kasson.com/using-in-caera-histograms-for-ettr/
I agree, and I do that sometimes myself, though very rarely. UniWB will scale the red, green, and blue channels appropriately, and then the picture control will approximate the limits of raw clipping. Usually I don't expose to full raw anyway (I dial it back 1/3 stop or more), and I rarely find the red or blue channel clipping individually.

Overall, though, this combination is probably the simplest and most precise technique available today for maximizing exposure in-camera while in the field.
 
Thank you very much, I just downloaded the file and it's amazing on my Z7. I wonder if there is a way to also show the underexposed pixels, let's say with a blue color? Anyway, thank you again for your solution!
I tried something like that, where, in addition to mapping 255-in to 0-out on the high end, I mapped 0-in to 255-out on the low end. I found that it didn't help much. Perhaps pushing it up a little so that the clipping point wasn't exactly 0 might have made it more useful. I put the picture control file in the same folder, alongside the one that only shows the clipped highlights:

https://drive.google.com/drive/folders/11JDOGdWH8VtN8W466i_knVwh0v9eiWOB?usp=sharing

It's unimaginatively called Flat-Clip2.NP3. Feel free to give it a try.
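For anyone curious about what these curves do, here is a rough sketch of the idea as 8-bit lookup tables (this is just my reading of the behavior described above, not the actual contents of the .NP3 files, which are built on the Flat profile's tone curve rather than a straight line):

```python
# Sketch of the two indicator curves described above, as 8-bit LUTs.
# Assumption: an identity base curve stands in for the Flat profile.

def flat_clip_lut() -> list[int]:
    """Highlight indicator: full input (255) renders as black (0)."""
    lut = list(range(256))
    lut[255] = 0                 # clipped highlights turn black
    return lut

def flat_clip2_lut() -> list[int]:
    """Same, plus zero input renders as white on the low end."""
    lut = flat_clip_lut()
    lut[0] = 255                 # crushed shadows turn white
    return lut

lut = flat_clip2_lut()
print(lut[0], lut[128], lut[255])   # 255 128 0
```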
I tried it and I think it works. It gives you real-time indications for both the underexposed and overexposed areas. The only real issue is that while you can apply exposure compensation live for the overexposed areas, you cannot do this reliably for the underexposed ones. But the whole idea in the first place was to add real-time functionality to your camera. Anyway, thank you again for taking the trouble to build these picture controls for us.
 
I checked how similar stuff might work with Canon - it turns out that Canon does not let you do the same with curves (they try to foolproof how far you can bend the curve, damn)... the only way to imitate it was to adjust the HSL parameters for a certain swath of green colors (which requires UniWB, to make everything green) so as to push them dark, toward black... it works that way, but when you clip in the face area, Canon's face/eye detection stops working too (apparently green faces are OK until you get a lot of black on them) :-) and the blinkies in post-shot review also don't work (as the clipped areas are rendered dark, not bright)... not good.
 
Thanks for this; it looks very useful for hunting blinkies.

Just one question: how do I take the downloaded file and get it onto my Z6 to use it? I'm not familiar with how this works.

Cheers
1. Copy the file to NIKON/CUSTOMPC on your memory card (a script sketch for this step follows the instructions below).
2. Put the card in the camera and turn the camera on.
3. Press the MENU button on the camera.
4. Under the PHOTO SHOOTING MENU, tap on Manage Picture Control.
5. Tap on Load/Save.
6. Tap on Copy to Camera.

From there, it should prompt you to select the picture control, which should be called Flat-Clip, and the custom picture control slot to put it in (which will be C1 if you've never done this before).

To activate it:

1. Under the PHOTO SHOOTING MENU, tap on Set Picture Control.
2. Scroll to "C1 Flat-Clip" and select OK.

There are 28 pre-installed picture controls on my Z7, and the custom ones are at the bottom of the list. Note that the list "wraps," which means that you can scroll up from the top of the list and get to the bottom with just one button press.
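If you'd rather do step 1 from a computer, here is a minimal sketch; the card mount point, the download folder, and the exact file name are assumptions (I've guessed the name from the control's name), so adjust them for your system:

```python
# Convenience sketch for step 1: copy the downloaded picture control
# into NIKON/CUSTOMPC on a mounted memory card. The mount point,
# download folder, and file name below are assumptions; adjust them.
import shutil
from pathlib import Path

card = Path("/Volumes/NIKON")                    # assumed card mount point
dest = card / "NIKON" / "CUSTOMPC"
dest.mkdir(parents=True, exist_ok=True)          # create the folder if missing
shutil.copy(Path.home() / "Downloads" / "Flat-Clip.NP3", dest)  # assumed name
```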
Thanks for the instructions. Worked perfectly, I’ll give it a go and see how it works out.
 
Thanks again for this workaround for getting the zebra effect. At first I also used this picture control very actively, but then I wanted to transfer the original curve, "baked" on the basis of the Flat profile, over to the Standard profile. For my purposes, the Standard profile is better. I'm posting a link to this picture control file on my Google Drive - maybe it will be useful to someone. Once again, my thanks to the author of the original picture control for the idea.

https://drive.google.com/file/d/10fDpcyiu0FHzdM7NZCoMSo33HSyGQyLK/view?usp=sharing
 
Thanks again for this workaround for getting the zebra effect. At first I also used this picture control very actively, but then I wanted to transfer the original curve, "baked" on the basis of the Flat profile, over to the Standard profile. For my purposes, the Standard profile is better. I'm posting a link to this picture control file on my Google Drive - maybe it will be useful to someone. Once again, my thanks to the author of the original picture control for the idea.

https://drive.google.com/file/d/10fDpcyiu0FHzdM7NZCoMSo33HSyGQyLK/view?usp=sharing
For those of us not fully versed in all this, can you explain why this file is better for your purposes? I've used the OP's two files and really liked them, but if there is an added advantage to this one, I'd love to know. Thanks!
 
I tried it and I think it works. It gives you real-time indications for both the underexposed and overexposed areas. The only real issue is that while you can apply exposure compensation live for the overexposed areas, you cannot do this reliably for the underexposed ones. But the whole idea in the first place was to add real-time functionality to your camera. Anyway, thank you again for taking the trouble to build these picture controls for us.
Noting that RawDigger defaults its Under Exposure stats to 8 EV below saturation, I created an updated picture control that roughly matched that. While it "works," in the sense that pixels with values lower than a particular threshold are highlighted, I'm not sure that there's much benefit to be had.

On the upper end of exposure, pixel signal-to-noise ratio (SNR) gets better and better as the exposure (photons per unit area) gets higher, and then, as a step function, gets much worse beyond the saturation threshold. The lower end doesn't work that way. As photon density gets lower and lower, pixel quality gets worse and worse, but there's no step function at a certain threshold.

What I think I would want to be able to highlight on the lower end (if anything), are regions of the image that are below an SNR threshold. But we don't get S/N, we get S+N. More precisely, we get (S+Ns)*ISOa+Nr, where Ns and Nr are shot and read noise respectively and ISOa is the pre-read on-sensor amplification based on the active ISO setting. This highlights a problem with the approach for the lower end, which is that, for high ISO where Nr is relatively small, the pixel value gets directly scaled by ISO without changing the SNR. Crank up the ISO and the highlighted "dark" pixels stop being highlighted, but the pixels aren't really any better: the SNR is the same (or very nearly so).

Maybe this could be useful for base-ISO HDR, so as to be able to determine the bracketing parameters that guarantee that the darkest regions are above a particular SNR threshold in the brightest image, and the brightest pixels are below saturation in the darkest image. I'll give this some more thought.
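To put toy numbers on that ISO point, here is a simplified sketch of the (S+Ns)*ISOa+Nr model above (the electron counts and read noise are illustrative assumptions, not measured camera data):

```python
# Toy model: amplifying by ISO scales the pixel value (so dark pixels
# escape a fixed value threshold) without materially improving SNR.
import math

def pixel(signal_e: float, read_noise_e: float, iso_gain: float):
    """Return (mean output value, SNR) for a mean signal in electrons."""
    shot_noise = math.sqrt(signal_e)               # Poisson shot noise
    value = signal_e * iso_gain                    # amplified mean signal
    noise = math.hypot(shot_noise * iso_gain, read_noise_e)
    return value, value / noise

for gain in (1, 4, 16):
    value, snr = pixel(signal_e=25, read_noise_e=3, iso_gain=gain)
    print(f"gain {gain:2d}: value {value:6.1f}  SNR {snr:4.2f}")
# The output value grows 16x across the sweep, but the SNR only creeps
# from ~4.3 toward the shot-noise limit of 5.0: the pixels aren't better.
```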

-Todd Johnson
 
I never understood the reliance on the histogram with mirrorless cameras. A histogram will show that SOMETHING is blown or crushed, but you don't necessarily know what part of the shot. There are shots where a person might not care if one part is blown but would care about another part, so it would be nice to know exactly where the overexposure is happening.

Visual aids do just that.
 
I never understood the reliance on the histogram with mirrorless cameras. A histogram will show that SOMETHING is blown or crushed, but you don't necessarily know what part of the shot. There are shots where a person might not care if one part is blown but would care about another part, so it would be nice to know exactly where the overexposure is happening.

Visual aids do just that.
Agree completely. That’s why I like this custom picture control so much!
 
I never understood the reliance on the histogram with mirrorless cameras. A histogram will show that SOMETHING is blown or crushed, but you don't necessarily know what part of the shot.
Most of the time, you can tell with a glance at the subject.
There are shots where a person might not care if one part is blown but would care about another part, so it would be nice to know exactly where the overexposure is happening.
Can't argue with that.
Visual aids do just that.
The reason I like zebras so much is that they are harder to ignore than the histogram.
 
I never understood the reliance on the histogram with mirrorless cameras. A histogram will show that SOMETHING is blown or crushed, but you don't necessarily know what part of the shot.
Most of the time, you can tell with a glance at the subject.
But there are times when you can't, such as when a bird has a bit of white on it but there are also a few spots of light coming through the trees and leaves in the background. I have found that very challenging with the histogram in places like Costa Rica; I always end up taking a shot and checking.
 
