New Sharpening Concept

Compare the grey/monochrome pelican with the flamingoes.

I'm still not all that convinced. I took the image into Photoshop and did a traditional USM on the original version (on the left). Using a fairly light USM I could get a very similar result; in fact, I think the right-hand image looks a little processed to me. Still, it's an interesting idea. Perhaps it's just a matter of finding the right images.
--
http://www.pbase.com/timothyo

 
Digital photography is not yet perfected. The very interesting article brought up some real issues, and presented one work-around which I certainly will try out.

I do suspect that the Sigma approach (RGB at each sensor element), which is closer to the ideal, will be used in the future and will fix the problem at the source.

There is no need to get upset because those in the know work on improving things. If they didn't we wouldn't have come very far by now.
Well, I read this whole thing. I don't like it at all. I think
it's something that someone is trying to sell. I can sharpen my
own photos, but I don't have to, because they come out that way.
The guy makes it sound like there's a problem with those cameras
he mentioned. The real problem is improper focus, which causes
weak, unsharp images. You don't need that product to fix them.
Do you work for them?
 
Hmm. I guess it's personal preference (perhaps I like softish-looking
images?), but the right-hand image sucks to my eyes, like it's been
USM'ed way too much.
I also don't like a lot of sharpening for portraits; they look better when a tad soft. That was not a good example for this sharpening, as it is usually preferred to have portraits not that sharp.
--
I am not an English native speaker!
Please email me at [email protected] for questions
http://www.pbase.com/zylen
http://www.photosig.com/go/users/userphotos?id=26918
 
...look at the whole image...to get a sense of the depth. NOT just sharpness.

It's hard to separate the overall sharpening from the fact that some areas were sharpened more than others in the EQed images. Those are both pretty good example images.

Remember that recent thread with the embossed coin that looked concave or convex if you turned it 180 degrees? But not to everyone? Maybe this effect is like that - we experience it differently from each other...

Stan
 
Here is what I observed: this new sharpening will do about the same thing as USM, but it will affect only the edges if you check the box for edges. I think I will use it on the original, full-size image to get some of the "fuzziness" out of some photos, then resize, and then use USM.
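For what it's worth, here is a minimal Pillow (Python) sketch of that order: a light, thresholded pass at full size, then a resize, then a stronger USM for the new size. The file names and parameter values are placeholders, not Qimage's actual settings.

```python
from PIL import Image, ImageFilter

img = Image.open("original_full_size.jpg")

# Light, edge-biased pass at full size: small radius plus a threshold,
# so flat areas (and their noise) are mostly left alone.
img = img.filter(ImageFilter.UnsharpMask(radius=1, percent=50, threshold=3))

# Downsample for output, then a final, stronger USM tuned for the new size.
img = img.resize((1200, 800), Image.LANCZOS)
img = img.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=0))
img.save("resized_and_sharpened.jpg", quality=92)
```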
Compare the grey/monochrome pelican with the flamingoes.

I'm still not all that convinced. I took the image into Photoshop
and did a traditional USM on the original version (on the left).
Using a fairly light USM I could get a very similar result; in
fact, I think the right-hand image looks a little processed to me.
Still, it's an interesting idea. Perhaps it's just a matter of
finding the right images.
--
http://www.pbase.com/timothyo

--
I am not an English native speaker!
Please email me at [email protected] for questions
http://www.pbase.com/zylen
http://www.photosig.com/go/users/userphotos?id=26918
 
Yes, I could see the embossed and recessed versions, but this to me only looks like a photo with sharpening. I don't see the depth effect on it like there was on the coin.
...look at the whole image...to get a sense of the depth. NOT just
sharpness.

It's hard to separate the overall sharpening from the fact that
some areas were sharpened more than others in the EQed images.
Those are both pretty good example images.

Remember that recent thread with the embossed coin that looked
concave or convex if you turned it 180 degrees? But not to
everyone? Maybe this effect is like that - we experience it
differently from each other...

Stan
--
I am not an English native speaker!
Please email me at [email protected] for questions
http://www.pbase.com/zylen
http://www.photosig.com/go/users/userphotos?id=26918
 
You probably didn't see the heading of this forum? Or are you just trolling?

P.S: I shot 100 pics today just to check out a few things and learn my new 300D. Didn't cost me one cent.
Nice, isn't it?

(I had a digital P&S for three years, and basically stopped using my EOS 500. So getting a digital SLR was a necessity to get back into SLRs.)

Take care, and don't be afraid to take the leap into digital.
Good article and interesting concept. I liked the demo picture of
the flamingoes.

So can I selectively sharpen the color channels to emulate the
technique?
--
Tim
--
I'm a poet and I didn't even realize it.
Learn the rules, then forget the rules.
http://www.pbase.com/intrinsic
 
Thanks for a good explanation!
The previous poster had me fooled there for a while! ;-)
Think of each pixel that makes up the final image. Each one of
these final pixels is made up of information from 3 channels. Now
if this final pixel is grey, no sharpening is required. On the
other hand, if this final pixel was red, then sharpening would be
required.

Now stay with me, this is the important part. If you sharpen each
individual color channel, how would the sharpening software know
which red pixels were contributing to a grey 'final' pixel (no
sharpening required) versus a red 'final' pixel (sharpening
required)?
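Just to make that concrete (a rough illustration only, not Mike Chaney's actual algorithm): per-channel USM can't tell gray from color, but you can approximate the idea by blending a sharpened copy back in according to per-pixel saturation, so near-gray pixels get little sharpening and strongly colored ones get more. The file name and amounts below are just placeholders.

```python
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("photo.jpg").convert("RGB")
sharp = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=0))

rgb = np.asarray(img, dtype=np.float32)
sat = (rgb.max(axis=2) - rgb.min(axis=2)) / 255.0   # crude saturation, 0..1
weight = sat[..., None]                             # ~0 for gray, ~1 for pure color

out = (1 - weight) * rgb + weight * np.asarray(sharp, dtype=np.float32)
Image.fromarray(out.clip(0, 255).astype(np.uint8)).save("color_weighted_sharpen.jpg")
```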
Good article and interesting concept. I liked the demo picture of
the flamingoes.

So can I selectively sharpen the color channels to emulate the
technique?
--
Tim
 
Same thing!

Aliasing in audio is the same thing as aliasing in a picture. A low-pass filter is needed to remove frequencies above half the sampling frequency. For a photo sensor, the sampling frequency is set by the pixel pitch (one sample per pixel), so you don't want any detail across the sensor finer (i.e. sharper) than about two pixel widths. Moiré patterns are examples of aliasing.

Aliasing in audio is not really a "temporal" thing; it is a "frequency-blending" thing. Aliasing would occur in an audio file on your hard disk if you were to let a program mathematically add a frequency above 22 kHz (half the usual 44.1 kHz sample rate).
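If anyone wants to see the fold-over for themselves, here is a tiny numpy sketch; the 30 kHz tone and 44.1 kHz rate are just example numbers.

```python
import numpy as np

fs, f = 44_100.0, 30_000.0            # sample rate and a tone above fs/2
n = np.arange(4096)
x = np.sin(2 * np.pi * f * n / fs)    # "sampling" without low-pass filtering first

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n.size, d=1.0 / fs)
print("strongest component near", round(freqs[spectrum.argmax()]), "Hz")  # ~14100 Hz, the alias
```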

I hope my explanation was understandable!
If I understand it correctly then, especially the bit about RAW
conversion, wouldn't this type of approach be the most effective if
done by the same people who wrote the "de-mosaic" algorithm, since
they would know exactly what kind of weighting is given to each
colour? Seems to me like the best spot to do this would be right in
C1 itself. I wonder if the built-in sharpening in C1 takes anything
like this into account? Methinks Michael Tapes and Co. should read
this article and see what they think...
I agree with your point, Neil. One issue with Chaney's method is
that many demosaicing algorithms already include some compensation
for the relative densities of the three color filters in the Bayer pattern.
Without knowing how an image was demosaiced, it's easy to under- or
overcompensate. At least this feature is classified by Mike as
experimental and includes a slider to control the magnitude.

Another issue I have with the article is the continued promotion of
the idea that color resolution of a Bayer sensor is somehow 1/2 of
what the pixel pitch would predict in each direction. This is the
same argument used to bolster the Foveon stacked-sensor scheme, but
isn't entirely accurate for two reasons.

Most real-world images don't exhibit high-frequency variations in
color. They may look like they do, but, if you convert them to LAB
mode and look at the luminance vs. the A and B channels, you'll see
that most of the "detail" is luminance detail and that the color
itself varies much more slowly. A color growing darker and lighter
rapidly can look like it's changing rapidly, but it's really just a
luminance variation.
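This is easy to check yourself. Here is a quick sketch with scikit-image and scipy (the file name is a placeholder): convert a photo to LAB and compare how much high-frequency energy lives in L versus the a/b channels. On most natural photos, L dominates by a wide margin.

```python
import numpy as np
from scipy.ndimage import laplace
from skimage import color, io

rgb = io.imread("photo.jpg") / 255.0
lab = color.rgb2lab(rgb)

# L, a and b are on roughly comparable numeric scales, so this is a fair
# if crude comparison of high-frequency "detail" per channel.
for name, chan in zip(("L", "a", "b"), np.moveaxis(lab, 2, 0)):
    detail = np.abs(laplace(chan)).mean()
    print(f"{name}: {detail:.3f}")
```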

Because the R, G and B filters have considerable spectral overlap,
each pixel generally records something even if the color doesn't
match its filter type. A red pixel will show some response to a
color that might look green to the eye, for example. This means
that luminance information can be found in every pixel. If you read
articles on demosaicing, what this amounts to is that the algorithm
has to pick apart the luminance and chrominance information, which
interfere with each other. An algorithm does this not by just
averaging together adjacent pixels of the same color, but by
involving all surrounding pixels in the calculation of the missing
color channels at any given pixel location. It's not a collapse of
four physical pixels into one output pixel, but rather the use of a
set of data to create another larger set of data that is, in some
sense, a best fit to the subset within the constraints of how a
natural image tends to behave. Good algorithms can, though,
achieve a luminance resolution of perhaps 80% of what you'd expect
from the pixel pitch.

Because of this, resolution charts that intentionally consist of
alternating stripes of different color misrepresent the way color
varies in the real world and exaggerate the problem. One can think
of a Bayer sensor as being a form of data compression, taking
advantage of the slowly-varying behavior of color across an image.
6Mpixels worth of raw sensor data ends up yielding a very high
quality full-color image in the end. A 3Mpixel Foveon sensor may
match the quality (somewhat less luminance resolution, somewhat
better color resolution), but requires 50% more data to be captured
and routed off the sensor chip.

As such, its success depends on whether or not it's recording the
type of images that it's expecting. If demosaicing algorithms are
designed around natural images, they won't work very well with
artificial charts. An analogy is JPEG. This compression scheme can
work quite transparently on natural images but produces far worse
results on, say, line art. Other compression schemes, such as RLE,
would work much better there. The success of a scheme depends on
optimizing for the intended data.

I think the biggest issue with the Bayer sensor is that it requires
a relatively aggressive antialiasing filter to prevent Moire
patterns, which demosaicing doesn't handle well and reproduces as
false color patterns. Once you put such a filter in place, though,
you are also blurring the green channel by the same amount, so I
don't know why Chaney's equalization scheme should even work.

David
 
Now you are talking about light frequency, which is colour. That is not the primary problem here.

The frequency at issue here is how quickly the luminance varies across the sensor.

Just picture the luminance as your typical audio sine wave, and the sensor, seen edge-on, as your time axis. Sure, it isn't time, but neither is it really "time" when you process an audio file on your computer (for example).

For a very sharp picture you need to allow the luminance to vary very rapidly across a given space. However, if that variation is finer than two pixel pitches (i.e. contains frequencies above half the sampling frequency), you get aliasing. Nyquist!
This brings to mind a question that has been bothering me. At one
time I was deeply involved in digital signal processing as used in
remote sensing and geophysics. Anti-aliasing referred to the
Nyquist frequency and sampling rate, i.e., multiple sampling of a
time-varying signal. How does this enter into a digital snapshot
which, I assume, is only one sample in time? Has aliasing assumed a
new orthogonal definition from what I used in signal processing?
Hi Jim,

No, the universe still works the same as always, but here's how
your experience and image capture differ. In your case, you were
sampling a 1D signal in the time domain. Such a signal could be
represented as a sum (actually, an integral for non-periodic
functions) of basis functions that take the form of sin(wt) and
cos(wt), or, if you prefer e^iwt and e^(-iwt). Here, w = 2*pi*f,
where f is frequency. In layman's terms, the signal is a
combination of different frequency sinusoidal pieces. For such a
signal, the Nyquist theorem specifies the maximum time interval
between regular-spaced samples of a band-limited (very important)
signal in order to be able to later reconstruct the signal from
the samples exactly.

In the image capture world, we're dealing with a 2D signal and
sampling spatially, not in time. Thus, the "frequencies" making up
the original analog image are spatial frequencies and the basis
functions are of the form [e^(i * kx * x)] * [e^(i * ky * y)]. That
is, they are pairwise products of sinusoids with two different
spatial frequencies, one for the x dimension and one for the y. Here, Nyquist
tells us the maximum interval in space (rather than time) for
regularly-spaced samples in order to reproduce a bandlimited image
exactly from the samples.


Just as with a 1D time domain signal, a 2D image has to be
bandlimited to ensure exact reproduction from the samples. One way
to bandlimit is to defocus the image or use a bad lens that does so
automatically. A better way is to have more precise control by
including an antialiasing filter in front of the sensor. Such a
filter produces a slight (spatial) low-pass filter function. The
consequence of not having such a filter is aliasing, or
higher-frequency components that appear in the image as though they
are low frequency. That is, you see "beat" frequencies or Moire
patterns in high-frequency detail. If this occurs in a Bayer
sensor, the patterns take on various false color hues, since they
differ depending on color (R and B have a different pixel pitch
from G). This is very undesirable. However, such patterns occur
equally in full-color sensors like the Foveon, where they are
less-objectionable luminance beat patterns rather than taking on
rainbow hues. Less objectionable doesn't necessarily mean
acceptable or invisible, however, which is why many people rail on
Sigma for not including an AA filter in the SD9 and SD10 in order
to add an additional level of sharpness through the unsafe practice
of "unprotected capture."

David
Thanks David.

I've worked in that realm also, just never thought about that
aspect of it. Now how a low pass filter is constructed would be
another interesting topic. I assume you are now in the frequency
domain, up at the visible end of the spectrum, not the frequencies
below about 1 kilohertz that I was working with?
Jim
 
Paul,

thanks for the article, it's an interesting idea, but I'm on a Mac and unable to try this guy's ideas (QImage appears to be a PC-thang)...

However, after reading this comparison: http://www.insflug.org/raw/analysis/dcrawvsfvu/crops.php3

I've been processing all the CRWs with dcraw, then doing the linear 16-bit color space conversion in ImageMagick (it's free, after all) and applying the color profiles in 16-bit depth with LCMS (also free), all driven by UNIX shell scripts, so I get batch processing without Photoshop CS (which isn't free)...
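In case it's useful to anyone, the batch-driving part can also be done from Python. This is only a sketch, assuming dcraw and ImageMagick's convert are on the PATH; the dcraw flags (-c write to stdout, -4 16-bit linear, -w camera white balance) are the usual ones but check your version, and the ICC/LCMS step is left as a placeholder since tool names and flags vary between setups.

```python
import subprocess
from pathlib import Path

for crw in sorted(Path("raw").glob("*.CRW")):
    ppm = crw.with_suffix(".ppm")
    tif = crw.with_suffix(".tif")

    # dcraw: -c write to stdout, -4 16-bit linear output, -w camera white balance
    with open(ppm, "wb") as out:
        subprocess.run(["dcraw", "-c", "-4", "-w", str(crw)], stdout=out, check=True)

    # ImageMagick: wrap the 16-bit PPM into a TIFF
    subprocess.run(["convert", str(ppm), str(tif)], check=True)

    # ...apply your ICC profile here (e.g. with an LCMS utility or
    # ImageMagick's -profile option), depending on your setup.
```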

anything remaining I do in Photoshop Elements...

I wonder if this color-based method of sharpening could be done in ImageMagick... or if it would even help the dcraw-processed images...

I can't even imagine whether he's simply masking by color and applying USM to an adjustment layer, or what... To figure out how to do this in IM would be fun, but I wonder if anyone could compare a dcraw-interpolated RAW and see if it even helps...

Thanks,

JBM
 
"This is why i shoot at -2 sharpness, it gives me the best options for output time"

Is that arrived at by testing, or do you know that the -2 setting is the same as turning off all sharpening (as opposed to introducing software softening)?

Thanks!
USM sharpening is used primarily for commercial halftone printing
(newsweek, playboy, artbooks), where overemphasizing the contrast
in certain parts of the image "restores" detail lost in the analog
process of printing and returns the human perception of the image
back to that of sharp photograph when viewed in a book or magazine.

Images displayed for the web don't need unsharp-mask-style
sharpening. Instead they benefit more from traditional computer
image sharpening, or what I refer to as "edge sharpening".

This is because USM sharpening is a very specific process that is
designed for high resolution images destined for print publication.
These sharpened bitmaps are never really viewed at 100% pixels on a
computer monitor except by the artist preparing the file for
printing.

A properly USM-style-sharpened image will actually look quite
strange when viewed at 100% pixels. When viewed in a magazine
(which has about 4x the resolution of a computer monitor) the
printed image will look relatively natural. That is to say that the
goal here is to make the final printed image look like the
UNSHARPENED image that the artist was seeing on his computer.

When applying USM to images destined to be viewed on the web at
100%, images can take on a blotchy, smeary effect that actually
softens and degrades the image.

The unsharp mask effect is a trick magazine printers have been
doing for decades which combines two negatives of the same image
(one of which is defocused or "unsharp") to make the image appear
more natural when viewed in the final magazine. It is an optical
"trick" that can be reproduced digitally by programs such as
Photoshop.

Different output sizes and media all need different amounts and
styles of sharpening. For example, a 3000x2000-pixel image printed
at 4x6 will need a lot of unsharp mask sharpening to look "sharp"
when printed, but that same image resampled to 600x400 for screen
viewing will need just a hint of edge sharpening to pop on the
screen.

This is why I shoot at -2 sharpness; it gives me the best options
at output time.

USM is really aimed at the commercial four-color halftone process
for magazines, where ALL continuous-tone bitmap images have to be
"oversharpened" to look more like the original image when output.

Technically this is true for inkjets and digital enlargers too, but
their higher resolution requires less sharpening than the
relatively low resolution of newspapers or magazines.

http://www.silvercrayon.com/workflow.html
 
Think of each pixel that makes up the final image. Each one of
these final pixels is made up of information from 3 channels. Now
if this final pixel is grey, no sharpening is required. On the
other hand, if this final pixel was red, then sharpening would be
required.

Now stay with me, this is the important part. If you sharpen each
individual color channel, how would the sharpening software know
which red pixels were contributing to a grey 'final' pixel (no
sharpening required) versus a red 'final' pixel (sharpening
required)?
Good article and interesting concept. I liked the demo picture of
the flamingoes.

So can I selectively sharpen the color channels to emulate the
technique?
If selectively sharpening color channels does not work, what is the detail paradigm that does work?

Tim
 
Um, that's me. Had plenty of coffee this morning so I was on that one pretty quick...
Wolff
I was about to post this on the Picture Flow site, and I see that
"turboquattro" already did. Michael Tapes says - will be
considered for a future version. Beat me to it.

Ok, which one of you is turboquattro?

Paul

--
'Nobody can forget the sound.'- Michele Mouton
 
thanks for the article, it's an interesting idea, but I'm on a Mac
and unable to try this guy's ideas (QImage appears to be a
PC-thang)...
Indeed. I wish it was presented as a method with instructions rather than a "black box" Qimage function.
linear 16-bit color space conversion in ImageMagick (it's free
after all)
I'm Macish too, 10.3, and haven't heard of that program. What's it for?

Tried the GIMP?
I can't even imagine if he's simply masking by color and applying
USM to an adjustment layer or what... ??? To figure out how to do
I imagine you could achieve something similar by using two levels of USM on two different layers, with a layer mask for the greens on one and a mask for the remaining hues on the other. Again, it would be nice if new methods were presented in such terms so they'd be generally useful without having to run yet another proprietary plug-in, or worse, yet another proprietary image processing application.

As for the guy talking about "regular" and "USM-style" sharpening, it sounds like what he's calling regular is USM with a mask to affect only edges. Or perhaps difference-of-Gaussians? Hard to tell from the explanation. Saying how to do "regular" sharpening in Photoshop, for instance, would help.

From what I know I believe all common sharpening is USM with masks. A common one is to limit the effect to white halos or dark halos only. The other major one is to limit it to edges so that noise is not sharpened in areas of continuous tone.

There's not much mathematical variety to sharpening methods that I know of. Edge detection methods, yes.

But if you really want the skinny quit listening to me and go read something by one of the founders of the field of digital image processing: http://www.drjohnruss.com/
 
Once converted out of RAW, the image is already in the RGB domain, and gray IS determined by the RGB channels. Only LAB and CMYK have a separate gray (Lightness/K) channel. I fail to see how the gray pixels avoid being affected by the sharpening process in RGB mode. Of course, you cannot sharpen R/G/B components in the other two modes.

My recent technique of choice for sharpening is as follow:
  • Complete editing the image.
  • Stamp visible to a new layer (create a new layer and press Sh+Ctl+Alt+E)
  • Desaturate the Layer.
  • Apply the High Pass filter (a radius of 1 to 3 works for me; experiment with your setting)
  • Change layer blend mode to Overlay
  • Create an action for the above steps (for a very mild sharpening)
  • Run this action on any image requiring sharpening.
I like this process for the following reasons:
  • The process is non-destructive to the image.
  • You can stack the action for more sharpening.
  • You can reduce the fill % for less sharpening.
  • A combination of the above should give you the precise amount of sharpening that you want.
  • You can select a copy of any of the R/G/B channels for the Overlay layer.
  • You can use mask for fine tuning on the Overlay layer.
I usually copy and paste the red channel into the Overlay layer. It has the least detail in skin complexion but plenty of other hard edges for the High Pass filter to pick up.
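For anyone working outside Photoshop, here is a rough numpy/scipy equivalent of those steps (desaturate a copy, High Pass it, Overlay-blend it back). It won't be byte-identical to Photoshop's result, and the radius/fill values are just starting points.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def highpass_overlay_sharpen(rgb_uint8, radius=2.0, fill=1.0):
    img = rgb_uint8.astype(np.float32) / 255.0
    gray = img.mean(axis=2)                           # "desaturate" the copy
    hp = gray - gaussian_filter(gray, radius) + 0.5   # High Pass around mid-gray
    hp = (1.0 - fill) * 0.5 + fill * hp               # lower fill = milder effect

    # Overlay blend of the high-pass layer over each channel of the original.
    hp = hp[..., None]
    low = 2.0 * img * hp
    high = 1.0 - 2.0 * (1.0 - img) * (1.0 - hp)
    out = np.where(img < 0.5, low, high)
    return (out.clip(0.0, 1.0) * 255.0).astype(np.uint8)
```

Calling it twice stacks the effect, just like running the action again, and lowering fill behaves like reducing the layer's fill percentage.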

Regards,
Alan
 
So how about using NIK sharpener in LAB mode.....I find that NIK
sharpener is often less brutal than USM.....and have read that
sharpening in LAB is best.....comments?

Spiritman :)
Each time you convert from one mode to another, the image degrades to some degree. It may not be a concern to many, but can be objectionable to the purists.
--
Alan
 
