Re: Pixel Density: when Moore is less

Started Jul 8, 2008 | Discussions
Wayne Larmon Forum Pro • Posts: 10,317
Re: How can we resample properly with Photoshop?

JensR wrote:

Is there a way to resample properly with Photoshop? If so, then what is the procedure?

Well, as has been said, use a lowpass filter first, then resize using Lanczos or bicubic and then add a bit of highpass filtering for visual sharpness.

Can you clarify this? Which tools in Photoshop correspond to low pass and high pass filters? I can guess that Gaussian blur could be used for low pass, but what radius should be used?

What PS tool is used for high pass filtering?

PS doesn't have Lanczos. It has Nearest Neighbor, Bicubic, Bicubic Sharper, and Bicubic Smoother. Adobe says to use Bicubic Sharper when downsampling and Bicubic Smoother when upsampling. Are you saying that we should skip Bicubic Smoother and Bicubic Sharper and instead use plain Bicubic?

(I have found that sometimes when I use Bicubic Sharper for downsampling and the image was exceptionally sharp to begin with, that the downsampled image ends up with artifacts. So I undo and downsample with Bicubic Smoother and then sharpen with USM to taste. I don't know exactly what the "Smoother" and "Sharper" components do. Adobe never defined this.)

BTW, I use CS3. I should have mentioned this in my previous post.

Wayne

halc Regular Member • Posts: 240
Re: Pixel Density: when Moore is less

Roland Karlsson wrote:

2. The mythical optimal PD
There is a myth that there exists some kind of optimal pixel density (PD). It
does not - which several posters here have shown. John Sheehy even
claims to have shown that with today's sensors more pixels are always better.
One special case of this myth is that lower PD is always better. It certainly is
not; one-pixel cameras are rather uninteresting for photography, and to
have VGA resolution in your mobile phone is no fun.

I haven't been following the discussion, but to me this makes no sense ("no optimal PD").

We can consider the extreme cases. At one end we have a 1-pixel sensor camera.

At the other end we have, theoretically, an infinite-pixel sensor camera, but to stay within the realms of reality, let's say 10,000 Mpixels.

Let's take a typical standard 1/2.x" sensor size.

Now. Clearly the 1-pixel camera is silly.

How about the 10,000 Mpixel one?

Considering that in many if not most lighting/exposure situations that camera would not be able to provide a single photon for each pixel sensor, clearly it would be useless. It would produce very little except noise. Yes, some clever statistical algorithms could perhaps be cooked up to analyze adjacent noise characteristics and then form a meta-pixel out of, say, each 10x10 block. But it would more than likely still show mostly just noise.

So, to me - based on the above - it seems obvious there has to be a range of optimal PD, depending of course on the intended purpose.

Just adding pixels infinitely to the same surface area isn't going to cut it.

At some point one runs out of electrons in the well for each sensor, and that's the absolute maximum theoretical useful density that I can think of; the real-world optimal density is likely to be reached long before that.

Care to comment?

As to the ideal measure for indicating pixel sensor performance per unit of surface area, I think the question is complicated. The pixel-density-related issues that I see affecting image quality, which is what counts, are:

  • pixel sensor area size (height, width, diagonal, whatever measure is preferred), perhaps even shape (may not be important now)

  • pixel sensor gap, or distance from pixel sensor to pixel sensor (it is after all possible to have exactly the same pixel sensor density but a differing pixel sensor gap)

  • sensor unit area (sensor unit being the whole imaging sensor of the device, not a single pixel)

  • pixel sensor geometry, that is, the arrangement pattern of the pixel sensors

Additionally, of course, there are issues perhaps not directly related to density, like the type/layers of sensors used, back-illuminated sensor designs, etc.

What would be a single good measure for this?

How about: the number of photons hitting each pixel sensor at a controlled light input to the sensor unit (or lens). That is, how many photons does each pixel sensor get, other things being equal?

If you enlarge the surface area of a pixel sensor -> more photons
If you reduce the pixel sensor gap, you increase the pixel sensor surface area

If you enlarge the sensor unit area without adding more pixels, you effectively increase the number of photons hitting each pixel sensor
Etc.

BTW, good luck calculating that
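
(Though, for a rough order of magnitude, a sketch is possible under heavily idealized assumptions -- monochromatic green light at 555 nm, 100% fill factor and quantum efficiency, no optics losses, and a made-up exposure. A Python sketch, all numbers illustrative:)

# Very rough sketch: photons per pixel per exposure, under the
# idealized assumptions stated above. All numbers are illustrative.
h = 6.626e-34                # Planck constant, J*s
c = 2.998e8                  # speed of light, m/s
wavelength = 555e-9          # m, peak of photopic sensitivity
photon_energy = h * c / wavelength     # ~3.6e-19 J per photon

# By definition, 1 lux of 555 nm light carries 1/683 W/m^2, so an
# exposure of H lux-seconds delivers H/683 J/m^2 at the sensor.
exposure_lux_s = 0.1         # ballpark sensor-plane exposure (~ISO 100 mid-tone)
photons_per_um2 = (exposure_lux_s / 683.0) / photon_energy * 1e-12

pixel_pitch_um = 1.5         # hypothetical small-sensor pixel
print(photons_per_um2 * pixel_pitch_um ** 2)   # ~900 photons per pixel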

halc Regular Member • Posts: 240
Ah, old thread - much good discussion, ignore above

Just noticed this was a much longer and older thread. Now, reading all the replies, I've learned a lot. Thanks!

Malcolm Practice Contributing Member • Posts: 593
Re: Ah, old thread - much good discussion, ignore above

halc wrote:

Just noticed this was a much longer and older thread. Now, reading all the replies, I've learned a lot. Thanks!

Yes, it was a good thread. I was going to respond to your earlier post, saying you'll find all the answers in the thread, but you have already discovered this.
--
Mal

Jonson PL Veteran Member • Posts: 3,600
Re: Pixel Density: when Moore is less

JensR wrote:

The only difference is that the D40x need not be resampled to deliver this same level of noise; it merely needs to be viewed at equal size.

No, to get the full noise-reducing benefit of viewing at a smaller size, you'd need to filter and resize using a good quality algorithm. dpreview tried to prove how ineffective downsampling is but did not include the filtering step.
http://blog.dpreview.com/editorial/2008/11/downsampling-to.html

Oh yes, there is one additional difference -- the D40x has 30% higher resolution while having that same level of noise.

There is no free lunch in downsampling. You cannot have the resolution of the high MP image and the noise of the low MP one at the same time. If you want the low noise, you will have to reduce the resolution with filtering.

So, I can imagine the route by which one arrives at the conclusion that the megapixel race is a bad thing; it is a viewpoint entirely reinforced by this site's testing methodology, which concentrates on pixel-level noise without (say) rescaling it to noise as a percentage of frame height.

Indeed, "noise per print area" would make much more sense than "noise per pixel".
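
(The statistics behind that are easy to check: averaging NxN blocks of uncorrelated noise cuts its standard deviation by a factor of N. A toy sketch in Python with NumPy, purely illustrative:)

# Toy check: block-averaging NxN pixels (a crude downsample with an
# ideal boxcar lowpass) cuts uncorrelated noise by a factor of N.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 10.0, size=(1024, 1024))   # white noise, std 10

n = 2
binned = noise.reshape(1024 // n, n, 1024 // n, n).mean(axis=(1, 3))
print(noise.std(), binned.std())   # ~10.0 vs ~5.0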

Is it easy to do the resampling

Well, you need to match filtering and downsampling....

or is it done automatically, if one views images at equal size?

This depends on the application you use. An old web browser will downsize with some bad algorithm. A newer one will do better, IrfanView will be better still, and Lightroom in its "compare mode" will be better again. None of them uses matched filtering; you'd have to do that yourself.

Hi Jens, thanks for coming to the rescue again. And also thanks to Ola.

For a low-tech person like myself, who doesn't care for much PP, I guess a lower-MP camera like the D700 would be the best option for me. (Not that I'm planning on buying at the moment.)
So I guess I'm more in agreement with the response from Graystar:

Graystar wrote:
Well if I have to process the hell out of an image just to get it to
look like it came from big pixels, I’d rather just start out with the
big pixels and save the trouble.
http://forums.dpreview.com/forums/read.asp?forum=1018&message=28564326

--

kind regards
Sune

JensR Forum Pro • Posts: 17,950
Re: How can we resample properly with Photoshop?

Hi Wayne!

I do not use PS, so my terminology might be a bit off.

Can you clarify this? Which tools in Photoshop correspond to low pass and high pass filters?
What PS tool is used for high pass filtering?

I was pretty sure they are called Filter -> "Low Pass" and "High Pass". If that's not the case, we ask Google:

http://photoshoptutorials.ws/photoshop-tutorials/photo-retouching/airbrushing-natural-smooth-skin.html
http://www.photoshopessentials.com/photo-editing/sharpen-high-pass/

I can guess that Gaussian blur could be used for low pass, but what radius should be used?

Any that softens the image enough to reduce aliasing after downsizing to the desired degree. (Yes, that is a vague answer, as the radius will depend on the scene, the resize factor and your tolerance to aliasing).
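
If you want to experiment outside of PS, here is a minimal sketch of the whole lowpass -> resize -> sharpen pipeline in Python with Pillow. The blur-radius rule of thumb and the unsharp-mask settings are rough starting guesses of mine, not recommended or endorsed values:

# Minimal sketch of lowpass -> downsize -> sharpen, using Pillow.
from PIL import Image, ImageFilter

def downsample(img, factor):
    # Lowpass first: blur scaled to the shrink factor, so detail finer
    # than the new pixel grid is attenuated before resampling (anti-alias).
    img = img.filter(ImageFilter.GaussianBlur(radius=factor / 2.0))
    img = img.resize((img.width // factor, img.height // factor),
                     Image.Resampling.LANCZOS)   # Pillow >= 9.1
    # Then a touch of highpass-style sharpening for visual crispness.
    return img.filter(ImageFilter.UnsharpMask(radius=1.0, percent=60, threshold=2))

small = downsample(Image.open("photo.jpg"), factor=4)
small.save("photo_small.jpg", quality=92)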

PS doesn't have Lanczos. It has Nearest Neighbor, Bicubic, Bicubic Sharper, and Bicubic Smoother. Adobe says to use Bicubic Sharper when downsampling and Bicubic Smoother when upsampling. Are you saying that we should skip Bicubic Smoother and Bicubic Sharper and instead use plain Bicubic?

Both the Bicubic Sharper and Smoother algorithms will usually be "okay" - for a given shot, though, one might be better than the other. Most seem to agree that simple Bicubic is inferior. Other software titles only have one "bicubic", which is probably different from Adobe's definition.

IrfanView (freeware) offers Lanczos, if you want to try. Generally, for downsizing the algorithm isn't as important as for upsizing, so you might decide it's not worth worrying about.

I remember there being free Lanczos plugins for PS, but I might be wrong. I found only non-free ones.

(I have found that sometimes when I use Bicubic Sharper for downsampling and the image was exceptionally sharp to begin with, that the downsampled image ends up with artifacts. So I undo and downsample with Bicubic Smoother and then sharpen with USM to taste. I don't know exactly what the "Smoother" and "Sharper" components do. Adobe never defined this.)

I guess one is sharper, while the other is smoother

BTW, I use CS3. I should have mentioned this in my previous post.

Makes little difference for me, as I know neither.
Maybe someone with CS experience can chime in.

Cheers
Jens

-- hide signature --

'Well, 'Zooming with your feet' is usually a stupid thing as zoom rings are designed for hands.' (Me, 2006)
My Homepage: http://www.JensRoesner.de

BJL Veteran Member • Posts: 9,164
pixel density equally uncertain: how about linear pixel density, per mm?

Phil Askey wrote:

... many of the things people have been 'asking for' ... are simply unavailable (such as exact pixel pitch). We do provide sensor dimensions where available (as you will note from the camera database).

For compatibility with the long-standing measurement of resolution in linear terms like line pairs per mm, I would suggest a middle way: linear pixel density, in pixels per mm, the reciprocal of pixel pitch. This could also address your concern that pixel pitch should be stated exactly or not at all, while accepting some margin of error in your published pixel density calculations. The inaccuracy in each case comes from the same source: you do not always know the exact area of the part of the sensor that contains the stated pixel count, since you often have only sensor dimensions for a region that includes additional photosites not corresponding to output pixels.

As one example, you seem to compute pixel density for 4/3 sensors by dividing the stated effective pixel count by the area given by the nominal 13.5x18mm dimensions of the 4/3" format, whereas the output pixels in fact all come from a region of about 13x17.3mm in Four Thirds sensors (except for the wider-format GH1 sensor).

However, linear density in "pixels per mm" might emphasize too much the advantages of higher density (higher resolution) rather than its alleged disadvantages (worse per-pixel noise, at least when viewed at the huge magnification of 100% on screen), which seems to be what pixel density was invented to measure.

Aside: pixel pitch is by definition the spacing from one photosite (cell) to the next, including the space taken by any wiring between the electron wells and such, so there are indeed exact formulas for converting between "pixel density" (pixels per unit area), "linear pixel density" (pixels per mm in either direction), and "pixel pitch"; for square photosites anyway:

  • pixels per mm = sqrt(pixels per square cm) / 10

  • pixel pitch in microns = 1000/(pixels per mm)
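
For concreteness, the two conversions as code (Python; the 10 MP on a 13x17.3mm active area below is just an illustrative Four Thirds-like example taken from the figures above, not a measurement):

import math

def pixels_per_mm(pixels_per_cm2):
    # linear density is the square root of area density (per cm), /10 for mm
    return math.sqrt(pixels_per_cm2) / 10.0

def pitch_um(px_per_mm):
    # pitch is the reciprocal of linear density, converted to microns
    return 1000.0 / px_per_mm

density = 10e6 / (1.3 * 1.73)     # pixels per cm^2, ~4.4 MP/cm^2
linear = pixels_per_mm(density)   # ~211 pixels per mm
print(linear, pitch_um(linear))   # ~211 px/mm, ~4.7 micron pitch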

 BJL's gear list:
Olympus E-1 Olympus E-620 Olympus OM-D E-M5 Olympus Zuiko Digital 14-54mm 1:2.8-3.5 Olympus Zuiko Digital ED 50-200mm 1:2.8-3.5 SWD +8 more
Malcolm Practice Contributing Member • Posts: 593
Re: Pixel Density: when Moore is less

Jonson PL wrote:

For a low-tech person like myself, who doesn't care for much PP, I guess a lower-MP camera like the D700 would be the best option for me. (Not that I'm planning on buying at the moment.)
So I guess I'm more in agreement with the response from Graystar:

Graystar wrote:
Well if I have to process the hell out of an image just to get it to
look like it came from big pixels, I’d rather just start out with the
big pixels and save the trouble.

Except that you don't have to 'process the hell out of an image'; all you have to do is resample it to the output resolution that you need, which is something you would always do in any case, unless you're one of those people who always prints at whatever size their camera gives at 300dpi. Critical users generally resample using PP tools in any case, because you can't trust printer drivers to do a good job.

--

Mal

PIXmantra Senior Member • Posts: 1,637
This is *completely* false...

Malcolm Practice wrote:

Except that you don't have to 'process the hell out of an image'; all you have to do is resample it to the output resolution that you need,

...There is no such thing as simply resampling or, in plain terms, wasting pixels and assuming that fewer pixels are needed by the end user.

Furthermore, you are discarding the effective noise spectrum of the final RGB conversion: if it is strong or has visible mid- and low-frequency components, these will SURVIVE ANY downsampling you perform, which then puts you back at the Noise Reduction bench as mandatory before even considering any DPI-intensive use of the file.

The above also puts you in another seat, which is being forced to use a RAW conversion algorithm extremely biased towards high frequencies, so that when you squeeze the file, either by higher DPIs or simply by discarding pixels, you are in a better position to hide the noise effects thanks to the finer-pitched spectrum... Well, this horse also has a problem, because mid and mid-to-high response is poor, and the noise signature is so finely pitched that it appears "stuck" to fine detail, immediately transferring a "contaminated" look to the file's appearance.

In short, your recipe pretty much implies lower inherent quality with higher pixel-count files, and (incorrectly) assumes a pre-defined, arbitrary use/intent for such pixels.
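
(For what it's worth, the kernel of the claim above -- that spatially correlated, low-frequency noise survives downsampling while fine-grained noise averages away -- can be illustrated with a toy NumPy/SciPy sketch; the numbers are synthetic, not from any camera:)

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
fine = rng.normal(0.0, 10.0, size=(1024, 1024))   # per-pixel noise
coarse = gaussian_filter(fine, sigma=8)           # low-frequency "blotches"
coarse *= fine.std() / coarse.std()               # equalize both to std 10

def bin2(img):                                    # 50% downsample by 2x2 mean
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(bin2(fine).std())     # ~5: the fine-grained noise is halved
print(bin2(coarse).std())   # ~10: the low-frequency component survives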

PIX

--

'What we do in life, echoes in eternity...'

Malcolm Practice Contributing Member • Posts: 593
nope, it's true...

PIXmantra wrote:

Malcolm Practice wrote:

Except that you don't have to 'process the hell out of an image'; all you have to do is resample it to the output resolution that you need,

...There is no such thing as simply resampling or, in plain terms, wasting pixels and assuming that fewer pixels are needed by the end user.

True, if you're selling images on to someone else who will do the PP for themselves. Usually those people want as many pixels as they can get, and for you to have messed with the image as little as possible yourself. They will process it as they need for their distribution medium, whatever it be.

Furthermore, you are discarding the effective noise spectrum of the final RGB conversion: if it is strong or has visible mid- and low-frequency components, these will SURVIVE ANY downsampling you perform, which then puts you back at the Noise Reduction bench as mandatory before even considering any DPI-intensive use of the file.

The above also puts you in another seat, which is being forced to use a RAW conversion algorithm extremely biased towards high frequencies, so that when you squeeze the file, either by higher DPIs or simply by discarding pixels, you are in a better position to hide the noise effects thanks to the finer-pitched spectrum... Well, this horse also has a problem, because mid and mid-to-high response is poor, and the noise signature is so finely pitched that it appears "stuck" to fine detail, immediately transferring a "contaminated" look to the file's appearance.

In short, your recipe pretty much implies lower inherent quality with higher pixel-count files, and (incorrectly) assumes a pre-defined, arbitrary use/intent for such pixels.

My recipe pretty much implies a real world where photographers distribute their images using a variety of media, each of which has different resolution requirements, and therefore every workflow involves resampling at some point, whether done explicitly in PP or implicitly in printer and display drivers.

Edit: Probably not worth responding to the rest of your meaningless technobabble; it will only result in another few threads in which the world and his dog bang their heads against your ego.
--
Mal

PIXmantra Senior Member • Posts: 1,637
The fact that you *do not understand it*...

Malcolm Practice wrote:

Edit: Probably not worth responding to the rest of your meaningless technobabble, it will only result in another few threads in which the world and his dog bang their heads against your ego.

...does not mean, at all, that it is "technobabble". Simply saying "I do not understand what you are describing" would make you stronger and better, though.

By simply discarding/ignoring the resulting noise spectrum of your RAW CONVERSION (because NO ONE around here prints RAW data and sends it as such to their customers), you have already demonstrated that you are not grasping the actual implications of your "downsampling"/"wasting" recipe.

And here is a perfect example of how a low-frequency spectrum more than survives a downsampling and is still very visible, even when discarding 50% of the pixels:

Left 1D3, center D3 and right D900 (using the on-board conversion engine, all at ISO 6400):

http://www.pbase.com/feharmat/image/102913073/original
http://www.pbase.com/feharmat/image/102913074/original

HINT: don't bother answering with "well, that's not from RAW", as you will instantly acknowledge my point, because all you are doing is shifting to an alternate conversion engine that produces a much different noise spectrum (not so dominated by such a strong mid-to-low-frequency component).

See you at the finish line,

PIX

--

'What we do in life, echoes in eternity...'

Malcolm Practice Contributing Member • Posts: 593
My hat does understand it as well as you

PIX, I really don't want to pollute what was a very valuable and informative thread with an interminable argument based on PIX semantics and PIX bogus comparisons. Suffice it to say that the necessity of resampling is there in practically every current workflow that produces real output media, so 'resampling' is not an additional stage that needs to be gone through for a higher-density camera as opposed to a lower-density one. In that context, your argument about the differential effects of raw processing is pretty much irrelevant.

However, I would add as an observation that if resampling were built into the raw development process, as opposed to being a separate stage in the workflow, it is likely that even better results could be obtained by optimising the conversion for both the input and output sample rates.

PIXmantra wrote:

Malcolm Practice wrote:

Edit: Probably not worth responding to the rest of your meaningless technobabble; it will only result in another few threads in which the world and his dog bang their heads against your ego.

...does not mean, at all, that it is "technobabble". Simply saying "I do not understand what you are describing" would make you stronger and better, though.

By simply discarding/ignoring the resulting noise spectrum of your RAW CONVERSION (because NO ONE around here prints RAW data and sends it as such to their customers), you have already demonstrated that you are not grasping the actual implications of your "downsampling"/"wasting" recipe.

And here is a perfect example of how a low-frequency spectrum more than survives a downsampling and is still very visible, even when discarding 50% of the pixels:

Left 1D3, center D3 and right D900 (using the on-board conversion engine, all at ISO 6400):

http://www.pbase.com/feharmat/image/102913073/original
http://www.pbase.com/feharmat/image/102913074/original

HINT: don't bother answering with "well, that's not from RAW", as you will instantly acknowledge my point, because all you are doing is shifting to an alternate conversion engine that produces a much different noise spectrum (not so dominated by such a strong mid-to-low-frequency component).

See you at the finish line,

PIX

--

Mal

Kawika Nui Contributing Member • Posts: 799
Re: Pixel Density: when Moore is less

ejmartin wrote:

Phil Askey wrote:

Yes, you're right, at least half of the reason for doing this is to
'expose' or 'put pressure' on the manufacturers to think about the
terrible tradeoffs that the continuing megapixel race (mostly in
compact cameras) is having on image quality.

Ah, the implicit indictment becomes explicit. Phil, could you please explain what these terrible tradeoffs are, and what quantitative measures you are using to draw such conclusions?

The other half (and to
be fair the initial idea) is to provide people with some method of
reference,

As many have tried to explain in the recent threads, a misleading one. Your real complaint is about small sensors, and so the better figure of merit to be reporting is the sensor area in square cm. The testing methodology used in your reviews results in comparisons of small pixel images at large magnification to large pixel cameras at small magnification, even if the two cameras in question have the same sensor size. The laws of physics guarantee that the higher pixel density camera will appear worse in this comparison, even if it is capturing the same number of photons as the other one.

Your ploy

Oh, please!

has induced a number of people to produce some nice tests of whether pixel density is the problem or sensor size is the problem. When pixel size is held constant and only the equivalent of sensor size is varied, one finds the following:

http://forums.dpreview.com/forums/read.asp?forum=1018&message=28565416
http://forums.dpreview.com/forums/read.asp?forum=1018&message=28570891

For the background on where these two images come from:

http://forums.dpreview.com/forums/read.asp?forum=1018&message=28563024

The nicer looking images come from the equivalent of a big sensor with small pixels, the worse looking ones from a small sensor with the same small size pixels. When all other variables are controlled for, it becomes clear that the problem with image quality is not the small pixels, it's the small sensor they are put into.

Not so. For example, read

http://www.clarkvision.com/articles/does.pixel.size.matter/

and

http://www-isl.stanford.edu/~abbas/group/papers_and_pub/pixelsize.pdf

and

http://www.outbackphoto.com/dp_essentials/dp_essentials_02/essay.html

This demonstration reveals how misleading pixel density can be as a figure of merit, since two images can be generated from the same pixel density that have remarkably different image quality. So how can pixel density be a guide to image quality? Instead, in this example it is purely the implicit sensor area that is the difference, since for the nicer looking image to be the same percentage of the sensor area as the nasty looking one, it would have to come from a sensor 35 times larger in area.

and if nothing else to inspire some thought and
conversation about the issue (I think we've achieved that).

Yes you have. I hope you take to heart what some of the more astute forum participants

I hope you are not including yourself in this category. Snide does not equal astute, and some of your assertions are totally unsupportable (see references above).

have contributed on this subject.

simon2001 Regular Member • Posts: 461
Re: Pixel Density: when Moore is less

I am very new to photography, having bought my first camera, a Sony HX400V, a few months ago. But now I am hooked and looking forward to getting my hands on some more capable equipment.

What has struck me about the number of pixels on my bridge camera's sensor is that to put 20 million of them (it's a 20-megapixel sensor) in a sensor 6.7mm long by just over 4mm wide, the dimensions per pixel become comparable to the wavelengths of visible light. What I mean is that the dimensions are of the order of a few wavelengths of green light, according to my quick calculations. This seems to me to be pushing the boundaries of physics. If we think of light as a wave, then surely we need to measure it over many wavelengths.
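
(A quick back-of-envelope check of that in Python, using the rough sensor dimensions stated above; the exact numbers don't matter much:)

sensor_w_mm, sensor_h_mm = 6.7, 4.1   # rough figures as stated above
pixels = 20e6                         # 20 megapixels

pitch_mm = (sensor_w_mm * sensor_h_mm / pixels) ** 0.5
pitch_um = pitch_mm * 1000            # ~1.2 micron pixel pitch
green_um = 0.55                       # wavelength of green light
print(pitch_um, pitch_um / green_um)  # pitch is only ~2 wavelengths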

If we try to measure over too small a distance to say for sure whether it's a wave or not, then we must be creating uncertainty in those measurements, i.e., in the conversion of light into an electric response which allows the sensor to know for sure that a wave of green or red or blue light just hit it. Uncertainty at the basic physics level would definitely turn into noise at the engineering/photography level.

In fact, if the pixel sizes became much smaller, then would they not be equivalent to some kind of single-photon trap... something that doesn't seem feasible to me, at least for what I could afford to pay.

So, it seems to me that small sensor sizes with high pixel densities run the risk of becoming self-defeating.

I could of course be totally wrong.

TrojMacReady Veteran Member • Posts: 8,729
Re: Pixel Density: when Moore is less

Many of the sources in the articles you linked are outdated and based on assumptions built on singled-out (old) technology. Math based on outdated sensors cannot be extrapolated to all cameras today. For example, Clark states that small pixels yield less DR for the same sensor size.

Practice tells us there is no such correlation, at least for ILC-sized sensors, except at high ISOs. See Sensorgen for example.

The cutoff point where the above no longer applies moves to a smaller and smaller pixel size every year. Which explains why the "optimal" 6.5 microns mentioned in the paper in the second link is wholly outdated too (2000?), as even predicted in the very paper itself.

"We found that the optimal pixel size scales with technology, albeit at slower rate than the technology"

Skipper494 Forum Pro • Posts: 11,262
Re: Pixel Density: when Moore is less

Who's Moore?
