Monitor resolution and pixel peeping: a conflict

Paul Belanger

Active member
Messages: 84
Reaction score: 21
Location: Lethbridge, Alberta, CA
Mac now has a 5K monitor; it should be really good, and most likely is.

One issue I have noticed with higher-resolution monitors, though, is the lower magnification one gets when using the 100% view to check details like noise and sharpness while editing.

Let me explain how I see it. Let's say one has a standard 1920x1080 monitor. When viewing the entire image there are artifacts, because the camera pixels cannot be matched to screen pixels on a one-for-one basis. So most if not all photo editing programs offer zoom views like 50%, 100%, 200%, etc. The 100% view crops the image in such a way as to present a pixel-for-pixel view on the screen: the software compares the pixel counts from the camera and on the screen, takes a portion of the camera image, and shows it pixel for pixel.

For example: my monitor has a resolution of 1680x1050, and my D90 produces images of about 4100x2800. So when I set the view to 100% to check details, I see only 1680x1050 of the total image, about 40% of the long side (roughly 15% of the image area). Using a D7100, at about 6000x4000, on the same monitor at 100%, I again see 1680x1050, which is a smaller portion of the image, about 28% of the long side (roughly 7% of the area); more magnification, in a sense. I suspect the software uses the long side of the image and lets the short dimension fall where it will.
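To make that arithmetic concrete, here is a minimal Python sketch of the fraction visible at 100%. The sensor dimensions are the approximate figures above, not exact specifications, and the function name is just illustrative:

[code]
# Fraction of an image visible at a 100% (pixel-for-pixel) view.
# Sensor sizes are the approximate figures from this post.

def visible_fraction(sensor_w, sensor_h, monitor_w, monitor_h):
    visible_w = min(sensor_w, monitor_w)
    visible_h = min(sensor_h, monitor_h)
    return (visible_w * visible_h) / (sensor_w * sensor_h)

monitor = (1680, 1050)
for name, (w, h) in [("D90", (4100, 2800)), ("D7100", (6000, 4000))]:
    print(name, f"{visible_fraction(w, h, *monitor):.1%}")
# D90   15.4%  (about 40% of the long side)
# D7100  7.3%  (about 28% of the long side)
[/code]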

As a general rule, as I understand it, the larger the camera resolution and the smaller the monitor resolution, the greater the magnification of your image at a 100% crop.

So when using a higher-resolution display like the newer 5K monitors, some cameras, like the D90, or any with a resolution of 5K or less, will show no magnification at 100%. In fact some images will not even fill the screen and may appear smaller than the available screen area.

Perhaps some enterprising software developer might consider using some form of binning on the monitor, presenting a lower effective monitor resolution, to provide a pseudo-100% detail view with greater magnification.
 
If the image is too small for your liking at 100% on a high-res monitor, why not simply go to a view greater than 100%?
 
I have had no experience using a monitor at such high resolution. A higher zoom percentage might work, but my question is: would a higher zoom on a higher-resolution monitor provide the one-for-one pixel matching automatically, or is there some special relationship embedded in the software, designed to reproduce the camera sensor image on a one-for-one basis, with no "artifacts" required to reproduce each sensor pixel? Some maintain that the 100% crop is a special case.

As I have been led to believe, using higher zooms like 200% or 400% introduces artifacts, as can be seen as you increase magnification on lower-resolution monitors: each camera pixel is represented as a little square made up of a number of monitor pixels, and everything becomes "blocky", like an image made out of Lego bricks. Also, if the camera pixel count and the monitor resolution entail an irrational ratio, it is difficult to get a perfect representation that is simple and uniquely one to one.

As proposed in my earlier post, 2x2 monitor binning would allow a one-to-one view where every sensor pixel is represented by a 2x2 pseudo-pixel, which on a high-resolution 5K monitor would simulate a 2560x1440 monitor, almost normal by today's standards. I would bet that the difference would be indiscernible.
 
If you simply choose a 2:1 (200 percent) view on a high-res monitor, you are effectively getting the binning you are talking about. Each pixel in the image will now be represented by four pixels on screen. The visual impression should be exactly the same as that from a 1:1 (100 percent) view on a same-size monitor with half the resolution of the high-res one.
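A tiny numpy sketch of that equivalence, assuming the viewer zooms by nearest neighbor: at 200%, each image pixel is simply replicated into a 2x2 block of identical monitor pixels.

[code]
import numpy as np

def zoom_nearest(image, factor):
    # Integer-factor nearest-neighbor zoom: every pixel becomes a
    # factor-by-factor block of identical monitor pixels.
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

img = np.array([[1, 2],
                [3, 4]])
print(zoom_nearest(img, 2))
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
[/code]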
 
Also, if the camera pixel count and the monitor resolution entail an irrational ratio, it is difficult to get a perfect representation that is simple and uniquely one to one.
Since the camera pixel count either horizontally or vertically is an integer, and the monitor pixel counts in the same directions are as well, their ratio is a rational number.
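A quick check with Python's fractions module, using the pixel counts quoted earlier in the thread:

[code]
from fractions import Fraction

# Any ratio of two integer pixel counts reduces to an exact fraction.
print(Fraction(6000, 1680))  # 25/7   (D7100 width : monitor width)
print(Fraction(4100, 1680))  # 205/84 (approx. D90 width : monitor width)
[/code]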

If the blockiness of quads of pixels at 200% bothers you, just back up a bit until you can't make them out. You have an optical low-pass filter in your head. With the right target, you can see how it works.



[attached image: a test target for seeing the eye's low-pass filter at work]



Jim

--
 
This does depend on the software being used, though. Lightroom and RawTherapee only offer full multiples of 100%, for the obvious reason that whole pixels can then be binned together linearly (via nearest neighbor, I assume). This is very usable on my 2560-pixel-wide screen and should be just as usable at larger magnifications on 5K screens.

SilkyPix, on the other hand, offers magnification steps in between multiples of 100%, and I don't know which resampling method it uses. If it's nearest neighbor, there should be no issues at full multiples.

What most people don't know is that their web browsers use bilinear or bicubic resampling when zooming pages. This is bad for pixel peeping, because even at full multiples the pixels are not simply replicated. The solution is to make your browser use nearest-neighbor resampling (done via the default CSS template in Firefox) and then use full multiples of 100% when checking images on web pages.

With current versions of Firefox you also need to watch out that it scales the 100% view up to whatever display magnification you have set in Windows. So with 125% scaling set in Windows, a 100% zoom view displays at 125% in Firefox too, even though it claims 100%. The solution is to compensate with a correspondingly smaller zoom level in Firefox and to use multiples of that level for zooming. Example: 100% / 125% = 80%, with multiples of 160, 240, 320 and so on. I am using an add-on that switches the image view in Firefox between 80% (= 1:1) and fit-to-window with a simple mouse click and zooms in full multiples via Ctrl-+/-/wheel.
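The compensation arithmetic as a small sketch (the function name is just illustrative):

[code]
def compensated_zoom_levels(os_scaling_percent, count=4):
    # Browser zoom that yields true 1:1 pixels under OS display scaling,
    # plus the integer multiples of it that stay pixel-exact.
    base = 100 * 100 / os_scaling_percent
    return [base * n for n in range(1, count + 1)]

print(compensated_zoom_levels(125))  # [80.0, 160.0, 240.0, 320.0]
[/code]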

--
Red flash eyes save lives and eye-sight!
http://en.wikipedia.org/wiki/Retinoblastoma
 
Hi Timur,

Sure. If you want to be sure that you are seeing the original pixels rather than resampled versions of them, you have to use full multiples of 100%. But as long as these are available (and they are), there should be no problem.
 
I expect all RAW converters to use nearest neighbor. Just make sure your browser allows you to switch off bilinear/bicubic (Firefox does, albeit in a rather complicated way), and also watch your other software (turn off "smooth" zooming in FastStone, for example).

I really don't want to know how many people pixel peep in their browser at multiples of 100%, not knowing that they are still just getting an interpolated (non-binned) image. ;)

--

Red flash eyes save lives and eye-sight!
http://en.wikipedia.org/wiki/Retinoblastoma
 
Yes, the browser thing is worth keeping in mind. That's not where I peep if I can avoid it, but nevertheless.
 
"Example, My monitor has 1680x1050 resolution, my D90 has about 4100x2800. So when I set 100% cropping to view details I see only 1680x1050 of the total image, about 1/3. Using a D7100 with about 6000x4000 on the same monitor at 100% I see, again, 1680x1050, which is a smaller portion of the image, about 1/4 or more magnification in a sense. I suspect the software uses the long side of the image and lets the short dimension fall where it will."

The way the resolution of Bayer sensors is reported differs from how monitor resolution is reported. On most monitors each pixel is 3 dots: one green, one red, one blue. A Bayer sensor with 36 megapixels has a spatial resolution of 18 MP in green, 9 MP in red, and 9 MP in blue. Its full-spectrum spatial resolution is only 9 MP. The D7100 has only 6 full-spectrum MP, 3000x2000. Your 1680x1050 monitor has about 1.8 million full-spectrum pixels (theoretically; in reality it covers only as much of the spectrum as its color gamut permits). That translates into 1.8 x 4 = 7.2 megapixels in Bayer-sensor terms, i.e., if we were to count monitor pixels the way we count Bayer-sensor pixels, we would say it is a 7.2 MP monitor.

The new Apple 5K monitor has a resolution of 5120x2880. To get the full potential out of it, you need a Bayer sensor with 4 x 5120 x 2880 = about 59 MP at the same aspect ratio, and a properly processed image. The ideal processing would convert each 2x2 Bayer quad into a single set of three RGB values: straight conversion, no demosaicing (and no lossy compression either). White balance, curves, gamma, etc., would still be applied as usual.

By the same argument, the 1920x1080 monitors in common use would need at least an 8 MP sensor for best results. With all the less-than-ideal processing that goes on, about 10-12 MP (Bayer) is probably what is needed to fill the screen with full-spectrum spatial resolution.
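A minimal sketch of that arithmetic, plus the "straight conversion" of one 2x2 Bayer quad to a single RGB superpixel (averaging the two greens is one simple choice, not the only one):

[code]
def bayer_mp_needed(monitor_w, monitor_h):
    # A monitor with w x h RGB pixels needs about 4*w*h Bayer
    # photosites for a 1:1 superpixel rendering.
    return 4 * monitor_w * monitor_h / 1e6

print(bayer_mp_needed(5120, 2880))  # ~59.0 MP for the 5K display
print(bayer_mp_needed(1920, 1080))  # ~8.3 MP for a 1080p display

def quad_to_rgb(r, g1, g2, b):
    # One 2x2 Bayer quad -> one RGB value, no demosaicing.
    return (r, (g1 + g2) / 2, b)
[/code]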
 
The way the resolution of Bayer sensors is reported differs from how monitor resolution is reported. On most monitors each pixel is 3 dots: one green, one red, one blue. A Bayer sensor with 36 megapixels has a spatial resolution of 18 MP in green, 9 MP in red, and 9 MP in blue.
I expect that the OP is looking at demosaiced, not raw, images.

Jim
 
One issue I have noticed with higher-resolution monitors, though, is the lower magnification one gets when using the 100% view to check details like noise and sharpness while editing.
The idea is to magnify the image until you can clearly see the pixels. There is nothing magical about 100%. Use 200% if you prefer. The higher the monitor resolution, the more you can magnify the image.
 
