f-number equivalent between m43 and FF

Started Mar 25, 2014 | Discussions thread
crames Regular Member • Posts: 192
Re: Noise is good?

Jack Hogan wrote:

crames wrote:

Jack Hogan wrote:

Well, if you upsample it all the way to one single pixel (independently of upsampling algorithm chosen) you ideally get a pixel with a value equal to the average of all the others. If you then display this single pixel at the same size as your longer fl image above (the larger one), the SNR you will perceive will be entirely due to the setup and physical characteristics of your display medium, having severed any ties to the SNR of The Signal. No?

Downsample it all the way to one single pixel, yes.

Fair enough, would it change if we downsampled/upsampled to 10x10 pixels and displayed at the same size? IMHO the downsampled image would be more representative than the upsampled one, but I can't quite verbalize why. The fact that the noise is uncorrelated/representative in one but not in the other is certainly one piece of the puzzle, as dwalby says. The other is posterization/quantization.

Is posterization/quantization an issue with the upsampled image I showed upthread? I don't think so:

[Image: 520% upsample]

And this is 8-bit sRGB. The original is in 16 bits.
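If it helps, here is the sort of check I have in mind, as a minimal Python/NumPy/Pillow sketch (the file name and patch coordinates are placeholders, not my actual image): look at a nominally smooth patch, count the distinct levels, and see whether the histogram is comb-like. Heavy posterization shows up as only a handful of occupied levels with empty bins between them.

```python
# Rough posterization check on a nominally smooth patch of an 8-bit image.
# Assumes Pillow and NumPy are installed; "upsampled.png" is a hypothetical file name.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("upsampled.png").convert("L"), dtype=np.uint8)

# Pick a patch that should be a smooth gradient or uniform area (placeholder coordinates).
patch = img[100:300, 100:300]

levels = np.unique(patch)
hist, _ = np.histogram(patch, bins=256, range=(0, 256))

# A posterized patch uses few distinct levels and leaves empty bins ("comb" histogram)
# between the levels it does use.
occupied = np.count_nonzero(hist[levels.min():levels.max() + 1])
span = int(levels.max()) - int(levels.min()) + 1
print(f"distinct levels: {levels.size}, occupied bins in range: {occupied}/{span}")
```

If nearly all the bins in the patch's range are occupied, the 8-bit output isn't visibly quantized there.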

I think by now we all understand how downsampling affects the IQ parameters we are used to dealing with. With upsampling, on the other hand, we (I) haven't quite figured out how to factor in the additional ones that we normally assume away when analyzing images - like posterization/quantization - because cameras and day-to-day viewing conditions assume that we never look at an upsampled image. 'Q: How big can I print an image from my 16MP DSC? A: How far are you going to look at it from?'

Upsampling can improve the appearance of some images, but downsampling is probably the more common case.

For instance, Janesick shows that in order for quantization to represent an error of less than 1% we need to analyze a uniform portion of the image of at least about 20,000 pixels: that's 100x200 pixels. That's almost never true when we view images 'normally', with typically only 5-15 pixels in the eye's CoC. So there must be some other criterion, much lower than 20,000 pixels, for excellent IQ as far as the HVS is concerned.

Don't know the context of what Janesick is saying.

I am not suggesting that this be part of a workflow (most images are purposely not enlarged to the point that a displayed pixel is larger than the human CoC when projected on the retina, otherwise posterization becomes visible and objectionable). I am simply saying that comparing a visibly posterized image to one that is not is unfair and not apples-to-apples. In that case, adding noise is one way to level the playing field.

GB's argument has to do with photon counts and photon and read noise. These should all be measurable. Visibility of noise is a completely different can of worms that IMO is best kept out of the discussion or made into a separate one.

Back to GB's experiment, what about upsampling increasing (measurable) photon noise in an image?
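For what it's worth, here's a rough sketch of what I'd expect the measurements to show - Python/NumPy with simulated Poisson photon noise on a uniform patch, not GB's actual raw data or procedure:

```python
# Simulate a uniform patch with photon (Poisson) noise, then compare measured
# per-pixel SNR after resampling. A sketch only, not GB's actual experiment.
import numpy as np

rng = np.random.default_rng(0)
mean_photons = 1000
patch = rng.poisson(mean_photons, size=(200, 200)).astype(float)

def snr(x):
    return x.mean() / x.std()

# 2x downsample by averaging 2x2 blocks: four uncorrelated samples per output
# pixel, so measured per-pixel SNR improves by about sqrt(4) = 2.
down = patch.reshape(100, 2, 100, 2).mean(axis=(1, 3))

# 2x nearest-neighbour upsample: every pixel is duplicated, so the measured
# per-pixel std is unchanged; the noise has simply become spatially correlated.
# (An interpolating upsampler would lower the measured std a little by
# averaging neighbours, but that is smoothing, not extra photons.)
up = np.repeat(np.repeat(patch, 2, axis=0), 2, axis=1)

print(f"original    SNR: {snr(patch):.1f}")   # ~sqrt(1000) = 31.6
print(f"downsampled SNR: {snr(down):.1f}")    # roughly double the original
print(f"upsampled   SNR: {snr(up):.1f}")      # same as the original
```

So in this toy version the upsample doesn't increase the measurable photon noise per pixel; it just makes it correlated, which is a different thing from the display-size question we were arguing about.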
