How does L/M/S image size work?

Vincent1960

Most cameras let you choose an image size, typically Large, Medium, or Small.

Question 1: How is this reduction achieved, for instance on Fujifilm cameras? Binning of pixels? Compression of the whole file?

Question 2: Is a Medium-size 50 MP picture from one camera generally the same quality as the Large-size 50 MP picture from another model? (NB: assuming both cameras' sensors have the same pixel size.)

Or does lowering image size imply a loss of quality in addition to lowering pixel count?

Many thanks for your insights!
 
Most cameras let you choose an image size, typically Large, Medium, or Small.

Question 1: How is this reduction achieved, for instance on Fujifilm cameras? Binning of pixels? Compression of the whole file?
I’m sure it varies between manufacturers, but best practice isn’t pixel binning but rather mathematical interpolation: a smooth function is fitted to the existing pixel values and evaluated at the new sample positions to estimate the output pixels.
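To make that concrete, here is a minimal sketch of bilinear interpolation (one of the simplest smooth resampling methods), written from scratch in NumPy. It is an illustration of the general idea, not any manufacturer's actual algorithm; real cameras likely use more sophisticated kernels.

```python
import numpy as np

def bilinear_resize(img, new_h, new_w):
    """Resize a 2-D array by sampling at fractional source coordinates
    and blending the four nearest pixels (bilinear interpolation)."""
    h, w = img.shape
    # Map each output pixel back to a (possibly fractional) source coordinate.
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]  # vertical blend weights
    wx = (xs - x0)[None, :]  # horizontal blend weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

img = np.arange(16, dtype=float).reshape(4, 4)
small = bilinear_resize(img, 3, 3)  # 4x4 -> 3x3: not a whole-number ratio
```

Note that, unlike binning, nothing here restricts the output to integer fractions of the input size; any target size works.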



Pixel binning, together with decimation (another technique, which simply drops intervening pixels), can be considered the two extremes of a continuum of resampling methods. Binning softens an image considerably, while decimation makes an image look unrealistically sharp and introduces aliasing artifacts. A smooth interpolation neither softens the image too much nor causes aliasing. These two extreme methods also can’t reduce image size by an arbitrary amount: you can only bin 2x2, 3x3, 4x4, etc. pixels, so you can’t reduce the size by, say, 10%; likewise with decimation.

Cameras also provide varying amounts of JPEG compression, which is a different thing altogether, and the amount of JPEG compression applied is not related to resolution.
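You can see that independence directly: the snippet below (assuming the Pillow library is available) saves the same image at two JPEG quality settings. The pixel dimensions never change; only the file size and the severity of compression artifacts do.

```python
from io import BytesIO

import numpy as np
from PIL import Image

# A noisy 64x64 RGB test image (noise compresses poorly, so the
# effect of the quality setting on file size is easy to see).
img = Image.fromarray(
    np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
)

sizes = {}
for quality in (30, 95):
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    sizes[quality] = buf.tell()  # bytes written, resolution unchanged
```

Here `quality=95` produces a much larger file than `quality=30`, yet both JPEGs are still 64x64 pixels: compression level and resolution are separate controls.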
Question 2: Is a Medium-size 50 MP picture from one camera generally the same quality as the Large-size 50 MP picture from another model? (NB: assuming both cameras' sensors have the same pixel size.)
Any interpolation is just a mathematical estimate, and will provide less accurate image data than directly captured data. But lens quality and shooting technique will likely have a greater effect on the final result.

However, because the color filter array's output is itself interpolated (demosaiced) to provide full color data for each pixel, and because that process can introduce aliasing artifacts, it's entirely possible that the downsampled image from one camera is higher quality than the full-sized image from the other. That advantage, though, exists only because the camera itself, or a raw processor, has access to the original undemosaiced pixel data.
Or does lowering image size imply a loss of quality in addition to lowering pixel count?
I’d do side-by-side comparisons.
Many thanks for your insights!
 
Very interesting, thanks!
 
