FingerPainter
Forum Pro
- Messages
- 12,593
- Solutions
- 37
- Reaction score
- 13,643
> No, it will be less than 1.4x as high. How much less depends on how sharp the lens is. An infinitely sharp lens would produce images 1.4 times as sharp from a 60MP sensor compared to a 30MP sensor. A not particularly sharp lens might not see very much improvement at all.

> Sure, but when reading tests on LensTip, Lensrentals etc. it seems that many modern lenses can resolve much more than 60MP. So when they test a lens with a 30MP sensor and then a 60MP one, the lw/ph number will be significantly (2x?) higher on the latter.

I think you are forgetting some things.

> Well, I just did some calculations. If shooting with a 61MP camera, you can crop 2x and be left with 16MP, or 2.5x and still have 10MP. For me, 16MP is great and 10MP is decent enough for most of my needs (small or medium sized prints, or covering a full 27 inch 4K monitor).

When you crop from a 20 mm to a 30 mm equivalent field of view, you throw away 25 of 45 MP.
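The crop arithmetic behind those figures can be sketched in a few lines. The helper below is mine, not from the thread; the only assumption is that pixel count falls with the square of the linear crop factor, since both width and height shrink:

```python
def cropped_mp(sensor_mp: float, crop_factor: float) -> float:
    """Megapixels remaining after a linear crop at fixed pixel pitch."""
    return sensor_mp / crop_factor**2

# The figures quoted in the thread:
print(cropped_mp(61, 2.0))        # 2x crop of 61MP -> 15.25 (the "16MP")
print(cropped_mp(61, 2.5))        # 2.5x crop -> 9.76 (the "10MP")
# Cropping a 45MP frame from 20mm to 30mm equivalent (crop factor 1.5):
print(45 - cropped_mp(45, 1.5))   # 25.0 MP thrown away, 20MP kept
```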
> I don't think the IQ is that much better with a prime.

> And what about the gap from 30 to 40?

> Good luck and good light.
The sharpness of an image doesn't depend just upon the number of pixels on the sensor. It also depends on the lens resolution.
I don't doubt you'd be happy with 16MP if the lens was OK. But when you crop to 16MP from 61MP (actually to 15.25MP) you are also throwing away lens resolution.
Image resolution is measured linearly, in lp/ph. It is the result of digitising the analog image cast by the lens. Lens resolution is often measured in lp/mm.

> If resolution stayed the same no matter how many pixels your sensor had, then there would be no reason to have more megapixels.
>
> But I get your point. I wonder how much lens resolution you lose, though (if you use very high quality glass). Surely it cannot be 75%.
When you crop from 61MP to 15.25MP you are cutting out 1/2 the horizontal and 1/2 the vertical dimension of the image. So the number of line pairs cast by the lens on the retained part of the image is cut in half. It is digitized at the same rate (the pixel pitch didn't change) so you should expect to have 1/2 the lp/ph in the cropped image compared to what you had in the uncropped image.
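A rough numeric sketch of that linear-resolution argument (the 100 lp/mm figure and 24 mm full-frame picture height are my illustrative assumptions, not numbers from the thread):

```python
def cropped_lp_ph(lp_ph: float, crop_factor: float) -> float:
    """Linear image resolution after a crop at fixed pixel pitch:
    lp/ph scales with the retained picture height."""
    return lp_ph / crop_factor

# A lens delivering, say, 100 lp/mm over a 24 mm full-frame picture height
# casts 100 * 24 = 2400 lp/ph on the uncropped image...
full_frame_lp_ph = 100 * 24
# ...but a 2x centre crop retains only half of that linear resolution:
print(cropped_lp_ph(full_frame_lp_ph, 2.0))  # 1200.0
```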
However, if you kept a centre crop, and the lens is sharper in the centre than at the edges, you'd retain a bit more than half the resolution, perhaps 55%. In contrast, using a zoom and not cropping would probably yield something closer to 80-90%. Your suggested approach therefore loses roughly 2.5 to 4.5 times as much resolution.
> Theoretically, it should be the same as using a 2x teleconverter.

Why? A 2x teleconverter gets half the lens resolution cast on the sensor, but it doesn't halve the pixel rows as well.

> A smaller area of the glass has to supply a larger number of pixels with information.

In one case the information per pixel changed; in the other case it didn't.

> Please correct me if I'm wrong.

As above.
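The crop-versus-teleconverter distinction can be put in numbers. The sensor dimensions below (9504 x 6336 for a 61MP full-frame sensor) are my assumption for illustration:

```python
# Both a 2x centre crop and a 2x teleconverter deliver the same narrower
# field of view, but they sample it at different rates.
rows_full = 6336               # pixel rows on a 61MP (9504 x 6336) sensor
rows_crop = rows_full // 2     # a 2x crop keeps half the rows: 3168
rows_tc = rows_full            # a 2x TC spreads the same view over all 6336 rows

print(rows_tc / rows_crop)     # 2.0 -- the TC samples the scene twice as finely
```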
> So cropping to 15MP doesn't give you as sharp an image as a photo from a 15MP camera with a good lens. Instead it's like taking a 15MP photo with a really bad lens, much worse than your zoom would have been.

> This makes sense on one level. But wouldn't this only be true if you downsized the original (non-cropped) version to the same size as the crop? Let's say the original image is 10k x 6.1k pixels, and you print it at full size. Then you take a pair of scissors and cut away 3/4 of the picture, leaving only 10k x 1.6k intact. Where is the additional noise coming from?

Why would you assume there is additional noise? An image doesn't look noisier because it has more noise; it looks noisier because it has a lower signal-to-noise ratio (SNR). An image with a lower SNR almost always has less noise than an image with a higher SNR, but it also has an even smaller signal than the image with the higher SNR. If image A has half the SNR of image B, then image A will look noisier than image B. Image A will probably have about half the noise of image B and about 1/4 the signal of image B. That's what you get when you crop to 1/4 of a 61MP image.
The noisiness of an image depends on how much light is captured in it. If you crop your 61MP image to 15MP, you will have thrown away 3/4 of the captured light. Your cropped image would be as noisy as a 61MP image taken with two stops less exposure.
Or to put it another way, noise isn't something separate that exists alongside the pixel values. Instead, it is a relationship between pixel values. For any given pixel it is a relationship with all the other pixels in what you are looking at. If you change the number of pixels looked at, you change the set of relationships a pixel has, even though you don't change the value of the pixel itself.

As you increase the number of pixels included in an image, the effect of the noise of any one pixel (or of any specific set of pixels) on the noisiness of the whole image is reduced. The effect of the noisiness of a small set is drowned out by the greater signal and total noise of the larger set.
The noisiness of any sub-part of an image is always greater than the noisiness of the whole image, even though the pixels in the sub-part do not change. How noisy a viewed image looks depends on the SNR of what you are looking at, which depends primarily on how much light was captured in what you are looking at.

> The part you leave hanging is still untouched.
A sub-part of an image has less light captured in it than was captured in the whole image, so the sub-part has a lower SNR, so the sub-part looks noisier than the whole image.
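In the shot-noise-limited case this can be sketched numerically. The photon counts below are purely illustrative assumptions:

```python
import math

# Shot noise: signal ~ N captured photons, noise ~ sqrt(N), so SNR ~ sqrt(N).
def snr(photons: float) -> float:
    return photons / math.sqrt(photons)

whole = snr(1_000_000)   # light captured in the whole frame
part = snr(250_000)      # a sub-part holding 1/4 of that light

print(part / whole)      # 0.5 -- the sub-part has half the SNR of the whole,
                         # matching the "two stops less exposure" comparison
```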