Forum Pro • Posts: 16,732
Re: "Less Megapixels = Better Color"...
Truman Prevatt wrote:
stevo23 wrote:
Truman Prevatt wrote:
Batdude wrote:
...Do you agree with that?
I went to one of my local camera stores, and there is a gentleman who has been working there for many years and seems to know his stuff. He is in his late 60s or early 70s, I would say.
The guy was talking to a customer and I was kind of paying attention to the type of conversation they were having. The customer asked "Should I get a camera with a lot of megapixels and what's the difference?"
To make it really short, the salesperson asked the customer, "Do you want a lot of resolution or better image quality? The higher the megapixels, the more resolution you will get, with lots of detail. The lower the resolution, the better the image quality will be, with richer color."
The key is the size of the pixels. For the same sensor area - take APS-C, for example - the more MP, the smaller each pixel. Bigger pixels collect more light; they produce more tonal gradation and hence richer colors.
But can we back that up with any data? I.e., it sounds good, but do larger pixels actually each have the ability to express a wider range of values? I don't know if that's true today.
My take on it is that smaller, denser pixels can reproduce more color gradation. A single pixel is only going to have its one value, so if you have two of them - only two - you will have a very limited color range. But if you have two billion, you can express much finer gradation across the plane of the sensor.
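That spatial-gradation idea can be illustrated with a toy dithering experiment: one coarsely quantised pixel can only report a handful of levels, but averaging many noisy pixels recovers the in-between tones. A sketch with made-up numbers, using uniform noise one quantisation step wide as an idealised dither:

```python
import random

random.seed(0)

true_level = 0.3   # the tone we are trying to record
step = 1.0         # brutally coarse quantiser: each pixel outputs only 0 or 1

# One noiseless pixel: the intermediate tone is simply lost.
one_pixel = round(true_level / step) * step   # -> 0.0

# Many pixels whose noise spans one quantisation step act as a dither:
# the *average* of their coarse outputs converges on the true tone.
samples = [
    round((true_level + random.uniform(-0.5, 0.5)) / step) * step
    for _ in range(100_000)
]
avg = sum(samples) / len(samples)
print(one_pixel, f"{avg:.2f}")   # single pixel reports 0; the average lands near 0.30
```

This only shows that pixel count can buy tonal gradation across an area; it says nothing yet about the noise cost, which is the other half of the argument below.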
Now if you put that same number of MP on a FF sensor, the pixels will be bigger; e.g. a 24 MP APS-C sensor has significantly smaller pixels than a 24 MP FF sensor. On APS-C, 24 MP equates to a pixel with linear dimensions of approximately 3.9 micrometres, while on a FF sensor the pixel pitch will be 6 micrometres. If I put 24 MP on a 44x33 medium format sensor such as the GFX, the pixel pitch would be about 7.3 micrometres. Bigger pixels, deeper wells, more light-capturing ability, higher DR, and greater tonal and color richness. So the bigger the pixels, the greater the tonal gradation, hence richer tones and richer colors. Nothing has changed: people went from 35 mm to medium format film cameras for better image quality, and from medium format to 4x5 when they wanted even more image quality and richer tones and colors.
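The pixel-pitch arithmetic above is easy to check. A rough sketch, assuming a 6000 x 4000 grid for 24 MP and nominal sensor widths (real sensors vary by a few tenths of a millimetre):

```python
# Approximate pixel pitch for a 24 MP (6000 x 4000) sensor at three sensor sizes.
# Sensor widths are nominal; actual dimensions differ slightly by model.

def pixel_pitch_um(sensor_width_mm, pixels_across):
    """Linear pixel pitch in micrometres."""
    return sensor_width_mm / pixels_across * 1000  # mm -> um

for name, width_mm in [("APS-C", 23.5), ("Full frame", 36.0), ("44x33 MF", 44.0)]:
    print(f"{name}: {pixel_pitch_um(width_mm, 6000):.2f} um")
# APS-C ~3.9 um, full frame 6.0 um, 44x33 ~7.3 um -- matching the figures above.
```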
This old codger is right - nothing really new under the sun. Neither the world nor photography changed with the arrival of digital sensors.
Deeper wells will capture more photons, support deeper bit depth in the ADC, have more dynamic range, and produce less noise. Hence more tones. A 14-bit ADC will produce more tones than a 12-bit ADC, and a 16-bit one even more.
http://reedhoffmann.com/size-matter-especially-with-pixels.
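The tone-count claim is just powers of two; each extra ADC bit doubles the number of distinct raw levels per channel:

```python
# Distinct raw levels per channel at common ADC bit depths.
for bits in (12, 14, 16):
    print(f"{bits}-bit ADC: {2**bits:,} levels")
# 12-bit: 4,096 levels; 14-bit: 16,384; 16-bit: 65,536
```

Whether those extra levels are visible in the final image is a separate question, taken up below.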
Sure, you could make an argument that one could integrate pixels - downsample - but that happens after the conversion from raw to RGB (with the possible exception of the Foveon Q sensor), hence it would not be as efficient as a single pixel 4x the size capturing 4x the number of photons.
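The "less efficient" claim can be sketched numerically. Under shot noise alone, summing four small pixels collects the same total photons as one pixel of 4x the area, so the difference comes entirely from read noise, which is added once per pixel read. A toy model (the electron counts and the 3 e- read noise are made-up illustrative values):

```python
import math

def snr(signal_e, read_noise_e, n_pixels=1):
    """SNR of n_pixels summed, each receiving signal_e electrons of signal
    and read_noise_e electrons RMS read noise (shot noise = sqrt(signal))."""
    total_signal = n_pixels * signal_e
    total_noise = math.sqrt(total_signal + n_pixels * read_noise_e**2)
    return total_signal / total_noise

# One big pixel capturing 4000 e- vs. four small pixels capturing 1000 e- each,
# all with 3 e- read noise (illustrative numbers).
big = snr(4000, 3)                  # one read -> one dose of read noise
binned = snr(1000, 3, n_pixels=4)   # four reads -> four doses of read noise
print(f"big pixel SNR:   {big:.1f}")
print(f"4 binned pixels: {binned:.1f}")
# The binned result is slightly lower -- the gap is read noise, not photon capture.
```

With read noise set to zero the two come out identical, which is why the size of the penalty depends on how well-controlled read noise is on a given sensor.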
Please explain why this is 'less efficient'.
Besides, you don't need to downsample. You just need to look at the images at the same size. Noise isn't assessed at the pixel level; it's assessed at the size at which the image is actually viewed.
It's not pixel noise that matters, it's sensor noise. And this is overwhelmingly based on sensor size, not pixel size.
https://www.dxomark.com/Cameras/Compare/Side-by-side/Sony-A7R-III-versus-Sony-a9___1187_1162
That's comparing the A9 and A7Riii at the same image size. If anything, the A7Riii has more DR at low ISO, and better colour depth.
You are trying to argue that a 1 cylinder engine produces more horsepower than a 4 cylinder engine because the cylinder is bigger.
Sure, there is a limit - but today we are at the point where all top-end cameras have more than enough MP. Go online and look at some of the images coming out of the 40 and 50 MP Phase One backs, which have a bit depth of 16 bits to capture more tonal gradations. The more photons, the higher the dynamic range, the more bits per tone - which translates to tonal gradation.
Noise is already far higher than the quantisation step except at very low signals. Bit depth has very little influence on tonal gradations.
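That claim is easy to sanity-check: convert shot noise into ADC counts (DN) and compare it with the 1 DN quantisation step. The full-well capacity and bit depth below are illustrative values, not any specific camera:

```python
import math

# Toy sensor: 60,000 e- full well read out through a 14-bit ADC.
full_well_e = 60_000
bits = 14
gain_e_per_dn = full_well_e / 2**bits   # ~3.66 electrons per DN

for signal_e in (100, 1_000, 30_000):
    shot_noise_dn = math.sqrt(signal_e) / gain_e_per_dn
    print(f"{signal_e:>6} e- signal: shot noise ~{shot_noise_dn:.1f} DN (step = 1 DN)")
# Even at a faint 100 e- signal the shot noise spans several quantisation steps,
# so extra ADC bits add little visible tonal gradation except in the deepest shadows.
```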
There have been arguments that 5 micron pixels are optimal.
http://www.clarkvision.com/articles/does.pixel.size.matter/#sensorconstant
Somewhat out of date, but on the dynamic range issue, yes - larger pixels should produce more DR at high ISO, because there is less read noise per pixel. But most of them actually have worse DR at low ISO.
DR only involves read noise, and read noise these days is far better controlled than it was when that article was written. Less than 1 bit on a 14-bit ADC is not uncommon. It has very little effect on colour and tonal range at most ISOs.
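Per-pixel engineering dynamic range is often quoted in stops as log2(full well / read noise). With read noise down near or below 1 DN, the ceiling sits close to the ADC itself. A sketch with illustrative numbers:

```python
import math

def dr_stops(full_well_e, read_noise_e):
    """Per-pixel engineering dynamic range in stops (EV)."""
    return math.log2(full_well_e / read_noise_e)

# Illustrative modern sensor: 60,000 e- full well, 1.5 e- read noise.
print(f"{dr_stops(60_000, 1.5):.1f} stops")   # ~15.3 stops
# With read noise this low, a 14-bit ADC (14 stops of code range at unity gain)
# can become the limiting factor rather than the pixel itself.
```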
The interesting thing is that that's where we are today with 36-42 MP FF cameras.
However, going back to, say, the Phase One vs. an X-T2 producing the same field-of-view image: a light source on the Phase One is spread over a much larger area of the sensor than on the X-T2, since the sensor is bigger. Hence a 24 MP back on the Phase One would not only collect more photons per pixel (bigger pixels) but would also deliver the same resolution with less impact from lens flaws and diffraction.
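The diffraction point can be made concrete with the Airy-disk diameter, roughly 2.44 * lambda * N (lambda = wavelength, N = f-number), compared with pixel pitch. Assuming green light at 0.55 um:

```python
# Airy disk diameter vs. pixel pitch for green light at a few apertures.
WAVELENGTH_UM = 0.55  # assumed mid-visible (green) wavelength

def airy_diameter_um(f_number, wavelength_um=WAVELENGTH_UM):
    """Diameter to the first minimum of the Airy pattern: 2.44 * lambda * N."""
    return 2.44 * wavelength_um * f_number

for f in (4, 8, 16):
    print(f"f/{f}: Airy disk ~{airy_diameter_um(f):.1f} um")
# f/8 gives ~10.7 um -- already larger than a 3.9 um APS-C pixel, while the
# bigger sensor needs less enlargement, so diffraction bites later in practice.
```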
There is also an argument that oversampling would be highly beneficial and would eliminate false detail and colour moire. Let the lens limit the system, not the sensor.
And if you don't believe me, go onto the tech forum and ask Eric Fossum.
-- hide signature --
Reporter: "Mr Gandhi, what do you think of Western Civilisation?"
Mahatma Gandhi: "I think it would be a very good idea!"