Re: "Less Megapixels = Better Color"...
Truman Prevatt wrote:
stevo23 wrote:
Truman Prevatt wrote:
Batdude wrote:
...Do you agree with that?
I went to one of my local camera stores, and there is a gentleman there who has been working there for many years and seems to know his stuff. He is in his late 60s or early 70s, I would say.
The guy was talking to a customer, and I was paying some attention to the conversation they were having. The customer asked, "Should I get a camera with a lot of megapixels, and what's the difference?"
To make a long story short, the salesperson asked the customer: "Do you want a lot of resolution or better image quality? The higher the megapixels, the more resolution you will get, with lots of detail. The lower the resolution, the better the image quality will be, with richer color."
The key is the size of the pixels. For the same sensor area - take an APS-C sensor, for example - more MP means smaller pixels. Bigger pixels collect more light, producing more tonal gradation and hence richer colors.
But can we back that up with any data? That is, it sounds good, but can larger pixels actually each express a wider range of values? I don't know if that's still true today.
My take on it is that smaller, denser pixels can reproduce more color gradation. A single pixel only has its one value, so if you have just two of them - only two - you will have a very limited color range. But if you have two billion, you can express finer gradation across the plane of the sensor.
Now put that same pixel count on a larger sensor and the pixels get bigger: a 24 MP APS-C sensor has significantly smaller pixels than a 24 MP FF sensor. On APS-C, 24 MP works out to a pixel with linear dimensions of approximately 3.9 micrometers, while on a FF sensor the pixel size is about 6 micrometers. Put 24 MP on a 44x33 medium format sensor, such as the GFX's, and the pixel size grows to roughly 7.8 micrometers. Bigger pixels mean deeper wells, more light-capturing ability, higher DR and greater tonal and color richness. So the bigger the pixels, the greater the tonal gradation, and the richer the tones and colors. Nothing has changed: people moved from 35 mm to medium format film cameras for better image quality, and from medium format to 4x5 because they wanted even more image quality and richer tones and colors.
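For anyone who wants to check the arithmetic, here is a minimal sketch of the pixel-pitch calculation. The sensor dimensions are nominal values, and square pixels with no gaps are assumed; real photosites have borders and microlenses, so treat the results as approximations.

```python
import math

def pixel_pitch_um(width_mm, height_mm, megapixels):
    """Approximate linear pixel pitch in micrometers, assuming square pixels."""
    area_um2 = (width_mm * 1000.0) * (height_mm * 1000.0)  # sensor area in square micrometers
    return math.sqrt(area_um2 / (megapixels * 1e6))

# Nominal sensor dimensions (mm), all at 24 MP
for name, w, h in [("APS-C (23.5x15.6)", 23.5, 15.6),
                   ("Full frame (36x24)", 36.0, 24.0),
                   ("44x33 MF", 44.0, 33.0)]:
    print(f"{name}: {pixel_pitch_um(w, h, 24):.2f} um")
# -> roughly 3.91 um, 6.00 um and 7.78 um respectively
```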
This old codger is right - nothing really new under the sun. Neither the world nor photography changed with digital sensors.
Deeper wells capture more photons, support deeper bit depth in the ADC, and deliver more dynamic range and less noise - hence more tones. A 14-bit ADC will produce more tones than a 12-bit ADC, and a 16-bit ADC even more.
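The tone counts follow directly from the bit depth - each extra bit doubles the number of discrete code values available per channel:

```python
# Discrete code values per channel for common ADC bit depths.
for bits in (12, 14, 16):
    print(f"{bits}-bit ADC: {2 ** bits:,} levels per channel")
# 12-bit -> 4,096 levels; 14-bit -> 16,384; 16-bit -> 65,536
```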
http://reedhoffmann.com/size-matter-especially-with-pixels.
Sure, you could argue that one could integrate pixels - downsample - but that happens after the conversion from raw to RGB (with the possible exception of the Foveon Q sensor), and hence is not as efficient as a single pixel 4x the size capturing 4x the number of photons. There is a limit, of course, but today we are at the point where all top-end cameras have more than enough MP. Go online and look at some of the images coming out of the 40 and 50 MP Phase One backs, which use a 16-bit depth to capture more tonal gradations. More photons means higher dynamic range and more bits per tone, which translates to finer tonal gradation.
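The efficiency point can be illustrated with a toy simulation: under shot noise alone, one big pixel and four summed small pixels are equivalent, but the small pixels each pay their own read-noise penalty, so the sum comes out noisier. The photon count and read-noise figures below are invented for illustration, not measurements from any real sensor.

```python
import numpy as np

rng = np.random.default_rng(0)
N, read_noise, trials = 1000, 20.0, 100_000  # assumed illustrative values

# One pixel with 4x the area: ~4N photons plus a single dose of read noise.
big = rng.poisson(4 * N, trials) + rng.normal(0, read_noise, trials)

# Four small pixels summed after readout: same photons, four doses of read noise.
small = (rng.poisson(N, (trials, 4)) +
         rng.normal(0, read_noise, (trials, 4))).sum(axis=1)

# Signal-to-noise ratio = mean / standard deviation.
print("big pixel SNR: ", big.mean() / big.std())    # ~60
print("binned sum SNR:", small.mean() / small.std())  # ~54
```

With zero read noise the two SNRs converge, which is why the advantage of big pixels shows up mainly once per-pixel readout costs are counted.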
There have been arguments that 5 micron pixels are optimal.
http://www.clarkvision.com/articles/does.pixel.size.matter/#sensorconstant
The interesting thing is that this is roughly where we are today with the 36-42 MP FF cameras.
However, going back to, say, a Phase One vs. an X-T2 producing the same field-of-view image: a light source on the Phase One is spread over a much larger area of the sensor than on the X-T2, since the sensor is bigger. Hence a 24 MP Phase One back would not only capture more photons per pixel (bigger pixels) but would also deliver the same resolution with less impact from lens issues and diffraction.
A sensor generates an electrical signal, but the pixels themselves do not know whether they are large or small; their job is simply to generate a signal, whether it comes from a larger or smaller photosite. Typically, photosites of the same generation, built on the same silicon process and technology, will generate a cleaner, stronger signal when the pixel is larger. But that only tells part of the story.
An ADC then quantizes this analogue signal into a 12-bit, 14-bit or 16-bit representation, which is processed, filtered and interpreted as an image. Fuji, Sony, Nikon, Panasonic (GH5S), Leica and Canon all use a 14-bit raw format - total overkill for most of today's 8-bit and 10-bit panels - and it is rarely a limiting factor in print either, unless you need to render fine gradations over a large area. In my experience the differences come down to paper, inkset and careful end-to-end color management if one wants to realise the very subtle differences in a large print from a 14-bit versus a 16-bit raw file. Just my 0.02 on that one; somebody else with better eyes may well disagree.
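A minimal sketch of what that quantization step does - an analogue level, normalised here to [0, 1], is mapped onto one of 2^bits integer codes. The mapping below is a simplified linear model, not any particular manufacturer's ADC:

```python
def quantize(level, bits):
    """Map a normalised analogue level in [0, 1] to an integer ADC code."""
    codes = 2 ** bits
    # Clamp so that level == 1.0 maps to the top code rather than overflowing.
    return min(codes - 1, int(level * codes))

for bits in (12, 14, 16):
    print(f"{bits}-bit code for a half-scale signal: {quantize(0.5, bits)}")
# 12-bit -> 2048, 14-bit -> 8192, 16-bit -> 32768
```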
I am NOT saying that larger pixels do not equal better dynamic range, etc. However, "larger therefore better" typically only holds true within a single sensor generation. A case in point: compare the low-light performance and dynamic range of an original 5D Mark I against an X-T3. The X-T3, despite much smaller pixels and higher resolution over a much smaller surface area, performs better at both base and high ISOs by a substantial margin.
The quantum efficiency of different sensor designs needs to be taken into consideration.
Furthermore, "color", "tonality" and "depth" are not automatic byproducts of a larger pixel, as that view fails to take into account the image processing (or "color science") of each manufacturer. This is the secret sauce that makes Fuji colors look like Fuji colors and Canon colors look like Canon colors in JPEG, etc.
Case in point: Fuji used the same underlying Sony sensor in the X-T2 as in the A6500 (albeit with a different CFA on top), yet the two produce very different output - both in raw as interpreted in Lightroom, where the rubber meets the road on a computer screen, and in how JPEGs are processed and written to file.
Finally, companies such as Apple and, most recently, Google are at the forefront of AI and machine learning, using visual-learning techniques and smart signal processing to push past traditional sensor-size constraints, letting the tiny sensors in phones accomplish much more than the laws of physics relating to film plane and optics would suggest (e.g. computational bokeh).
HDR+ on the Google Pixel 3 is a fascinating approach to extending dynamic range in stills and video. Never mind the night-vision demonstrations - essentially the sensor "sees" somewhat like the human eye does in low light, identifying shapes and objects and using known information about them to intelligently fill in the blanks, much the same way human vision works, with the brain filling in the gaps.
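One core piece of the multi-frame idea is easy to sketch: merging many short, noisy exposures recovers a cleaner estimate of the scene than any single frame can. Real pipelines like HDR+ also align frames and handle motion; the scene values and photon counts below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = np.linspace(0.02, 1.0, 8)                    # "true" patch luminances (assumed)
frames = rng.poisson(scene * 200, (16, 8)) / 200.0   # 16 noisy short exposures

merged = frames.mean(axis=0)                         # naive align-free merge

print("single-frame mean error:", np.abs(frames[0] - scene).mean())
print("merged mean error:      ", np.abs(merged - scene).mean())
```

Averaging 16 frames cuts the shot noise by roughly a factor of four, which is why the merged error comes out well below the single-frame error.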
Heck, the Pentax K-1 can produce nicer landscape photos than many medium format cameras I've seen when it can leverage its pixel-shift capability.
It's interesting that the smaller sensor in the GH5 gave Panasonic a two-year head start over any other camera in processing 4:2:2 10-bit 400 Mbps footage, with 10-bit 4:2:0 internal - footage that grades significantly better than footage from the larger-pixel A7S II and A7 III.
The heat and space savings from not having to run a larger sensor in a small body have opened up creative avenues, making room for a tonne of processing power and large heat sinks - avenues that are still not possible with large-pixel medium format bodies today.
I guess the takeaway of my opinion (keyword: opinion!) is that the sensor itself today is not, by any stretch, the limiting factor in the creative process; it is only one part of the overall imaging pipeline. Right tool for the right job.