George Sears
Leading Member
Camera makers like to announce new image processors, often alongside higher-pixel-count sensors. The two go together, or do they? Basically, if you need more processing just to make a sensor 'work', that could simply be considered a problem. This is most obvious with noise suppression: these days, the extra 'resolution' gets smeared away in the course of removing the noise. Or maybe whatever a single pixel resolves just isn't very usable any more.
If you take a camera with a large sensor and large pixels, you can see that each pixel gets it just about right. If there is a fine line somewhere, you can see precisely how the line is defined. In shadow, the color remains about right, and things are still relatively crisp.
How hard do you have to process an image as the photosites shrink? Is there any 'real' detail left at the pixel level? How much do they have to massage the data to make it usable? How aggressively do they fight noise, and with what techniques? At some point 'active' processing is the same as making stuff up: 'drawing' an area based on parameters, but without much integrity at the basic pixel level.
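A toy sketch of that trade-off (assuming NumPy and SciPy, with made-up noise and blur levels): a plain Gaussian-blur denoiser is run over a synthetic one-pixel line, and the harder it blurs, the more the residual noise drops, but the more the line fades.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Synthetic "sensor" image: flat gray with a single one-pixel-wide line.
img = np.full((64, 64), 0.2)
img[:, 32] = 1.0

# Stand-in for sensor noise (made-up level).
noisy = img + rng.normal(0.0, 0.1, size=img.shape)

for sigma in (0.0, 0.5, 1.0, 2.0):
    out = gaussian_filter(noisy, sigma)
    residual_noise = out[:, :16].std()                       # noise left in a flat patch
    line_contrast = out[:, 32].mean() - out[:, :16].mean()   # how much of the line survives
    print(f"blur sigma={sigma:.1f}  noise={residual_noise:.3f}  line contrast={line_contrast:.3f}")
```

Real in-camera pipelines are far more sophisticated than a plain blur, but the direction of the trade-off is the same: whatever kills the noise also eats the finest detail.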
The gripe would be that this is essentially stupid. If you use a larger pixel, you get better data from each photosite, and then you don't need so much processing. If you want to process for other effects, fine, but even there you have better data to start with, so better data when you're done. Processing also means boosting darker areas, applying contrast curves, adding selective saturation, and other things that may create pleasing results. A camera company like Canon can claim it has a distinct look, and that, by itself, seems fair in the marketplace. But how is any of this helped by adding pixels, and forcing more processing, even before the picture is massaged to create the Canon look?
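To put 'better data from each photosite' in numbers, here is the standard shot-noise back-of-the-envelope (assuming photon arrivals are Poisson-distributed, the textbook model; the 10,000-photon baseline is made up):

```python
import math

BASE_PHOTONS = 10_000  # hypothetical photons hitting a small photosite

for area_factor in (1, 2, 4):
    n = BASE_PHOTONS * area_factor
    snr = n / math.sqrt(n)  # Poisson: signal N, shot noise sqrt(N), so SNR = sqrt(N)
    print(f"{area_factor}x area: {n:,} photons, SNR {snr:.0f} ({20 * math.log10(snr):.1f} dB)")
```

Quadruple the collecting area and you double the signal-to-noise ratio before any processing ever touches the file.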
Recent reviews seem to indicate that adding pixels to a fixed-size sensor does little to increase real resolution. So when a camera company says it is giving us a lot more pixels and a fancier processor, what are they really saying?
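Some rough numbers behind that (all illustrative; this uses the textbook Airy-disk diameter, 2.44 × wavelength × f-number, as a stand-in for what the lens can actually deliver):

```python
WAVELENGTH_UM = 0.55     # green light, roughly mid-spectrum
F_NUMBER = 8             # a common landscape aperture
SENSOR_WIDTH_UM = 36_000 # full-frame sensor width

airy_um = 2.44 * WAVELENGTH_UM * F_NUMBER  # ~10.7 um diffraction spot

for h_pixels in (6000, 8000, 9500, 12000):  # roughly 24, 43, 60, 96 MP at 3:2
    pitch = SENSOR_WIDTH_UM / h_pixels
    print(f"{h_pixels} px across: pitch {pitch:.1f} um, "
          f"Airy disk covers {airy_um / pitch:.1f} pixels")
```

Once the optical blur spot spans two or three photosites, the extra pixels mostly sample the same blur, and the processor gets more noise, not more detail, to work with.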
Why exactly isn't this an idiot's game?