bobn2 wrote
Depends on the camera. The standard for mFT is 16-24 MP; the standard for FF is 20-24 MP. There are FF cameras available with 30-50 MP, but they are all expensive. Sure, they can capture detail that mFT can't. The mFT lenses also exploit the smaller image circle, which relaxes the size constraints a little, and they are optically very good indeed, so for normal purposes there isn't much, if any, deficit due to lens quality.
--
Bob.
DARK IN HERE, ISN'T IT?
So if Micro 4/3 can get up to the same number of pixels as a FF camera, why does anyone buy the latter? And what are they doing with all that sensor size if not putting more pixels on it? Are the pixels bigger? Further apart?
Forgive my confusion, but this seems very counterintuitive!
Well, the advantage of FF is a wider shooting envelope than mFT. As I said somewhere up the thread, image quality is mainly about how much light gets into the picture, not the size of the frame.
But for a given intensity of light falling on the sensor, the only reason the FF sensor receives more total light than an m4/3 sensor is that it is bigger. In fact, total light and sensor area are directly proportional.
So why does that result in higher image quality in the picture? I mean to say, what is the scientific explanation for that then?
You don't need a scientific explanation if you have done film printing.
If you are wondering whether that makes a larger sensor less noisy, I do not think so. The grain of MF film is exactly the same as that of FF film.
BTW, the science part is in the digital signal processing (DSP).
Exactly so. Large negatives produce finer apparent grain and generally better image quality than small negatives because less enlargement is required.
The total quantity of light the negative receives plays no part and is just an interesting, but irrelevant, correlation.
That is in fact completely false. Imagine comparing a 35mm image with a 645 image using the same exposure and emulsion, printed the same size. The 645 image will look to have higher quality due to a greater number of smaller grains. We know the grains aren't really smaller, they just look so, because they are enlarged less, but there really is a bigger number of them. How did we come to have a bigger number of exposed grains? Because there was more light to expose them.
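If it helps to see the arithmetic, here is a rough Python sketch of that comparison. The grain density, exposed fraction and frame dimensions are made-up or nominal figures, not measurements; only the proportionality to frame area matters.

```python
import numpy as np

# Same emulsion, same exposure, two frame sizes. The grain density and the
# exposed fraction are made-up illustrative numbers.

rng = np.random.default_rng(0)

GRAINS_PER_MM2   = 50_000    # assumed developable grains per mm^2 of emulsion
EXPOSED_FRACTION = 0.5       # same exposure => same fraction exposed per unit area

area_135 = 36.0 * 24.0       # 35mm frame, mm^2
area_645 = 56.0 * 41.5       # nominal 6x4.5 frame, mm^2

def exposed_grains(area_mm2):
    # Each grain is exposed independently with the same probability, so the
    # expected count scales with the number of grains, i.e. with area.
    n_grains = int(GRAINS_PER_MM2 * area_mm2)
    return int(rng.binomial(n_grains, EXPOSED_FRACTION))

n_135 = exposed_grains(area_135)
n_645 = exposed_grains(area_645)

print(f"exposed grains, 35mm frame: {n_135:,}")
print(f"exposed grains, 645 frame : {n_645:,}")
print(f"count ratio {n_645 / n_135:.2f} vs area ratio {area_645 / area_135:.2f}")
```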
You say that what I said is completely false and then say exactly the same thing using different words. Why are you being deliberately argumentative?
What you said was: "The total quantity of light the negative receives plays no part and is just an interesting, but irrelevant, correlation." (emboldened above). That is, in fact, completely false, as I illustrated.
But, more importantly, surely the reason we have a bigger number of grains on the bigger negative is because it is bigger?
Well, duh, as the saying goes.
In fact, if you actually do such a comparison in practice (as I have done many times), you find that the number of grains per unit area and the size of the grains are identical. Where does this business of more light to expose them come from?
How do you expose more grains without having more light? To expose a grain means incidence of some number of photons (the number depending on the efficiency of the emulsion). The photons don't get re-used. Their energy is dissipated freeing photoelectrons. So, to expose more grains you need more photons. If you didn't have more photons, you wouldn't expose any more grains.
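For anyone who prefers numbers, a back-of-envelope sketch; the photons-per-grain figure and the grain counts are purely illustrative assumptions, since real emulsions vary widely in efficiency.

```python
# Back-of-envelope sketch of "to expose more grains you need more photons".

PHOTONS_PER_EXPOSED_GRAIN = 10        # assumed average; photons are not re-used

def photons_needed(exposed_grains):
    return exposed_grains * PHOTONS_PER_EXPOSED_GRAIN

grains_135 = 20_000_000               # illustrative exposed-grain count, 35mm
grains_645 = int(grains_135 * 2.7)    # ~2.7x the area => ~2.7x the exposed grains

print(f"photons for the 35mm frame: {photons_needed(grains_135):.1e}")
print(f"photons for the 645 frame : {photons_needed(grains_645):.1e}")
```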
The average light intensity per unit area
tautology
is exactly the same, and the grains are exactly the same size, so where is the connection between total quantity of light and grain size?
Because we are not talking about light per unit area, we are talking about the total light, to expose the total number of grains. Light per unit area times area equals amount of light.
An explanation of your reasoning showing the connection would be most welcome as it seems to be a non-issue.
It was perfectly well explained the first time round. I've tried to explain it even more simply above. Frankly, if you can't work it out now, you're probably never going to understand. It seems a simple concept. Same light per unit area, bigger area gives more light.
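To put some (made-up) numbers on it, a minimal Python sketch, assuming an arbitrary photon density per square millimetre and nominal frame dimensions:

```python
# "Same light per unit area, bigger area gives more light."

PHOTONS_PER_MM2 = 1_000_000   # assumed photons per mm^2 at a given exposure

frames_mm = {
    "Four Thirds": (17.3, 13.0),
    "full frame ": (36.0, 24.0),
    "645 film   ": (56.0, 41.5),
}

for name, (w, h) in frames_mm.items():
    area = w * h
    total = PHOTONS_PER_MM2 * area
    print(f"{name}: area {area:7.1f} mm^2, total light {total:.2e} photons")
```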
In fact, shot noise is not even visible in a photographic print as it occurs at a molecular level and is totally swamped by the physical grains that form the image.
And large sensors produce better image quality for similar reasons, with noise tending to be disproportionately low with larger sensors because they usually have a higher signal-to-noise ratio than smaller sensors. In other words, image noise is caused by the electronics and processing.
This is also completely untrue. Electronic noise is a very small proportion of the total noise in an image.
Really?
Really.
Try taking a photograph of the night sky through a telescope, and then put the lens cap on and make another exposure. If you then examine both frames closely, you will find that the noise in both images is identical, even though absolutely no photons whatsoever are reaching the sensor during the exposure of the blank frame.
The night sky is not at all a typical subject.
That is why in astrophotography we take blank exposures at the same time as the actual photographs and then subtract them from the image.
The noise produced by the camera is constant, even when no photons are striking the sensor at all.
Actually, that is a different reason. Noise, proper noise, will always be different from shot to shot, just because it's random. Subtracting one frame from another will not reduce it. What you're removing with your dark-frame subtraction is 'pattern noise', caused by the sensor and its readout having systematic differences between lines and columns.
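If anyone wants to convince themselves, here is a rough NumPy sketch of that. The noise magnitudes are arbitrary illustrative figures, not measurements of any real sensor.

```python
import numpy as np

# Why dark-frame subtraction removes pattern noise but not random noise.

rng = np.random.default_rng(1)
shape = (400, 600)

# Fixed pattern: systematic per-column offsets, the same in every exposure.
column_offsets = rng.normal(0.0, 5.0, shape[1])
pattern = np.tile(column_offsets, (shape[0], 1))

def read_sensor(signal):
    # Every readout carries the SAME pattern but a FRESH random realisation.
    random_noise = rng.normal(0.0, 3.0, shape)
    return signal + pattern + random_noise

light_frame = read_sensor(signal=100.0)   # exposure of a uniform subject
dark_frame  = read_sensor(signal=0.0)     # lens cap on

corrected = light_frame - dark_frame

print(f"noise before subtraction: {light_frame.std():.2f}")  # pattern + random
print(f"noise after subtraction : {corrected.std():.2f}")    # random only, and
# roughly sqrt(2) times the single-frame random noise, because the dark frame
# adds its own fresh random noise.
```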
Whereas the random fluctuations in the rate of arrival of photons average out during the exposure and are not visible in the final image. Try it for yourself.
As above, the night sky is a highly atypical subject. It's all deep shadow. It's not surprising that what you see is shadow noise. Now here is an experiment for you. Take a picture of the night sky like you would one of a normal subject. Take your camera and point it at the night sky. Set the ISO to the lowest your camera has and take a meter reading using averaged or centre-weighted metering. Now take a photo using the exposure settings given by the meter. Look at the result. Tell me how noisy the night sky is and whether the electronic noise is the dominant noise.
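For the numerically inclined, a sketch of shot noise versus read noise at a metered exposure and in deep shadow. The signal levels (electrons per pixel) and the read-noise figure are assumed illustrative values, not data for any particular camera.

```python
import numpy as np

# Photon shot noise vs electronic read noise at two signal levels.

rng = np.random.default_rng(2)
N = 1_000_000
READ_NOISE_E = 3.0                            # assumed read noise, electrons RMS

for label, mean_e in [("mid-tone, metered exposure", 10_000),
                      ("deep shadow / night sky   ", 20)]:
    shot = rng.poisson(mean_e, N).astype(float)     # shot noise ~ sqrt(mean)
    read = rng.normal(0.0, READ_NOISE_E, N)         # electronic noise
    total = shot + read
    print(f"{label}: shot {shot.std():6.1f} e-, "
          f"read {READ_NOISE_E:.1f} e-, total {total.std():6.1f} e-")
```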
That is why I would really like to see a scientific explanation for the notion that image noise is principally shot noise and directly dependent on the total number of photons that the sensor or film receives during the exposure. I suspect I will wait a long time!
You have been given one many times. You always find a spurious reason to reject it.
I reiterate: if you cannot understand that amount per unit area times area gives total amount, I suspect you're never ever going to understand a 'scientific explanation'.
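For what it's worth, the relationship being asked for fits in a few lines of Python: shot-noise-limited SNR is the square root of the number of photons collected, so at the same exposure roughly 3.8x the area gives roughly 2x the SNR. The photon density is an assumed illustrative figure.

```python
import math

# Shot-noise-limited SNR = sqrt(photons collected).

PHOTONS_PER_MM2 = 1_000_000   # assumed photons per mm^2 at a given exposure

for name, area_mm2 in [("mFT (17.3x13 mm)", 17.3 * 13.0),
                       ("FF  (36x24 mm)  ", 36.0 * 24.0)]:
    n = PHOTONS_PER_MM2 * area_mm2
    print(f"{name}: photons {n:.2e}, shot-noise-limited SNR {math.sqrt(n):.0f}")
```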
--
Bob.
DARK IN HERE, ISN'T IT?