Micro 4/3

bobn2 wrote

Depends on the camera. The standard for mFT is 16-24 MP; the standard for FF is 20-24 MP. There are FF cameras available with 30-50 MP, but they are all expensive cameras. Sure, they can capture detail that mFT can't. The mFT lenses also benefit from the smaller image circle, which relaxes size constraints a little, and they are optically very good indeed, so for normal purposes there isn't much, if any, deficit due to lens quality.
 
I have just discovered there is an even smaller sensor system than APS-C, called Micro 4/3. Apparently, this dispenses with the prism and mirror altogether and uses an electronic viewfinder only, to save size and weight.
First of all, it almost has to have an EVF, because at this size a conventional OVF would be very small and dark.

Note that even APS/DX OVFs are smaller and darker than what we enjoyed back in film days.
For sure. As you almost certainly remember, the original incarnation of micro 4/3 was simply 4/3, which was a DSLR format.
And in turn, FT was based on the old 110 film format. Kodak decided to make CCD sensors based on their three major film formats: 135, APS-C and 110. The 135 sensors were prohibitively expensive for everyday photography, so manufacturers went for smaller sensors. Kodak and those that followed used the APS-C size and put it in their 135 bodies. Olympus, who had exited the SLR market some years earlier and had no SLR bodies to reuse, built a completely new system around the 110-size sensor.
 
I wouldn't worry too much; the Canon system you've bought into is very capable. And there are things you can do with FF digital cameras that you couldn't do with mFT, but both far outperform the 35mm film cameras you're used to. As I said, apart from the shallowest achievable DoF, mFT outperforms those film cameras.

--
Bob.
DARK IN HERE, ISN'T IT?
thanks.

a lot of people keep mentioning that FF cameras get better DoF. But I don't get why. Surely DoF is about aperture?
If 'better' means 'shallower', yes. DoF in your final viewed image is about aperture diameter and the angular uncertainty, relative to the overall angle of view, that you're prepared to accept as 'sharp'. Thus any images with the same angle of view and the same aperture diameter will display the same depth of field. On mFT you need half the focal length to get the same angle of view, and thus half the f-number for the same aperture diameter.
What he means is that with a full-frame camera you have to close the aperture down by two stops to get the same depth of field as with an M4/3 camera, all else being equal.
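To put numbers on that, here is a minimal sketch in Python (the focal lengths and f-numbers are just illustrative):

```python
# Sketch of the aperture-diameter argument above.
# Assumption: same angle of view and same viewing conditions, so equal
# entrance-pupil diameters imply (to a first approximation) equal DoF.

def pupil_diameter_mm(focal_length_mm, f_number):
    """Entrance pupil diameter = focal length / f-number."""
    return focal_length_mm / f_number

# mFT needs half the FF focal length for the same angle of view.
mft = pupil_diameter_mm(25, 1.4)  # 25mm f/1.4 on mFT
ff = pupil_diameter_mm(50, 2.8)   # 50mm f/2.8 on FF

print(f"mFT 25mm f/1.4 pupil: {mft:.1f} mm")  # ~17.9 mm
print(f"FF  50mm f/2.8 pupil: {ff:.1f} mm")   # ~17.9 mm
# Same diameter -> same DoF, hence the 'two stops' rule of thumb.
```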
--
Bob.
DARK IN HERE, ISN'T IT?
 
So bigger sensors receive more total light than smaller sensors and that results in less noise in the picture? Why is that then? I mean to say, what is the scientific explanation for that then?
Exposure is defined as the amount of light per unit area of film or sensor. So you can use the same camera settings — shutter speed and aperture value — on any camera, and get the same exposure.

So it makes sense that you'd get more total light with a larger sensor camera, under the same lighting conditions and settings, than you'd get with a smaller-sensor camera.
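As a back-of-envelope check (sensor dimensions here are nominal values, assumed for illustration):

```python
# Same exposure (light per unit area) on two sensor sizes:
# total light scales with sensor area.
ff_area = 36.0 * 24.0    # full frame, ~864 mm^2 (nominal)
mft_area = 17.3 * 13.0   # Micro 4/3, ~225 mm^2 (nominal)

exposure = 1.0           # arbitrary light per mm^2, same settings on both

print(f"FF total light  : {exposure * ff_area:.0f} units")
print(f"mFT total light : {exposure * mft_area:.0f} units")
print(f"ratio           : {ff_area / mft_area:.1f}x")  # ~3.8x, roughly 2 stops
```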

Under typical conditions, most of the noise in a camera image is photon shot noise: the natural variation in the number of photons collected by the sensor, even when the light intensity is completely uniform. That variation is an intrinsic property of light, a consequence of quantum mechanics.
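You can watch shot noise fall out of pure counting statistics with a quick simulation (the numbers are illustrative only):

```python
import numpy as np

# Perfectly uniform illumination: every pixel expects the same mean
# photon count, yet individual counts still vary. That variation is
# shot noise, and its standard deviation is ~sqrt(mean).
rng = np.random.default_rng(0)
mean_photons = 1000                       # assumed mean count per pixel
counts = rng.poisson(mean_photons, size=1_000_000)

print(counts.mean())            # ~1000
print(counts.std())             # ~31.6
print(np.sqrt(mean_photons))    # 31.62..., matching the measured spread
```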
Really? So you will presumably have a link to the research that established that?
So, you are carrying on with your usual mode of trolling. The research he would have to link is very old, this is basic text-book stuff.
Why so offensive? Rather rude. Incidentally, can you name a text book that explains why, under typical conditions, most of the noise that comes from a camera is due to photon shot noise? I really would be most grateful if you would be so kind as to do so, as I have been quite unable to find one; yet if it is basic text-book stuff, such books must presumably be readily available.
Perhaps a good way of going about it is, rather than just grouping it all as 'that', to itemise point by point the parts which you wish to dispute,
I am not disputing anything. I am asking for information.
then we can establish sources which might convince you. I say 'might' because mostly they won't be 'research', because we're going through stuff getting on for 100 years old.
Really? I did not realise digital cameras went that far back - or electronic cameras of any kind for that matter. You learn something new every day, don't you? You don't have any links to articles on the subject, or the names of text books on the subject, do you? That would be most helpful.
So, for instance, here is the link establishing the phenomenon of shot noise.

http://onlinelibrary.wiley.com/doi/...ionid=9B4FA57C8730157DFC82EEC1864BB76F.f03t04

--
Bob.
DARK IN HERE, ISN'T IT?
 
The sensel pitch of the smaller sensor is smaller; the sensels of larger sensors are larger (a 24 MP FF sensor has 6x6 micron sensels, while a Nikon crop sensor of similar resolution has only 3.92x3.92 micron sensels).
While this is often true, it doesn't have much relevance to anything.
Not even to the signal-to-noise ratio of the sensels and sensor? Why so?
--
Bob.
DARK IN HERE, ISN'T IT?
 
Really? So you will presumably have a link to the research that established that? I would be most grateful if you would be kind enough to post it here, as in my practical experience most of the noise that comes from a camera is electronic noise, totally dependent on the signal-to-noise ratio of the sensor and processing electronics, which swamps any shot noise.

That is why in astrophotography we take blank exposures at the same time as the actual photographs and then subtract them from the image. The noise produced by the camera is constant, even when no photons are striking the sensor at all, whereas the random fluctuations in the rate of arrival of photons average out during the exposure and are not visible in the final image. Try it for yourself.
The best sensors these days have an average read noise of about two electrons or lower; the worst are maybe ten times that. Of course, older cameras tend to have much higher read noise.

If a sensor has a full well capacity of, say, 60,000 electrons, the shot noise at saturation is the square root of that value, or about 245 electrons, which overwhelms the read noise; even for a well-exposed middle gray tone, the shot noise will be at least 100 electrons on average. Relative to the light actually available, the shot noise is worse still, as most camera sensors have a quantum efficiency considerably less than 100%: they don't absorb 100% of the light, which is a good thing if you want color photography.
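Since independent noise sources add in quadrature, those figures combine directly. A sketch using the numbers above (2 e- read noise, ~18% of a 60,000 e- full well for middle gray) shows how little read noise matters outside deep shadow:

```python
import math

def total_noise(signal_e, read_noise_e):
    # shot noise = sqrt(signal); add independent sources in quadrature
    return math.sqrt(signal_e + read_noise_e ** 2)

read_noise = 2.0  # electrons, a good modern sensor (figure from above)

# middle gray (~18% of a 60,000 e- full well), a shadow, a deep shadow
for signal in (10_800, 1_000, 25):
    n = total_noise(signal, read_noise)
    print(f"signal {signal:>6} e-: shot {math.sqrt(signal):6.1f} e-, "
          f"total {n:6.1f} e-, SNR {signal / n:6.1f}")
# Read noise only becomes noticeable in the deepest shadows.
```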

Quality terrestrial photography generally tries to keep the image well above the read noise floor.
So much so that an image taken in total darkness is composed of nothing but noise, even though no photons have been received at all, which means the noise must be generated by the camera and is not shot noise. Try it for yourself.
Right. The read noise floor limits the dynamic range of a camera. But most folks don't operate their cameras in that range: astronomers do of course, and I'll use dark frame subtraction when I'm taking long exposures at night, but the majority of photography is well within the range where shot noise predominates.
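For anyone curious, the dark frame subtraction I mentioned is just per-pixel arithmetic; a minimal sketch (the arrays here are placeholders standing in for real raw frames):

```python
import numpy as np

def dark_frame_subtract(light, dark):
    """Remove the fixed-pattern component recorded with the cap on.

    This cancels systematic offsets (hot pixels, banding). The truly
    random noise differs from frame to frame and is not removed.
    """
    corrected = light.astype(np.float64) - dark.astype(np.float64)
    return np.clip(corrected, 0.0, None)

# Placeholder frames standing in for real raw data:
rng = np.random.default_rng(1)
light = rng.poisson(50, size=(4, 4)) + 3   # toy scene counts plus a fixed offset of 3
dark = np.full((4, 4), 3)                  # dark frame records that same offset
print(dark_frame_subtract(light, dark))
```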

--
http://therefractedlight.blogspot.com
 
So, you are carrying on with your usual mode of trolling. The research he would have to link is very old, this is basic text-book stuff.
Why so offensive? Rather rude. Incidentally, can you name a text book that explains why, under typical conditions, most of the noise that comes from a camera is due to photon shot noise? I really would be most grateful if you would be so kind as to do so, as I have been quite unable to find one; yet if it is basic text-book stuff, such books must presumably be readily available.


You can calculate the relative amounts of shot versus read noise from the data found on this site:

http://www.sensorgen.info

then we can establish sources which might convince you. I say 'might' because mostly they won't be 'research', because we're going through stuff getting on for 100 years old.
Really? I did not realise digital cameras went that far back - or electronic cameras of any kind for that matter. You learn something new every day, don't you? You don't have any links to articles on the subject, or the names of text books on the subject, do you? That would be most helpful.
Single-pixel, single-bit digital cameras have been around for a long time (they are called optical proximity switches), and Albert Einstein got the Nobel Prize in Physics for an article he wrote on this photoelectric effect way back in 1905. Basically, he interpreted the experimental data as arising from the quantum properties of light, which leads immediately to the statistics of shot noise.
 
bobn2 wrote

Depends on the camera. The standard for mFT is 16-24 MP; the standard for FF is 20-24 MP. There are FF cameras available with 30-50 MP, but they are all expensive cameras. Sure, they can capture detail that mFT can't. The mFT lenses also benefit from the smaller image circle, which relaxes size constraints a little, and they are optically very good indeed, so for normal purposes there isn't much, if any, deficit due to lens quality.

--
Bob.
DARK IN HERE, ISN'T IT?
so if Micro 4/3 can get up to the same number of pixels as a FF camera, why does anyone buy the latter? And what are they doing with all that sensor size if not putting more pixels on it? Are the pixels bigger? Further apart?

Forgive my confusion, but this seems very counterintuitive!
Well, the advantage of FF is a wider shooting envelope than mFT. As I said somewhere up the thread, image quality is mainly about how much light gets into the picture, not the size of the frame.
But for a given intensity of light falling on the sensor, the only reason the FF sensor receives more total light than an m4/3 sensor is that it is bigger. In fact, total light and sensor area are directly proportional.

So why does that result in higher image quality in the picture? I mean to say, what is the scientific explanation for that then?
You don't need a scientific explanation if you have done film printing.

If you are wondering if that makes a larger sensor less noisy, I do not think so. The grain of MF film is exactly the same as that of FF film.

BTW, the science part is in the digital signal processing (DSP).
Exactly so. Large negatives produce smaller grain and generally better image quality than small negatives because less enlargement is required. The total quantity of light the negative receives plays no part and is just an interesting, but irrelevant, correlation.
That is in fact completely false. Imagine comparing a 35mm image with a 645 image using the same exposure and emulsion, printed the same size. The 645 image will look to have higher quality due to a greater number of smaller grains. We know the grains aren't really smaller, they just look so, because they are enlarged less, but there really is a bigger number of them. How did we come to have a bigger number of exposed grains? Because there was more light to expose them.
You say that what I said is completely false and then say exactly the same thing using different words. Why are you being deliberately argumentative? But, more importantly, surely the reason we have a bigger number of grains on the bigger negative is simply that it is bigger? In fact, if you actually do such a comparison in practice (as I have done many times), one finds the number of grains per unit area and the size of the grains are identical. Where does this business of more light to expose them come from? The average light intensity per unit area is exactly the same and the grains are exactly the same size, so where is the connection between total quantity of light and grain size? An explanation of your reasoning showing the connection would be most welcome, as it seems to be a non-issue.
In fact, shot noise is not even visible in a photographic print as it occurs at a molecular level and is totally swamped by the physical grains that form the image.

And large sensors produce better image quality for similar reasons, with noise tending to be disproportionately low with larger sensors because they usually have a higher signal-to-noise ratio than smaller sensors. In other words image noise is caused by the electronics and processing.
This is also completely untrue. Electronic noise is a very small proportion of the total noise in an image.
Really? Try taking a photograph of the night sky through a telescope, then put the lens cap on and make another exposure. If you then examine both frames closely, you will find that the noise in both images is identical, even though absolutely no photons whatsoever reach the sensor during the exposure of the blank frame.

That is why in astrophotography we take blank exposures at the same time as the actual photographs and then subtract them from the image. The noise produced by the camera is constant, even when no photons are striking the sensor at all, whereas the random fluctuations in the rate of arrival of photons average out during the exposure and are not visible in the final image. Try it for yourself.
That is why I would really like to see a scientific explanation for the notion that image noise is principally shot noise and directly dependent on the total number of photons that the sensor or film receives during the exposure. I suspect I will wait a long time!
You have been given them many times. You always find a spurious reason to reject them.

--
Bob.
DARK IN HERE, ISN'T IT?
 
Is anyone able to answer this question? (About whether pixels in a FF sensor are further apart, or just bigger, than in an MFT sensor of the same pixel resolution.)
They are typically bigger.

This website gives us some data to consider:

http://www.sensorgen.info

The "Max Saturation capacity" figure is a rough estimate of pixel size, and typically bigger pixels have bigger values. Of course, more megapixels on any sensor will tend to reduce the size of these, but generally speaking, the full frame cameras usually have much higher pixel capacities than compact cameras.

The Quantum Efficiency value incorporates color filtering, so 100% QE would mean that you'd only get black and white images; but QE values also take into account the spacing of pixels: widely spaced pixels will have lower QE than densely packed pixels.
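If you want an actual pitch figure rather than a proxy, it only takes sensor dimensions and the megapixel count; a quick sketch (nominal sensor dimensions assumed; the FF and APS-C results line up with the 6 and 3.92 micron figures quoted earlier in the thread):

```python
import math

def pixel_pitch_um(width_mm, height_mm, megapixels):
    """Approximate pitch, assuming square pixels tiling the whole sensor."""
    area_um2 = (width_mm * 1e3) * (height_mm * 1e3)
    return math.sqrt(area_um2 / (megapixels * 1e6))

print(f"24 MP FF   : {pixel_pitch_um(36.0, 24.0, 24):.1f} um")  # ~6.0 um
print(f"24 MP APS-C: {pixel_pitch_um(23.5, 15.6, 24):.1f} um")  # ~3.9 um
print(f"16 MP mFT  : {pixel_pitch_um(17.3, 13.0, 16):.1f} um")  # ~3.7 um
```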

--
http://therefractedlight.blogspot.com
 
bobn2 wrote

Depends on the camera. The standard for mFT is 16-24 MP; the standard for FF is 20-24 MP. There are FF cameras available with 30-50 MP, but they are all expensive cameras. Sure, they can capture detail that mFT can't. The mFT lenses also benefit from the smaller image circle, which relaxes size constraints a little, and they are optically very good indeed, so for normal purposes there isn't much, if any, deficit due to lens quality.

--
Bob.
DARK IN HERE, ISN'T IT?
so if Micro 4/3 can get up to the same number of pixels as a FF camera, why does anyone buy the latter? And what are they doing with all that sensor size if not putting more pixels on it? Are the pixels bigger? Further apart?

Forgive my confusion, but this seems very counterintuitive!
Well, the advantage of FF is a wider shooting envelope than mFT. As I said somewhere up the thread, image quality is mainly about how much light gets into the picture, not the size of the frame.
But for a given intensity of light falling on the sensor, the only reason the FF sensor receives more total light than an m4/3 sensor is that it is bigger. In fact, total light and sensor area are directly proportional.

So why does that result in higher image quality in the picture? I mean to say, what is the scientific explanation for that then?
You don't need a scientific explanation if you have done film printing.

If you are wondering if that makes a larger sensor less noisy, I do not think so. The grain of MF film is exactly the same as that of FF film.

BTW, the science part is in the digital signal processing (DSP).
Exactly so. Large negatives produce smaller grain and generally better image quality than small negatives because less enlargement is required. The total quantity of light the negative receives plays no part and is just an interesting, but irrelevant, correlation.
That is in fact completely false. Imagine comparing a 35mm image with a 645 image using the same exposure and emulsion, printed the same size. The 645 image will look to have higher quality due to a greater number of smaller grains. We know the grains aren't really smaller, they just look so, because they are enlarged less, but there really is a bigger number of them. How did we come to have a bigger number of exposed grains? Because there was more light to expose them.
You say that what I said is completely false and then say exactly the same thing using different words. Why are you being deliberately argumentative?
What you said was: "The total quantity of light the negative receives plays no part and is just an interesting, but irrelevant, correlation." (emboldened above). That is, in fact, completely false, as I illustrated.
But, more importantly, surely the reason we have a bigger number of grains on the bigger negative is because it is bigger?
Well, duh, as the saying goes.
In fact, if you actually do such a comparison in practice (as I have done many times), one finds the number of grains per unit area and the size of the grains are identical. Where does this business of more light to expose them come from?
How do you expose more grains without having more light? To expose a grain means the incidence of some number of photons (the number depending on the efficiency of the emulsion). The photons don't get re-used; their energy is dissipated in freeing photoelectrons. So, to expose more grains you need more photons. If you didn't have more photons, you wouldn't expose any more grains.
The average light intensity per unit area
tautology
is exactly the same and the grains are exactly the same size, so where is the connection between total quantity of light and grain size?
Because we are not talking about light per unit area, we are talking about the total light, to expose the total number of grains. Light per unit area times area equals amount of light.
An explanation of your reasoning showing the connection would be most welcome as it seems to be a non-issue.
It was perfectly well explained the first time round. I've tried to explain it even more simply above. Frankly, if you can't work it out now, you're probably never going to understand. It seems a simple concept: same light per unit area, bigger area gives more light.
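The same arithmetic, spelled out as a sketch (the grain density is an illustrative figure, not measured film data):

```python
# Same emulsion, same exposure: exposed grains per mm^2 is identical,
# so total exposed grains scale with frame area -- i.e. with total light.
grains_per_mm2 = 10_000       # illustrative figure, not measured data

area_135 = 36 * 24            # 35mm frame, mm^2
area_645 = 56 * 41.5          # 645 frame, mm^2 (nominal)

print(f"135 grains: {grains_per_mm2 * area_135:,.0f}")
print(f"645 grains: {grains_per_mm2 * area_645:,.0f}")
print(f"ratio     : {area_645 / area_135:.1f}x")   # ~2.7x more exposed grains
```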
In fact, shot noise is not even visible in a photographic print as it occurs at a molecular level and is totally swamped by the physical grains that form the image.

And large sensors produce better image quality for similar reasons, with noise tending to be disproportionately low with larger sensors because they usually have a higher signal-to-noise ratio than smaller sensors. In other words image noise is caused by the electronics and processing.
This is also completely untrue. Electronic noise is a very small proportion of the total noise in an image.
Really?
Really.
Try taking a photograph of the night sky through a telescope, then put the lens cap on and make another exposure. If you then examine both frames closely, you will find that the noise in both images is identical, even though absolutely no photons whatsoever reach the sensor during the exposure of the blank frame.
The night sky is not at all a typical subject.
That is why in astrophotography we take blank exposures at the same time as the actual photographs and then subtract them from the image.

The noise produced by the camera is constant, even when no photons are striking the sensor at all.
Actually, that is a different reason. Noise, proper noise, from shot to shot will always be different, just because it's random. Subtracting that will not reduce noise. What you're removing with your dark frame subtraction is 'pattern noise', caused by the sensor and its readout having systematic differences between lines and columns.
Whereas the random fluctuations in the rate of arrival of photons average out during the exposure and are not visible in the final image. Try it for yourself.
As above, the night sky is a highly atypical subject. It's all deep shadow, so it's not surprising that what you see is shadow noise. Now here is an experiment for you: take a picture of the night sky like you would a normal subject. Take your camera, point it at the night sky, set the ISO to the lowest your camera has, and take a meter reading using averaged or centre-weighted metering. Now take a photo using the exposure settings given by the meter. Look at the result. Tell me how noisy the night sky is, and whether the electronic noise is the dominant noise.
That is why I would really like to see a scientific explanation for the notion that image noise is principally shot noise and directly dependent on the total number of photons that the sensor or film receives during the exposure. I suspect I will wait a long time!
You have been given them many times. You always find a spurious reason to reject them.
I re-iterate this. If you cannot understand that amount per unit area times area gives total amount, I suspect you're never ever going to understand a 'scientific explanation'.

--
Bob.
DARK IN HERE, ISN'T IT?
 
I shoot MFT because the sensor is smaller than FF. This gives me smaller lenses and a smaller total package. It allows me to use a wider aperture for faster shutter speeds, yet retain a good depth of field for accurate focus at the same time.
 
I shoot MFT because the sensor is smaller than FF. This gives me smaller lenses and a smaller total package. It allows me to use a wider aperture for faster shutter speeds, yet retain a good depth of field for accurate focus at the same time.
No, it doesn't give you a 'wider aperture'. In fact it gives you a 'narrower' one. If you have your mFT camera fitted with a 25mm f/1.4 lens, the aperture is 17.9mm wide. If you have a FF camera fitted with a 50mm f/1.4 lens (which gives the same angle of view), you have an aperture that is 35.7mm wide. That's twice as wide, and it's collecting four times the light, so for the same image quality you could be setting a shutter speed 1/4 that of the mFT camera.

There is no dodging depth of field: light gathering and depth of field go together, so if you want low-light performance you get narrow depth of field. Making the sensor smaller does not magically produce light.

By your logic a phone camera with a 1/5" sensor is going to be miles better than either mFT or FF. Clearly it isn't.
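Working the 25mm f/1.4 comparison through as a sketch (same scene and same angle of view assumed):

```python
# mFT 25mm f/1.4 versus FF 50mm f/1.4: same angle of view, same scene.
mft_pupil = 25 / 1.4           # ~17.9 mm
ff_pupil = 50 / 1.4            # ~35.7 mm, twice the diameter

light_ratio = (ff_pupil / mft_pupil) ** 2
print(f"FF gathers {light_ratio:.0f}x the light at the same f-number")  # 4x

# Two ways to spend that on FF:
#  - keep f/1.4 and use a shutter speed 1/4 as long for the same
#    total light (and the same noise), or
#  - stop down to match the mFT depth of field exactly.
ff_equiv_f = 1.4 * (50 / 25)   # crop factor 2 -> f/2.8 equivalent
print(f"DoF-equivalent FF aperture: f/{ff_equiv_f:.1f}")
```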

--
Bob.
DARK IN HERE, ISN'T IT?
 
You don't understand that the aperture is a ratio, a RATIO. f/2.8 on FF is f/2.8 on MFT is f/2.8 on a large format camera as far as exposure is concerned. It is a RATIO. Damn, people, get with it already. Now, because it is a smaller sensor, the image quality will not be as good as a larger sensor. Guess what: we already understand that, but are willing to make the trade-off for the smaller system, just as the old 35mm was not as good as a 4x5 camera. However, when calculating exposure it is all the same: since the sensor area is larger on FF than on MFT, the aperture needs to be larger because it needs more light to get the same exposure, since it has a larger area to spread the light over. The exposure is the SAME because it is a RATIO designed right into the lens. Get it now? Deal with it and go away with that stupid mouthy garble.
 
I did a mental calculation and concluded I needed 16 MP to equal what I was getting from 35mm on my 20"x30" posters. And note that this 16 MP calculation was done when digital was only 300 kpx (0.3 MP), and I seriously doubted digital would ever reach 16 MP.
Actually, I think the threshold where digital matches or beats 35mm film came at around 10 MP.

With FF sensors now at 36, 42 and 50 MP, we are definitely well into medium format film territory.
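The 16 MP estimate above is easy to reproduce; a sketch (the ~150 ppi poster viewing resolution is an assumption about viewing distance):

```python
# Megapixels needed for a given print size and resolution.
def megapixels_needed(width_in, height_in, ppi):
    return (width_in * ppi) * (height_in * ppi) / 1e6

# 20"x30" poster, viewed at poster distance (~150 ppi is plenty):
print(f"{megapixels_needed(20, 30, 150):.1f} MP")  # ~13.5 MP, i.e. ~16 MP class
# The same poster at close-inspection 300 ppi would need far more:
print(f"{megapixels_needed(20, 30, 300):.1f} MP")  # ~54 MP
```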
 
You don't understand that the aperture is a ratio, a RATIO. f/2.8 on FF is f/2.8 on MFT is f/2.8 on a large format camera as far as exposure is concerned. It is a RATIO. Damn, people, get with it already. Now, because it is a smaller sensor, the image quality will not be as good as a larger sensor. Guess what: we already understand that, but are willing to make the trade-off for the smaller system, just as the old 35mm was not as good as a 4x5 camera. However, when calculating exposure it is all the same: since the sensor area is larger on FF than on MFT, the aperture needs to be larger because it needs more light to get the same exposure, since it has a larger area to spread the light over. The exposure is the SAME because it is a RATIO designed right into the lens. Get it now? Deal with it and go away with that stupid mouthy garble.
I think aperture is a dimension, like 10mm or 15mm: it's the diameter of the entrance pupil. Now, f-stop is a ratio; some call it relative aperture. And, as you note, it works well for normalizing exposure between lenses of different focal lengths.

Now, in typical photographic parlance, we would say "I used an aperture of f/4" and everybody knows you mean relative aperture. But this thread, at this point, is a somewhat technical discussion and it's important to use accurate terminology.

So I find it somewhat unproductive for you to devolve into insults because your less-than-accurate statement has been taken at face value.
 
I have just discovered there is an even smaller sensor system than APS-C, called Micro 4/3. Apparently, this dispenses with the prism and mirror altogether and uses an electronic viewfinder only, to save size and weight.
First of all, it almost has to have an EVF, because at this size a conventional OVF would be very small and dark.

Note that even APS/DX OVFs are smaller and darker than what we enjoyed back in film days.
For sure. As you almost certainly remember, the original incarnation of micro 4/3 was simply 4/3, which was a DSLR format.
And in turn, FT was based on the old 110 film format. Kodak decided to make CCD sensors based on their three major film formats: 135, APS-C and 110. The 135 sensors were prohibitively expensive for everyday photography, so manufacturers went for smaller sensors. Kodak and those that followed used the APS-C size and put it in their 135 bodies. Olympus, who had exited the SLR market some years earlier and had no SLR bodies to reuse, built a completely new system around the 110-size sensor.

--
Bob.
DARK IN HERE, ISN'T IT?
Cool info - thanks!

Just curious, though, as the 110 Instamatic was so much smaller... Also, the image quality was nothing to write home about. :-(
 