Just shows how obsolete full frame has become. You can buy the OM System 300mm f4, which is the equivalent focal length, for about 1/8 the price and half the weight. Why would anybody pay $28,000 AUD when you can buy just as good a lens for $3,500 AUD? You could also buy the 150-400mm with a built-in TC for less than half the price of the Nikon.
Posted on Nov 3, 2022 at 20:50 UTC, as 13th comment | 27 replies
In the jpg tests, what setting did you have NR set to? My tests show that you MUST turn NR down to Low or the image smears. I found a huge difference at ISO 6400 between the OM-1 and the EM1 Mk3 when looking at white text on a cyan background; on other coloured backgrounds the difference was smaller.
Posted on Mar 7, 2022 at 21:15 UTC, as 45th comment | 3 replies
Vit Adamek: Anything between 20-30MPx is just about ideal for getting the balance right on many fronts, like buffer depth, especially now that burst rates are this crazy high.
From a non-professional enthusiast's viewpoint I think Micro 4/3 still lacks a bit of dynamic range. Not by much, but stacking, e.g., 7 exposures 1/3 stop apart on the good old Panasonic G6, using a tripod, and postprocessing in Photomatix to my liking did the trick: https://photos.app.goo.gl/5Jg3rXy2pA8LwieE9
That image is also a manual panorama, 3 photos stitched together on the PC using the free Hugin software. Basically G6 raws postprocessed in DxO Optics Pro, exported as uncompressed TIFFs, the different exposures stacked in Photomatix to create the desired HDR look, and stitched up in Hugin.
With the A7 Mk1 the DR is so good that I don't feel I need to stack different exposures anymore; I can get similar results from a single exposure and postprocessing.
Therefore I want the focus to be on DR with Micro 4/3.
Have a look at photonstophotos.net and its tests. At ISO 200 virtually all cameras have about the same 10-stop dynamic range, and that goes for the Z9 and the EM1 Mk2/3. At high ISO the EM1 Mk2/3 has about half a stop less DR than, say, the Z9. It looks like the OM-1 is going to surpass every other camera on the market for DR at high ISO.
gameshoes3003: 20 mega-pickles is enough for final products and whatnot. The reason I would want more is that I do crop some of my photos. I'm no professional who regularly practices to be prepared for the perfect shot every time. Sometimes the perfect moment captured by me needs a little cropping, and a 20 mega-pickle file isn't going to hold up well. Also, with how well built the Olympus cameras are, I'm sure that landscape photographers desire them and would really appreciate getting all that detail in one shot.
What, almost 2 stops better low light than FF, and 80MPx in hi-res mode? There is a misconception that FF cameras capture more light. Yes, over the whole sensor they do, but the light per pixel is identical for the same pixel size. The noise is determined by the light per pixel, NOT by how much light falls on the sensor.
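A back-of-the-envelope check of the per-pixel claim. The photon flux, sensor dimensions and pixel pitch below are illustrative assumptions, not measurements from any camera:

```python
# Toy numbers, not measurements: the same photon flux falling on two sensors.
flux = 1000.0                   # photons per square micron (assumed)

ff_area_um2 = 36_000 * 24_000   # full-frame sensor, 36 x 24 mm, in um^2
mft_area_um2 = 17_300 * 13_000  # Four Thirds sensor, 17.3 x 13 mm, in um^2

pixel_pitch_um = 3.3            # assume the SAME pixel size on both sensors
pixel_area_um2 = pixel_pitch_um ** 2

# The larger sensor collects more light in total...
total_ratio = (flux * ff_area_um2) / (flux * mft_area_um2)
print(f"total light, FF vs 4/3: {total_ratio:.1f}x")

# ...but a pixel of the same size collects the same photon count on either.
print(f"photons per pixel (either sensor): {flux * pixel_area_um2:.0f}")
```

The whole-sensor advantage (about 3.8x here) is real, but it only turns into lower per-pixel noise if the pixels themselves are made larger.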
ProfHankD: Having significantly more than 20MP in an MFT sensor is not very helpful.
The lp/mm resolution of most good lenses is disturbingly independent of the format they are designed to cover, at least from MFT to 4x5". Thus, a smaller sensor tends to imply lower lens-limited total resolution. No MFT lens tested by DxO ever topped 16MP effective resolution (on a 20MP sensor), and only one hit that. In contrast, literally hundreds of FF lenses DxO tested resolved more than 16MP on FF -- and many are sensor pixel count limited. Bottom line: with most MFT lenses at most aperture settings, resolution is lens limited and you shouldn't expect much more than about 6-8MP scene resolution -- which a 20MP sensor will capture quite cleanly, and a clean 6-8MP is usually good enough (better than 135 film typically delivered).
In sum, as Scotty would say, "you cannot change the laws of physics."
Put another way, making amazingly good MFT lenses would be a higher priority.
That's just nonsense. DXO are not actually very good at physics when they claim a camera delivers more than, say, 14 stops of dynamic range while using a 14-bit A/D; that's against the laws of physics. And you can see how nonsensical the lens claim is when you put the el cheapo 40-150 on a camera and see the increase in resolution in hi-res mode: the lens must be resolving more than 20MPx if the image shows more detail.
The Olympus lenses are rugged too. A friend bought a 17mm f1.2 and the delivery driver threw it over the gate onto the concrete drive. Her partner then drove over the box with both wheels of a heavy 4WD, as it was invisible from the cabin. The lens still worked afterwards.
Posted on Feb 18, 2022 at 20:40 UTC, as 110th comment | 2 replies
AbrasiveReducer: While I wish they had used the money they paid for this video on a Pen-F with a 25 megapixel sensor and no video capability, there are some encouraging signs. They are well aware they can't compete with a $4000 full frame camera with a 50MP sensor that weighs twice as much. Also, I didn't see the EM1X in the video, and that's a good sign as well. It's still a long shot, but like Fuji, they're smart enough to steer clear of the Canon-Nikon-Sony fight, which is already enough to meet anyone's needs.
In the meantime, since I take pictures with my cameras instead of treating them as investments, I'm enjoying Olympus' current cameras just as they are. Especially when I travel. No backpacks, no rolling bags. On most flights the entire outfit goes under the seat.
So Sony, Canon and Nikon should give it up, as the full frame concept is obsolete and, except for the very top range cameras, offers no benefit except strengthening your muscles and breaking the bank when it comes to tele lenses.
Very much second-rate images. Just look at the images which win competitions like Australian Landscape Photographer of the Year and the international salons run under FIAP and PSA. Most of these wouldn't even get an acceptance, let alone an award.
Posted on Jul 22, 2021 at 22:13 UTC, as 35th comment | 3 replies
Kona Mike: For all the folks "high" on full frame: full frame will eventually go the way of the DSLR.
Medium format was king because the large negative gave superior prints; now everyone shoots weddings on 35mm full frame. Give it some time and crop sensors will chip away at full frame. If you want small, convenient lenses, the sensor needs to be smaller. All the advances in sensors and processing will make crop sensors good enough.
To me it looks like sensors are generally plateauing, with most of the gains coming from higher-density sensors improving. I see this trend continuing. Look at what current crop cameras can do compared to some older full frame ones.
Yeah, yeah, I know, you only shoot raw full frame with the best primes and switch brands every 2 years to get the best dynamic range and noise-free shadows in your 30-stacked-frame HDR images. Just remember 99% of the world is happy with cell phone quality right now. Give the sensors another 5 or 6 generations and we'll see.
You forget quantum efficiency. The Olympus sensor has close to twice the efficiency of FF sensors so FF sensors aren't that much better. Once you take into account the conversion noise and readout noise FF sensors are inferior at the pixel level to the Olympus sensor.
Scottelly: I'm sorry, but four travel tripods for every budget would be at least 20 tripods. This article shows just four tripods, and they're ALL over my budget.
:(
How about you include this one next time: https://www.bhphotovideo.com/c/product/1148849-REG/came_tv_q66c_carbon_fiber_tripod.html
They have cheaper tripods too, but the one I linked to is one I've had for about five years, keeping it in my backpack for the first four years and now inside my camera backpack, and I love it. BTW, it gets a few inches taller than the little MeFoto here, and it's cheaper from the same store.
https://www.bhphotovideo.com/c/product/1431945-REG/mefoto_bpscblk_backpacker_s_carbon_fiber.html
I also wonder why you're using only metric measurements. I mean, how tall is 140 cm? Is that like 72 inches or 5 feet, or what?
Pity they didn't include the Zomei tripods as they are far cheaper and better than any of those 4. You can get a Zomei carbon fibre tripod for under $100 USD which is more stable, carries far more weight (think 20kg) and is no heavier. Sirui also makes better travel tripods.
Posted on Jul 26, 2020 at 22:03 UTC, as 139th comment
Will somebody please tell me how an image of a man and children playing in the sea remotely fits the street photography category? Looking at the Open shortlist finalists, while some images are good, some belong in the dustbin. I see far, far better photos on Facebook, Instagram and in other competitions.
Posted on Feb 28, 2018 at 20:48 UTC, as 13th comment
HowaboutRAW: "it [noise] comes from the light you're capturing."
Meaning optically better lenses help with noise.
Oh wait, that's been clear for years, despite vociferous denials in these comments.
This is not right, HR. The quality of the lens has nothing to do with shot noise; shot noise is a property of light. A better lens will help with sharpness, chromatic aberrations and things like that, but it does absolutely nothing to noise.
rgames1: Interesting but I'm skeptical that the variation in number of photons can cause anywhere near the amount of noise produced by the camera electronics.
The argument is made by comparing to tubes collecting raindrops. However, nowhere in the article does it say how many photons are captured in the shadow pixels, so the comparison is never backed up with any data. Making the argument requires that that number be established then compared to the variation in number of photons.
So, how many photons are captured by each pixel in the shadows? Further, what is the variation in that number? My strong suspicion is that the variation in number of photons in any part of the image is still extremely small compared to the number captured but that information is nowhere to be found. That information is implied by the analogy but never quantified.
The answer to those questions will show whether or not the analogy is valid.
rgames
The article is correct. A typical pixel captures a maximum of around 64,000 photons. The first stop down holds 32,000 photons, the next 16,000. By the time you get 6 stops down you are at around 1,000 photons with noise of about ±32 photons; 8 stops down you have 250 photons with noise of ±16; 10 stops down you have about 60 photons with noise of ±8, which is going to make a very blotchy image.
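The stop-by-stop arithmetic is easy to reproduce. A sketch assuming the 64,000-photon full well quoted in the comment, with shot noise taken as the square root of the photon count:

```python
import math

FULL_WELL = 64_000  # assumed full-well capacity, in photons

for stops_down in (0, 6, 8, 10):
    photons = FULL_WELL / 2 ** stops_down   # halve the light for each stop down
    noise = math.sqrt(photons)              # shot noise (one standard deviation)
    print(f"{stops_down:2d} stops down: {photons:8.1f} photons, "
          f"noise about +-{noise:.0f}, SNR {photons / noise:.0f}:1")
```

The signal-to-noise ratio falls by a factor of sqrt(2) per stop, which is why the deep shadows look blotchy long before the midtones do.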
AksCT: Nice article and a simple description of the effect of shot (quantum) noise. To be precise, quantum noise is statistically the square root of the signal, i.e. for every 100 photons you will have a (statistical mean) of 10 photons of noise. So the ratio of noise to signal photons would be 10/100 = 10% (this is not SNR or its inverse). Here is an example:
10 photons -> sqrt(10) ~ 3.3 => ratio 3.3/10 = 33% (very dark region)
100 photons -> sqrt(100) = 10 => ratio 10/100 = 10% (dark grey)
10,000 photons -> sqrt(10000) = 100 => ratio 100/10000 = 1% (bright region)
As you can see, within the same image (as shown by your pictures) the ratio of noise to signal can vary from 1% (barely visible) to 33% (very noisy).
Quantum noise is an inherent nature of CMOS/CCD detectors.
Quantum noise is inherent in light and has nothing to do with how you detect it.
Otherwise quite well explained, except for one major mistake: you talk about sensor size as being important, but that is only part of the story. The real issue is pixel size. A FF sensor with 4 times as many pixels as a 4/3 sensor is identical in per-pixel noise performance, because each of its pixels is the same size as a 4/3 pixel and so captures the same amount of light.
The other thing which has been omitted is that, because of shot noise, a raw file only contains about 250 usefully distinct levels of brightness per pixel, which is about what a jpg file has. Virtually all the extra information in a raw file is nothing more than digitised noise. You can do the calculation quite easily, remembering that noise is the square root of the number of photons.
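The "~250 levels" figure can be sanity-checked by counting brightness levels spaced far enough apart to be distinguishable above shot noise. A sketch assuming a 65,000-photon full well and requiring two noise standard deviations between adjacent levels; both numbers are assumptions for illustration:

```python
import math

FULL_WELL = 65_000  # assumed full-well capacity, in photons
SPACING = 2.0       # require two shot-noise sigmas between distinguishable levels

signal = 1.0
levels = 0
while signal < FULL_WELL:
    signal += SPACING * math.sqrt(signal)  # step up by the local shot noise
    levels += 1

# Roughly 2 * sqrt(FULL_WELL) / SPACING levels, i.e. around 250.
print(levels)
```

With one-sigma spacing you get about twice as many levels; either way it is far fewer than the 16,384 codes a 14-bit raw file can record.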
Posted on Apr 27, 2015 at 22:42 UTC, as 150th comment | 3 replies
ImagesToo: I think this is not correct and doesn't do anything. You get ISO1600 by amplifying the signal which the sensor captures. You don't get any better noise performance because the noise is the square root of the number of photons in a pixel and you haven't changed that. You might just as well amplify the ISO100 signal (ie increase the brightness of the shadows after processing) and you will get exactly the same result without all the moire. All it is doing is effectively a really crude hdr processing on an image and compensating for the poor RAW to jpg conversion in the camera. Bracketing exposures and using hdr will give better results.
Further comments: a typical high end sensor has a capacity of around 65,000 electrons (photons). A 14-bit A/D has 2^14 = 16,384 discrete levels, each level thus spanning about 4 electrons. At the dark end you finish up with about 4 electrons per level and a quantum noise of ±2 electrons (a signal-to-noise ratio of only about 2:1), which is so bad as to be useless. To be useful the amplifier has to have an input-referred noise level of well under 4 electrons, and even then it can't do anything about the very low quantum S/N ratio at the dark end. That's a huge challenge for amplifier design, and we are ignoring the effects of thermal noise on amplifier performance.
The amplifier will raise the signal level into the A/D converter, so the converter's noise performance isn't going to be the dominant factor in any case. So the limiting factors are going to be the amplifier and quantum noise, neither of which you can do anything about.
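The quantisation arithmetic above, written out as a few lines of Python (taking the comment's 65,000-electron capacity and 14-bit A/D as givens):

```python
import math

FULL_WELL = 65_000       # sensor capacity in electrons (from the comment)
ADC_BITS = 14
adc_levels = 2 ** ADC_BITS              # 16384 discrete output codes
e_per_code = FULL_WELL / adc_levels     # electrons spanned by one ADC step

# At the dark end, one code step holds about 4 electrons; shot noise there
# is sqrt(4) = 2 electrons, so the signal-to-noise ratio is only about 2:1.
dark_signal = e_per_code
dark_noise = math.sqrt(dark_signal)
print(f"{e_per_code:.2f} e per code, dark-end SNR about {dark_signal / dark_noise:.1f}:1")
```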
ImagesToo: I think this is not correct and doesn't do anything. You get ISO1600 by amplifying the signal which the sensor captures. You don't get any better noise performance because the noise is the square root of the number of photons in a pixel and you haven't changed that. You might just as well amplify the ISO100 signal (ie increase the brightness of the shadows after processing) and you will get exactly the same result without all the moire. All it is doing is effectively a really crude hdr processing on an image and compensating for the poor RAW to jpg conversion in the camera. Bracketing exposures and using hdr will give better results.
I agree that putting the amplifier before the A/D will obviously help with A/D noise. Of course the question remains just how noisy the ISO amplifier itself is, because you obviously can't do anything about that. In general a 14-bit A/D has to have a noise level well below one bit or it can't possibly deliver 14 bits of conversion, and the same applies to the ISO amplifier. No matter what, you are still scratching around in the deep shadows where the quantum noise starts to dominate. Actually, even with a moving subject you can sometimes use bracketing, but it becomes messy and you have to do it in small segments of the image where you can get alignment. A single raw capture processed using HDR techniques usually works pretty well, though, without loss of resolution.
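The benefit of analog gain before the converter is easy to show numerically. The toy simulation below (all numbers invented for illustration, not a model of any particular camera) applies a coarse quantiser to a deep-shadow Poisson signal, once with a 16x digital push after quantisation and once with 16x gain before it. The shot noise is identical in both cases, but the post-quantisation push cannot recover the shadow steps the coarse converter already threw away:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.poisson(3.0, 100_000).astype(float)  # photoelectrons, shot noise included
ideal = signal * 16                               # what a perfect 16x gain would give

step = 16.0  # coarse ADC: one output code per 16 electrons (made up)

# Push 16x in software AFTER quantisation: deep shadows collapse to a few codes.
push_after = np.round(signal / step) * step * 16

# Amplify 16x BEFORE the ADC: the quantiser now resolves single electrons.
amp_before = np.round(signal * 16 / step) * step

err_push = np.sqrt(np.mean((push_after - ideal) ** 2))
err_amp = np.sqrt(np.mean((amp_before - ideal) ** 2))
print(f"RMS quantisation error: push after ADC {err_push:.1f} e, gain before ADC {err_amp:.1f} e")
```

Neither route reduces the shot noise itself, which is the commenter's point: the gain only protects the signal from downstream quantisation and read noise.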
More lenses for an obsolete format. Makes perfect sense NOT.
What about Nikon's full-frame mirrorless cameras? I also believe Canon is getting out of the camera market, as it can't compete with Olympus.