A common belief in the camera world is that lower resolution cameras – and larger pixels – perform better in low light. In this video we illustrate why you should question that bit of conventional wisdom.
It was recently discovered that the A7S III actually has a 48 megapixel sensor that is (apparently) digitally binned to 12 megapixels. https://www.dpreview.com/forums/post/65603108
So the printing experiment is not using the best cameras for such a test; it's actually comparing binned 48MP against non-binned 61MP.
Hold up. Is it just me, or do the prints from the A7SIII have better color than the A7RIV? Something tells me that noise is not the only factor when it comes to low light image quality.
If those claims were even half true (smaller = better), the Hubble telescope would be smartphone-sized. Apparently those mad scientists keep building bigger instruments just for fun.
Oh man... in every telescope, on Earth or in space, the biggest and heaviest part is the body with its lens and mirror assembly, not the image sensor. Come on!
Why are you talking about sensor size and lens(mirror) size? This is a comparison about pixel size. I'd say it's an apples to oranges comment but this is more on the scale of apples to Honus Wagner collectable baseball card comparison.
I prefer cameras with a moderate number of pixels. It makes for snappier pictures with higher contrast, which is what the brain perceives as a sharper image.
People want more pixels. They equate more pixels with sharper images. Things are not that simple. More pixels means smaller pixels, and that means a lower S/N ratio, i.e. more noise. More pixels also require lenses that can resolve more line pairs/mm, and lenses that can resolve enough to take advantage of the additional pixels are not abundant; they are also very expensive. Further, when a lens is asked to resolve a lot of detail, the contrast goes down. MTF measurements are made using 10 line pair and 30 line pair per mm targets, and needless to say, MTF scores drop drastically at 30 line pairs per mm. High-resolution sensors sometimes demand the ability to resolve 80-100 line pairs/mm. That makes details at that resolution murky at best; MTF scores using 100 lp/mm targets would be dismal.
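To put rough numbers on the "line pairs per mm" point above, here is a minimal sketch. The sensor width and the Nyquist rule of one line pair per two pixels are assumptions for illustration, not measurements of any specific lens/sensor combination:

```python
# Back-of-the-envelope Nyquist limit: how many line pairs per mm a sensor
# can in principle resolve, given its horizontal pixel count and width.
# Assumption: ~35.7 mm usable full-frame sensor width.

def nyquist_lp_per_mm(pixels_across: int, sensor_width_mm: float) -> float:
    """One line pair needs at least two pixels, so the limit is half
    the pixel-pitch spatial frequency."""
    pixels_per_mm = pixels_across / sensor_width_mm
    return pixels_per_mm / 2

# Sony A7R IV: 9504 pixels across
print(round(nyquist_lp_per_mm(9504, 35.7)))  # ~133 lp/mm
# Sony A7S III: 4240 pixels across
print(round(nyquist_lp_per_mm(4240, 35.7)))  # ~59 lp/mm
```

So a 61MP full-frame sensor asks the lens for roughly double the spatial frequency of a 12MP one, which is where the MTF concern comes from.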
Noise is much more controllable in post. Some NR software is genuinely amazing, whereas resolution upsampling is still mostly hit and miss. I'd rather start out at the highest resolution I can afford.
Because... math. IQ (i.e. signal-to-noise ratio) depends on sensor area. With all modern sensors nearly equally sensitive and efficient, a 35mm full-frame sensor will have better IQ than a 2.5mm cell phone sensor, for the same reason a single photosite from a 24MP FF sensor will have superior IQ to a single photosite from a 61MP FF sensor. Photosite comparisons are "pixel peeping." You look to the shadows for noise because they have less signal. The bright parts actually have more noise, but far more signal. Noise is all about the ratio.
Print the two FF images at the same paper size, recapture the prints on video, and the down-sampling will have much the same effect. (Our eyes down-sample, too.)
DXO re-samples all images to the same 8MP regardless of the number of pixels on the sensor. They do NOT look at full resolution (100% magnification) when evaluating low light noise performance.
Photon shot noise is basic physics; it is the ultimate limit on noise, even though many cameras are not shot-noise limited (yet). They haven't reached the photon shot noise floor, but they eventually will (yay!).
It is like the diffraction limit for lenses, as you stop down you lose resolution due to photon self-interference. It is basic physics. The quality of a lens should be judged by how close its performance is able to approach the diffraction limit.
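As a rough illustration of that diffraction limit, here is a sketch using the common Airy-disk approximation (diameter = 2.44 * wavelength * f-number). The 550 nm green-light wavelength is an assumption; real lenses also add aberrations on top of diffraction:

```python
# Approximate Airy disk diameter (first minimum to first minimum) for a
# given f-number: diameter = 2.44 * wavelength * f-number.
# Assumption: 550 nm (green) light.

def airy_disk_um(f_number: float, wavelength_nm: float = 550.0) -> float:
    return 2.44 * (wavelength_nm / 1000.0) * f_number  # nm -> um

for f in (2.8, 5.6, 11.0, 22.0):
    print(f"f/{f:g}: Airy disk ~{airy_disk_um(f):.1f} um")
```

By f/11 the disk (~14.8 um) is already far larger than the ~3.8 um pixels of a 61MP full-frame sensor, which is why stopping down eventually costs resolution no matter how good the lens is.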
In any case, when re-sampled to the same resolution, or viewed at the same scale, photon shot noise does cancel out so long as the sensor lens efficiency and other factors are the same. It is only when a person is shooting a high resolution sensor for the purpose of cropping the photo that it really matters.
And until approximately late 2014, the DXO Mark website admitted that something like 7 of 11 measurements were in fact educated guesses based on the other 4 real measurements.
"In any case, when re-sampled to the same resolution, or viewed at the same scale, photon shot noise does cancel out so long as the sensor lens efficiency and other factors are the same. It is only when a person is shooting a high resolution sensor for the purpose of cropping the photo that it really matters."
Right.
There is a Sony body with lower resolution than the A7R IV that is even better for high-ISO noise control, and it beats the A7S III at high ISOs too: the A1. But I guess that's a new generation, and Sony paid much more attention to internal heat dissipation plus low-noise firmware.
Thank you for this clarification. However, one parameter not even mentioned here is the borders/gaps between individual pixels. Some years ago, it was always said that these mattered: if there is a gap between two pixels that does not receive light, then four smaller pixels would receive less overall light than one big pixel covering the same area, because the big pixel has less border area. Is this no longer an issue with modern sensors?
Microlenses essentially solve this problem (along with modern designs that minimise these gaps). It might still be an issue in smaller sensors (though most effort seems to be focused on stopping light crossing from one pixel into neighboring pixels), but in the large-pixel sensors used in ILCs, it's not a major factor.
Let me give you a scientific explanation of low light and SNR, which should shed some more light for the "pixel peepers" (I am one of those, sadly :-). Theory and practical experiments by imaging specialists have shown the following: the smaller the pixel, the fewer photons are available to be collected. Thus a 1 x 1 um pixel would collect only about 2,000 photons, while a 5 x 5 um pixel collects around 50,000. Due to photon shot noise, the SNR scales as the square root of the signal level. So, in the example given, the sensor with 1 x 1 um pixels would have an SNR of about 45:1, which is 33 dB, while the 5 x 5 um pixel has an SNR of about 224:1, which is 47 dB. I made the same calculations for four popular Sony Alphas:
A7 III (6000x4000): 6 x 6 um => SNR = 268:1 = 48.6 dB
A7R IV (9504x6336): 3.8 x 3.8 um => SNR = 170:1 = 44.6 dB
A7S III (4240x2832): 8.5 x 8.5 um => SNR = 380:1 = 51.6 dB
A1 (8640x5760): 4.2 x 4.2 um => SNR = 187:1 = 45.4 dB
In electronics, a 6 dB difference should be noticeable.
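The arithmetic in the comment above can be checked with a few lines. This is a sketch assuming the comment's figure of roughly 2,000 collected photons per square micron at that exposure, and a purely shot-noise-limited SNR = sqrt(signal):

```python
import math

# Per-pixel shot-noise SNR from pixel pitch, assuming ~2000 photons
# collected per square micron (illustrative figure from the comment)
# and shot noise as the only noise source.

PHOTONS_PER_UM2 = 2000.0

def pixel_snr(pitch_um: float) -> float:
    signal = PHOTONS_PER_UM2 * pitch_um ** 2  # photons in one pixel
    return math.sqrt(signal)                  # Poisson: SNR = sqrt(N)

def snr_db(snr: float) -> float:
    return 20.0 * math.log10(snr)

for name, pitch in [("A7 III", 6.0), ("A7R IV", 3.8),
                    ("A7S III", 8.5), ("A1", 4.2)]:
    s = pixel_snr(pitch)
    print(f"{name}: {s:.0f}:1 ({snr_db(s):.1f} dB)")
```

This reproduces the figures quoted above (268:1, 170:1, 380:1, 188:1), but note these are per-pixel numbers, before any downsampling to a common output size.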
And what happens when you combine the noisier small pixels (as you do when you scale the image to show it at the same size as the lower pixel-count camera)?
Here's a hint: it ends up being the same. If photon shot noise were the only source of noise, there would be no difference at all between pixel counts when viewed at the same size.
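A quick way to see why it ends up the same, as a sketch assuming pure Poisson shot noise (where signals and variances of independent pixels both add when you combine them):

```python
import math

# One 5x5 um pixel collecting S photons vs four 2.5x2.5 um pixels
# collecting S/4 each. Summing the four pixels during downsampling adds
# both their signals and their variances, so the combined SNR equals
# the big pixel's SNR exactly.

def shot_snr(signal: float) -> float:
    return math.sqrt(signal)  # Poisson: noise = sqrt(signal)

S = 50_000.0  # photons on the big pixel (the example above)
snr_big = shot_snr(S)
# Sum of 4 small pixels: signal = 4*(S/4), noise = sqrt(4*(S/4))
snr_binned = (4 * (S / 4)) / math.sqrt(4 * (S / 4))

print(snr_big, snr_binned)  # the two values are identical
```

This is exactly the "same output size" argument: per-pixel SNR differs, but per-area SNR does not.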
They made a good effort, but where are the numbers, the measurements of noise, to back up an observation based on comparing just two photos? Unless you have abandoned your allegiance to science, you have the software to measure noise in a conclusive manner. All the current test proves is that a 12MP camera will show noise long before a 61MP camera. And where does the noise occur: shadows? Mids? What kind is it, color or luminance noise? If you're going to make an assertion that contradicts what is written in digital imaging textbooks and scientific papers, you need more than a single, very subjective anecdote; you must present evidence in the form of numbers.
The underlying principles are covered here. It's true, the video could have mentioned that fewer pixels (rather than larger pixels) will have a noise benefit at very high ISOs, where the large amounts of amplification or multiplication will eventually reveal any tiny differences in read noise.
But ultimately it's a video that's trying to be accessible to a large number of people, not a scientific paper. Hence it focuses on the impact that most people will see in most situations.
But it leaves them with an incorrect understanding; the only reason you see more noise from the A7S is because you’ve violated the design envelope of a 12mp camera. A 12mp image will produce a very nice 11x14 but that’s it. Enlarging it further is like driving 150 mph in a Morris Minor, it’s possible but it won’t be pretty.
Chris, this is not a fair comparison. You should have compared raw images, as out-of-camera JPEGs could be compromised by the in-camera conversion software...
You can compare the raw images on DPReview's Studio Scene Comparison tool. When viewed in comparison mode, the A7RIV has less noise (it actually beats the lower-pixel-count camera), so their argument still holds. The proof is right there in front of you.
Well, if you've followed DxOMark for over a decade, most of their chart-topping cameras have always been the higher-resolution sensors of their time, in any given camera format.
Excellent video. We could always argue that, in an absolute sense, larger pixels will be cleaner, but the video makes really good sense for any pragmatic photographer out there...
I happen to be one of the guys who actually owns both an A7R IV and an A7S III. I am also a full-time commercial studio photographer, branching into commercial video, so I regularly use both cameras for video and photo work.
One of the places where my A7S III shines is how small the file sizes are for photo applications. e.g. For individual corporate portraits involving a lot of people... at 1/5 the file size of my A7R IV... My NAS would have exploded if I shot everyone with the A7R IV. And almost all the time, there isn't a need for photo enlargement, so 12mpx would have been more than sufficient. My A7R IV in this case, would be used for the group photos.
... My A7R IV is also reserved for large commercial prints, while the A7S III is reserved for my model test shoots, which don't need much resolution.
Coming from the A100/A700 era, I'm very comfortable, spoilt even, by the low light performance of these cameras. We have come so far, and all my basic needs are fulfilled quite well, so now I'm looking for ease of handling and other creature comforts that let me take my mind off minding the camera, and instead mind what is going on with my subjects and deal with what's on the set.
In my video work, My A7S III is used as my A Cam because it has excellent performance, high frame rate options, and also easy controls for focus racking and other creature comforts like AF transition speeds, while the A7R IV is usually mounted on my gimbal as a B Cam, so I don't need to rebalance the camera, and it works as a very decent B cam.
Hey, and yes, I've always used cRAW (because I'm happy to halve my file size in exchange for more careful lighting and control of dynamics in my workflow). Even so, the A7R IV generates a 61MB cRAW file per image. For each 3-day print and catalog campaign I wind up with at least 2,500 to 3,000 photos, so I'd be looking at roughly 180 GB per campaign, and I'm averaging 500 to 800 GB a month.
At least for general corporate portrait work, I'm now just glad to have my A7S III to turn to. Sometimes I actually feel some anxiety, because having 12MP is a little scary and can be harder to justify to some (cheap but picky) clients, since many phones will easily create 10-12MP binned files from their 40-48MP cameras. So perhaps the 24MP of an A7 III might be ideal for me (waiting for the A7 IV). While I can do AI upscaling with my A7S III files, I have to ensure that every photo I use for AI upscaling is as sharp as possible, or it won't be effective.
Regarding the 2 photographs that were printed in the video:
1. I'd like to know if they were shot with the same aperture and speed (and if not, which settings were used).
2. Do note that for *stills* this is a fair comparison, but for moving objects it might be a different story. And I don't mean video; I mean still photos of moving objects.
3. In the end I do mostly agree with the video, in the sense that for most purposes the noise of the smaller pixels is compensated by the noise of enlarging larger pixels.
Probably the biggest change is the difference in the frequency of color sampling. In low light, I experience colors a bit differently when using my 12MP sensors vs the 42MP A7RIII and A7RII. I don't know how to quantify the effect, and the lower-res sensor of course has very meager color resolution, but it ends up looking different at very high ISOs above 50K.
Enlarging pixels doesn't add noise, per se, it just makes the existing noise easier to see. However, combining pixels (or printing them so small that the eye/brain combines them) reduces noise.
The point is: both sensors experienced and captured the same amount of photon shot noise (randomness of light); they just did so in different-sized buckets. Print or view them at the same size and you see that same overall noise, regardless of the size of the capture buckets.
The differences start to appear at very high ISOs where the sensor with fewer pixels (and fewer pixel read events) has fractionally less (electronic) read noise. The large amounts of amplification/multiplication start to exaggerate/reveal these differences.
Isn't there a limit to how far things can shrink, though? Some things can't be proportionally miniaturized, like the minimum gaps between cells, the microlenses, or the different ways the CFA is made. The sampling is also different. Is there any benefit to Quad Bayer?
I've not tested Quad Bayer (or whatever Samsung's 3x3 version is called), but it's noticeable that those tend to be used in very small sensors with very high resolutions (ie: absolutely tiny pixels, a fraction of the size of anything we tend to encounter in ILCs).
Quad Bayer essentially works on a similar basis to what we've shown: that there's not a big difference in noise between a low MP sensor and a higher pixel-count one, if you're viewing at same size (by combining neighboring pixels, in the case of QB). Of course the question is: how much (particularly chroma) resolution can you sensibly pull out of your 48MP sensor if you've mounted a 12MP CFA in front of it?
Richard, thanks for the comment. I guess there’s a limit or trade off, likely the same reason that we see largely in B&W in very low light. At some point, while the SNR after downsampling may stay, the color reconstruction becomes too low confidence. This is easy to see in images of a good 12MP camera vs 61MP. At iso 100,000 the 12MP has much better retention of color.
In these comments I see a few very vocal ignoramuses who have declared, with zero evidence, that this very reasonable video is wrong and they are right. They do not know they are wrong and will not accept that they may be wrong. A case study of the DK effect.
Fair point but who is right and who is wrong? I’m not convinced that they’re doing much of a test. They’re just showing that the more megapixels you have the larger you can print without seeing noise. How is that news?
"They’re just showing that the more megapixels you have the larger you can print without seeing noise. How is that news?" It's apparently news to the few, very vocal, people who continue to dispute the results despite seeing it plainly in the video.
This video is mostly right, but misses a few important issues.
Larger photosites ARE better in low light, mainly because photosites are not 100% used for capturing light. While new technologies like BSI significantly reduce the per-photosite overhead, there is still some overhead at each photosite. Across the entire sensor as a whole, there is marginally more area lost to this overhead in a high-resolution sensor than there is in a low-resolution sensor.
This difference is only likely to show up in VERY low light, far lower than the ISO 6400 that was used in the example. Think images so dark we're literally counting photons across the entire sensor, where the image is fuzzy even after converting to B&W. Capturing 2% more photons here can make a real difference. Resolution is irrelevant since there aren't 50M photons in the entire image. In these EXTREME low-light scenarios the low-resolution sensors should shine. These scenarios are also largely irrelevant to photography.
In the dramatically brighter scenarios like the relatively pedestrian ISO 6400 example in the video, the high-resolution sensor is going to do better for a few reasons, but the most important one is color noise. Because the higher resolution sensor has significantly more color resolution, even when printed at lower output resolution, the higher-resolution image will show chrominance noise across fewer output pixels (and will more easily convert it to less objectionable luminance noise with even minimal noise reduction.)
So, smaller pixels still do have an advantage in extreme low light, but that difference is largely irrelevant to photography, as the video demonstrates.
@yakovlev: You forget that when you have larger photosites on a given surface area, you also have fewer of them.
When you mention extreme low-light scenarios, I assume you are talking about using data close to the noise floor. For real world use - also astrophotography, except for very specialized work when you use specialized cooled cameras anyway - we never use such bad data. How often do people use ISO values above 200 000 for their real world photography do you think?
A 5-10% difference in noise level is hardly perceivable.
@Magnar W: While others may have made the mistake of not considering there being fewer pixels, it should be clear from my follow-up post that I wasn't making that particular mistake. Otherwise, I think you're saying exactly what I was saying.
Totally making numbers up, if 75% of the surface area of the sensor is light-gathering area in a sensor with smaller pixels, 78% of the surface area might be light-gathering area in a sensor with larger pixels. In a cell phone camera without BSI (a pretty extreme case) I've heard unbelievably low percentages of surface area having light-gathering ability. Even the A7R4 has pixels about 7x the size of iPhone pixels, which is bigger than the difference between an A7R4 and a 12MP camera. Plus, the effect is larger the smaller the pixels are.
As you note (and I wrote [twice]) this difference doesn't really matter for photographic use.
Does the fact that the two bodies used both have backside-illuminated sensors affect things? From my understanding, pixels on FSI sensors theoretically collect less of the incoming light because the wiring gets in the way. Since the wiring is per pixel, doesn't that mean an ultra-high-MP FSI sensor has more wiring (and so collects less light) than a low-MP FSI sensor, or is that difference negligible?
That benefit decreases as pixel size increases (the wiring makes up a smaller proportion of each pixel and microlenses are effective at directing the light to the photosensitive part of the pixel).
My understanding is that the benefit of BSI in large sensors is that the photosensitive region is at the front of the sensor, nearer the surface, which means it's better at receiving light approaching at oblique angles (as the pixels in the corner of a sensor might have to).
I'm talking about the print resolution not the size. I know they are the same size but I think it was mentioned that they were printed at different dpi. I'd like to know why. I know nothing about professional printing. Are you saying they had to use different dpi to get the same size because of the different resolution of the sensor?
BobT, as they said, they did no interpolation (up-scaling) before printing. The resulting dpi that they're talking about is simply the native resolution of the cameras divided by the size of the print.
However, of course the printer itself only prints at one dpi, probably 300. So the printer has to do the interpolation. The advantage of doing the interpolation on a computer before printing is that you can then judiciously apply sharpening to make the interpolation look better and more crisp. However you can't *create* more detail by doing so.
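For concreteness, the "native dpi" being discussed is just pixel count divided by print size. A sketch, where the 24-inch long edge is an assumed print size for illustration (the video's actual print dimensions may differ):

```python
# Native print resolution (pixels per inch) with no interpolation:
# pixels along an edge divided by the print edge length in inches.
# The 24-inch print size is an assumption for illustration.

def native_ppi(pixels_long_edge: int, print_long_edge_in: float) -> float:
    return pixels_long_edge / print_long_edge_in

print(round(native_ppi(9504, 24.0)))  # A7R IV (9504 px long edge): 396 ppi
print(round(native_ppi(4240, 24.0)))  # A7S III (4240 px long edge): 177 ppi
```

The printer driver then resamples both to its fixed output resolution (often 300 or 360 dpi); doing that resampling yourself beforehand just gives you control over the sharpening step.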
Thanks to Chris for clarifying this subject. Back to the fundamentals: the signal-to-noise ratio depends on the amount of light collected; in photographic terms, sensor size and lens diameter. More or fewer pixels are not part of the equation, as the experiments performed by Chris confirm.
@Richard Butler... I posted this elsewhere, but am posting it here in hope of getting your attention. The sensors tested here are both BSI, and one of the supposed benefits of BSI is that more light is gathered because all of the wiring for the pixel is behind the photodiode, whereas on an FSI sensor some of the light is blocked from reaching the photodiode. I would think that the amount of wiring in an FSI sensor scales up with pixel count, so an ultra-high-MP FSI sensor collects less light than a low-MP FSI sensor. Does this matter?
> Read noise differences start to add up at very high ISOs > and in very long exposure photography, for instance.
The Sony IMX455 sensor (Sony a7R IV) and its MF and APS-C/1" variants (e.g. the IMX571 in 26MP Fuji cameras) have incredibly low read noise, about 1e- per pixel. So it gets swamped by shot and thermal (a.k.a. dark current) noise, especially during long exposures.
Interestingly, the advent of IMX455-based astro cameras (QHY600, ASI6200) caused a revolution in amateur astrophotography. Until then, the conventional wisdom had always been that bigger pixels (like the 9um pixels of sensors such as the Kodak KAF-16803!) and longer subexposures were always better. Well, not anymore: even though the pixels of the IMX455 are almost 6 times smaller (by area) than those of the KAF-16803, the read noise of the former is so low that people can now use many times *shorter* subexposures.
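The subexposure trade-off described here can be sketched numerically. The signal and read-noise figures below are illustrative assumptions, not measurements: for a fixed total integration time, shot noise depends only on the total signal, while each subexposure contributes one dose of read noise, added in quadrature:

```python
import math

# Total per-pixel noise when a fixed total signal is split into n_subs
# subexposures. Shot noise depends only on the total signal; read noise
# is incurred once per sub and adds in quadrature.

def stacked_noise_e(total_signal_e: float, n_subs: int,
                    read_noise_e: float) -> float:
    shot = math.sqrt(total_signal_e)          # Poisson noise of total signal
    read = math.sqrt(n_subs) * read_noise_e   # one read per sub, quadrature
    return math.sqrt(shot ** 2 + read ** 2)

total_signal = 10_000.0  # electrons over the whole session (assumed)

# Old CCD class (~9 e- read noise): 100 short subs get expensive.
print(stacked_noise_e(total_signal, 100, 9.0))
# IMX455 class (~1 e- read noise): 100 short subs cost almost nothing.
print(stacked_noise_e(total_signal, 100, 1.0))
print(stacked_noise_e(total_signal, 10, 1.0))
```

With ~1 e- read noise, going from 10 subs to 100 barely changes the total noise, which is exactly why short subexposures became viable.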
@Richard: Medium format does look better, because it has a bigger sensor (or bigger film format size), bigger pixels, and a different kind of rendition in terms of DoF, bokeh, tonality, transitions, textures, etc., in contrast to 36x24mm "full frame", I'd like to add.
> Medium Format does look better, because > it does have <...> bigger Pixels
Not necessarily at all.
For instance, the Sony IMX461 sensor found in the 100MP Fujifilm GFX 100S is just an oversized version of the FF IMX455 (in the Sony a7R IV): *same* pixels and technology.
So, "tonality, transitions, textures" are purely... well, in the eye of the beholder.
vadims, you picked exactly *one* possibility and made a whole thing out of it. When I speak of MF, I usually mean film; digital comes second. And MF really does have all the things written above.
There is a specific medium format look that you won't get with 36x24mm; it looks different.
Marc, we're veering more and more off course of the OP, but anyway... :-)
> There is a specific medium format look, one won't have with 36x24mm
... until perspective/framing and DOF are matched.
Michael Reichmann, who was an avid MF shooter, once shot an image with an MF camera (can't remember which, he had plenty) stopped down, and a point-and-shoot (!) camera; printed both, and hung them in his gallery. He then did a blind test with visiting fellow photographers, asking which print was made with which camera (telling them in advance the options). The guess rate was approximately 50/50. There was an article at LuLa about that.
BTW, right below this post kodacolor200 wrote: "The EM1 destroys my current gen BSI FF cameras".
@vadims, yes I did state that, and for handheld photography of static subjects it does destroy FF low-light BSI sensors, because Olympus IBIS does what it says on the packet. Read the comment. On paper, no, an MFT sensor does not match FF in noise performance, but in certain low-light applications it destroys it. An EM1 III with an f1.2 lens that is not only sharp across the frame but gives an undeniable DOF advantage at f2.4 equivalent (a good photographer will understand the benefit of this; f1.2 on FF gives a DOF I would describe as unusable in many applications, particularly landscape). Now you have a usable f1.2 and can shoot 4-5 second exposures routinely. Yes, for static subjects it destroys FF.
If shooting static images in low light without a tripod, the EM1 destroys my current-gen BSI FF cameras, and I have yet to find a camera that comes close to the EM1 III for this task. This is purely regarding stabilisation. Being able to comfortably hand-hold 5-second exposures (10 with a little patience) is not something that will be matched by FF anytime soon.
It's these types of variables, let alone the differences in sensors, that make me feel arguments around sensor size are pointless and redundant. And then there is the BIG question of output. I'm a full-time practicing photographer: for the bulk of the work you submit today, even when it is geared for print and there are print outcomes, the client's only concern is how the images appear on their screen. How it was shot? NOBODY CARES.
You might want to slow down your shutter and actually read the comment. Yes, a larger sensor is ideal for shooting low-light action, but it's not ideal for stabilisation. Point being, a larger sensor is not inherently better at everything, which is why choice is good and bias is silly.
Yes, that is the reason Nikon's professional cameras, the D5/D6, only have 21 MP whereas the D850 has 46 MP, and Sony's vlogging camera, the a7S III, comes with a 12 MP full-frame sensor.
Oh dear just visited Sony Alpha Rumours and the number of clueless trolls is ridiculous. The insults over the video they clearly haven't watched are insane.
Why would a Sony fan be upset over either conclusion? If you believe that more MP are better, then you have the Sony A7RIV in this test. If you believe that a lower MP count is better, then you have the Sony a7SIII. Either way a Sony wins, because it's Sony vs Sony. It's not like it's one brand against another.
Thanks a lot Jordan and Chris for this coverage of a great topic in Photography, really liked it a lot. Please give us more of such background knowledge. Great work
Recently Panasonic added "a new anti-reflective (AR) coating to reduce flare and ghosting, and to minimize light loss from reflection," which DxO said improved their camera's sensor performance. It is now virtually equal to the 24MP APS-C sensors in use today. It is interesting how some simple technology can have an effect and even the playing field.
This is marketing BS. Sorry, but while AR coatings are a serious science and absolutely can be designed to minimize reflection, the idea that an AR coating can give a new sensor a one-stop advantage over an existing modern sensor just doesn't pass the smell test.
In order for that to even be possible, you would have to be able to see your reflection in the image sensor of your existing camera as if you were looking at yourself in a mirror.
Bill is a good guy and I prefer his site to DxO. There must be some other reason, then, why the GH5 Mark II is virtually equal to recent APS-C cameras and a third of a stop better than the GH5. Both DxO and Bill found the same results: APS-C has an advantage at ISO 100, and the GH5II has one at high (near unusable) ISOs.
Panasonic had better start dripping more info on the upcoming GH6. There is literally an arms race in the camera world, and Panasonic needs to speed things up in order to keep MFT going. I am thinking of entering that world, but I will only wait for oh so long. All the major players are starting to expand their lineups.
Mike, LOL...I can just imagine the idea of "the image sensor of your existing camera and it would look like you were looking at yourself in a mirror."
Note: I'm not laughing at you but with you...that's a pretty funny idea.
Imagine the marketing for such a camera: It's the ultimate selfie camera! No need to even take the image, or use a lens, just take the lens cap off and admire yourself in the image sensor. No EVF lag, no buffering issues, no storage card needed. Just stare in delight.
It wasn’t meant to be funny. What’s funny is the ridiculous idea that you can get a further 1 stop advantage (I.e, double the light) with a new AR coating.
Again the implication is that existing sensor AR coatings are so inefficient that they reflect half of the light back through the lens.
Can you point to where someone said "a further 1 stop advantage..." 1 stop? Did you make stuff up?? I see 1/3rd of a stop mentioned, but not what you said. If you add in the GH5 the link to P2P backs up the improvement between the GH5 and GH5ii and how it is now even with the A6600. No need to make stuff up. Talk about silly.
So again you are wrong; it's 0.72 stops and 0.55 stops.
The GH5 was already out performing the A6600 because it was well within .72 stops. And the GH5 II, if you can believe P2P, erased the equivalence gap. Now I wonder if the coating did help some.
I also don't know where you get that the GH5 is well within 0.72 stops (again your math is wrong, but that's another story).
To me these two look quite similar in noise and, shocker, a tad more detail is available on the (slightly) higher-resolution sensor. (Consistent with the video, although I'm not saying for sure that the small resolution increase is the cause.) Note the GH5 is at ISO 6400 and the a6600 is at ISO 12,800.
"You neglected the difference in aspect ratio in all of your calculations"
You actually think aspect ratio affects the area of a rectangular sensor and how much light it gathers! I wish I saw the look on your math teacher's face when you told him the 3x10 rectangle had more area than the 6x5 one. Or when you told everyone you would need more paint for a 5x12 wall than a 6x10 wall. I hope everyone in the Photography Science and Tech forum sees this; they'll enjoy a good laugh too. Mike thinks the most popular aspect ratio for camera sensors (M43, MF, smartphone, etc.) somehow makes them gather less light!
As far as the photons to photons graph, it's very possible that the Panasonic is shifted, where the iso values don't mean the same thing as the Sony. It's clear the Sony has better absolute performance from that plot.
And no, apsc does not have a one stop advantage over m43. You need a sqrt(2) crop factor for that. Full frame does have significantly more than a one stop advantage over apsc though...
It's my fault for mentioning a brand to compare the GH5II against in my link. But since we are comparing, DxO shows similar results to Photons to Photos. The GH5II scores slightly lower for DR at ISO 100 and slightly higher at ISO 200 in their dynamic range graph, and about the same at every ISO after that. In the real world they will be equal. Panasonic said it is due to the coating, but I am open to reading about other reasons why they are now equal to some APS-C sensors.
s1oth1ovechunk, you are right, APS-C is closer to M43 than to FF. Because we traditionally use 1/3 stops on cameras, it is often said there is a 2/3-stop difference between M43 and APS-C, and a 1 1/3-stop difference between APS-C and FF. It's an approximation, like 22/7 is for pi.
Y/N are you going to tell the whole world that they are wrong when you talk about crop factors? Crop factor for M43 is 2.0x no? Why do you think that is?
Anyway not gonna waste another breath on you. Have fun in your imaginary land.
It's good to see everyone didn't fall for Mike's misinformation. It's math, and there is no debate here.
And Mike, the discussion is about M43 vs APS-C and the difference in stops. "Crop factor for M43 is 2.0x no?" No, not between APS-C and M43, which is what we need to determine the difference in stops between APS-C and M43. Remember, the topic is how the GH5II overcomes the 0.72-stop difference between M43 and APS-C.
M43 crop factor is 2.0.
APS-C crop factor (non-Canon) is 1.5.
APS-C crop factor for Canon is 1.6.
The reference is, and has always been, FF.
These are facts.
If you want to compare crop factor for APS-C (non Canon) vs M43 that’s fine. It’s 1 1/3x. And if you want to convert that to a stop advantage it’s 0.83 stops. Sure. A little short of 1 stop.
My statement that you can gain a 1 stop advantage (or a 0.83 stop advantage or a 0.7 stop advantage. Or a 0.5 stop advantage) due to some new revolutionary AR coating is just as ridiculous now as it was when I made my initial comment.
Mike, you are all over the place (still wrong, but getting closer). I even did the math for you above, lol. Diagonal crop factor: 28.3/21.6 = 1.31 (which you finally got real close to).
As for the statement, it is pretty amazing that the GH5II sensor virtually equals the A6600 in DR (at some ISOs the GH5II is better). And that is according to both DxO and Photons to Photos. It will be interesting to see if the GH6 passes the A6600. (It will for video, IBIS, and everything else, of course.)
What’s interesting is you said previously you preferred a 2:1 aspect ratio but then you insist on using this (useless in your case) 4:3 diagonal to try to give m43 an advantage.
Again, I am sorry for mentioning that other brand when talking about the GH5 II and its sensor improvements. I should have said Nikon or Canon, and then it would have stayed civil.
I don't see any discussion of the elephant in the room for this topic: Nikon's ISO 64 sensors (D810 at 36 MP, D850 and Z7 pair at 45 MP) versus the same-generation ISO 100, 24 MP sensors (D600/D750, and the D780 and Z6 pair).
Nikon's ISO 64 sensors are great, but to get to ISO 6400 you are now multiplying the signal by 100x rather than the 64x needed on the ISO 100 sensors. Dual gain plays a role, yes, but that applies within the same generation too.
As has _probably_ already been said (I am not going to read 585 comments before posting to check)... unless you have the same camera, with the same processor, with two different sensors, this comparison is useless.
As is noted in the article, processors get better, buffers get faster, sensors themselves improve dramatically. So comparing a new camera with a massive number of pixels to an older camera with fewer, comparing one brand camera to another, one sensor design to another, comparing this processing engine to that one, even comparing this lens to that one, EACH of those changing the image ...
and then somehow looking at the results and saying "this is a comparison SOLELY of pixel size" is inane.
Today's computational photography will now, more than likely, always trump anything older.
=== tl;dr - Unless you can absolutely match everything between A and B _except_ pixel size, before taking the picture, then this has _nothing_ to do with pixel size.
And remember, unless you have an RGB sensor at each pixel, like a Foveon sensor, it's ALL computational photography to reassemble separate R, G, and B pixels into the image we see. So there's no way to _solely_ assess pixel size without the image being debayered/processed first. And that processing this year is better than the processing last year. This firmware versus that firmware. This camera versus that camera. Etc.
And the Foveon sensor also requires computational processing to compensate for loss of light through the layers.
FYI, to those it might concern, this is not a brand war. It's a conversation about large pixels versus smaller pixels on an FF camera. At least in the video, the test was basically about printing from FF cameras more than anything else.
Roger Cicala published an article on this subject on February 5, 2012. DPR published an article in 2015. Each of those was much more in-depth than this current one. I would love to see more in-depth, updated ones.
"So for those of you who don't want to tackle 4500 words, here's what I'm going to tell you about sensor and pixel size:
Noise and high ISO performance: Smaller pixels are worse. Sensor size doesn't matter.
Dynamic range: Very small pixels (point-and-shoot size) suffer at higher ISO; sensor size doesn't matter.
Depth of field: Is larger for smaller sensors for an image framed the same way as on a larger sensor. Pixel size doesn't matter.
Diffraction effects: Occur at wider apertures for both smaller sensors and for smaller pixels.
Smaller sensors do offer some advantages, though, and for many types of photography their downside isn't very important. If you have other things to do, are in a rush, and trust me to be reasonably accurate, then there's no need to read further."
The summary from that article doesn’t do justice to the 4500 words he wrote. If you ask me his summary statement is misleading.
The key is actually in the headline. He basically writes 4500 words to explain why sensor size matters. The article isn’t called “why pixel size matters”
"DPReview TV: Why lower resolution sensors are not better in low light" is misleading. I stated I wanted an updated view from folks like Roger Cicala, seeing how we are now in 2021 and all the advances since then. Roger covers pixels and sensor size; folks will make of it what they want, as they always do.
Of course sensor size does matter for noise and low-light performance. In fact this is the thing that matters most. I just don't understand how somebody could write such nonsense as that it does not matter.
It’s a summary bullet taken out of context of the whole 4500 word article. Not taken out of context by BTN. Taken out of context by Roger himself. Roger’s summary doesn’t summarize his article effectively if you ask me.
BackToNature1 - It's not a question of advances in tech, it's more about changes in perspective. Roger's article seems to take a pixel-level approach (and at first glance, possibly focuses a bit too much on the impact of read noise, rather than also considering shot noise). Chris and Jordan are talking about the effect on your overall image, not the individual pixels (because, when you present images at the same size, the pixel level noise stops being the thing that matters).
My repost of Roger Cicala's article/blog wasn't to highlight any given parameter (shot noise or read noise), just to give more of a backdrop to the discussion, which by my definition is hardly over at this point. Concerning advances in tech since then, I guess we will have to agree to disagree. Like Chris stated, this is really deep down the rabbit hole. But it's always great to have these types of discussions. Love that Chris and Jordan are talking about things that have an effect on one's overall image. Great job by them, as always.
Let me try to re-state the point I was trying to make: the statement 'Smaller pixels are worse. Sensor size doesn’t matter.' is only true if you solely look at pixel level.
We're not saying something different now because technology has improved* in the meantime, we're saying it because we're taking a whole-image perspective, not looking at individual pixels.
* It has, but not in a way that significantly changes the answer to this question, during that time.
I don't really understand the discussion. Just take a 45 or 61 MP file, downrez it to 12 MP in Photoshop and compare it at 100% to a native 12 MP file (same sensor size). The downrezed file will be much sharper, with much finer, less visible grain and colour noise. Or do the opposite: upsize the 12 MP file to 45 or 61 MP and compare it at 100% to a native 45 or 61 MP file, like one might do for a mural-sized print. You will be surprised by the comparatively coarser grain and softness of the upsized file.
Trying is always better than blindly believing.
I for my part only buy high MP cameras, because I need to print big from images shot in low light. The difference to lower MP cameras/larger pixel sensors is startling and much greater than the differences between sensor generations, for example.
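The downrez half of that experiment can be mimicked numerically. This is a toy sketch, not real raw data: it uses simulated Gaussian noise on a flat grey patch, with all numbers chosen for illustration. Averaging 2x2 blocks of uncorrelated pixels should roughly halve the noise standard deviation:

```python
import random
import statistics

random.seed(42)

# Simulate a flat grey patch from a "high-res" sensor:
# signal level 100, per-pixel noise sigma 10 (arbitrary units).
size = 200
signal, sigma = 100.0, 10.0
hi_res = [[random.gauss(signal, sigma) for _ in range(size)] for _ in range(size)]

# Crude "downrez": average each 2x2 block into one pixel.
lo_res = [
    [(hi_res[2 * r][2 * c] + hi_res[2 * r][2 * c + 1]
      + hi_res[2 * r + 1][2 * c] + hi_res[2 * r + 1][2 * c + 1]) / 4.0
     for c in range(size // 2)]
    for r in range(size // 2)
]

flat_hi = [v for row in hi_res for v in row]
flat_lo = [v for row in lo_res for v in row]

# Averaging 4 uncorrelated pixels halves the noise standard deviation.
print(round(statistics.pstdev(flat_hi), 1))  # ~10.0
print(round(statistics.pstdev(flat_lo), 1))  # ~5.0
```

That factor-of-two noise reduction (for a 4:1 pixel ratio) is the reason the downrezed high-MP file looks cleaner at 100% than the native low-MP file.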
Yup. Years ago, it used to be that a higher resolution sensor (given equivalent total sensor area) would have an area efficiency penalty. This was inherent to FSI sensors and older manufacturing processes.
This hasn't held true since around 2015 or so, when a combination of improved manufacturing processes, BSI, and gapless microlenses greatly reduced the pixel size at which you started to see penalties for having pixels too small.
(For example, in the case of the A6300 sensor, a finer copper-interconnect process and improved microlenses alone was enough to give it performance per unit area that is still within 1/2 stop of the best Bayer-color sensors on the market)
Also, sensor generations doesn't really matter for this metric any more. We've been pretty much flat since 2015 because we're so close to 100% area efficiency it's hard to get noticeably closer. Readout rates are the next frontier.
Not necessarily - there are other factors. Very good IBIS is useful in certain low-light situations where motion [subject] blur is not an issue, or even perhaps desirable. You can also more or less negate the two stop advantage of 35mm vs MfT by having a two-stop faster lens. A faster lens will rob you of some or all of the advantage of a [potentially] lighter camera, but it may fit some circumstances. It also of course depends on the usage you need. Do you need very high ISOs or very high resolution? If either of those things are the case bigger MAY equal better.
The caveat being once you go above 35mm are you going to get equally fast lenses at the ranges you want? None of these choices are THAT straightforward.
I use both 35mm and MfT formats - and with increasing frequency - my phone camera!
Total light capture is dependent on what subject you need “total light” for. I shoot both ff and mft. I shoot urban nightscapes for a living and current Olympus IBIS allowing me routine 5 second exposures, (10 secs in a pinch) combined with f1.2 light gathering and a 2.4 DOF means I simply do not need a tripod for this application anymore. To say throwing away the tripod opens doors in the creative application of photography is a huge understatement. I can make larger prints than my high res Sony’s simply because I’m always at base iso on MFT.
To capture the same shots on stabilised FF handheld. I would need to be shooting 12800 + to get the same DOF and even then still not getting the same “total light” as my current MFT set up.
I seriously wonder how although the results would have been the same, the comments would be completely different if this compared the R5 and the R6. Of course people would be hand waving because the resolution delta is lower, but I guess some people here would be completely changing their tune.
You don't understand the video you linked. Listen to the guy who made it, he's not comparing high quality prints but instead very low quality ones (this is very likely a very cheap poster print where downsampling 102mpix to 10, which is what's done, will destroy the image completely). He justifies this because that's all he needs, which is fine, but everything about the video is click bait, at best.
it's actually a pretty good video. The point being made is not that less MP is better, but that higher MP may not matter that much for most uses.
He does address one reality that many overlook, and that is the fact that most photos will be viewed on the electronic screen. Even his test on Instagram is measuring a real world situation for many.
Now with the prints, I don't see in the video where he says these are not high quality prints, like tozz claims. In fact he gives a promotional shoutout to the printer company, and I don't think he would do that if he said they are making low quality prints.
But again, his point is that higher MP may not matter as much as many may think it does, not that it doesn't matter at all.
BTW, his test is an interesting one. I bet most people, if they saw images taken with the higher and lower res cameras, couldn't tell the difference.
It's at the end of the video (of course), where he sums things up. Anyone who has printed from high resolution cameras knows it's not an easy thing to get right; even with my Z7 I had major artifacts using normal poster printing when disabling the printer lab's optimizations (the way they avoid that is... blurring). In the end I went with the higher quality printers and all the false details disappeared while the real ones were kept, as captured.
The video doesn’t even mention PPI nor the service used (and what configuration), again, everyone who does prints from high resolution cameras should be pretty aware of the importance of this.
Again, it’s fine if he doesn’t need more than 12mpix but he’s spreading a lot of confusion (just how people who claimed lower res cameras had lower noise did/keep doing)
A properly printed photo from the 102mpix would be easily distinguishable from 12mpix at A3 or larger at the distance they’re looking. But again, printing takes knowledge, it’s more than uploading a jpeg on a website.
1) Does MP make any difference and if so, in what regard?
2) Do those differences make any meaningful differences to my individual use cases?
I think this video is attempting to answer the second question more than the first. And I do think the video does a good job of showing that for many people, perhaps most, the MP's don't make that much of a practical difference.
Ironically, as technology advances, it not only enables higher MP cameras with even better qualities, but it also makes the lower MP cameras even better as well. Also post processing and printer technology also advance, which tends to benefit the lower MP images more.
Hence I've seen at least two videos posted in this discussion where pro's state that the 12MP Sony a7sIII is more than good enough for their stills photography use cases.
To know if something is good enough basic understanding of what you’re doing is required first. The author of the video doesn’t understand printing, that much is very clear, hence making a video spreading seriously flawed information should be criticized. It’s perfectly fine to not know something, or to not understand it, but don’t make videos with the impression that you do (while not even comparing apples to apples, most pros would know jpegs are seriously baked, some more than others).
Ironically the video that is the parent for this comment thread shows the differences clearly in much smaller prints…
FYI, I posted that video link shortly after it went up on YT, first on DPR, and more than twice here, and people bashed me for it.
I am perfectly fine with 12 MP, your mileage may vary. I print usually A3 Size, and it's good enough, for the Wall. 24+ MP for my needs is overkill.
At native resolution, a lower-MP sensor of a given size (here 36x24mm) does give better high-ISO results, but in the video the high-MP sensor was 1) printed at a different resolution (dpi/ppi) for a given print size, and 2) high-res sensors can always be downsampled to some target resolution and thereby look better in terms of DR, sharpness and such, though not in a 1:1 comparison with a lower-MP sensor, where they would lose because of the higher per-pixel noise, by the laws of physics.
A higher-MP sensor gets fewer photons per pixel for a given exposure time, because each photosite's surface is smaller. For reference, a D700 has an 8.4μm pixel pitch versus the A7R IV's 3.73μm.
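Those pixel-pitch figures imply a per-pixel light-collection ratio you can compute directly, since per-pixel photon count scales with photosite area (the total light over the whole 36x24mm frame is the same either way). A quick sketch using the pitches quoted above:

```python
# Pixel pitches quoted above, in micrometres (approximate published figures).
d700_pitch = 8.4
a7r4_pitch = 3.73

# Photosite area, and hence photons collected per pixel at a fixed
# exposure, scales with the square of the pitch.
per_pixel_light_ratio = (d700_pitch / a7r4_pitch) ** 2
print(round(per_pixel_light_ratio, 1))  # -> 5.1
```

So each D700 pixel sees roughly 5x the photons of an A7R IV pixel per exposure, which is exactly the per-pixel vs whole-image distinction the video turns on.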
I came here both for you and to enjoy how a vocal minority struggles with physics and listening comprehension, and was not disappointed on either account!
Certainly [and pretty obviously] the most important thing about overall noise is the area of capture rather than the pixel density. But isn't there one caveat to that? Actually perhaps two - one of which you do touch on. The one you mention is processing speed and power and that potentially added some noise to the final image. The other is that while it's a small difference, you are capturing more light with larger photosites.
It comes back to this idea of the analogy of buckets catching rainfall. If you have a ten foot square area with nine large buckets [the buckets of course being square, not round], they're going to collect more water than the same area occupied by 81 buckets [9x9] because of the rims of the buckets.
In the case of our sensors the difference is not so significant because algorithms are there to average out each sample; and what we're interested in is the picture, or local detail, at each site in order to create an image, not just report the amount of light captured in total. But that 'sample' idea is the important thing. When it's raining cats and dogs, every bucket collects a decent amount of rain and we get a clear picture. Noise happens when the rain [amount of photons] slows to a trickle and our local samples become unreliable.
Averaging out [the algorithms] solves this to a certain extent, but as the flow diminishes, the sample becomes increasingly unreliable until there comes a point where the area of sample is significant against the flow of photons.
This seems to me theoretically to indicate the absolute sensitivity of a large pixel count camera will be lower than one with less photosites.
Only significant at the very extreme end of low light capture, but nonetheless...
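The bucket analogy can be simulated. This is a toy sketch in pure Python, with all counts invented for illustration: drop a fixed number of "photons" uniformly into buckets, repeat many times, and measure the relative noise of a single bucket's count. In heavy "rain" per-bucket samples are reliable; at very low counts the small buckets become wildly unreliable while the bigger buckets still return a usable sample.

```python
import random
import statistics

random.seed(1)

def rain_noise(n_photons, n_buckets, trials=200):
    """Drop n_photons uniformly at random into n_buckets, many times,
    and return the average relative noise (stdev / mean) of the
    per-bucket counts."""
    rel = []
    for _ in range(trials):
        counts = [0] * n_buckets
        for _ in range(n_photons):
            counts[random.randrange(n_buckets)] += 1
        rel.append(statistics.pstdev(counts) / statistics.mean(counts))
    return statistics.mean(rel)

# Heavy rain: 8100 photons over 81 small buckets (~100 each) -> reliable.
print(round(rain_noise(8100, 81), 2))  # ~0.10
# A trickle: 81 photons over 81 small buckets (~1 each) -> hopeless.
print(round(rain_noise(81, 81), 2))    # ~0.99
# The same trickle over 9 big buckets (~9 each) -> still usable.
print(round(rain_noise(81, 9), 2))     # ~0.31
```

This is only about the reliability of a single bucket's reading; averaging neighbouring small buckets recovers most of the big-bucket behaviour, which is the downsampling argument made elsewhere in the thread.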
"[...] nine large buckets [...], they're going to collect more water than the same area occupied by 81 buckets [9x9] because of the rims of the buckets."
Very true, but this limitation was reduced to almost negligible levels many years back, with gapless microlenses and other improvements in manufacturing. And since BSI it has been eliminated completely.
If anything more megapixels actually gives better color information to start with, since it subsamples more RGB values on the same area (not discussing detail at all).
@badi Yes I did say "the difference is not so significant".
But the link you've provided bears out what I said: at the very highest common sensitivity setting [102,400 in this case], there's more discernible detail in the A7sIII shot than there is in the A1 [a newer generation], and more still than in the A7rIV shot, which is a similar generation and has an even higher pixel count. The A7s also has two higher sensitivity settings which are eschewed by the other cameras. You could of course push them the extra two stops in software, but I imagine the already discernible differences would increase.
That was the only point I was making - at the absolute extreme limits [which granted wouldn't be useful in most everyday situations] you are getting a higher ceiling with the larger photo sites.
This is the only comment I've read so far taking into account, that noise is mainly created by discharges of photons spreading along the surface. This noise is not only observable in cameras, but basically every highly sensitive device. Of course today there are advanced technologies to minimize the effect, but that doesn't mean it's not real. Just look at the image output of cramped sensors from the last century.
I guess you have better eyes than me. I hardly notice any difference - in some areas one may be better in others areas the opposite, what looks to me more of a software random advantage.
My point was actually that the things moved from "not so significant" before BSI, to absolutely no difference - practical or even *theoretical*, nowadays.
The A7S is allowed higher ISOs probably because in video you might need simply to push the ISO higher as you don't have the possibility to shoot longer exposures. And because of marketing and all that.
Let's forget about the margin or rim issue - it's virtually negligible.
So consider this as a thought experiment. You have just four buckets, each 2.5m square, butted up precisely. Now you have just one huge 10m square bucket.
When it's raining hard you average out the rain in the four buckets and it will give you the same measurement as that in the giant one.
Now consider this. You're measuring over a period of less than a second and in that time only one spit of rain lands in your 10m area. Clearly that lands in the big bucket and you get a reliable measurement.
If you have four smaller buckets, the raindrop hits only one of them and you average it out. The issue is you don't know where your 'false' values are. The sample is so low, the averaging process becomes more unreliable. This is the extreme scenario and the reason why at ultra-high sensitivities the sample rate becomes increasingly important. Look again at your link - see what you can read - that's the easiest indicator.
I'm all for simplicity, but you can't simplify digital sensors to a bunch of buckets. I know you're trying to make a metaphor but it's failing quite badly.
When I look at the A1 in the link above it's clearly more detailed, with retained definition, over the A7S3. Same goes for A7R4 (even though I hate that sensor, the colors are completely off). You have to remember that downsizing from 50 to 12 is also very dependent on the algorithm used, the noise will eat certain areas of the image, the difference is that you have the option to decide which and how much/if at all.
@tozz You seem to be looking at a different test image to me - I see very marginally more detail in the A7sIII than the A1 @ ISO102400. Not a huge amount, but I think it's there.
It is very obviously there in the comparison between the A7sIII and the A7rIV: same generation sensor and a much greater difference in pixel pitch. If you're downsampling an image, noise should diminish along with detail; if it still doesn't match the noise exhibited by a lower pixel-count sensor, it's because the noise is greater than what the difference in resolution is cancelling out.
Incidentally - this is a very minor caveat to the video, not a fundamental criticism.
If I were given the choice between a high resolution sensor and a lower resolution one, all other things being equal, I'd go for the higher count - in fact I've done just that. I'm not a videographer or someone who needs forensic evidence in near total darkness.
@Chris2210 looking at the studio comparison in this thread, the A7R4 is a clear winner. Noise isn't magically disappearing when downsampling, it's the result of the algorithm used, bad algorithms will downscale worse (preserve more noise in this case).
There’s no indication any algorithm is used in downsampling. You’re using that assumption to infer an algorithm is applied that actually results in more noise than straightforward binning that would just lose detail and noise in the same proportions. Seems a bit of a stretch, especially as you’re offering nothing to evidence that.
I didn't say that, now did I (preserve is not the same as introduce)? Color noise is often in contrast with surrounding details, and when downsampling, those pixels will have a higher impact on surrounding ones, varying by the algorithm used. By default Photoshop will do everything it can to keep details, as an example, which makes noise more prominent. You can easily test this yourself: make a 1000x1000 image, black background, add some pixels here and there in varying colors, resize it to 100x100, and compare the results of the different options. It's very visible.
Usually this is what you want (keeping the detail), but with a high resolution image you have the option of tuning your resizing, if you so want.
The point is, for the purposes of this comparison, when resized [which is the entire purpose of the article] at ISO 102400, the A7rIV image is clearly noisiest - which gets in the way of detail in this case, as noise most often does.
So there's either a flaw in the methodology that belies the point being made, or at very high ISOs the sensitivity of certain sensors falls off a cliff.
I know you didn't like the shot noise metaphor, but what's wrong with the basic logic? How do you get a reliable average from samples which by their nature have become completely unreliable, in number?
Your methodology of "argumentation" (keep repeating the same statement as fact, even though its very much contested) is getting tiresome, keep moving the goal posts, you will have to do it with someone else.
"The Sony A7S III leverages a lot of tech from the A7 and A7R models, and its stills credentials are impressive. However, 12 MP is considered a bit on the low side for stills nowadays, and it is principally as a video camera that the Sony A7S III appeals. The lower pixel count means the sensor’s pixel dimensions on the long edge deliver essentially native 4K without cropping, and those large pixels should have benefits when working with less than ideal lighting."
Peeping at 100% is the root of the false notion that large pixels are (substantially) better in low light. You need to inspect the images at the same scale to do a proper comparison.
I understood that cosinaphile suggested to compare images from a 12MP and a 61MP camera at 100%. This will result in analysing vastly different parts of the scene and will not provide a sensible comparison. The images should be analysed at the same size to reveal their relative merits. It does not matter how big as long as they are the same
In fairness to cosinaphile, a difference does start to appear at super-high ISOs.
This article which Chris and Jordan refer to in their video includes some of the maths explaining why the increased number of read noise events in a higher pixel-count sensor can eventually see it drop behind (even though the amount of read noise per pixel is lower).
The question for each photographer is: how often do I use ISO >102,400 and is the difference big enough to give up the detail benefit of the higher pixel count at lower ISOs (the answer will vary depending on what/how you shoot).
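The read-noise argument referenced above can be sketched numerically. For a fixed sensor size, image-level shot noise depends only on the total photons captured, while read noise is contributed once per pixel and adds in quadrature, so its image-level total grows with the square root of the pixel count. A toy model follows; every number in it (photon counts, pixel counts, per-pixel read noise in electrons) is invented for illustration, not measured from any real sensor:

```python
import math

def image_noise(total_photons, n_pixels, read_noise_e):
    """Whole-image noise in electrons for a fixed sensor size.

    Shot noise depends only on the total light captured; read noise
    is added once per pixel and combines in quadrature, so its
    image-level total grows as sqrt(n_pixels)."""
    shot = math.sqrt(total_photons)
    read = read_noise_e * math.sqrt(n_pixels)
    return math.sqrt(shot**2 + read**2)

# Very dark exposure: read noise dominates, and fewer pixels win
# even with somewhat higher read noise per pixel.
dark = 10_000_000
print(image_noise(dark, 12_000_000, 2.0) < image_noise(dark, 61_000_000, 1.5))

# Normal exposure: shot noise dominates, and pixel count barely matters.
bright = 10_000_000_000
ratio = image_noise(bright, 61_000_000, 1.5) / image_noise(bright, 12_000_000, 2.0)
print(round(ratio, 2))  # ~1.0
```

This is why the lower pixel count only pulls ahead at extreme ISOs, where total read noise becomes comparable to shot noise.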
“how often do I use ISO >102,400 and is the difference big enough to give up the detail benefit”???
This is exactly the right question. And further, for any combination of cameras a buyer is considering, what is that ISO threshold?
I regularly shoot A9II’s and A1’s above 10,000 ISO and so far I prefer the A1 every time. I also never go above 25,600ISO. There is no noise penalty and there is more detail.
Just excellent stuff. How lucky we are to have these two creators who take such great care to be accurate and entertaining all at the same time. No silly hairstyles or clickbait here.
chris while you're at it can you tell your comrades to toss out all this conjecture about "equivalence", cause if you're gonna start tossing science in the trash like the fda... i'd love it if you started with equivalence
which obsesses over nuances of noise at a given iso etc and postulates a new f-stop for almost every camera except those in cellphones.. another "agenda"
so apsc does this, full frame does that... 1 inch sensor another thing and m43 this? you wanna promote the fiction that tiny pixels are as good as big ones, i suggest you start by comparing cell phones at iso 1600 at full size against ilc cameras
come on guys you can't have spent the last several years obsessing over nuances of equivalence and then promote this stuff... its just crazy
you're gonna have a lot of people jump ship on this if you throw noise comparisons at 100% out with the bathwater
I think you need to re-watch this video. And re-read everything dpreview has ever posted about equivalence. And re-read a few middle school English textbooks also.
I don't see a lot of equivalence around cell phones because no one is comparing a cell phone to an ilc directly. You dont buy a cell phone OR a camera. Usually you buy a cell phone and maybe a camera.
As far as ignoring equivalence, it is just a model, a model that works very well in practice. If the f-number thing bothers you so much, compare lenses with the same field of view and then just look at their aperture diameters. Ignore f-number and just think about the hole that actually lets the light in. You don't even have to think about the sensor size or its focal length. Keep it simple.
It may help to think of a lens as being mounted to a projector and the focusing distance is a good analog of equivalence. You can make the screen bigger, but your image will be dimmer all without changing the lens. This is equivalence, essentially.
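The "just compare the hole" suggestion is easy to make concrete: the entrance pupil diameter is focal length divided by f-number. A tiny sketch with hypothetical same-field-of-view lens pairings (the lens names and specs here are illustrative, not real products):

```python
# Entrance pupil diameter = focal length / f-number.
# M43 has a 2x crop factor, so a 25mm lens matches a 50mm lens
# on full frame for field of view.
lenses = [
    ("FF 50mm f/2.0",  50.0, 2.0),
    ("M43 25mm f/1.0", 25.0, 1.0),  # same 25mm pupil -> same total light
    ("M43 25mm f/2.0", 25.0, 2.0),  # half the pupil diameter -> 1/4 the light
]
for name, focal_mm, f_number in lenses:
    print(f"{name}: pupil {focal_mm / f_number:.1f} mm")
```

Comparing pupil diameters at matched field of view is exactly the equivalence model, just without ever mentioning crop factors or equivalent f-numbers.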
This video is compatible with equivalence if you actually understand them; in fact this article on equivalence (which contains a section on smartphones) makes a similar point as the video, in section 4.3: https://doi.org/10.1117/1.OE.57.11.110801
I have a 24 megapixel 2012 Nikon D600 (DSLR) and a 12 megapixel 2012 Nikon P7800 (digicam). Both are fine for my needs, but the P7800 is unusable at ISO 800, while the D600 does credible work for me at ISO 6400. Sensor (pixel) size, perhaps?
Your D600 has 20x the sensor area of your P7800 (about a 4.5x crop factor). Your ISO values seem about what you would predict with this knowledge, but I would expect the P7800 to be unusable at ISO 400, since that is like ISO 8000 on the larger sensor... :)
@Chris Butler, pixel size and sensor size are two completely different things. They are both attributes of the sensor, but should not be conflated. The small-sensor P7800 gives you worse image quality than your full-frame D600 not because of the pixel sizes, but because of the sensor sizes. As s1oth1ovechunk points out, the maximum light gathering ability of the P7800 is 1/20th that of the D600—and that's before taking into account what is probably a much slower fixed lens on the P7800. This would be true regardless of the pixel count of either sensor.
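The area arithmetic behind that 1/20th figure, sketched in Python (the 1/1.7"-type sensor dimensions are approximate nominal values, not from Nikon's spec sheet):

```python
# Approximate sensor dimensions in mm.
d600_area = 36.0 * 24.0    # full frame
p7800_area = 7.44 * 5.58   # 1/1.7"-type (assumed nominal dimensions)

ratio = d600_area / p7800_area
print(round(ratio, 1))  # ~20.8: roughly the "20x the area" quoted above

# At the same f-number and shutter speed, ISO 400 on the small sensor
# captures about as much total light as ISO 400 * ratio on full frame:
print(round(400 * ratio))  # ~8325, i.e. "like ISO 8000 on the larger sensor"
```

The total-light deficit is what drives the difference in usable ISO, regardless of how many pixels either sensor has.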
science has been gone in the west for the last 18 months.. and it's gone here regarding the size of sensor wells, small vs large... cause print them out... lol
comp them... print same size, add some noise reduction and voila
i call the worst bs ever
but enjoy your new normal... until your next new normal arrives and the inevitable race to the bottom. jack, what will they say does not matter next?
"nonsense ... just a new agenda science is gone in the west for the last 18 months .."
You mean... like vaccines?
"cause print them out...lol comp them ...print same size add some noise reduction and voila"
Well, you see, that's precisely what they did. Would you like a Youtube cheat code link for people who run their mouths without knowing what they're talking about? Here you go: https://youtu.be/gAYXFwBsKQ0?t=324
SO COMPARISONS showing low light superiority, as when you did the camera store comparison, are suddenly meaningless? even though you saw a distinct difference at 100%? because printed out at full size it's "not apparent"? i'm sorry, i call bs. change the rules all you like to favor the small pixel because it's not "apparent at printout size"? now you are gonna change what's important and say "hey ya can't usually see it?"
you have just made a case for apsc, micro 4/3 and full frame being the same
cause print 'em out at 12 x 16 full size or whatever, full monitor? and apparently they are more or less the same?
this is the worst video article you guys have ever done... and the most agenda driven... this is from a fan of yours and jordan's work, who used to regularly repost your mirrorless party vids... jordan doing parkour for focus tracking tests etc
i am very disappointed
the many comparisons on your reviews tell a truth that you are now dismissing ...why?
The whole point is that when viewed at the same physical dimension(aka same print size), there is no difference in noise level because both cameras have the same sensor size(FF). Viewing at 100% zoom will make the higher-MP image larger(at same ppi) and yes there could be a discernible difference in noise level, but that already defeats the purpose of comparing the image of an *identical scene*. Same conclusion cannot be drawn when comparing sensors of different size because different surface area means different amount of light being gathered on the sensor.
@Cosinaphile, I do think DPR follows a special kind of agenda here. I also said that fewer MP are better in low light; one simply should compare the A7R with the A7S.
And people bash me for that. I am very disappointed also. Pixel quality and DR are something completely different from printing at different dpi/ppi because of resolution relative to a given print size. My father spent some years in the professional printing business, with Heidelberg printing machines.
The point from the video, printing more resolution to a given print size, can in fact be seen as the same process as downsampling, say, the 60/61 MP A7R IV to a specific point to gain more DR and visibly less noise.
And if one watches the A7R III/IV vs A9 videos on YT, one does see for sure more high-ISO noise on the A7R, due to smaller pixels, hence smaller photosites.
The a7S is still better at ISO values no one shoots at...
I think your disappointment in this video is fairly unique. If you see it creating contradictions I think that's because you're making it more meaningful than it is.
All you see here is that the larger pixels do not have a noise advantage over the whole image in the vast majority of lighting conditions.
It’s not just that “you can’t see it”. You are missing the point. Even numerically, the difference that exists on a per-pixel basis mostly vanishes if you measure at the same spatial scale relative to the image.
There's no 'agenda' cosinaphile (I can't even think of why it would make sense for someone to take a particular angle on this issue).
It's not about prints, specifically, it's purely about whether you think a 'per image' or 'per pixel' perspective is more valuable. Because most modern sensors have extremely low levels of read noise, most of the noise we encounter comes from photon shot noise (randomness of light). At which point, overall image noise is a question of how much light you captured to make the image (dictated mainly by shutter speed, aperture and sensor size, or shutter speed and equivalent aperture, whichever way you prefer to think of it).
ie: When viewed at the same size, the images of two sensors with different pixel counts will have similar amounts of noise (because the higher amount of pixel-level noise averages out when you combine those pixels to downsize to the size of the lower pixel-count camera).
Because DPReview started in an era when pixel counts were insufficient for large prints, we focused on 100% crops (pixel level detail). In hindsight I think we stuck to that approach for too long, having not recognized that we'd passed a point where most cameras have (at least) sufficient pixel counts for most people's needs. At which point an image level perspective becomes more meaningful.
We still provide the images with pixel level detail, if that's more meaningful for people, but we tend to discuss images at a whole-image level.
It's exactly the effect that Chris (and many film developers) are expressing when they talk about enlargement or magnification. ie: Medium Format looks better because you haven't enlarged the image so much (to achieve whatever print size) is essentially a process-focused way of saying 'the larger capture area captured more light, so has less shot noise (better SNR) at an image level.'
Richard, consistent with this new image-centric view, do you think it would make sense to change the default view of the image comparison tool https://www.dpreview.com/reviews/image-comparison from “full” to “comp”?
spider-mario: No, we want to let people see the original image as the starting point, but all links in reviews that discuss noise should be in 'Comp' mode.
I feel it is an agenda, but I respect your right to think otherwise. It's true that under many conditions noise from different-MP cameras will appear similar,
but in absolute terms larger photosites are superior. You cannot dispute this.
Do you think the sensor wells in a 12MP cell phone sensor can deliver the same quality, yes, quality, as a 12MP full-frame Sony camera? Noise? Next time do the comparison at ISO 51200 and we will see a real noise advantage.
Next time print them both out at their respective FULL sizes, 12MP vs 61MP, at high ISO, not 1600, and see what you get.
I say that is bonkers, and I have believed for years that you have a pro-cellphone agenda. It's simply my belief.
It's why you pick at nuance in things like comparing equivalency across 1-inch, M4/3, APS-C and FF sensors, but never, ever apply that thinking or those numerical comparisons to cell phone sensors.
Ever. And that, respectfully, is why I believe there is an agenda.
“its true under many conditions noise from different mp cameras will appear similar but in absolute terms larger photo sites are superior .. you cannot dispute this”
Yes, you can, as I did above. It’s explained in the link.
“do you think the sensor wells in a 12 mp cell phone sensor can deliver the same quality ... yes quality as a 12 mp full frame sony full frame camera ? noise ?”
But it’s not because of pixel size. A full-frame camera with pixels of the same size as those of the smartphone (and consequently much more of them) would also be capable of much less noisy images.
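The sensor-area point can be put in rough numbers. This is my illustrative arithmetic, not from the thread; the phone sensor dimensions are a typical assumed value for a 1/2.55"-class module:

```python
# Illustrative arithmetic: total light captured scales with sensor area,
# not pixel size, at a given f-number and shutter speed.
full_frame_mm2 = 36.0 * 24.0  # full-frame sensor area, ~864 mm^2
phone_mm2 = 5.6 * 4.2         # assumed 1/2.55"-class phone sensor, ~23.5 mm^2

area_ratio = full_frame_mm2 / phone_mm2
print(f"light advantage of full frame: {area_ratio:.0f}x")  # ~37x

# Shot-noise SNR scales with the square root of the light gathered, so
# the image-level noise advantage is sqrt(area_ratio), regardless of how
# either sensor divides its area into pixels.
print(f"image-level SNR advantage: {area_ratio ** 0.5:.1f}x")  # ~6.1x
```

That roughly 37x light advantage is why the full-frame camera wins, whether its area is split into 12 million pixels or 61 million.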
Cell phones have intuitive computational photography that lets them punch well above their weight class.
Usually cell phone comparisons are done subjectively for this reason.
I don't think thinking about, e.g., f/16 lenses and ISO 128000 cell phone equivalents is going to be particularly instructive, as not many have a reference for comparison there, and moreover cell phones don't perform where their full-frame equivalence would predict. Equivalence doesn't really apply.
I see much more evidence for a natural-origin explanation of DPReview's treatment of cell phones than for some evil agenda...
“this idea that a 12 mp sensor module from a cell and 12 mp FF sony have any parity is simply nonsense”
Again, it’s not what is being said. Go to the video at 0:35: https://youtu.be/gAYXFwBsKQ0?t=35 “if you’ve got two cameras with the exact same size of sensor”
Hmm... a low-light comparison without actually comparing them in low-light conditions. This video is about (subjective) noise results, but where's the actual testing of light sensitivity? It also helps to point out there's a significant difference between the two sensors, with the 61MP being one of the latest generation from Sony; that alone is likely the reason for the (observed) lower noise.
Really, the right answer is yes: bigger pixels are better in low light when everything else is equal. Much better. You can't cheat the physics.
Hint: the entire sensors are both the same size (FF); it's each pixel unit cell that is a different size. However, if you made the 12MP sensor an 18MP sensor with slightly smaller pixel unit cells, what would the results be?
That's the part which hasn't been asked by those claiming others don't understand physics. One camera was designed/engineered to excel in video, the other in stills. Where's the crossover point at which 61MP gets matched, or maybe even beaten, by a lower-MP camera in stills for low light? We only addressed some of the issues with 12MP versus 61MP. What about 20MP? This is hardly a done deal, in spite of what so many are now stating. There simply hasn't been enough proper testing to answer these questions.
In summary, bigger sensor = better light gathering, and light is king. But if you want to see in the dark, you lose color information entirely and have to dive into the far-IR side of the spectrum, which is a totally different, expensive beast.