A common belief in the camera world is that lower resolution cameras – and larger pixels – perform better in low light. In this video we illustrate why you should question that bit of conventional wisdom.
Love to see this myth-busting stuff. There is just so much myth in photography. I think much of the problem is that quality judgments are too often made on the monitor rather than on the end product. Maybe I don't need that new camera after all :)
That really changed my mind. I too thought that lower MP and thus bigger pixels would result in less noise, even if compared to a downsampled image of a higher resolution sensor. Too bad you don't have the D700 in the new studio scene comparison tool, but I take your word on it (vs the 5D Mark II with the "comp" option).
Previously I would have declared myself to be in the low-MP-low-noise camp. Now that this quality aspect "low noise" doesn't seem to be real, it's just low-MP, and thus probably equates to the low-budget camp. Guess how I feel!? You Canadians are just mean people! Wolves disguised as sheep.
Very interesting comparison. The point should have been made, however, that at the same pixel count, larger photosites are indeed better; a demo of 20MP M43 vs 20MP FF would have enhanced this comparison, showing at what enlargement it actually makes a difference.
The Z7 active pixel fill factor is purposely reduced to increase sharpness relative to its pixel count, which impacts light gathering and thus high ISO performance.
Re: detail, would interpolation have made any difference in the large print comparisons (upsampling as opposed to lowered ppi)? Some of the current algorithms for upsampling are pretty amazing.
Quote from a good article: Similarly, large pixels collect more electrons than small pixels, so the point when the pixel flips from a 0 to a 1 can be much more precise. That in turn means far less data processing. With small pixels, the camera’s firmware is forced to extrapolate data.
That's certainly true but the point of the article is that with all the theoretical pontificating in the world, the proof is in the pudding, so to speak. It's not what you see on your monitor that matters but what the client sees on the wall, the editor in their mag or the blogger on their phone.
Yes, I thought the same thing. Granted, we were just looking at a video of the prints, but I could scarcely tell much of a difference.
With MP we are hitting diminishing returns. No doubt we will see cameras with ever-increasing MP counts, but the higher we go, the less difference it will make to most.
Same, I wasn't expecting the results to be this close. I've delivered work printed at 8x10 or 11x14 using the Sony a7S III, Canon R6 and Olympus E-M1 III. No difference in print quality between 12, 20 and even 80MP files.
The only place the difference is perceptible is the "zoom factor" on screens, where clients zoom in to see detail. The Apple MacBook Pro 16 screen is around 226 ppi, and 80MP pixel-shift files from the Olympus give a lot of wow effect when jumping to 100% magnification. That's one of the reasons I can justify high megapixels for client work: the "onscreen" zoom.
Printing at the most commonly requested sizes, like 8x10 or 11x14, from 12MP works perfectly fine.
i watched the entire video, and i'm not sure i agree 100%. i think this video shows the advantage of lower megapixels in terms of _today's technology_ is getting close to negligible...but i think a lot of factors have been left out to push the narrative...like, in this case: sensor size.
if the idea presented here actually holds up, then a 20MP 4/3 sensor is on par with a 20MP APS-C/FF in terms of noise, yeah? because we're still talking pixel density here, we just added the sensor size factor. *shrug*
but at the end of the day, i don't really care. i'm old enough that ISO 800 was totally unusable when I started getting serious with digital photography, and the technology today is mature enough that digital noise isn't really a problem anymore.
"if the idea presented here actually holds up, then a 20MP 4/3 sensor is on par with a 20MP APS-C/FF in terms of noise, yeah" - no it does not, and neither was that suggested in the video.
the difference between a low MP and a high MP sensor is pixel density, and so is changing the sensor size for the same MP. did DPR say this only applies to cameras _of the same sensor size_? if they did, then yeah, they are probably correct assuming today's technology is applied.
OR maybe, in this case, the large-MP sensor has _more advanced_ light-gathering technology than the low-MP one and so the pixel size becomes negligible? i don't know...and i don't think that was mentioned in the video...so i'm still struggling to take DPR's word and their presented evidence at face value.
this is a rather bold claim and there's quite a bit of nuance to it IMO. i am simply advocating for caveats where applicable, and i was hoping for some explanation from actual digital sensor SMEs on why MP count hardly affects noise in this video, in addition to having experts on prints...because to me that doesn't tell the whole story.
i'm not a financial advisor and this is not financial advice.
Chris and Jordan are explicitly looking at the effect of pixel size, not sensor size.
The experiment would work just as well comparing a 16MP Four Thirds sensor to a 20MP one, it's just that there happen to be FF cameras of a similar vintage with widely divergent pixel counts, which makes it easier to illustrate.
If you want to understand it in a little more detail, there are articles that explain the impact of pixel size and of sensor size separately.
As long as the total light-gathering area is the same, two sensors of the same generational technology should perform the same with respect to noise and low light performance. Now, the big question is: does a 60MP sensor have the same light-gathering area as a 12MP sensor, given there is some, albeit minimal, distance between each pixel?
Also, "sensor generations" no longer really matters for this metric, because once you get close to 100% - well it's physically impossible to go past that, and we're seeing strong evidence of a multi-generational lack of change recently:
The next frontier is in readout rate and processing bandwidth (which is where a low-resolution camera does still have a benefit, simply due to fewer pixels to read out. This was extremely important with the old BIONZ X and its 500-600 MPixel/sec sustained throughput cap; the A7R2/3/4 are inferior for video due to the fact that they have to pixel-bin or pixel-skip at full sensor width in order to keep within that bandwidth limit.)
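To put rough numbers on that bandwidth argument, here is a minimal sketch; the ~600 MPixel/s cap and 30p full-width readout are assumptions taken from the comment above, not official Sony figures:

```python
# Back-of-the-envelope check of the bandwidth argument above. The cap and
# frame rate are assumptions from this thread, not official Sony specs.

BIONZ_X_CAP = 600.0  # assumed sustained readout cap, MPixel/s

for name, mp in [("A7S III (12.1MP)", 12.1), ("A7R IV (61MP)", 61.0)]:
    need = mp * 30  # MPixel/s to scan the whole sensor at 30 fps
    verdict = "fits" if need <= BIONZ_X_CAP else "exceeds cap -> bin/skip"
    print(f"{name}: {need:.0f} MPixel/s needed ({verdict})")
```

At 30p the 61MP sensor would need roughly three times that cap for a full-width readout, which is why binning or skipping is forced.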
I know there may be situations where it's imperative to have as little noise as possible, but I've always struggled with the obsession over it. I've taken shots with plenty of noise that I love. I've taken shots that aren't 'technically' perfect; hell, I've even taken shots that aren't perfectly in focus that I've loved. Just as well I'm easily pleased; just realised I take a lot of bad shots. In all seriousness though, I've got a D800, a D3 and a D2x and I've loved using them all. A local company recently used 2 of the hundreds of images I've taken at their premises over the last couple of years; these are now on their premises wall and are ridiculously large, and neither of them was taken with the D800, because the pictures they chose were more pleasing to their eye... photography is a subjective thing, and I think we do our profession/hobby/obsession, whatever it means to you, a great disservice by picking at details such as megapixels/camera make/noise.
Yes, sometimes we get caught up in technical minutia and overlook the sheer art of photography.
Look at the images that hang on museum walls; usually taken with less technical perfection than what we have at our disposal today.
It's the same way with music. Many great records were made with less than ideal equipment. Nirvana made their classic "Nevermind" on cheap instruments in what was then an outdated studio. Prince, even after his wealth and fame, still played many songs in concert on a cheap guitar from his early days because he loved it so much.
It's nice to have the more advanced equipment, but after a while we get to the point of splitting hairs on some of these differences.
I have taken plenty of pictures I absolutely love that have noise. In fact they might not have the same character I find appealing without it. However, that hardly means I don't also appreciate a finely detailed image without noise at the same time. Fact is, I really don't obsess over either. However, the so-called image experts will, and therein lies the problem. They really don't understand the fundamental concept of photography, and it's not just composition, which is sooooo totally subjective.
It's chromatic noise that most of us find objectionable. Luminance noise is often perceived as sharpness and is much easier to process.
The film analogy also fits. Provided you have very meticulous printing, conventional B&W films like Tri-X are certainly within proper character in terms of their grain structure. Underexposed color print film, however, takes on a look of chromatic noise, with random, blocky colored chunks that look like colored oatmeal due to unexposed dye clouds.
There are many ways to compare cameras with different resolutions. This video addresses the most common one: comparing two (printed) images of the same size. However, many other comments and suggestions in this thread are also legit and worth thinking about. It's just that they are different experiments (unrelated to this video) addressing different conditions. To me, comments like "you don't get the point 1000%" are the most useless. /ending my useless comment
But this video wasn't about resolution. It was about noise, and how high resolution sensors are essentially a free lunch: you get the same noise performance with more detail.
That lunch is only free if you don't talk about file size and readout speed, but the video makes the claim that those differences have become mostly moot.
With a 4x higher resolution, I would expect a much more obvious difference between the images. Not that I could tell from YouTube, but even the large print doesn't seem like it's simply a crop. One has to keep in mind that even if 2 of the extra pixels are affected by noise, you still end up with double the resolution. So for me (despite the pricing politics of Sony, and regardless of all the obvious advantages of low pixel count) this seems to be proving the opposite. What would happen if you compared 12MP to 24MP? Also, despite these 2 cameras being of the same brand at about the same technical level, they are two different sensors with different internal signal treatment (i.e. the A7s iii has an AA filter!). I think it's at least questionable whether they could take the same amount of postprocessing.
Yes, I am questioning whether you could apply the same amount of post-processing before things fall apart (color changes, banding...). I am not an expert in this, but I have found there are differences among different cameras' RAW files.
Also (besides my not yet approved correction of the term resolution), the A7siii doesn't use an AA filter anymore...
Sorry, I wasn't specific enough: Which file do you think would fall apart first? The lower res one? I mean, if you're having trouble with the high-res one, you can just downres it until you don't have trouble anymore. So I assume you mean the lower-res one?
In the video it was stated that the 12MP could be brought up to look like the 64MP. But then again, the 64MP could be upsampled to look like a 256MP image. In my mind, there is only limited information to pull from. So, will the 64MP stand as much or even more upsampling as the 12MP?
There is no doubt the 64MP will be better in the end, because it obviously contains more information. It's just about the percentage amount relative to the unprocessed file.
The entire comparison and conclusion was flawed from the beginning and in its entirety is without any purpose. In the absence of examining and solving difference equations using the classical methodology or quantized linear sensor approximation, true light absorption into layered digital processors is without value.
Differentiation properties and symmetry properties are necessary to examine the sampling of non-causal inputs. The conclusions were made without any disciplined step sequences, which are essential for a proper comparative analysis.
I understand that the myth is debunked when photos are printed at the same size but it would be interesting to see how they compare at the same PPI. Is the noise for a high MP large print the same as a small MP small print for the same size sensor?
I'm interested in this experiment specifically for video. Using the same print size is akin to using the same TV size, but if one TV is outputting at a higher resolution (higher PPI like the prints), it would be tough to compare noise directly, no? Or am I crazy?
I think the point is that noise-per-image is the same in either case (versus the noise-per-pixel being different). If you're looking at the same image on different-sized TVs, neither will have an advantage in noise, but the higher-res sensor will be sharper, obviously.
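A quick way to see the noise-per-pixel vs noise-per-image distinction is to simulate photon shot noise at two resolutions capturing the same total light; a minimal sketch with illustrative numbers, shot noise only:

```python
# One "sensor" has 1M large pixels, the other 4M pixels of a quarter the
# area, so the total light collected is identical. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
low  = rng.poisson(4000, size=(1000, 1000))   # 4000 photons per large pixel
high = rng.poisson(1000, size=(2000, 2000))   # 1000 photons per small pixel

print("per-pixel SNR, low-res :", low.mean() / low.std())    # ~63
print("per-pixel SNR, high-res:", high.mean() / high.std())  # ~32

# Scale to the same output size: sum each 2x2 block of the high-res image.
binned = high.reshape(1000, 2, 1000, 2).sum(axis=(1, 3))
print("per-pixel SNR, high-res downsampled:", binned.mean() / binned.std())  # ~63
```

The high-res image looks noisier pixel-for-pixel, but once scaled to the same output size its SNR matches the low-res one.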
I guess I don't understand what you're asking? It sounds like you are thinking about resolution issues, which is why the lower-res image couldn't print as big, not because of any noise issues/advantages.
In the end, photographers shouldn't be worried about pixels, they should be worried about images. Whole images are the right thing to compare (or same image-content regions), not pixels.
Notice that they are printing at the same print size... but one is about 125 dpi, the other at 250 dpi. Remember, only the best eyes can actually see anything above 350ppi (I can't). I don't think there are any printers out there that can print higher, and theoretically no human can see it... although I have heard rumors that some people can accurately "feel" it.
Saying a person can't see more than 300ppi doesn't make much sense without knowing the viewing distance.
People can see about 50 pixels per degree. So everyone could probably tell the difference between these prints if they put their faces up to them, but from far enough back to see the whole image? No.
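As a rough check of that figure, here is the arithmetic, assuming the 50 pixels/degree limit quoted above (the helper function is a hypothetical illustration):

```python
# What PPI can the eye resolve at a given viewing distance, assuming a
# 50 px/degree acuity limit (the figure from the comment above)?
import math

def max_resolvable_ppi(distance_in: float, px_per_degree: float = 50.0) -> float:
    # One degree subtends 2 * d * tan(0.5 deg) inches at distance d.
    inches_per_degree = 2 * distance_in * math.tan(math.radians(0.5))
    return px_per_degree / inches_per_degree

for d in (12, 24, 60):  # nose-to-print, arm's length, across the room
    print(f'{d}" away: ~{max_resolvable_ppi(d):.0f} ppi resolvable')
```

On this assumption a 250 ppi print is already at the eye's limit from about a foot away, and both prints are beyond it at normal viewing distance.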
The video equivalent of what we saw for prints would be comparing A7s3 footage at 4K and A1 footage at 8K on the same screen size. Would they have the same noise?
I'm interested to see how pixel density affects how our eyes perceive noise. Which is why I think a good additional experiment would be to print the images at the same PPI and compare noise.
It's the same as when the aperture (or rather f-number) equivalence article came out. To this day there are people who continue to labor under the misconception that the correct way to compare camera systems with different-sized imaging sensors is to adjust the focal length to account for the difference in angle of view, but not to adjust the f-number to account for the difference in light gathering.
I think the way to go is "transformer" sensors, as SuperCCD was back in the long-ago days. Now we have a new technology called QuadCMOS. Imagine having an 80MP camera that can easily become a 20MP one with much better low light performance OR much better dynamic range (your choice). For me this would be a very tempting technology. It has already begun making its first steps in the mobile market, so I think it can appear in future cameras too... Hey, Fuji! You're THE best at non-standard, wondrous sensors AND you said you're creating something new for us in 2022 in that field; maybe this is what it's all about?
How about autofocus performance in extreme low light though, does the lower MP count help here? From what I read in the specifications, the A7sIII focuses down to -6EV while the A7rIV will AF down to -3EV.
I honestly wouldn’t mind an A7sIII for photography for the smaller files, since Sony unfortunately doesn’t offer small-RAW files. 12mp is more than enough for my purposes.
So would I, hfolker. Not everyone needs 42 to 61MP, literally. But the industry wants to sell it; in this case, Sony. And other brands do the same with that kind of sensor, but Sony Semiconductor hasn't sold its A7R IV sensor to other brands so far.
50+ MP on just 36x24mm is already cramped; I can feel that someday smartphone tech will come to FF gear, so the usable resolution would be 1/4 of the sensor's true MP count. 100MP makes much more sense, in terms of tonality and rendition, on a bigger sensor like the 44x33mm of the GFX100(s) than on 36x24mm. When the GFX 50S came out, 1:1 comparisons on YT showed that it had better tonality and rendition of fine details than the D8x0.
But it's refreshing not going mainstream in everything. There are still many film photography and gear connoisseurs, as there are other people who are perfectly fine with low-MP digital gear. Horses for courses.
Something everyone should keep in mind is that resolution is not always set by the sensor, but by diffraction in the lens. Pixel dimensions are more or less in the range of 3 to 6 microns on a side. That must be considered in comparison to lens resolution sizes that are dependent on the f-stop in use. Using the Rayleigh condition, a "perfect point" is diffracted to 1.22 W/D where W is the wavelength of light considered and D is the diameter of the aperture opening. That's an angular "size" in radians. To convert that to a distance on the sensor, multiply by the lens to sensor distance, (m + 1)F, where m is the magnification from real life to the sensor, and F is the focal length. Noting that F/D is f-number (N), the best lens resolution possible is 1.22WN(m+1). m in most cases (macro shooting is the big exception) is well below one, so we can simplify to 1.22WN in those cases. Macro shooters usually top out with an on-sensor m of 1, so they may wish to use 2.44WN.
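Plugging representative numbers into that formula (assuming green light at 0.55 µm and m ≈ 0; the pixel pitches quoted are approximate) gives a feel for where diffraction starts to dominate:

```python
# Representative numbers for the formula above: blur = 1.22 * W * N * (m + 1),
# with W = 0.55 um (green light) and m ~ 0. Pixel pitches are approximate.
def diffraction_blur_um(n: float, w_um: float = 0.55, m: float = 0.0) -> float:
    return 1.22 * w_um * n * (m + 1)

for n in (2.8, 8, 16):
    print(f"f/{n}: blur ~{diffraction_blur_um(n):.1f} um")
# f/2.8: ~1.9 um, f/8: ~5.4 um, f/16: ~10.7 um
# vs pixel pitch: A7S III ~8.4 um, A7R IV ~3.8 um
```

By f/16 the blur is larger than even the A7S III's pixels, so extra megapixels stop adding detail there.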
Resolution is set by the sensor. It is a fixed number of photosites on the sensor. Resolving power is 'set' by the lens and determines sharpness, contrast and color rendition, and how well they can be preserved. Where you speak of resolution it should have been resolving power.
Resolution most certainly can be limited by the lens; indeed the resolution of ANY optical system can never be better than what is allowed by its clear aperture. Read any optics text. If the diffraction blur is larger than the pixel dimension, then you are in the realm where several pixels are essentially acting as one (all imaging the same blur) and more of the smaller pixels buy you nothing.
It does since having smaller pixels improves the MTF below Nyquist. So it may not improve resolution power past a certain point, but it can still improve sharpness.
"buynoski Resolution most certainly can be limited by the lens; "
So you are telling us that when you use - let's say - a 24MP camera, which would be 6000x4000 pixels, it can be limited by the lens....
Really never seen something so dumb!
Resolution is fixed; it won't be changed by the glass you put on a camera. The resolution of 24MP will stay the same, as in 6000x4000 pixels.
Yet resolving power can influence image quality, which has nothing to do with the resolution in itself, as that will stay the same. You will never get more or fewer pixels, nor resolution. Yet sharpness, contrast and color rendition can be influenced by a lens: as in less contrast, smearing, aberration and diffraction, and false colour rendition. But it all doesn't limit the resolution. It limits image quality and perceived sharpness.
Costas did mention that he could work with the 12MP photo and get it very close to that from the a7R IV at a 22x33 print size. And Chris said the larger format image could be equally improved as well. Thing is, the a7R IV is already overkill for most prints up to 30x40, so no one would normally up-res them anyway, whereas the 12MP camera is still good enough for fairly large prints, and amazingly efficient software like Gigapixel, for instance, will create a close match. A 20-24MP sensor is really a sweet spot. High resolution cameras are very expensive, many photographers are fine with the detail they're getting even from 12MP cameras, it's only a photograph after all, and getting into the halls of MoMA has nothing to do with resolution.
That's why everyone needs to decide what works for them, based on how their photos will be used.
Some will print, fewer will print really large, but most photos will be viewed only on electronic screens. Most of those screens will be smaller smartphone screens, followed by tablets.
There are other real world factors to consider, like budget, bandwidth, etc.
So there's not one size fits all answer. I think that's what bugs some people, that there's not one easy answer for all.
It is interesting that the a7sIII does defy the idea that lower MP should mean lower cost.
The a7sIII right now costs as much as the a7r IVA model, and for a while cost more than the a7R IV before Sony upped the cost by adding a new LCD screen.
So Sony has clearly declared that MP is not the only factor in determining price.
Of course Canon and Nikon for years with their DSLRs also have done this, as they priced their top line pro bodies more expensively, even though they typically had lower MP counts.
The A7Siii is optimized for video, and can capture DCI 4Kp60 without binning, cropping or resampling. It has the same profiles as the FS5 and FX6, including "Cinema". Internal recording can go as high as 600 Mb/s, using CFExpress A cards, external recording is uncompressed or RAW.
Modern low-resolution sensor cameras are mostly, if not solely, engineered for video, e.g. the 10.28MP high-sensitivity Live MOS sensor in the LUMIX GH5S or the 12.1MP backside-illuminated full-frame sensor in the Alpha 7S III. So IMO we shouldn't get all caught up in comparing them to the far higher resolution cameras, many of which are engineered primarily for shooting stills.
Not if they overheat, or go past ISO 12800, or pixel-bin/line-skip in slow-mo, or have rolling shutter effects; there's no free lunch, so pick your flavor.
A good analogy I use to explain this to people is cropping. Using a low resolution sensor with bigger pixels is like using a digital crop. The sensor averages out noise at the point of taking the picture, but the resolution is lost forever. A high resolution, small pixel sensor retains the resolution information but can be digitally averaged in post by applying noise reduction. In both cases the result looks the same, but in the latter case the photographer has more control and retains the resolution information.
I have seen numbers showing that cameras like the A7R series take longer to become light-saturated compared to the A7S series. In this case, the best score is a Leica camera; I do not remember which one.
A better description would be a deeper charge well - more electrons available, hence more photons and greater dynamic range. Over the last 5 years we have seen the effective dynamic range grow from about 12 stops to 15. The time it takes for the cell to become saturated depends on the intensity of the light as well as its capacity.
"deeper charge well ... we have seen the effective dynamic range grow"
I have not seen substantial increases in full well capacity (rarely > 50ke-). But I do not extensively survey new detectors, so please correct me if wrong by supplying real numbers for specific sensors. But in the last 5+ years, read noise has improved significantly. Basically, [DR = capacity / noise] and thus lower noise translates to increased DR.
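For concreteness, the DR = capacity / noise relation expressed in stops; a minimal sketch with illustrative values, not measurements of any specific sensor:

```python
# DR = capacity / noise, in stops. Values are illustrative assumptions.
import math

def dr_stops(full_well_e: float, read_noise_e: float) -> float:
    return math.log2(full_well_e / read_noise_e)

print(f"{dr_stops(50_000, 12):.1f} stops")   # ~12: same well, noisy readout
print(f"{dr_stops(50_000, 1.5):.1f} stops")  # ~15: same well, modern readout
```

Same full well, an order of magnitude less read noise: three extra stops, which matches the generational change described above.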
However, pixel DR is an incomplete measure of actual image quality because it ignores the sampling dynamics of pixel size and MTF.
MTF has a lot to do with the actual image. MTF and the closely associated PSF, along with pixel size, determine the effective sample rate. A Gaussian PSF of 3 pixels is near the threshold between under- and over-sampling. So, for example, if a sensor+lens has a sample rate of 6 (small) pixels, then pixel clusters capture the same information as a single pixel from a sensor+lens sampling at 3 (large) pixels. In this case the oversampling pixels effectively act as fewer, larger pixels, sharing their well depths and noise. (Signal adds arithmetically but noise adds in quadrature, which is why S/N increases via binning.)
A simple way to conceptualize this is to imagine a star (point source) image contained in a single large pixel. Employing 4 half-sized pixels would contain that same star, dividing its photons among the 4 pixels.
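The same point as a small simulation; the numbers are illustrative, with Gaussian noise standing in for the pixel noise floor:

```python
# Binning claim: signal adds arithmetically, noise in quadrature, so
# combining n pixels improves S/N by sqrt(n). Illustrative numbers.
import numpy as np

rng = np.random.default_rng(1)
signal, sigma = 100.0, 10.0
pixels = signal + rng.normal(0.0, sigma, size=(1_000_000, 4))

single = pixels[:, 0]
binned = pixels.sum(axis=1)  # four half-sized pixels combined

print("S/N single pixel:", single.mean() / single.std())  # ~10
print("S/N 4-pixel bin :", binned.mean() / binned.std())  # ~20, i.e. sqrt(4) better
```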
Strange article heading: "Why lower resolution sensors are not better in low light". It's got nothing to do with resolution and everything to do with "pixel size" (the more accurate term being photosite size - sensors don't have "pixels").
Sony cites the maximum ISO value of the A7Siii as 409,600 and that of the A7Riv as 102,400. This is roughly proportional to the difference in the area of each pixel.
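A rough check of that proportionality, using approximate published sensor widths and pixel counts (assumed figures, not official specs):

```python
# A7S III: ~4240 px across ~35.6mm; A7R IV: ~9504 px across ~35.7mm.
import math

pitch_s3 = 35.6 * 1000 / 4240   # ~8.4 um
pitch_r4 = 35.7 * 1000 / 9504   # ~3.8 um
area_ratio = (pitch_s3 / pitch_r4) ** 2

print(f"pixel area ratio: ~{area_ratio:.1f}x (~{math.log2(area_ratio):.1f} stops)")
print(f"max ISO ratio: {409600 / 102400:.0f}x (2 stops)")
```

About 5x the pixel area (~2.3 stops) against a 4x (2 stop) max-ISO ratio, so "roughly proportional" holds.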
Well, Sony also cites "Large pixels deliver high sensitivity and low noise", so I'm not putting much weight behind ISO claims. Especially as ISO is completely fake anyway. https://youtu.be/QVuI89YWAsw
Ed - You are completely wrong: sensor specifications are from base ISO 100 to 32,000 for the A7Riv, and for the A7SIII from base ISO 80 up to 102,400.
Those are the actual hardware limitations for these sensors. Any 'High' settings are completely fake and derived from maximum gain value.
Vynz - It's a little more complicated than what's shown in the video. First, any Sony sensor is dual gain, meaning they have two actual ISO settings: mostly a starting point called base at ISO 100 and a second value (reset) starting at 640 upwards.
Secondly, there is a difference between pre-amp gain and gain applied afterwards.
Your camera does pre-amp, and any RAW-editor-based exposure is first amplified then further calculated in post. Therefore the in-camera ISO setting is always a little cleaner than the one done in a RAW editor; even though they might look much alike, they are still different.
However to make it not too complicated one 'could' say there is only one ISO value, though this is technically not fully correct.
The ratio between 32,000 and 102,400 is still about 2 stops. Sony cameras are basically ISO invariant, so whether sensitivity is boosted in the sensor or in post-processing is a toss-up. However, Sony images tend to have less noise and greater shadow detail from post-processing than from raising the camera's ISO setting.
All this discussion may be moot. The highest ISO I have needed for practical photography is between 3500 and 5000. Even so, that would have been well over the top for my Nikon D3, Leica M9 and Hasselblad CFV.
Ed - There are no 2 stops; it's all digital gain. A sensor does not become more sensitive to light. It is all derived from either base ISO or, when it's a dual-gain sensor, from the second hardware-baked ISO setting, mostly 640 or 800. The result, as in luminosity and color, is calculated. For the calculation on a Bayer sensor, 9 photosites are needed to calculate the color for a pixel, and only the greens contain actual data. For the blue and red photosites the color is a calculated average.
H-High settings are fully fake, as they are calculated after the gain has already happened. The gain in a sensor goes from base to either 32,000 for the A7Riv or 102,400 for the A7Siii; this is a hardware-baked limitation. Any L or H settings are actually non-existent. They would be better considered as settings used for marketing purposes, to make a sensor look more impressive than it actually is. Therefore it does not make any sense to look at L or H ISO performance.
This would be a very good time to thank renowned physicist Emil Martinec, whose 2008 paper we used to convince DPR editors and readers about the relationship between noise and pixel size.
My summary was ambiguous, I suppose, and you picked what you wanted. People did not believe Emil in 2008. All consumer photo sites droned on about the benefits of "big pixels" in low light. They ALL had it wrong. Askey was lost. It wasn't until this generation of DPR editors came in that things began to change. [Rishi was among the first to acknowledge it.] I remember the endlessly stupid things that cosplay engineers said to try to defend the old line.
@Luke Kaven - do keep in mind that in 2008, image sensors were manufactured on much older processes, that sacrificed a larger amount of area per pixel to support circuitry.
Back then, there WERE advantages to large pixels in low light, because then, support circuitry was such a huge portion of the sensor area that you lost noticeable amounts of sensitivity due to photons hitting stuff that was not photodiode. That's also why you saw so much improvement in low-light performance from one generation to another.
But since then, things have changed - manufacturing processes improved and sensor manufacturers got better at microlens design. As a result, even fine-pitch FSI sensors like that seen in the A6300 and all subsequent Sony APS-C cameras are indistinguishable in PDR performance per unit area from any other modern sensor.
For me, the studio scene comparison tool does not work properly. It gives the same size for comp and print. That is the case on different computers with different OS and different browsers. I have no idea where that could come from.
To the topic: the A7s beats the A7R at ISO 409,600, but only because the A7R tops out two stops earlier. But pushing it gives pretty much the same result.
That is interesting, because more than 10 years ago the amount of read noise per pixel was much higher and sometimes led to different results.
I believe “print” is a fixed size. All images scaled to that size. “Comp” scales all larger resolutions to the size of the lowest resolution camera of the 4 you have selected.
According to the sub-titles, images from the A7Riv were printed at roughly twice the resolution of the A7siii.
Secondly, resampling an image for printing at a given size is an averaging process which decreases the apparent random noise, roughly, by the square root of the reduction. Resampling to fit a given print size can be done independently, or left to the printer driver.
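Applied to this pair of cameras, the square-root rule works out as follows (nominal pixel counts):

```python
# The square-root rule above, applied to this comparison.
import math

reduction = 60.2 / 12.1            # pixels averaged per output pixel
noise_drop = math.sqrt(reduction)  # ~2.2x
print(f"~{noise_drop:.1f}x less apparent noise "
      f"(~{math.log2(noise_drop):.1f} stops)")
```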
I find the studio scene comparison tool a bit frustrating (similar to the issues with the-digital-picture version of it), in that I wind up with different proportions of the scene due, I think, to how the images were captured and stored. It may be that I'm just not grokking how to do it correctly, but maybe in the future DPReview could have a clever programmer script the creation of a parallel database of pictures based on object recognition/sizing.
This is not to criticize the tool. It's a great value to the community.
Good, straightforward, clear video - thanks. Judging from some of the comments though, people don't change deep-seated beliefs so easily, despite clear evidence to the contrary. Bit like climate change...
Anyway, I think these educational pieces are good and make a nice change from 'which gear is better' type content. I feel like another piece on equivalence is due! Yes it's a minefield I know, but actually much needed because it's a complex and much-misunderstood area. If you use a range of formats, then you end up thinking about it a lot - or at least, I do.
It should probably be covered in stages rather than all at once, because there are many aspects, and ideally to include some content in the written format.
Equivalence is far more straightforward than this, IMO. I don't have any problems with equivalence, but apparently a large number of folks do. Manufacturers use 35mm-equivalent focal length to describe the angle of view (field of view) of a particular lens and camera combination.
The biggest issues seem to arise when we start talking about lens aperture equivalence. That's apparently when some folks really get heated up. What's rarely talked about are T-stops: T-stops are a measurement of how much light is actually going through the lens at any given f-stop.
@BackToNature1: The basic concepts might be quite simple, but the real-world application and effects are quite complex IMO, similar to concepts around exposure.
Just one real-world example: let's say I'm going out to shoot some landscape handheld in pretty dim overcast conditions. I like to get everything sharp where possible, so DOF is important (yes, I would ideally use a tripod, but that's another story). Let's say I have a FF and MFT camera available, both around 20mp, similar age, both with f/2.8 lenses with equivalent field of view. The MFT kit is a lot smaller and lighter. Is it worth taking the FF kit? Why/why not?
I'm not really looking for answers here - just trying to illustrate that real world decisions are made based on people's understanding of equivalence, and that it's not necessarily that simple to make good decisions.
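For what it's worth, the usual equivalence arithmetic for that FF-vs-MFT question looks like this; a minimal sketch, assuming a 2x crop factor for MFT, with the function and settings as hypothetical illustrations:

```python
# FF settings giving the same field of view, DOF and total light as a
# given set of MFT settings, assuming a 2x crop factor.
CROP = 2.0

def ff_equivalent(focal_mm: float, f_number: float, iso: float):
    return focal_mm * CROP, f_number * CROP, iso * CROP ** 2

print(ff_equivalent(12, 2.8, 200))  # MFT 12mm f/2.8 ISO 200 -> FF 24mm f/5.6 ISO 800
```

On this arithmetic the MFT f/2.8 kit has no low-light or DOF advantage over the FF kit stopped down to f/5.6; the FF body only pulls ahead if you can accept the shallower DOF of opening up beyond that.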
@jonby: I never compare with something I don't have at hand when I am actually working with the camera. The light meter even compensates for eventual light loss by the lens system (f-stop versus T-stop).
When you crop in post, do you calculate equivalence? If not, why not? Because you relate to what you have, and work from the files you get. What matters should be image content and how the content is organized for aesthetic and informative reasons.
Or you could place yourself in an armchair and start making everything technically complex ...
@Magnar W: my example was about making choices about equipment you have available before you go out shooting - many people have more than one camera system option.
But seeing as you mention it, I still find that an understanding of equivalence can be important whilst shooting - especially if you use different systems on a regular basis. For example, to predict depth of field, you need to have an awareness of the impact of sensor size.
It's not about making it complicated. It's about getting the best out of the tools you use.
@jonby: Have you ever, while shooting, calculated DOF for one sensor size and then used the equivalent value to set the aperture on the camera you are using? I have never met photographers who work like this.
Aperture and depth of field is a judgement, dependent on what you shoot, distance to the subject, subject-background separation, available light, whether you use hyperfocal distance focusing or not, etc.
If you do this, I assume that you also calculate CoC from the intended display or print size of your photographs, to find what size of CoC should be regarded as "sharp enough".
@Magnar W: "Have you ever, while shooting, calculated DOF for one sensor size, and then used the equivalent value to set the aperture on the camera you are using?"
I don't 'calculate' DOF as such - I judge it based on previous experience, and the framework in my head is based on full frame because I started photography on 35mm film. If I'm using a smaller sensor camera (yesterday I used a G1X, with a 1.85x crop factor), I might do an approximate calculation in my head to work out what aperture will give me similar DOF as, say, f/11 on FF. Not sure why you seem to have a problem with this - I'd be surprised if most photographers who work between formats don't do this in some form or other. And no, I don't get involved with CoC calculations because there are much more important things to think about when making photographs. But a basic conversion of aperture to account for the sensor size - yes absolutely.
Equivalence doesn't work unless you start from the faulty premise that all photographers are idiots who are incapable of making rational decisions when choosing equipment. For instance, people who buy small-sensor cameras and inexpensive lenses go into the transaction fully cognizant of the fact they won't be able to print or project their photos as large as they would if they had Zeiss optics on a medium-format back. Their rantings about depth of field are equally fallacious; there is no such thing; what you are viewing is an optical illusion, with your eye-brain interface interpolating the content on either side of the plane of focus.
If you can get sensors in the same technology, so you need the same isolation thickness between pixels, you get extra exposed area with fewer pixels. How much that matters depends on the ratio of sensor area to isolation area, and on how well the microlens over the pixel concentrates the light into the pixel. Also, bigger microlenses (for bigger pixels) should help with alignment (for the same distance error you lose a smaller % of the light).
Also, at super-low light levels, bigger pixels keep detail above the noise floor at lower light levels, which can be handy. Useful for Bayer sensors, where the next nearest (say) red pixel is a little way away.
@Commentators: It's not about the noise; it's about what's left after we filter it out. Counting noise specks at 100% doesn't tell us which picture is higher quality. Higher resolution will always produce more noise, AND at the same time it will produce more data as well. More data = higher quality.
You don't make it clear that this is largely because of BSI and microlens improvements, and the fact that full frame pixels are still enormous in comparison to smaller formats. At smaller pixel sizes with FSI, quality really started to drop (just compare the Nikon J1 to the J4). BSI fixed this by moving wiring to the back and improving the angle response (see the Nikon J5). Then there was higher crosstalk at very high resolutions, which was again solved in phone sensors with well isolation and will most likely move to cameras when they reach that density. So it was something that was very real but is less of an issue now.
BSI for a decade? Maybe in phones (more like 7 years), but no large-sensor camera had a BSI sensor till the NX1 in 2014, and Sony only introduced it in full frame recently. So yeah, it is because of BSI, because with just old FSI and old microlenses you would be in Nikon J1 to J4 territory once the pixels were small enough (remember I mentioned the comparatively large pixels). The Nikon J4 has a pixel pitch that would give a full frame sensor 120+ MP. You could only do so much with microlenses, and phones were suffering at 8MP until BSI let them get back almost half the die area.
A very welcome exercise, and fully confirmed by my own personal experience (currently Z7 and Df). At a given sensor size and generation, more pixels have significant advantages when it comes to croppability, size potential of final prints, and level of fine detail in very large prints observed at close distance. At the same sensor size and generation, fewer pixels have significant advantages in terms of device and storage costs, readout speed, video implementations, and editing speed. Noise is the same when you look at same-size output, up until the final print breaks down (and nobody routinely prints at sizes where 12MP breaks down). Pixel peeping shows less noise per pixel for lower-res bodies, but that has very little to do with the end product (a displayed or printed whole image).
ILCE-7SM3 ISO sensitivity: Photo ISO 80-102400 (expandable ISO 40-409600); Movie ISO 80-102400 (expandable ISO 80-409600). ILCE-7RM4A ISO sensitivity: Photo ISO 100-32000 (expandable ISO 50-102400); Movie ISO 100-32000. "Which cameras are ISO invariant? ISO invariance means that the quality of the image captured by a camera at high ISO is equivalent to the image..."
So I am kind of surprised they didn't talk about ISO invariance, nor about how massive the difference in native ISO between the two models is. One camera is video-centric while the other is stills-centric. Notice how Chris said it "kind of proved" it?
My post was hardly about top-end max ISO sensitivity, just so you know. One could have stated it's not about low-end min ISO sensitivity either. What I did mention, however, was ISO invariance, and how one camera is video-centric while the other is stills-centric. But sure, talk about max ISO range extremes, which I posted just for reference, not about what happens at the extreme end of the range...
I would think the difference would depend on the pixel area efficiency, not the number of pixels (the pixel resolution, that is). By that I mean what percentage of the sensor area is actually light sensitive.
Practically the same (and as close to 100% as we are likely to get), since there is an array of microlenses on top (they gather light that would otherwise fall on the "inactive" area, and this inactive area is even smaller in modern back-side illuminated (BSI) sensors).
This is the key - and most sensors since 2015 have been as close to the theoretical limits for bayer-on-silicon as you can get, so that you see very little difference until you go to extremes (for example, A7R4's roughly 1/3 stop penalty is noticeable but it's quite small).
The mantra of "larger pixels are better in low light" held true when manufacturing technology was more primitive and it was harder to get close to the ideal limits - but has not held true since roughly 2015 or so, at least for APS-C and FF sensors. (Still potential problems when you get to insanely fine pixel pitches such as the submicron pitches seen in smartphone sensors.)
The A6300 sensor alone showed that at that pixel pitch, BSI isn't even needed - FSI on a finer process along with improved microlenses is enough.
You did say you were going to keep the test simple (shooting a portrait). Now go and shoot a tree with the sun behind it with a 1/4 crop of a FF sensor and a full m43 sensor, and then post the colour of the leaves: the m43 leaves will not be green but grey, while the 1/4 FF crop's are still green. No one is interested in noise, but in colour reproduction.
And why is that? The only advantage of the FF sensor is higher bit depth (14 vs 12), but the usable DR drops about the same 2 stops when using only 1/4 of the sensor, so this should not matter.
If this was true, 1) why would not Panasonic and Olympus put FF sensors in their high end Micro Four Thirds models, and only use 1/4 of the area? Even better, 2) they could cut the FF sensor to 1/4 size at the factory. Or even better, 3) they could skip 1 and 2 and just use sensors designed for 4:3 size.
It's the same with using MF sensors on high end FF cameras and cropping the image to FF size.
In a nutshell, noise tends to be random. A higher resolution camera has smaller pixels than one with lower resolution, so at the same overall magnification (i.e., print size) the noise is less visible. Grain/noise is less visible in video because it is visually averaged over multiple frames while the video (or movie) is running.
In a practical sense, the best comparison is in terms of results at the same print size, as used in this video. In terms of signal to noise ratio, the best comparison is at 100%, or pixel=pixel.
I have both cameras, an A7Riv and an A7Siii, and noise has not been an issue in either. Understand that in a situation which requires ISO 25,600 (a long way from the top in either camera), you probably can't read the dials on the camera without assistance.
> In terms of signal to noise ratio, the best comparison is at 100%, or pixel=pixel
Nonsense. I can't even say that you will end up measuring SNR of individual pixels, because that makes no sense.
So how are you going to do that? Measure same size (in pixels) 1:1 crops? Yes, smaller-pixels' crop will look noisier -- but we know that. It tells nothing about the entire images, which will have similar SNRs.
It amazes me how people in this thread try to come up with new ways of saying "Yes, the video is OK, but you still did it wrong".
Another guy (to whom I didn't reply immediately and now I've lost his post in this avalanche of forum wisdom) suggested that the images should have been compared at the same DPI... Well, then what? View the images at distances inversely proportional to pixel sizes? If not, that's same pixel-level comparison.
People just won't let it go...
Don't just tell me I'm wrong. Tell me how are you going to measure SNR "at 100%, or pixel=pixel".
I thought the camera comparison was apples and oranges. The comparison should be of same-megapixel-count sensors of different sizes: the same MP in 35mm full frame, APS-C, and 4/3. If you review the "studio" camera tests looking at raw files, it looks to me that full frame 35mm has a near two-stop advantage noise-wise over APS-C; that is, at the point where you start seeing noise, "ASA" 800 on APS-C looks like ASA 3200 (two stops) on full frame.
I'd say you're proposing the apples to oranges comparison. The comparison between the two Sony cameras is almost comparing the same sensor just divided up into two different amounts of megapixels.
More sensor surface to absorb more photons definitely makes a difference. I don't think anybody debates that.
Bigger pixels especially in full frame lower megapixel sensors are more sensitive to light and can gather more, this video doesn't dispel anything. The high megapixel version simply has a modern processor that deals with noise better than older cameras.
The point still remains, larger pixels gather more light which you can't argue with physics.
You're missing the point. The higher resolution sensor gathers the same amount of light, it just divides it up more times. Then when you print it out or put it online the resolution is shrunk down so the pixels are averaged back together.
When viewed at 2MP on Facebook, each pixel is made up of the same amount of sensor area. The A7S just has fewer of its pixels averaged to make 1 pixel than the A7R. Obviously the algorithms that compress images down will play a part, but you're basically doing the same thing exporting the image as the sensor did when creating the raw file.
> Bigger pixels especially in full frame lower megapixel sensors are more sensitive to light and can gather more
Individual pixels, yes. Sensor as a whole, NO.
Before the advent of microlenses and BSI, considerable area of the sensor was occupied by electronics and wiring, and thus effective sensor area of higher MP sensors was lower than that of the lower MP sensors. That is no longer true for modern BSI sensors. Hence noise characteristics are now on par.
As to "especially in full frame": FF has nothing to do with these comparisons. Nada. Nil. Zilch.
@ Toilet Roll: You totally forgot that if you have larger pixels, you have fewer of them for a given sensor area, say a full frame sensor. The result is the sum of information from all pixels. That's math and physics.
The main factor for noise level is the sensor area. The light-gathering area.
Another correction: you are assuming that the photos are processed by the camera, since you mention the "modern processor". The results would be the same if the images were shot in RAW but processed with the same program (say Adobe Camera Raw). That may be what the authors of the video did; it's not explained.
"A7s3 has Bionz XR, 7r4 does not." This only matters for video and not stills - most notably, the R4's video image quality takes a huge hit due to having to pixel-skip or pixel-bin when doing full sensor width video, since the old BIONZ X has a sustained bandwidth cap of around 600 MPixels/sec
Or: Lower resolution is now only beneficial for bandwidth and throughput reasons
For stills, especially raw stills: Camera processing is irrelevant since all raw data processing is done on another machine.
A simple analogy that should make this easy to understand: imagine it's raining and you would like to measure how much water falls on a one-square-foot area. Now imagine one person does this by putting out a bucket with a one-square-foot opening, and another person does this by putting out two smaller buckets with half-square-foot openings.
The accuracy of the sampling for each of the two smaller buckets will be lower, since they each cover a smaller area. But if the guy then combines both buckets, he has information that is 100% equivalent to the information obtained by the single larger bucket.
That's the same as larger resolution imaging sensors. It's just a larger number of smaller buckets. You can always combine the buckets if you just want to increase the accuracy of the measurements (i.e. lower noise) at expense of spatial information (i.e. lower resolution).
A higher resolution sensor comes at no (inherent or fundamental) trade-off in noise, practical issues aside.
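The bucket analogy can be run as an actual simulation, since raindrop (or photon) arrivals are Poisson-distributed; a minimal sketch with illustrative numbers:

```python
# One full-area bucket vs the sum of two half-area buckets: the combined
# small buckets are statistically identical to the big one.
import numpy as np

rng = np.random.default_rng(2)
rate = 10_000  # expected drops on one square foot

big    = rng.poisson(rate, size=100_000)                        # one big bucket
halves = rng.poisson(rate / 2, size=(100_000, 2)).sum(axis=1)   # two combined

print(f"big bucket   : mean {big.mean():.0f}, std {big.std():.1f}")
print(f"halves summed: mean {halves.mean():.0f}, std {halves.std():.1f}")
```

Both show the same mean and the same scatter: splitting the collecting area loses nothing.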
I have a Nikon 1 V2 camera with its dinky little 1-inch sensor. I agree with the point of this article: the number of pixels has little to do with noise. I found very little difference in noise levels among all the Nikon 1 cameras I own and have tried. I did, however, find a difference between larger and smaller megapixel counts: the 10-megapixel cameras (Nikon 1 V1 and J1) definitely had lower resolution and less detail compared to the V2, V3, J4 and J5 models, all with 14 or more megapixels. This obviously flies in the face of the "megapixel myth". For the responses, I have plenty of popcorn.
To be fair, I have yet to watch the video (though am about to). I feel like, in general, higher-res or crop-sensor cameras can sometimes be overrated for low-light, high-ISO performance. I have used the D500 quite a few times, and own a D600. IMO, the D600 beats the D500 by 2/3 EV in low light. I also own a Nikon D4 and have used the D850 quite a few times. Same story with the latter.
Could be...but about a week ago B&H showed D850 as being discontinued but later was proven to be an error.
And the Sony website still shows a9 as a current model.
And wouldn't it be more logical to assume an a9iii if a9ii was discontinued? With the chip shortage and all, who could blame Sony for de-prioritizing the older a9?
“A9iii is coming to show what speed really means.” A1 is currently the fastest camera on the market. Not even the R3 will beat it.
The A9III very well may be faster but I assure you no A1 shooters are going to be upset that Sony was able to make a lower resolution (than 50MP) camera a little faster than the A1.
And if they do, Canon and Nikon will only be further behind. So Sony will be in good shape. Don’t worry.
It wouldn't be the best business move by Sony to put out an a9III at this point. The A1 is still selling very well and hard to get. In this time of chip shortages, why potentially take away from A1 production?
And most Sony users interested in the high burst rate who want more than an a9II are just buying the A1. So why take away sales from the A1?
An a9III at this point would only compete against the A1 for resources and sales. So why do it?
I can believe that Sony is at the moment de-prioritizing A9 production for other newer models.
"Why would the A9 being discontinued mean anything?" - back when Sony tended to keep older cameras on the market forever, it was especially noticeable if Sony actually went to the point of discontinuing something.
But the entire world has been in a massive semiconductor shortage for a year. If a common component between the A9 and A9II is in short supply, it makes sense to just stop making the A9.