Interchangeable lens cameras: stagnation in image quality?

Pansottin


Hello friends, what kind of comments do you have on this article?

Medium format isn't mentioned. Still, are the cameras of recent years just mirrorless versions of what's been around for the past decade, or small tweaks to previous mirrorless models?

Is the information presented technically reliable? Is the phone argument valid?

Thanks for your opinions.
 
The methodology is awful. Comparisons should be made at the same print size.
Medium format isn't mentioned, yet, are the cameras of recent years just mirrorless versions of what's been around for the past decade
In the past decade the big IQ improvements have been dual conversion gain and BSI.
or small tweaks to previous mirrorless models?

Is the information presented technically reliable?
Yes and no. See above.
Is the phone argument valid?

Thanks for your opinions.
FWC at base ISO has been stuck at about 3000 e-/um2 for most of the last decade. Resolution has increased. Lenses are better. AF has improved a lot. Cameras are faster. Stacked sensors have been introduced.
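As a rough sanity check on that areal figure, the per-pixel full well follows directly from pixel area. This is only a sketch: the ~3000 e-/um² density is the number quoted above, but the pixel pitches below are illustrative, not tied to any specific camera.

```python
# Rough full-well capacity per pixel implied by ~3000 e-/um^2 at base ISO.
# The areal density is from the post above; the pitches are illustrative.
FWC_PER_UM2 = 3000  # e- per square micron

for pitch_um in (3.76, 4.35, 5.94):  # roughly 61, 45, 24 MP full-frame classes
    fwc = FWC_PER_UM2 * pitch_um ** 2
    print(f"{pitch_um:.2f} um pitch -> ~{fwc / 1000:.0f} ke- full well")
```

If the areal density is fixed, shrinking the pitch trades full well for resolution, which is why FWC per unit area is the more telling metric.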
 
We have reached the stage of "good enough" in consumer digital imaging. What more in terms of image quality would you want? More MP? To what end? How big do you need to print? What else do you need?

It seems that most digital imaging improvements are taking place in the scientific fields such as astronomy, medical, etc. where the goals of improved image quality are clearer.

The good news is that for general photography, any camera is good enough.
 
What is called image quality is often related to photon statistics. Modern sensors are pretty close to capturing all photons incident on the sensor, while read noise is down to low single-digit values, measured in e-.

The other major part in image quality is FWC and that seems pretty fixed.

So, sensor development is pretty much limited by quantum limits. We can make the pixels smaller, thus capturing more detail.

The sensor can be made larger, that brings some gains in image quality.

There may be a lot of development in lenses, but sharpness/resolution also has a fundamental physical limit, known as diffraction. Optimum performance needs a medium aperture, but that may not allow the DoF needed.
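To put numbers on the diffraction limit, here is a back-of-the-envelope sketch using the standard Airy-disk diameter (2.44 λN) and the incoherent MTF cutoff (1/λN) at a single reference wavelength of 550 nm; the f-numbers are just examples:

```python
# Diffraction-limited resolution sketch at a reference green wavelength.
# Airy disk diameter d = 2.44 * lambda * N; MTF cutoff f_c = 1 / (lambda * N).
WAVELENGTH_MM = 550e-6  # 550 nm expressed in mm

for f_number in (4, 8, 16):
    airy_um = 2.44 * WAVELENGTH_MM * f_number * 1000  # Airy diameter in um
    cutoff_lp_mm = 1 / (WAVELENGTH_MM * f_number)     # cutoff in lp/mm
    print(f"f/{f_number}: Airy ~{airy_um:.1f} um, cutoff ~{cutoff_lp_mm:.0f} lp/mm")
```

Once the Airy diameter spans several pixel pitches, adding more pixels buys little extra detail, which is the tension with DoF that the post describes.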

Most development is in functionality: faster AF, faster readout, workable electronic shutter.

Best regards

Erik
 
The Fujifilm 50S came out in January 2017, using, I think, the same sensor as the Pentax 645Z, which was released in April 2014, just over 9 years ago.

I find the 50S excellent in use, and I probably don't need a better sensor.

Since then we have backside-illuminated sensors (GFX100) and stacked sensors for faster readout.

The only things I may want are excellent dynamic range with a fast global shutter, or low-light photon detection across multiple spectra. The Quanta Image Sensor (QIS) may give me the latter, but it's not in a consumer camera yet, and research units are still quite expensive. We don't have excellent global-shutter sensors yet.
 
The author is correct that "image quality" (as interpreted in the article) has not changed much since 2017. That is because:
  • Read noise has been fairly constant at around 1e- once you hit the high conversion gain mode
  • It's kinda hard to improve on 90% quantum efficiency (peak value--there is some variation depending on frequency)
  • Full well capacity has not changed for a given pixel pitch
The end result is that the dominant form of noise in many/most photographs taken with full-frame sensor cameras is shot noise rather than sensor noise. Cameras can't do much about shot noise--it is inherent in the light. Thus, few improvements since 2017 in "image quality" aside from resolution.
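A toy model makes the shot-noise point concrete. This is a sketch only: the 1 e- read noise is the figure from the bullets above, while the signal levels are made up for illustration:

```python
import math

# Toy SNR model: total noise = sqrt(shot^2 + read^2), with shot = sqrt(signal).
READ_NOISE = 1.0  # e-, high-conversion-gain figure from the post

for signal in (10, 100, 1000, 10000):  # photoelectrons per pixel, illustrative
    shot = math.sqrt(signal)
    total = math.sqrt(shot ** 2 + READ_NOISE ** 2)
    print(f"{signal:>6} e-: shot {shot:7.1f}, total {total:7.1f}, "
          f"SNR {signal / total:7.1f}")
```

Even at modest signal levels, total noise is almost entirely shot noise, so a quieter sensor barely moves the SNR; the light itself sets the limit.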

However, the author is flat wrong about a few things, and misleading on others. First, he suggests that R&D efforts on new sensors essentially stopped in 2017 due to the adoption of cell phones as primary cameras. That's demonstrably not true. Megapixel counts may not be climbing at present, but stacked sensors have allowed dramatically better autofocus, the adoption of electronic shutters, etc. Depending on your photographic genre, sensor features like these have a bigger impact on "image quality" than do dynamic range or megapixel counts.

Next, the author implies that cell phones are very nearly as good as larger cameras in terms of image quality. That's simply not true. Cell phones do an amazing job considering their limited size, but the photographic window in which they can perform as well as larger camera systems is quite narrow. They don't have the same dynamic range. They don't have the same signal to noise ratio. They don't have the same resolution, regardless of megapixel count. They don't have the same ability to control depth of field (and, no, computational photography and AI are not yet substitutes). They don't work well with strobes. Are they great for capturing casual outdoor and travel shots for Instagram? Absolutely. Convenient? Absolutely. Easy to use? Definitely. Can they be used even for professional work? Sure, within a certain range of requirements. Would I ever choose one for product photography? Portraiture (aside from candids and street)? Sports? Wildlife? Astrophotography? Not a chance.

Next, the author tells us that the SNR on the A7RIII is higher than on newer cameras because it has larger pixels. Generally, measuring SNR at the pixel level is really silly unless you are comparing sensors with the same pixel pitch. Why? Because we don't view our images at 100%! We look at them, whether printed or on screen, at totally different magnifications, and we won't look at them at the same magnification from a 42 MP camera as from a 102 MP camera, so per-pixel SNR isn't a useful measure of, frankly, anything.
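The equal-print-size point can be sketched quantitatively. Assuming uncorrelated, shot-noise-limited pixels, averaging n pixels down to a common output size improves SNR by sqrt(n). All the numbers below (per-pixel SNRs, output size) are made up purely for illustration:

```python
import math

def print_snr(sensor_mp, per_pixel_snr, output_mp=8.0):
    """SNR after downsampling to a common output size.

    Assumes uncorrelated pixel noise: averaging n pixels gains sqrt(n).
    """
    n_averaged = sensor_mp / output_mp  # pixels averaged per output pixel
    return per_pixel_snr * math.sqrt(n_averaged)

# Made-up per-pixel SNRs: the lower-resolution sensor "wins" per pixel...
print(f"42 MP, per-pixel SNR 40 -> at 8 MP output: {print_snr(42, 40):.0f}")
# ...but the gap closes once both are viewed at the same output size.
print(f"102 MP, per-pixel SNR 26 -> at 8 MP output: {print_snr(102, 26):.0f}")
```

Normalized to the same output size, the higher-MP sensor's per-pixel disadvantage largely evaporates, which is why same-print-size comparisons are the meaningful ones.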

My biggest problem with the article is the implication that resolution, dynamic range, and SNR are the only components in image quality. AF accuracy matters. AF speed matters. Lens quality matters. Ease of use matters. Focus tracking matters. Frame rate matters. Lots of other factors matter, and there continue to be advances from one generation to the next in digital cameras.

Keep in mind, in the days of film one could claim there were NO improvements in image quality due to cameras--ever--if using the metrics the author seems to have adopted. After all, film cameras had nothing to do with image quality as long as they didn't have light leaks, right? So a Kodak Instamatic was every bit as good as a Nikon F5 if loaded with the same Kodacolor film, right? After all, the film was capable of the same lp/mm, had the same grain structure, the same sensitivity, the same dynamic range, right? So, really, the same image quality? Forget the advantages of larger negatives. Forget the advantages of better lenses. Forget the advantages of higher frame rates. Forget the benefits of a wider range of focal lengths. It's really the same "sensor" in an Instamatic as an F5, so no change in image quality. That seems to be what the author is implying.
 
What happened to the heavily hyped fuji/panny organic sensor? What happened to Fossum's new quantum whatsit sensor? These technologies were supposed to be game changers. Are they dead or just slow to market?
 
What happened to the heavily hyped fuji/panny organic sensor? What happened to Fossum's new quantum whatsit sensor? These technologies were supposed to be game changers. Are they dead or just slow to market?
Work is still taking place on the photon counting sensor technology. Eric posts updates from time to time in PS&T.
 
What happened to the heavily hyped fuji/panny organic sensor? What happened to Fossum's new quantum whatsit sensor? These technologies were supposed to be game changers. Are they dead or just slow to market?
The QIS is still in scientific space and quite expensive.

--
Photo of the day: https://whisperingcat.co.uk/wp/photo-of-the-day/
Website: http://www.whisperingcat.co.uk/ (2022 - website rebuilt, updated and back in action)
DPReview gallery: https://www.dpreview.com/galleries/0286305481
Flickr: http://www.flickr.com/photos/davidmillier/ (very old!)

 
It is overly narrowly focused on "just" the sensor. Taking great images is so much more, and it is in these other areas where some of the biggest changes have occurred: eye and subject detection and tracking, very high on the list; 8.3K 60p HQ N-RAW and N-Log video recorded internally; vast improvements in IBIS and stabilisation; vast improvements in lenses (weight, Fresnel elements, new materials, in-lens VR synced with in-body stabilisation); far more controls; "fly-by-wire" solutions that give the user more control; focus peaking; histograms, false colour, blinkies; and so many more. And look at the capabilities of the IQ4 back to process in body: frame averaging (an extraordinary capability); HDR images (two raw files combined 3 EV apart); and so on.

But then to the sensor: BSI stacked CMOS sensors certainly are higher performance than previous sensors. But are they "better" quality than CCD or non-stacked sensors at base ISO? No, really just the same.

When one looks at the very best cine and medium format sensors: the dynamic range of the Arri Alexa 35 is extraordinary; the resolution at 150 MP / 100 MP is certainly higher and, when using modern lenses, entirely accessible.

Sensors have not stagnated - the quality has been considered adequate, and the R&D money is in obtaining far better system performance to deliver high keeper rate.

Will global shutter be delivered soon, and at what resolutions?

Will 16- or even 20-bit data be available and affordable on smaller sensors, or at all at reasonable prices? (Will the IQ5 have 20-stop dynamic range and 20-bit data? Unlikely, but one can hope.)
Medium format isn't mentioned, yet, are the cameras of recent years just mirrorless versions of what's been around for the past decade or small tweaks to previous mirrorless models?
Well, not really, if one looks at the Phase One IQ4 (on the XT or XC) or the X2D 100C/GFX 100/100S.
Is the information presented technically reliable? Is the phone argument valid?
No - it is dumb, mindless marketing crud. How many fools will want Mr Sony's first smartphone with a 100 MP 1" sensor, only to work out they can do nothing with these images? They are just too large to use on social media and too cruddy for any serious work.
 
Jim said it best when he outlined the major advances of the ILC family of sensors in the past ten years. Sure, there have been some advances, but we all know the impact of the phone. There is less money to be made by spending big on advancements with the sensors we all want. You can make phones better with computational photography much more easily than making big physical advances in the sensors we want.

I just went back and looked at some RAW files I took with my Canon DSLR and L lenses ten years ago and the IQ is pretty darn good. (The first thing I noticed is that I can do a better job in post now with newer LR and better skills). Is it close to GFX image fidelity that I shoot now? No, not really even close, but not everyone is looking at the images on a big 4K (soon to be 6K) pro monitor and they won't really notice it unless they know how to look at it and view at higher res.

So like Macro said, how much do we need?

The money is in the phone-size sensor department.

The article makes some editorial style statements that are true in a sense but could be torn apart individually. Their comparison methodology is silly and if they think any phone shot will hold up against an APSC, FF or MF image, that is insane. My wife's iPhone shots fall apart when viewed at anywhere near the size of my GFX or Q2 files.

But yes, we all get it. Phone sensors along with computational photography and upcoming AI capability with phone photography is an amazing thing and makes what we do at our level with our expensive professional gear less impactful and impressive than it otherwise would be if 99% of humanity were not taking 25 images a day with their phones and looking at thousands of images a week on their phones.

Those cookies your Grandma used to make aren't as impressive anymore when you can buy a box of them at Costco for 3 bucks and gorge yourself to the point of nausea. Those Costco cookies are pretty good....
 
Jim said it best when he outlined the major advances of the ILC family of sensors in the past ten years. Sure, there have been some advances, but we all know the impact of the phone. There is less money to be made by spending big on advancements with the sensors we all want. You can make phones better with computational photography much more easily than making big physical advances in the sensors we want.
I still think that there is good money in the large sensor business, but I think that the large format image sensor boom is over. I guess that most users already have interchangeable lens cameras. So, what I think we see is just replacement and upgrades.

I read that the development cycle for sensors was around four years, often with two teams working 'interleaved' so a new sensor was released every second year. Now, I think we see a four-year cycle.

As several posters have noted, I would think that we are pretty close to detecting all photons passing through the CFA (Color Filter Array), and read noise is down to low single-digit electron counts, like 3-4 electrons at base ISO and around 1 e- at high gain. So there is not much to gain from further development.
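One way to see what those read-noise figures buy is per-pixel engineering dynamic range, log2(FWC / read noise). The read-noise values below are the ones quoted above; the full-well values are illustrative assumptions, not measurements:

```python
import math

def dr_stops(fwc_e, read_noise_e):
    """Per-pixel engineering dynamic range in stops."""
    return math.log2(fwc_e / read_noise_e)

# Low conversion gain: big well, higher read noise (base ISO).
print(f"base ISO:  ~{dr_stops(50000, 3.5):.1f} stops")
# High conversion gain: read noise ~1 e-, but a smaller effective well.
print(f"high gain: ~{dr_stops(4000, 1.0):.1f} stops")
```

Since both FWC per area and read noise are near their practical floors, this ratio, and hence per-pixel DR, has little headroom left, which is the "not much to gain" point.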
I just went back and looked at some RAW files I took with my Canon DSLR and L lenses ten years ago and the IQ is pretty darn good. (The first thing I noticed is that I can do a better job in post now with newer LR and better skills). Is it close to GFX image fidelity that I shoot now? No, not really even close, but not everyone is looking at the images on a big 4K (soon to be 6K) pro monitor and they won't really notice it unless they know how to look at it and view at higher res.

So like Macro said, how much do we need?
Back when sensors were around 12 MP, it was said that this was quite enough for any print size when viewed at a 'reasonable' distance. There is some truth to that. But there is also truth to the statement: we want some more.

My impression is that Jim has noted a significant difference between the GFX 50S and Sony A7RII on prints 30" high, with both cameras using excellent lenses. He also found that going from the GFX 50S to the GFX 100 much reduced aliasing.

My take is that there is an optimal pixel size for any given technology, probably related to the wiring/junction area compared to the photodiode area. As feature size goes down, the optimal pixel size goes with it.

But there is more to taking pictures than just sensor quality. The lens system matters a lot. There are also practical factors like faster AF and faster readout, which may yield a more usable electronic shutter and higher frame rates.
The money is in the phone-size sensor department.

The article makes some editorial style statements that are true in a sense but could be torn apart individually. Their comparison methodology is silly and if they think any phone shot will hold up against an APSC, FF or MF image, that is insane. My wife's iPhone shots fall apart when viewed at anywhere near the size of my GFX or Q2 files.
Image sensors have many uses, cell phones are just one of those. They are used for surveillance, machine vision and automotive, among other things.
But yes, we all get it. Phone sensors along with computational photography and upcoming AI capability with phone photography is an amazing thing and makes what we do at our level with our expensive professional gear less impactful and impressive than it otherwise would be if 99% of humanity were not taking 25 images a day with their phones and looking at thousands of images a week on their phones.

Those cookies your Grandma used to make aren't as impressive anymore when you can buy a box of them at Costco for 3 bucks and gorge yourself to the point of nausea. Those Costco cookies are pretty good....
I would guess that cell phones are pretty useful.

I never compared image quality between my cell phone and my different photo gear.

Best regards

Erik
 
I have a budget Motorola G30 phone. The camera module is not really in the same league as new iPhone cameras. But...

I took a quick snap with it and printed it out full A3 size. It looks really good. Exactly as good as any dedicated camera image I've printed. These days people may not regard A3 as being a large print, but it is twice the area of any film camera image I have ever printed. I wouldn't underestimate even modest camera phones. No, it's not like pixel peeping a GFX100 image on a 6K screen but that is like printing something 30 yards wide (or whatever the real size would be).
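For context on the A3 claim, here is a rough pixels-per-inch check. The camera resolutions below are illustrative examples, not the poster's actual gear:

```python
# Rough ppi when fitting an image onto an A3 print
# (297 x 420 mm, i.e. about 11.69 x 16.54 inches).
A3_LONG_IN, A3_SHORT_IN = 16.54, 11.69

examples = {
    "12 MP phone": (4000, 3000),
    "24 MP FF":    (6000, 4000),
    "102 MP MF":   (11648, 8736),
}
for name, (long_px, short_px) in examples.items():
    # "Fit" ppi: limited by whichever dimension runs out of pixels first.
    ppi = min(long_px / A3_LONG_IN, short_px / A3_SHORT_IN)
    print(f"{name}: ~{ppi:.0f} ppi on A3")
```

Even a 12 MP phone lands near 240 ppi on A3, not far below the ~300 ppi often treated as print quality, which is consistent with the poster's experience.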

I would never choose to use my phone as a camera for anything other than snapshots or in an emergency (terrible haptics and viewfinder experience for old eyes), but it is not fair to pretend they are toys. They are not.
 
Meh. Not a great article, but the point that FF sensor IQ has flattened out is valid. From the Sony A7R or A7RII onward, IQ has been pretty flat for FF or larger sensors.
Medium format isn't mentioned, yet, are the cameras of recent years just mirrorless versions of what's been around for the past decade or small tweaks to previous mirrorless models?
It's basically the same sensor tech Sony is using in their mainstream FF bodies.
Is the information presented technically reliable? Is the phone argument valid?
Cell phone IQ is mostly the result of computational processing. The metric one should be using is not how pretty the image looks, but how closely does it match the scene that was photographed. Catch is, most people will happily take pretty over accurate.
Thanks for your opinions.
One could argue the following breakdown, but here's what I see as the big jumps in camera sensors:
  • Sensors get bigger (i.e., fab process yield improves to make larger dies viable)
  • Improvements in MP count
  • Improvements in DR
  • Main-sensor PD (either masked pixels or dual pixels)
  • Switch to BSI from FSI
  • Switch to stacked BSI sensor + memory
  • Switch to stacked BSI sensor + readout and memory
The sequence above is roughly Sony's, although BSI happened before good PD in their FF sensors. Canon, by contrast, did the PD step before the DR improvements (which really started for them with the 5DIV) and is still mostly FSI.

Larger sensors tend to lag a bit behind smaller sensors in applying the newest tech. The Sony 44x33mm sensors are currently at the BSI stage (which started with the A7RII). Things like the A1 are at the stacked sensor + memory stage and current cell phone sensors are at the stacked sensor + readout and memory stage. My latest understanding is that the sales volume of small sensors is so much higher that there really isn't a lot of motivation to put the newest tech in larger sensors -- it is no longer technical difficulty with fabbing large sensors that slows advances, but just market pressure (and a little bit of an issue with power dissipation). That understanding came from some very depressing conversations I had at EI2023 with folks from Omnivision and other sensor tech leaders.

As for raw IQ, note that it's pretty much unchanged by the steps after DR improvement. What's really happening now could be summarized as improving the temporal qualities of image capture, and that helps still capture, but is especially important for video.

There is a real question as to whether future dedicated cameras will be about making nice images or making images that accurately represent the scene. So far, it's been about accurately capturing the scene, whereas cell phones use computational/trained-AI methods to synthesize prettier images from fundamentally mediocre raw image data. However, one could argue that the latest vlogging-oriented cameras are changing the emphasis...
 
Good post, Hank.
 
I have a budget Motorola G30 phone. The camera module is not really in the same league as new iPhone cameras. But...

I took a quick snap with it and printed it out full A3 size. It looks really good. Exactly as good as any dedicated camera image I've printed. These days people may not regard A3 as being a large print, but it is twice the area of any film camera image I have ever printed. I wouldn't underestimate even modest camera phones. No, it's not like pixel peeping a GFX100 image on a 6K screen but that is like printing something 30 yards wide (or whatever the real size would be).

I would never choose to use my phone as a camera for anything other than snapshots or in an emergency (terrible haptics and viewfinder experience for old eyes), but it is not fair to pretend they are toys. They are not.
They are not toys, and none of us can live without our phones now. Everybody has one.

I'm about to get a Pixel 8 (I have the old Pixel 5) when it comes out this Fall and that is rumored to be quite a camera. Amazing what they are doing with those tiny sensors and computational photography at the phone level.
 
Very good Hank. Well done. Nice read. You wrote a nice article there.
 
IMHO we're at the point where further improvements in technical quality will/would be largely inconsequential for 99.9% of photographic pursuits.

Just like 4x5" (or even medium format film) was and still is good enough for 99.9% of real-world photography.

Time to get back to focusing on what really matters, like composition, exposure, lighting, etc.

In a way, it's liberating.
 
