IQ: Does it matter any more?

Chris Noble

I was struck by a post in the Micro Four Thirds forum comparing the 2014-vintage 16 MP GM5 with the 2022-vintage 20 MP OM-1. Same sensor size, eight years of "improvement"?

The difference in visible IQ was slight, detectable only by pixel-peeping test images, and what was visible was mostly minor noise differences, which can now be addressed very effectively in post-processing (PP).

Are the measurements no longer relevant, i.e., are the differences beyond what is visible, so who cares? Is useful sensor evolution maxed out?

(I exclude shutter technology improvements such as global shutter, and image stabilization, which is part of the sensor mount, not the sensor itself.)
 
It seems it doesn't matter to you, and if that's the case, why does it matter to you whether other people think it matters? Is it going to change your mind?
 
Was the test conducted at various ISO levels? Were the images post-processed? Were the test images taken of a subject that required lots of dynamic range? Yes, I looked, but they're just test images; I prefer real-life images.

Yes, IQ matters, and it's why so many have moved to larger-sensor cameras, i.e., "full frame."

--
Have camera, will travel.
 
My opinion is that it only matters to the photographer, and not even to all of them, just the really technical, perfectionist ones. Nobody else is ever going to care or notice. Asked to pick the best photo out of a particular bunch, the average person is just as likely to pick one from the simplest low-MP point-and-shoot, taken by a complete beginner with a lot of luck, as one taken by a pro with $10,000 worth of camera and lens.
 
There are about 100,000 posts on this website that contain "image quality".

And more than 40,000 that contain the string "IQ".

Of course, some of them contain both terms, but judging by the numbers, yes, IQ seems to matter to a large number of members.
 
The question is, how does a photographer achieve the image quality (IQ) they desire?

During the first decade or so of the 2000s, camera manufacturers focused development on improving dynamic range and low-light performance. This was a factor in the race to be among the first to manufacture full-frame flagship cameras. The larger the sensor, the greater the light-gathering potential, and the more light, the less visible noise becomes. Combined with advances in how camera systems process images, this let photographers make quality, sellable photos at ISO 6400 or higher.
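
As a rough illustration of why more light means less visible noise, here is a minimal sketch; it assumes photon shot noise dominates and uses made-up per-pixel photon counts purely for illustration:

    import math

    # Photon shot noise is Poisson: for N captured photons the noise is
    # sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
    def snr_db(photons):
        return 20 * math.log10(math.sqrt(photons))

    mft_photons = 10_000          # hypothetical Micro Four Thirds pixel
    ff_photons = 4 * mft_photons  # full frame gathers ~4x the total light

    print(f"MFT SNR: {snr_db(mft_photons):.1f} dB")  # ~40.0 dB
    print(f"FF  SNR: {snr_db(ff_photons):.1f} dB")   # ~46.0 dB
    # Quadrupling the light doubles the SNR (+6 dB) -- the familiar
    # "two stop" noise advantage quoted for full frame over Four Thirds.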

The 2010s saw limited improvement in raw dynamic range but significant growth in resolution, combined with real improvements in autofocus, burst rate, and buffer depth. These were all products of significant advances in data processing, which drives everything in digital photography and image making.

At the same time, we saw the emergence and maturation of mirrorless camera systems. By the end of the 2010s, it was clear that mirrorless was the future of digital imaging.

As the mirrorless platform and onboard data processing have continued to evolve in the 2020s, we've seen sensor tech enabling next-generation autofocus, along with exponential growth in burst rate and buffer depth.

Outside the camera, computational photography and AI have produced huge steps forward in mitigating noise and in preserving and enhancing captured detail. This evolution has also blurred the line between photography and image making, but that's a topic for another thread.

The bottom line is that the availability of AI noise reduction and sharpening in photo processing and editing apps has taken the burden off manufacturers to develop sensors with another full stop of dynamic range. The IQ that would come with such a development can be fabricated in an app.

So, while image quality (IQ) always matters, photographers are looking beyond camera systems to achieve their desired IQ. Smartphone users don't even have to think about it: manufacturers simply build in the processing power needed to run computational photography and AI in the background and produce really nice-looking images.
 
The bottom line is that the availability of AI noise reduction and sharpening in photo processing and editing apps has taken the burden off manufacturers to develop sensors with another full stop of dynamic range. The IQ that would come with such a development can be fabricated in an app.
Ah, but if the camera manufacturers could get another full stop of dynamic range, the same AI noise reduction could then fabricate yet another stop on top of that, and so it goes on.
 
It seems it doesn't matter to you, and if that's the case, why does it matter to you whether other people think it matters? Is it going to change your mind?
What an extraordinary statement... You don't care what other people think? How do you learn anything?
 
Is useful sensor evolution maxed out?
I think it's been established that existing sensor technology is close to maximum theoretical efficiency in terms of light collection. Pixel density is also more than adequate for almost any purpose. What other aspects of IQ would benefit from further sensor advances?
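
To put a rough number on "close to maximum": if we assume an illustrative quantum efficiency of around 70% for a good modern sensor (the exact figure is an assumption here), even perfect light collection would buy only about half a stop, since shot-noise SNR scales with the square root of the photons detected:

    import math

    current_qe = 0.7  # assumed QE of a good modern sensor (illustrative)
    perfect_qe = 1.0  # the physical ceiling

    # Perfect QE detects 1/0.7 of the photons, i.e. ~1.43x the light;
    # expressed in stops, that's the entire headroom left from QE alone.
    equiv_stops = math.log2(perfect_qe / current_qe)
    print(f"Headroom from QE alone: {equiv_stops:.2f} stops")  # ~0.51
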
(I exclude shutter technology improvements such as global shutter, and image stabilization, which is part of the sensor mount, not the sensor itself.)
 
Ah, but if the camera manufacturers could get another full stop of dynamic range, the same AI noise reduction could then fabricate yet another stop on top of that, and so it goes on.
This raises the question: what would be needed to realize a full-stop gain in sensor dynamic range? I assume this would require an improvement in quantum efficiency, but I could be wrong.
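
One common way to frame it, as a simplified sketch with made-up but plausible numbers: engineering dynamic range is roughly log2(full-well capacity / read noise), so a full stop means doubling the well depth or halving the read noise, while quantum efficiency mostly shifts sensitivity rather than this ratio:

    import math

    # Simplified engineering DR: stops between read-noise floor and clipping.
    def dr_stops(full_well_e, read_noise_e):
        return math.log2(full_well_e / read_noise_e)

    full_well = 50_000  # electrons at base ISO (illustrative assumption)
    read_noise = 3.0    # electrons RMS (illustrative assumption)

    print(f"baseline:        {dr_stops(full_well, read_noise):.1f} stops")      # ~14.0
    print(f"2x full well:    {dr_stops(2 * full_well, read_noise):.1f} stops")  # ~15.0
    print(f"half read noise: {dr_stops(full_well, read_noise / 2):.1f} stops")  # ~15.0
    # Either change alone buys exactly one stop, which suggests the gain
    # would come from deeper wells or quieter readout rather than QE.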

A related question: would customers pay more for that, or for more powerful AI tools, faster burst rates, autofocus that detects not just a person but a specific person, more pixels...?

We already see members of this forum post some fine-looking images made at ISO 12800 or even higher. Would greater dynamic range and better low-light performance be a huge selling point for most customers?
 
IQ as IQ (a semi-hypothetical quality of pure technical excellence, or the lack thereof, separate from both the subject and pictorial concerns like composition) is just one tool in the photographer's toolbox. How important it is to you depends on your goals, both for your photography in general and for any specific photograph.

Me, I'm not much of an IQ person. The technical side of things has always more or less bored me. I like using the camera, but fiddling beyond the basic choices doesn't interest me in itself. If I have enough IQ to reproduce what I saw accurately, according to my memory, that's plenty. I am most interested in IQ when shooting macro: part of the fun of macro is trying to create a clear, accurate close-up view of something tiny, so sharpness and related issues are particularly important. I'm least interested when I'm recording the passing scene and just want to convey a lively sense of the moment.
 
That test is basically about noise and resolution, only two elements of IQ.

And sensor size is but one metric as well; the processor and the rest of the pipeline attached to that sensor matter too.

But I think you're correct that sensors have done the basics very well for a very long time. Look at DxO's sensor rankings, for example; my ancient 645Z's sensor still sits way up there at the top. One can kvetch about their criteria, but still.

Readout speed and noise are perhaps the hotter issues now.

Dynamic range? Even old cameras captured more than we can use. Print is narrow, and so are standard-dynamic-range displays. Hit the HDR button in Ps or Lr with some of your old sunset pictures on a 1000-nit monitor and you'll see the roughly four stops of color and detail that we haven't been able to use effectively without crunching all that information into a smaller space. In other words, the sensors are still ahead of much of our other tech.
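
For the curious, the headroom arithmetic is simple, assuming the conventional ~100-nit SDR reference white (the display figures here are nominal, not measured):

    import math

    sdr_white_nits = 100  # conventional SDR reference white
    hdr_peak_nits = 1000  # the 1000-nit monitor mentioned above

    # Each photographic stop is a doubling of luminance.
    extra_stops = math.log2(hdr_peak_nits / sdr_white_nits)
    print(f"Highlight headroom over SDR white: {extra_stops:.1f} stops")  # ~3.3
    # Roughly the extra highlight detail a wide-DR sensor captures but an
    # SDR display has to tone-map away.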
 
On IQ: the GM5 uses a sensor from Panasonic's earlier 16 MP generation. Over the years, several improvements were made. The easiest: Olympus used a Sony 16 MP sensor with generally about one stop better DR than the Panasonic sensor. The later Panasonic 16 MP sensor then got a new microlens design to improve light gathering, and the AA filter was removed for roughly 10% sharper output. The latest 20 MP sensor (both Panasonic and Olympus now use Sony sensors) brought a further improvement in DR, and of course 25% more pixels.

Then the JPEG engine: compared with the GX7 (the same generation as the GM5), its successor the GX85 is at least 1-2 stops cleaner at high ISO, with better noise and hot-pixel control on long exposures too. I don't have a 20 MP model to compare, but according to other members we should see further improvement there.

Adding up the above, 1-2+ stops cleaner high ISO, roughly 10% sharper output, 1-2 stops better DR, and 25% higher resolution are not small improvements. That's for SOOC JPEGs.
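
As a rough sanity check on those numbers (the stop figures are this post's estimates, and linear detail scales with the square root of pixel count):

    import math

    mp_old, mp_new = 16, 20
    linear_gain = math.sqrt(mp_new / mp_old)  # detail scales with sqrt(MP)
    print(f"Pixels: +{(mp_new / mp_old - 1) * 100:.0f}%, "
          f"linear resolution: +{(linear_gain - 1) * 100:.1f}%")  # +25%, +11.8%

    # Stop-based estimates convert to linear factors as powers of two.
    high_iso_stops = 1.5  # midpoint of the 1-2 stop high-ISO estimate
    dr_gain_stops = 1.5   # midpoint of the 1-2 stop DR estimate
    print(f"High-ISO noise: ~{2 ** high_iso_stops:.1f}x cleaner")      # ~2.8x
    print(f"Dynamic range:  ~{2 ** dr_gain_stops:.1f}x linear range")  # ~2.8x
    # Individually modest, but together a visible SOOC difference.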

However, we know well that a more powerful raw converter or image editor can make up those differences. I believe this applies not only to M43 but to any other format too.

In the end, it really depends on how you weigh SOOC results: the later models deliver straight out of camera what the GM5 can match only after editing. It's a question of upgrading your software or your camera.

Then there are the feature set, AF capability, better live view, stabilization, burst speed... those you can get only from newer models. :-)

My 2 cents.
 
It seems it doesn't matter to you, and if that's the case, why does it matter to you whether other people think it matters? Is it going to change your mind?
What an extraordinary statement... You don't care what other people think? How do you learn anything?
Just the opposite: I have learned a vast amount on these forums, but it's not from what people 'think'!
 
Is useful sensor evolution maxed out?
I think it's been established that existing sensor technology is close to maximum theoretical efficiency in terms of light collection. Pixel density is also more than adequate for almost any purpose. What other aspects of IQ would benefit from further sensor advances?
Funny. I've heard that "maxed out" argument at least a few times a year over the last 20 years. I'm guessing it's not.
 
(I exclude shutter technology improvements such as global shutter, and image stabilization, which is part of the sensor mount, not the sensor itself.)
But those are exactly the technologies that let tiny smartphone sensors produce image quality far beyond their weight class! We're not seeing that fully realized in cameras just yet, but that's only a matter of time.

Otherwise... image quality has never mattered. Emotional impact is what we take pictures for; the grain and pixels are merely the medium for that content.

On the other hand, most noisy but impactful pictures would be improved by less noise, so better IQ is always better.

But everything's a compromise. Are you willing to carry heavier gear for more IQ? Are you willing to pay more for it? How much does your audience care? How much do you care? Only you can decide.
 
Before the release of the a9 III, so many were claiming that ISO was just a number. Now it's what everyone is talking about, despite most having had the camera only briefly at best, and those who haven't even touched it behaving far worse.

We don't even want to go down the rabbit hole of it being around 24 MP.
 
I was struck by a post in the Micro Four Thirds forum comparing the 2014-vintage 16 MP GM5 with the 2022-vintage 20 MP OM-1. Same sensor size, eight years of "improvement"?

The difference in visible IQ was slight, detectable only by pixel-peeping test images, and what was visible was mostly minor noise differences, which can now be addressed very effectively in post-processing (PP).
I noticed that the post you referred to involved comparing raw image files, whereas here you refer to addressing image-quality differences in post-processing. What matters to a whole lot of people is what the pictures look like straight out of the camera.

I have an Olympus E-450; its immediate predecessor was the E-420. I don't know for certain, but the reviews I saw assume the two models used the same sensor. However, the E-450 had an updated image processor, TruePic III+, and according to a review at CNET there were easily visible improvements to image quality:

". . . the extra number-crunching power brings noticeable benefits to picture quality. There are fewer white-balance inaccuracies for one, and slightly lower noise levels throughout the range, with maximum ISO 1,600 having better colour and detail than usual, with fewer distracting coloured speckles. . . Olympus has even managed to extend the dynamic range of JPEGs produced by the camera. Where the E-420 would clip highlights, should you take your eye off the histogram (real-time in live view), the E-450, with the 'auto gradation' option enabled, can largely be left alone to make complex auto-exposure decisions. That's great news for newbies or anyone looking to grab shots without constantly adjusting the camera's settings."
Are the measurements no longer relevant, i.e., are the differences beyond what is visible, so who cares? Is useful sensor evolution maxed out?
Maybe, but the sensor is just one part of a camera. The CNET review quoted above indicates how important a new imaging processor can be.
(I exclude shutter technology improvements such as global shutter, and image stabilization, which is part of the sensor mount, not the sensor itself.)
But wouldn't the things you're excluding also affect image quality?
 
I noticed that the post you referred to involved comparing raw image files, whereas here you refer to addressing image-quality differences in post-processing. What matters to a whole lot of people is what the pictures look like straight out of the camera.
But wouldn't the things you're excluding also affect image quality?
Good points. I was differentiating "image" (what is captured by the sensor) from "picture" (everything that happens afterwards), and treating "quality" as sensor measurements, excluding the contributions of the photographer and of PP in the camera or on the computer. This thread is about the evolution of sensor technology, with no implication that it is the most important factor (actually, as I indicated, I think sensor improvements are now moot).
 
The same can be said about sensor IQ as about the power of car engines. The analogy I always like to use: 100 HP vs. 50 HP is a big deal. 200 HP vs. 100 HP is also a big deal, but not as big. 400 HP vs. 200 HP wouldn't really matter to most people; they'd take the 400 HP over the 200 HP but wouldn't make any meaningful use of the extra power. 800 HP vs. 400 HP would be just silly for anyone on an actual road.

What matters by far to most people, once they pass a certain power threshold, is operation: how loud the car is, how smooth the ride is, gas mileage, connectivity, and so on. And so it is with cameras.

Right now, most have decided that smartphones are the way to go. Dedicated cameras are the "special silverware" broken out for holidays.

By the way, I'm one of the nutjobs who wants the 800 HP car (1,600 HP would be cool, too). But I'm happy with the 400 HP car I've got, and if I never got a better one, I'd be fine with that. I would like some more "operational" advances, but I'm good with what I've got there, too.

For what it's worth, my quality threshold for a "last camera" and "last lens" would be something like the Z8 + Tamron 35-150mm f/2-2.8. Of course, I'd have to complement that with another body and additional lenses (so I don't have to swap lenses), but what I already have covers all that (although I'll likely upgrade it too, just because).

But, as I said, for the vast majority the smartphone is where it's at, and they're happy with further improvements in smartphone tech.
 
