Interchangeable lens cameras: stagnation in image quality?

IMHO we're at the point where further improvements in technical quality will/would be largely inconsequential for 99.9% of photographic pursuits.

Just like 4x5" (or even medium format film) was and still is good enough for 99.9% of real-world photography.

Time to get back to focusing on what really matters, like composition, exposure, lighting, etc.

In a way, it's liberating.
I hear you and agree. However, phones are going to get ever-increasing levels of computational capability and AI. Pretty soon a 5-year-old will point a phone at the altar inside a dark church and get something better than I can get with all of my years of experience shooting churches with high-res gear and great lenses. I exaggerate here, but you get the point.

Plus, what you say is true but that won't stop me from buying a 200 MP GFX 200 if they make one.

And let's not forget post-processing in Lightroom (LR) and AI with raw files. Pretty soon LR will make the most boring image look great with all kinds of AI enhancements and pixel replacement automatically from a database of a billion images.
 
I think Hank nailed it wrt better cameras and computational photography. Which is better depends on whether you want accuracy or just plausibility.
 
I think Hank nailed it wrt better cameras and computational photography. Which is better depends on whether you want accuracy or just plausibility.
I agree! Yes, well said.
 
I think Hank nailed it wrt better cameras and computational photography. Which is better depends on whether you want accuracy or just plausibility.
Some of the tricks 'computational photography' does are just extensions of what we can already do, like multishot.

I would make the subtle point that shooting in churches may be helped by a tripod, and getting it right may take some effort to protect the highlights.



View attachment 70e80de242f3415d82a9e096e8c31c89.jpg
This illustrates some of the issues. This was shot with a Canon TS-E 24mm f/3.5L II lens; it combined a shifted and an unshifted HDR image, six exposures in all. HDR allows exposing for the highlights while avoiding noise in the dark. I wouldn't rule out that an iPhone may shoot an image like this. But it would need some pretty good AI to get it right.
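A minimal sketch of the kind of bracketed-exposure merge this relies on (illustrative only, assuming already-aligned linear frames; not the exact workflow described above): each frame is scaled back to scene radiance by its exposure time and averaged with weights that favour well-exposed pixels, so highlights come from the short exposures and shadows from the long ones.

```python
import numpy as np

def merge_bracket(frames, exposure_times):
    """frames: aligned, linear grayscale arrays scaled to [0, 1], one per bracketed shot.
    Returns a linear HDR radiance map that still needs tone mapping for display."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(num)
    for frame, t in zip(frames, exposure_times):
        # Hat-shaped weight: trust mid-tones, distrust pixels near clipping or the noise floor
        weight = 1.0 - (2.0 * frame - 1.0) ** 2
        num += weight * frame / t   # divide by exposure time to recover relative radiance
        den += weight
    return num / np.maximum(den, 1e-6)
```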

Best regards

Erik

--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic tends to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 
.... but what you do with it. But obviously they never saw what the Fuji 100MP cameras with the best Fuji lenses are capable of.
 
View attachment 70e80de242f3415d82a9e096e8c31c89.jpg
This illustrates some of the issues. This was shot with a Canon TS-E 24mm f/3.5L II lens; it combined a shifted and an unshifted HDR image, six exposures in all. HDR allows exposing for the highlights while avoiding noise in the dark. I wouldn't rule out that an iPhone may shoot an image like this. But it would need some pretty good AI to get it right.

Best regards

Erik
Like I just said. It's what you do with it. Breathtakingly beautiful!
 
Pretty soon LR will make the most boring image look great with all kinds of AI enhancements and pixel replacement automatically from a database of a billion images.
Sure, but uninteresting to me, because that's not why I take pictures. I take pictures to remind myself of what it actually looked and felt like when I was somewhere. Not what it could have been in some parallel fantasy land.
 
View attachment 70e80de242f3415d82a9e096e8c31c89.jpg
This illustrates some of the issues. This was shot with a Canon TS-E 24mm f/3.5L II lens; it combined a shifted and an unshifted HDR image, six exposures in all. HDR allows exposing for the highlights while avoiding noise in the dark. I wouldn't rule out that an iPhone may shoot an image like this. But it would need some pretty good AI to get it right.

Best regards

Erik

No disrespect intended, but that particular image doesn't look good/right to me. Too little contrast between the windows and interior.
 
Pretty soon LR will make the most boring image look great with all kinds of AI enhancements and pixel replacement automatically from a database of a billion images.
Sure, but uninteresting to me, because that's not why I take pictures. I take pictures to remind myself of what it actually looked and felt like when I was somewhere. Not what it could have been in some parallel fantasy land.
Yes, it is so interesting now that relative truth is so much a part of everyday photography. Your post reminds me of the little wooden horse in Blade Runner 2049. The authentic and most real will be appreciated, but the struggle is also real.
 
https://www.digitalcameraworld.com/news/its-official-new-cameras-are-not-getting-any-better

Hello friends, what kind of comments do you have on this article?

Medium format isn't mentioned, yet are the cameras of recent years just mirrorless versions of what's been around for the past decade, or small tweaks to previous mirrorless models?
Past decade meaning from 2012?

I remember buying my first mirrorless camera, a GF1, in Dec 2009, and looking back I feel mirrorless cameras (m4/3 > APS-C > FF > MF) have progressed slowly and steadily over the past decade: sensor size, resolution, innovations and breakthroughs in sensor design, and tech-sharing top-down and bottom-up among cameras of different sensor sizes.

The only part which feels "disconnected" to me is global screen viewers' complacency with the 8-bit JPEG format, while at the same time many are hungry for 4K, 6K, or 8K video recording and display.

Sensor stagnation

I think there has been a longer stretch of stagnation for m4/3 and APS-C sensors compared to FF and MF?

"Relentless push towards the 200MP camera phone sensors" = Samsung Galaxy S23 Ultra's ISOCELL HP2 packs 200-million 0.6-micrometer (μm) pixels in a 1/1.3” optical format

[Attached: three comparison screenshots]

Just looking at the comparison (I know, YouTube-compressed video), I'm not so convinced that this isn't another gimmick with such a small sensor. Samsung's images feel like some super-resolution software + AI + 400% upscaling on a 50MP 1/1.3" sensor to me. It's like someone saying, "We may not have the quality and detail, but we can always have the quantity (size)!" But of course, it's a smart move given there's only so much detail available to capture at a 28mm focal length.

Can anyone take a really sharp image and try 400% super-resolution? I think I can do that too with the GFX 50S, exporting at the 250% maximum in Capture One, but as a sanity check: why do I need 200MP images?

My next nasty question to Samsung: show me evidence, with microscope images, of 200 million nano-pixels (0.6µm, pixel by pixel!) lying flat on that tiny 1/1.3" sensor? :-P I wish I could count all 200 million pixels to make sure they're not cheating. lol

Also, given a solid, no-gimmick Sony 44x33 sensor with a 0.6µm pixel pitch plus 2x2, 3x3, or 4x4 binning tech, I'm guessing: GFX 50S = 51.4MP at 5.3µm, so 5.3µm divided by 0.6 times 51.4MP = a 450MP MF sensor? GFX450 :-D (My silly math is probably wrong)

- A 450MP 0.6µm MF camera?

- Image quality? Unsure.

- Needs, wants, and demands?

- Heat? Could the best thermoelectric cooler / heatsink cope?

- Lens resolving power for a 450MP MF sensor?

- Cost: $20,000?
 
Also, given a solid, no-gimmick Sony 44x33 sensor with a 0.6µm pixel pitch plus 2x2, 3x3, or 4x4 binning tech, I'm guessing: GFX 50S = 51.4MP at 5.3µm, so 5.3µm divided by 0.6 times 51.4MP = a 450MP MF sensor? GFX450 :-D (My silly math is probably wrong)
51.4*(5.3/0.6)^2 = 4 gigapixels
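Spelled out, since the pixel count scales with the square of the linear pixel-pitch ratio (just a sanity check of the arithmetic, not a product proposal):

```python
# Pixels tile the same 44x33 mm area in two dimensions, so the count scales with
# (old_pitch / new_pitch) squared, not with the plain ratio.
gfx50s_mp = 51.4     # GFX 50S resolution in megapixels
old_pitch = 5.3      # GFX 50S pixel pitch, micrometers
new_pitch = 0.6      # ISOCELL HP2-style pixel pitch, micrometers

scaled_mp = gfx50s_mp * (old_pitch / new_pitch) ** 2
print(f"{scaled_mp:.0f} MP, i.e. about {scaled_mp / 1000:.1f} gigapixels")  # ~4011 MP
```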
 
51.4*(5.3/0.6)^2 = 4 gigapixels
Yes - Jim to the rescue. Thank you sir!

Now Samsung... :-D

[Image attachment]

Counting Pixels lol :-D

https://petapixel.com/2013/02/12/what-a-dslrs-cmos-sensor-looks-like-under-a-microscope/

--
Shooting On The Fly Everyday!
 
Some of the tricks 'computational photography' does are just extensions of what we can already do, like multishot.
Google does multi-shot merging pretty much all the time. Most smartphones do.
I would make the subtle point that shooting in churches may be helped by a tripod, and getting it right may take some effort to protect the highlights.
They do full alignment with lens corrections, so a tripod generally isn't necessary.

In fact, they often also do detection and de-blurring of moving components of the scene -- like Sony did in the multi-shot anti-shake modes all the way back in the original NEX-5 and some A-mount bodies. It's worth noting that Sony didn't just decide anti-shake wasn't important for the NEX-5 and thus leave it out; they made the judgment call that their computational multi-shot deblur was better suited to such a small camera body than the IBIS they used in all their previous ILCs. For the record, the amount of deblurring you get with Sonys that combine IBIS and multi-shot is utterly amazing, and it's a pity folks didn't appreciate that, instead complaining that it only produced JPEGs rather than raw files (probably because some computational steps were done inside the hardware of the JPEG engine).
View attachment 70e80de242f3415d82a9e096e8c31c89.jpg
This illustrates some of the issues. This was shot with a Canon TS-E 24mm f/3.5L II lens; it combined a shifted and an unshifted HDR image, six exposures in all. HDR allows exposing for the highlights while avoiding noise in the dark. I wouldn't rule out that an iPhone may shoot an image like this. But it would need some pretty good AI to get it right.
You're going to have to explain what you mean when you say "combined a shifted and an unshifted HDR image." Stitching generally uses AI-assisted warping and alignment. Incidentally, what you've posted is a JPEG, not an HDR image, so most of the tonality is coming from tone mapping in postprocessing -- in other words, a tone-mapping AI created it. ;-) In fact, the tones shown are synthesized and do not reflect the true appearance of the scene.

As far as getting the tones right in the HDR sense, smartphones do great on stationary subjects. The main catch is that when they shoot the images to stitch, most phones do NOT vary the exposure between them, instead basically shooting ETTR frames at a relatively high framerate, which means that shadow areas in each shot are pretty seriously underexposed. That underexposure can mean the darkest shadow areas in a relatively dimly-lit scene might get averaged enough to not look noisy, but probably don't show much scene detail either. Despite high pixel counts (my S20 Ultra has 108MP, more than a GFX100S), the scene resolution delivered is not all that close to the pixel count on phones, so a smartphone image is usually also sharpness-enhanced by AI, and the details probably wouldn't exactly match the scene at resolutions above 20MP or so. I think most smartphone image pipelines are tuned for making nice images at around 12MP full sensor or 8MP video crop (i.e., 4K video).
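A toy sketch of the align-and-average idea behind that kind of burst merge (not any phone's actual pipeline; it assumes same-exposure grayscale frames and only a global translation between them):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def merge_burst(frames):
    """frames: list of same-exposure grayscale float arrays from a handheld burst."""
    ref = frames[0]
    aligned = [ref]
    for moving in frames[1:]:
        # Estimate the global (dy, dx) offset and register the frame to the reference
        offset, _, _ = phase_cross_correlation(ref, moving)
        aligned.append(nd_shift(moving, offset, mode="nearest"))
    # Averaging N frames cuts random noise roughly by sqrt(N); real pipelines also
    # detect and handle subject motion instead of blindly averaging it away.
    return np.mean(aligned, axis=0)
```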
 
You're going to have to explain what you mean when you say "combined a shifted and an unshifted HDR image." Stitching generally uses AI-assisted warping and alignment. Incidentally, what you've posted is a JPEG, not an HDR image, so most of the tonality is coming from tone mapping in postprocessing -- in other words, a tone-mapping AI created it. ;-) In fact, the tones shown are synthesized and do not reflect the true appearance of the scene.
I think HDR-like techniques can be used specifically to approximate the appearance of a place. When we walk into this kind of lighting, our visual impression isn't formed at any one instant. The way our eyes dart around from dark to light areas, irises opening and closing constantly, is a lot like making multiple exposures. When the visual cortex combines them into what becomes our impression of the space, we're essentially doing HDR.

When I photographed spaces with this kind of lighting, my internal debate was often between fidelity to the experience and showing more interesting detail. I worked in spaces darker than this, and worried that being too true to the experience would lead to a very gloomy body of work. Most often I chose a compromise: shadows elevated enough to show all the interesting things, but not so much that the scene looked completely uncanny. This was sometimes easier said than done. When people make comments like, "that's cool, I thought it was a painting!" I worry that I went too far.

FWIW, my favorite exposure blending technique didn't use any AI, or high-bit depth intermediate image, or tone mapping. It just chose the highest contrast pixel groups from each image, and created a 16-bit TIF that you could then adjust as needed. I liked it because it wasn't really capable of wild effects. The difference between its results and what you could do with a single exposure was subtle: no blocked highlights, and no noise in the shadows.
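A rough sketch of that sort of contrast-driven blend (an approximation of the idea, not the actual tool; it assumes aligned, equal-size grayscale exposures): for each neighborhood, keep the pixel from whichever exposure shows the most local contrast there, then write a 16-bit intermediate to adjust later.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def contrast_select_blend(exposures, window=16):
    """exposures: list of aligned grayscale float arrays in [0, 1] (bracketed shots)."""
    stack = np.stack(exposures)                                    # (N, H, W)
    mean = uniform_filter(stack, size=(1, window, window))
    sq_mean = uniform_filter(stack ** 2, size=(1, window, window))
    local_std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))      # local contrast per exposure
    winner = np.argmax(local_std, axis=0)                          # best exposure per pixel
    blended = np.take_along_axis(stack, winner[None], axis=0)[0]
    # A real tool would also feather the transitions between regions to avoid visible seams.
    return np.clip(blended * 65535.0, 0, 65535).astype(np.uint16)  # 16-bit TIF-style output
```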
 
I think HDR-like techniques can be used specifically to approximate the appearance of a place. When we walk into this kind of lighting, our visual impression isn't formed at any one instant. The way our eyes dart around from dark to light areas, irises opening and closing constantly, is a lot like making multiple exposures. When the visual cortex combines them into what becomes our impression of the space, we're essentially doing HDR.

When I photographed spaces with this kind of lighting, my internal debate was often between fidelity to the experience and showing more interesting detail. I worked in spaces darker than this, and worried that being too true to the experience would lead to a very gloomy body of work. Most often I chose a compromise: shadows elevated enough to show all the interesting things, but not so much that the scene looked completely uncanny. This was sometimes easier said than done. When people make comments like, "that's cool, I thought it was a painting!" I worry that I went too far.

FWIW, my favorite exposure blending technique didn't use any AI, or high-bit depth intermediate image, or tone mapping. It just chose the highest contrast pixel groups from each image, and created a 16-bit TIF that you could then adjust as needed. I liked it because it wasn't really capable of wild effects. The difference between its results and what you could do with a single exposure was subtle: no blocked highlights, and no noise in the shadows.
Paul, I agree. Our eyes and brain don't process the deep shadows and highlights the same way a single-shot image from our camera does because, like you said, the eyes dart around and the brain processes a lot of DR. We put it all together to see the various areas of a dark church and bright windows, and somehow it seems to happen in one glance. Anyway, I said that wrong.

But I stopped playing around with HDR. It just didn't look right to me, probably from my lack of skill at fine-tuning it. But I don't know; I had good software doing it and also knew the manual tricks in PS. Those images just look flat to me. I don't mean Erik's fine example. I mean my own.

With GFX we have a lot of latitude to expose for highlights and pull out the darker parts in post. If there is noise, we can run DeNoise AI with LR and the results are amazing.

I have done some tests in churches where I shoot one GFX 100 shot and then do a six-shot HDR sequence. I post-process the HDR sequence and then do my best with the single shot. I compare them, peeping at full res, and I like the single shot better.

--

Greg Johnson, San Antonio, Texas
 
That is a fantastic post Hank.
 
Pretty soon LR will make the most boring image look great with all kinds of AI enhancements and pixel replacement automatically from a database of a billion images.
Sure, but uninteresting to me, because that's not why I take pictures. I take pictures to remind myself of what it actually looked and felt like when I was somewhere. Not what it could have been in some parallel fantasy land.
Yes, it is so interesting now that relative truth is so much a part of everyday photography. Your post reminds me of the little wooden horse in Blade Runner 2049. The authentic and most real will be appreciated, but the struggle is also real.
Well said - these are good posts. I'm surprised we don't talk about it more. Me? I'm more of a documentary-style photographer: here is where I've been and here is what it looked like.

Sure, I post-process in LR and might bring out some shadows, add a little S-curve, or mask the sky and drop the exposure a stop there. I'll add some pop with the various contrast and saturation sliders, and I might remove a cigarette butt or plastic bottle from the ground. I might move the temp a little warmer.

But I don't do HDR, stitching, sky replacement, adding reflections, color replacement, or removing big items from the scene. I never add objects. I will do some focus stacking (though not often). That is fair game.
 
I think HDR-like techniques can be used specifically to approximate the appearance of a place. When we walk into this kind of lighting, our visual impression isn't formed at any one instant. The way our eyes dart around from dark to light areas, irises opening and closing constantly, is a lot like making multiple exposures. When the visual cortex combines them into what becomes our impression of the space, we're essentially doing HDR.
A sort-of valid point. However, human eyesight is only good for at most 11-13EV of DR in a scene, whereas high-end cameras beat that in a single shot. Humans can cover a larger DR over time, but it takes a while for your eyes to adjust; it's not just a matter of glancing at a shadow area. So, if you're feeling the need to do an HDR sequence, you're trying to get details you literally can't see. Then again, you can see them in the EVF of your camera because it's doing some tone mapping... ;-)

I'll never forget the first time I noticed this. I was shooting with an A7 pointed down a hall at a window when I noticed I could see stuff in the hall and in daylight out the window simultaneously in the A7 viewfinder, but IRL from where I stood I couldn't see stuff outside. My eyes were adjusted to the hallway light level.
... FWIW, my favorite exposure blending technique didn't use any AI, or high-bit depth intermediate image, or tone mapping. It just chose the highest contrast pixel groups from each image, and created a 16-bit TIF that you could then adjust as needed. I liked it because it wasn't really capable of wild effects.
That isn't too different from basic tone mapping, but you need some blending otherwise you will get really strange effects at the borders of regions. You also can get really serious tonal imbalances for one region vs. another (e.g., stuff inside a room looking brighter than stuff outside a window). We could argue about how much cleverness has to be applied for tone mapping to count as AI, and do things like Retinex count? I would argue they generally are AI-based (heuristic) methods, although most do not use trained neural networks. One of the best simple algorithms I know for tone mapping basically models the problem as a system of PDE (partial differential equations) for minimizing disturbance to average brightness while preserving local contrast. Is that AI? Well, it's artificial and it's intelligent and it's a heuristic.
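As a rough illustration of the simpler end of that spectrum (a toy single-scale Retinex-style operator, not the PDE method mentioned above; it assumes a linear grayscale image in [0, 1]): compress the smooth, low-frequency 'illumination' component and leave local contrast alone.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simple_local_tonemap(linear, sigma=50, compression=0.6):
    """Single-scale Retinex-style tone mapping sketch for a linear image in [0, 1]."""
    log_lum = np.log1p(linear)                             # work in log space
    illumination = gaussian_filter(log_lum, sigma=sigma)   # smooth base layer
    detail = log_lum - illumination                        # local contrast, kept as-is
    mapped = compression * illumination + detail           # squeeze only the base layer
    out = np.expm1(mapped)
    return np.clip(out / out.max(), 0.0, 1.0)              # normalize for display
```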
The difference between its results and what you could do with a single exposure was subtle: no blocked highlights, and no noise in the shadows.
Paul, I agree. Our eyes and brain don't process the deep shadows and highlights the same way a single-shot image from our camera does because, like you said, the eyes dart around and the brain processes a lot of DR. We put it all together to see the various areas of a dark church and bright windows, and somehow it seems to happen in one glance.
Not DR within one scene that a GFX100S can't get in one shot. Still, I agree that in modest doses, HDR capture of much wider DR is really nice. I nearly always use Sony's DRO for camera JPEGs (although I do shoot raw+JPEG). Here's my argument:

What a photographer wants to capture is not the light (photons) as so many people claim, but the envisioned appearance of the scene. If you're using a tool (camera EVF) that lets you see and compose with a wider-than-humanly-possible DR, it is still your vision driving the composition, etc., even though it would look different unaided. Folks like Ansel Adams invested a lot of effort in making the final rendering of an image match their intent even when what the film captured didn't quite, and that's all a lot of HDR tone mapping is really trying to do.
 
