How much IQ do you really lose with a Canon?

I think that’s an overstatement of what I wrote, although with the right definition of nonprofessional maybe not. I said that the number of situations in which FF is good enough is increasing with time, though.
It's not just the work Sony is doing on high-res sensors. It's also software like Topaz Gigapixel, which is a completely different approach to enabling big prints.
 
I think that’s an overstatement of what I wrote, although with the right definition of nonprofessional maybe not. I said that the number of situations in which FF is good enough is increasing with time, though.
It's not just the work Sony is doing on high-res sensors. It's also software like Topaz Gigapixel, which is a completely different approach to enabling big prints.
I am not a fan of AI uprezzing. Too many artifacts and not-quite-believable details. And the time I'd have to spend going over an exhibition print with a fine-toothed comb is a demotivator for me.

Jim
 
I think that’s an overstatement of what I wrote, although with the right definition of nonprofessional maybe not. I said that the number of situations in which FF is good enough is increasing with time, though.
It's not just the work Sony is doing on high-res sensors. It's also software like Topaz Gigapixel, which is a completely different approach to enabling big prints.
No offense, but the results from Topaz - both with upscaling and noise reduction - are really not comparable to actual resolution or genuinely low-noise images.

In my opinion it's watercolour-painting mush, and it mostly targets people who have never seen what a high-res, low-noise image should or could look like.
 
In many ways I agree, Crispy, but having copies of both their standalone utilities and the combined Photo AI, I have to say that when working with old library stuff shot at 12MP, it is a usable way to get to a larger print. Also, if you are a Capture One user, the underperforming end of their otherwise very usable software is the noise reduction.

Where I draw the line, though, is the way some companies are grabbing this as an excuse to avoid R&D on sensor development beyond 50MP. Beyond the obvious "more detail", the real prize is truer tonal range, which is actually there for the taking in the original scene.
 
I am not a fan of AI uprezzing. Too many artifacts and not-quite-believable details.
For landscape it's not that much of a problem. Woods, fields and grassland are quite "predictable" and artifact-friendly. But I accept that it's a bit of cheating, and I have my reservations about the technology too.
 
In many ways I agree, Crispy, but having copies of both their standalone utilities and the combined Photo AI, I have to say that when working with old library stuff shot at 12MP, it is a usable way to get to a larger print. Also, if you are a Capture One user, the underperforming end of their otherwise very usable software is the noise reduction.

Where I draw the line, though, is the way some companies are grabbing this as an excuse to avoid R&D on sensor development beyond 50MP. Beyond the obvious "more detail", the real prize is truer tonal range, which is actually there for the taking in the original scene.
I get that; I too occasionally have to work with low-res sources (whether old scans or low-quality negatives), but it's better to show images as they are and 'respect' their limits than to add artificial information. Especially when it comes to historical archives (edit: or scientific data).

As for true tonal values, as long as manufacturers (especially Fuji) are not interested in providing their customers with ready-to-use, neutral and calibrated color profiles, but instead want to emulate various film profiles, this is a pointless debate.
 
I am not a fan of AI uprezzing. Too many artifacts and not-quite-believable details. And the time I'd have to spend going over an exhibition print with a fine-toothed comb is a demotivator for me.

Jim
Out of thousands of viewers of my work at large sizes, not a single one has ever commented on artifacts or not-quite-believable details. What they all comment on is the impact of the photograph as a piece of art. They do sometimes comment -- positively -- on how amazed they are at the detail in some of the prints, though.

The trick, just as with sharpening, is not to overdo it. The default settings of Topaz Gigapixel are way too much. I usually cut sharpness at least in half, drop noise reduction to a fraction (or zero), and use "High Quality", even though it suggests "Standard" is better, because "High Quality" applies less adjustment. Sometimes Photoshop's AI Enhance works better, sometimes (more often) Topaz works better, depending on the subject.

That said, I switched from Canon to Fuji medium format mostly to have to do less uprezzing to save time (and get better results). (The other reason was to improve my corners, which some Canon lenses can be weak on). For very large prints, I would still use uprezzing on the Fuji though, since its resolution is still not enough.

Really, every image printed even at medium sizes gets uprezzing done somewhere. Whether it's in Photoshop or the printer driver or AI. Even 100 megapixels doesn't make for a particularly large print at 240-300dpi. And whether that uprezzing is done by dumb algorithm (e.g. bicubic) or AI, is just a technical detail.
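To put rough numbers on that (my own back-of-the-envelope, assuming a 100MP-class file of 11648 x 8736 pixels, which is what the current 100MP medium format sensors deliver):

# Native print size of a ~102MP file at common print resolutions.
# Illustrative assumption: 11648 x 8736 px.
def native_print_inches(width_px, height_px, ppi):
    """Largest print the file covers at 1:1 pixels, with no uprezzing."""
    return width_px / ppi, height_px / ppi

for ppi in (300, 240):
    w, h = native_print_inches(11648, 8736, ppi)
    print(f"{ppi} ppi: {w:.1f} x {h:.1f} inches")

# 300 ppi: 38.8 x 29.1 inches
# 240 ppi: 48.5 x 36.4 inches

So anything much over roughly 40 inches wide already needs interpolation somewhere in the chain, whether that's Photoshop, the printer driver, or an AI tool.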

One could even argue that stitching multiple photos to get more detail is "cheating". If so, then even the world's top photographers cheat. All of it is just another tool with which to create a stunning picture.


Peter
 
Playing about at the moment with the Linear Response profile in Capture One Pro to see what it offers, and I feel it opens up a new avenue to more accurate colour... early days of play, though.
 
I am not a fan of AI uprezzing. Too many artifacts and not-quite-believable details. And the time I'd have to spend going over an exhibition print with a fine-toothed comb is a demotivator for me.

Jim
Out of thousands of viewers of my work at large sizes, not a single one has ever commented on artifacts or not-quite-believable details. What they all comment on is the impact of the photograph as a piece of art. They do sometimes comment -- positively -- on how amazed they are at the detail in some of the prints, though.

The trick, just as with sharpening, is not to overdo it. The default settings of Topaz Gigapixel are way too much. I usually cut sharpness at least in half, drop noise reduction to a fraction (or zero), and use "High Quality", even though it suggests "Standard" is better, because "High Quality" applies less adjustment. Sometimes Photoshop's AI Enhance works better, sometimes (more often) Topaz works better, depending on the subject.

That said, I switched from Canon to Fuji medium format mostly to have to do less uprezzing to save time (and get better results). (The other reason was to improve my corners, which some Canon lenses can be weak on). For very large prints, I would still use uprezzing on the Fuji though, since its resolution is still not enough.

Really, every image printed even at medium sizes gets uprezzing done somewhere. Whether it's in Photoshop or the printer driver or AI. Even 100 megapixels doesn't make for a particularly large print at 240-300dpi. And whether that uprezzing is done by dumb algorithm (e.g. bicubic) or AI, is just a technical detail.

One could even argue that stitching multiple photos to get more detail is "cheating". If so, then even the world's top photographers cheat. All of it is just another tool with which to create a stunning picture.
I may be paranoid. I once had a customer spot a flaw in a purchased image. I reprinted it for her, of course, but the experience left me wanting to make sure it never happened again.
 
Playing about at the moment with the Linear Response profile in Capture One Pro to see what it offers, and I feel it opens up a new avenue to more accurate colour... early days of play, though.
I use darktable, and it has three different tools for tone mapping out-of-camera raws into a starting point for further editing.

The old tool, called base curve, applies a generic tone curve modelled on the manufacturer's preferred look. This is the default for the old-school display-referred workflow.

There are two competing newer tools, filmic RGB and sigmoid, for the newer scene-referred workflow. Each takes a slightly different approach, and you are free to choose the one you like best.

You can also set which of the three methods should be used by default in the settings screen. You can even disable all three and keep it flat as a pancake as your starting point.

At first, I followed good-practice advice for a scene-referred (rather than display-referred) workflow and set filmic RGB as my default tone mapper.

But when I started playing around with film simulations using the 3D LUT approach, I found that filmic RGB and the 3D LUT together sometimes rather dramatically overcooked things, so I disabled filmic RGB. Later, when I grew bored with film simulations, I found that I was happier with filmic off, building up from the flat raw to my own taste. I now only use filmic (and sigmoid) to tame highlights if they are a little blown.

A murky, flat raw is my preferred editing starting point now, rather than a fake Provia simulation or an out-of-camera JPEG interpretation.
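For anyone wondering what "sigmoid" actually does, here is a toy sketch of the idea in Python (my own illustration of the general S-curve shape, not darktable's actual code or parameters): linear scene-referred values, which can go well above 1.0, get compressed smoothly into the 0-1 display range instead of clipping, which is why it tames slightly blown highlights.

import numpy as np

def toy_sigmoid_tonemap(x, contrast=1.5, middle_grey=0.1845):
    # Simple S-shaped saturation curve: middle grey maps to ~0.5,
    # shadows roll off towards 0, highlights approach but never hit 1.0.
    y = (x / middle_grey) ** contrast
    return y / (1.0 + y)

scene = np.array([0.0, 0.02, 0.1845, 1.0, 4.0, 16.0])  # last values are "blown"
print(toy_sigmoid_tonemap(scene).round(3))
# approximately: 0, 0.034, 0.5, 0.927, 0.99, 0.999

A base curve or a flat starting point does no such compression, which is part of why stacking a strong tone mapper on top of a film simulation LUT can overcook things, as described above.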

--
2024: Awarded Royal Photographic Society LRPS Distinction
Photo of the day: https://www.whisperingcat.co.uk/wp/photo-of-the-day-2025/
Website: https://www.whisperingcat.co.uk/wp/
DPReview gallery: https://www.dpreview.com/galleries/0286305481
Flickr: http://www.flickr.com/photos/davidmillier/ (very old!)
 
Break the bank and give C1 a go; it's grown up a lot since I first tried it back in 2013. Linear Response and a few of the other toys might please you, along with the masking and layers... done differently to PS, but very workable.
 
Hi-

When I had a Hassy it was really better than my Leica. Every image was just better. When I got a Phase back, IQ was really better on the back vs. a Canon DSLR; the Mamiya lenses were good, but the "Phamiya" body was horrible, especially the focus speed.

Now I have a choice between getting a 50 or 100MP Sony sensor camera (Fuji or Hassy) or a Canon or Sony between 45 and 60MP. What real difference will I see in image quality if I don’t enlarge/crop hugely?
Comparing a 100mp Fuji to my old 36mp Nikon, I'd say visible quality improvements appear at around 36" wide prints. They become significant when you get above 40".

Dynamic range differences are measurable but small; they may matter more or less depending on what you're doing.

The ease of getting high quality results is noticeably better with the Fuji system. Especially with regard to focus, live view, and any kind of low light.

Worth mentioning that this was an older Nikon system with F-mount DSLR lenses. Newer small-format mirrorless systems have next-generation optics (closer to Fuji's) as well as more pixels, so the differences would likely be more subtle.
 
I am not a fan of AI uprezzing. Too many artifacts and not-quite-believable details. And the time I'd have to spend going over an exhibition print with a fine-toothed comb is a demotivator for me.

Jim
Out of thousands of viewers of my work at large sizes, not a single one has ever commented on artifacts or not-quite-believable details. What they all comment on is the impact of the photograph as a piece of art. They do sometimes comment -- positively -- on how amazed they are at the detail in some of the prints, though.

The trick, just as with sharpening, is not to overdo it. The default settings of Topaz Gigapixel are way too much. I usually cut sharpness at least in half, drop noise reduction to a fraction (or zero), and use "High Quality", even though it suggests "Standard" is better, because "High Quality" applies less adjustment. Sometimes Photoshop's AI Enhance works better, sometimes (more often) Topaz works better, depending on the subject.

That said, I switched from Canon to Fuji medium format mostly to have to do less uprezzing to save time (and get better results). (The other reason was to improve my corners, which some Canon lenses can be weak on). For very large prints, I would still use uprezzing on the Fuji though, since its resolution is still not enough.

Really, every image printed even at medium sizes gets uprezzing done somewhere. Whether it's in Photoshop or the printer driver or AI. Even 100 megapixels doesn't make for a particularly large print at 240-300dpi. And whether that uprezzing is done by dumb algorithm (e.g. bicubic) or AI, is just a technical detail.

One could even argue that stitching multiple photos to get more detail is "cheating". If so, then even the world's top photographers cheat. All of it is just another tool with which to create a stunning picture.

Peter
I do not see anything "cheating" about stitching, as you are using originally acquired data.

AI upscaling invents detail. How well it succeeds depends on the subject and the version, but it will produce images that are similar to what one recorded, albeit not identical.

Still, sometimes AI upscaling is the best solution to a specific problem.
 
In a double-blind test, none -- photo content will override everything. If you ask for biased opinions, everything.

I still use my Mamiya 7 and Velvia 50.
 
Comparing a 100mp Fuji to my old 36mp Nikon, I'd say visible quality improvements appear at around 36" wide prints. They become significant when you get above 40".
I was considering the 36MP D810. Terrific wildlife photographs taken with the D810 have been shared on the DPR Nature & Wildlife forum. At around £450 with a warranty, not too shabby. It feels alright in my hand, and Bill Claff's Photons to Photos measurements are pretty complimentary about the D810.
Dynamic range differences are measurable but small; they may matter more or less depending on what you're doing.

The ease of getting high quality results is noticeably better with the Fuji system. Especially with regard to focus, live view, and any kind of low light.

Worth mentioning that this was an older Nikon system with F-mount DSLR lenses. Newer small-format mirrorless systems have next-generation optics (closer to Fuji's) as well as more pixels, so the differences would likely be more subtle.
 
Gregory Crewdson would say quite a lot, I suspect. He shoots a 150MP Phase One, but then he is routinely making very large prints.
 
Gregory Crewdson would say quite a lot, I suspect. He shoots a 150MP Phase One, but then he is routinely making very large prints.
He also works with a crew that's practically the size of a Hollywood production company, including a cameraman; Crewdson doesn't even touch the Phase One himself. For most of us, the occasional benefits of the big 150MP sensor would never justify the cost. For Crewdson, it's an insignificant expense.
 
Hi-

When I had a Hassy it was really better than my Leica. Every image was just better. When I got a Phase back, IQ was really better on the back vs. a Canon DSLR; the Mamiya lenses were good, but the "Phamiya" body was horrible, especially the focus speed.

Now I have a choice between getting a 50 or 100MP Sony sensor camera (Fuji or Hassy) or a Canon or Sony between 45 and 60MP. What real difference will I see in image quality if I don’t enlarge/crop hugely?
There was a time when the gap between full-frame and medium format digital cameras was easy to see. When full-frame sensors topped out around 16 megapixels and medium format backs delivered 39 megapixels, the difference in detail and tonal smoothness was hard to miss. If you were printing large or needed the most image information possible, medium format was often the only choice. Full-frame was good, but not in the same league.

Today, the numbers have grown but the ratio has stayed roughly the same. Full-frame cameras now commonly offer around 50 megapixels, while medium format systems deliver anywhere from 100 to 150 megapixels. On paper, the same advantage remains. Medium format still gathers more data, still has larger photosites for a given resolution, and still offers potentially better tonal rendering and microcontrast. But the practical differences between formats have become less clear, especially in typical viewing conditions.

...

The result is that full-frame cameras often deliver what is effectively “good enough” quality for most uses. The leap from 16 to 50 megapixels brought full-frame into a zone where its images satisfy the needs of most photographers and viewers. Medium format still offers technical advantages, but those benefits have become subtler and more situational. They may show up in huge gallery prints, demanding commercial work, or in the hands of photographers who are meticulous about extracting every ounce of image quality. But for many others, the cost, weight, and workflow differences make full-frame the more practical tool.

It’s not that medium format has lost its edge. It’s that full-frame has closed the gap enough that for many purposes, the distinction no longer matters. The camera that delivers the results you need is the one that’s right for the job, and today, that’s more likely to be a full-frame body than it was when the pixel counts were lower and the differences more stark.
.... and to think that not too long ago, I used to catch stink eye for saying such ;)
 
