Effects of 'medium format': real or just observer bias?

I can believe there is a visible difference between large sensors and small sensors, both in terms of noise and resolution. But that difference won't necessarily be visible in a small image. So, when does it become visible?
When I did this test, I found that the difference between a FF and a 33x44mm sensor in good light was hard to see in 15-inch-high prints, but not difficult to see in 30-inch-high ones.

If I had done that test with a GFX 100x, it might have been easier to see the improvement in smaller sizes. If there was an opportunity for aliasing, that would have also increased the probability of seeing differences at 15 inches high.
Interesting.

Coincidentally, I use A2 as a benchmark paper size. With a small border, that works out similar to your 15" high print.

I used to have a D800, but was transitioning my APS-C cameras to Fuji (16 MP, then later 24 MP). I made two A4 prints of 50% crops from both cameras. I could see a small but clear difference between the D800 and the 16 MP Fujis when I looked closely - i.e. from 12" - but not from a normal viewing distance for A2 - i.e. 24-30".

Note: as far as possible, I equalised sharpness using the best techniques I could, and used C1 rather than ACR, since C1 doesn't mess up X-Trans files as much.

Of course, as noise levels increased, the Fujis tailed off a little earlier and had less DR, but nothing dramatic at low ISO.

Then I bought a 24 MP X-Pro2 and repeated the experiment. The small increase in resolution, combined with the absence of an AA filter, closed the gap. It was almost impossible to see any meaningful difference that I could not attribute to other factors, even viewing closely.

I repeated the test using images downloaded from DPR, shot on Bayer-array cameras, to take Fuji's X-Trans sensor out of the equation, and the results were even more similar.

Since perceptual differences with larger sensors narrow progressively if we keep the image size constant, I would have expected the GFX and A7Rii to look almost identical, based on this experience.

However, one could roughly predict the visibility of detail in prints from the CSF and other factors. I am particularly interested in what people refer to as 'tonal gradation'. This is a particularly tricky one because it's not defined, but it is widely touted as a benefit of MF, to the extent that it's worth investigating.

I imagined that 'good tonal gradations' could be tested using a continuous gradient, so I created several in Photoshop and photographed them. I didn't print them, just looked at the results on screen, but if I used ETTR to stop the highlights from clipping, I could not see any difference between FF and APS-C, even when I used a tone curve to match them almost exactly.

So, by inference I would be less likely to see an improvement with a larger sensor at the same image size.

So, what am I missing here? It's clearly less to do with resolution, and more a combination of colour gamut, SNR, DR and bit-depth, in which case my screen could well be the limiting factor, even though it's a pretty good one.
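The reasoning above can be sketched numerically with a toy shot-noise model. The full-well figures below are illustrative assumptions, not measurements of any real camera; the point is only the scaling of noise with collected light at a fixed output size:

```python
import numpy as np

# Toy shot-noise sketch of the gradient test above. Full-well numbers are
# made up for illustration; only the scaling matters, not any real camera.
rng = np.random.default_rng(1)
ramp = np.linspace(0.1, 0.9, 100_000)   # smooth tonal gradient, 0..1

def captured(signal, full_well):
    # Photons per output pixel scale with sensor area; simulate Poisson
    # shot noise and normalise back to the 0..1 range.
    return rng.poisson(signal * full_well) / full_well

larger = captured(ramp, 60_000)    # hypothetical larger-sensor pixel
smaller = captured(ramp, 26_000)   # hypothetical smaller-sensor pixel

noise_large = (larger - ramp).std()
noise_small = (smaller - ramp).std()
step_8bit = 1 / 255                # one code value on an 8-bit display

# The larger sensor is cleaner by ~sqrt(area ratio), but both noise floors
# are of the same order as a single 8-bit display step.
print(noise_large, noise_small, step_8bit)
```

Under these assumptions the gap is real but comparable to one display code value, which is consistent with the difference being hard to see at modest print sizes.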
I would think that sharpening plays a huge role. It is much simpler to look at MTF:

These two MTF-plots were shot on the same camera and measured from images with sharpening in LR set to zero. The sole difference is stopping down. The blue curve is f/5.6 and the red is f/11. We see a huge drop in sharpness when stopping down.

These curves were measured on the same images, after sharpening in FocusMagic. Sharpening is applied so that MTF after sharpening is close to unity up to around 1700 cy/PH.

Perception of sharpness is dominated by low frequencies, so the sharpened images will be equally sharp.
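That equalisation can be sketched with a toy MTF model: a Gaussian blur partially undone by unsharp masking. The blur sigma and sharpening gain below are illustrative assumptions, not values taken from the plots:

```python
import numpy as np

# Minimal sketch: a blur's MTF can be pulled back toward unity by sharpening.
f = np.linspace(0.0, 0.5, 101)                # spatial frequency, cycles/pixel
sigma = 1.2                                    # hypothetical blur, pixels
mtf_blur = np.exp(-2 * (np.pi * sigma * f) ** 2)   # MTF of a Gaussian PSF

# Unsharp masking (out = in + k*(in - blur(in))) multiplies the system MTF
# by 1 + k*(1 - G(f)), where G is the MTF of the masking blur.
k = 2.0                                        # hypothetical sharpening gain
mtf_sharp = mtf_blur * (1 + k * (1 - mtf_blur))

# At low frequencies - which dominate perceived sharpness - the sharpened
# MTF sits much closer to 1 than the unsharpened one.
print(mtf_blur[10], mtf_sharp[10])
```

Note the sharpened curve can overshoot 1 at mid frequencies, which is exactly the "crisp and edgy" risk mentioned below.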

The downside of sharpening is that it may add artifacts of its own, enhancing noise and false detail. Extensive sharpening may leave the image unnaturally crisp and edgy.

Much research has been done on the perception of sharpness.

Much of that is explained in this video:

The video relates to television or cinema, but the same facts hold for stills.

Best regards

Erik

--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic tends to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 
So, what am I missing here? It's clearly less to do with resolution, and more a combination of colour gamut, SNR, DR and bit-depth, in which case my screen could well be the limiting factor, even though it's a pretty good one.
Elsewhere in this thread, I posted this link:

https://blog.kasson.com/the-last-word/format-size-and-image-quality/

You can see what I covered that you haven't covered so far. Color gamut is not part of it, I think. Nor is photon noise in areas that are Zone IV or V or brighter at base ISO. None of the MF cameras that I've tested show much difference between 14 and 16 bit precision.

Jim
I did read that, and I largely agree with it, but I don't think you covered my question.

I would not expect to see any difference between 16- and 14-bit capture precision, particularly if my display is 8-bit and my printer is dithering at 300 PPI.

As for colour gamut, I was again referring more to the display/printer than the camera. It's impossible to see wide-gamut colours with an sRGB display, etc.

In other words, what we see isn't always limited by the camera.
 
So, what am I missing here? It's clearly less to do with resolution, and more a combination of colour gamut, SNR, DR and bit-depth, in which case my screen could well be the limiting factor, even though it's a pretty good one.
Elsewhere in this thread, I posted this link:

https://blog.kasson.com/the-last-word/format-size-and-image-quality/

You can see what I covered that you haven't covered so far. Color gamut is not part of it, I think. Nor is photon noise in areas that are Zone IV or V or brighter at base ISO. None of the MF cameras that I've tested show much difference between 14 and 16 bit precision.

Jim
I did read that, and I largely agree with it, but I don't think you covered my question.

I would not expect to see any difference between 16- and 14-bit capture precision, particularly if my display is 8-bit
Don't forget that your display has a tone curve. But no, without heroic postproduction, you won't. And even then, you probably won't.
and my printer is dithering at 300 PPI.

As for colour gamut, I was again referring more to the display/printer than the camera. It's impossible to see wide-gamut colours with an sRGB display, etc.
Yes, but color gamut is not, in my mind, a characteristic of format size, no matter what the display.
In other words, what we see isn't always limited by the camera.
OK. So I misunderstood your question. Can you ask it again another way?

Jim
 
I can believe there is a visible difference between large sensors and small sensors, both in terms of noise and resolution. But that difference won't necessarily be visible in a small image. So, when does it become visible?
When I did this test, I found that the difference between a FF and a 33x44mm sensor in good light was hard to see in 15-inch-high prints, but not difficult to see in 30-inch-high ones.

If I had done that test with a GFX 100x, it might have been easier to see the improvement in smaller sizes. If there was an opportunity for aliasing, that would have also increased the probability of seeing differences at 15 inches high.
Interesting.

Coincidentally, I use A2 as a benchmark paper size. With a small border, that works out similar to your 15" high print.

I used to have a D800, but was transitioning my APS-C cameras to Fuji (16 MP, then later 24 MP). I made two A4 prints of 50% crops from both cameras. I could see a small but clear difference between the D800 and the 16 MP Fujis when I looked closely - i.e. from 12" - but not from a normal viewing distance for A2 - i.e. 24-30".

Note: as far as possible, I equalised sharpness using the best techniques I could, and used C1 rather than ACR, since C1 doesn't mess up X-Trans files as much.

Of course, as noise levels increased, the Fujis tailed off a little earlier and had less DR, but nothing dramatic at low ISO.

Then I bought a 24 MP X-Pro2 and repeated the experiment. The small increase in resolution, combined with the absence of an AA filter, closed the gap. It was almost impossible to see any meaningful difference that I could not attribute to other factors, even viewing closely.

I repeated the test using images downloaded from DPR, shot on Bayer-array cameras, to take Fuji's X-Trans sensor out of the equation, and the results were even more similar.

Since perceptual differences with larger sensors narrow progressively if we keep the image size constant, I would have expected the GFX and A7Rii to look almost identical, based on this experience.

However, one could roughly predict the visibility of detail in prints from the CSF and other factors. I am particularly interested in what people refer to as 'tonal gradation'. This is a particularly tricky one because it's not defined, but it is widely touted as a benefit of MF, to the extent that it's worth investigating.

I imagined that 'good tonal gradations' could be tested using a continuous gradient, so I created several in Photoshop and photographed them. I didn't print them, just looked at the results on screen, but if I used ETTR to stop the highlights from clipping, I could not see any difference between FF and APS-C, even when I used a tone curve to match them almost exactly.

So, by inference I would be less likely to see an improvement with a larger sensor at the same image size.

So, what am I missing here? It's clearly less to do with resolution, and more a combination of colour gamut, SNR, DR and bit-depth, in which case my screen could well be the limiting factor, even though it's a pretty good one.
I would think that sharpening plays a huge role. It is much simpler to look at MTF:
How does sharpening apply to 'tonal gradation'? Or is this just a problem with undefined qualities?
These two MTF-plots were shot on the same camera and measured from images with sharpening in LR set to zero. The sole difference is stopping down. The blue curve is f/5.6 and the red is f/11. We see a huge drop in sharpness when stopping down.

These curves were measured on the same images, after sharpening in FocusMagic. Sharpening is applied so that MTF after sharpening is close to unity up to around 1700 cy/PH.

Perception of sharpness is dominated by low frequencies, so the sharpened images will be equally sharp.

The downside of sharpening is that it may add artifacts of its own, enhancing noise and false detail. Extensive sharpening may leave the image unnaturally crisp and edgy.
Depends how you do it ;-)

The display medium is also artefact-ridden. If you examined a print at the same magnification at which we routinely view electronic images, it would look downright awful.

At the same magnification, pixelation on a display looks equally awful.

Ironically, when sharpening for print, the natural dithering and ink spread require a slightly unnatural level of edge enhancement at higher radius, but mask it quite successfully. The artefacts that look crunchy on screen are nowhere to be seen in the print.

So I don't sharpen the same way as I would for screen - where I mainly use deconvolution and resizing techniques. Adobe's Enhance Details and Super Resolution have revolutionised this, to the extent that it's really hard to separate prints - you really need high-magnification digital views, which of course contain detail far below the human CSF threshold when combined with the system MTF.
Much research has been done on the perception of sharpness.

Much of that is explained in this video:

The video relates to television or cinema, but the same facts hold for stills.

Best regards

Erik
I am reasonably familiar with the concept, but it still doesn't address the 'gradation' issue.

--
"A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." Antoine de Saint-Exupery
 
So, what am I missing here? It's clearly less to do with resolution, and more a combination of colour gamut, SNR, DR and bit-depth, in which case my screen could well be the limiting factor, even though it's a pretty good one.
Elsewhere in this thread, I posted this link:

https://blog.kasson.com/the-last-word/format-size-and-image-quality/

You can see what I covered that you haven't covered so far. Color gamut is not part of it, I think. Nor is photon noise in areas that are Zone IV or V or brighter at base ISO. None of the MF cameras that I've tested show much difference between 14 and 16 bit precision.

Jim
I did read that, and I largely agree with it, but I don't think you covered my question.

I would not expect to see any difference between 16- and 14-bit capture precision, particularly if my display is 8-bit
Don't forget that your display has a tone curve. But no, without heroic postproduction, you won't. And even then, you probably won't.
and my printer is dithering at 300 PPI.

As for colour gamut, I was again referring more to the display/printer than the camera. It's impossible to see wide-gamut colours with an sRGB display, etc.
Yes, but color gamut is not, in my mind, a characteristic of format size, no matter what the display.
In other words, what we see isn't always limited by the camera.
OK. So I misunderstood your question. Can you ask it again another way?

Jim
Specifically, I am referring to the concept of 'tonal gradation', which is not a well-defined term, but which implies to me an ability to faithfully capture subtle and smooth tonal shifts on complex surfaces...

How does this manifest itself in the context of format size, and what role does the output medium and reproduction size have in determining when or if any obvious distinction is likely to be visible?

As a side question, to what extent is dithering/noise actually required to achieve these smooth transitions on quantised media (8-bit displays)?
 
OK. So I misunderstood your question. Can you ask it again another way?
Specifically, I am referring to the concept of 'tonal gradation', which is not a well-defined term, but which implies to me an ability to faithfully capture subtle and smooth tonal shifts on complex surfaces...

How does this manifest itself in the context of format size,
With film, that was a big reason to go to a larger format. You didn't have to enlarge the grain clumps as much. With digital, larger formats mean better SNR at same print size, if you've got plenty of light. But in the midtones and higher, even APS-C is plenty good enough. I use medium format for almost all my serious work, but I don't do it to get smoother transitions.
and what role does the output medium and reproduction size have in determining when or if any obvious distinction is likely to be visible?
If the output medium has a lower SNR than the file, then it becomes the long pole in the tent. And contone printers are uncommon these days.
As a side question, to what extent is dithering/noise actually required to achieve these smooth transitions on quantised media (8-bit displays)?
You don't need much. I remember attending a SPIE conference in the early 90s where a paper was presented saying that, without dither, and with an optimum tone curve, 400 gray levels were entirely sufficient for a monochrome display at 100 cd/m^2. You'd need more if the display were a lot brighter.
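That quantisation-versus-dither trade can be sketched in a few lines of Python. The ramp endpoints and segment choice below are arbitrary illustrations:

```python
import numpy as np

# Quantise a shallow tonal ramp to 8 bits, with and without sub-LSB noise
# added first. Without dither a shallow ramp collapses into flat bands;
# with dither the codes fluctuate but average out the local bias.
rng = np.random.default_rng(0)
ramp = np.linspace(0.40, 0.44, 10_000)         # shallow ramp on a 0..1 scale
true = ramp * 255

plain = np.round(true)                          # hard quantisation: banding
dithered = np.round(true + rng.uniform(-0.5, 0.5, ramp.size))

seg = slice(0, 400)                             # a stretch inside one band
print(np.ptp(plain[seg]))                       # 0: one flat band
print(abs((plain[seg] - true[seg]).mean()))     # locally biased
print(abs((dithered[seg] - true[seg]).mean()))  # bias dithered away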
 
Test charts don't include gradation.
They could. They just need to include a continuous black-white gradient.
Try photographing a car on various sizes of sensor. Something like this:

[image: photo of a car]
So, how big would you have to display these images to see a difference in a continuous gradient? Because if you can't, there is no difference.
There are many things that I can't see but that are measurable.

Don Cox
I take your point, but this doesn't exactly answer my question. I would need to see a comparison of that shot taken with an MF camera and FF.
Of course. I don't have a MF camera. I am suggesting what would be a useful test, or at least demonstration.
How much of what you see is dictated by the display you use, the size of the image, and the viewing conditions, and how much is due to the camera?
If the images are viewed on the same display, or printed on the same printer, then the differences are likely to be due to the camera.
Displays have various thresholds - bit-depth, resolution, colour gamut, contrast. Generally, these fall below the capabilities of most cameras.

The human eye has thresholds. Our contrast sensitivity function and acuity are limited, particularly in terms of chroma.

So, at what point is the data from the camera limiting what we can see, rather than something else?

I can believe there is a visible difference between large sensors and small sensors, both in terms of noise and resolution. But that difference won't necessarily be visible in a small image. So, when does it become visible?

Do I need a 42" 8K 10-bit HDR display with a wide colour gamut, or not?
I guess a 2x3 foot print would show differences in gradation.

Don Cox
 
I can believe there is a visible difference between large sensors and small sensors, both in terms of noise and resolution. But that difference won't necessarily be visible in a small image. So, when does it become visible?
When I did this test, I found that the difference between a FF and a 33x44mm sensor in good light was hard to see in 15-inch-high prints, but not difficult to see in 30-inch-high ones.
That makes sense.

You don't need a MF camera for A3 prints.
If I had done that test with a GFX 100x, it might have been easier to see the improvement in smaller sizes. If there was an opportunity for aliasing, that would have also increased the probability of seeing differences at 15 inches high.
Don Cox
 
The best colour I've seen is 10x8 Ektachrome shot with studio flash.
In the past, I've shot a lot of 8x10 Ektachromes with my trusty old Balcar strobes. I still have some. They are nice, but they are not where I'd like to start when editing a raw image.
For one thing, they're way too punchy to make a good starting place; for another, the DR is low.

--
https://blog.kasson.com
 
I can believe there is a visible difference between large sensors and small sensors, both in terms of noise and resolution. But that difference won't necessarily be visible in a small image. So, when does it become visible?
When I did this test, I found that the difference between a FF and a 33x44mm sensor in good light was hard to see in 15-inch-high prints, but not difficult to see in 30-inch-high ones.

If I had done that test with a GFX 100x, it might have been easier to see the improvement in smaller sizes. If there was an opportunity for aliasing, that would have also increased the probability of seeing differences at 15 inches high.
Interesting.

Coincidentally, I use A2 as a benchmark paper size. With a small border, that works out similar to your 15" high print.

I used to have a D800, but was transitioning my APS-C cameras to Fuji (16 MP, then later 24 MP). I made two A4 prints of 50% crops from both cameras. I could see a small but clear difference between the D800 and the 16 MP Fujis when I looked closely - i.e. from 12" - but not from a normal viewing distance for A2 - i.e. 24-30".

Note: as far as possible, I equalised sharpness using the best techniques I could, and used C1 rather than ACR, since C1 doesn't mess up X-Trans files as much.

Of course, as noise levels increased, the Fujis tailed off a little earlier and had less DR, but nothing dramatic at low ISO.

Then I bought a 24 MP X-Pro2 and repeated the experiment. The small increase in resolution, combined with the absence of an AA filter, closed the gap. It was almost impossible to see any meaningful difference that I could not attribute to other factors, even viewing closely.

I repeated the test using images downloaded from DPR, shot on Bayer-array cameras, to take Fuji's X-Trans sensor out of the equation, and the results were even more similar.

Since perceptual differences with larger sensors narrow progressively if we keep the image size constant, I would have expected the GFX and A7Rii to look almost identical, based on this experience.

However, one could roughly predict the visibility of detail in prints from the CSF and other factors. I am particularly interested in what people refer to as 'tonal gradation'. This is a particularly tricky one because it's not defined, but it is widely touted as a benefit of MF, to the extent that it's worth investigating.

I imagined that 'good tonal gradations' could be tested using a continuous gradient, so I created several in Photoshop and photographed them. I didn't print them, just looked at the results on screen, but if I used ETTR to stop the highlights from clipping, I could not see any difference between FF and APS-C, even when I used a tone curve to match them almost exactly.
I doubt if a computer-generated gradient viewed on a monitor is a good test subject. This is why I suggest a real 3D object with smooth surfaces.
So, by inference I would be less likely to see an improvement with a larger sensor at the same image size.

So, what am I missing here? It's clearly less to do with resolution, and more a combination of colour gamut, SNR, DR and bit-depth, in which case my screen could well be the limiting factor, even though it's a pretty good one.
Definitely not to do with resolution of details. Think of the tonal gradient as a staircase with vanishingly small steps. It's the risers that you are trying to resolve, not the treads. Serious failure shows as posterisation, but it's seldom as bad as that.

How to count infinitesimal steps?

Don Cox
 
OK. So I misunderstood your question. Can you ask it again another way?
Specifically, I am referring to the concept of 'tonal gradation', which is not a well-defined term, but which implies to me an ability to faithfully capture subtle and smooth tonal shifts on complex surfaces...

How does this manifest itself in the context of format size,
With film, that was a big reason to go to a larger format. You didn't have to enlarge the grain clumps as much. With digital, larger formats mean better SNR at same print size, if you've got plenty of light. But in the midtones and higher, even APS-C is plenty good enough. I use medium format for almost all my serious work, but I don't do it to get smoother transitions.
That was my assumption too. I just don't think many cameras (at least 4/3 and larger) have a problem with mid-tone and highlight transitions, provided we don't clip anything, mostly because the camera is not the longest pole in the tent, as you aptly described.
and what role does the output medium and reproduction size have in determining when or if any obvious distinction is likely to be visible?
If the output medium has a lower SNR than the file, then it becomes the long pole in the tent. And contone printers are uncommon these days.
As a side question, to what extent is dithering/noise actually required to achieve these smooth transitions on quantised media (8-bit displays)?
You don't need much. I remember attending a SPIE conference in the early 90s where a paper was presented saying that, without dither, and with an optimum tone curve, 400 gray levels were entirely sufficient for a monochrome display at 100 cd/m^2. You'd need more if the display were a lot brighter.
That correlates pretty well with the Barten estimate / Grayscale Standard Display Function.

Trouble is, we have displays nowadays with a lot more nits!

Probably why they are using 10-bit for HDR-TV.
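A rough back-of-envelope check of that intuition, assuming a plain gamma-2.2 display transfer and a Weber-style luminance JND of roughly 1%. Both are simplifications (HDR displays actually use PQ, not gamma), so treat the numbers as order-of-magnitude only:

```python
# Largest relative luminance step between adjacent code values, for a
# hypothetical gamma-2.2 display at a given bit depth.
def max_relative_step(bits, gamma=2.2, shadow_floor=0.05):
    # shadow_floor: ignore the deepest shadows, where flare dominates anyway
    n = 2 ** bits
    lum = [(i / (n - 1)) ** gamma for i in range(n)]
    steps = [(b - a) / a for a, b in zip(lum, lum[1:]) if a > shadow_floor]
    return max(steps)

print(max_relative_step(8))    # a few percent: visible banding is possible
print(max_relative_step(10))   # under 1%: below the assumed JND
```

Under these assumptions, 8-bit steps can exceed a ~1% JND while 10-bit steps stay below it, which is consistent with 10-bit being chosen for brighter HDR displays.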
 
I can believe there is a visible difference between large sensors and small sensors, both in terms of noise and resolution. But that difference won't necessarily be visible in a small image. So, when does it become visible?
When I did this test, I found that the difference between a FF and a 33x44mm sensor in good light was hard to see in 15-inch-high prints, but not difficult to see in 30-inch-high ones.

If I had done that test with a GFX 100x, it might have been easier to see the improvement in smaller sizes. If there was an opportunity for aliasing, that would have also increased the probability of seeing differences at 15 inches high.
Interesting.

Coincidentally, I use A2 as a benchmark paper size. With a small border, that works out similar to your 15" high print.

I used to have a D800, but was transitioning my APS-C cameras to Fuji (16 MP, then later 24 MP). I made two A4 prints of 50% crops from both cameras. I could see a small but clear difference between the D800 and the 16 MP Fujis when I looked closely - i.e. from 12" - but not from a normal viewing distance for A2 - i.e. 24-30".

Note: as far as possible, I equalised sharpness using the best techniques I could, and used C1 rather than ACR, since C1 doesn't mess up X-Trans files as much.

Of course, as noise levels increased, the Fujis tailed off a little earlier and had less DR, but nothing dramatic at low ISO.

Then I bought a 24 MP X-Pro2 and repeated the experiment. The small increase in resolution, combined with the absence of an AA filter, closed the gap. It was almost impossible to see any meaningful difference that I could not attribute to other factors, even viewing closely.

I repeated the test using images downloaded from DPR, shot on Bayer-array cameras, to take Fuji's X-Trans sensor out of the equation, and the results were even more similar.

Since perceptual differences with larger sensors narrow progressively if we keep the image size constant, I would have expected the GFX and A7Rii to look almost identical, based on this experience.

However, one could roughly predict the visibility of detail in prints from the CSF and other factors. I am particularly interested in what people refer to as 'tonal gradation'. This is a particularly tricky one because it's not defined, but it is widely touted as a benefit of MF, to the extent that it's worth investigating.

I imagined that 'good tonal gradations' could be tested using a continuous gradient, so I created several in Photoshop and photographed them. I didn't print them, just looked at the results on screen, but if I used ETTR to stop the highlights from clipping, I could not see any difference between FF and APS-C, even when I used a tone curve to match them almost exactly.
I doubt if a computer-generated gradient viewed on a monitor is a good test subject. This is why I suggest a real 3D object with smooth surfaces.
So, by inference I would be less likely to see an improvement with a larger sensor at the same image size.

So, what am I missing here? It's clearly less to do with resolution, and more a combination of colour gamut, SNR, DR and bit-depth, in which case my screen could well be the limiting factor, even though it's a pretty good one.
Definitely not to do with resolution of details. Think of the tonal gradient as a staircase with vanishingly small steps. It's the risers that you are trying to resolve, not the treads. Serious failure shows as posterisation, but it's seldom as bad as that.

How to count infinitesimal steps?
If it's less than my JND I don't bother ;-)

Which would be around 0.8-1 cd/m2 AFAIK.

But most cameras have either enough dither or enough depth to do that - more than most displays, at any rate. The 'superior tonal gradations' of MF seem to be more hypothetical than real.
--
"A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." Antoine de Saint-Exupery
 
Test charts don't include gradation.
They could. They just need to include a continuous black-white gradient.
Try photographing a car on various sizes of sensor. Something like this:

[image: photo of a car]
So, how big would you have to display these images to see a difference in a continuous gradient? Because if you can't, there is no difference.
There are many things that I can't see but that are measurable.

Don Cox
I take your point, but this doesn't exactly answer my question. I would need to see a comparison of that shot taken with an MF camera and FF.
Of course. I don't have a MF camera. I am suggesting what would be a useful test, or at least demonstration.
How much of what you see is dictated by the display you use, the size of the image, and the viewing conditions, and how much is due to the camera?
If the images are viewed on the same display, or printed on the same printer, then the differences are likely to be due to the camera.
Displays have various thresholds - bit-depth, resolution, colour gamut, contrast. Generally, these fall below the capabilities of most cameras.

The human eye has thresholds. Our contrast sensitivity function and acuity are limited, particularly in terms of chroma.

So, at what point is the data from the camera limiting what we can see, rather than something else?

I can believe there is a visible difference between large sensors and small sensors, both in terms of noise and resolution. But that difference won't necessarily be visible in a small image. So, when does it become visible?

Do I need a 42" 8K 10-bit HDR display with a wide colour gamut, or not?
I guess a 2x3 foot print would show differences in gradation.

Don Cox
Well, we both agree that it is only likely to matter when peering closely at large prints, or on very high-quality displays.

But I would imagine the printer dither would overcome most of those problems on smaller sensors. It would only be noise or resolution that made the difference.

--
"A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." Antoine de Saint-Exupery
 
The best colour I've seen is 10x8 Ektachrome shot with studio flash.
In the past, I've shot a lot of 8x10 Ektachromes with my trusty old Balcar strobes. I still have some. They are nice, but they are not where I'd like to start when editing a raw image.
For one thing, they're way too punchy to make a good starting place; for another, the DR is low.
Is that Dynamic Range or Density Range?

Best regards

Erik
 
Each half of this image was shot with a different camera. I tried to match exposure and processing and I see a good match for color. But the lower part has very different lightness. What's the cause?

The subject was sunlit, with just a few minutes between the shots. What I have noticed is that the exposure sliders in Lightroom are very different: the P45+ needs about +2.9 EV while the A7rIV needs +1.4 EV.

Raw exposures seem pretty similar:



Left A7rIV, right P45+.

So, I started thinking about tone curves.

I generated color profiles for each with a linear tone curve. The difference in exposure bias remained. I added a tone curve in post.



Now, I think the bottom part is pretty close. Note that the text indicates the left part is an A7rII; that was a finger slip from the author.

So, what is my take from that?

It seems that using commercial tools like Lightroom adds another layer of uncertainty to comparisons.

Not exactly a conspiracy theory, but we may need to be wary of it.

Best regards

Erik



(*) Note, this was not intended to be a color comparison but a study of DoF. However, I didn't achieve a correct match of focus.

--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic tends to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 
Each half of this image was shot with a different camera. I tried to match exposure and processing and I see a good match for color. But the lower part has very different lightness. What's the cause?

The subject is sunlit and only a few minutes passed between the shots. What I have noticed is that the exposure sliders in Lightroom are very different: the P45+ needs about +2.9 EV while the A7rIV needs +1.4 EV.

Raw exposures seem pretty similar:

Left A7rIV, right P45+.

So, I started thinking tone curves.

I generated color profiles for each with a linear tone curve. The difference in exposure bias remained, so I added a tone curve in post.

Now, I think the bottom part is pretty close. Note that the text indicates the left part is an A7rII; that was a finger slip by the author.

So, what is my take from that?

It seems that using commercial tools like Lightroom adds another layer of uncertainty to comparisons.
This is true, but not unexpected, and possibly relates to the ISO calibration.
Not exactly a conspiracy theory, but we may need to be wary of it.
The good news is that with the same exposure we can get similar results, even if we can't rely on default settings.

--
"A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." Antoine de Saint-Exupery
 
The best colour I've seen is 10x8 Ektachrome shot with studio flash.
In the past, I've shot a lot of 8x10 Ektachromes with my trusty old Balcar strobes. I still have some. They are nice, but they are not where I'd like to start when editing a raw image.
For one thing, they're way too punchy to make a good starting place; for another, the DR is low.
Is that Dynamic Range or Density Range?
I meant dynamic range, but density range also describes what I'm talking about.
 
Hi,

My impression is that reviewers often talk down the obvious advantage of medium format, which is probably more detail, albeit with some caveats. After all, 'nobody needs 100 MP'.

Advantages often mentioned are:

Skin tones are better. That is quite possible, but probably not a consequence of format. Color rendition probably depends more on color profiles and white balancing than on the sensor.

MFD has better tonality. That may hold, but a workable definition of tonality is hard to find. All things being equal, a larger sensor captures more photons and thus achieves slightly better SNR; doubling the sensor area would yield a 41% advantage. But that also depends on exposure.
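The 41% figure follows from shot-noise statistics: SNR goes as the square root of the photon count, so doubling the area at equal exposure doubles the photons collected and improves SNR by √2 ≈ 1.41. A minimal sketch (sensor dimensions are nominal, not exact spec-sheet values):

```python
import math

def snr_gain(area_ratio):
    """Shot-noise-limited SNR improvement from scaling sensor area,
    assuming the same exposure (f-stop and shutter) and similar sensor tech."""
    return math.sqrt(area_ratio)

# Doubling the area: sqrt(2) ~ 1.41, i.e. the ~41% advantage quoted above.
print(round((snr_gain(2.0) - 1.0) * 100))  # 41

# 44x33 mm vs 24x36 mm (nominal dimensions): only about 30%.
ratio = (44 * 33) / (24 * 36)
print(round((snr_gain(ratio) - 1.0) * 100))  # 30
```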

MFD has better DR. Assuming similar sensors, that would be true. But DR has little effect on normally processed images; the exception is when exposing to protect highlights and pushing the shadows. Reducing exposure, though, has a negative effect on shot noise.

MFD has better highlight roll-off. This is wrong with high probability; it is common to all normal sensors that they clip abruptly.

16-bit depth and 16-bit color. This was largely a myth dating from the CCD era. MFD vendors used sensors delivering about 12 bits' worth of data; Phase One actually used 14 bits for its raw data, blowing it up to 16 bits in raw conversion. With CMOS that has changed: the 100 MP 54x41 mm sensors are capable of delivering more than 14 bits' worth of data. But that probably doesn't apply to the latest generation of sensors with 3.8-micron pixels.
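The "blow-up" from 14 to 16 bits is just a rescaling and adds no information, which is a simple way to see why the 16-bit label was misleading. A sketch with hypothetical 14-bit sensor values:

```python
import numpy as np

# Hypothetical 14-bit sensor values (0..16383).
dn14 = np.array([0, 1, 8191, 16383], dtype=np.uint16)

# Rescaling into a 16-bit container is a 2-bit left shift (multiply by 4):
# the range becomes 0..65532, but there are still only 2**14 possible levels.
dn16 = dn14 << 2
print(dn16.tolist())  # [0, 4, 32764, 65532]

# No new information: the number of distinct values is unchanged.
assert len(np.unique(dn16)) == len(np.unique(dn14))
```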

Larger pixels are better. It has often been stated that larger pixels are better, but that is not really true. Large pixels may look sharper due to less magnification when pixel peeping, and they generate more false detail.

The role of sharpening is seldom discussed, but sharpening plays a huge role. Starting from a sharper set of data, we need to apply less sharpening; sharpening amplifies noise, and overdoing it causes artifacts.

Is pixel peeping detrimental to image quality? In my humble opinion that may be the case. The reason is that image quality is dominated by low-frequency detail. Obsessing over actual pixels, we may either oversharpen low-frequency detail or apply too little sharpening at low frequencies while increasing noise. That also relates to OLP filters: they blur the image at the pixel level but have relatively little impact on low-frequency detail, while the lack of OLP filtering yields false detail.
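The frequency-band point can be illustrated with a toy 1-D unsharp mask: the blur radius selects which spatial frequencies get boosted, so a small radius sharpens (and amplifies noise) at the pixel level while a large radius lifts low-frequency contrast instead. A sketch with made-up parameter values:

```python
import numpy as np

def gaussian_blur_1d(x, sigma):
    # Simple 1-D Gaussian convolution with reflect padding.
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(x, radius, mode="reflect")
    return np.convolve(padded, kernel, mode="valid")

def unsharp(x, sigma, amount):
    # Unsharp mask: boost whatever detail the blur removes, i.e. the
    # band of spatial frequencies above the blur's cutoff.
    return x + amount * (x - gaussian_blur_1d(x, sigma))

# A step edge as a stand-in for image detail.
edge = np.repeat([0.2, 0.8], 50)

# Small radius: boosts pixel-level detail (and noise, if present).
fine = unsharp(edge, sigma=1.0, amount=1.0)

# Large radius: lifts low-frequency contrast instead.
coarse = unsharp(edge, sigma=8.0, amount=0.5)
```

The small-radius version produces the familiar over/undershoot right at the edge, while the large-radius version spreads a gentler contrast boost over many pixels.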


In the end, 100 MP is good. Having a tightly sampled image is always good. Obviously, we can get into diminishing returns, where an increase in pixel density yields a smaller improvement in image quality. The main downside of high resolution may be excessive processing time. (*)

My guess is that there are advantages to MFD over say 24x36 mm. But those advantages may be subtle.

It could be argued that some lenses for MFD are better. That is probably true. It seems that large apertures are selling points for 24x36 mm, while MFD lenses mostly have quite moderate apertures.

If we shoot f/5.6 - f/11 normally, f/1.4 lenses will offer few benefits.

On the other hand, shooting action in dark places we need fast lenses, and image stabilisation may help. So, it is a bit of 'horses for courses'.

Best regards

Erik

(*) Back in 2005 when I started shooting digital, cameras were 6 MP and hard disks were 250 MBytes. My 'best' camera today has 61 MP, but the old disks I am running in my RAID 5 are 4 TByte. So storage capacity increased 16000X while image size increased 10X.
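Taking the footnote's numbers at face value, the ratios check out:

```python
# Growth ratios from the footnote, taken at face value (decimal units).
disk_2005 = 250e6        # 250 MByte disk
disk_now = 4e12          # 4 TByte disk
mp_2005, mp_now = 6, 61  # camera resolution in MP

print(round(disk_now / disk_2005))  # 16000
print(round(mp_now / mp_2005))      # 10
```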

--
Erik Kaffehr
Website: http://echophoto.dnsalias.net
Magic tends to disappear in controlled experiments…
Gallery: http://echophoto.smugmug.com
Articles: http://echophoto.dnsalias.net/ekr/index.php/photoarticles
 
Hi,

My impression is that reviewers often talk down the obvious advantage of medium format, which is probably more detail, albeit with some caveats. After all, 'nobody needs 100 MP'.

Advantages often mentioned are:

Skin tones are better. That is quite possible, but probably not a consequence of format. Color rendition probably depends more on color profiles and white balancing than on the sensor.

MFD has better tonality. That may hold, but a workable definition of tonality is hard to find. All things being equal, a larger sensor captures more photons and thus achieves slightly better SNR; doubling the sensor area would yield a 41% advantage. But that also depends on exposure.

MFD has better DR. Assuming similar sensors, that would be true. But DR has little effect on normally processed images; the exception is when exposing to protect highlights and pushing the shadows. Reducing exposure, though, has a negative effect on shot noise.

MFD has better highlight roll-off. This is wrong with high probability; it is common to all normal sensors that they clip abruptly.

16-bit depth and 16-bit color. This was largely a myth dating from the CCD era. MFD vendors used sensors delivering about 12 bits' worth of data; Phase One actually used 14 bits for its raw data, blowing it up to 16 bits in raw conversion. With CMOS that has changed: the 100 MP 54x41 mm sensors are capable of delivering more than 14 bits' worth of data. But that probably doesn't apply to the latest generation of sensors with 3.8-micron pixels.

Larger pixels are better. It has often been stated that larger pixels are better, but that is not really true. Large pixels may look sharper due to less magnification when pixel peeping, and they generate more false detail.

The role of sharpening is seldom discussed, but sharpening plays a huge role. Starting from a sharper set of data, we need to apply less sharpening; sharpening amplifies noise, and overdoing it causes artifacts.

Is pixel peeping detrimental to image quality? In my humble opinion that may be the case. The reason is that image quality is dominated by low-frequency detail. Obsessing over actual pixels, we may either oversharpen low-frequency detail or apply too little sharpening at low frequencies while increasing noise. That also relates to OLP filters: they blur the image at the pixel level but have relatively little impact on low-frequency detail, while the lack of OLP filtering yields false detail.

In the end, 100 MP is good. Having a tightly sampled image is always good. Obviously, we can get into diminishing returns, where an increase in pixel density yields a smaller improvement in image quality. The main downside of high resolution may be excessive processing time. (*)

My guess is that there are advantages to MFD over say 24x36 mm. But those advantages may be subtle.

It could be argued that some lenses for MFD are better. That is probably true. It seems that large apertures are selling points for 24x36 mm, while MFD lenses mostly have quite moderate apertures.

If we shoot f/5.6 - f/11 normally, f/1.4 lenses will offer few benefits.

On the other hand, shooting action in dark places we need fast lenses, and image stabilisation may help. So, it is a bit of 'horses for courses'.

Best regards

Erik

(*) Back in 2005 when I started shooting digital, cameras were 6 MP and hard disks were 250 MBytes. My 'best' camera today has 61 MP, but the old disks I am running in my RAID 5 are 4 TByte. So storage capacity increased 16000X while image size increased 10X.
I think the easiest way to define it is that, for a print of equal quality viewed at the same distance, the achievable print size will scale with sensor size, assuming the same general sensor design, the same pixel pitch and the same processing.

If we think about it, the X-T4 and A7R IV sensors are just scaled crops of the sensor in the GFX 100. Viewed at the same angular resolution (pixels/degree) and with an equivalent lens, they should look pretty much identical, were it not for the X-Trans layout in the X-T4 and perhaps different levels of IBIS performance.
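The "scaled crops" view can be made concrete: at a shared pitch of roughly 3.76 microns, sensor size alone sets the pixel count. A sketch using nominal sensor dimensions:

```python
# Pixel counts for sensors sharing (approximately) the same ~3.76 micron
# pitch; dimensions are nominal, not exact spec-sheet values.
pitch_mm = 0.00376

sensors = {
    "X-T4 (23.5x15.6 mm)": (23.5, 15.6),
    "A7R IV (36x24 mm)": (36.0, 24.0),
    "GFX 100 (44x33 mm)": (44.0, 33.0),
}

for name, (w, h) in sensors.items():
    mp = (w / pitch_mm) * (h / pitch_mm) / 1e6
    print(f"{name}: ~{mp:.0f} MP")
```

The results land close to the real 26, 61 and 102 MP counts of these cameras, which is the sense in which they are scaled versions of one another.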

Of course, cameras specialised for different jobs make different trade-offs, so the A9 is not directly comparable with the A7 series even when image size is normalised.
 
