Studio Scene at 4:3 downsampled to 4K, Z8 and GFX 100

JimKasson

b351580c76e64d36bc9cfc2ce6844d17.jpg



81a0f2a4bedd4ebeb09969097ca540e6.jpg

Developed in Lr with default settings except for WB.

Night and day? I think not. There are some differences.

--
 
Studio Scene at 4:3 downsampled to 4K, Z8 and GFX 100

b351580c76e64d36bc9cfc2ce6844d17.jpg

81a0f2a4bedd4ebeb09969097ca540e6.jpg

Developed in Lr with default settings except for WB.

Night and day? I think not. There are some differences.
When you downsample a 102 MP or 41 MP image to less than 6 MP, I tend to suspect that the downsampling algorithm may be the most important contributor to the results, at least in terms of resolution, sharpness, acutance, chromatic aberrations, and noise. What are your thoughts on this?

On the other hand, if you want to view / examine a whole photo all at once on a monitor, that's what you get, so maybe that's the point for those who don't print quite large and/or pixel peep at 100%.

(And certainly there are important image quality properties other than resolution, sharpness, acutance, chromatic aberrations, and noise--but the important ones seem more related to the lens than to the camera.)

Or at least, that's what I suspect. I eagerly await your and others' takes on this.
 
Studio Scene at 4:3 downsampled to 4K, Z8 and GFX 100

b351580c76e64d36bc9cfc2ce6844d17.jpg

81a0f2a4bedd4ebeb09969097ca540e6.jpg

Developed in Lr with default settings except for WB.

Night and day? I think not. There are some differences.
When you downsample a 102 MP or 41 MP image to less than 6 MP, I tend to suspect that the downsampling algorithm may be the most important contributor to the results, at least in terms of resolution, sharpness, acutance, chromatic aberrations, and noise. What are your thoughts on this?
Are there significant differences between downsampling settings and algorithms? What kind of downsampling has Jim used?
On the other hand, if you want to view / examine a whole photo all at once on a monitor, that's what you get, so maybe that's the point for those who don't print quite large and/or pixel peep at 100%.

(And certainly there are important image quality properties other than resolution, sharpness, acutance, chromatic aberrations, and noise--but the important ones seem more related to the lens than to the camera.)

Or at least, that's what I suspect. I eagerly await your and others' takes on this.
 
Studio Scene at 4:3 downsampled to 4K, Z8 and GFX 100

b351580c76e64d36bc9cfc2ce6844d17.jpg

81a0f2a4bedd4ebeb09969097ca540e6.jpg

Developed in Lr with default settings except for WB.

Night and day? I think not. There are some differences.
When you downsample a 102 MP or 41 MP image to less than 6 MP, I tend to suspect that the downsampling algorithm may be the most important contributor to the results, at least in terms of resolution, sharpness, acutance, chromatic aberrations, and noise. What are your thoughts on this?
Are there significant differences between downsampling settings and algorithms? What kind of downsampling has Jim used?
Nothing special. Just the Lightroom export algorithm. It is more sophisticated than the algorithms used in Lr and Ps for screen presentation.
On the other hand, if you want to view / examine a whole photo all at once on a monitor, that's what you get, so maybe that's the point for those who don't print quite large and/or pixel peep at 100%.

(And certainly there are important image quality properties other than resolution, sharpness, acutance, chromatic aberrations, and noise--but the important ones seem more related to the lens than to the camera.)

Or at least, that's what I suspect. I eagerly await your and others' takes on this.


--
 
Jim,

I opened up the images by clicking them in a web browser and set the view to 100%. There are obvious compression artifacts.

I'd say that once the lens+sensor MTF at the target (4K) resolution approaches 50% and noise is low (down to the lowest bit of 8, give or take), we can make the result as sharp and clean as we want.
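To put rough numbers on that, here is a small back-of-the-envelope sketch in Python. The sensor widths, the 2880-pixel output width (a 4:3 image fit inside a 3840 x 2160 frame), and the 0.25 cycles/pixel MTF50 figure are all assumptions for illustration, not values taken from Jim's actual exports; the point is only how source-sensor detail maps to output-pixel spatial frequency after downsampling.

# Sketch: how detail on the source sensor maps to a 4K-sized output.
# A frequency of f cycles per source pixel becomes roughly
# f * (source_width / output_width) cycles per output pixel.
# All numbers below are illustrative assumptions.

sources = {
    "GFX 100 (102 MP, 4:3)": 11648,      # approx. horizontal pixels
    "Z8 cropped to 4:3 (~41 MP)": 7340,  # approx. horizontal pixels
}
output_width = 2880  # a 4:3 image fit inside a 3840 x 2160 frame

mtf50_source = 0.25  # assume lens+sensor MTF50 at 0.25 cy/px on the source

for name, width in sources.items():
    factor = width / output_width
    f_out = mtf50_source * factor
    print(f"{name}: ~{factor:.1f}x linear downsample; "
          f"MTF50 detail lands at ~{f_out:.2f} cy/px in the output "
          f"(output Nyquist = 0.50)")

With a roughly 2.5x to 4x linear downsample, detail that is only middling on the source sensor ends up at or beyond the output's Nyquist frequency, which is another way of saying that at 4K the resampling and display pipeline, rather than the sensor, tends to set the limit.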

Lenses, lenses, lenses.

Regards
 
b351580c76e64d36bc9cfc2ce6844d17.jpg

81a0f2a4bedd4ebeb09969097ca540e6.jpg

Developed in Lr with default settings except for WB.

Night and day? I think not. There are some differences.
What's your point? Conclusion?
There is someone saying that this comparison would result in dramatic differences. I disagreed. I’m now supplying some evidence.

--
 
Once upon a time, when sensors had low pixel counts, a generational shift made a big difference. For example, there was a very noticeable difference between my 6 MP D100 and 13 MP 5D.

A GFX 100 holds a similar kind of proportional increase over FF and the 50 MP GFX. Do you think that similar proportional increases in pixel count (say, GFX 50 to GFX 100, or even A7R IV to GFX 100) still produce a similarly dramatic impact as with those earlier cameras? Or are we now in a world in which it is really hard to gain obvious improvements, and we are down to connoisseur-level fine distinctions?
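For what it's worth, one way to quantify "similar kind of proportional increase" is linear resolution, which goes as the square root of pixel count. A quick sketch in Python, using approximate megapixel figures (my approximations, not anything stated in this thread):

from math import sqrt

# Linear resolution scales as the square root of pixel count, so doubling
# the megapixels is only about a 1.4x gain in linear detail.
# Megapixel figures are approximate.
steps = [
    ("D100 -> 5D",        6.1,  12.8),
    ("GFX 50 -> GFX 100", 51.4, 102.0),
    ("A7R IV -> GFX 100", 61.0, 102.0),
]

for name, mp_from, mp_to in steps:
    print(f"{name}: about {sqrt(mp_to / mp_from):.2f}x linear resolution")

So GFX 50 to GFX 100 is about the same linear jump (roughly 1.4x) as D100 to 5D, and A7R IV to GFX 100 is a smaller one (roughly 1.3x); whether that still reads as dramatic depends mostly on output size, which is rather the point of this thread.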
 
Once upon a time, when sensors had low pixel counts, a generational shift made a big difference. For example, there was a very noticeable difference between my 6 MP D100 and 13 MP 5D.

A GFX 100 holds a similar kind of proportional increase over FF and the 50 MP GFX. Do you think that similar proportional increases in pixel count (say, GFX 50 to GFX 100, or even A7R IV to GFX 100) still produce a similarly dramatic impact as with those earlier cameras?
For high-res outputs, yes. Resampled down greatly, no.
Or are we now in a world in which it is really hard to gain obvious improvements, and we are down to connoisseur-level fine distinctions?
 
I have not identified the images to minimize the effect of confirmation bias. If you want to tell which image was made with which camera, you can figure it out, but if you just want to see if there's much difference between the images, it's best not to do that first.
 
I have not identified the images to minimize the effect of confirmation bias. If you want to tell which image was made with which camera, you can figure it out, but if you just want to see if there's much difference between the images, it's best not to do that first.
There's an area of the image where one should look first for a quick confirmation: the bi-colored resolution circles, since that is where an under-resolved Bayer pattern hurts the image the most.
 
Developed in Lr with default settings except for WB.

Night and day? I think not. There are some differences.
When you downsample a 102 MP or 41 MP image to less than 6 MP, I tend to suspect that the downsampling algorithm may be the most important contributor to the results, at least in terms of resolution, sharpness, acutance, chromatic aberrations, and noise. What are your thoughts on this?
Are there significant differences between downsampling settings and algorithms? What kind of downsampling has Jim used?
Yes, there are significant differences among downsampling settings and algorithms, and the interaction of those with the properties of the source image and the degree of downsampling may well affect the qualities of the results.

Jim reports that he used the default Lightroom export, which seems like a very reasonable choice. I'm not critiquing his methodology. I just wonder what are the bigger or smaller contributors to what you see on the monitor after that degree of downsampling. And Jim and I may well share some similar suspicions about what you are and aren't seeing on the monitor in these circumstances.

 
Developed in Lr with default settings except for WB.

Night and day? I think not. There are some differences.
When you downsample a 102 MP or 41 MP image to less than 6 MP, I tend to suspect that the downsampling algorithm may be the most important contributor to the results, at least in terms of resolution, sharpness, acutance, chromatic aberrations, and noise. What are your thoughts on this?
Are there significant differences between downsampling settings and algorithms? What kind of downsampling has Jim used?
Yes, there are significant differences among downsampling settings and algorithms, and the interaction of those with the properties of the source image and the degree of downsampling may well affect the qualities of the results.
Can you point me to tests/analysis of various downsampling algorithms used in current post-processors?
Jim reports that he used the default Lightroom export, which seems like a very reasonable choice. I'm not critiquing his methodology. I just wonder what are the bigger or smaller contributors to what you see on the monitor after that degree of downsampling. And Jim and I may well share some similar suspicions about what you are and aren't seeing on the monitor in these circumstances.
 
Developed in Lr with default settings except for WB.

Night and day? I think not. There are some differences.
When you downsample a 102 MP or 41 MP image to less than 6 MP, I tend to suspect that the downsampling algorithm may be the most important contributor to the results, at least in terms of resolution, sharpness, acutance, chromatic aberrations, and noise. What are your thoughts on this?
Are there significant differences between downsampling settings and algorithms? What kind of downsampling has Jim used?
Yes, there are significant differences among downsampling settings and algorithms, and the interaction of those with the properties of the source image and the degree of downsampling may well affect the qualities of the results.

Jim reports that he used the default Lightroom export, which seems like a very reasonable choice.
But it uses a better algorithm than the ones Lr and Ps use for screen resampling. See the nearest-neighbor post in this thread.
I'm not critiquing his methodology. I just wonder what are the bigger or smaller contributors to what you see on the monitor after that degree of downsampling. And Jim and I may well share some similar suspicions about what you are and aren't seeing on the monitor in these circumstances.
 
Developed in Lr with default settings except for WB.

Night and day? I think not. There are some differences.
When you downsample a 102 MP or 41 MP image to less than 6 MP, I tend to suspect that the downsampling algorithm may be the most important contributor to the results, at least in terms of resolution, sharpness, acutance, chromatic aberrations, and noise. What are your thoughts on this?
Are there significant differences between downsampling settings and algorithms? What kind of downsampling has Jim used?
Yes, there are significant differences among downsampling settings and algorithms, and the interaction of those with the properties of the source image and the degree of downsampling may well affect the qualities of the results.
Can you point me to tests/analysis of various downsampling algorithms used in current post-processors?
 
Developed in Lr with default settings except for WB.

Night and day? I think not. There are some differences.
When you downsample a 102 MP or 41 MP image to less than 6 MP, I tend to suspect that the downsampling algorithm may be the most important contributor to the results, at least in terms of resolution, sharpness, acutance, chromatic aberrations, and noise. What are your thoughts on this?
Are there significant differences between downsampling settings and algorithms? What kind of downsampling has Jim used?
Yes, there are significant differences among downsampling settings and algorithms, and the interaction of those with the properties of the source image and the degree of downsampling may well affect the qualities of the results.
Can you point me to tests/analysis of various downsampling algorithms used in current post-processors?
https://blog.kasson.com/?s=downsampling
Thanks!
 
Developed in Lr with default settings except for WB.

Night and day? I think not. There are some differences.
When you downsample a 102 MP or 41 MP image to less than 6 MP, I tend to suspect that the downsampling algorithm may be the most important contributor to the results, at least in terms of resolution, sharpness, acutance, chromatic aberrations, and noise. What are your thoughts on this?
Are there significant differences between downsampling settings and algorithms? What kind of downsampling has Jim used?
Yes, there are significant differences among downsampling settings and algorithms, and the interaction of those with the properties of the source image and the degree of downsampling may well affect the qualities of the results.
Can you point me to tests/analysis of various downsampling algorithms used in current post-processors?
Unfortunately, no, or at least certainly not anything current. In the past I've experimented myself. Just from what I mostly use: DxO PhotoLab 7 Elite offers bicubic, bicubic sharper, and bilinear; Affinity Photo 2 offers bilinear, bicubic, Lanczos separable, and Lanczos non-separable; and Qimage offers a bunch of options, some of them perhaps proprietary:

8443c4e3653c4468a31f3a62976d9574.jpg

Generally I find Qimage's more sophisticated / recommended methods better, and if I want to resample / resize something at maximum quality, I use Qimage--even if I'm not printing it.
If anyone wants to experiment, the free / open-source GIMP offers multiple methods of resampling, and somewhere there's documentation of the pros and cons of each.
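For anyone who wants to run that experiment without buying anything, here is a minimal sketch using Python and the Pillow library; the file name and target size are placeholders, and the filter list is Pillow's own, not a claim about what Lightroom, Photoshop, or Qimage actually use internally.

from PIL import Image  # requires Pillow 9.1+ for the Resampling enum

# Downsample one high-resolution image with several of Pillow's resampling
# filters and save each result for side-by-side comparison at 100%.
filters = {
    "nearest":  Image.Resampling.NEAREST,   # roughly what crude screen zooms do
    "bilinear": Image.Resampling.BILINEAR,
    "bicubic":  Image.Resampling.BICUBIC,
    "lanczos":  Image.Resampling.LANCZOS,   # a common high-quality choice
}

src = Image.open("source.tif")  # placeholder file name; assumes a 4:3 source
target = (2880, 2160)           # placeholder 4:3 target size; adjust to keep aspect

for name, resample in filters.items():
    src.resize(target, resample=resample).save(f"downsampled_{name}.png")

Viewed at 100%, the nearest-neighbor version usually shows obvious aliasing on fine repeating detail, while the Lanczos and bicubic versions look much closer to a careful export.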
 
Hi Jim

If one downloads the RAW files of each of those from DPReview's tool and views them, without zooming, by flicking from one to the other, then two things become (very?) apparent:

1. The quality of each image is significantly higher than what has been supplied here (which to me doesn't look any better than an iPhone pic).

2. The GFX one is significantly sharper; it brings out details that are not just mushy on the Z8 but, in certain cases (like the old black-and-white photos), not even there.

So, sure, nobody can argue with your test: at this size it doesn't show any significant difference. But if the question is which camera (and lens) produces the better image, viewed at a normal distance and without zooming in, then, to me, the answer is obvious.
 
Hi Jim

If one downloads the RAW files of each of those from DPReview's tool and views them, without zooming, by flicking from one to the other, then two things become (very?) apparent:

1. The quality of each image is significantly higher than what has been supplied here (which to me doesn't look any better than an iPhone pic).
You seem to have missed the part about downsizing to 4K. This is a thread about differences after that downsampling.
2. The GFX one is significantly sharper; it brings out details that are not just mushy on the Z8 but, in certain cases (like the old black-and-white photos), not even there.

So, sure, nobody can argue with your test: at this size it doesn't show any significant difference.
As I said elsewhere, there are people who claim that there are great differences between a 100 MP MF sensor and a high-MP FF sensor when the images are shown in their entirety on a 4K screen.
But if the question is which camera (and lens) produces the better image, viewed at a normal distance and without zooming in, then, to me, the answer is obvious.
Of course. But that's not what this thread is about.
 
